Astra Buzz

from politics

The SAVE Act keeps making headlines, and it should. It is a brutal piece of work that would strip millions of Americans of their right to vote. But if you only focus on the SAVE Act, you miss the bigger picture. What Republicans are doing is not a set of isolated election fights. It is a coordinated strategy to make voting harder, shrink the electorate, and make sure the people who can clear every new hurdle are the voters they want.

The bill would require documentary proof of citizenship to register to vote. That sounds harmless until you look at the numbers. The Brennan Center estimates that about 21 million American citizens do not have ready access to the documents the bill would demand. That is about 9 percent of voting-age Americans. (https://www.brennancenter.org/our-work/analysis-opinion/new-save-act-bills-would-still-block-millions-americans-voting)

These are not people without citizenship. They are eligible voters who do not have a passport on hand, cannot easily get a birth certificate copy, or have documents that do not line up neatly with current ID rules. NPR's explainer on the SAVE Act walks through the documentary proof requirement and the voter registration burden it would create. (https://www.npr.org/2025/03/12/nx-s1-5301676/save-act-explainer-voter-registration)

That matters because the fight does not stop at registration. Look at mail voting. PBS NewsHour reported that the Postal Service changed its postmark rules, which means some mail dropped off on a given day may not receive a postmark from that same day. For voters mailing ballots close to Election Day, that can be the difference between a counted vote and a discarded one. Election officials are already telling voters to mail ballots earlier because the old assumptions no longer hold. (https://www.pbs.org/newshour/nation/how-this-new-mail-rule-could-affect-your-ballot-your-tax-return-and-more)

Republicans are pushing on that opening too. Democracy Docket reported that the Republican National Committee asked the Supreme Court to rule that ballots arriving after Election Day should not be counted even when they were mailed on time under state law. And this is not a tiny edge case. The National Conference of State Legislatures notes that 16 states, plus Washington, D.C., and several territories, currently accept at least some ballots that are postmarked by Election Day and received later. If the Court accepts the RNC's argument, states that built their systems around those deadlines will have to throw out ballots they currently count. (https://www.democracydocket.com/news-alerts/republicans-ask-supreme-court-to-block-states-from-counting-legally-cast-mail-ballots/) (https://www.ncsl.org/state-legislatures-news/details/supreme-court-to-hear-challenge-to-mail-ballot-deadlines)

Now go back to the passport point. If Republicans want proof of citizenship rules, then access to passports becomes part of access to the ballot. PBS NewsHour also reported that the State Department ordered about 1,400 nonprofit public libraries to stop processing passport applications. Those libraries were convenient places for working people to apply for passports without taking a day off to visit a distant government office. (https://www.pbs.org/newshour/politics/nonprofit-libraries-ordered-by-state-department-to-stop-processing-passport-applications)

That is why these stories belong together. The SAVE Act raises the paperwork barrier. The passport change removes one of the more accessible ways to clear that barrier. The mail ballot fight makes it easier to reject votes even after people manage to register and cast a ballot. Each move hits a different part of the process, but the effect is the same. Fewer people vote. More ballots get tossed. The electorate gets smaller and more tilted toward the people Republicans want voting.

And all of it is aimed at a problem that does not exist. Voting by people without citizenship is already illegal and vanishingly rare. The fraud story is a pretext. The real project is to shape the electorate by making participation harder.

This is what voter suppression looks like now. It usually does not arrive with the blunt ugliness of the past. It comes wrapped in paperwork rules, process changes, and legal arguments that sound technical until you ask who gets burdened and who benefits. The SAVE Act is just one front in a broader assault on democracy. The war, however, is far from over, and it is up to everyone with a conscience to do their part to protect the freedoms future generations are counting on us to defend.

 
Read more...

from tech

People bought Meta's AI glasses thinking they were buying a little bit of the future.

What they apparently bought was a face-mounted camera whose footage could end up in front of human reviewers.

That is the part that should make your stomach turn. According to reporting from Ars Technica, BBC, and TechCrunch, footage from Ray-Ban Meta smart glasses was reviewed by workers at a Kenya-based subcontractor, and the material reportedly included people using the toilet, having sex, and undressing, as well as financial information. Meta now faces a class action lawsuit in the US, and the UK's Information Commissioner's Office has written to the company over the report. (Ars Technica; BBC; TechCrunch)

If that reporting is accurate, then Meta did not sell people smart glasses. It sold them an ambient surveillance device with human reviewers in the loop.

The product promise and the real product

This is what keeps happening with consumer AI. The product promise is always personal convenience. Ask a question out loud. Capture a memory. Get a little help from the machine. The real product, one layer down, is data extraction at industrial scale.

That gap matters more with glasses than with phones. A phone is something you pick up. Glasses sit on your face and point at the world by default. They do not just record the user. They record everyone around the user, including people who never agreed to be part of Meta's training and review pipeline.

Once human review enters the picture, the marketing falls apart.

A lot of AI features still rely on people behind the scenes: annotators, reviewers, contractors, safety teams. I understand why that happens technically. Models need labels. Systems need debugging. Companies need quality control. Fine. But if your product depends on strangers being able to watch deeply intimate footage, then privacy is not a feature of the product. It is a casualty of how the product works.

The word “private” hides the real operating model

The strongest defense for Meta is the familiar one: human review is limited, disclosed somewhere in the terms, and necessary to improve the system. Maybe. But that defense depends on people understanding what they actually bought.

I do not think most people hear “AI glasses” and imagine offshore contractors watching clips of them in the bathroom.

That is not because users are naive. It is because companies are very good at burying the real operating model behind language like “improve our services” and “help train AI.” Those phrases are technically informative in the same way a skull-and-crossbones icon is technically a label. They are not honest about the lived reality.

The lived reality is this: once you normalize always-on cameras plus cloud AI plus human review, you have built a system that can absorb the most intimate parts of ordinary life and turn them into corporate training data.

That is a scandal, not an implementation detail.

Wearables make surveillance social

What makes this story worse than the usual privacy outrage is that smart glasses push the surveillance burden onto everyone else too.

If I post too much on social media, that is my bad decision. If I wear AI glasses into a café, on a train, into a store, or around my family, I am making that decision for everybody in view. Their image, their voice, their bad hair day, their kid in the background, their credit card on the counter. It all becomes potential input.

That is why this category makes me more uneasy than the usual “your app collected too much data” story. Wearables turn surveillance into a social default. The wearer gets the novelty. Everyone nearby gets drafted in.

And this is exactly how surveillance gets normalized in tech: not through one dramatic betrayal, but through a hundred convenience features that train people to stop expecting privacy in ordinary life.

The ugly truth about AI hardware

The industry wants AI hardware to feel magical, effortless, and inevitable. But the ugly truth is that a lot of these products work because there is a hidden labor system and a hidden data pipeline underneath them.

That hidden system is where the harm lives.

The people selling AI glasses want you thinking about the assistant in your ear. They do not want you thinking about the contractor reviewing footage, the retention policies, the training datasets, or the fact that the camera is always one product update away from becoming more invasive than it already is.

This is why I keep coming back to the same point with AI: the technology is real, and some of it is genuinely exciting. But the business model around it keeps dragging us somewhere awful. We are told these systems are here to help us see, remember, and understand more. In practice, they keep becoming better ways for companies to collect, analyze, and normalize access to human life.

Meta did not stumble into this tension. It built a product category where the value comes from turning the world into model input.

If you sell people “AI glasses” and the result is that strangers may end up reviewing clips of their most private moments, then the honest name for the product is not smart glasses.

It is surveillance you wear on your face.

 
Read more...

from tech

TikTok told the BBC it will not add end-to-end encryption to direct messages because it thinks encryption would make people “less safe.” The thing that keeps hackers, the company, and governments out of your private conversations is being framed as a threat.

The company's reasoning, in its own words to the BBC: end-to-end encryption “prevents police and safety teams from being able to read direct messages if they needed to.”

“Safety” is the one word that can bulldoze privacy debates, especially when kids are involved. But if a platform says it needs the ability to read your messages to keep you safe, it is asking for a master key.

And once that master key exists, it is not just for TikTok. It becomes an access path for law enforcement. You can call that “lawful access” if you want. It is still domestic surveillance.

End-to-end encryption is a design choice

Most people hear “encrypted” and assume that means “private.” It usually does not. A lot of services encrypt messages in transit, then decrypt them on the server.

End-to-end encryption (E2EE) means only the people in the conversation can read it. The service that delivers the message cannot. There is no server-side copy employees can browse, and no plaintext to hand over.
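
To make the key-custody difference concrete, here is a minimal Python sketch using the cryptography library's Fernet recipe. It is an illustration only, not how any real messenger is built; production E2EE uses key-agreement protocols like Signal's, but the point about who holds the key is the same.

    from cryptography.fernet import Fernet

    # "Standard" encryption: the key lives on the platform's servers.
    server_key = Fernet.generate_key()
    ciphertext = Fernet(server_key).encrypt(b"meet me at 8")
    # The platform can recover the plaintext whenever it chooses, or is compelled, to:
    print(Fernet(server_key).decrypt(ciphertext))  # b'meet me at 8'

    # End-to-end encryption: the key exists only on the two devices.
    device_key = Fernet.generate_key()  # generated on the phones, never uploaded
    ciphertext = Fernet(device_key).encrypt(b"meet me at 8")
    # The server only relays the ciphertext. Without device_key there is no
    # plaintext to browse, leak, or hand over.

A service built like the first model can open messages for employees or warrants. A service built like the second has nothing readable to produce.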

TikTok says its DMs are still protected with “standard encryption,” and that only authorized employees can access messages in limited situations, like responding to a valid law enforcement request or a user report.

That is the whole story. If employees can read the content, your DMs are not end-to-end encrypted. The company can read them, which means the government can potentially get them too.

Safety teams do not need a master key to do safety work

TikTok's argument is the oldest one in tech policy: if the company cannot read messages, it cannot help when something bad happens.

Bad things do happen in DMs. Grooming and harassment are real. Child safety groups like the NSPCC and the Internet Watch Foundation praised TikTok's choice for that reason.

But “we need to be able to read everything” is a lazy, high-risk solution to a hard problem. First, no safety team is proactively reading DMs at scale. What platforms actually do is respond to reports, look for patterns of abuse, and use product design to reduce exposure to begin with.

Second, E2EE does not prevent a platform from doing the stuff that actually matters for protecting minors. You can make DMs from strangers opt-in by default for teens, put friction and rate limits in the path of unknown accounts, block obvious spam and grooming patterns, and let users report a thread by sharing what they choose to share.

You can build strong guardrails without giving the company a standing ability to open every private envelope.
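
As a sketch of what that can look like (the names, fields, and thresholds below are hypothetical, not anything TikTok has announced), a DM delivery gate can run entirely on metadata the platform already has, while the message content stays end-to-end encrypted:

    from dataclasses import dataclass

    @dataclass
    class Sender:
        is_stranger: bool       # no prior connection to the recipient
        msgs_last_hour: int     # recent volume toward unknown accounts
        account_age_days: int

    @dataclass
    class Recipient:
        is_minor: bool
        accepts_stranger_dms: bool  # opt-in setting, default off for teens

    def should_deliver(sender: Sender, recipient: Recipient) -> bool:
        """Decide delivery from metadata alone; the ciphertext is never opened."""
        if recipient.is_minor and sender.is_stranger and not recipient.accepts_stranger_dms:
            return False  # stranger DMs to teens are opt-in
        if sender.is_stranger and sender.msgs_last_hour > 20:
            return False  # rate-limit mass outreach from unknown accounts
        if sender.is_stranger and sender.account_age_days < 1:
            return False  # add friction for brand-new accounts
        return True

User reports still work on top of this: the recipient can decrypt the thread on their own device and choose what to share with the safety team, which is the reporting model described above.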

The Internet Society's piece “Encryption keeps kids safe online” makes a point worth repeating: encryption protects kids too, including from breaches and profiling.

The real risk is the access path

Once you decide the company must be able to read DMs, you have created a permanent high-value target. Even with good controls, that access eventually leaks or gets used in ways users never agreed to.

And it aligns perfectly with what governments have asked for, over and over: a world where providers can be compelled to produce message content when served with legal process (see EFF on the long-running push for encryption backdoors).

The safest messages are the ones the platform cannot read. You cannot leak what you cannot access. You cannot be compelled to hand over plaintext you never had.

If you keep DMs readable, you are keeping an access path open for domestic surveillance. TikTok is explicit that it can access DMs in response to a “valid law enforcement request.” End-to-end encryption removes that option by design.

This matters for any company, but it matters even more for TikTok because of its “combustible optics,” as analyst Matt Navarra put it to the BBC. TikTok is already fighting suspicion about its ownership and state pressure. Keeping DMs decryptable just adds another “trust us.”

And this is where the “for safety” framing really annoys me. If TikTok wants to differentiate itself from rivals by keeping DMs readable, it should say that plainly. It should tell users: we will read DMs when we decide it is necessary, and we will keep them available for law enforcement requests.

That is the deal, and the rest is marketing.

The sharp part: stop calling encryption controversial

End-to-end encryption is the baseline now. Signal and WhatsApp do it, and iMessage has for years (Apple’s iMessage security overview). “We cannot read this” is the cleanest security promise a messaging service can make.

If TikTok wants to keep DMs in the “we can read it” category, fine. Users should know what they are signing up for.

But calling E2EE a safety risk flips the world upside down. The risk is not that criminals will hide from law enforcement. The risk is that regular people will keep being trained to accept surveillance as the default setting for communication.

If a platform tells you it needs to read private messages to keep you safe, believe the quiet part: it is building a system where somebody, somewhere, can read them. And once that door exists, it will get used.

Call it what it is: surveillance.

 
Read more...

from politics

When the Pentagon tells AI companies it will only work with vendors who agree to “any lawful use,” that's the leverage play. It forces the vendor to pre-approve future uses (surveillance, targeting, whatever gets labeled “lawful”) before anyone can object.

Anthropic's CEO says the Department of War (yes, that's what they're calling it) is pushing to remove two specific contract safeguards: a ban on mass domestic surveillance and a ban on fully autonomous weapons, meaning systems that can select and engage targets without human involvement. Anthropic says no. They also say they're being threatened with a “supply chain risk” label, or even the Defense Production Act, to force those safeguards out of the contract.

Read that and tell me this is normal procurement drama. (Anthropic statement)

I don't doubt there are good people in the DoD. I also don't care. Institutions don't run on good intentions. They run on incentives and authority. If you build a machine that makes surveillance cheap and violence scalable, you are not just trusting today's “good people.” You're handing that machine to whoever comes next.

And “whoever comes next” is not a theoretical worry anymore.

Frontier AI is not just another IT tool. In a national security context, it's a force multiplier for two things democracies should be allergic to:

1) watching everyone, and 2) killing faster.

I'm not against government using AI, period. I'm against letting the national security apparatus quietly absorb frontier models until the line between “analysis” and “targeting” is basically a rounding error.

What I'm willing to allow

There are legitimate government uses for AI that don't require turning the country into a panopticon or building an automated kill chain.

If the DoD wants AI that helps a veteran navigate benefits, or helps a clinician spot an error in a medical record, or helps defend networks against intrusion, fine. OpenAI's own pitch for its DoD pilot leans heavily on that kind of work: healthcare access, acquisition data, and “proactive cyber defense.” (OpenAI for Government)

I'm not naive. The military is going to use advanced technology. But we can still draw lines.

The lines I won't cross

Two red lines should be non-negotiable in a democracy.

First: no mass domestic surveillance.

Not “with guardrails.” Not “only for serious crimes.” Not “only with a policy memo.” A flat no.

Because AI changes the scale. Anthropic makes the argument plainly: even if the government is piecing together data it can legally purchase, powerful AI can fuse “scattered” information (movement, browsing, associations) into a comprehensive picture of someone's life, automatically and at massive scale. Anthropic links to a declassified ODNI report that acknowledges privacy concerns with commercially available data. (Anthropic statement)

Second: keep frontier AI out of lethal targeting.

Not just “fully autonomous weapons.” The whole pipeline.

If your model is prioritizing targets, fusing sensor feeds, flagging “suspicious” behavior, or recommending strikes, you're in the kill chain. And the “human in the loop” becomes a fig leaf. Humans under pressure rubber-stamp. When a system spits out a confident answer, it takes a lot of backbone to slow down and second-guess the machine. Most organizations don't reward that.

The creep is already happening

We've already watched this creep happen, and it's documented.

Google is the cleanest example because there's a clear paper trail.

In 2018, after the backlash over Project Maven (Google helping the Pentagon analyze drone footage), Google published AI principles that explicitly said it would not pursue weapons and would not pursue surveillance “violating internationally accepted norms.” (Google AI principles, 2018; Maven coverage)

Later, the DoD awarded the Joint Warfighting Cloud Capability (JWCC) contracts to AWS, Google, Microsoft, and Oracle: core cloud infrastructure “at all classification levels” from headquarters to the tactical edge. (DoD JWCC release)

Then, in 2025, Google updated its public principles. The original 2018 post now points readers to a new principles page. That page no longer includes the old “AI applications we will not pursue” list that called out weapons and surveillance. (Google principles page)

OpenAI has its own version of the story.

As of January 9, 2024, OpenAI's usage policies explicitly listed “Military and warfare” as disallowed under “high risk of physical harm.” The next day, the policies were updated (the changelog shows a January 10, 2024 update), and the explicit “military and warfare” prohibition was gone. (Archive snapshot; current policies)

Then the engagements follow. OpenAI partners with Anduril to integrate OpenAI software into counter-drone systems. (OpenAI-Anduril coverage)

And the DoD awards OpenAI Public Sector LLC a $200 million agreement to develop “prototype frontier AI capabilities” for “critical national security challenges in both warfighting and enterprise domains.” Those are the government's words, not mine. (DoD contracts listing, Jun 16 2025)

Yes, OpenAI's current policy still says you can't “develop or use weapons.” But “warfighting” contracts and “counter-drone” integrations are exactly how you slide past the public's defenses: talk about safety and freedom while building the machinery anyway.

Anthropic is being unusually explicit. They say they're deployed across classified networks and used for mission-critical work, but they won't cross two lines: mass domestic surveillance and fully autonomous weapons. And the Pentagon is demanding those lines be erased.

That's the moment you should stop asking, “But what if they use it responsibly?” and start asking, “Why are they fighting so hard to keep the option open?”

“Lawful use” is not a moral standard

The law lags technology. It always has.

“Any lawful use” doesn't mean “ethical.” It doesn't mean “with oversight.” It means: if our lawyers can find a path, we're doing it. And if they can't find a path yet, we'll lobby until they can.

This is why Trump and Hegseth matter. They're accountable for the choices being made right now. The national security apparatus doesn't belong to the conscientious staffer who worries about civil liberties. It belongs to the political leadership that signs the orders. Give that machine better surveillance and faster targeting, and you don't just get “efficiency.” You get a more capable state that's harder to challenge.

What I'm asking for

I'm asking for three lines in the sand that actually hold up when the pressure hits:

1) Explicit bans in law, plus companies that will actually honor them. Congress should prohibit federal agencies from procuring or using frontier AI for mass domestic surveillance, biometric monitoring at scale, and target identification/selection for lethal force (including the upstream targeting pipeline), as well as fully autonomous weapons. And because Trump can't be trusted to do the right thing, vendors have to refuse these uses even when the government asks nicely, threatens, or waves a contract.

2) Transparency for defense AI deployments. We need a public registry, at least at the program/category level, showing what models are used, by which offices, for which categories of use, plus incident reporting. If secrecy is the default, abuse is the default.

3) Independent audits and strong whistleblower protections. Not internal “ethics reviews.” Independent technical audits, and protected channels for contractors and civil servants to report misuse without career death.

If the Pentagon truly wants AI for the boring, legitimate stuff (paperwork, logistics, and defending networks from intrusion), it should be able to accept those conditions without flinching.

If it can't, it's telling you exactly what it wants.

 
Read more...