
The SAVE Act keeps making headlines, and it should. It is a brutal piece of work that would strip millions of Americans of their right to vote. But if you only focus on the SAVE Act, you miss the bigger picture. What Republicans are doing is not a set of isolated election fights. It is a coordinated strategy to make voting harder, shrink the electorate, and make sure the people who can clear every new hurdle are the voters they want.

The bill would require documentary proof of citizenship to register to vote. That sounds harmless until you look at the numbers. The Brennan Center estimates that about 21 million American citizens do not have ready access to the documents the bill would demand. That is about 9 percent of voting-age Americans. (https://www.brennancenter.org/our-work/analysis-opinion/new-save-act-bills-would-still-block-millions-americans-voting)

These are not people without citizenship. They are eligible voters who do not have a passport on hand, cannot easily get a birth certificate copy, or have documents that do not line up neatly with current ID rules. NPR's explainer on the SAVE Act walks through the documentary proof requirement and the voter registration burden it would create. (https://www.npr.org/2025/03/12/nx-s1-5301676/save-act-explainer-voter-registration)

That matters because the fight does not stop at registration. Look at mail voting. PBS NewsHour reported that the Postal Service changed its postmark rules, which means some mail dropped off on a given day may not receive a postmark from that same day. For voters mailing ballots close to Election Day, that can be the difference between a counted vote and a discarded one. Election officials are already telling voters to mail ballots earlier because the old assumptions no longer hold. (https://www.pbs.org/newshour/nation/how-this-new-mail-rule-could-affect-your-ballot-your-tax-return-and-more)

Republicans are pushing on that opening too. Democracy Docket reported that the Republican National Committee asked the Supreme Court to rule that ballots arriving after Election Day should not be counted even when they were mailed on time under state law. And this is not a tiny edge case. The National Conference of State Legislatures notes that 16 states, plus Washington, D.C., and several territories currently accept at least some ballots that are postmarked by Election Day and received later. If the Court accepts the RNC's argument, states that built their systems around those deadlines will have to throw out ballots they currently count. (https://www.democracydocket.com/news-alerts/republicans-ask-supreme-court-to-block-states-from-counting-legally-cast-mail-ballots/) (https://www.ncsl.org/state-legislatures-news/details/supreme-court-to-hear-challenge-to-mail-ballot-deadlines)

Now go back to the passport point. If Republicans want proof of citizenship rules, then access to passports becomes part of access to the ballot. PBS NewsHour also reported that the State Department ordered about 1,400 nonprofit public libraries to stop processing passport applications. Those libraries were convenient places for working people to apply for passports without taking a day off to visit a distant government office. (https://www.pbs.org/newshour/politics/nonprofit-libraries-ordered-by-state-department-to-stop-processing-passport-applications)

That is why these stories belong together. The SAVE Act raises the paperwork barrier. The passport change removes one of the more accessible ways to clear that barrier. The mail ballot fight makes it easier to reject votes even after people manage to register and cast a ballot. Each move hits a different part of the process, but the effect is the same. Fewer people vote. More ballots get tossed. The electorate gets smaller and more tilted toward the people Republicans want voting.

And all of this purports to solve a problem that does not exist. Voting by people without citizenship is already illegal and vanishingly rare. The fraud story is a pretext. The real project is to shape the electorate by making participation harder.

This is what voter suppression looks like now. It usually does not arrive with the blunt ugliness of the past. It comes wrapped in paperwork rules, process changes, and legal arguments that sound technical until you ask who gets burdened and who benefits. The SAVE Act is just one front in a broader assault on democracy, and that fight is far from over. It is up to everyone with a conscience to do their part to protect the freedoms future generations are counting on us to defend.

When the Pentagon tells AI companies it will only work with vendors who agree to “any lawful use,” that's the leverage play. It forces the vendor to pre-approve future uses (surveillance, targeting, whatever gets labeled “lawful”) before anyone can object.

Anthropic's CEO says the Department of War (yes, that's what they're calling it) is pushing to remove two specific contract safeguards: a ban on mass domestic surveillance and a ban on fully autonomous weapons, meaning systems that can select and engage targets without human involvement. Anthropic says no. They also say they're being threatened with a "supply chain risk" designation, or even the Defense Production Act, to force those safeguards out.

Read that and tell me this is normal procurement drama. (Anthropic statement)

I don't doubt there are good people in the DoD. I also don't care. Institutions don't run on good intentions. They run on incentives and authority. If you build a machine that makes surveillance cheap and violence scalable, you are not just trusting today's “good people.” You're handing that machine to whoever comes next.

And “whoever comes next” is not a theoretical worry anymore.

Frontier AI is not just another IT tool. In a national security context, it's a force multiplier for two things democracies should be allergic to:

1) watching everyone, and 2) killing faster.

I'm not against government using AI, period. I'm against letting the national security apparatus quietly absorb frontier models until the line between “analysis” and “targeting” is basically a rounding error.

What I'm willing to allow

There are legitimate government uses for AI that don't require turning the country into a panopticon or building an automated kill chain.

If the DoD wants AI that helps a veteran navigate benefits, or helps a clinician spot an error in a medical record, or helps defend networks against intrusion, fine. OpenAI's own pitch for its DoD pilot leans heavily on that kind of work: healthcare access, acquisition data, and “proactive cyber defense.” (OpenAI for Government)

I'm not naive. The military is going to use advanced technology. But we can still draw lines.

The lines I won't cross

Two red lines should be non-negotiable in a democracy.

First: no mass domestic surveillance.

Not “with guardrails.” Not “only for serious crimes.” Not “only with a policy memo.” A flat no.

Because AI changes the scale. Anthropic makes the argument plainly: even if the government is piecing together data it can legally purchase, powerful AI can fuse “scattered” information (movement, browsing, associations) into a comprehensive picture of someone's life, automatically and at massive scale. Anthropic links to a declassified ODNI report that acknowledges privacy concerns with commercially available data. (Anthropic statement)

Second: keep frontier AI out of lethal targeting.

Not just “fully autonomous weapons.” The whole pipeline.

If your model is prioritizing targets, fusing sensor feeds, flagging “suspicious” behavior, or recommending strikes, you're in the kill chain. And the “human in the loop” becomes a fig leaf. Humans under pressure rubber-stamp. When a system spits out a confident answer, it takes a lot of backbone to slow down and second-guess the machine. Most organizations don't reward that.

The creep is already happening

We've already watched this creep happen, and it's documented.

Google is the cleanest example because there's a clear paper trail.

In 2018, after the backlash over Project Maven (Google helping the Pentagon analyze drone footage), Google published AI principles that explicitly said it would not pursue weapons and would not pursue surveillance “violating internationally accepted norms.” (Google AI principles, 2018; Maven coverage)

Later, the DoD awarded the Joint Warfighting Cloud Capability (JWCC) contracts to AWS, Google, Microsoft, and Oracle: core cloud infrastructure “at all classification levels” from headquarters to the tactical edge. (DoD JWCC release)

Then, in 2025, Google updated its public principles. The original 2018 post now points readers to a new principles page. That page no longer includes the old “AI applications we will not pursue” list that called out weapons and surveillance. (Google principles page)

OpenAI has its own version of the story.

As of January 9, 2024, OpenAI's usage policies explicitly listed "Military and warfare" as disallowed under "high risk of physical harm." The next day, the policies were updated (the changelog shows a January 10, 2024 update), and the explicit "military and warfare" prohibition disappeared. (Archive snapshot; current policies)

Then the engagements follow. OpenAI partners with Anduril to integrate OpenAI software into counter-drone systems. (OpenAI-Anduril coverage)

And the DoD awards OpenAI Public Sector LLC a $200 million agreement to develop “prototype frontier AI capabilities” for “critical national security challenges in both warfighting and enterprise domains.” Those are the government's words, not mine. (DoD contracts listing, Jun 16 2025)

Yes, OpenAI's current policy still says you can't “develop or use weapons.” But “warfighting” contracts and “counter-drone” integrations are exactly how you slide past the public's defenses: talk about safety and freedom while building the machinery anyway.

Anthropic is being unusually explicit. They say they're deployed across classified networks and used for mission-critical work, but they won't cross two lines: mass domestic surveillance and fully autonomous weapons. And the Pentagon is demanding those lines be erased.

That's the moment you should stop asking, “But what if they use it responsibly?” and start asking, “Why are they fighting so hard to keep the option open?”

“Lawful use” is not a moral standard

The law lags technology. It always has.

“Any lawful use” doesn't mean “ethical.” It doesn't mean “with oversight.” It means: if our lawyers can find a path, we're doing it. And if they can't find a path yet, we'll lobby until they can.

This is why Trump and Hegseth matter. They're accountable for the choices being made right now. The national security apparatus doesn't belong to the conscientious staffer who worries about civil liberties. It belongs to the political leadership that signs the orders. Give that machine better surveillance and faster targeting, and you don't just get “efficiency.” You get a more capable state that's harder to challenge.

What I'm asking for

I'm asking for three lines in the sand that actually hold up when the pressure hits:

1) Explicit bans in law, plus companies that will actually honor them. Congress should prohibit federal agencies from procuring or using frontier AI for mass domestic surveillance, biometric monitoring at scale, and target identification/selection for lethal force (including the upstream targeting pipeline), as well as fully autonomous weapons. And because Trump can't be trusted to do the right thing, vendors have to refuse these uses even when the government asks nicely, threatens, or waves a contract.

2) Transparency for defense AI deployments. We need a public registry, at least at the program/category level, showing what models are used, by which offices, for which categories of use, plus incident reporting. If secrecy is the default, abuse is the default.

3) Independent audits and strong whistleblower protections. Not internal “ethics reviews.” Independent technical audits, and protected channels for contractors and civil servants to report misuse without career death.

If the Pentagon truly wants AI for the boring, legitimate stuff (paperwork, logistics, and defending networks from intrusion), it should be able to accept those conditions without flinching.

If it can't, it's telling you exactly what it wants.