“Any lawful use” is the problem: keep frontier AI out of surveillance and targeting
When the Pentagon tells AI companies it will only work with vendors who agree to “any lawful use,” that's the leverage play. It forces the vendor to pre-approve future uses (surveillance, targeting, whatever gets labeled “lawful”) before anyone can object.
Anthropic's CEO says the Department of War (yes, that's what they're calling it) is pushing to remove two specific contract safeguards: a ban on mass domestic surveillance and a ban on fully autonomous weapons, meaning systems that can select and engage targets without human involvement. Anthropic says no. They also say they're being threatened with a "supply chain risk" label, or even the Defense Production Act, to strip those safeguards out.
Read that and tell me this is normal procurement drama. (Anthropic statement)
I don't doubt there are good people in the DoD. I also don't care. Institutions don't run on good intentions. They run on incentives and authority. If you build a machine that makes surveillance cheap and violence scalable, you are not just trusting today's “good people.” You're handing that machine to whoever comes next.
And “whoever comes next” is not a theoretical worry anymore.
Frontier AI is not just another IT tool. In a national security context, it's a force multiplier for two things democracies should be allergic to:
1) watching everyone, and 2) killing faster.
I'm not against government using AI, period. I'm against letting the national security apparatus quietly absorb frontier models until the line between “analysis” and “targeting” is basically a rounding error.
What I'm willing to allow
There are legitimate government uses for AI that don't require turning the country into a panopticon or building an automated kill chain.
If the DoD wants AI that helps a veteran navigate benefits, or helps a clinician spot an error in a medical record, or helps defend networks against intrusion, fine. OpenAI's own pitch for its DoD pilot leans heavily on that kind of work: healthcare access, acquisition data, and “proactive cyber defense.” (OpenAI for Government)
I'm not naive. The military is going to use advanced technology. But we can still draw lines.
The lines I won't cross
Two red lines should be non-negotiable in a democracy.
First: no mass domestic surveillance.
Not “with guardrails.” Not “only for serious crimes.” Not “only with a policy memo.” A flat no.
Because AI changes the scale. Anthropic makes the argument plainly: even if the government is piecing together data it can legally purchase, powerful AI can fuse “scattered” information (movement, browsing, associations) into a comprehensive picture of someone's life, automatically and at massive scale. Anthropic links to a declassified ODNI report that acknowledges privacy concerns with commercially available data. (Anthropic statement)
Second: keep frontier AI out of lethal targeting.
Not just “fully autonomous weapons.” The whole pipeline.
If your model is prioritizing targets, fusing sensor feeds, flagging “suspicious” behavior, or recommending strikes, you're in the kill chain. And the “human in the loop” becomes a fig leaf. Humans under pressure rubber-stamp. When a system spits out a confident answer, it takes a lot of backbone to slow down and second-guess the machine. Most organizations don't reward that.
The creep is already happening
We've watched it happen before, and it's documented.
Google is the cleanest example because there's a clear paper trail.
In 2018, after the backlash over Project Maven (Google helping the Pentagon analyze drone footage), Google published AI principles that explicitly said it would not pursue weapons and would not pursue surveillance “violating internationally accepted norms.” (Google AI principles, 2018; Maven coverage)
Later, the DoD awarded the Joint Warfighting Cloud Capability (JWCC) contracts to AWS, Google, Microsoft, and Oracle: core cloud infrastructure “at all classification levels” from headquarters to the tactical edge. (DoD JWCC release)
Then, in 2025, Google updated its public principles. The original 2018 post now points readers to a new principles page. That page no longer includes the old “AI applications we will not pursue” list that called out weapons and surveillance. (Google principles page)
OpenAI has its own version of the story.
As of January 9, 2024, OpenAI's usage policies explicitly listed "Military and warfare" as disallowed under "high risk of physical harm." The next day, the policies were updated (the changelog shows a January 10, 2024 update), and the explicit "military and warfare" prohibition disappeared. (Archive snapshot; current policies)
Then the engagements follow. OpenAI partners with Anduril to integrate OpenAI software into counter-drone systems. (OpenAI-Anduril coverage)
And the DoD awards OpenAI Public Sector LLC a $200 million agreement to develop “prototype frontier AI capabilities” for “critical national security challenges in both warfighting and enterprise domains.” Those are the government's words, not mine. (DoD contracts listing, Jun 16 2025)
Yes, OpenAI's current policy still says you can't “develop or use weapons.” But “warfighting” contracts and “counter-drone” integrations are exactly how you slide past the public's defenses: talk about safety and freedom while building the machinery anyway.
Anthropic is being unusually explicit. They say their models are deployed across classified networks and used for mission-critical work, but they won't cross two lines: mass domestic surveillance and fully autonomous weapons. And the Pentagon is demanding those lines be erased.
That's the moment you should stop asking, “But what if they use it responsibly?” and start asking, “Why are they fighting so hard to keep the option open?”
“Lawful use” is not a moral standard
The law lags technology. It always has.
“Any lawful use” doesn't mean “ethical.” It doesn't mean “with oversight.” It means: if our lawyers can find a path, we're doing it. And if they can't find a path yet, we'll lobby until they can.
This is why Trump and Hegseth matter. They're accountable for the choices being made right now. The national security apparatus doesn't belong to the conscientious staffer who worries about civil liberties. It belongs to the political leadership that signs the orders. Give that machine better surveillance and faster targeting, and you don't just get “efficiency.” You get a more capable state that's harder to challenge.
What I'm asking for
I'm asking for three lines in the sand that actually hold up when the pressure hits:
1) Explicit bans in law, plus companies that will actually honor them. Congress should prohibit federal agencies from procuring or using frontier AI for mass domestic surveillance, biometric monitoring at scale, and target identification/selection for lethal force (including the upstream targeting pipeline), as well as fully autonomous weapons. And because Trump can't be trusted to do the right thing, vendors have to refuse these uses even when the government asks nicely, threatens, or waves a contract.
2) Transparency for defense AI deployments. We need a public registry, at least at the program/category level, showing what models are used, by which offices, for which categories of use, plus incident reporting. If secrecy is the default, abuse is the default.
3) Independent audits and strong whistleblower protections. Not internal “ethics reviews.” Independent technical audits, and protected channels for contractors and civil servants to report misuse without career death.
If the Pentagon truly wants AI for the boring, legitimate stuff (paperwork, logistics, and defending networks from intrusion), it should be able to accept those conditions without flinching.
If it can't, it's telling you exactly what it wants.