Anthropic Said No. OpenAI Said Yes. Here’s What That Actually Means.
Before you switch to Claude in protest, read this first.
You may have seen the headlines.
Anthropic, the company behind Claude, refused to let the Pentagon use its AI for lethal strikes or mass surveillance. So Pete Hegseth and Donald Trump cut them off.
OpenAI, the company behind ChatGPT, stepped in and agreed to work with the Pentagon.
If you’re opposed to Trump, this sounds simple. Anthropic is the good guy. OpenAI is the bad guy.
But before you cancel your paid ChatGPT subscription and switch over to Claude, there’s some nuance here.
Here’s how it breaks down.
The Pentagon wanted contract language that would allow it to use an AI system for “any lawful purpose.”
Anthropic wasn’t comfortable with that wording. The company wanted explicit language in the contract saying its AI could not be used for fully autonomous lethal strikes — meaning no human in the loop — and could not be used for mass domestic surveillance.
Anthropic wanted those limits written into the contract itself.
The Pentagon said those uses are already restricted by law and Department of Defense policy.
Anthropic said that wasn’t enough. So it refused the deal.
That was the standoff.
Now here’s where it gets more complicated.
OpenAI did sign a Pentagon contract.
But OpenAI says its agreement includes safety guardrails, including limits on fully autonomous killing systems and on unlawful mass surveillance. In other words, roughly the same limits Anthropic was asking for.
So, what’s the difference? The difference is leverage and contract language.
Anthropic refused to sign unless the restrictions were explicit and non-negotiable.
OpenAI signed an agreement that, according to the company, includes protections aligned with existing law and policy.
The Pentagon maintains it never intended to use AI for illegal domestic spying or fully autonomous lethal action anyway — and that existing law already prohibits that.
So this is less about killer robots tomorrow and more about control.
Who defines the limits? Where are they written? What happens if policy changes in the future?
Those are the real issues.
No one publicly says they want AI running assassination drones without humans involved. (Though the clock is ticking on someone saying the quiet part out loud.)
The fight is about how permanent the guardrails are — and who gets to loosen them later.
Here’s the catch, though. Pete Hegseth has already shown a willingness to ignore rules and laws that get in his way. Consider the military’s strikes on boats it claims are running drugs, extrajudicial killings that many legal experts argue could amount to war crimes.
So even if safeguards are built into how the Pentagon says it will use AI systems, there’s no guarantee that Trump’s military will follow them — and arguably less of a guarantee than under other administrations.
You can still cancel OpenAI and ChatGPT if you want. You can hate them for saying yes to the Pentagon. You do you. I’m just giving you some more context that goes beyond the either-or.
Let me know what you think. Leave a comment below. Check out the Disciples of Democracy podcast, as well as the audio version of Archer’s Line available on Apple, Spotify, or wherever you get your podcasts.