Pete Hegseth Is Pushing AI for Surveillance and Lethal Force
The Pentagon gives Anthropic a Friday ultimatum: comply or else
The Pentagon is pressuring Anthropic, the company behind the AI model Claude, to remove limits on how its technology can be used by the U.S. military.
If Anthropic refuses, the Defense Department could cancel its $200 million contract. Officials have reportedly warned the company that it could be labeled a “supply chain risk.” They have even raised the possibility of invoking the Defense Production Act — a Cold War-era law that allows the government to compel companies to prioritize national defense needs.
Defense Secretary Pete Hegseth has given Anthropic only until Friday to comply or face a ban on government contracts.
What does the Pentagon want? Full access. The right to use AI for what it insists are “lawful purposes.” And I’m sorry to sound cynical here, but I don’t have a lot of faith in what Hegseth believes is lawful.
“Lawful purposes,” to the Pentagon, include what the military calls “kinetic operations.” In plain English: warfighting. Weapons. Targeting. Potentially systems that help decide who lives and who dies. Without human oversight.
It’s almost like they saw the Terminator films and thought, “Hey, now that’s a great idea!”
Anthropic has tried to draw lines. The company has said it does not want its systems used to power autonomous weapons that select and fire on targets without meaningful human control. It has also resisted the use of its models for broad, dragnet-style surveillance of Americans.
AI systems are already used for logistics, analysis, and planning. That’s one thing. But once you move into lethal decision-making or mass surveillance, you are in very different territory.
That's where it starts to feel like a dystopian movie.
Modern AI systems can sift through massive amounts of data. They can identify patterns in faces, voices, movements, and behavior. Tie that to drones, cameras, satellites, or predictive analytics, and you have the skeleton of something that used to live only in movies.
The Pentagon’s position, reportedly backed by Hegseth, is simple: if the use is legal, the military should not be constrained by a private company’s ethics policy. And they’re willing to threaten Anthropic to get their way.
If companies building frontier AI models give the government unrestricted access, then the internal “guardrails” these firms advertise to the public become negotiable. Conditional. Removable under pressure.
Today, it’s framed as national security. Tomorrow, it could be something else.
We are entering a phase where artificial intelligence is no longer just a productivity tool. It’s infrastructure, power, and leverage.
And whoever controls it — and controls its limits — will shape what kind of country we become.
Everything we do would be watched, examined, and interpreted by AI serving the U.S. military and the federal government. No recourse. No appeal.
Systems sent to kill with no human oversight. No one checking whether the targets are hostile or friendly, whether they pose a threat, or whether they're innocent.
It’s not like we don’t understand the stakes. We’ve got more than a century of dystopian novels and movies warning us.
I’ve often joked that “I, for one, would like to welcome our robot overlords,” but it never hit me that those overlords might be programmed by people who don’t think my loved ones should be free to hold their own opinions, vote the way they want, or love who they want.
I find it more frightening than I can express. It feels like the dark opening of a new dystopian story that’s too scary to keep reading.
But we could soon be living in it.
What do you think? Let me know in the comments below. Please share the article with your friends, and check out the Disciples of Democracy podcast.