AI safety row deepens as Anthropic risks losing Pentagon deal
Anthropic has refused to relax its safety limits on its AI tools, despite threats from the US Department of Defence to scrap a $200m contract and remove the company entirely from its supply chain.
Chief executive Dario Amodei said the booming AI firm would rather stop working with the Pentagon than allow its AI model, Claude, to be used for mass domestic surveillance or fully autonomous weapons.
“These threats do not change our position: we cannot in good conscience accede to their request,” Amodei said, following a meeting this week with US defence secretary Pete Hegseth.
The Pentagon has asked that Amodei’s firm sign off on “any lawful use” of its tech within classified military systems.
Anthropic swiftly pushed back, arguing that certain uses, even if technically legal, cross ethical lines.
Amodei said the company supports AI being used for foreign intelligence and national security missions, but he drew a clear boundary around using AI to monitor Americans at scale or to power weapons that can act without direct human control.
“Using these systems for mass domestic surveillance is incompatible with democratic values”, he wrote in a blog post.
He added that today’s AI systems are “simply not reliable enough” to be trusted in fully autonomous weapons.
Legal and contract pressure
The Department of Defence has warned that if Anthropic does not agree by Friday, it could label the company a “supply chain risk”, effectively blocking it from working with the US military and potentially other defence contractors.
Officials have also raised the possibility of invoking the Defense Production Act, a law that allows the government to compel companies to prioritise national defence needs.
An Anthropic spokeswoman said revised contract wording received this week made “virtually no progress” in addressing concerns about surveillance and autonomous weapons, adding that proposed safeguards could be “disregarded at will”.
The Pentagon argues that the uses in question are already governed by existing laws and military policy.
Undersecretary of defence Emil Michael said: “At some level, you have to trust your military to do the right thing.”
The clash has become one of the clearest flashpoints yet between Washington and Silicon Valley over how far AI should go in military settings.
Anthropic has long positioned itself as one of the most safety-focused AI firms, even as it signs defence deals.
If the Pentagon follows through on its threats, the outcome could set a precedent for how much leverage AI companies really have when national security demands collide with their own safety pledges.