OpenAI rewrites Pentagon AI deal following backlash
OpenAI has amended its newly signed contract with the US Department of War after chief executive Sam Altman admitted the original announcement “looked opportunistic and sloppy”.
News of the deal triggered fears that the company’s technology could be used for domestic mass surveillance.
The San Francisco-based AI firm signed the agreement with the Pentagon on Friday, hours after rival Anthropic was blacklisted by the Trump administration and designated a “supply chain risk” following a dispute over safeguards.
But just days later, the ChatGPT owner was forced into damage control.
In a post on X late Monday, Altman said the company would “explicitly” bar its systems from being “intentionally used for domestic surveillance of U.S. persons and nationals”.
Intelligence agencies like the National Security Agency would also require a “follow-on modification” before being allowed to use OpenAI’s models under the contract.
“We shouldn’t have rushed to get this out on Friday”, Altman wrote. “The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”
Rivalry spills into Washington
The episode follows mounting rivalry between OpenAI and Anthropic, and underlines just how entangled AI has become with US military strategy.
Anthropic’s chief executive, Dario Amodei, had refused to drop what he described as corporate “red lines” prohibiting the use of its Claude model for “mass domestic surveillance” or “fully autonomous weapons”.
US defence secretary Pete Hegseth retaliated by designating Anthropic a supply chain risk, effectively barring Pentagon contractors from doing business with it altogether.
Donald Trump backed the move, calling Anthropic “leftwing nut jobs”, and ordering federal agencies to phase out its technology over the next six months.
Anthropic said it would challenge any such designation in court, claiming it was “legally unsound and set a dangerous precedent”.
Despite the blacklisting, Anthropic’s chatbot, Claude, has surged in popularity. It climbed to number one on Apple’s US App Store free app rankings over the weekend, overtaking ChatGPT.
Anthropic said “every single day last week was an all time record for Claude sign-ups”.
ChatGPT, meanwhile, saw day-over-day uninstalls jump by nearly 300 per cent on Saturday, against a typical rate of around nine per cent.
What’s more, a “delete ChatGPT” campaign trended on social media, with critics accusing the company of enabling military surveillance.
Guardrails come under scrutiny
When announcing the deal, OpenAI said the contract had “more guardrails than any previous agreement for classified AI deployments”.
The AI heavyweight reiterated that one of its red lines was “no use of OpenAI technology to direct autonomous weapons systems”.
Nearly 900 employees across OpenAI and Google have signed an open letter titled “we will not be divided”, warning that the Department of War was attempting to pressure AI firms into loosening safeguards.
“They’re trying to divide each company with fear that the other will give in”, the letter said. “We hope our leaders will put aside their differences and stand together to continue to refuse the DoW’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.”
OpenAI’s former head of policy research, Miles Brundage, questioned how the company secured a deal that Anthropic considered ethically untenable.
Posting on X, he wrote: “OpenAI employees’ default assumption here should unfortunately be that OpenAI caved and framed it as not caving”.
The standoff echoes 2018, when Google abandoned its involvement in Project Maven, a Pentagon initiative using AI to analyse drone footage, after thousands of employees protested.
AI on the battlefield
AI has already been embedded across Western military infrastructure.
The US, Ukraine and NATO have been using analytics platforms such as Palantir’s, which feed large datasets into systems that commercial AI tools can then analyse.
Louis Mosley, head of Palantir’s UK operations, has said these systems allow for “faster, more efficient, and ultimately more lethal decisions where that’s appropriate”.
NATO officials have claimed that there is “always a human in the loop” and that AI systems would “never” make final decisions completely unsupervised.
But once such models are integrated into military networks, oversight becomes harder to verify and maintain.
Professor Mariarosaria Taddeo of Oxford University said that with Anthropic out of the Pentagon’s ecosystem, “the most safety-conscious actor” was now “out of the room”. “That’s a real problem”, she added.
Meanwhile, Trump has threatened to use the Defence Production Act to compel compliance from AI firms and warned Anthropic it could face “major civil and criminal consequences” if it did not cooperate.
And elsewhere, reports suggest Google is in talks to bring its Gemini AI model into classified Pentagon systems.