The US just blacklisted an AI company. Is yours next?
A government demonstrated in a single week that it can transform a technology company from the sole provider of classified AI services into a designated national security threat, not for what the company did, but for what it refused to allow, says Paul Armstrong
Last week, the Trump administration blacklisted Anthropic, a company whose AI model powers classified military systems and, by its own account, serves eight of America’s ten largest corporations. The offence was not espionage, fraud or sanctions evasion: Anthropic refused to remove two contractual guardrails, no mass domestic surveillance of US citizens and no fully autonomous weapons without human oversight. Defence Secretary Pete Hegseth designated the firm a “supply chain risk to national security”, a label previously reserved for hostile foreign entities such as Huawei. Every company doing business with the US military must now certify it has no commercial relationship with Anthropic or face losing its Pentagon contracts. Hours later, OpenAI announced a deal with the same department, claiming its agreement includes substantially identical restrictions. Within hours of both announcements, US and Israeli forces began striking Iran, a reminder that the debate over how AI, facial recognition and autonomous systems get used in military operations, and who sets the limits, is not a hypothetical policy discussion but live operational reality.
Strip away the political theatre (Trump calling Anthropic “leftwing nut jobs”, Hegseth accusing the company of trying to “seize veto power over the operational decisions of the United States military”) and a hard commercial reality sits underneath. Businesses across the UK, the EU and the US are building critical operations on a handful of AI platforms whose terms of service, safety architecture and even availability can be rewritten overnight by government pressure. Anthropic’s Claude was not some niche defence product: Claude Cowork rattled software stocks earlier this month, the company is valued at $380bn, and its tools are embedded in enterprise workflows from financial services to law. A supply chain risk designation does not simply cancel a military contract worth $200m. Pentagon contractors, and anyone in their sprawling supply chains, must now audit whether Claude touches any workflow connected to defence work. Palantir, which embedded Claude into classified operations including the Maduro raid, must find and integrate a replacement within six months.
How far does the blast radius reach?
The immediate commercial question is not who was right in Washington but what happens when the AI provider you depend on falls out of favour with a government whose procurement apparatus touches nearly every global industry. The Pentagon’s annual budget is heading toward $1.5 trillion, and the web of contractors and vendors it touches is vast. A London-based consultancy using Claude for document analysis, whose client supplies software to a US defence prime, could find itself on the wrong side of a compliance line drawn in a social media post. Anthropic’s consumer app jumped to number one on Apple’s App Store over the weekend, a perverse reward for political martyrdom, but enterprise customers face the opposite calculus: reputational sympathy does not offset regulatory exposure.
None of this is happening in a regulatory vacuum, though the frameworks are wildly inconsistent. The EU AI Act explicitly exempts military and national security use from its scope, even as the European Parliament simultaneously calls for a prohibition on lethal autonomous weapons. The UK has backed UN discussions on binding rules for autonomous weapons but opposes a new treaty. Washington has moved in the opposite direction: the Biden-era executive order on AI safety was revoked, the Pentagon’s January 2026 AI strategy scrubbed all reference to ethical AI use, and the current administration demands models be available for “all lawful purposes” with no company-imposed restrictions. Musk’s xAI signed up to exactly that standard, agreeing to deploy Grok in classified systems without caveats. A model permitted for classified military use in the US may be subject to entirely different constraints under UK data protection law or EU fundamental rights obligations if data flows back into European operations.
Palantir’s position deserves particular attention, because defence-native AI firms carry a different risk profile to consumer platforms dragged into military service. Palantir landed a $10bn Army contract last year, holds a Navy deal worth nearly $500m, and its business model is built around government compliance. Anduril and a growing cohort of defence-tech firms are similarly constructed to absorb demands that consumer AI companies find intolerable. Military contracts insulate companies from failure, since governments do not usually allow their weapons systems suppliers to go bust, but the price is permanent alignment with whatever the government of the day demands. OpenAI’s deal, struck hours after Anthropic’s blacklisting, illustrates the dynamic neatly. Sam Altman claims the agreement preserves the same red lines Anthropic fought for, couched in language the Pentagon could accept: referencing existing law rather than writing explicit prohibitions. Whether the substance differs, or merely the salesmanship, remains unclear, and more than 300 Google employees and 60 OpenAI staff signed letters demanding their employers refuse the Pentagon’s terms.
Act now, adjust later
Anthropic will challenge the supply chain risk designation in court, and serious questions remain about whether Hegseth has the statutory authority to extend the ban beyond Pentagon-specific work. Legal challenges, shifting administrations and evolving international frameworks mean the ground will keep moving under this story for months, possibly years. None of that is a reason to wait. As I wrote in the latest Risk Quarterly, organisations that have allowed AI systems to embed in critical workflows without deliberate governance now face the reality that retrofitting oversight onto those dependencies means accepting operational instability. Insurance markets are hardening around exactly these exposures, with carriers increasingly limiting cover for autonomous system failures and governance gaps. Last week’s events add a new dimension to that risk: the political durability of your AI vendor is now a material concern, sitting alongside technical performance and regulatory compliance.
Boards can act now, even as the situation develops. Map where third-party AI models sit in your operations and identify which carry consequential authority rather than just providing support. Stress-test what a forced provider switch would cost in time, money and operational disruption. Build contractual flexibility into vendor agreements to account for sudden geopolitical or regulatory shifts. Ensure compliance teams understand the cross-jurisdictional patchwork a single AI tool can trigger across the US, the UK and the EU. Review whether existing D&O, professional indemnity and cyber policies adequately cover the scenario that played out last week: a provider designation that cascades through supply chains overnight. A government demonstrated in a single week that it can transform a technology company from the sole provider of classified AI services into a designated national security threat, not for what the company did, but for what it refused to allow. Vendor relationships built on the assumption of stability need revisiting before the next political shock lands, not after.