The history of Silicon Valley is full of pivots that, in retrospect, were inevitable. But few have arrived with quite the symbolic weight of OpenAI’s decision in early March 2026 to allow its models to be deployed in classified US military settings. The move was not merely a contract announcement. It was a statement about what kind of company OpenAI intends to be — and a line drawn directly against the position taken by its closest rival, Anthropic.
To understand why this matters, you need to understand what Anthropic had refused to do first.
The Line Anthropic Held
Anthropic, founded in 2021 by former OpenAI employees including Dario and Daniela Amodei, had built its commercial identity around a particular kind of restraint. The company established explicit prohibitions on its AI being used for mass surveillance of Americans or for autonomous weapons capable of attacking without human oversight. When the Pentagon, now operating under the Trump administration’s rebranding as the Department of War, pushed back and argued it should have access to Anthropic’s models for any “lawful use,” Amodei declined.
“Anthropic understands that the Department of War, not private companies, makes military decisions,” Amodei wrote in a statement. “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.”
This position cost Anthropic government contracts. It did not cost Anthropic its users. When OpenAI subsequently announced its own agreement covering classified military operations, ChatGPT uninstalls rose 295% the following day and Anthropic’s Claude climbed to the number one position in the App Store. The public had an opinion, and it was expressed through the one mechanism available to ordinary people: the download button.
What OpenAI Actually Agreed To
OpenAI stated that its agreement “makes clear its redlines: no autonomous weapons and no autonomous surveillance.” The company’s position is that it drew limits even within the military context, and that the deal represents a responsible engagement with defence needs rather than an abandonment of principles.
Critics, including Caitlin Kalinowski — who resigned from her position as OpenAI’s hardware executive in direct response — argued the decision was “rushed without the guardrails defined.” The speed of the announcement, rather than any single element of its content, was the red flag. When major decisions about AI deployment in warfare are made quickly and without adequate public deliberation, the institutional culture that produces them becomes the real concern.
Why This Moment Is Different From Previous AI Ethics Debates
AI ethics debates have been a fixture of the technology press since at least 2015. Most have been resolved, or at least managed, without producing the kind of immediate public reaction we saw in March 2026. The difference this time is that the stakes are legible. Military AI, autonomous weapons, and surveillance are concepts the general public understands intuitively. They do not require an explanation of transformer architecture or training data.
The other difference is commercial visibility. When a 295% spike in uninstalls follows a policy announcement within 24 hours, it demonstrates that AI governance decisions now have direct, measurable commercial consequences. This changes the incentive structure for every major AI lab. The market, as well as regulators and ethicists, is now watching — and is capable of responding at speed.
The Wider Race: Who Is Winning?
The commercial dimension of this story is not incidental. OpenAI has surpassed $25 billion in annualised revenue, according to contemporaneous reporting, while Anthropic is approaching $19 billion. Both companies are navigating a market that is exploding in scale even as ethical scrutiny of it intensifies. The companies that define their values clearly and hold to them consistently, even at commercial cost, are increasingly differentiated from those that do not.
xAI, meanwhile, has actively positioned Grok to win defence contracts, with Elon Musk’s relationship with the Trump administration providing direct political access. The AI competitive landscape of 2026 is, in part, a race to define which ethical posture is most durable: commercially, reputationally, and politically.
What Happens Next
The UK’s ICO and Ofcom have already issued a formal information demand to xAI regarding Grok. The European Union’s AI Act is forcing companies to classify their systems by risk level, with the strictest obligations attached to the highest-risk uses. And public trust, as the ChatGPT uninstall data demonstrates, is proving to be a real and volatile variable in AI governance.
The question is no longer whether AI will be used in military contexts. It will be. The question is who decides the terms, on what timeline, and with what accountability. The March 2026 episode suggests that the answer is not going to be straightforward — and that AI companies that thought they were primarily in the software business now have to think carefully about what it means to be in the governance business too.
Frequently Asked Questions
What did OpenAI agree to with the US military in 2026?
OpenAI agreed to allow its AI models to be used in classified US military operations, subject to stated limits against autonomous weapons and autonomous surveillance. The deal stood in direct contrast to Anthropic’s explicit refusal to supply its models for such deployments.
Why did ChatGPT uninstalls spike in March 2026?
The day after OpenAI announced its military deployment agreement, ChatGPT uninstalls rose 295% day-over-day, reflecting immediate public opposition to a decision that critics argued was made without clearly defined safeguards.
What is Anthropic’s position on military AI?
Anthropic has maintained explicit prohibitions on its AI being used for mass surveillance of Americans or for autonomous weapons systems capable of attacking without human oversight, even at the cost of federal government contracts.