The Government Picked OpenAI. Anthropic Got Blacklisted.
February 28, 2026
On Friday, February 27th, 2026, two things happened hours apart:
10:00 AM ET: President Trump orders all federal agencies to stop using Anthropic's AI. Defense Secretary Pete Hegseth declares Anthropic a "supply chain risk to national security."
10:47 PM ET: OpenAI CEO Sam Altman tweets: "Tonight, we reached an agreement with the Department of War to deploy our models in their classified network."
The timing is not a coincidence. The government just showed the AI industry exactly how it plans to pick winners—and obliterate anyone who doesn't play ball on its exact terms.
#The Deal vs. The Designation
OpenAI secured a contract to deploy GPT models on the Pentagon's classified networks. The financial details haven't been disclosed, but similar deals with AI labs have been valued at up to $200 million.
Anthropic, meanwhile, lost its $200 million Pentagon contract and was designated a "supply chain risk." Not "we're pausing talks" or "we need more time." A formal designation that, if interpreted the way Defense Secretary Hegseth suggests, would bar every Pentagon contractor from using Anthropic's Claude models.
That's not just government contracts. That's potentially Amazon, Google, Nvidia—anyone doing business with the Defense Department. For a company that just raised $30 billion at a $380 billion valuation and was eyeing an IPO, this is existential.
What "Supply Chain Risk" Actually Means
- $200 million Pentagon contract cancelled
- All federal agencies ordered to stop using Claude (6-month phase-out)
- Estimated $140B/year in total federal tech spending at risk
- Every Pentagon contractor must stop using Claude, even for non-government work
- Amazon, Google, and Nvidia may have to divest from Anthropic or give up their defense contracts
- Fortune 500 companies with any Pentagon exposure face legal risk if they keep using Claude
- Anthropic becomes untouchable for US enterprise customers
- $380 billion valuation at risk
- IPO plans indefinitely on hold
- Enterprise customer exodus driven by legal uncertainty
- Potential investor divestment (Amazon, Google, and Nvidia each invested billions)
- $18B annual revenue target threatened
#What They Were Fighting About
Here's where it gets interesting: both companies had nearly identical red lines.
Anthropic wanted explicit contract language prohibiting:
- Mass domestic surveillance using its AI
- Fully autonomous weapons (AI making kill decisions without human oversight)
OpenAI agreed to... the exact same principles. From Altman's tweet:
"In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome. We put them into our agreement."
So what's the difference? How they framed it.
| Issue | Anthropic's Approach | OpenAI's Approach |
|---|---|---|
| Mass Surveillance | ✗ Explicit contract prohibition: "We will not allow our models to be used for mass domestic surveillance." | ✓ Trust current law plus technical safeguards: "We will build technical safeguards to ensure our models behave as they should." |
| Autonomous Weapons | ✗ Explicit contract prohibition: "No fully autonomous weapons where AI makes kill decisions without human oversight." | ✓ Trust the Pentagon's existing policy: "The DoW displayed a deep respect for safety... humans must retain appropriate judgment." |
| Contract Language | ✗ Binding contractual restrictions: red lines written explicitly into the contract as enforceable limits. | ✓ "Any lawful purpose" plus principles: the government can use the models for any lawful purpose; OpenAI builds safeguards. |
| Who Decides Limits | ✗ The AI company: we built it, we understand the risks, we set the boundaries. | ✓ The government: U.S. law determines acceptable use. We're the vendor, you're the customer. |
| Result | ✗ Designated a supply chain risk | ✓ Pentagon contract on classified networks |
Anthropic wanted the restrictions written into the contract as binding limits on government use.
OpenAI agreed to "any lawful purpose" while saying the Pentagon "respected safety" and allowed them to build "technical safeguards" into the models themselves.
The Pentagon got what it wanted from OpenAI: legal flexibility. Current U.S. law doesn't explicitly ban mass surveillance or autonomous weapons. By agreeing to "lawful use," OpenAI gave the government room to decide what's lawful, rather than binding it to contract terms a company wrote.
#The 24-Hour Blitz
The speed of this tells you everything. Anthropic CEO Dario Amodei met with Hegseth earlier in the week. Talks broke down. Within 48 hours:
- Trump orders all federal agencies to cease using Anthropic (with a 6-month phase-out)
- Hegseth designates Anthropic a supply chain risk
- OpenAI announces a deal with the same safety principles Anthropic wanted
This wasn't a bureaucratic process. This was a message: negotiate on our terms, or we'll replace you in 24 hours.
#Is This Even Legal?
Multiple legal experts say the "supply chain risk" designation is on shaky ground.
Peter Harrell (former Biden NSC official, Georgetown Law): The DoW designation only applies to Defense Department contracts. "DoW can't, legally, tell its contractors 'don't use Anthropic even in your private contracts.'"
Dean Ball (Foundation for American Innovation, former Trump AI policy advisor): Hegseth's interpretation is "almost surely illegal" and amounts to "attempted corporate murder."
Charlie Bullock (Institute for Law & AI): The government can't make this designation without completing a risk assessment and notifying Congress—neither of which appears to have happened.
Amos Toh (Brennan Center for Justice, NYU): The statute requires proving risk of "sabotage, subversion, or manipulation by an adversary." It's unclear how Anthropic's usage restrictions could be exploited by adversaries.
Anthropic says it will challenge the designation in court. It will probably win, eventually. But here's the thing: the damage is already done.
Every general counsel at every Fortune 500 company with Pentagon contracts is asking: "Is using Claude worth the risk?" Even if Anthropic wins in court two years from now, how many customers will they have lost?
#What OpenAI Actually Agreed To
Altman says the Pentagon agreed to OpenAI's "red lines." But read the language carefully:
"We are asking the Department of War to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept."
Translation: "These are reasonable terms. Anthropic should have taken them."
But if the terms are the same, why did Anthropic get designated a supply chain risk? Because Anthropic wanted contractual enforcement. They didn't trust the government to self-regulate on autonomous weapons and mass surveillance.
OpenAI's approach: Trust that current law prevents abuse. Build technical safeguards. Let the government decide what's "lawful use."
Anthropic's approach: Don't trust promises. Put hard limits in the contract.
The government picked the company that trusted them.
#The Precedent This Sets
This is the first time the U.S. government has designated an American AI company a supply chain risk. Previously, that designation was reserved for adversaries—Huawei, Chinese telecoms, foreign suppliers.
Now it's a tool to punish domestic companies that won't comply with contract terms.
Franklin Turner (government contracts attorney): "Designating Anthropic as a supply chain risk is the contractual equivalent of nuclear war."
Saif Khan (former Biden White House NSC): "The Department is arguably treating Anthropic as a greater national security threat than any Chinese AI companies, none of whom they've designated supply-chain risks."
Think about that. The Pentagon hasn't designated DeepSeek a supply chain risk. Or any Chinese AI lab. But Anthropic, a San Francisco company with a $380 billion valuation, backed by Google and Amazon? Supply chain risk.
#What This Reveals
This isn't really about red lines or contract language. It's about who controls AI governance.
Anthropic's position: We built this technology. We understand its risks. We should set limits on how it's used.
The Pentagon's position: You're a vendor. We're the customer. U.S. law determines acceptable use, not your safety team.
OpenAI threaded the needle: agreeing to the government's framing while claiming similar safety outcomes. Maybe they genuinely believe technical safeguards work. Maybe they calculated that fighting the Pentagon is a losing battle. Maybe they saw Anthropic's fate and decided compliance beats principles.
#Where This Goes Next
For Anthropic:
- Legal challenge to the supply chain risk designation (likely to succeed, but takes years)
- Immediate customer loss from enterprise fear
- Potential forced divestment by Amazon, Google, and Nvidia if those investors are deemed "Pentagon contractors"
- IPO plans on hold indefinitely
- Revenue hit from losing government contracts + spooked enterprise customers
For OpenAI:
- Classified network deployment deal (value undisclosed; similar deals have been valued at up to $200 million)
- First-mover advantage with Pentagon after Anthropic's exit
- Stronger positioning for future government AI contracts
- Reputational hit with the AI safety community (many OpenAI employees signed an open letter supporting Anthropic's position)
For everyone else:
- Clear message: the government will pick winners based on compliance, not capability
- Every AI lab now knows: negotiate on the Pentagon's terms, or risk supply chain designation
- "AI safety" principles are negotiable; government flexibility is not
#The Bigger Picture
We're watching the U.S. government decide how AI will be governed—and the answer is: they will govern it, not the companies building it.
Anthropic believed AI labs should set ethical boundaries. The Pentagon just proved that belief is a business liability.
OpenAI took the deal. Anthropic got nuked from orbit.
The question isn't whether Anthropic was right or wrong. The question is: when the government wants something, does your safety policy matter?
This week, we got the answer.
Sources: Reuters, Fortune, CNBC, The New York Times