- The U.S. Department of War named eight AI vendors with classified-network access on May 1, 2026: SpaceX, OpenAI, Google, Nvidia, Reflection, Microsoft, AWS, and Oracle.
- The agreements are framed as accelerating an “AI-first fighting force” with “decision superiority across all domains of warfare,” with all use specified as “lawful operational use.”
- Anthropic refused the same wording. CEO Dario Amodei argued that current laws contain loopholes (e.g., mass surveillance through commercial datasets); the company has been excluded from the deals, and the Trump administration ordered federal agencies to stop using its technology.
- In a leaked memo, Amodei described OpenAI’s Pentagon contract as “80% safety theater”; OpenAI cites three red lines (no domestic mass surveillance, no autonomous weapons, no automated high-risk decisions), but legal experts question whether those commitments hold without explicit contractual carve-outs.
What Happened
On May 1, 2026, the U.S. Department of War signed deals to deploy AI across classified military networks with eight AI companies: SpaceX, OpenAI, Google, Nvidia, Reflection, Microsoft, AWS, and Oracle. The agreements are framed as accelerating "the transformation toward establishing the United States military as an AI-first fighting force" and strengthening "decision superiority across all domains of warfare," with all use specified as "lawful operational use."
Why It Matters
The eight-vendor list is the most consequential public statement of U.S. AI procurement to date. Anthropic's exclusion is the announcement's mirror image: every other top AI lab is in, while the one company that publicly insisted on guardrails is frozen out. The "lawful operational use" framing is the central legal anchor. Anthropic's refusal of that exact wording, combined with leaked-memo characterizations of OpenAI's contract, sets up a continuing fight over what terms-of-service guardrails AI labs can negotiate when their largest customer is the Department of War.
Technical Details
According to the announcement summary published by The Decoder, all eight vendors presumably accepted the same usage terms as OpenAI. OpenAI publicly cites three red lines on its Pentagon contract: no domestic mass surveillance, no autonomous weapons, and no automated high-risk decisions. Legal experts have questioned whether these red lines mean much without explicit contractual carve-outs, since “lawful operational use” leaves substantial discretion to the deploying agency.
Anthropic CEO Dario Amodei's pushback centered on the "all lawful use" wording. Amodei argued that current laws contain loopholes, including mass surveillance through commercial datasets, that permit practices Anthropic does not want its models used for. The Pentagon then designated Anthropic a "supply-chain risk," and the Trump administration ordered federal agencies to stop using Anthropic technology. Anthropic sued and obtained an injunction against the supply-chain-risk designation in March 2026. In a separately leaked memo, Amodei dismissed OpenAI's Pentagon contract as "80% safety theater."
Who’s Affected
The eight named vendors gain immediate access to one of the largest enterprise AI customers in the world. Reflection AI's inclusion is the most striking: the smaller AI lab joins a tier alongside hyperscalers such as Google, Microsoft, AWS, and Oracle that was previously reserved for the largest tech companies. Anthropic faces the cost of its principled position: continued exclusion from one of the largest AI customers and a public fight that has expanded into executive-branch action against the company. Other AI labs weighing similar terms-of-service guardrails, including labs preparing to launch in the second half of 2026, now have Anthropic's experience as a clear illustration of the cost of refusal.
What’s Next
The Anthropic-DOD legal fight continues, and the underlying disagreement over what counts as "lawful use" is likely to surface in formal congressional oversight as spending under the contracts begins. Expect academic and civil-society legal analyses of whether the eight vendors' terms include any meaningful constraints. Anthropic's commercial trajectory through Q2, without Pentagon access but with growing Mythos cybersecurity demand, will be a telling test of whether refusing federal terms is commercially sustainable.