The U.S. Department of Defense has designated Anthropic as a “supply-chain risk” and banned federal agencies from using the company’s AI software, including the Claude model family, after Anthropic refused to remove safety guardrails from its government contracts. The dispute, which escalated through February and March 2026, centers on contract provisions that prohibit Claude’s use in weapons targeting, autonomous lethal decision-making, and mass surveillance.
The confrontation began when the DoD demanded that Anthropic remove contractual restrictions preventing its AI tools from being used in certain military applications. Anthropic CEO Dario Amodei publicly stated on February 26 that the company would not concede to the demand. The DoD responded by setting a compliance deadline of 5:01 PM on February 27, 2026; when the deadline passed without action, the department invoked its supply-chain risk authority to bar federal procurement of Anthropic’s products.
The “supply-chain risk” designation is typically reserved for foreign adversary-linked technology companies — most notably Huawei and Kaspersky. Applying it to a U.S.-headquartered AI company over a policy disagreement rather than a security vulnerability represents an unprecedented use of the designation. Anthropic has characterized the DoD’s framing as legally unsupported, arguing that the safety provisions in its contracts are standard terms that have been accepted by other government agencies and enterprise customers.
The timing coincides with Anthropic’s most capable model releases. Claude Opus 4.6, launched February 5, 2026, features a one-million-token context window and a 14.5-hour task completion horizon. Claude Sonnet 4.6 followed on February 17. Both models have attracted significant enterprise adoption, making the federal ban commercially consequential — government agencies and their contractors represent a substantial portion of the enterprise AI market.
The dispute places Anthropic in a position no other AI company has occupied: banned by its own government not for a technical failure or foreign ties, but for maintaining safety commitments that the military establishment considers operationally restrictive. OpenAI, Google, and Palantir have all accepted government contracts without equivalent restrictions on military use. Whether Anthropic’s stance strengthens its credibility with commercial customers who value safety, or costs it material revenue from the defense sector, will likely define the company’s strategic direction for the remainder of 2026.
