U.S. District Judge Rita F. Lin granted Anthropic a preliminary injunction on March 26, 2026, temporarily blocking the Trump administration from enforcing its ban on federal agencies using the company’s AI technology. The ruling pauses Defense Secretary Pete Hegseth’s designation of Anthropic as a supply chain risk to national security — a label typically reserved for foreign adversaries and terrorist organizations, not American companies.
The dispute began in January 2026 when Hegseth issued a memorandum requiring all AI defense contracts to include language permitting any lawful use of the technology. Anthropic refused, maintaining red lines that prohibit Claude from being used for mass domestic surveillance and fully autonomous lethal weapons. On February 27, President Trump directed federal agencies to cease using Anthropic’s products, and Hegseth followed by designating the company a supply chain risk — effectively barring any federal contractor from doing business with Anthropic.
Judge Lin found the government’s actions likely both contrary to law and arbitrary and capricious, suggesting they were designed to punish Anthropic for maintaining its acceptable use policy rather than to address genuine security concerns. The ruling noted that Claude is the only large-scale AI model operational on the Defense Department’s classified systems, making the sudden designation as a security risk difficult to reconcile with years of trusted deployment.
The case carries implications well beyond Anthropic. Microsoft, the ACLU, retired military leaders, and the Cato Institute filed amicus briefs supporting the company, reflecting broad concern about the precedent of using national security designations to coerce private companies into removing safety guardrails. If the government can designate a domestic AI company as a supply chain risk for refusing to remove ethical restrictions, every AI company faces the same pressure.
Anthropic, valued at $380 billion, holds a reported $200 million defense contract, and its tools have been used in classified operations since 2024. The company’s Claude model was used for intelligence analysis, operational planning, cyber operations, and target selection — including during Operation Epic Fury against Iran. Palantir, which integrates Claude into its Maven military platform, also faces uncertainty from the dispute.
The injunction is temporary while the full case proceeds. But the ruling establishes that the government cannot simply label an American AI company a security threat to force compliance with policy preferences — at least not without proper legal process. The question of whether AI companies can maintain ethical boundaries while serving military customers remains unresolved, but for now, Anthropic’s red lines stand.
