REGULATION

Federal Judge Blocks Pentagon Ban on Anthropic, Calls Supply Chain Risk Label Unprecedented

megaone_admin · Mar 27, 2026 · 2 min read
Engine Score 9/10 — Critical

This story reports a critical legal development: a preliminary injunction granted in a case between Anthropic and the U.S. Department of War. Sourced directly from court documents, it carries high impact for AI regulatory and operational precedent, making it highly actionable for legal and industry professionals.


U.S. District Judge Rita F. Lin granted Anthropic a preliminary injunction on March 26, 2026, temporarily blocking the Trump administration from enforcing its ban on federal agencies using the company’s AI technology. The ruling pauses Defense Secretary Pete Hegseth’s designation of Anthropic as a supply chain risk to national security — a label typically reserved for foreign adversaries and terrorist organizations, not American companies.

The dispute began in January 2026 when Hegseth issued a memorandum requiring all AI defense contracts to include language permitting any lawful use of the technology. Anthropic refused, maintaining red lines that prohibit Claude from being used for mass domestic surveillance and fully autonomous lethal weapons. On February 27, President Trump directed federal agencies to cease using Anthropic’s products, and Hegseth followed by designating the company a supply chain risk — effectively barring any federal contractor from doing business with Anthropic.

Judge Lin found the government’s actions likely both contrary to law and arbitrary and capricious, suggesting they were designed to punish Anthropic for maintaining its acceptable use policy rather than to address genuine security concerns. The ruling noted that Claude is the only large-scale AI model operational on the Defense Department’s classified systems, making the sudden designation as a security risk difficult to reconcile with years of trusted deployment.

The case carries implications well beyond Anthropic. Microsoft, the ACLU, retired military leaders, and the Cato Institute filed amicus briefs supporting the company, reflecting broad concern about the precedent of using national security designations to coerce private companies into removing safety guardrails. If the government can designate a domestic AI company as a supply chain risk for refusing to remove ethical restrictions, every AI company faces the same pressure.

Anthropic, valued at $380 billion, holds a reported $200 million defense contract, and its tools have been used in classified operations since 2024. The company’s Claude model was used for intelligence analysis, operational planning, cyber operations, and target selection — including during Operation Epic Fury against Iran. Palantir, which uses Claude in its Maven military platform, also faces uncertainty from the dispute.

The injunction is temporary while the full case proceeds. But the ruling establishes that the government cannot simply label an American AI company a security threat to force compliance with policy preferences — at least not without proper legal process. The question of whether AI companies can maintain ethical boundaries while serving military customers remains unresolved, but for now, Anthropic’s red lines stand.


MegaOne AI Editorial Team

MegaOne AI monitors 200+ sources daily to identify and score the most important AI developments. Every story is fact-checked, linked to primary sources, and rated using our six-factor Engine Score methodology.
