A California judge temporarily blocked the Pentagon from designating Anthropic a supply chain risk and from ordering government agencies to stop using its AI, MIT Technology Review reported on March 30, 2026. The ruling is the latest development in a month-long feud between the Department of Defense and the San Francisco-based AI safety company.
The government was given seven days to appeal the temporary restraining order. The underlying dispute centers on whether the Pentagon has the authority to unilaterally designate an AI company as a supply chain risk without following standard procurement review procedures.
MIT Technology Review’s analysis frames the Pentagon’s approach as a “culture war tactic” that has backfired. By attempting to turn supply chain risk designations, which are typically reserved for entities linked to foreign adversaries, against a domestic AI company, the Defense Department set a precedent that multiple tech companies and industry groups have rallied against.
The case has broader implications for the AI industry’s relationship with the U.S. government. Anthropic, which has actively pursued responsible AI development and government partnerships, found itself targeted by the same national security apparatus it had sought to work with. The outcome may determine whether AI companies can serve as government vendors while maintaining independent safety policies that occasionally conflict with defense priorities.
