
The US Military Used Claude AI to Attack Iran — The Same Company Trump Blacklisted

Zara Mitchell · Apr 5, 2026 · 2 min read
Engine Score 7/10 — Important

CBS News, citing two military sources, has confirmed that the US used Anthropic’s Claude AI over the weekend in operations related to the attack on Iran — and continues to use it. This is the same AI company the Trump administration labeled a “supply chain risk” and blacklisted from government systems just weeks earlier.

What CBS Confirmed

Two military sources told CBS News that Claude was used in connection with the Iran strikes. While the exact applications weren’t specified, military AI deployments typically involve targeting analysis, logistics optimization, intelligence synthesis, and situational awareness — not autonomous weapons systems.

Anthropic CEO Dario Amodei has publicly stated that Claude should not be used to power fully autonomous weapons without human oversight. The company’s acceptable use policy explicitly requires human-in-the-loop decision-making for military applications.

The Blacklisting Timeline

The sequence of events creates a striking contradiction:

  • Early 2026: Anthropic declined aspects of a Pentagon AI deal, citing ethical concerns about autonomous weapons applications
  • March 2026: The Trump administration labeled Anthropic a “supply chain risk” and barred its products from government systems
  • March 2026: Anthropic sued the federal government, arguing the ban violated constitutional protections
  • Late March 2026: A federal judge ruled in Anthropic’s favor, finding the ban likely violated free speech
  • April 4-5, 2026: The US military used Claude AI during Iran strikes

The government went from banning Claude to deploying it in military operations within weeks.

Why Claude Specifically

The military’s choice of Claude makes practical sense. Claude’s extended thinking capability — where the model reasons through complex problems step by step — is well suited to the kind of multi-variable analysis military operations require. Intelligence synthesis, where thousands of data points must be weighed against one another, is one of Claude’s strongest demonstrated capabilities.

Anthropic recently signed an AI safety MOU with Australia, one of the US’s Five Eyes intelligence partners. The company has also established relationships with the UK and Japanese governments. The military may have chosen Claude specifically because it was already cleared through allied intelligence-sharing frameworks.

The Ethical Tension

Anthropic occupies an unusual position: an AI safety company whose technology is now confirmed to be used in military strikes. Amodei’s public position — human oversight required, no fully autonomous weapons — provides some ethical boundary. But the distinction between “AI assists targeting analysis” and “AI selects targets” is narrow in practice, especially when military operations move at the speed they did over the weekend.

Whether Anthropic consented to this specific use, was aware of it in advance, or learned about it from CBS News like everyone else remains unclear. The company has not issued a public statement.

