- Anthropic’s annualized revenue run rate reached $19 billion in early March 2026, more than doubling from $9 billion three months prior.
- The company refused to remove AI safeguards requested by the Pentagon, leading the Department of Defense to cancel a $200 million contract.
- The Trump administration classified Anthropic as a “supply chain risk,” a designation normally reserved for companies connected to foreign adversaries.
- A federal judge issued a preliminary injunction blocking the government’s ban on doing business with Anthropic.
What Happened
Anthropic hit a $19 billion annualized revenue run rate in early March 2026, according to Bloomberg. The figure more than doubled from $9 billion approximately three months earlier, with the company adding roughly $6 billion in annualized revenue during February alone. CFO Krishna Rao confirmed the trajectory at a Morgan Stanley TMT conference, stating that “Claude is increasingly becoming critical to how businesses work” across entrepreneurs, startups, and large enterprises.
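An annualized run rate extrapolates a recent period’s revenue to a full year. The arithmetic implied by the reported figures can be sketched as follows — note the “monthly revenue × 12” convention is an assumption for illustration; the article does not specify Bloomberg’s exact methodology:

```python
# Run-rate arithmetic implied by the reported figures ($ billions).
# Assumption: "annualized run rate" = one month's revenue x 12; the
# article does not state Bloomberg's methodology.

def annualized_run_rate(monthly_revenue_b: float) -> float:
    """Annualize one month of revenue (figures in $ billions)."""
    return monthly_revenue_b * 12

# Implied monthly revenue at each reported run rate:
dec_monthly = 9 / 12    # ~$0.75B/month roughly three months earlier
mar_monthly = 19 / 12   # ~$1.58B/month by early March 2026

# "Roughly $6B in annualized revenue during February alone" implies
# monthly revenue grew by about $0.5B in that single month:
feb_jump_monthly = 6 / 12

print(f"Implied monthly revenue: ${dec_monthly:.2f}B -> ${mar_monthly:.2f}B")
print(f"February's implied monthly-revenue increase: ~${feb_jump_monthly:.2f}B")
```

The same convention makes the later distinction between the $19 billion run rate and the roughly $18 billion projected full-year figure unsurprising: a run rate captures only the most recent month’s pace, not the slower months earlier in the year.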
The revenue milestone arrived during a high-profile standoff with the U.S. Department of Defense. Defense Secretary Pete Hegseth told CEO Dario Amodei that if Anthropic did not allow its AI model to be used “for all lawful purposes,” the Pentagon would cancel the company’s $200 million contract. Anthropic refused to comply with the demand.
On February 26, Amodei publicly explained the decision, stating that “in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.” The company declined to remove safeguards that restricted the use of Claude in fully autonomous weapons systems and mass surveillance applications.
Why It Matters
Anthropic’s revenue growth demonstrates that refusing military contracts has not slowed the company’s commercial momentum. Amodei has stated that enterprise customers account for approximately 80% of Anthropic’s business, giving the company a commercial revenue base that does not depend on government procurement contracts.
The Pentagon dispute set a significant precedent for how AI companies respond to government pressure on safety policies. The Trump administration’s decision to classify Anthropic as a “supply chain risk” — a designation the Pentagon typically reserves for companies linked to foreign adversaries such as China or Russia — marked the first time the U.S. government used this classification against a domestic AI company over policy disagreements rather than security concerns.
OpenAI moved quickly to fill the gap, announcing its own Pentagon deal shortly after the Anthropic ban, positioning itself as the compliant alternative for government AI procurement.
Technical Details
The revenue surge was driven largely by enterprise adoption of Claude Code, Anthropic’s developer-focused coding tool that saw rapid uptake among software teams. The company also rolled out Cowork, a new feature designed to embed Claude as role-specific agents within existing business workflows, and expanded its plugin capabilities for third-party integrations.
Anthropic reached a $380 billion valuation during its Series G funding round, raising $30 billion from investors. The company’s projected 2026 revenue stands at approximately $18 billion in actual (non-annualized) terms, though Bloomberg estimated a $14 billion EBITDA loss for the same period. The gap reflects massive infrastructure spending on compute clusters and model training that continues to outpace even the company’s rapid revenue growth.
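The gap between projected revenue and the estimated EBITDA loss implies substantial spending. A quick check of that arithmetic — treating EBITDA naively as revenue minus operating spend, which is a simplification, and using the article’s estimated figures:

```python
# Rough consistency check of the reported 2026 estimates ($ billions).
# Simplification: EBITDA is treated as revenue minus EBITDA-relevant
# operating spend; both inputs are estimates cited in the article.

projected_revenue = 18.0  # projected 2026 revenue, non-annualized
ebitda_loss = 14.0        # Bloomberg's estimated EBITDA loss

# Spending implied by a -$14B EBITDA on $18B of revenue:
implied_spend = projected_revenue + ebitda_loss

print(f"Implied EBITDA-relevant spend: ~${implied_spend:.0f}B")
```

In other words, the estimates imply on the order of $32 billion in compute, training, and other operating costs against $18 billion of revenue, which is the gap the paragraph above attributes to infrastructure spending.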
The “supply chain risk” classification, if enforced, would bar all federal agencies and military contractors from purchasing or using Anthropic products, effectively cutting the company off from the entire U.S. government market.
Who’s Affected
Commercial enterprise customers using Claude face no immediate disruption from the Pentagon dispute. Anthropic’s commercial API, Claude Code, and workplace integrations continue to operate independently of government contracts. The company’s consumer-facing Claude chatbot and developer tools remain fully available.
Federal agencies and defense contractors that had begun adopting Claude were directly impacted. The Trump administration ordered these organizations to cease using the platform and transition to alternative providers. This created immediate operational disruption for government teams that had integrated Claude into existing workflows.
What’s Next
On March 26, federal judge Lin granted Anthropic’s motion for a preliminary injunction, blocking the government from enforcing its ban. The judge questioned why Anthropic was being blacklisted, noting the classification “seems a pretty low bar.” The case continues in court and could establish binding legal precedent on whether the government can retaliate against AI companies for maintaining safety restrictions on their models. The outcome will shape how every major AI provider negotiates military contracts and usage policies going forward.
