LAUNCHES

Anthropic and Amazon Commit $100B+ to AWS for Up to 5 GW of AI Compute

Ryan Matsuda · Apr 30, 2026 · 4 min read
Engine Score 8/10 — Important

Anthropic + Amazon 5GW compute expansion — major infra

  • Anthropic has committed more than $100 billion to AWS over the next ten years, securing up to 5 gigawatts of compute capacity spanning Trainium2 through Trainium4 chips.
  • Amazon is investing an additional $5 billion in Anthropic today, supplementing the $8 billion previously committed, with up to $20 billion more available in the future.
  • Anthropic’s annualized run-rate revenue has surpassed $30 billion in 2026, up from approximately $9 billion at the end of 2025.
  • Nearly 1 gigawatt of combined Trainium2 and Trainium3 capacity is expected to come online before the end of 2026, with Trainium2 additions arriving in Q2 2026.

What Happened

Anthropic and Amazon Web Services signed a new infrastructure and investment agreement, committing more than $100 billion in AWS spending over the next ten years and securing up to 5 gigawatts of compute capacity to train and deploy Claude. The deal, detailed in an April 21 announcement on Anthropic’s website, expands a partnership that began in 2023 and adds fresh capital at a moment when the company says infrastructure strain has affected service reliability across its consumer tiers.

Amazon is also investing $5 billion in Anthropic as part of this agreement, on top of the $8 billion already committed, with an option for up to $20 billion in additional investment in the future.

Why It Matters

Anthropic’s annualized run-rate revenue surpassed $30 billion in 2026, up from approximately $9 billion at the end of 2025. That growth has been accompanied by what Anthropic describes as a sharp rise in consumer usage, which has caused reliability and performance issues for free, Pro, Max, and Team users, particularly during peak hours.

The deal also reflects a broader race among hyperscalers to lock in frontier AI labs as long-term compute customers. Microsoft maintains a long-standing arrangement with OpenAI on Azure; Google has separately invested in and supplies compute to Anthropic through Google Cloud Vertex AI. With this agreement, AWS becomes Anthropic’s primary training and cloud provider for mission-critical workloads for at least the next decade.

Technical Details

The agreement covers compute capacity across Amazon’s Graviton processors and Trainium2 through Trainium4 chips, with an option to purchase future generations of Amazon’s custom silicon. Anthropic currently operates more than one million Trainium2 chips for training and serving Claude—a scale built through Project Rainier, a large-scale compute cluster the two companies launched jointly.

Significant new Trainium2 capacity is scheduled to come online in Q2 2026. Scaled Trainium3 capacity is targeted for later in 2026. Combined, those additions are projected to bring nearly 1 gigawatt of Trainium2 and Trainium3 capacity online before year-end. The 5 gigawatt total commitment spans training and inference, with expanded inference coverage planned in Asia and Europe to serve international customers.

“Our custom AI silicon offers high performance at significantly lower cost for customers, which is why it’s in such hot demand,” said Andy Jassy, CEO of Amazon. “Anthropic’s commitment to run its large language models on AWS Trainium for the next decade reflects the progress we’ve made together on custom silicon.”

Who’s Affected

More than 100,000 organizations currently run Claude on Amazon Bedrock. Those customers stand to benefit from expanded capacity and lower latency through planned regional inference expansions in Asia and Europe. The deal also formalizes deeper platform integration: a feature called Claude Platform on AWS will make Anthropic’s full platform available within AWS using existing customer accounts, billing, and governance controls, without requiring separate credentials or contracts. Anthropic’s updated announcement lists that feature as coming soon.
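For readers unfamiliar with what “running Claude on Amazon Bedrock” looks like in practice, here is a minimal sketch of how an organization typically invokes an Anthropic model through Bedrock’s `InvokeModel` API using `boto3`. The model ID below is a placeholder for illustration; real IDs are listed in the Bedrock console and vary by Claude version and region.

```python
import json

# Placeholder model ID for illustration only; actual Anthropic model IDs
# are listed in the Amazon Bedrock console and differ by region/version.
MODEL_ID = "anthropic.claude-example-model-v1"

def build_bedrock_request(prompt: str, max_tokens: int = 512) -> str:
    """Build the JSON body Bedrock expects for Anthropic models.

    Bedrock uses the Anthropic Messages format, tagged with the
    "bedrock-2023-05-31" anthropic_version string.
    """
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

body = build_bedrock_request("Summarize this quarter's capacity plan.")

# With AWS credentials configured, the invocation would look like:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.invoke_model(modelId=MODEL_ID, body=body)
print(json.loads(body)["anthropic_version"])
```

Because the request goes through the customer’s own AWS account, existing IAM policies, billing, and governance controls apply automatically, which is the integration model the Claude Platform on AWS feature is described as extending.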

Individual consumers on Anthropic’s free, Pro, Max, and Team subscription tiers have experienced degraded performance during peak hours. Anthropic indicated that new capacity coming online in the next three months is specifically intended to address those reliability issues.

What’s Next

Dario Amodei, CEO and co-founder of Anthropic, said the company will use the expanded infrastructure to continue AI research while meeting current demand. “Our users tell us Claude is increasingly essential to how they work, and we need to build the infrastructure to keep pace with rapidly growing demand,” Amodei said in the announcement. “Our collaboration with Amazon will allow us to continue advancing AI research while delivering Claude to our customers, including the more than 100,000 building on AWS.”

Meaningful compute additions are expected within three months, with nearly 1 gigawatt of combined Trainium2 and Trainium3 capacity targeted by end of 2026. Anthropic noted it continues to pursue a diversified hardware strategy, distributing workloads across multiple chip architectures. Claude remains the only frontier AI model available across all three of the largest public cloud platforms: Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Azure Foundry.
