
Amazon Expanded Its Chip Deal With Anthropic While Funding OpenAI Too

Zara Mitchell · Apr 7, 2026 · 4 min read
Engine Score 8/10 — Important

Amazon's expanded chip supply deal with Anthropic, paired with its participation in OpenAI's funding round, signals a hedging strategy across the leading AI labs by the cloud giant.


Key Takeaways

  • Amazon expanded its partnership to supply Anthropic with additional AI chips, according to reporting from The Information, increasing Anthropic’s total compute capacity.
  • Anthropic’s available compute has more than doubled in recent months, bringing it closer to parity with OpenAI’s total infrastructure.
  • Amazon is simultaneously part of OpenAI’s $122 billion funding round, with a reported $50 billion commitment ($35 billion contingent on achieving AGI milestones).
  • The expanded deal likely involves Amazon’s custom Trainium 2 chips alongside NVIDIA GPUs procured through AWS.

Amazon has expanded its existing arrangement to supply Anthropic with more AI chips, The Information reported, deepening a partnership that already represents one of the largest compute commitments in the AI industry. The expanded deal increases the total silicon available to Anthropic for training and serving its Claude family of models, and sources familiar with the arrangement say Anthropic’s compute capacity has more than doubled in the past several months.

The specific hardware involved has not been fully disclosed, but the deal is understood to include a mix of Amazon’s custom Trainium 2 chips and NVIDIA GPUs provisioned through AWS infrastructure. Trainium 2, Amazon’s second-generation AI training accelerator, began volume deployment in late 2025 and offers performance that Amazon claims is competitive with NVIDIA’s H100 at lower cost per training hour. Anthropic has been one of the earliest and largest external customers for Trainium, using the chips alongside its substantial NVIDIA GPU fleet.

The timing is significant. Amazon is simultaneously participating in OpenAI’s $122 billion funding round — the largest private capital raise in history. Amazon’s reported commitment is $50 billion, though $35 billion of that is contingent on OpenAI achieving certain technical milestones that people briefed on the terms have described as AGI-related benchmarks. This means Amazon is placing substantial bets on both of the leading US-based AI labs, a hedging strategy that mirrors its broader approach to cloud computing, where AWS offers chips from NVIDIA, AMD, Intel, and its own Graviton and Trainium lines.

“Amazon’s strategy is to be the infrastructure layer for whoever wins,” said a cloud industry analyst at Bernstein Research. “They don’t need to pick one lab. They need both labs to need AWS.” That logic is straightforward: Anthropic is an AWS-exclusive customer for cloud infrastructure under the terms of its earlier investment agreements, and if Anthropic’s models continue to gain market share, that drives AWS consumption. The OpenAI investment, meanwhile, gives Amazon a financial stake in the other leading model provider and potentially positions AWS as an alternative to Microsoft Azure for OpenAI inference workloads.

For Anthropic, the expanded compute access arrives at a critical moment. The company is widely expected to release its next frontier model — likely Claude 5 or a model under a different naming convention — in the coming months. Training frontier AI models requires clusters of tens of thousands of GPUs running for weeks or months, and compute availability is frequently the binding constraint on model development timelines. CEO Dario Amodei has publicly stated that compute is “the most important input” to AI capability progress, and doubling Anthropic’s capacity meaningfully accelerates its ability to train and iterate on larger models.

The competitive context is relevant. OpenAI currently operates what is estimated to be the largest privately held AI compute infrastructure in the world, with substantial clusters at Microsoft Azure data centers plus its own Stargate joint venture with SoftBank and Oracle. Google DeepMind has access to Google’s TPU fleet, which is among the largest AI compute installations globally. Anthropic, despite raising over $15 billion in total funding, has historically operated with a compute disadvantage relative to these peers. The Amazon expansion narrows that gap.

Amazon’s total investment in Anthropic now exceeds $8 billion, including equity investments in 2023 and 2024 plus the compute commitments under their cloud partnership. Anthropic’s most recent valuation was reported at $61.5 billion following a funding round in early 2025. The expanded chip deal does not appear to involve additional equity investment — it is structured as a commercial infrastructure arrangement through AWS.

The deal also underscores the increasingly tight coupling between AI labs and cloud providers. Neither OpenAI nor Anthropic owns significant physical data center infrastructure. Their ability to train frontier models depends entirely on the willingness of Microsoft, Amazon, Google, and Oracle to allocate scarce GPU and custom chip capacity. That dependency gives cloud providers substantial leverage, and it explains why Amazon can simultaneously invest in competing labs without contradiction — the infrastructure fees flow to AWS regardless of which lab’s model a customer ultimately deploys.

