Anthropic, the AI safety company behind Claude, signed a multi-gigawatt TPU computing agreement with Google and chip architect Broadcom in April 2026, securing dedicated compute capacity that will draw as much power as 750,000 to 2 million homes, all earmarked for training the next generation of Claude models. Capacity comes online starting in 2027.
This is not a marginal infrastructure upgrade. A multi-gigawatt commitment at this scale is a structural declaration that Anthropic intends to remain at the frontier of AI development, not just adjacent to it.
What “Multi-Gigawatt” Actually Means in Compute Terms
One gigawatt equals one billion watts of continuous power draw. A typical cluster used to train a frontier model (GPT-4, Claude 3 Opus) draws 100–200 megawatts, so each gigawatt of commitment covers the equivalent of 5–10 such clusters running simultaneously. A multi-gigawatt commitment means Anthropic is reserving at least ten of them, and plausibly far more.
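The cluster comparison is simple division, which can be sketched directly. The 2 GW figure below is an assumption for illustration; the deal's exact size is undisclosed:

```python
# Back-of-envelope: how many frontier-training clusters fit inside a
# multi-gigawatt commitment? All figures are rough, illustrative assumptions.

GW = 1e9  # watts per gigawatt
MW = 1e6  # watts per megawatt

commitment_watts = 2 * GW                        # assumed "multi-gigawatt" = 2 GW
cluster_low, cluster_high = 100 * MW, 200 * MW   # assumed per-cluster power draw

# Bigger clusters (200 MW) yield the fewest equivalents; smaller (100 MW) the most.
min_clusters = commitment_watts / cluster_high
max_clusters = commitment_watts / cluster_low

print(f"{min_clusters:.0f}-{max_clusters:.0f} cluster equivalents at 2 GW")
```

At an assumed 2 GW, the commitment equals 10–20 frontier-scale training clusters; every additional gigawatt adds another 5–10.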
For scale: the entire country of Iceland runs on roughly 3 gigawatts of installed electricity capacity. Anthropic is not renting spare capacity — it is carving out a nation-scale slice of Google’s TPU infrastructure for years.
Google’s Tensor Processing Units, co-designed with Broadcom, are purpose-built for the matrix multiplication operations that dominate large model training. Google’s published specifications list the TPU v5e at approximately 197 teraFLOPS of bfloat16 compute per chip (393 TOPS at int8). At multi-gigawatt scale, aggregate throughput reaches levels that were not commercially available to any AI lab as recently as 2023.
Why Anthropic Is Buying Compute From Its Biggest Competitor
Google is simultaneously Anthropic’s largest strategic investor, primary cloud infrastructure partner, and direct competitor in the frontier model market. Gemini 2.5 Pro, released in March 2026, competes in the same benchmark tier as Claude 3.7 Sonnet. The vendor relationship is structurally uncomfortable — and entirely rational.
The logic is straightforward: Google’s TPU infrastructure is the only non-NVIDIA alternative at this scale. AMD’s MI300X chips remain behind in software ecosystem maturity. Nebius’s planned $10 billion AI data center in Finland won’t serve a 2027 training timeline. If Anthropic wants multi-gigawatt non-NVIDIA compute by 2027, Google Cloud is the only realistic supplier.
The arrangement also reflects Google’s interest in keeping Anthropic dependent on its infrastructure. A Claude trained on TPUs is a Claude that remains a Google Cloud customer, regardless of which enterprise clients it ultimately serves.
Broadcom’s Role: The Silent Chip Architect
Broadcom co-designs the networking silicon and ASIC components that make Google’s TPU pods function at scale. The company’s custom ASIC business — which also counts Meta and ByteDance as clients — generated an estimated $12.2 billion in AI-related revenue in fiscal 2025, according to Broadcom’s own earnings guidance.
Broadcom’s inclusion in the deal structure suggests this extends well beyond standard cloud compute procurement. It likely involves custom chip design commitments: Anthropic securing TPU configurations tuned for Claude’s specific training workloads rather than off-the-shelf access. This mirrors a broader industry pattern: frontier labs are no longer just buying compute — they are co-designing the hardware their models train on.
This Complements, Not Replaces, the Amazon Deal
Amazon extended its Anthropic investment to $4 billion total in early 2025 and deepened the AWS Trainium2 chip partnership alongside it. The Google TPU agreement does not displace this — it runs in parallel.
Anthropic now operates across at least three silicon supply chains: NVIDIA GPUs (via AWS and GCP spot capacity), Amazon Trainium2 (dedicated chips on AWS), and Google TPUs (via this new deal). This is deliberate diversification against single-vendor risk. The 2021 NVIDIA supply crunch — when GPU waitlists stretched 12 or more months — burned every major AI lab. Anthropic has moved aggressively on multiple infrastructure fronts throughout 2026, and compute security is clearly a board-level priority.
OpenAI’s Compute Roadmap by Comparison
OpenAI’s infrastructure strategy runs through Microsoft Azure and the Stargate joint venture — a $500 billion commitment announced in January 2025 with SoftBank, Oracle, and MGX. Stargate’s first phase targets 100,000 NVIDIA H100-equivalent GPUs deployed across Texas data centers, with broader buildout extending through 2028.
The contrast is instructive. OpenAI is building owned infrastructure through Stargate — capital-intensive, betting on long-term hardware ownership over vendor flexibility. Anthropic is securing reserved capacity across multiple partners — more capital-efficient, trading ownership for optionality and redundancy. Neither approach is obviously wrong; they reflect different assumptions about where chip economics land by 2028.
OpenAI has also expanded its commercial surface aggressively, including a landmark content deal with Disney in 2026. Anthropic has taken the opposite posture: fewer commercial partnerships, deeper infrastructure commitments.
What Comes Online in 2027 and Why That Timeline Matters Now
The 2027 date is deliberate. Current frontier models — Claude 3.7, GPT-4.5, Gemini 2.5 — were trained on compute clusters that will look modest by 2027 standards. The next generation of models is broadly expected to require 10–100x more training compute than today’s frontier, and models at that scale cannot be trained on infrastructure that exists today.
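The 10–100x claim can be made concrete with a timing sketch. Every number here is an assumption for illustration: a GPT-4-class baseline of roughly 2e25 training FLOPs, a fleet with ~200 exaFLOPS of peak aggregate throughput, and 40% model FLOPs utilization:

```python
# How long would a 10-100x scale-up of a frontier training run take on a
# multi-gigawatt fleet? All three constants are illustrative assumptions.

BASE_RUN_FLOPS = 2e25         # assumed GPT-4-class total training compute
FLEET_FLOPS_PER_SEC = 200e18  # assumed ~200 exaFLOPS peak aggregate throughput
UTILIZATION = 0.4             # assumed model FLOPs utilization (MFU)

for multiplier in (10, 100):
    run_flops = BASE_RUN_FLOPS * multiplier
    seconds = run_flops / (FLEET_FLOPS_PER_SEC * UTILIZATION)
    print(f"{multiplier}x run: ~{seconds / 86_400:.0f} days")
```

Under these assumptions a 10x run fits in about a month, while a 100x run takes the better part of a year — which is why capacity that arrives in 2027 has to be contracted now.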
Anthropic is not buying compute for Claude 3.7. It is buying the runway to train Claude 5 or Claude 6, whatever those models require. The deal is a capital allocation signal: compute scale will continue to determine model capability, and whoever locks in infrastructure now holds structural advantages when that capacity goes live.
MegaOne AI tracks 139+ AI tools across 17 categories. The compute arms race between frontier labs is the single largest structural factor shaping which tools remain competitive through 2028. Infrastructure secured today sets the model quality ceiling in three years.
The Competitive Calculus
The practical implication is direct: Anthropic has secured the hardware runway to remain at the frontier through at least 2028. Without this infrastructure, a well-funded competitor with locked-in compute could outscale Claude’s training runs regardless of research quality or safety investment.
Combined with the Amazon Trainium2 partnership, Anthropic now has redundant access to two of the three serious non-NVIDIA compute platforms at scale. If NVIDIA supply tightens again — a credible scenario given sustained global chip demand and export controls — Anthropic is structurally insulated in a way that smaller labs and newer entrants are not.
Compute is not the only variable in frontier AI. But it is the necessary condition. Anthropic just locked in that condition through 2027 and beyond.