- The four largest U.S. cloud and AI spenders — Google, Amazon, Microsoft, Meta — now project a combined $725 billion in 2026 AI capital spending, per Financial Times reporting summarized May 1, 2026.
- The figure is up from a $610 billion estimate published in February 2026 and represents a 77% jump over 2025’s record $410 billion combined spend.
- The four companies burned through $130 billion in Q1 2026 alone, a pace ahead of the run-rate implied by the original full-year forecasts.
- Google’s cloud revenue grew 63% year-over-year in its latest quarterly report; both Google and Microsoft say they still lack sufficient compute capacity to meet demand.
What Happened
Combined 2026 AI capital spending projections at Google, Amazon, Microsoft, and Meta have reached $725 billion, according to Financial Times reporting summarized on May 1, 2026. The figure is up sharply from a $610 billion estimate published in February and represents a 77% increase over 2025’s combined $410 billion. The four companies spent $130 billion in Q1 2026 alone, already ahead of the run-rate the original full-year forecasts implied.
Why It Matters
The capex acceleration is the clearest signal that the AI infrastructure buildout has not slowed despite repeated public concerns about overspending. Microsoft’s increase is the steepest at +192% (from $65B in 2025 to $190B in 2026), and Alphabet and Meta are both set to more than double their 2025 spend. Even at this scale, Google and Microsoft each say they still cannot meet customer demand for compute, suggesting the buildout will keep accelerating into 2027. The competitive structure of U.S. AI is now driven less by model capability, where multiple labs are within months of each other, and more by who can fund the most compute fastest.
Technical Details
The 2025-to-2026 capex changes broken down (in billions):
- Amazon: $132B → ~$200B (+51.5%)
- Alphabet: $92B → up to $190B (+106.5%)
- Meta: $71B → up to $145B (+104.2%)
- Microsoft: $65B → $190B (+192.3%)
- Total: $360B → ~$725B (+101.4% year-over-year)
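The growth percentages above can be reproduced with a quick sanity check. This is a minimal sketch; where the reporting gives an "up to" range, the upper bound is used as the 2026 figure:

```python
# Year-over-year capex growth per company, all values in billions of USD.
# 2026 values take the upper bound where the reporting says "up to".
capex = {
    "Amazon":    (132, 200),
    "Alphabet":  (92,  190),
    "Meta":      (71,  145),
    "Microsoft": (65,  190),
}

for company, (spend_2025, spend_2026) in capex.items():
    growth = (spend_2026 / spend_2025 - 1) * 100
    print(f"{company:<10} ${spend_2025}B -> ${spend_2026}B ({growth:+.1f}%)")

total_2025 = sum(a for a, _ in capex.values())   # 360
total_2026 = sum(b for _, b in capex.values())   # 725
total_growth = (total_2026 / total_2025 - 1) * 100
print(f"Total      ${total_2025}B -> ${total_2026}B ({total_growth:+.1f}%)")
```

Note the totals: the four per-company 2025 figures sum to $360B, which is the base for the +101.4% total-line growth rate.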
Google’s most recent quarterly report posted cloud revenue growth of 63% year-over-year alongside the raised capex projection. Rising prices for memory chips and other AI-server components are pushing per-rack costs up, compounding the spending increase. Microsoft CEO Satya Nadella offered one hint at how the spending might be recouped: software pricing is shifting from flat per-seat licenses to per-seat plus usage fees, meaning customers should expect higher bills as AI features deepen their integration with existing products.
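As a hypothetical illustration of the pricing shift Nadella describes, consider a flat seat license versus the same license with a metered-consumption component on top. The seat price, usage volume, and per-unit rate below are invented for the example, not actual Microsoft rates:

```python
def monthly_bill(seats: int, seat_fee: float,
                 usage_units: int = 0, usage_rate: float = 0.0) -> float:
    """Seat-license cost plus an optional metered-consumption component."""
    return seats * seat_fee + usage_units * usage_rate

# Flat per-seat licensing: the bill is fixed no matter how much AI is used.
flat = monthly_bill(seats=500, seat_fee=30.0)
# Per-seat plus usage: the same seats, plus metered AI consumption on top.
hybrid = monthly_bill(seats=500, seat_fee=30.0,
                      usage_units=200_000, usage_rate=0.02)
print(flat, hybrid)   # 15000.0 19000.0
```

The design point is the variable term: under the hybrid model, the customer's bill grows with consumption, so deeper AI integration translates directly into higher revenue for the vendor.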
Who’s Affected
The capex flow benefits Nvidia (still the dominant AI accelerator vendor), AMD (Instinct MI series), Broadcom (custom silicon partnerships with Google’s TPU and Meta’s MTIA), and a long tail of memory, networking, and power-infrastructure suppliers. Smaller cloud providers — Oracle, CoreWeave, Crusoe, Lambda — face a narrowing window to compete on price as the hyperscalers’ compute supply expands. Enterprise AI customers face Nadella’s per-seat-plus-usage shift directly, with billing structures changing from predictable seat licenses to consumption-driven fees on every Copilot, Workspace AI, and Bedrock integration.
What’s Next
Q2 earnings reports from all four companies will show whether the $725B target holds or is revised further upward. Watch for early signals on power constraints: multiple data-center buildouts have been delayed by transmission and substation availability, and the capex projection assumes those constraints can be cleared. The shift to per-seat-plus-usage software pricing is the leading indicator of whether the AI-infrastructure spending finds a matching revenue stream on the customer side.