- CoreWeave Inc. has expanded its cloud computing agreement with Meta Platforms Inc. to $21 billion, running through 2032.
- The deal deepens Meta’s reliance on CoreWeave’s GPU infrastructure as the company accelerates development of large AI models.
- The contract is one of the largest disclosed third-party compute agreements in the AI industry to date.
- CoreWeave, which completed its IPO in March 2025, has built its core business around long-term GPU capacity contracts with major AI firms.
What Happened
CoreWeave Inc. has expanded its cloud computing contract with Meta Platforms Inc. to $21 billion, covering GPU infrastructure supply through 2032, Bloomberg reported on April 9, 2026. The agreement enlarges an earlier contract between the two companies, giving Meta secured access to CoreWeave’s GPU cluster capacity as the social media and AI firm deepens its compute commitments. Bloomberg described Meta as a company “trying to catch up in the race to build more powerful artificial intelligence models.”
Why It Matters
Meta has made AI infrastructure a central capital priority, announcing plans to spend $60–65 billion on capital expenditures in 2025, a figure that included significant GPU procurement. Securing capacity under a long-term contracted arrangement with a dedicated GPU-cloud provider insulates Meta from spot-market pricing volatility and capacity constraints that have pressured AI developers since late 2022.
CoreWeave’s business model — purpose-built around Nvidia GPU clusters and long-term enterprise contracts — has attracted major AI customers including Microsoft and OpenAI in addition to Meta. The company’s March 2025 IPO was the first major AI-infrastructure listing of that cycle, raising approximately $1.5 billion at a $40-per-share offer price.
Technical Details
The $21 billion contract spans approximately six years and covers ongoing supply of GPU-based compute capacity. CoreWeave’s infrastructure relies predominantly on Nvidia hardware; Nvidia itself is a strategic investor in CoreWeave. Long-term compute agreements of this structure typically involve committed capacity reservations in exchange for guaranteed revenue for the provider, with pricing terms set in advance rather than exposed to spot rates across future Nvidia GPU generation cycles.
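To illustrate the revenue-visibility point in rough terms, the disclosed headline figures imply an average run rate in the low single-digit billions per year. The sketch below is a back-of-envelope estimate only: the six-year term is an assumption inferred from the reported 2032 end date, and actual contract ramp schedules were not disclosed.

```python
# Back-of-envelope estimate of the average annualized value of the
# expanded CoreWeave-Meta agreement. Only the $21B total is from
# reporting; the term length is an assumption based on the 2032 end date.

TOTAL_VALUE_USD = 21e9   # expanded contract value, per Bloomberg
TERM_YEARS = 6           # assumed approximate term through 2032

avg_annual_revenue = TOTAL_VALUE_USD / TERM_YEARS
print(f"Implied average annual revenue: ${avg_annual_revenue / 1e9:.1f}B")
# → Implied average annual revenue: $3.5B
```

Real contracts of this kind rarely recognize revenue evenly; capacity (and therefore billing) typically ramps as clusters come online, so the flat average overstates early years and understates later ones.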
The specific hardware configurations, Nvidia GPU generations, and workload types covered under the expanded Meta agreement were not disclosed in available reporting. Meta’s primary AI workloads include training and inference for its openly released Llama model family, ranking systems for Reels and Feed, and advertising personalization infrastructure.
Who’s Affected
Meta’s AI research and infrastructure teams gain multi-year capacity certainty for training large foundation models and running inference at scale. CoreWeave gains substantial long-term revenue visibility following its public listing, reducing reliance on shorter-term contracts. Competing hyperscalers — Amazon Web Services, Google Cloud, and Microsoft Azure — face continued pressure as major AI firms increasingly source GPU-intensive workloads from specialized providers rather than general-purpose cloud platforms.
Nvidia benefits indirectly: long-term GPU-cloud contracts of this size presuppose sustained demand for future Nvidia hardware generations, reinforcing the company’s order pipeline visibility.
What’s Next
CoreWeave and Meta have not publicly disclosed deployment timelines, specific GPU generations covered, or milestone provisions under the expanded agreement. Bloomberg’s reporting did not include commentary from either company on workload specifics or contract ramp schedules. Given the 2032 end date, the deal is expected to span at least two to three generations of Nvidia data center GPU hardware beyond the current Blackwell architecture.