- OpenAI secured 10 gigawatts of AI computing capacity in the United States, reaching the milestone years ahead of its original target, according to Bloomberg reporting published April 30, 2026.
- Ten gigawatts is equivalent in scale to roughly 20–100 large modern AI data center clusters, each typically drawing 100–500 megawatts.
- The milestone advances the Stargate Project, OpenAI’s joint infrastructure venture with SoftBank and Oracle announced in January 2025, which carried a stated commitment of up to $500 billion in investment.
- Bloomberg framed the secured capacity in terms of commitments, not exclusively operational infrastructure—a distinction relevant to assessing how quickly the compute can be deployed for training or inference.
What Happened
OpenAI reached a 10-gigawatt US AI computing capacity milestone years ahead of its original target, Bloomberg reported on April 30, 2026. The company had set domestic infrastructure expansion as a central objective following the January 2025 launch of the Stargate Project, a joint venture with SoftBank, Oracle, and additional partners.
Bloomberg described the early achievement as boosting OpenAI’s “ambitious plans for data center expansion,” and characterized the 10-gigawatt figure as a milestone for securing AI capacity in the United States—not merely a projection.
Why It Matters
The timeline acceleration matters because compute access has become the primary bottleneck constraining frontier model development and deployment at scale. The Stargate Project, announced at the White House on January 21, 2025, by OpenAI CEO Sam Altman alongside SoftBank’s Masayoshi Son and Oracle’s Larry Ellison, was structured precisely to address that constraint, with $100 billion committed immediately and up to $500 billion planned over four years.
Reaching a 10-gigawatt capacity target well ahead of schedule suggests that the land acquisition, power procurement, and partnership phases of Stargate’s buildout are proceeding faster than the company’s own internal projections.
Technical Details
Ten gigawatts is a large figure in data center terms: modern hyperscale AI training clusters—such as those used to train large language models—typically draw between 100 megawatts and 500 megawatts of power. A 10-gigawatt aggregate therefore represents capacity in the range of 20 to 100 such clusters operating simultaneously.
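The 20-to-100 cluster range is simple division against the per-cluster power figures cited above; a minimal back-of-envelope check:

```python
# Back-of-envelope check of the cluster range implied by a 10 GW aggregate.
TOTAL_CAPACITY_MW = 10_000  # 10 gigawatts, expressed in megawatts

# Typical per-cluster draw for a hyperscale AI training site (per the text).
CLUSTER_MIN_MW = 100
CLUSTER_MAX_MW = 500

max_clusters = TOTAL_CAPACITY_MW // CLUSTER_MIN_MW  # smallest clusters -> most of them
min_clusters = TOTAL_CAPACITY_MW // CLUSTER_MAX_MW  # largest clusters -> fewest

print(f"10 GW supports roughly {min_clusters}-{max_clusters} clusters")
# -> 10 GW supports roughly 20-100 clusters
```

The range is wide precisely because "cluster" is not a standardized unit: the same aggregate capacity can be carved into many mid-sized sites or a handful of very large ones.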
Bloomberg’s reporting characterizes the milestone in terms of secured capacity, meaning contracted or committed infrastructure rather than exclusively operational compute. That distinction is material: secured capacity encompasses power agreements, land, and construction commitments that may not yet be available for active model workloads. The geographic distribution of that capacity across owned facilities, leased colocation space, or cloud provider agreements was not detailed in Bloomberg’s summary of the report.
Power procurement at this scale also carries significant grid-level implications. A sustained 10-gigawatt draw from a single operator is comparable to the peak electricity demand of New York City, and requires coordinated engagement with utility providers and grid operators across multiple states.
Who’s Affected
Microsoft, which holds a multi-year cloud partnership with OpenAI and has historically supplied a substantial share of its compute through Azure, has direct exposure to this development. A faster-than-expected buildout of OpenAI’s independently secured capacity could shift the balance between OpenAI’s reliance on Microsoft infrastructure versus its own or Stargate-affiliated facilities.
Competing frontier AI developers—including Google DeepMind, Anthropic, and Meta—are each executing large-scale infrastructure programs of their own. OpenAI's ability to beat its own compute procurement timeline raises the bar for infrastructure execution among top-tier AI developers. Hyperscalers and data center operators that have signed or are negotiating capacity agreements with OpenAI stand to benefit directly from the accelerated deployment pace.
What’s Next
OpenAI’s Stargate initiative is designed to continue scaling US data center construction through the late 2020s, with SoftBank’s $100 billion initial tranche already committed as of early 2025. Despite the early arrival of the 10-gigawatt milestone, the company has not yet publicly announced revised capacity targets or a timeline for bringing the full secured capacity online.
Bloomberg’s April 30 report did not detail whether OpenAI plans to announce an updated infrastructure roadmap in connection with this milestone, or whether the acceleration changes the company’s previously stated compute availability timelines for future model releases.