NVIDIA and Emerald AI announced partnerships with six major energy companies — AES, Constellation, Invenergy, NextEra Energy, Nscale Energy & Power, and Vistra — to build AI data centers that function as flexible grid assets rather than fixed power consumers. The initiative, unveiled at CERAWeek 2026 in Houston on March 23, aims to unlock up to 100 gigawatts of capacity across the U.S. power system by designing AI facilities that can ramp compute workloads up or down in response to grid conditions.
The concept inverts the traditional relationship between data centers and power grids. Instead of data centers demanding constant baseload power — which strains grids and delays interconnection approvals — the proposed AI factories would operate as dispatchable loads that reduce consumption during peak demand periods and increase it when excess generation is available. From the grid operator’s perspective, this makes them functionally similar to battery storage: they absorb surplus power when generation is abundant and shed load during shortages, much as a battery charges and discharges.
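The dispatchable-load behavior described above can be sketched as a simple setpoint policy. This is an illustrative model only — the `GridSignal` fields, the price threshold, and the megawatt figures are assumptions for the sake of the example, not details from the announcement:

```python
# Hypothetical sketch: a data center choosing a power setpoint the way a
# battery chooses to charge or discharge. All names/thresholds are illustrative.

from dataclasses import dataclass

@dataclass
class GridSignal:
    """Simplified grid state as a grid operator might publish it (assumed)."""
    price_usd_per_mwh: float   # wholesale energy price
    surplus_mw: float          # positive = excess generation, negative = shortage

def target_load_mw(signal: GridSignal, min_mw: float, max_mw: float) -> float:
    """Pick a consumption setpoint between a critical-load floor and full capacity."""
    if signal.surplus_mw > 0:
        # Excess generation: ramp toward full capacity ("charging" with compute).
        return max_mw
    if signal.price_usd_per_mwh > 200:  # illustrative scarcity threshold
        # Shortage / price spike: curtail to the floor needed for critical workloads.
        return min_mw
    # Normal conditions: run at a nominal midpoint.
    return (min_mw + max_mw) / 2

# Example: a 500 MW facility with a 150 MW critical floor.
print(target_load_mw(GridSignal(35.0, 120.0), min_mw=150, max_mw=500))   # surplus -> ramp up
print(target_load_mw(GridSignal(450.0, -80.0), min_mw=150, max_mw=500))  # shortage -> curtail
```

In a real deployment the signal would come from a market or demand-response interface rather than two hand-set fields, but the decision structure — floor, ceiling, and a grid-driven setpoint between them — is the essence of operating as a flexible asset.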
The technical foundation uses NVIDIA’s Blackwell GPU architecture, which supports dynamic power scaling across inference and training workloads. When grid operators signal high demand, the AI factory can defer non-time-sensitive training jobs and reduce power consumption by 30 to 50 percent within minutes. When renewable generation peaks — typically midday solar or overnight wind — the facility ramps up to full capacity, effectively storing renewable energy as computed AI tokens rather than in batteries.
The 100-gigawatt figure represents the total potential capacity that could be unlocked if grid-flexible AI factories were deployed at scale across all six partners’ service territories. For context, total U.S. data center power consumption is currently estimated at 30 to 40 gigawatts, so 100 gigawatts of new flexible capacity would more than triple the national data center footprint. The partners collectively operate generation and transmission assets across most of the continental United States.
NVIDIA CEO Jensen Huang has positioned this initiative as the intersection of two of the company’s strategic priorities: expanding GPU deployment for AI workloads and addressing the infrastructure constraints that limit data center growth. By solving the grid interconnection bottleneck — the primary reason new data center projects face multi-year delays — NVIDIA creates additional demand for its hardware while giving energy companies a new revenue stream from AI compute hosting.
