The White House on March 20 published its first national policy framework for artificial intelligence, calling on Congress to adopt what officials described as a “light-touch” regulatory approach. The blueprint, spearheaded by AI czar David Sacks and OSTP Director Michael Kratsios, outlines six principles for federal AI legislation and recommends preempting state laws that the administration considers too restrictive. The framework follows an Executive Order signed by President Trump in December 2025 that directed the development of a unified national AI policy.
The six guiding principles cover protecting children from AI-generated content, preventing surges in electricity costs from AI data centers, respecting intellectual property rights, preventing censorship in AI systems, educating Americans on AI use, and maintaining U.S. competitiveness in the global AI race. Notably, the framework recommends against creating a new federal AI regulatory agency, preferring instead that existing agencies adapt their oversight to cover AI within their current jurisdictions.
A central motivation is preventing what the administration calls a “patchwork of 50 different state regulatory regimes.” Four states — Colorado, California, Utah, and Texas — have already enacted their own AI legislation, with provisions ranging from disclosure requirements for AI-generated content to liability frameworks for automated decision-making. The White House framework argues that fragmented state regulation would increase compliance costs for AI companies and risk pushing development overseas.
The framework specifically addresses AI-generated child sexual abuse material, digital replicas created without consent, and infrastructure requirements for AI compute facilities. On energy, it acknowledges that AI data centers are driving significant electricity demand growth and recommends federal standards for siting and permitting new power generation, including nuclear facilities, to support AI infrastructure without destabilizing local grids.
Congressional response has been mixed. Senator Ted Cruz introduced a companion bill aligned with the framework’s principles, while critics argue that a light-touch approach leaves gaps in consumer protection, algorithmic bias mitigation, and workplace surveillance. The framework does not address frontier model safety testing, mandatory incident reporting, or restrictions on AI use in law enforcement — areas where the EU AI Act, which took effect in August 2025, has established binding requirements. Whether Congress acts on the blueprint before the midterm elections remains uncertain, but the framework establishes the administration’s position that AI regulation should prioritize enabling innovation over constraining it.
