- OpenAI published a 12-page policy paper titled “Industrial Policy for the Intelligence Age,” outlining redistribution and labor proposals for the transition to superintelligence.
- Key recommendations include a national Public Wealth Fund distributing AI-driven returns to all citizens, higher capital gains taxes for top earners, and corporate taxes on sustained AI profits.
- The paper proposes employer-union pilot programs for a 32-hour, four-day workweek at full pay, with permanent adoption if productivity is maintained.
- OpenAI acknowledges the risk that AI economic gains could concentrate within a small number of firms, including OpenAI itself.
What Happened
OpenAI published a 12-page policy document titled “Industrial Policy for the Intelligence Age” in April 2026, laying out proposals for how governments should structure the economic transition to superintelligence. The paper, reported by The Decoder, covers redistribution mechanisms, labor protections, AI access infrastructure, and security frameworks. CEO Sam Altman discussed the proposals in a video interview with Axios alongside the paper’s release.
Why It Matters
OpenAI compares the projected disruption to the Progressive Era and the New Deal, both of which restructured U.S. labor and tax policy following industrial transformation. The company argues the current shift will be faster and more compressed, writing that “the choices we make in the near term will shape how its benefits and risks are distributed for decades to come.” The paper explicitly lists job losses, abuse, loss of control, and concentration of power among near-term risks.
Technical Details
OpenAI defines superintelligence as AI systems “capable of outperforming the smartest humans even when AI assists them,” and states that frontier models have already progressed from tasks requiring minutes of human effort to tasks requiring hours. The paper identifies months-long projects as the next capability threshold. On security, the document proposes targeted audit requirements for models that could materially advance “chemical, biological, radiological, nuclear, or cyber risks,” with those requirements explicitly scoped to “a small number of companies and the most advanced models.” It also calls for “Model-containment playbooks” modeled on cybersecurity and public health incident-response protocols, for scenarios in which dangerous model weights are published or models begin self-replicating.
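The paper names “model-containment playbooks” but does not define their contents. Purely as a hypothetical illustration of what an incident-response-style playbook could look like in code, the Python sketch below builds a trigger taxonomy from the two scenarios the paper mentions (published weights, self-replication) and attaches made-up response steps; none of the owners, actions, or deadlines come from the document.

```python
"""Hypothetical model-containment playbook, loosely modeled on
cybersecurity incident-response runbooks. The paper names the concept
but gives no schema; every field, owner, and deadline below is invented."""

from dataclasses import dataclass, field
from enum import Enum


class Trigger(Enum):
    # The two scenarios the paper explicitly names.
    WEIGHTS_LEAKED = "dangerous model weights published"
    SELF_REPLICATION = "model observed self-replicating"


@dataclass
class ResponseStep:
    action: str          # what responders do
    owner: str           # which team is accountable (invented roles)
    deadline_hours: int  # illustrative SLA, not from the paper


@dataclass
class ContainmentPlaybook:
    trigger: Trigger
    steps: list[ResponseStep] = field(default_factory=list)

    def run(self) -> None:
        # A real runbook would page on-call staff and open an incident;
        # this sketch just prints the ordered checklist.
        print(f"INCIDENT: {self.trigger.value}")
        for i, step in enumerate(self.steps, 1):
            print(f"  {i}. [{step.owner}] {step.action} "
                  f"(within {step.deadline_hours}h)")


# Example: a weights-leak playbook with invented steps.
leak_playbook = ContainmentPlaybook(
    trigger=Trigger.WEIGHTS_LEAKED,
    steps=[
        ResponseStep("Revoke API keys and rotate credentials", "security", 1),
        ResponseStep("Notify designated government contact", "policy", 24),
        ResponseStep("Publish mitigation guidance for downstream hosts", "comms", 48),
    ],
)

if __name__ == "__main__":
    leak_playbook.run()
```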
Who’s Affected
Workers in automation-exposed sectors are the paper’s stated primary beneficiaries, with proposals for automatic cash assistance and training vouchers that are triggered by specific labor market warning indicators and that sunset once conditions stabilize (a trigger-and-sunset mechanism sketched below). AI data center operators face new requirements under the proposal: the paper states they should “pay their own way on energy so that households aren’t subsidizing them,” while also generating local jobs and tax revenue. Companies with sustained AI-driven profits would face new corporate taxes; those retaining and retraining employees would qualify for wage-linked incentives.
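The paper does not specify which indicators would trip the trigger or how the sunset would be measured. As a purely illustrative sketch, the Python below encodes one possible reading of the mechanism: a quarterly displacement-rate series, an assumed 3% trigger threshold, and an assumed rule that benefits sunset after two consecutive quarters back below the threshold. All three are invented here, not drawn from the paper.

```python
"""Illustrative trigger-and-sunset logic for automatic worker assistance.
The indicator (quarterly displacement rate), the 3.0% trigger threshold,
and the two-quarter sunset rule are all assumptions, not from the paper."""


def assistance_active(
    displacement_rates: list[float],  # assumed indicator: % of workers displaced per quarter
    trigger_threshold: float = 3.0,   # assumed trigger level
    sunset_quarters: int = 2,         # assumed calm quarters before benefits end
) -> list[bool]:
    """Return, per quarter, whether cash assistance and training
    vouchers would be active under this sketched rule."""
    active = False
    calm_quarters = 0
    status = []
    for rate in displacement_rates:
        if rate >= trigger_threshold:
            active = True       # warning indicator trips: benefits switch on
            calm_quarters = 0
        elif active:
            calm_quarters += 1
            if calm_quarters >= sunset_quarters:
                active = False  # conditions stabilized: benefits sunset
        status.append(active)
    return status


# Example: a displacement spike triggers benefits, which end once the
# rate has stayed below the threshold for two consecutive quarters.
print(assistance_active([1.2, 3.5, 4.1, 2.0, 1.8, 1.5]))
# -> [False, True, True, True, False, False]
```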
OpenAI does not exempt itself from scrutiny. The paper states directly: “There is also a risk that the economic gains concentrate within a small number of firms like OpenAI, even as the technology itself becomes more powerful and widely used.”
What’s Next
OpenAI describes its proposals as “intentionally early and exploratory,” framing them as inputs to a policy conversation rather than finalized demands. The paper focuses primarily on U.S. policy structures but states that “the conversation—and the solutions—must ultimately be global.” No legislative body or regulatory agency has publicly responded to the document as of its release date.