OpenAI’s plan for an AGI world: AI for all and a 4-day work week
The company’s policy paper calls for a new social contract with AI at the center of everything, which could lower costs and yield cures for diseases, but it also warns the shift may upend the public safety net.
OpenAI has published a policy paper titled “Industrial Policy for the Intelligence Age,” which lays out a list of proposals that would radically shift how our economy works. It also calls for mitigations to help protect against some of the “dangerous systems” that may come about with the advent of AGI. Looking back to the Industrial Age transition, driven by disruptive breakthroughs in automation, OpenAI is calling for a new social contract, with AI sitting at the center of everything.
The first proposal paints a picture of an economy built around AI, where AI is a right of every citizen, and the benefits of the technology have been distributed fairly. What does that look like in terms of policies? It sure looks pretty different from the economy we have in place today.
OpenAI thinks access to AI should be a right for all citizens, while at the same time acknowledging that it may very well take away many of their jobs. The proposal calls for workers to get a seat at the table to discuss how the economy transitions to this strange, new, AGI-optimized world.
The paper projects that people will rapidly see the benefits of AI in the form of lower costs for essential goods, cures for disease, and energy breakthroughs. That might mean a four-day workweek (with no pay cut), and a “Public Wealth Fund” that invests in AI companies, distributing gains back to every citizen. But the company also warns of the risk that “the economic gains concentrate within a small number of firms like OpenAI.”
OpenAI acknowledges that this radical shift could upend the public safety net, where workers fund benefit programs like Social Security, Medicare, and Medicaid. To make up for those lost contributions from displaced workers, OpenAI calls for a “rebalancing” of the tax base:
“Policymakers could rebalance the tax base by increasing reliance on capital-based revenues—such as higher taxes on capital gains at the top, corporate income, or targeted measures on sustained AI-driven returns—and by exploring new approaches such as taxes related to automated labor.”
The second proposal sounds the alarm about some of the massive risks that a super-powered AI could present to humanity. OpenAI and competitors like Anthropic have dedicated serious effort to mitigating potential harms from their AI tools before release, but this paper asks what happens if a harmful model is released into the wild:
“As AI capabilities advance, societies may face scenarios where dangerous systems cannot be easily recalled—because model weights have been released, developers are unwilling or unable to limit access to dangerous capabilities, or the systems are autonomous and capable of replicating themselves.”
The proposal calls for international safety standards for large foundation models, but also suggests regulation should apply “only to a small number of companies and the most advanced models.” This could allow smaller startups to avoid regulation that would present “unnecessary barriers.”
Fresh from its chaotic negotiations with the Pentagon following Anthropic’s blacklisting, OpenAI says there should be “clear rules for how governments can and cannot use AI, with especially high standards for reliability, alignment, and safety,” and that AI models should create a paper trail of public records documenting how AI systems make decisions.
