Every technological leap forces society to renegotiate its relationship with power. Intelligence, once a uniquely human advantage, is now being abstracted, scaled, and embedded into machines. As AI evolves from a tool into an autonomous force shaping economies and institutions, the question is no longer what AI can do, but who it will ultimately serve.
A new framework published by OpenAI sets out a vision for managing the transition towards advanced AI systems, often described as superintelligence. Framed as a policy agenda for governments and institutions, it attempts to define how societies should respond to rapid advances in AI governance, economic transformation, and workforce disruption.
At its core, the document is not regulation but an exercise in influence: an attempt to shape how policymakers think about industrial policy for AI, productivity gains, and the redistribution of technological power.

AI industrial policy and the next economic transformation
The central argument is that AI will act as a general-purpose technology comparable to electricity or the combustion engine. It promises higher productivity, lower costs, and accelerated innovation across industries. In policy terms, this aligns with broader discussions around AI-driven productivity growth and economic restructuring.
However, historical precedent suggests that such transitions are rarely evenly distributed. Industrial revolutions typically begin with labour displacement, rising inequality, and capital concentration, before broader gains are realised. AI may intensify this dynamic due to its dependence on compute infrastructure, proprietary models, and large-scale data ecosystems.
Economic power may become increasingly concentrated among a small number of AI developers and infrastructure providers, posing a structural risk of reinforcing existing inequalities rather than reducing them.

The return of industrial policy in the AI economy
A key feature of the document is its explicit endorsement of AI industrial policy as a necessary response to market limitations. Governments, it argues, must play a more active role in shaping outcomes through regulation, investment, and public-private coordination.
This approach reflects a broader global shift in economic thinking. Strategic sectors such as semiconductors, energy, and digital infrastructure are already experiencing increased state intervention. AI now joins that category as a critical technology.
Yet this approach introduces a significant tension. When leading AI firms contribute directly to the design of AI regulation and governance frameworks, the risk of regulatory capture increases. Policies intended to ensure fairness and safety may inadvertently reinforce the dominance of incumbent companies by raising compliance costs and technical barriers for smaller competitors.
In this sense, AI industrial policy may not only guide innovation but also determine market entry, competition, and the long-term economic structure.

Redistribution, taxation, and the question of AI wealth
The document places strong emphasis on economic inclusion in the AI economy, proposing mechanisms such as a public wealth fund, AI taxation, and expanded access to capital markets. These ideas are designed to address one of the central challenges of AI-driven growth: the potential for extreme wealth concentration.
As AI systems increase productivity while reducing reliance on human labour, traditional tax bases such as wages and payroll contributions may weaken. The proposal to tax AI-generated profits or automated labour reflects an attempt to stabilise public finances in an increasingly automated economy.
Equally significant is the idea of a ‘right to AI’, which frames access to AI as a foundational requirement for participation in modern economic life. This positions AI not merely as a tool, but as a form of digital infrastructure essential to economic agency and inclusion.
However, these proposals face major implementation challenges. Measuring AI-generated value is complex, particularly in hybrid systems where human and machine inputs are deeply integrated. Without clear definitions, AI taxation frameworks and redistribution mechanisms could prove difficult to enforce at scale.

Workforce disruption and the future of work
The document recognises that AI will significantly reshape labour markets. Many tasks that currently require hours of human effort are already being automated, with future systems expected to handle more complex, multi-step workflows.
To manage this transition, the proposal highlights reskilling programmes, portable benefits systems, and adaptive social safety nets, alongside experimental ideas such as a reduced working week. These measures aim to mitigate the impact of automation and workforce disruption while maintaining economic stability.
However, the pace of change introduces uncertainty. Historically, labour markets have adjusted over decades, allowing new roles to emerge gradually. AI-driven disruption may occur much faster, compressing adjustment periods and increasing transitional risk.
While the document highlights expansion in sectors such as healthcare, education, and care services, these ‘human-centred jobs’ require substantial investment in training, wages, and institutional support to absorb displaced workers effectively.

AI safety, governance, and systemic control
Beyond economic considerations, the proposal places a strong emphasis on AI safety, auditing frameworks, and risk mitigation systems. The proposed measures include model evaluation standards, incident reporting mechanisms, and international coordination structures.
These safeguards respond to growing concerns around cybersecurity risks, biosecurity threats, and systemic model misalignment. As AI systems become more autonomous and embedded in critical infrastructure, governance mechanisms must evolve accordingly.
However, safety frameworks also introduce questions of control. Determining which systems are classified as high-risk inevitably centralises authority within regulatory and institutional bodies. In practice, this may restrict access to advanced AI systems to organisations capable of meeting stringent compliance requirements.
A structural trade-off between security and openness is emerging in the AI economy, raising questions about how innovation and oversight can coexist without reinforcing centralisation.

Strategic influence and the future of AI governance
The proposal from OpenAI is both policy-oriented and strategically positioned. It acknowledges legitimate risks, including inequality, labour disruption, and systemic instability, while offering a roadmap for managing them through structured intervention.
At the same time, it reflects the perspective of a leading actor in the AI industry. As a result, its recommendations sit at the intersection of public interest and commercial strategy. This dual role raises important questions about who defines AI governance frameworks and how economic power is distributed in the intelligence age.
The broader challenge is not only technological but also institutional: ensuring that AI industrial policy, regulation, ethics and economic design are shaped through transparent and democratic processes, rather than through concentrated private influence.

AI industrial policy will define economic power
AI is no longer solely a technological development; it is a structural force reshaping global economic systems. The emergence of AI industrial policy frameworks reflects an attempt to manage this transformation proactively rather than reactively.
The success or failure of these approaches will determine whether AI-driven growth leads to broader prosperity or deeper concentration of wealth and power. Without effective governance, the risks of inequality and centralisation are significant. With carefully designed policies, there is real potential to expand access, improve productivity, and distribute benefits more widely.
Digital diplomacy may increasingly come to the fore as a mechanism for arbitrating competing approaches to AI policy and governance across jurisdictions. As regulatory frameworks diverge, diplomatic channels could serve to bridge gaps, negotiate standards, and balance strategic interests, positioning digital diplomacy as a practical tool for managing fragmentation in the evolving AI economy.
Ultimately, the intelligence age will not be defined by technology alone, but by the AI governance systems, economic frameworks, and industrial policy decisions that guide its development. The outcome will depend on the extent to which global stakeholders succeed in building a shared and coordinated vision for its future.
