AI industrial policy questions control over power, wealth and governance

Every technological leap forces society to renegotiate its relationship with power. Intelligence, once a uniquely human advantage, is now being abstracted, scaled, and embedded into machines. As AI evolves from a tool into an autonomous force shaping economies and institutions, the question is no longer what AI can do, but who it will ultimately serve.

A new framework published by OpenAI sets out a vision for managing the transition towards advanced AI systems, often described as superintelligence. Framed as a policy agenda for governments and institutions, it attempts to define how societies should respond to rapid advances in AI governance, economic transformation, and workforce disruption.

At its core, the document is not regulation but an exercise in influence: an attempt to shape how policymakers think about industrial policy for AI, productivity gains, and the redistribution of technological power.

OpenAI introduces an AI industrial policy approach exploring how AI is redefining global structures in the intelligence age and shaping future governance.
Image via freepik

AI industrial policy and the next economic transformation

The central argument is that AI will act as a general-purpose technology comparable to electricity or the combustion engine. It promises higher productivity, lower costs, and accelerated innovation across industries. In policy terms, this aligns with broader discussions around AI-driven productivity growth and economic restructuring.

However, historical precedent suggests that such transitions are rarely evenly distributed. Industrial revolutions typically begin with labour displacement, rising inequality, and capital concentration, before broader gains are realised. AI may intensify this dynamic due to its dependence on compute infrastructure, proprietary models, and large-scale data ecosystems.

Economic power may become increasingly concentrated among a small number of AI developers and infrastructure providers, posing a structural risk of reinforcing existing inequalities rather than reducing them.

The return of industrial policy in the AI economy

A key feature of the document is its explicit endorsement of AI industrial policy as a necessary response to market limitations. Governments, it argues, must play a more active role in shaping outcomes through regulation, investment, and public-private coordination.

A broader global shift in economic thinking is reflected in this approach. Strategic sectors such as semiconductors, energy, and digital infrastructure are already experiencing increased state intervention. AI now joins that category as a critical technology.

Yet this approach introduces a significant tension. When leading AI firms contribute directly to the design of AI regulation and governance frameworks, the risk of regulatory capture increases. Policies intended to ensure fairness and safety may inadvertently reinforce the dominance of incumbent companies by raising compliance costs and technical barriers for smaller competitors.

In this sense, AI industrial policy may not only guide innovation but also determine market entry, competition, and the long-term economic structure.

Redistribution, taxation, and the question of AI wealth

The document places strong emphasis on economic inclusion in the AI economy, proposing mechanisms such as a public wealth fund, AI taxation, and expanded access to capital markets. These ideas are designed to address one of the central challenges of AI-driven growth: the potential for extreme wealth concentration.

As AI systems increase productivity while reducing reliance on human labour, traditional tax bases such as wages and payroll contributions may weaken. The proposal to tax AI-generated profits or automated labour reflects an attempt to stabilise public finances in an increasingly automated economy.

Equally significant is the idea of a ‘right to AI’, which frames access to AI as a foundational requirement for participation in modern economic life. This positions AI not merely as a tool, but as a form of digital infrastructure essential to economic agency and inclusion.

However, these proposals face major implementation challenges. Measuring AI-generated value is complex, particularly in hybrid systems where human and machine inputs are deeply integrated. Without clear definitions, AI taxation frameworks and redistribution mechanisms could prove difficult to enforce at scale.

Workforce disruption and the future of work

The document recognises that AI will significantly reshape labour markets. Many tasks that currently require hours of human effort are already being automated, with future systems expected to handle more complex, multi-step workflows.

To manage this transition, the proposal highlights reskilling programmes, portable benefits systems, and adaptive social safety nets, alongside experimental ideas such as a reduced working week. These measures aim to mitigate the impact of automation and workforce disruption while maintaining economic stability.

However, the pace of change introduces uncertainty. Historically, labour markets have adjusted over decades, allowing new roles to emerge gradually. AI-driven disruption may occur much faster, compressing adjustment periods and increasing transitional risk.

While the document highlights expansion in sectors such as healthcare, education, and care services, these ‘human-centred jobs’ require substantial investment in training, wages, and institutional support to absorb displaced workers effectively.

AI safety, governance, and systemic control

Beyond economic considerations, the proposal places a strong emphasis on AI safety, auditing frameworks, and risk mitigation systems. The proposed measures include model evaluation standards, incident reporting mechanisms, and international coordination structures.

These safeguards respond to growing concerns around cybersecurity risks, biosecurity threats, and systemic model misalignment. As AI systems become more autonomous and embedded in critical infrastructure, governance mechanisms must evolve accordingly.

However, safety frameworks also introduce questions of control. Determining which systems are classified as high-risk inevitably centralises authority within regulatory and institutional bodies. In practice, this may restrict access to advanced AI systems to organisations capable of meeting stringent compliance requirements.

A structural trade-off between security and openness is emerging in the AI economy, raising questions about how innovation and oversight can coexist without reinforcing centralisation.

Strategic influence and the future of AI governance

The proposal from OpenAI is both policy-oriented and strategically positioned. It acknowledges legitimate risks, including inequality, labour disruption, and systemic instability, while offering a roadmap for managing them through structured intervention.

At the same time, it reflects the perspective of a leading actor in the AI industry. As a result, its recommendations sit at the intersection of public interest and commercial strategy. This dual role raises important questions about who defines AI governance frameworks and how economic power is distributed in the intelligence age.

The broader challenge is not only technological but also institutional: ensuring that AI industrial policy, regulation, ethics and economic design are shaped through transparent and democratic processes, rather than through concentrated private influence.

AI industrial policy will define economic power

AI is no longer solely a technological development; it is a structural force reshaping global economic systems. The emergence of AI industrial policy frameworks reflects an attempt to manage this transformation proactively rather than reactively.

The success or failure of these approaches will determine whether AI-driven growth leads to broader prosperity or deeper concentration of wealth and power. Without effective governance, the risks of inequality and centralisation are significant. With carefully designed policies, there is real potential to expand access, improve productivity, and distribute benefits more widely.

Digital diplomacy may increasingly come to the fore as a mechanism for arbitrating competing approaches to AI policy and governance across jurisdictions. As regulatory frameworks diverge, diplomatic channels could serve to bridge gaps, negotiate standards, and balance strategic interests, positioning digital diplomacy as a practical tool for managing fragmentation in the evolving AI economy. 

Ultimately, the intelligence age will not be defined by technology alone, but by the AI governance systems, economic frameworks, and industrial policy decisions that guide its development. The outcome will depend on the extent to which global stakeholders succeed in building a shared and coordinated vision for its future.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Hong Kong and China cyberspace authority sign AI and blockchain cooperation deal

The Chief Executive of Hong Kong, John Lee, met the Director of the Cyberspace Administration of China (CAC), Zhuang Rongwen, in Hong Kong to discuss cooperation in innovation and technology.

During the meeting, officials from the Innovation, Technology and Industry Bureau and the CAC signed a Memorandum of Understanding (MOU) on innovation and technology development. The agreement covers areas including AI, cross-border data flow and blockchain.

The MOU aims to support the development of Hong Kong as an international innovation and technology centre. It also focuses on strengthening cybersecurity cooperation and promoting the digital economy through technological development.

Officials said the agreement aligns with China’s national development plans and supports Hong Kong’s integration into broader economic strategies. It also highlights plans to enhance international exchanges and technology-driven economic growth.

The Chief Executive said Hong Kong will continue to expand its role as a technology and investment hub under the ‘one country, two systems’ framework. The CAC said the partnership will support long-term innovation and development goals.

Canada launches hybrid AI weather model

Environment and Climate Change Canada has announced the launch of a hybrid AI weather forecasting model aimed at improving predictions of severe weather. The system combines AI with traditional physics-based forecasting methods.

According to Environment and Climate Change Canada, the model uses AI to analyse large datasets while relying on established models to account for local weather factors such as temperature, wind and precipitation. This combination is expected to improve forecast accuracy.

The department states the system will enhance performance across all forecast timeframes and provide earlier warnings of major weather events. In some cases, forecasts could identify large systems more than 24 hours earlier than current capabilities.

Environment and Climate Change Canada said the model has been extensively tested alongside existing systems and will support better preparedness and public safety as extreme weather events increase in Canada.

US expands AI focus in schools

The US Department of Education has introduced a new supplemental priority focused on advancing AI in education, published in the Federal Register. The measure is intended for use in discretionary grant programmes.

According to the US Department of Education, the priority and related definitions may be applied across current and future funding competitions. The Secretary can adopt all or part of the priority depending on programme needs.

The initiative builds on earlier supplemental priorities covering areas such as literacy, educational choice, meaningful learning and workforce readiness. It forms part of a broader framework guiding federal education funding in the US.

Why does it matter?

The new priority will take effect in May 2026, expanding the role of AI in US education policy and grant allocation. It reflects a global shift in which AI is playing an increasingly prominent role in education.

Microsoft launches MPowerHer programme to upskill women in AI and tech in Singapore

Microsoft has launched the MPowerHer initiative in Singapore to support women in building AI and digital skills through training, mentorship, and career pathways. The programme is delivered with partners including SG Women in Tech, Mums@Work, and Code; Without Barriers.

The initiative was officially launched by Minister of State for the Ministry of Digital Development and Information, Rahayu Mahzam, at Microsoft Public Sector Solutions Day. It aims to support women across different life and career stages, including those returning to work after a career break.

MPowerHer combines foundational AI training with practical, team-based projects and career support. It also provides access to mentorship networks and community programmes designed to help participants move into employment or entrepreneurship.

The programme includes training in AI fundamentals, Microsoft Copilot, AI agents, and low-code and no-code tools. It is open to members of national communities such as SG Women in Tech, Mums@Work, and Code; Without Barriers, as well as other women across Singapore.

Microsoft Singapore Managing Director Wee Luen Chia said the initiative focuses on ensuring women are included in the AI-driven workforce. He added that it supports inclusive skills development and prepares participants for opportunities in the digital economy.

World Internet Conference Asia-Pacific Summit opens in Hong Kong

The 2026 World Internet Conference Asia-Pacific Summit has opened in Hong Kong, hosted by the World Internet Conference, organised by the Hong Kong Special Administrative Region Government, and co-organised by the Innovation, Technology and Industry Bureau.

The Hong Kong government says the two-day summit is expected to bring together around 1,000 participants from more than 50 countries and regions, including government and business leaders, representatives of international organisations, and experts and scholars.

The programme includes remarks by Hong Kong Chief Executive John Lee and World Internet Conference Chairman and Director of the Cyberspace Administration of China Zhuang Rongwen, alongside other invited speakers from government, industry, and international organisations.

A ministerial meeting was convened during the summit, with officials and representatives of international organisations discussing topics including how AI can support high-quality economic growth. The programme also includes a government-enterprise dialogue and a main forum focused on the digital economy, innovation, and technology development.

Six sub-forums are scheduled as part of the summit, covering innovation and application of AI agents, digital finance, AI security and governance, AI for a better life, digital and intelligent health, and digital transformation and dissemination of classical texts.

Belgian DPA releases new AI harms information brochure

The Belgian Data Protection Authority has outlined the impact of AI on privacy in a new publication, highlighting growing concerns around data use and protection. The analysis forms part of its ongoing work on emerging technologies.

According to the Belgian Data Protection Authority, AI systems rely on large volumes of data, which can increase risks related to the processing of personal data and compliance with existing regulations. This raises questions about transparency and accountability.

The authority notes that AI can make it more difficult for individuals to understand how their data is used, particularly in complex or automated decision-making systems. This may challenge established data protection principles.

The authority emphasises the need to adapt regulatory approaches and safeguards to ensure privacy rights remain protected as AI adoption expands in Belgium.

UNESCO to unveil AI in education observatory for Latin America and the Caribbean

UNESCO will launch the Observatory on AI in Education for Latin America and the Caribbean at a high-level event during the 2026 Forum of the Countries of Latin America and the Caribbean on Sustainable Development, organised by the Economic Commission for Latin America and the Caribbean.

The observatory is intended to support states in integrating AI into education systems across the region. UNESCO says the initiative is being developed with regional and international partners, including the Development Bank of Latin America and the Caribbean, the National Centre for AI of Chile, the Regional Center for Studies on the Development of the Information Society of Brazil, and the Economic Commission for Latin America and the Caribbean.

UNESCO describes the observatory as a regional cooperation platform bringing together knowledge production, institutional strengthening, and technical assistance in response to the growing use of AI in teaching, learning, and educational management. Its work covers research and policy, capacity development, innovation, and regional collaboration.

The organisation says the observatory will support comparative analysis, identify opportunities and risks, and assist in the design of regulatory frameworks, national strategies, and pilot initiatives. It also presents the launch as a coordination space for ministries of education, universities, research centres, the technology sector, civil society, and multilateral organisations.

South Korea warns on AI fake news risks

Reporting by The Korea Herald states that South Korean Prime Minister Kim Min-seok has warned of the risks of AI-generated fake news ahead of an upcoming election. Authorities are urging greater vigilance as digital content becomes harder to verify.

According to the report, AI technologies are increasingly capable of producing realistic false information, including manipulated images and videos. This raises concerns about their potential impact on public opinion and trust.

The government has called for precautionary measures to limit the spread of misinformation and protect the integrity of democratic processes. This includes encouraging awareness and responsible use of AI tools.

The warning reflects broader concerns about the influence of AI-driven disinformation during election cycles in South Korea.

Corporate AI governance gaps highlighted in UNESCO report

UNESCO and the Thomson Reuters Foundation have published ‘Responsible AI in practice: 2025 global insights from the AI Company Data Initiative’, presenting findings from what the report describes as the largest global dataset of corporate responsible AI disclosures.

The report analyses 2,972 companies across 11 sectors and multiple regions using publicly available disclosures and company survey responses collected through the AI Company Data Initiative.

The report says AI is being embedded across companies’ products, services, and internal operations faster than governance and disclosure are developing. It states that 43.7% of companies publicly communicate having an AI strategy or guidelines, but only 13% publicly claim adherence to a formal AI governance framework.

Among those that do cite a framework, 53% refer to the EU AI Act, while the report says 43.6% cite ‘other’ frameworks, which it presents as weakening comparability across the wider AI governance ecosystem.

The publication also says many companies describe AI governance in conceptual terms while providing less evidence on operational controls, accountability pathways, monitoring, and remediation. It states that 40% report board- or committee-level oversight on AI, and 12.4% report having a policy to ensure a human oversees AI systems.

At the same time, the publication says 72% of companies do not report conducting any AI-related impact assessment. Of those that do, 11% report environmental impact assessments and 7% report human rights impact assessments. These findings are presented visually in the report’s key statistics on page 10.

Regarding labour impacts, the report says companies do not provide adequate protection for workers as AI reshapes jobs. It states that while 31% of companies claim to have AI training programmes, only 12% offer structured training with comprehensive coverage. It also argues that effective worker protection requires stronger evidence of reskilling, retraining, redeployment, transition support, and access to remedy where AI affects workers’ rights.

Why does it matter?

The report further states that ethical issues, including human rights and environmental impacts, are being sidelined in AI governance and risk management, while transparency regarding training data, third-party systems, and user rights remains uneven. It presents the AI Company Data Initiative as a tool to help companies assess their governance practices against UNESCO’s Recommendation on the Ethics of AI and to give investors more comparable information on how AI is governed in practice.
