AI industrial policy questions control over power, wealth and governance

Every technological leap forces society to renegotiate its relationship with power. Intelligence, once a uniquely human advantage, is now being abstracted, scaled, and embedded into machines. As AI evolves from a tool into an autonomous force shaping economies and institutions, the question is no longer what AI can do, but who it will ultimately serve.

A new framework published by OpenAI sets out a vision for managing the transition towards advanced AI systems, often described as superintelligence. Framed as a policy agenda for governments and institutions, it attempts to define how societies should respond to rapid advances in AI governance, economic transformation, and workforce disruption.

At its core, the document is not regulation but influence: an attempt to shape how policymakers think about industrial policy for AI, productivity gains, and the redistribution of technological power.

OpenAI introduces an AI industrial policy approach exploring how AI is redefining global structures in the intelligence age and shaping future governance.
Image via freepik

AI industrial policy and the next economic transformation

The central argument is that AI will act as a general-purpose technology comparable to electricity or the combustion engine. It promises higher productivity, lower costs, and accelerated innovation across industries. In policy terms, this aligns with broader discussions around AI-driven productivity growth and economic restructuring.

However, historical precedent suggests that such transitions are rarely evenly distributed. Industrial revolutions typically begin with labour displacement, rising inequality, and capital concentration, before broader gains are realised. AI may intensify this dynamic due to its dependence on compute infrastructure, proprietary models, and large-scale data ecosystems.

Economic power may become increasingly concentrated among a small number of AI developers and infrastructure providers, posing a structural risk of reinforcing existing inequalities rather than reducing them.


The return of industrial policy in the AI economy

A key feature of the document is its explicit endorsement of AI industrial policy as a necessary response to market limitations. Governments, it argues, must play a more active role in shaping outcomes through regulation, investment, and public-private coordination.

A broader global shift in economic thinking is reflected in this approach. Strategic sectors such as semiconductors, energy, and digital infrastructure are already experiencing increased state intervention. AI now joins that category as a critical technology.

Yet this approach introduces a significant tension. When leading AI firms contribute directly to the design of AI regulation and governance frameworks, the risk of regulatory capture increases. Policies intended to ensure fairness and safety may inadvertently reinforce the dominance of incumbent companies by raising compliance costs and technical barriers for smaller competitors.

In this sense, AI industrial policy may not only guide innovation but also determine market entry, competition, and the long-term economic structure.


Redistribution, taxation, and the question of AI wealth

The document places strong emphasis on economic inclusion in the AI economy, proposing mechanisms such as a public wealth fund, AI taxation, and expanded access to capital markets. These ideas are designed to address one of the central challenges of AI-driven growth: the potential for extreme wealth concentration.

As AI systems increase productivity while reducing reliance on human labour, traditional tax bases such as wages and payroll contributions may weaken. The proposal to tax AI-generated profits or automated labour reflects an attempt to stabilise public finances in an increasingly automated economy.

Equally significant is the idea of a ‘right to AI’, which frames access to AI as a foundational requirement for participation in modern economic life. This positions AI not merely as a tool, but as a form of digital infrastructure essential to economic agency and inclusion.

However, these proposals face major implementation challenges. Measuring AI-generated value is complex, particularly in hybrid systems where human and machine inputs are deeply integrated. Without clear definitions, AI taxation frameworks and redistribution mechanisms could prove difficult to enforce at scale.


Workforce disruption and the future of work

The document recognises that AI will significantly reshape labour markets. Many tasks that currently require hours of human effort are already being automated, with future systems expected to handle more complex, multi-step workflows.

To manage this transition, the proposal highlights reskilling programmes, portable benefits systems, and adaptive social safety nets, alongside experimental ideas such as a reduced working week. These measures aim to mitigate the impact of automation and workforce disruption while maintaining economic stability.

However, the pace of change introduces uncertainty. Historically, labour markets have adjusted over decades, allowing new roles to emerge gradually. AI-driven disruption may occur much faster, compressing adjustment periods and increasing transitional risk.

While the document highlights expansion in sectors such as healthcare, education, and care services, these ‘human-centred jobs’ require substantial investment in training, wages, and institutional support to absorb displaced workers effectively.


AI safety, governance, and systemic control

Beyond economic considerations, the proposal places a strong emphasis on AI safety, auditing frameworks, and risk mitigation systems. The proposed measures include model evaluation standards, incident reporting mechanisms, and international coordination structures.

These safeguards respond to growing concerns around cybersecurity risks, biosecurity threats, and systemic model misalignment. As AI systems become more autonomous and embedded in critical infrastructure, governance mechanisms must evolve accordingly.

However, safety frameworks also introduce questions of control. Determining which systems are classified as high-risk inevitably centralises authority within regulatory and institutional bodies. In practice, this may restrict access to advanced AI systems to organisations capable of meeting stringent compliance requirements.

A structural trade-off between security and openness is emerging in the AI economy, raising questions about how innovation and oversight can coexist without reinforcing centralisation.


Strategic influence and the future of AI governance

The proposal from OpenAI is both policy-oriented and strategically positioned. It acknowledges legitimate risks, including inequality, labour disruption, and systemic instability, while offering a roadmap for managing them through structured intervention.

At the same time, it reflects the perspective of a leading actor in the AI industry. As a result, its recommendations exist at the intersection of public interest and commercial strategy. The dual role raises important questions about who defines AI governance frameworks and how economic power is distributed in the intelligence age.

The broader challenge is not only technological but also institutional: ensuring that AI industrial policy, regulation, ethics and economic design are shaped through transparent and democratic processes, rather than through concentrated private influence.


AI industrial policy will define economic power

AI is no longer solely a technological development; it is a structural force reshaping global economic systems. The emergence of AI industrial policy frameworks reflects an attempt to manage this transformation proactively rather than reactively.

The success or failure of these approaches will determine whether AI-driven growth leads to broader prosperity or deeper concentration of wealth and power. Without effective governance, the risks of inequality and centralisation are significant. With carefully designed policies, there is real potential to expand access, improve productivity, and distribute benefits more widely.

Digital diplomacy may increasingly come to the fore as a mechanism for arbitrating competing approaches to AI policy and governance across jurisdictions. As regulatory frameworks diverge, diplomatic channels could serve to bridge gaps, negotiate standards, and balance strategic interests, positioning digital diplomacy as a practical tool for managing fragmentation in the evolving AI economy. 

Ultimately, the intelligence age will not be defined by technology alone, but by the AI governance systems, economic frameworks, and industrial policy decisions that guide its development. The outcome will depend on the extent to which global stakeholders succeed in building a shared and coordinated vision for its future.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

First Dutch credit institution enters crypto market under MiCA framework

ClearBank Europe has become the first Dutch credit institution to secure Crypto Asset Service Provider status under the EU’s Markets in Crypto-Assets Regulation. The Dutch Authority for the Financial Markets confirmed the approval after the bank completed its MiCAR notification on 9 April 2026.

The new status allows ClearBank to deliver regulated digital asset services across the European Union. The institution will use Circle’s Mint platform to provide clients with access to EURC, a euro-referenced stablecoin, and USDC, a US dollar-referenced stablecoin.

Under MiCA rules, EU credit institutions can access a notification pathway distinct from the standard licensing regime for crypto service providers.

ClearBank becomes the first Dutch bank to complete the process, enabling seamless movement between fiat and digital assets within a regulated banking environment.

ClearBank operates under European Central Bank authorisation and is supervised by De Nederlandsche Bank. Its digital asset strategy, developed since gaining its banking licence in the Netherlands, is now advancing to its first large-scale implementation through MiCA compliance.

The development signals how the EU regulation is evolving to integrate traditional banking institutions into the crypto ecosystem, creating a more unified and compliant framework for digital asset adoption across financial markets.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

EU delegation in China calls for sustainable e-commerce and safety standards

Members of the European Parliament (MEPs) completed a visit to Beijing and Shanghai to address pressing e-commerce challenges affecting the European single market.

The delegation studied local business models and market supervision frameworks, engaging with Chinese regulators, e-commerce platforms, and EU company representatives.

The discussions highlighted the surge of parcels from China, which now account for 91% of small shipments to Europe, and the resulting pressures on fair competition.

MEPs stressed that regulatory compliance must be consistent across all operators, ensuring consumer protection is not compromised by disparities in market practices or enforcement gaps.

The delegation urged representatives of e-commerce platforms to implement preventive measures, reinforcing accountability in areas such as product safety, customs compliance, and the removal of unsafe goods from the market.

MEPs underscored that these standards are essential to maintaining a sustainable and secure e-commerce environment for European citizens.

The visit, the first in eight years, demonstrated the EU’s commitment to safeguarding consumer rights, strengthening international cooperation, and ensuring digital commerce evolves in a manner that is fair, transparent, and safe for all citizens.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

California challenges federal approach with new AI rules

The government of California is advancing a more interventionist approach to AI governance, signalling a divergence from federal deregulatory preferences.

An executive order signed by Gavin Newsom mandates the development of comprehensive AI policies within four months, prioritising public safety and protecting fundamental rights.

The proposed framework requires companies seeking state contracts to demonstrate safeguards against harmful outputs, including the prevention of child exploitation material and violent content.

It also calls for measures addressing algorithmic bias and unlawful discrimination, alongside increased transparency through mechanisms such as watermarking AI-generated media.

Federal guidance has discouraged state-level intervention, framing such efforts as obstacles to technological leadership.

The evolving policy landscape reflects growing concern over the societal impact of AI systems, including risks to employment, content integrity and civil liberties.

The California initiative may therefore serve as a testing ground for future regulatory models, shaping broader debates on balancing innovation with accountability in digital governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU boosts fact-checking with €5 million disinformation resilience plan

The European Commission has committed €5 million to strengthen independent fact-checking networks, reinforcing efforts to counter disinformation across Europe. The initiative seeks to expand verification capacity in all EU languages while improving coordination among key stakeholders.

The programme introduces a comprehensive support system for fact-checkers, covering legal assistance, cybersecurity protection and psychological support.

It also establishes a centralised European repository of verified information, designed to enhance transparency and improve access to reliable content across the EU.

Led by the European Fact-Checking Standards Network, the project builds on existing frameworks such as the European Digital Media Observatory. The initiative forms part of the EU’s broader strategy to strengthen information integrity and safeguard democratic processes.

By reinforcing independent verification ecosystems, the programme reflects a policy-driven effort to address disinformation threats while supporting a more resilient and trustworthy digital environment across Europe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EPO strengthens industry collaboration on European patent innovation

The European Patent Office (EPO) has reinforced cooperation with industry stakeholders through discussions with the German Association of Industry IP Experts, focusing on strengthening the European patent system and supporting innovation.

The meeting brought together representatives from major industrial actors to align priorities and explore future collaboration.

Discussions between the EPO and the stakeholders centred on enhancing technology transfer, empowering startups and fostering economic growth across Europe.

Participants emphasised the importance of inclusive engagement among patent system users instead of fragmented approaches, ensuring that innovation strategies reflect both industrial and societal needs.

The Unitary Patent system was highlighted as gaining traction, particularly among smaller entities such as SMEs, individual inventors and research organisations. Such a trend reflects broader efforts to improve accessibility and scalability within the European innovation ecosystem.

AI also featured prominently, with both sides recognising its growing role in improving efficiency and quality in patent processes.

A human-centric approach remains essential, ensuring that AI deployment supports responsible innovation while maintaining high standards in patent examination and services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Italy fines major bank over data protection failures

The Italian Data Protection Authority has imposed a €31.8 million fine on Intesa Sanpaolo following serious shortcomings in its handling of personal data.

The case stems from unauthorised access by an employee to thousands of customer accounts, raising concerns about internal oversight and data protection safeguards.

Investigations revealed that monitoring systems failed to detect repeated unjustified access to sensitive financial information over an extended period. The breach also involved high-risk individuals, highlighting weaknesses in risk-based controls instead of robust, targeted protection measures.

Authorities in Italy identified violations of core data protection principles, including integrity, confidentiality and accountability. Additional concerns arose from delays in notifying both regulators and affected individuals, limiting the ability to respond effectively to the incident.

The case of Intesa Sanpaolo underscores increasing regulatory scrutiny of data governance practices in the financial sector. Strengthening internal controls and ensuring timely breach reporting remain essential for maintaining trust and compliance in data-driven banking environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK fines Apple subsidiary over sanctions breach

The UK has fined Apple Inc. subsidiary Apple Distribution International £390,000 for breaching sanctions linked to Russia. The penalty relates to payments routed through a UK bank to a Russian streaming platform.

The payments, totalling more than £635,000, were made to Okko from a UK-based account. The subsidiary, responsible for Apple product sales across Europe and the Middle East, instructed the transfers despite the platform’s ownership links to sanctioned entities.

The Office of Financial Sanctions Implementation found the funds were linked to Sberbank and a company later sanctioned after the 2022 Ukraine invasion. Payments were made shortly after those restrictions came into force.

Regulators said the firm had voluntarily disclosed the transactions and had not been aware of the sanctions breach at the time. Apple stated it follows all applicable laws and has strengthened its compliance procedures following the incident.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Campaign highlights risks of profit-driven digital platforms

A global campaign led by the Norwegian Consumer Council (NCC) has drawn attention to the decline in quality across digital platforms, a phenomenon widely referred to as ‘enshittification’, in which services deteriorate over time as companies prioritise monetisation over user experience.

The initiative has gained momentum through a viral video and coordinated advocacy efforts across multiple regions.

Enshittification is a term coined by journalist Cory Doctorow that describes a pattern in which platforms initially serve users well, then shift towards extracting value from both users and business partners.

In practice, it often results in increased advertising, paywalls, and reduced functionality, with platforms leveraging user dependence to introduce less favourable conditions.

More than 70 advocacy groups across the EU, the US and Norway have urged policymakers to take stronger action, arguing that declining competition and market concentration allow platforms to degrade services without losing users.

Network effects and high switching costs further limit consumer choice, making it difficult to move to alternative platforms even when dissatisfaction grows.

Existing frameworks, such as the Digital Markets Act and the Digital Services Act, aim to address some of these issues by promoting interoperability, transparency, and accountability.

However, experts argue that enforcement remains too slow and insufficient to deter harmful practices, suggesting that stronger regulatory intervention will be necessary to restore balance between consumers, platforms, and competition in the digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Major service disruption affects DeepSeek chatbot in China

DeepSeek’s chatbot suffered a seven-hour-plus disruption in China, prompting multiple updates as the company worked to restore full functionality. Users began reporting issues on Sunday evening, with further performance problems recorded on Monday morning.

Initial alerts appeared on monitoring platforms and DeepSeek’s own status page, which acknowledged an incident shortly after it began. Although early fixes were deployed within hours, additional disruptions followed, requiring further corrective updates before the system stabilised.

The company has not disclosed the cause of the outage, and no official comment has been provided. The extended downtime stands out for a platform known for consistent performance, which has maintained a near 99 percent uptime record since the launch of its R1 model in 2025.

The disruption comes at a time of heightened anticipation for DeepSeek’s next major update, as speculation builds across China’s competitive AI sector, where firms continue to race to release new models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!