The European Commission has set out proposed measures that would require Google to share key search data with third-party providers under the Digital Markets Act (DMA), in a fresh step to open Europe’s online search market to greater competition. The move comes in the form of preliminary findings sent to Google, rather than a final decision, and is now subject to public consultation.
Under the proposal, Google would have to provide access to anonymised search data, including ranking, query, click, and view data, on fair, reasonable, and non-discriminatory terms. According to the Commission, the aim is to allow third-party search engines to improve their services and better challenge Google Search’s market position.
The proposed measures go beyond a general obligation to share data. They set out detailed conditions covering who should qualify for access, what data must be made available, how frequently it should be shared, how personal data should be anonymised, how pricing should be set, and how access procedures should work in practice. The consultation also explicitly includes companies offering online search services that incorporate AI chatbot functionality, showing that the case could shape competition not only in traditional search but also in AI-assisted search services.
The consultation is tied to Article 6(11) of the DMA, which requires gatekeepers operating online search engines to share certain anonymised data with other search engines under FRAND terms. The Commission says it opened proceedings against Alphabet in January 2026 to specify how Google should comply with that obligation in practice.
Brussels is now asking stakeholders to comment on whether the proposed framework would work in practice, whether the anonymised data would remain useful enough to help rivals improve their services, whether additional measures are needed, and whether the implementation timeline is realistic. The consultation opened on 16 April 2026 and will run until 1 May 2026, with the Commission expecting to adopt a final decision by 27 July 2026.
The case is significant because it shows the DMA moving from broad obligations to detailed implementation. Rather than debating only whether large platforms should share data, the Commission is now trying to define what meaningful access would look like in operational terms, including what must be handed over, on what conditions, and with what privacy safeguards. In that sense, the Google case may become an important test of how far the DMA can reshape competition in digital search markets and related AI services.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
South Korea’s Fair Trade Commission has closed its public consultation on proposed amendments to the Enforcement Decree of the Act on Consumer Protection in Electronic Commerce, including new rules on domestic agents for certain overseas businesses.
According to the Fair Trade Commission, an overseas business without an address or place of business in South Korea would be required to designate a domestic agent if it meets at least one of three criteria: sales in the previous year exceeding ₩1 trillion, an average of more than 1 million domestic consumers accessing the cyber mall each month in the three months immediately preceding the end of the previous year, or a Fair Trade Commission request to submit reports and materials.
The proposed rules would also require overseas businesses, once a domestic agent is designated, to submit the agent’s name, address, telephone number, and email address to the Fair Trade Commission in writing without delay and to disclose that information on the first screen of the cyber mall they operate.
The Fair Trade Commission also says the amendments would establish business suspension standards for violations of the domestic agent obligation. According to the proposal, a first violation would lead to a three-month business suspension, a second violation to six months, and a third violation to 12 months.
In the same legislative notice, the Fair Trade Commission also proposed reducing the scope of identity information that platforms facilitating person-to-person transactions must verify for individual sellers, from five items to two: telephone number and email address.
The European Commission has adopted revised rules governing technology transfer agreements (Technology Transfer Block Exemption Regulation and Guidelines on the application of Article 101 of the Treaty to technology transfer agreements), updating a framework originally introduced in 2014.
These changes aim to reflect developments in the digital economy, particularly the growing role of data and standardised technologies in enabling interoperability across markets.
Technology transfer agreements allow firms to license intellectual property such as patents, software and design rights, supporting the dissemination of innovation. While such agreements are often considered pro-competitive, they may also create risks if they restrict market access or distort competition.
The revised framework clarifies how these agreements are assessed under Article 101 of the Treaty on the Functioning of the European Union.
The updated rules introduce specific guidance on data licensing and licensing negotiation groups, addressing new market practices.
They also refine conditions under which agreements benefit from exemptions, including simplified criteria for early-stage technologies and clearer safeguards for technology pools linked to industry standards.
Overall, the revision by the EU seeks to improve legal certainty for businesses while ensuring that licensing practices support innovation, competition and the broader functioning of the single market. The new framework will apply from May 2026.
The European Commission has issued preliminary findings proposing measures for Google under the Digital Markets Act, focusing on access to search engine data.
These measures aim to ensure that third-party services can compete more effectively in digital markets characterised by high concentration.
The proposal would require Google to provide access to key categories of search data, including ranking, query, click and view data, on fair, reasonable and non-discriminatory terms.
Eligible recipients may include competing search engines as well as AI-based services with search functionalities.
Additional provisions address how data should be shared, including frequency, technical access conditions and pricing parameters. The framework also includes safeguards for anonymisation, reflecting the need to balance competition objectives with data protection requirements.
The Commission has opened a public consultation to gather stakeholder input on the proposed measures.
The case illustrates ongoing efforts to operationalise the Digital Markets Act by addressing structural imbalances in access to data within the platform economy.
The European Commission has issued a supplementary charge sheet to Meta, formally known as a Supplementary Statement of Objections, outlining concerns over potential restrictions on third-party AI assistants’ access to WhatsApp.
The move forms part of an ongoing investigation into a possible abuse of a dominant market position under EU competition rules.
The Commission’s preliminary assessment suggests that recent policy changes, including the introduction of access fees, may have effects equivalent to an earlier exclusion of competing AI services.
The changes raise concerns about barriers to entry and reduced competition in the emerging market for AI assistants.
Every technological leap forces society to renegotiate its relationship with power. Intelligence, once a uniquely human advantage, is now being abstracted, scaled, and embedded into machines. As AI evolves from a tool into an autonomous force shaping economies and institutions, the question is no longer what AI can do, but who it will ultimately serve.
A new framework published by OpenAI sets out a vision for managing the transition towards advanced AI systems, often described as superintelligence. Framed as a policy agenda for governments and institutions, it attempts to define how societies should respond to rapid advances in AI across governance, economic transformation, and workforce disruption.
At its core, the document is not regulation but an exercise in influence: an attempt to shape how policymakers think about industrial policy for AI, productivity gains, and the redistribution of technological power.
AI industrial policy and the next economic transformation
The central argument is that AI will act as a general-purpose technology comparable to electricity or the combustion engine. It promises higher productivity, lower costs, and accelerated innovation across industries. In policy terms, this aligns with broader discussions around AI-driven productivity growth and economic restructuring.
However, historical precedent suggests that such transitions are rarely evenly distributed. Industrial revolutions typically begin with labour displacement, rising inequality, and capital concentration, before broader gains are realised. AI may intensify this dynamic due to its dependence on compute infrastructure, proprietary models, and large-scale data ecosystems.
Economic power may become increasingly concentrated among a small number of AI developers and infrastructure providers, posing a structural risk of reinforcing existing inequalities rather than reducing them.
The return of industrial policy in the AI economy
A key feature of the document is its explicit endorsement of AI industrial policy as a necessary response to market limitations. Governments, it argues, must play a more active role in shaping outcomes through regulation, investment, and public-private coordination.
This approach reflects a broader global shift in economic thinking. Strategic sectors such as semiconductors, energy, and digital infrastructure are already experiencing increased state intervention. AI now joins that category as a critical technology.
Yet this approach introduces a significant tension. When leading AI firms contribute directly to the design of AI regulation and governance frameworks, the risk of regulatory capture increases. Policies intended to ensure fairness and safety may inadvertently reinforce the dominance of incumbent companies by raising compliance costs and technical barriers for smaller competitors.
In this sense, AI industrial policy may not only guide innovation but also determine market entry, competition, and the long-term economic structure.
Redistribution, taxation, and the question of AI wealth
The document places strong emphasis on economic inclusion in the AI economy, proposing mechanisms such as a public wealth fund, AI taxation, and expanded access to capital markets. These ideas are designed to address one of the central challenges of AI-driven growth: the potential for extreme wealth concentration.
As AI systems increase productivity while reducing reliance on human labour, traditional tax bases such as wages and payroll contributions may weaken. The proposal to tax AI-generated profits or automated labour reflects an attempt to stabilise public finances in an increasingly automated economy.
Equally significant is the idea of a ‘right to AI’, which frames access to AI as a foundational requirement for participation in modern economic life. This positions AI not merely as a tool, but as a form of digital infrastructure essential to economic agency and inclusion.
However, these proposals face major implementation challenges. Measuring AI-generated value is complex, particularly in hybrid systems where human and machine inputs are deeply integrated. Without clear definitions, AI taxation frameworks and redistribution mechanisms could prove difficult to enforce at scale.
Workforce disruption and the future of work
The document recognises that AI will significantly reshape labour markets. Many tasks that currently require hours of human effort are already being automated, with future systems expected to handle more complex, multi-step workflows.
To manage this transition, the proposal highlights reskilling programmes, portable benefits systems, and adaptive social safety nets, alongside experimental ideas such as a reduced working week. These measures aim to mitigate the impact of automation and workforce disruption while maintaining economic stability.
However, the pace of change introduces uncertainty. Historically, labour markets have adjusted over decades, allowing new roles to emerge gradually. AI-driven disruption may occur much faster, compressing adjustment periods and increasing transitional risk.
While the document highlights expansion in sectors such as healthcare, education, and care services, these ‘human-centred jobs’ require substantial investment in training, wages, and institutional support to absorb displaced workers effectively.
AI safety, governance, and systemic control
Beyond economic considerations, the proposal places a strong emphasis on AI safety, auditing frameworks, and risk mitigation systems. The proposed measures include model evaluation standards, incident reporting mechanisms, and international coordination structures.
These safeguards respond to growing concerns around cybersecurity risks, biosecurity threats, and systemic model misalignment. As AI systems become more autonomous and embedded in critical infrastructure, governance mechanisms must evolve accordingly.
However, safety frameworks also introduce questions of control. Determining which systems are classified as high-risk inevitably centralises authority within regulatory and institutional bodies. In practice, this may restrict access to advanced AI systems to organisations capable of meeting stringent compliance requirements.
A structural trade-off between security and openness is emerging in the AI economy, raising questions about how innovation and oversight can coexist without reinforcing centralisation.
Strategic influence and the future of AI governance
The proposal from OpenAI is both policy-oriented and strategically positioned. It acknowledges legitimate risks (inequality, labour disruption, and systemic instability) while offering a roadmap for managing them through structured intervention.
At the same time, it reflects the perspective of a leading actor in the AI industry. As a result, its recommendations exist at the intersection of public interest and commercial strategy. The dual role raises important questions about who defines AI governance frameworks and how economic power is distributed in the intelligence age.
The broader challenge is not only technological but also institutional: ensuring that AI industrial policy, regulation, ethics and economic design are shaped through transparent and democratic processes, rather than through concentrated private influence.
AI industrial policy will define economic power
AI is no longer solely a technological development; it is a structural force reshaping global economic systems. The emergence of AI industrial policy frameworks reflects an attempt to manage this transformation proactively rather than reactively.
The success or failure of these approaches will determine whether AI-driven growth leads to broader prosperity or deeper concentration of wealth and power. Without effective governance, the risks of inequality and centralisation are significant. With carefully designed policies, there is real potential to expand access, improve productivity, and distribute benefits more widely.
Digital diplomacy may increasingly come to the fore as a mechanism for arbitrating competing approaches to AI policy and governance across jurisdictions. As regulatory frameworks diverge, diplomatic channels could serve to bridge gaps, negotiate standards, and balance strategic interests, positioning digital diplomacy as a practical tool for managing fragmentation in the evolving AI economy.
Ultimately, the intelligence age will not be defined by technology alone, but by the AI governance systems, economic frameworks, and industrial policy decisions that guide its development. The outcome will depend on the extent to which global stakeholders succeed in building a shared and coordinated vision for its future.
ClearBank Europe has become the first Dutch credit institution to secure Crypto Asset Service Provider status under the EU’s Markets in Crypto-Assets Regulation. The Dutch Authority for the Financial Markets confirmed the approval after the bank completed its MiCAR notification on 9 April 2026.
The new status allows ClearBank to deliver regulated digital asset services across the European Union. The institution will use Circle’s Mint platform to provide clients with access to EURC, a euro-referenced stablecoin, and USDC, a US dollar-referenced stablecoin.
Under MiCA rules, EU credit institutions can use a notification pathway distinct from the standard licensing regime for crypto service providers.
ClearBank becomes the first Dutch bank to complete the process, enabling seamless movement between fiat and digital assets within a regulated banking environment.
ClearBank operates under European Central Bank authorisation and is supervised by De Nederlandsche Bank. Its digital asset strategy, developed since gaining its banking licence in the Netherlands, is now advancing to its first large-scale implementation through MiCA compliance.
The development signals how the EU regulation is evolving to integrate traditional banking institutions into the crypto ecosystem, creating a more unified and compliant framework for digital asset adoption across financial markets.
Members of the European Parliament (MEPs) completed a visit to Beijing and Shanghai to address pressing e-commerce challenges affecting the European single market.
The delegation studied local business models and market supervision frameworks, engaging with Chinese regulators, e-commerce platforms, and EU company representatives.
The discussions highlighted the surge of parcels from China, which now account for 91% of small shipments to Europe, and the resulting pressures on fair competition.
MEPs stressed that regulatory compliance must be consistent across all operators, ensuring consumer protection is not compromised by disparities in market practices or enforcement gaps.
The delegation urged representatives of e-commerce platforms to implement preventive measures, reinforcing accountability in areas such as product safety, customs compliance, and the removal of unsafe goods from the market.
MEPs underscored that these standards are essential to maintaining a sustainable and secure e-commerce environment for European citizens.
The visit, the first in eight years, demonstrated the EU’s commitment to safeguarding consumer rights, strengthening international cooperation, and ensuring digital commerce evolves in a manner that is fair, transparent, and safe for all citizens.
The government of California is advancing a more interventionist approach to AI governance, signalling a divergence from federal deregulatory preferences.
An executive order signed by Governor Gavin Newsom mandates the development of comprehensive AI policies within four months, prioritising public safety and the protection of fundamental rights.
The proposed framework requires companies seeking state contracts to demonstrate safeguards against harmful outputs, including the prevention of child exploitation material and violent content.
It also calls for measures addressing algorithmic bias and unlawful discrimination, alongside increased transparency through mechanisms such as watermarking AI-generated media.
The evolving policy landscape reflects growing concern over the societal impact of AI systems, including risks to employment, content integrity and civil liberties.
The initiative may therefore serve as a testing ground for future regulatory models, shaping broader debates on balancing innovation with accountability in digital governance.
The European Commission has committed €5 million to strengthen independent fact-checking networks, reinforcing efforts to counter disinformation across Europe. The initiative seeks to expand verification capacity in all EU languages while improving coordination among key stakeholders.
It also establishes a centralised European repository of verified information, designed to enhance transparency and improve access to reliable content across the EU.
Led by the European Fact-Checking Standards Network, the project builds on existing frameworks such as the European Digital Media Observatory. The initiative forms part of the EU’s broader strategy to strengthen information integrity and safeguard democratic processes.
By reinforcing independent verification ecosystems, the programme reflects a policy-driven effort to address disinformation threats while supporting a more resilient and trustworthy digital environment across Europe.