UK backs stronger cooperation on AI and frontier technologies at OSCE

The UK has highlighted both the opportunities and risks linked to frontier technologies during a high-level conference organised by the Organization for Security and Co-operation in Europe in Geneva.

Speaking at the event, UK Tech Envoy Sarah Spencer said AI could support early warning and early action in humanitarian crises, but could also amplify misinformation and instability if misused or deployed without adequate safeguards.

Spencer said responsible governance of frontier technologies requires partnerships between states, institutions, industry and civil society, arguing that such cooperation matters more than individual products in building inclusive, responsible and sustainable digital ecosystems.

She also highlighted the OSCE’s role in fostering dialogue on frontier technologies, reducing misunderstandings and supporting anticipatory approaches to governance. The UK said it was ready to support efforts to ensure technological progress contributes to a safer, more secure and more humane future.

The conference, titled ‘Anticipating technologies – for a safe and humane future’, brought together participants to discuss how emerging technologies are affecting security, stability and international cooperation.

Why does it matter?

The statement places AI and other frontier technologies within a security and diplomacy context, rather than treating them only as innovation issues. It highlights growing concern that emerging technologies can support humanitarian and development goals, but also create risks for misinformation, conflict escalation and strategic stability if governance and cooperation lag behind deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU Commission reviews Android DMA rules on interoperability

The European Commission is consulting third parties on proposed measures requiring Alphabet to ensure effective interoperability between Google Android and AI services under the Digital Markets Act.

The draft measures focus on AI services’ access to key Android capabilities, including wake-word activation, contextual data, integration with applications, and access to hardware and software resources needed for reliable and responsive services.

The Commission opened proceedings in January 2026 to specify how Alphabet should comply with DMA interoperability obligations for features relevant to AI services. Its proposed measures cover invocation, context, actions on apps and the operating system, access to resources, and general requirements such as free access, documented frameworks and APIs, technical assistance and reporting.

Stakeholders were asked to comment on the effectiveness, completeness, feasibility and implementation timelines of the proposed measures, particularly from the perspective of AI service providers and Android device manufacturers.

Input from Alphabet and interested third parties may lead to adjustments before the Commission adopts a final decision making the measures legally binding. The Commission is expected to adopt that decision by 27 July 2026.

Why does it matter?

The case shows how the DMA is being applied to the emerging competitive landscape for AI assistants and mobile operating systems. If third-party AI services need access to Android features such as wake words, contextual data, app actions and on-device resources to compete effectively, interoperability rules could shape which AI tools reach users and how much control gatekeepers retain over mobile AI ecosystems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta tests compromise plan in EU WhatsApp AI access dispute

European Commission officials are examining whether Meta’s policy on access to WhatsApp for AI providers may raise competition concerns in the European Economic Area.

Changes to the WhatsApp Business Solution terms are at the centre of the investigation, particularly as they affect how third-party AI providers can offer services on the platform. The Commission is assessing whether the policy could limit access for competing AI services and reduce choice for users and businesses.

Messaging platforms are becoming important distribution channels for AI-powered services. As chatbots and AI assistants become more integrated into everyday communication tools, access to widely used platforms such as WhatsApp may become an important factor in competition between providers.

Commission officials have said they will examine whether Meta’s conduct complies with EU competition rules. Opening an investigation does not mean that the Commission has reached a conclusion or found an infringement.

The broader EU scrutiny of large digital platforms is increasingly focused on how access to infrastructure, services and user ecosystems is managed as AI tools become more widely adopted.

Why does it matter?

Competition questions are expanding into AI distribution channels. Messaging platforms can shape which AI services reach users and businesses at scale, making access rules an important part of the emerging AI market. The outcome could influence how major platforms design access policies for third-party AI providers while regulators seek to preserve competition and user choice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!

European Ombudsman criticises Commission over X risk report access

The European Ombudsman has criticised the European Commission’s handling of a request for public access to a risk assessment report submitted by social media platform X under the Digital Services Act.

The case concerned a journalist’s request to access X’s 2023 risk assessment report, which large online platforms must provide under the DSA. The Commission refused to assess the report for possible disclosure, arguing that access could undermine X’s commercial interests, an ongoing DSA investigation and an independent audit.

The Ombudsman found it unreasonable for the Commission to rely on a general presumption of non-disclosure rather than individually assessing the report, noting that the circumstances in which the EU courts have allowed such presumptions differ from the rules applying to DSA risk assessment reports.

Although X has since made the report public with redactions, the Ombudsman recommended that the Commission conduct its own assessment and aim to give the journalist the widest access possible, including potentially to parts redacted by the company. If access is refused for any sections, the Commission must explain why.

The finding of maladministration highlights the importance of transparency in the oversight of very large online platforms under the DSA, particularly where documents are relevant to public scrutiny of platform risk management and regulatory enforcement.

Why does it matter?

The case tests how far transparency obligations around very large online platforms can be limited by broad claims of commercial sensitivity or ongoing investigations. DSA risk assessment reports are central to understanding how platforms identify and manage systemic risks, so access decisions affect public oversight of EU digital regulation as much as the rights of individual requesters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Stablecoin rules updated in revised US Senate proposal

The US Senate Banking Committee has released a revised 309-page draft of the Digital Asset Market Clarity Act ahead of a markup vote, reopening debate on stablecoin rewards, DeFi protections and the regulation of digital asset markets.

The draft, proposed by Committee Chair Tim Scott, seeks to provide a federal framework for digital asset market structure, including provisions on securities innovation, illicit finance, decentralised finance, banking innovation, regulatory sandboxes, software developers and customer protection.

A key section addresses stablecoin rewards. The draft would prohibit digital asset service providers from paying interest or yield on payment stablecoin balances in a way that is economically or functionally equivalent to bank deposit interest. However, it would permit certain activity-based or transaction-based rewards and incentives, provided they are not equivalent to interest or yield on a bank deposit.

The text also includes provisions affecting decentralised finance. It covers rules on non-decentralised finance trading protocols, illicit finance obligations for distributed ledger messaging systems, temporary holds for certain digital asset transactions, voluntary cybersecurity programmes for DeFi trading protocols and studies on digital asset mixers, foreign intermediaries and financial stability risks.

Software developer protections are also included in the draft. The bill contains a dedicated title on protecting software developers and software innovation, including provisions on non-fungible tokens, self-custody and blockchain regulatory certainty.

The draft still faces further negotiation before any final vote. Lawmakers continue to debate the balance between consumer protection, illicit finance controls, innovation, stablecoin incentives and the treatment of decentralised finance. At the same time, the legislation needs to be aligned with other Senate work on digital asset market structure.

Why does it matter?

The revised Clarity Act is another step towards a federal framework for digital asset markets in the United States, with rules that could shape how crypto firms, stablecoin platforms and decentralised finance projects operate. Its provisions on stablecoin rewards, DeFi and software developers show lawmakers trying to balance innovation, consumer protection and oversight in one of the world’s most important financial markets.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!  

China launches AI ethics review pilot programme

China has launched a national pilot programme for AI ethics review and services, as authorities move to strengthen oversight of growing risks linked to advanced AI systems.

The initiative, announced by China’s Ministry of Industry and Information Technology, aims to establish practical mechanisms for AI ethics governance as concerns over algorithmic discrimination, emotional dependence, and broader societal risks continue to grow.

Authorities said the initiative will initially operate in provincial-level regions hosting national AI industrial innovation pilot zones. It will focus on refining provincial AI ethics review rules, supporting the creation of ethics committees, and developing specialised ethics review and service centres. Chinese regulators also plan to transform the ethics review process into technical standards while improving mechanisms for reporting AI-related ethical concerns.

The Ministry of Industry and Information Technology has also called for the creation of a national AI ethics risk monitoring service network, along with training materials, ethics education courses, and early-warning systems to support pilot cities.

By embedding ethics reviews into AI development and deployment processes, China appears to be building a more institutionalised framework for managing the societal and technological risks associated with increasingly powerful AI systems.

Why does it matter?

China’s latest move signals a shift from broad AI governance principles towards operational enforcement mechanisms embedded directly into regional innovation ecosystems. The programme could influence how other governments approach AI oversight, particularly as global concerns grow over algorithmic bias, psychological manipulation, and accountability in frontier AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia launches national AI platform ‘AI.gov.au’

The Department of Industry, Science and Resources has announced the launch of AI.gov.au through the National Artificial Intelligence Centre. The platform is designed to help organisations adopt AI safely and responsibly in line with the National AI Plan.

AI.gov.au provides a central source of guidance, tools and resources to support businesses and not-for-profits. It aims to help users identify AI opportunities, plan implementation, manage risks and build internal capability.

The platform’s development was informed by research and engagement with industry and government, highlighting the need for clear starting points, practical advice and support for AI organisational change. It also supports the AI Safety Institute’s work by improving access to safety guidance.

Initial features focus on small and medium-sized enterprises and include training, case studies and adoption tools, with further updates planned. The initiative reflects efforts to strengthen AI uptake and governance in Australia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China outlines AI and energy integration plan

The Chinese National Energy Administration, alongside the National Development and Reform Commission, the Ministry of Industry and Information Technology and the National Data Administration, has released an action plan to promote mutual development between AI and the energy sector.

The plan focuses on ensuring a reliable energy supply for computing infrastructure while using AI to support energy transformation. It outlines 29 key tasks covering green energy use, efficient coordination between power and computing, and expanding high-value AI applications in energy.

Authorities aim to significantly improve the clean energy supply for AI computing and strengthen AI adoption in energy by 2030. The strategy also seeks to enhance data use and drive innovation in AI models within the energy sector.

The agencies will establish coordination mechanisms across government and industry to support implementation and innovation. The initiative reflects a broader push to integrate AI and energy systems more deeply in China.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EDPS frames safe AI as Europe’s next big idea

The European Data Protection Supervisor has framed safe and ethical AI as a defining European idea, linking AI governance to Europe’s history of collective initiatives rooted in shared values and fundamental rights.

In a Europe Day blog post, EDPS official Leonardo Cervera Navas argues that Europe’s approach to AI builds on earlier initiatives such as data protection, the creation of the EDPS and the adoption of the General Data Protection Regulation. He presents the AI Act as a continuation of that tradition, aimed at ensuring that AI systems operate safely, ethically and in line with fundamental rights.

The post highlights the AI Act’s risk-based model, which prohibits AI systems posing unacceptable risks to health, safety and fundamental rights, while setting binding requirements for high-risk systems in areas such as safety, transparency, human oversight and rights protection. It also notes that most AI systems are considered minimal risk and fall outside the regulation’s scope.

Cervera Navas also points to the EDPS’s practical role under the AI Act as the AI supervisor for the EU institutions, agencies and bodies. The post refers to the EDPS network of AI Act correspondents, the mapping of AI systems used in the EU public administration, and a regulatory sandbox pilot for testing AI systems in compliance with the AI Act.

The post also emphasises international cooperation, including EDPS engagement through the AI Board, cooperation with market surveillance authorities, UNESCO’s Global Network of AI Supervising Authorities, Council of Europe work on AI risk and impact assessment, and AI discussions within the OECD.

Why does it matter?

The post suggests the EDPS wants Europe’s AI governance model to be understood not only as regulation, but as part of a broader rights-based digital policy tradition. Its significance lies in linking the AI Act with practical supervision, institutional coordination and international cooperation, suggesting that the next test for Europe’s AI approach will be implementation rather than rule-making alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europe pushes for unified capital markets and stronger banking union

European Central Bank Vice-President Luis de Guindos has called for deeper financial integration in Europe, arguing that more unified capital markets and a stronger banking union are needed to support growth, resilience and competitiveness.

Speaking at a joint European Commission and ECB conference on financial integration, de Guindos said Europe has made progress in integrating financial markets, including through stronger cross-border capital flows and reduced differences in some asset prices across member states. However, he warned that fragmentation persists in areas such as corporate lending, equity markets and foreign direct investment.

Cross-border corporate lending within the euro area accounts for only 14% of total corporate lending, while equity market integration has shown signs of decline since 2022, and foreign direct investment within the euro area has fallen to a historical low, according to the speech.

De Guindos said policy priorities should include a genuine single rulebook for capital markets, a more European supervisory framework and support for a tokenised financial ecosystem through the distributed ledger technology pilot regime. He argued that these measures would reduce legal uncertainty, support digital financial innovation and help remove barriers to cross-border capital market integration.

He also called for further banking union reforms, including treating the banking union as a single European jurisdiction, finalising a European deposit insurance scheme and allowing capital and liquidity to move more freely within cross-border banking groups. Such steps, he said, would help reduce fragmentation and strengthen the euro area’s financial system.

The speech also pointed to the need for a more coherent regulatory framework, including simpler and more harmonised rules for banks, closer attention to regulatory gaps between banks and non-bank financial institutions, and the removal of legal and tax barriers that still limit cross-border activity.

Why does it matter?

Financial fragmentation affects how efficiently Europe can channel savings into investment, support innovation and absorb economic shocks. Deeper capital markets make it easier for businesses to access funding across borders, while a stronger banking union could reduce national barriers and improve resilience during periods of stress.

The speech also connects financial integration with digital finance and strategic autonomy. By linking capital market reform with tokenisation, EU-level supervision and banking union, the ECB is framing financial integration as part of Europe’s broader effort to remain competitive in a more fragmented global economy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!