UK backs stronger cooperation on AI and frontier technologies at OSCE

The UK has highlighted both the opportunities and risks linked to frontier technologies during a high-level conference organised by the Organization for Security and Co-operation in Europe in Geneva.

Speaking at the event, UK Tech Envoy Sarah Spencer said AI could support early warning and early action in humanitarian crises, but could also amplify misinformation and instability if misused or deployed without adequate safeguards.

Spencer said responsible governance of frontier technologies requires partnerships between states, institutions, industry and civil society, arguing that such cooperation matters more than individual products in building inclusive, responsible and sustainable digital ecosystems.

She also highlighted the OSCE’s role in fostering dialogue on frontier technologies, reducing misunderstandings and supporting anticipatory approaches to governance. The UK said it was ready to support efforts to ensure technological progress contributes to a safer, more secure and more humane future.

The conference, titled ‘Anticipating technologies – for a safe and humane future’, brought together participants to discuss how emerging technologies are affecting security, stability and international cooperation.

Why does it matter?

The statement places AI and other frontier technologies within a security and diplomacy context, rather than treating them only as innovation issues. It highlights growing concern that emerging technologies can support humanitarian and development goals, but also create risks for misinformation, conflict escalation and strategic stability if governance and cooperation lag behind deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Singapore cooperation with Japan targets AI in patent examination

The Intellectual Property Office of Singapore and the Japan Patent Office have announced a new cooperation initiative on the use of AI in patent substantive examination, as patent offices adapt to rapid technological change.

The initiative was announced after a bilateral meeting in Singapore between IPOS Chief Executive Tan Kong Hwee and JPO Commissioner Yasuyuki Kasai. It builds on a Memorandum of Cooperation signed in Tokyo last November.

Under the initiative, IPOS and JPO will launch a bilateral patent examiner exchange programme and hold regular technical exchanges on the use of AI in patent examination. The two offices said the cooperation is intended to strengthen capabilities, share best practices and develop robust processes for high-quality and trusted patent examination.

Tan said AI is reshaping innovation and work processes, making it necessary for IP offices to evolve while maintaining examination quality and trust. Kasai said the cooperation would bring together the experience and expertise of both offices and support innovation in both countries.

The cooperation will also cover patent search and examination quality management, benchmarking of examination practices, IT infrastructure development, operational management and IP policy exchanges. Both offices will also coordinate initiatives to support enterprises, including SMEs, and strengthen trade and IP flows between Singapore and Japan.

IPOS and JPO said the partnership reflects their shared commitment to addressing emerging challenges in the intellectual property landscape and keeping innovation ecosystems trusted, efficient and future-ready.

Why does it matter?

Patent offices face increasing pressure to handle more complex applications while maintaining examination quality, consistency and trust. Cooperation between Singapore and Japan on AI-assisted examination shows how intellectual property authorities are beginning to adapt their own administrative systems to AI, not only to regulate AI-related inventions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New IRIS report links AI narratives to civic action

A report by the International Resource for Impact and Storytelling (IRIS) examines how organisations worldwide are adapting to AI and algorithm-driven platforms. It focuses on how technology and storytelling are being used to support democracy and counter harmful narratives.

The study draws on insights from 10 organisations, identifying key approaches such as co-opting technology, countering surveillance and disinformation, and innovating in storytelling. These strategies aim to reshape narratives and challenge authoritarian pressures.

Examples include campaigns addressing digital surveillance, projects using journalism to amplify marginalised voices, and creative approaches to civic engagement. The report also highlights the role of artists and storytellers in influencing how AI is understood.

The findings highlight the growing importance of narrative and culture in the digital landscape, as organisations experiment with new forms of communication and resistance. The research reflects global efforts to align AI with democratic values.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

European Ombudsman criticises Commission over X risk report access

The European Ombudsman has criticised the European Commission’s handling of a request for public access to a risk assessment report submitted by social media platform X under the Digital Services Act.

The case concerned a journalist’s request to access X’s 2023 risk assessment report, which large online platforms must provide under the DSA. The Commission refused to assess the report for possible disclosure, arguing that access could undermine X’s commercial interests, an ongoing DSA investigation and an independent audit.

The Ombudsman found it unreasonable for the Commission to rely on a general presumption of non-disclosure rather than individually assessing the report. She said the circumstances in which the EU courts have allowed such presumptions differ from the rules applying to DSA risk assessment reports.

Although X has since made the report public with redactions, the Ombudsman recommended that the Commission conduct its own assessment and aim to give the journalist the widest access possible, including potentially to parts redacted by the company. If access is refused for any sections, the Commission must explain why.

The finding of maladministration highlights the importance of transparency in the oversight of very large online platforms under the DSA, particularly where documents are relevant to public scrutiny of platform risk management and regulatory enforcement.

Why does it matter?

The case tests how far transparency obligations around very large online platforms can be limited by broad claims of commercial sensitivity or ongoing investigations. DSA risk assessment reports are central to understanding how platforms identify and manage systemic risks, so access decisions affect public oversight of EU digital regulation as much as the rights of individual requesters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EDPS frames safe AI as Europe’s next big idea

The European Data Protection Supervisor has framed safe and ethical AI as a defining European idea, linking AI governance to Europe’s history of collective initiatives rooted in shared values and fundamental rights.

In a Europe Day blog post, EDPS official Leonardo Cervera Navas argues that Europe’s approach to AI builds on earlier initiatives such as data protection, the creation of the EDPS and the adoption of the General Data Protection Regulation. He presents the AI Act as a continuation of that tradition, aimed at ensuring that AI systems operate safely, ethically and in line with fundamental rights.

The post highlights the AI Act’s risk-based model, which prohibits AI systems posing unacceptable risks to health, safety and fundamental rights, while setting binding requirements for high-risk systems in areas such as safety, transparency, human oversight and rights protection. It also notes that most AI systems are considered minimal risk and fall outside the regulation’s scope.

Cervera Navas also points to the EDPS’s practical role under the AI Act as the AI supervisor for the EU institutions, agencies and bodies. The post refers to the EDPS network of AI Act correspondents, the mapping of AI systems used in the EU public administration, and a regulatory sandbox pilot for testing AI systems in compliance with the AI Act.

The post also emphasises international cooperation, including EDPS engagement through the AI Board, cooperation with market surveillance authorities, UNESCO’s Global Network of AI Supervising Authorities, Council of Europe work on AI risk and impact assessment, and AI discussions within the OECD.

Why does it matter?

The EDPS appears to want Europe’s AI governance model to be understood not only as regulation, but as part of a broader rights-based digital policy tradition. The post’s significance lies in linking the AI Act with practical supervision, institutional coordination and international cooperation, suggesting that the next test for Europe’s AI approach will be implementation rather than rule-making alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI cyber capabilities raise risk of correlated financial system failures, IMF warns

AI is rapidly reshaping the global financial system’s cyber risk landscape, according to analysis from the International Monetary Fund. While AI improves defence, it also helps attackers find and exploit vulnerabilities more quickly, increasing the risk of systemic disruption.

Financial infrastructure is highly interconnected, relying on shared software, cloud services, and payment networks. IMF analysis suggests that AI-enabled cyberattacks could trigger correlated institutional failures, leading to funding stress, solvency risks, and disruptions to payments and market operations.

Recent developments in advanced AI models demonstrate how quickly offensive capabilities are evolving, with systems now able to identify weaknesses across widely used platforms.

At the same time, defensive AI tools are being deployed to detect threats and strengthen resilience, but their effectiveness depends on governance, oversight, and integration within financial institutions.

Authorities are now being urged to treat cyber risk as a core financial stability issue rather than a purely technical challenge. Stronger supervision, resilience standards, and international coordination are viewed as essential, particularly as cyber threats increasingly cross borders and exploit shared global infrastructure.

Why does it matter? 

Cyber risks related to AI are a macroeconomic threat that can affect liquidity, confidence, and core financial intermediation. At the same time, the same technology is essential for defence, meaning resilience now depends on how quickly supervision, governance, and international coordination can keep pace with rapidly scaling offensive capabilities.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OSCE chairpersonship opens Geneva conference on AI and quantum risks

The Swiss OSCE Chairpersonship has opened a high-level conference in Geneva on how emerging technologies are affecting security, international governance, and co-operation across the OSCE region.

The two-day event, titled ‘Anticipating technologies – for a safe and humane future’, brings together about 200 participants from OSCE participating States and Partners for Co-operation, alongside representatives from international organisations, academia, the private sector, and civil society.

The conference focuses on the security implications of rapid technological change, including AI and quantum technologies. The discussions are intended to examine how anticipation, dialogue, and cooperation can help reduce misunderstandings, build trust, and strengthen security in a fast-changing technological environment.

Opening the conference, OSCE Chairman-in-Office and Swiss Federal Councillor Ignazio Cassis said: ‘Technology will not wait for us. Geopolitics will not slow down. If we want to remain relevant, we must anticipate – not react. This is the responsibility we share across the OSCE region. The OSCE still offers something rare: a space where adversaries can speak, where differences can be managed, and where common ground can still be built.’

The organisation’s Secretary General, Feridun H. Sinirlioğlu, also stressed the need for dialogue as emerging technologies evolve faster than governance frameworks. He said: ‘Today, emerging technologies are evolving faster than the frameworks that govern them. This creates a widening gap between what technology can do and how we manage it. This gap must be addressed through dialogue – our most important stabilizing force in uncertain times – and this is where the OSCE has a vital role to play.’

The programme includes discussions on anticipating technological change and its geopolitical impact, water and energy security in the digital age, and the role of AI in early warning and conflict prevention. The conference also highlights Geneva’s role as a meeting point for science and diplomacy, including through institutions such as CERN, the Geneva Science and Diplomacy Anticipator, and the Open Quantum Institute.

The event forms part of the Chairpersonship’s priority to connect scientific and technological anticipation with policy action. It is the second of four international conferences Switzerland is hosting under its chairpersonship, ahead of the OSCE Ministerial Council meeting in Lugano in December.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

WTO members form duty-free pact after e-commerce moratorium lapses

The United States and 18 other World Trade Organization members have moved to create a separate pact pledging not to impose customs duties on electronic transmissions, after members failed to renew the wider WTO e-commerce moratorium.

According to the document, the group includes the United States, Japan, South Korea, Singapore, Australia, Norway, and Argentina. The 19 members said they would not impose duties on electronic transmissions for an unspecified period and expressed disappointment that the multilateral moratorium had lapsed.

Members of the group said they remained committed to providing businesses and consumers with a measure of predictability and certainty in the absence of the WTO-wide moratorium. They also invited other WTO members to join the arrangement.

First agreed in 1998 and renewed repeatedly since then, the moratorium prevents WTO members from imposing customs duties on cross-border electronic transmissions, including streaming, downloads and software transfers.

At MC13 in March 2024, WTO members adopted the most recent ministerial decision on the issue, extending the practice of not imposing customs duties on electronic transmissions until the 14th Ministerial Conference or 31 March 2026, whichever came earlier.

Its lapse followed failed efforts to extend the arrangement, with Brazil maintaining its opposition to a four-year renewal.

US Ambassador to the WTO Joseph Barloon told delegates that Washington was launching the plurilateral agreement to give businesses and consumers greater certainty and predictability. He said the move did not close the door to multilateral engagement, but that the United States would not wait for all WTO members to agree before responding to stakeholder needs.

Business groups warned that the failure to preserve a WTO-wide moratorium would raise concerns about global digital trade. Sabina Ciofu of techUK said the 19-member pact offered a way forward but that the absence of a multilateral agreement was worrying. At the same time, International Chamber of Commerce Secretary General John Denton described the pact as a temporary fix rather than a substitute for a WTO-wide deal.

Why does it matter?

The lapse of the WTO e-commerce moratorium weakens one of the longest-standing global understandings underpinning digital trade. A 19-member pact may preserve duty-free treatment among participating economies, but it also points to a more fragmented environment in which rules for electronic transmissions could increasingly depend on partial arrangements rather than WTO-wide consensus.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US and China reportedly weigh AI risk talks ahead of leaders’ summit

The United States and China are considering launching official discussions on AI risk management, The Wall Street Journal reported, citing people familiar with the matter.

According to the report, the White House and the Chinese government are also considering whether to place AI on the agenda for a planned summit in Beijing between US President Donald Trump and Chinese President Xi Jinping. If agreed, the talks would mark the first AI-specific engagement between the two governments under the current US administration.

The possible dialogue could focus on risks linked to advanced AI systems, including unexpected model behaviour, autonomous military applications and misuse by non-state actors using powerful open-source tools, people familiar with the discussions told the newspaper. The report said Washington is waiting for Beijing to designate a counterpart for the talks.

The WSJ reported that US Treasury Secretary Scott Bessent is leading the US side, while Chinese Vice Finance Minister Liao Min has been involved in discussions on setting up such a channel. The newspaper added that the two presidents would ultimately decide whether AI appears on the formal summit agenda.

Liu Pengyu, spokesperson for the Chinese Embassy in Washington, was cited as saying that China is ready to engage in communication on AI risk mitigation. Analysts have raised the possibility that any future dialogue could support crisis-management tools, including an AI hotline between senior leaders.

The report places the latest deliberations in the context of earlier US-China engagement on AI. In 2023, then US President Joe Biden and Xi launched a formal AI dialogue, and both sides later said humans, not AI, would retain authority over nuclear-launch decisions. The WSJ said the earlier process produced limited results, but AI has remained a high-level focus in bilateral relations.

Non-governmental discussions have also reportedly continued in parallel, including exchanges involving former Microsoft research executive Craig Mundie and Chinese counterparts from Tsinghua University and major AI companies. Participants cited by the newspaper said those exchanges have focused on frontier-model safety, technical guardrails and broader questions of strategic stability.

Why does it matter?

A formal AI risk channel between Washington and Beijing would signal that both governments see advanced AI as a strategic stability issue, not only an economic or technological race. Even brief talks could matter if they create channels for crisis communication about military AI, frontier-model failures, or misuse by non-state actors. However, because the discussions are still only reported as under consideration, the significance lies in the possibility of a risk-management mechanism, not in any confirmed diplomatic breakthrough.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EESC backs revised Cybersecurity Act with warnings on ENISA and supply chains

The European Economic and Social Committee has backed the EU’s proposed revision of the Cybersecurity Act, supporting reforms to ENISA, the cybersecurity certification framework and ICT supply-chain security, while warning that the next phase of the EU cyber policy must remain workable in practice.

In its opinion, the committee argues that cybersecurity and ICT supply-chain security should not be treated as narrow technical questions. Instead, it presents them as matters of economic security and geopolitical resilience, closely linked to the EU’s competitiveness, legal certainty and broader resilience.

The opinion welcomes the European Commission’s attempt to update the Cybersecurity Act and align related rules under NIS 2, particularly where the package aims to simplify compliance and reduce overlapping obligations. At the same time, the committee says that a stronger ENISA will require stronger backing. If the agency is expected to take on more responsibilities, those tasks should come with adequate resources, specialist staff and a mandatory workforce plan.

The committee also supports a single-entry point for incident reporting. It says parallel reporting requirements under NIS 2, DORA and sector-specific rules should be streamlined so that one comprehensive report can serve all relevant regulatory regimes.

On ICT supply-chain security, the opinion supports a structured EU framework for identifying key assets and addressing high-risk suppliers. However, it warns that restrictions and phase-outs should be transparent, proportionate and supported by realistic transition plans that account for replacement timelines, service continuity, costs, labour-market effects and the risk of shifting compliance burdens onto smaller firms outside the regulation’s scope.

The committee also calls for the cyber debate to address democratic resilience. A proposed amendment would give ENISA a clearer role in supporting election security, democratic resilience and public awareness of cyber threats, disinformation and safe digital behaviour.

Why does it matter?

The opinion supports a more centralised and strategic EU cybersecurity framework, but also highlights the practical risks of expanding cyber regulation faster than institutions and companies can implement it. The debate around ENISA’s mandate, incident reporting and ICT supply-chain restrictions will shape how far the EU can strengthen cyber resilience without creating fragmented obligations or disproportionate burdens for smaller firms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!