US and China reportedly weigh AI risk talks ahead of leaders’ summit

The United States and China are considering launching official discussions on AI risk management, The Wall Street Journal reported, citing people familiar with the matter.

According to the report, the White House and the Chinese government are also considering whether to place AI on the agenda for a planned summit in Beijing between US President Donald Trump and Chinese President Xi Jinping. If agreed, the talks would mark the first AI-specific engagement between the two governments under the current US administration.

The possible dialogue could focus on risks linked to advanced AI systems, including unexpected model behaviour, autonomous military applications and misuse by non-state actors using powerful open-source tools, people familiar with the discussions told the newspaper. The report said Washington is waiting for Beijing to designate a counterpart for the talks.

The WSJ reported that US Treasury Secretary Scott Bessent is leading the US side, while Chinese Vice Finance Minister Liao Min has been involved in discussions on setting up such a channel. The newspaper added that the two presidents would ultimately decide whether AI appears on the formal summit agenda.

Liu Pengyu, spokesperson for the Chinese Embassy in Washington, was cited as saying that China is ready to engage in communication on AI risk mitigation. Analysts have raised the possibility that any future dialogue could support crisis-management tools, including an AI hotline between senior leaders.

The report places the latest deliberations in the context of earlier US-China engagement on AI. In 2023, then US President Joe Biden and Xi launched a formal AI dialogue, and both sides later said humans, not AI, would retain authority over nuclear-launch decisions. The WSJ said the earlier process produced limited results, but AI has remained a high-level focus in bilateral relations.

Non-governmental discussions have also reportedly continued in parallel, including exchanges involving former Microsoft research executive Craig Mundie and Chinese counterparts from Tsinghua University and major AI companies. Participants cited by the newspaper said those exchanges have focused on frontier-model safety, technical guardrails and broader questions of strategic stability.

Why does it matter?

A formal AI risk channel between Washington and Beijing would signal that both governments see advanced AI as a strategic stability issue, not only an economic or technological race. Even brief talks could matter if they create channels for crisis communication about military AI, frontier-model failures, or misuse by non-state actors. However, because the discussions are still only reported as under consideration, the significance lies in the possibility of a risk-management mechanism, not in any confirmed diplomatic breakthrough.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

WEF report says AI is reshaping cybersecurity defence

Advanced AI models are reshaping cybersecurity by accelerating both offensive and defensive capabilities, forcing organisations to rethink how they detect, assess and respond to cyber threats.

A new World Economic Forum report argues that AI is becoming a defining force in cybersecurity, with organisations increasingly moving from pilot projects to operational deployment. According to the WEF, AI is already being used to improve vulnerability identification, threat detection, response speed and resilience.

The report highlights how AI can help security teams process large volumes of data, detect threats faster and support more efficient responses. At the same time, it warns that threat actors are also using AI to automate deception, generate malware and scale attacks at machine speed.

WEF’s analysis says the growing speed and scale of AI-enabled cyber operations are putting pressure on traditional cybersecurity models. Instead of relying mainly on prevention and scheduled patching cycles, organisations are being pushed towards continuous detection, automated response, stronger access controls and more resilient infrastructure.

The report also stresses that AI’s value in cybersecurity depends on strategy, governance and human oversight. Rather than treating AI as a standalone tool, organisations are encouraged to test use cases carefully, build appropriate safeguards and invest in the skills and processes needed to defend at machine speed.

Why does it matter?

AI is changing cybersecurity on both sides of the equation. It can lower the barriers for faster and more scalable attacks, but it can also help defenders improve detection, response and resilience. The wider significance is that cybersecurity strategies built around periodic assessment and manual response may become less effective as AI-driven threats and defences operate at greater speed and scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

European Commission updates guidance on generative AI use in research

The European Commission has updated the ERA Living Guidelines on the responsible use of generative AI in research, reflecting the growing use of AI tools across scientific work. The revised guidance aims to support researchers, research organisations and funding bodies in adopting generative AI while maintaining core principles of research integrity.

The guidelines emphasise reliability, honesty, respect and accountability, including transparency over AI use, protection of privacy and confidential information, and responsibility for research outputs. They also stress that researchers remain ultimately responsible for scientific output and should verify AI-generated results.

New recommendations address risks linked to the use of generative AI by third parties, including in meetings, note-taking, summaries and document overviews, where confidential information, data protection or intellectual property rights may be affected. The guidelines encourage researchers and organisations to inform third parties about the use of such tools and related risks.

A specific addition concerns the risk of ‘hidden prompts’, where instructions may be secretly embedded in documents or inputs to influence generative AI tools. The guidelines call on research funding organisations to remain aware of such risks, set rules prohibiting manipulation where relevant, and introduce appropriate safeguards in IT systems used to process information.

Developed through the European Research Area Forum, the guidelines are intended as a non-binding supporting tool for the research community. The Commission says they will be updated regularly and that users can continue to provide feedback as generative AI and the surrounding policy landscape evolve.

Why does it matter?

Generative AI is becoming part of everyday research workflows, from drafting and summarising to proposal preparation and document analysis. The updated guidelines show that research integrity risks now extend beyond individual misuse to organisational processes, third-party tools and hidden technical behaviours that may affect scientific judgement. Shared guidance across the European Research Area can help institutions adopt AI without weakening transparency, accountability or trust in research.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

India and France discuss expanding AI and space cooperation

India and France have discussed expanding cooperation in space, AI, applied mathematics and advanced technologies following a bilateral meeting between Indian Minister of State for Science and Technology Dr Jitendra Singh and French Minister for Higher Education, Research and Space Philippe Baptiste.

The talks reviewed the countries’ growing strategic partnership in science, technology and space, with the 2026 Indo-French Year of Innovation identified as an opportunity to deepen collaboration in emerging technology fields.

Both sides discussed stronger links between Indian and French research institutions, including initiatives related to AI, advanced materials and digital sciences. Space cooperation also featured prominently, building on long-standing collaboration between the Indian Space Research Organisation and France’s Centre National d’Études Spatiales through joint missions such as Megha-Tropiques and SARAL, and ongoing work on TRISHNA.

France also expressed interest in expanding cooperation on human spaceflight, microgravity experiments and ocean-related data-sharing initiatives.

Indian officials highlighted the expansion of the country’s space ecosystem following recent reforms, noting that nearly 400 space start-ups are now active in the sector. The discussions also covered opportunities linked to India’s Deep Ocean Mission and future engagement around the International Space Summit planned in Paris in September 2026.

Why does it matter?

The meeting reflects how AI, space, ocean data and advanced research are increasingly being treated as linked areas of strategic technology cooperation. For India and France, the agenda goes beyond scientific exchange: it connects national innovation ecosystems, space-sector reforms, research partnerships and the use of data-intensive technologies for climate, ocean and public-interest applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OECD finds audit institutions are building AI capacity but struggling to scale

Public audit institutions are expanding their use of AI, but most remain at an early stage of adoption, with a significant gap between pilot projects and full operational deployment, according to a new OECD paper.

Drawing on consultations with 15 institutions across 14 countries and the European Union, the paper says AI is being explored to strengthen oversight and improve audit processes in areas such as anomaly detection, document processing, knowledge management and predictive risk assessment.

The OECD says institutional commitment is already visible across several indicators. Among the institutions consulted, 67% reported having a formal AI strategy, 80% had internal AI guidelines or policies, 87% offered AI-related staff training, and 87% had at least one AI tool in production.

However, the paper stresses that maturity levels vary widely and that many tools remain limited in scale or are still being tested. It identifies a gap between experimentation and scalable operational deployment, despite the growing integration of AI into broader digital transformation efforts.

The paper highlights several emerging audit use cases, including machine-learning systems for anomaly detection in procurement and financial records, predictive models to identify entities at higher risk of distress or non-compliance, intelligent document processing for extracting data from unstructured files, and generative AI tools for drafting, summarising and translating documents.
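The paper does not specify the methods behind these systems. As a purely illustrative sketch of the anomaly-detection idea in procurement records, a simple robust outlier rule over invoice amounts (thresholds and data are hypothetical, not from the OECD paper) might look like this:

```python
from statistics import median

def flag_anomalies(amounts, threshold=3.5):
    """Flag amounts far from the median, scaled by the median
    absolute deviation (MAD), which resists distortion by outliers."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    # 0.6745 rescales the MAD to approximate a standard deviation
    # for normally distributed data (a common convention).
    return [a for a in amounts
            if mad and abs(0.6745 * (a - med) / mad) > threshold]

# Invoices clustering near 10,000 with one extreme payment.
invoices = [9800, 10200, 9900, 10100, 10050, 9950, 250000]
print(flag_anomalies(invoices))  # → [250000]
```

Real audit systems would of course combine many features (supplier history, contract metadata, timing patterns) and typically use trained models rather than a single univariate rule; the sketch only shows the core flagging logic.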

It also points to more specialised applications, such as semantic search, knowledge management, and visual or spatial analysis using satellite imagery, drones or other sensor-based systems.

Despite growing experimentation, the OECD says the main barriers to wider use remain structural. Fragmented data systems, weak interoperability, limited internal technical expertise and uneven digital infrastructure continue to slow progress.

The paper argues that robust data governance, secure and interoperable systems, and stronger in-house development capacity will be critical if public audit bodies are to scale AI responsibly while maintaining transparency, accountability and public trust.

It also stresses that AI is being positioned as a support tool rather than a substitute for auditors. Across the cases reviewed, human oversight remains central, both because of current limitations in explainability and reliability and because audit institutions are treating AI adoption cautiously in high-stakes oversight settings.

The OECD presents the current period as a transitional phase in which public audit institutions are building the foundations needed for broader and more trustworthy use of AI in oversight.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ATxSummit 2026 to focus on AI governance and digital growth in Asia

ATxSummit 2026 will take place in Singapore on 20 and 21 May 2026 as part of Asia Tech x Singapore. Organisers state that the event will convene more than 4,000 participants from over 50 countries, including policymakers, technology companies, researchers, and industry representatives.

The programme will focus on five themes related to AI deployment and governance. These include agentic systems in enterprise operations, AI applications for public-sector and national use, scientific research and embodied intelligence, workforce and organisational changes, and the implementation of AI governance approaches.

Participants include representatives from organisations such as the World Bank Group, NVIDIA, Google, Amazon, and OpenAI. The programme also includes academic and policy discussions involving AI research, security, and digital governance.

The summit will include technical workshops, government roundtables, and the Digital Frontier Forum, focused on AI, deep technology, and digital growth strategies. ATxEnterprise will also take place alongside the summit, with sessions addressing infrastructure investment, digital trust, cross-border connectivity, and responsible AI deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ICESCO and Morocco sign agreement on AI and digital capacity building

The Islamic World Educational, Scientific and Cultural Organisation (ICESCO) and Morocco’s Ministry of Digital Transition and Administrative Reform have signed a memorandum of understanding on cooperation in digital transformation, AI and strategic foresight.

The agreement was signed in Rabat on the sidelines of the African Open Government Conference by ICESCO Director-General Dr Salim M. AlMalik and Dr Amal El Fallah, Minister Delegate to the Head of Government in charge of Digital Transition and Administrative Reform of Morocco.

The memorandum provides for workshops, training programmes and joint seminars aimed at building capacity among public and private sector professionals in digital transformation, AI, strategic foresight and digital diplomacy. It also covers the exchange of expertise and open data, the preparation of reference materials, and research related to future skills and professions in ICESCO member states.

The agreement further includes cooperation with universities and research centres to support a knowledge ecosystem aligned with the requirements of the digital economy. It also refers to innovation laboratories and digital tools for the digitisation, indexing, research and analysis of cultural and scientific heritage materials.

Why does it matter?

The agreement places AI within a broader capacity-building agenda that includes public-sector skills, digital diplomacy, open data, foresight and heritage digitisation. Its policy relevance lies in how international organisations and national governments are using AI cooperation not only for technology adoption, but also for institutional readiness and future skills development across member states.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Automation fuels inequality more than productivity gains, study finds

A new study co-authored by economists from the Massachusetts Institute of Technology and Yale University finds that automation in the United States has often been driven less by productivity gains and more by firms’ efforts to reduce labour costs.

Rather than replacing workers to maximise efficiency, companies have frequently targeted employees earning a ‘wage premium’, those paid above the average for comparable roles, effectively pushing down wages in those positions.

The research suggests this pattern has contributed significantly to widening income inequality while delivering only limited productivity improvements.

The analysis, which examines data spanning multiple decades and industries, indicates that automation has disproportionately affected higher-earning workers within affected groups. It also estimates that inefficient automation deployment may have offset a large share of potential productivity gains over time.

Researchers argue that the findings highlight a structural tension in how automation is applied, where short-term cost reduction can take priority over long-term economic efficiency, shaping both wage distribution and overall growth dynamics in the US economy since 1980.

Why does it matter? 

The findings challenge the assumption that automation primarily improves efficiency and productivity, showing instead that firms can strategically use it to reshape wage structures and concentrate economic gains.

From a broader perspective, this helps explain why technological progress has not translated evenly into higher productivity or shared prosperity, while also highlighting how corporate incentives can steer innovation in ways that deepen inequality across labour markets.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Singapore Ministry of Health addresses AI-developed drugs and patient data safeguards

Singapore’s Ministry of Health has said that drugs developed with the use of AI will be subject to the same regulatory expectations as conventionally developed medicines, including requirements on quality, safety and efficacy.

The ministry made the statement in response to a parliamentary question on the regulation of AI-developed drugs, clinical trials and safeguards for patient data used in AI-related healthcare innovation.

It said the Health Sciences Authority’s approach is aligned with international regulatory principles on the responsible use of AI in drug development, including those outlined by the US Food and Drug Administration and the European Medicines Agency.

The ministry also said that patient data used for AI development is covered by existing data protection and cybersecurity safeguards, including obligations under Singapore’s Personal Data Protection Act to maintain patient confidentiality and prevent data leakage.

Authorities will continue to monitor developments in AI-related healthcare innovation and strengthen safeguards where necessary.

Why does it matter?

The response signals that Singapore is not creating a separate, lighter pathway for AI-developed medicines, but is applying existing drug safety standards while monitoring how AI changes research, development and clinical use. The issue is relevant for digital health governance because AI in drug development depends not only on regulatory approval of final products, but also on the protection of patient data used to train, test or validate health-related AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK’s ICO issues guidance on AI-generated FOI requests

The UK Information Commissioner’s Office (ICO) has published new guidance to help public authorities handle Freedom of Information (FOI) requests generated using AI, amid reports of growing pressure from higher request volumes and more complex submissions.

According to the ICO, some AI-generated requests misquote or misinterpret FOI legislation, while others require significant clarification before they can be processed. The regulator says the guidance is intended to give FOI teams practical support so they can continue meeting their legal duties without adding new burdens.

The guidance addresses issues that practitioners say are increasingly common, including requests generated with AI that misstate the law, a rising number of submissions that need refinement, and the need to ensure requests are handled fairly and consistently regardless of how they were created.

It also includes example wording that public authorities can use to encourage more responsible use of AI by requesters and to support clearer and more effective FOI submissions. The ICO says the aim is to reduce delays, errors, and complaints linked to poorly framed or confusing requests.

Deborah Clark, the ICO’s Upstream Regulation Manager, clarified: ‘This guidance is about giving teams practical, sensible support, not adding new burdens. It does not change the law or create new requirements; instead, it helps teams apply existing FOI principles consistently, regardless of how a request is created. Used responsibly, AI also has the potential to help public authorities improve how they handle FOI requests, and this guidance sits alongside our wider work to support innovation that delivers real benefits for organisations and the public.’

The ICO says the guidance applies to all public authorities covered by the Freedom of Information Act and draws on existing casework, stakeholder engagement, practitioner feedback, and input from its AI specialists.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!