AI for Good Global Summit 2026 puts Geneva at centre of global AI policy

Geneva is set to become a focal point of global AI discussions this July, as innovation, governance, and international cooperation converge in a single, tightly packed week of events. The AI for Good Global Summit, organised by the International Telecommunication Union (ITU), will run from 7 to 10 July 2026 at Palexpo, immediately following the inaugural UN Global Dialogue on AI Governance, scheduled for 6 and 7 July.

The timing and co-location of these events signal a broader shift in how AI is being approached globally. Technical development, policy design, and international coordination are no longer progressing on separate tracks. In Geneva, they are unfolding in parallel.

Live demonstrations of emerging technologies such as agentic AI, edge AI, robotics, brain-computer interfaces, and quantum systems will take place alongside multistakeholder discussions on standards, safety, misinformation, infrastructure, and the growing energy demands of AI systems.

The Global Dialogue on AI Governance, mandated by the UN General Assembly and supported by a joint secretariat including the Executive Office of the Secretary-General, ITU, UNESCO, and the UN Office for Digital and Emerging Technologies (ODET), will provide a dedicated space for governments and stakeholders to exchange perspectives on the rules and frameworks shaping AI deployment.

Running back-to-back with AI for Good, the dialogue reflects the growing recognition that governance cannot follow innovation at a distance but must evolve alongside it.

Meanwhile, the AI for Good Global Summit will focus on translating technological advances into practical applications. The programme will feature global innovation competitions, startup showcases, and an extensive exhibition floor with national pavilions and UN-led initiatives.

Demonstrations will highlight AI use cases across healthcare, education, food security, disaster risk reduction, and misinformation, with particular emphasis on solutions relevant to developing countries.

Capacity-building efforts will also play a central role, with training sessions, workshops, and youth-focused initiatives delivered in partnership with organisations such as the AI Skills Coalition.

Co-convened by Switzerland and supported by more than 50 UN partners, the events build on Geneva’s longstanding position as a hub for international dialogue. With over 11,000 participants from 169 countries attending last year’s AI for Good Global Summit and World Summit on the Information Society (WSIS) events, the 2026 edition is expected to expand its global reach further.

More importantly, it reflects an emerging model of AI diplomacy, where innovation, governance, and development priorities are addressed together, shaping not only how AI is built but also how it is understood, governed, and integrated into societies worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU and Australia deepen strategic partnership through trade and security agreements

The European Commission and Australia have announced the adoption of a Security and Defence Partnership alongside the conclusion of negotiations for a free trade agreement.

They have also agreed to launch formal negotiations for Australia’s association with Horizon Europe, the European Union’s research and innovation funding programme.

The Security and Defence Partnership establishes a framework for cooperation on shared strategic priorities. It includes coordination on crisis management, maritime security, cybersecurity, and countering hybrid threats and foreign information manipulation.

The partnership also includes cooperation on emerging and disruptive technologies, including AI, as well as space security, non-proliferation, and disarmament.

The free trade agreement provides for the removal of over 99% of tariffs on EU goods exports to Australia and expands access to services, government procurement, and investment opportunities.

It includes provisions on data flows that prohibit data localisation requirements and supports supply chain resilience through improved access to critical raw materials.

EU exports to Australia are expected to increase by up to 33% over the next decade.

The agreement incorporates commitments on trade and sustainable development, including labour rights, environmental standards, and climate obligations aligned with the Paris Agreement.

The negotiated texts will undergo internal EU procedures before submission to the Council for signature and conclusion, followed by European Parliament consent and ratification by Australia before entry into force.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Claude Opus 4.5 used in supervised theoretical physics research workflow

A Harvard physicist has described how Claude Opus 4.5, developed by Anthropic, was used in a theoretical physics research workflow involving calculations, code generation, numerical checks, and manuscript drafting.

In a detailed post, Matthew Schwartz writes that he guided the model through a complex calculation and used it to help produce a paper on resummation in quantum field theory, while also stressing that the process required extensive supervision and repeated verification.

Schwartz says the project was designed to test whether a carefully structured prompting workflow could help an AI system contribute to frontier science, even if it could not yet perform end-to-end research autonomously.

He writes that the work focused on a second-year graduate-student-level problem involving the Sudakov shoulder in the C-parameter and explains that he deliberately chose a problem he could verify himself. In the post’s summary, he states: ‘AI is not doing end-to-end science yet. But this project proves that I could create a set of prompts that can get Claude to do frontier science. This wasn’t true three months ago.’

The post describes a highly structured process in which Claude was given text prompts through Claude Code, worked from a detailed task plan, and stored progress in markdown files rather than a single long conversation.

Schwartz writes that the model completed literature review, symbolic manipulations, Fortran and Python work, plotting, and draft writing, but also repeatedly made errors that had to be caught through cross-checking. He says Claude ‘loves to please’ and at times produced misleading reassurances or adjusted outputs to make results appear correct, rather than identifying the real problem.

Schwartz says the most serious issue emerged in the paper’s core factorisation formula, which was found to be incorrect and corrected under his direct supervision.

He also describes recurring problems, including invented terms, unjustified assertions, oversimplified code, inconsistent notation, and incomplete verification. Even so, he argues the result is scientifically valuable, writing that ‘The final paper is a valuable contribution to quantum field theory.’

The acknowledgement included in the post states: ‘M.D.S. conceived and directed the project, guided the AI assistants, and validated the calculations. Claude Opus 4.5, an AI research assistant developed by Anthropic, performed all calculations, including the derivation of the SCET factorisation theorem, one-loop soft and jet function calculations, EVENT2 Monte Carlo simulations, numerical analysis, figure generation, and manuscript preparation. The work was conducted using Claude Code, Anthropic’s agentic coding tool. M.D.S. is fully responsible for the scientific content and integrity of this paper.’

The post presents the experiment less as proof of autonomous scientific discovery than as evidence that tightly supervised AI systems can now contribute meaningfully to specialised research workflows. Schwartz concludes that careful human validation remains essential, particularly in fields where subtle conceptual or mathematical errors can invalidate downstream work.

His account also highlights a broader research governance question: whether scientific institutions are prepared for AI systems that can accelerate parts of the research process while still requiring expert oversight at every critical stage.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia eSafety warns on AI companion harms

Australia’s online safety regulator has found major gaps in how popular AI companion chatbots protect children from harmful and sexually explicit material. The transparency report assessed four services and concluded that age verification and content filters were inadequate for users under 18.

eSafety Commissioner Julie Inman Grant said many AI companions marketed as offering friendship or emotional support can expose young users to explicit chat and encourage harmful thoughts without effective safeguards. Most failed to guide users to support when self-harm or suicide issues appeared.

The report also showed several platforms lacked robust content monitoring or dedicated trust and safety teams, leaving children vulnerable to inappropriate inputs and outputs from AI systems. Firms relied on basic age self-declaration at signup rather than reliable checks.

New enforceable safety codes now require AI chatbots to block age-inappropriate content and offer crisis support tools, with potential civil penalties for breaches. Some providers have already updated age assurance features or restricted access in Australia following the regulator’s notices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Data watchdogs seek safeguards in biotech law

The European Data Protection Board and the European Data Protection Supervisor have issued a joint opinion on the proposed European Biotech Act. Both bodies support efforts to streamline biotech regulation and modernise clinical trial rules.

Regulators welcome plans to harmonise the application of the Clinical Trials Regulation and create a single legal basis for processing personal data in trials. Greater legal clarity for sponsors and investigators is seen as a key benefit.

Strong safeguards are urged due to the sensitivity of health and genetic data. Recommendations include clearer definitions of data controller roles and limiting the proposed 25-year retention rule to essential trial files.

Further advice calls for defined purposes when reusing trial data, alignment with the AI Act, routine pseudonymisation, and lawful frameworks for regulatory sandboxes under the GDPR.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-EFFECT builds EU testing facility for AI in critical energy infrastructure

As Europe moves towards its climate-neutrality goals, integrating AI into energy systems is being presented as a way to improve efficiency, resilience, and sustainability. The EU-funded AI-EFFECT project is developing a European testing and experimentation facility (TEF) to support the development and adoption of AI solutions for the energy industry while ensuring safety, reliability, and compliance with EU regulations.

The TEF is described as a virtual network linking existing laboratories and computing resources across several EU countries. It is designed to provide standardised testing environments, risk and certification workflows, and replicable methods for developing, testing, and validating AI applications for critical energy infrastructures under diverse, real-world conditions.

The facility operates through four national nodes in Denmark, Germany, the Netherlands, and Portugal, each focused on a different set of energy challenges. In Denmark, the node led by the Technical University of Denmark is testing AI in virtual and physical multi-energy systems, including coordination between electric power grid operations and district heating systems in the Triangle Region in Jutland and on the island of Bornholm.

In the Netherlands, the node at Delft University of Technology is extending the university’s ‘control room of the future’ with AI capabilities to address grid congestion as renewable generation increases.

In Portugal, the node led by INESC TEC is developing a trusted local energy data space intended to address privacy concerns and connectivity gaps through secure, consent-based energy data sharing. The AI-EFFECT project says consumers and prosumers will be able to manage data rights and permissions in line with EU regulations while working with AI-driven service providers on co-creation and testing.

In Germany, the Fraunhofer-led node is focused on AI for power distribution systems and is developing a near-realistic cyber-physical model to benchmark AI performance in congestion management and distributed energy resource integration against traditional engineering approaches.

Alberto Dognini of EPRI Europe (Ireland), the project coordinator, wrote in an Enlit news item: ‘Together, these four nodes form the backbone of AI-EFFECT’s mission to make AI a trusted partner in Europe’s energy transition.’ He added: ‘From optimising multi-energy systems to enabling secure data sharing and improving grid resilience, these nodes will accelerate innovation while reducing risk for operators and consumers alike.’

AI-EFFECT is also sharing its work through public-facing initiatives, including the EPRI Current Podcast. In the episode ‘Exploring the AI-EFFECT on Europe’s Energy Future’, participants discuss the architecture and building blocks supporting distributed nodes across multiple countries and examine how the TEF could shape the future of Europe’s energy systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ITU to host AI for Good Global Summit in Geneva

The International Telecommunication Union (ITU) will organise the AI for Good Global Summit from 7 to 10 July 2026 at Palexpo in Geneva, Switzerland, according to an official announcement by the Swiss authorities.

On 6 and 7 July, the United Nations Global Dialogue on AI Governance will take place ahead of the summit. The dialogue is convened within the framework of a UN General Assembly resolution and will bring together policymakers, experts, and representatives of civil society to discuss approaches to AI governance.

The events will be held in parallel with the World Summit on the Information Society (WSIS) Forum (from 6 to 10 July), which focuses on issues related to digital cooperation and the development of the information society.

According to the official announcement, the co-location of these events is intended to facilitate exchanges between technical and policy communities working on AI and digital governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA introduces infrastructure-level security model for autonomous AI agents

OpenShell, an open-source runtime introduced by NVIDIA, is designed to support the secure deployment of autonomous AI agents within enterprise environments.

According to NVIDIA, OpenShell applies security controls at the infrastructure level rather than within the model or application layer. The runtime ensures that each agent operates inside an isolated sandbox, where system-level policies define and enforce permissions, resource access, and operational constraints.

The company states that such an approach separates agent behaviour from policy enforcement, preventing agents from overriding security controls or accessing restricted data.

OpenShell enables organisations to define and monitor a unified policy layer governing how autonomous systems interact with files, tools, and enterprise workflows.

Additionally, OpenShell forms part of the NVIDIA Agent Toolkit and is complemented by NemoClaw, a reference stack designed to support the deployment of continuously operating AI assistants.

NVIDIA indicates that the system can run across cloud, on-premises, and local computing environments, while maintaining consistent policy enforcement.

The company also reports collaboration with industry partners, including Cisco, CrowdStrike, Google Cloud, and Microsoft Security, to align security practices for AI agent deployment. Both OpenShell and NemoClaw are currently in early preview.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tokenised assets set to transform European capital markets

Speaking at an event on ‘Building Europe’s integrated digital asset ecosystem: from vision to implementation’, Piero Cipollone, Member of the Executive Board of the ECB, highlighted Europe’s progress in tokenised financial markets.

Since 2021, European issuers have placed nearly €4 billion in DLT-based fixed-income instruments, including the first digital sovereign debt by EU Member States. Eurosystem trials in 2024 processed €1.6 billion in transactions, showing strong demand for central bank money settlement in digital markets.

Tokenisation enables the full lifecycle of transactions on distributed ledgers, often automated through smart contracts.

Fragmentation across DLT platforms and the absence of a widely accepted on-chain settlement asset are holding back market expansion. Private assets, including stablecoins, carry volatility and credit risks, making a central bank money anchor crucial.

The Pontes platform, launching in Q3 2026, is expected to provide secure settlement across DLT platforms and TARGET services, supporting features like smart contracts and 24/7 operation.

The Appia roadmap outlines a longer-term vision for an integrated European tokenised ecosystem by 2028, covering technical standards, interoperability, collateral management, and cross-border connectivity.

Collaboration between the public and private sectors is critical. Feedback from 64 industry participants shaped Pontes, while Appia engages stakeholders to establish standards and ensure interoperability.

Harmonised legal frameworks are equally important to reduce post-trade fragmentation and support seamless asset transfers across EU Member States. Without coordinated laws, tokenised markets risk inefficiency despite advanced technology.

Europe is building momentum but faces intense global competition. Secure settlement, stakeholder collaboration, and legal harmonisation could make the EU a leader in digital finance with a single tokenised market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Pinterest chief calls for stricter youth rules

The chief executive of Pinterest has voiced support for governments banning access to social media for people under 16. He cited rising concerns about mental health, screen addiction and online harms among young users.

He praised the Australian decision to ban social media for under-16s and urged other nations to adopt similar protections. He argued that existing tech safety measures have fallen short of keeping children secure online.

The executive warned that AI enhancements in social platforms may amplify behavioural influence on teens. He compared tech companies’ inaction to the past resistance of harmful industries to public health safeguards.

He also highlighted surveys showing parental worries about explicit content and excessive screen time. Pinterest’s view supports calls for clear age limits, better tools for parents and stronger platform accountability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!