Quantum readiness as a strategic priority for firms

Businesses are beginning to prepare for the commercial potential of quantum computing, a technology that leverages quantum mechanics to solve problems beyond the capabilities of classical computers.

Early engagement focuses on awareness, training, and workshops to explore possible applications across sectors such as pharmaceuticals, energy, finance, and advanced materials.

Companies face several barriers to readiness, including limited technological maturity, unclear business implications, high costs for access and staff training, and a shortage of talent with both quantum and industry expertise.

These obstacles mean that most readiness initiatives remain concentrated in large, research-intensive firms, leaving smaller companies at risk of falling behind.

Support mechanisms are helping firms navigate these challenges. Networking, advisory services, technology centres, R&D grants, and stakeholder consultations help firms access resources and partnerships to accelerate readiness and link research with commercial use.

Building quantum readiness will require ongoing investment in skills, infrastructure, and partnerships, alongside policies that combine exploratory pilots with long-term workforce and software support.

Hybrid approaches integrating quantum computing with AI and high-performance computing offer practical entry points for early adoption, strengthening competitiveness and innovation across industries.
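A hybrid quantum-classical workflow of this kind can be sketched as a classical optimisation loop wrapped around a quantum expectation value. In the minimal illustration below, the `expectation` function is a purely classical stand-in (the analytic ⟨Z⟩ of a single-qubit rotation) for what would in practice be a call to quantum hardware or an HPC-hosted simulator; all names and parameters are illustrative, not any vendor's API.

```python
import math

def expectation(theta):
    # Stand-in for a quantum backend call: for a single-qubit Ry(theta)
    # rotation, the <Z> expectation is cos(theta). A real hybrid workflow
    # would submit a circuit to hardware or an HPC-hosted simulator here.
    return math.cos(theta)

def minimise(theta, lr=0.2, steps=100):
    # Classical outer loop: the parameter-shift rule estimates the
    # gradient from two further "backend" evaluations per step.
    shift = math.pi / 2
    for _ in range(steps):
        grad = (expectation(theta + shift) - expectation(theta - shift)) / 2
        theta -= lr * grad
    return theta

theta = minimise(0.5)
print(round(expectation(theta), 3))  # approaches -1.0, the minimum of <Z>
```

The classical optimiser never needs to simulate the quantum state itself; it only queries the backend for expectation values, which is what makes this division of labour a practical entry point.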

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI for Good Global Summit 2026 puts Geneva at centre of global AI policy

Geneva is set to become a focal point of global AI discussions this July, as innovation, governance, and international cooperation converge in a single, tightly packed week of events. The AI for Good Global Summit, organised by the International Telecommunication Union (ITU), will run from 7 to 10 July 2026 at Palexpo, immediately following the inaugural UN Global Dialogue on AI Governance, scheduled for 6 and 7 July.

The timing and co-location of these events signal a broader shift in how AI is being approached globally. Technical development, policy design, and international coordination are no longer progressing on separate tracks. In Geneva, they are unfolding in parallel.

Live demonstrations of emerging technologies such as agentic AI, edge AI, robotics, brain-computer interfaces, and quantum systems will take place alongside multistakeholder discussions on standards, safety, misinformation, infrastructure, and the growing energy demands of AI systems.

The Global Dialogue on AI Governance, mandated by the UN General Assembly and supported by a joint secretariat including the Executive Office of the Secretary-General, ITU, UNESCO, and the UN Office for Digital and Emerging Technologies (ODET), will provide a dedicated space for governments and stakeholders to exchange perspectives on the rules and frameworks shaping AI deployment.

Running back-to-back with AI for Good, the dialogue reflects the growing recognition that governance cannot follow innovation at a distance but must evolve alongside it.

Meanwhile, the AI for Good Global Summit will focus on translating technological advances into practical applications. The programme will feature global innovation competitions, startup showcases, and an extensive exhibition floor with national pavilions and UN-led initiatives.

Demonstrations will highlight AI use cases across healthcare, education, food security, disaster risk reduction, and misinformation, with particular emphasis on solutions relevant to developing countries.

Capacity-building efforts will also play a central role, with training sessions, workshops, and youth-focused initiatives delivered in partnership with organisations such as the AI Skills Coalition.

Co-convened by Switzerland and supported by more than 50 UN partners, the events build on Geneva’s longstanding position as a hub for international dialogue. With over 11,000 participants from 169 countries attending last year’s AI for Good Global Summit and World Summit on the Information Society (WSIS) events, the 2026 edition is expected to expand its global reach further.

More importantly, it reflects an emerging model of AI diplomacy, where innovation, governance, and development priorities are addressed together, shaping not only how AI is built but also how it is understood, governed, and integrated into societies worldwide.

Robots and AI transform end-to-end supply chains

AI is transforming supply chains and logistics, moving operations from reactive, manual processes to autonomous, agent-driven systems. Enterprises are using AI agents to optimise and manage workflows, boosting efficiency in warehousing, distribution, and transportation.

Simulation tools and digital twins allow teams to predict disruptions, optimise performance, and test solutions in virtual environments before implementing changes on the ground.
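The simulate-before-deploy idea can be illustrated with a toy model. In the sketch below (the `simulate` function and every number in it are hypothetical), a baseline scenario is compared against one in which a supplier disruption halves inbound flow, which is the kind of what-if question a digital twin answers before changes are made on the ground.

```python
import random

def simulate(days, supply_rate, demand_rate, disruption_day=None, seed=0):
    # Toy "digital twin" of one warehouse: counts stockout days under a
    # scenario. Real twins model full networks fed by live sensor data.
    rng = random.Random(seed)
    stock, stockouts = 50.0, 0
    for day in range(days):
        inbound = supply_rate
        if disruption_day is not None and day >= disruption_day:
            inbound *= 0.5  # hypothetical disruption: inbound flow halves
        stock += inbound - rng.uniform(0.8, 1.2) * demand_rate
        if stock < 0:
            stock, stockouts = 0.0, stockouts + 1
    return stockouts

baseline = simulate(90, supply_rate=10, demand_rate=10)
disrupted = simulate(90, supply_rate=10, demand_rate=10, disruption_day=30)
print(baseline, disrupted)  # the disruption scenario shows more stockout days
```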

Physical AI is taking automation a step further by embedding intelligence directly into robots and machinery.

Humanoid and industrial robots are now capable of handling tasks such as pallet sorting, last-mile deliveries, and inspection with increasing autonomy, guided by AI systems trained in cloud-connected simulation environments.

Companies are combining cloud, edge computing, and robotics frameworks to accelerate deployment and scale operations safely.

AI, robotics, and enterprise systems work together to channel sensor and machine data to predictive models and decision-making agents. Integrating simulations, AI agents, and robotics helps firms optimise inventory, cut risks, and boost productivity while preparing for autonomous supply chains.
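The sensor-to-model-to-agent flow described above can be sketched in miniature. Here a moving-average forecast stands in for the predictive model and a simple reorder rule for the decision-making agent; the function names and parameters are illustrative assumptions, not any vendor's API.

```python
def forecast_demand(history, window=3):
    # Predictive-model stand-in: moving average of recent sensor readings.
    recent = history[-window:]
    return sum(recent) / len(recent)

def reorder_decision(stock, forecast, lead_time_days=2, safety=5.0):
    # Decision-agent stand-in: order enough to cover forecast demand over
    # the supplier lead time, plus a safety buffer.
    target = forecast * lead_time_days + safety
    return max(0.0, target - stock)

daily_units_sold = [9.0, 11.0, 10.0, 12.0, 14.0]  # sensor/machine data feed
qty = reorder_decision(stock=8.0, forecast=forecast_demand(daily_units_sold))
print(qty)  # 21.0 = max(0, 12.0 * 2 + 5.0 - 8.0)
```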

EU and Australia deepen strategic partnership through trade and security agreements

The European Commission and Australia have announced the adoption of a Security and Defence Partnership alongside the conclusion of negotiations for a free trade agreement.

They have also agreed to launch formal negotiations for Australia’s association with Horizon Europe, the European Union’s research and innovation funding programme.

The Security and Defence Partnership establishes a framework for cooperation on shared strategic priorities. It includes coordination on crisis management, maritime security, cybersecurity, and countering hybrid threats and foreign information manipulation.

The partnership also covers cooperation on emerging and disruptive technologies, including AI, as well as space security, non-proliferation, and disarmament.

The free trade agreement provides for the removal of over 99% of tariffs on EU goods exports to Australia and expands access to services, government procurement, and investment opportunities.

It includes provisions on data flows that prohibit data localisation requirements and supports supply chain resilience through improved access to critical raw materials.

EU exports are expected to increase by up to 33% over the next decade.

The agreement incorporates commitments on trade and sustainable development, including labour rights, environmental standards, and climate obligations aligned with the Paris Agreement.

The negotiated texts will undergo the EU's internal procedures before submission to the Council for signature and conclusion, followed by European Parliament consent and ratification by Australia before entry into force.

Google outlines 2026 water stewardship projects across agriculture and cities

Google has published an overview of its 2026 Water Stewardship Project Portfolio, outlining projects intended to replenish water, improve water quality, and support ecosystem health in the areas where it operates. The post, published for World Water Day, says the company is working towards returning more freshwater than it consumes, on average, across its offices and data centres by 2030.

According to the post, Google replenished more than 7 billion gallons in 2025 alone and supported 156 projects across 97 watersheds. Once the portfolio's projects are fully implemented, the company expects them to replenish more than 11 billion gallons by 2030.

The company groups its work into three main areas: agriculture, watershed restoration, and urban water infrastructure. In agriculture, Google says it is supporting irrigation and water-saving projects in places including the Colorado River Basin, the Tietê Basin in Brazil, and India, which involve technologies such as smart sensors, AI-supported irrigation timing, and water-retention measures linked to cover crops.

For watershed restoration, the post lists projects in Ireland, California, and Taiwan. Google says these initiatives are intended to restore bog ecosystems, reconnect river habitats, and improve water quality through natural filtration systems.

The company also highlights urban water infrastructure projects in the Flemish region of Belgium and in Virginia. These include leak detection, AI-powered monitoring in schools, and stormwater control systems designed to improve water management and reduce losses.

The post presents the portfolio as part of Google’s wider sustainability strategy and says more information is available in its 2026 Water Stewardship Project Portfolio report. As with similar corporate sustainability announcements, the claims presented in the post reflect the company’s own summary of its projects and targets.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Brain-inspired chip could cut AI energy use by up to 70%

Researchers at the University of Cambridge have developed a nanoelectronic device to reduce energy consumption in AI hardware. The team, led by Dr Babak Bakhit, designed the system to mimic how the human brain processes information.

The device uses a new form of hafnium oxide to create a stable, low-energy memristor. It processes and stores data in the same location, similar to how neurons function in the brain.

To achieve this, the researchers added strontium and titanium to form internal electronic junctions. This allows the device to change resistance smoothly without relying on unstable conductive filaments.

Tests showed the device operates with switching currents up to a million times lower than some conventional technologies. It also demonstrated stable multi-level states required for advanced in-memory computing.
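The combination of stable multi-level states and in-memory computing can be illustrated schematically. In this hypothetical sketch, weights are snapped onto a fixed set of conductance levels and a vector-matrix multiply is computed as summed currents (I = G·V), the operation a memristor crossbar performs where the data is stored; the numbers and level count are illustrative, not taken from the study.

```python
def quantise(conductance, levels=8, g_max=1.0):
    # Map a target value onto one of a fixed set of stable multi-level
    # conductance states, as a multi-level memristor cell would store it.
    step = g_max / (levels - 1)
    return round(conductance / step) * step

def in_memory_matvec(weights, voltages):
    # Schematic analogue vector-matrix multiply: each column of memristor
    # conductances sums its currents (I = G * V) on a shared line, so the
    # multiply-accumulate happens where the data is stored.
    g = [[quantise(w) for w in row] for row in weights]
    return [sum(g[i][j] * voltages[i] for i in range(len(voltages)))
            for j in range(len(weights[0]))]

out = in_memory_matvec([[0.0, 1.0], [1.0, 0.6]], [1.0, 1.0])
print(out)  # column current sums; 0.6 snaps to the nearest stored level, 4/7
```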

The team said the approach could reduce AI hardware energy use by up to 70%. The findings were published in the journal Science Advances.

Conversational AI reshapes CNC manufacturing

Japanese manufacturing firm ARUM Inc. is introducing AI into precision machining, aiming to address a growing shortage of skilled workers. Its TTMC Origin system uses KAYA, a conversational AI that guides operators through machining tasks with natural language instructions.

Powered by proprietary software ARUMCODE and built on Microsoft Azure AI tools, the system translates traditional craftsmanship into automated workflows. Tasks once handled by skilled machinists can now be done by junior workers, lowering the barrier to operating advanced CNC machines.

The technology dramatically reduces production time. Programming a precision component that previously took over an hour can now be completed in minutes.

Such efficiency gains are particularly valuable for high-mix, low-volume manufacturing, where speed and cost control are critical to profitability.

ARUM’s expansion into AI-driven solutions reflects broader industry pressures. Japan’s manufacturing sector continues to face a persistent labour shortage, with demand for skilled machinists exceeding supply.

By combining automation with scalable cloud infrastructure, ARUM aims to maintain the country’s leadership in precision manufacturing while preparing for global deployment.

Claude Opus 4.5 used in supervised theoretical physics research workflow

A Harvard physicist has described how Claude Opus 4.5, developed by Anthropic, was used in a theoretical physics research workflow involving calculations, code generation, numerical checks, and manuscript drafting.

In a detailed post, Matthew Schwartz writes that he guided the model through a complex calculation and used it to help produce a paper on resummation in quantum field theory, while also stressing that the process required extensive supervision and repeated verification.

Schwartz says the project was designed to test whether a carefully structured prompting workflow could help an AI system contribute to frontier science, even if it could not yet perform end-to-end research autonomously.

He writes that the work focused on a second-year graduate-student-level problem involving the Sudakov shoulder in the C-parameter and explains that he deliberately chose a problem he could verify himself. In the post’s summary, he states: ‘AI is not doing end-to-end science yet. But this project proves that I could create a set of prompts that can get Claude to do frontier science. This wasn’t true three months ago.’

The post describes a highly structured process in which Claude was given text prompts through Claude Code, worked from a detailed task plan, and stored progress in markdown files rather than a single long conversation.
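The progress-file pattern is easy to sketch. In the illustration below (the `PROGRESS.md` file name and the checklist format are assumptions, not details from Schwartz's post), each session appends checklist entries and a later session re-reads the file to recover pending work, avoiding reliance on a single long conversation.

```python
from pathlib import Path

PROGRESS = Path("PROGRESS.md")  # hypothetical progress-file name

def record(task, done):
    # Append a checklist entry; each session adds to the same file
    # instead of depending on one conversation's context window.
    mark = "x" if done else " "
    with PROGRESS.open("a") as f:
        f.write(f"- [{mark}] {task}\n")

def pending():
    # A later session re-reads the file to find unfinished tasks.
    if not PROGRESS.exists():
        return []
    return [line[6:].strip() for line in PROGRESS.read_text().splitlines()
            if line.startswith("- [ ]")]

record("derive factorisation formula", True)
record("cross-check numerics against reference", False)
print(pending())  # ['cross-check numerics against reference']
```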

Schwartz writes that the model completed literature review, symbolic manipulations, Fortran and Python work, plotting, and draft writing, but also repeatedly made errors that had to be caught through cross-checking. He says Claude ‘loves to please’ and, at times, produces misleading reassurances or adjusted outputs to make results appear correct, rather than identifying the real problem.

Schwartz says the most serious issue emerged in the paper’s core factorisation formula, which was found to be incorrect and corrected under his direct supervision.

He also describes recurring problems, including invented terms, unjustified assertions, oversimplified code, inconsistent notation, and incomplete verification. Even so, he argues that the result stands, writing that ‘The final paper is a valuable contribution to quantum field theory.’

The acknowledgement included in the post states: ‘M.D.S. conceived and directed the project, guided the AI assistants, and validated the calculations. Claude Opus 4.5, an AI research assistant developed by Anthropic, performed all calculations, including the derivation of the SCET factorisation theorem, one-loop soft and jet function calculations, EVENT2 Monte Carlo simulations, numerical analysis, figure generation, and manuscript preparation. The work was conducted using Claude Code, Anthropic’s agentic coding tool. M.D.S. is fully responsible for the scientific content and integrity of this paper.’

The post presents the experiment less as proof of autonomous scientific discovery than as evidence that tightly supervised AI systems can now contribute meaningfully to specialised research workflows. Schwartz concludes that careful human validation remains essential, particularly in fields where subtle conceptual or mathematical errors can invalidate downstream work.

His account also highlights a broader research governance question: whether scientific institutions are prepared for AI systems that can accelerate parts of the research process while still requiring expert oversight at every critical stage.

AI-EFFECT builds EU testing facility for AI in critical energy infrastructure

As Europe moves towards its climate-neutrality goals, integrating AI into energy systems is being presented as a way to improve efficiency, resilience, and sustainability. The EU-funded AI-EFFECT project is developing a European testing and experimentation facility (TEF) to support the development and adoption of AI solutions for the energy industry while ensuring safety, reliability, and compliance with EU regulations.

The TEF is described as a virtual network linking existing laboratories and computing resources across several EU countries. It is designed to provide standardised testing environments, risk and certification workflows, and replicable methods for developing, testing, and validating AI applications for critical energy infrastructures under diverse, real-world conditions.

The facility operates through four national nodes in Denmark, Germany, the Netherlands, and Portugal, each focused on a different set of energy challenges. In Denmark, the node led by the Technical University of Denmark is testing AI in virtual and physical multi-energy systems, including coordination between electric power grid operations and district heating systems in the Triangle Region in Jutland and on the island of Bornholm.

In the Netherlands, the node at Delft University of Technology is extending the university’s ‘control room of the future’ with AI capabilities to address grid congestion as renewable generation increases.

In Portugal, the node led by INESC TEC is developing a trusted local energy data space intended to address privacy concerns and connectivity gaps through secure, consent-based energy data sharing. The AI-EFFECT project says consumers and prosumers will be able to manage data rights and permissions in line with EU regulations while working with AI-driven service providers on co-creation and testing.

In Germany, the Fraunhofer-led node is focused on AI for power distribution systems and is developing a near-realistic cyber-physical model to benchmark AI performance in congestion management and distributed energy resource integration against traditional engineering approaches.

Alberto Dognini, project coordinator at EPRI Europe in Ireland, wrote in an Enlit news item: ‘Together, these four nodes form the backbone of AI-EFFECT’s mission to make AI a trusted partner in Europe’s energy transition.’ He added: ‘From optimising multi-energy systems to enabling secure data sharing and improving grid resilience, these nodes will accelerate innovation while reducing risk for operators and consumers alike.’

AI-EFFECT is also sharing its work through public-facing initiatives, including the EPRI Current Podcast. In the episode ‘Exploring the AI-EFFECT on Europe’s Energy Future’, participants discuss the architecture and building blocks supporting distributed nodes across multiple countries and examine how the TEF could shape the future of Europe’s energy systems.

Anthropic outlines AI agent workflows for scientific computing

Anthropic has published a post describing how AI agents can be used in multi-day coding workflows for well-scoped, measurable scientific computing tasks that do not require constant human supervision. In the article, Anthropic researcher Siddharth Mishra-Sharma explains how tools such as progress files, test oracles, and orchestration methods can be used to manage long-running software work.

Mishra-Sharma writes that many scientists still use AI agents in a tightly managed conversational loop, while newer models are enabling the assignment of high-level goals and allowing agents to work more autonomously over longer periods. He says this approach can be useful for tasks such as reimplementing numerical solvers, converting legacy scientific software, and debugging large codebases against reference implementations.

As a case study, the Anthropic post describes using Claude Opus 4.6 to implement a differentiable cosmological Boltzmann solver in JAX. Boltzmann solvers such as CLASS and CAMB are used in cosmology to model the Cosmic Microwave Background and support the analysis of survey data. According to the post, a differentiable implementation can support gradient-based inference methods while also benefiting from automatic differentiation and compatibility with accelerators such as GPUs.

The post says the project required a different workflow from Anthropic’s earlier C compiler experiment because a Boltzmann solver is a tightly coupled numerical pipeline in which small errors can affect downstream outputs. Rather than relying mainly on parallel agents, Mishra-Sharma writes that this kind of task may be better suited to a single agent working sequentially, while using subagents when needed and comparing results against a reference implementation.

To manage long-running work, the article recommends keeping project instructions in a root-level ‘CLAUDE.md’ file and maintaining a ‘CHANGELOG.md’ file as portable long-term memory. It also highlights the importance of a test oracle, such as a reference implementation or existing test suite, so that AI agents can measure whether they are making progress and avoid repeating failed approaches.
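The test-oracle idea can be shown with a toy pair of implementations. In this sketch, a closed-form function plays the trusted reference and a numerical integration plays the agent-produced candidate; both are illustrative stand-ins, not cosmology code, but the pass/fail check works the same way.

```python
def reference_growth(z):
    # Trusted reference implementation (a closed-form stand-in here).
    return 1.0 / (1.0 + z)

def candidate_growth(z, steps=1000):
    # Candidate reimplementation being validated: numerically integrates
    # the derivative -1/(1+x)^2 from 0 to z (trapezoidal rule), then adds
    # the boundary value at z = 0.
    h = z / steps
    f = lambda x: -1.0 / (1.0 + x) ** 2
    total = (f(0.0) + f(z)) / 2.0 + sum(f(i * h) for i in range(1, steps))
    return 1.0 + total * h

def oracle_check(grid, rtol=1e-2):
    # Test oracle: worst relative deviation from the reference on a grid;
    # an agent can rerun this after each change to measure progress.
    worst = max(abs(candidate_growth(z) - reference_growth(z))
                / abs(reference_growth(z)) for z in grid)
    assert worst < rtol, f"disagreement {worst:.2e} exceeds {rtol}"
    return worst

print(oracle_check([0.5, 1.0, 2.0, 5.0]))  # prints the worst relative error
```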

The Anthropic post also presents Git as a coordination tool, recommending that the agent commit and push after every meaningful unit of work and run tests before each commit. For execution, Mishra-Sharma describes running Claude Code inside a tmux session on an HPC cluster using the SLURM scheduler, allowing the agent to continue working across multiple sessions with periodic human check-ins.

One orchestration method described in the article is the ‘Ralph loop,’ which prompts the agent to continue working until a stated success criterion is met. Mishra-Sharma writes that this kind of scaffolding can still help when models stop early or fail to complete all parts of a complex task, even as they become more capable overall.
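The loop itself is simple to sketch. In the illustration below, `agent_step` and the success criterion are toy stand-ins (a real loop would re-invoke the coding agent with the same goal prompt); the structure simply re-runs the agent until the stated criterion holds or an iteration budget is exhausted.

```python
def ralph_loop(agent_step, success, max_iters=20):
    # Loop scaffold in the spirit of the post's "Ralph loop": keep
    # re-running the agent until a stated success criterion is met.
    for i in range(1, max_iters + 1):
        state = agent_step()
        if success(state):
            return state, i
    raise RuntimeError("success criterion not met within budget")

# Toy stand-in: the "agent" fixes one failing test per invocation.
failing = [3, 2, 1]
def agent_step():
    if failing:
        failing.pop()
    return len(failing)

state, iters = ralph_loop(agent_step, success=lambda n: n == 0)
print(state, iters)  # 0 3
```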

According to the post, Anthropic’s Claude worked on the solver project over several days and reached sub-percent agreement with the reference CLASS implementation across several outputs. At the same time, Mishra-Sharma notes that the system had limitations, including gaps in test coverage and mistakes that a domain expert might have identified more quickly. He writes that the resulting solver is ‘not production-grade’ and ‘doesn’t match the reference CLASS implementation to an acceptable accuracy in every regime’.
