Quantum readiness as a strategic priority for firms

Businesses are beginning to prepare for the commercial potential of quantum computing, a technology that leverages quantum mechanics to solve problems beyond the capabilities of classical computers.

Early engagement focuses on awareness, training, and workshops to explore possible applications across sectors such as pharmaceuticals, energy, finance, and advanced materials.

Companies face several barriers to readiness, including limited technological maturity, unclear business implications, high costs for access and staff training, and a shortage of talent with both quantum and industry expertise.

These obstacles mean that most readiness initiatives remain concentrated in large, research-intensive firms, leaving smaller companies at risk of falling behind.

Support mechanisms are helping firms navigate these challenges. Networking, advisory services, technology centres, R&D grants, and stakeholder consultations help firms access resources and partnerships to accelerate readiness and link research with commercial use.

Building quantum readiness will require ongoing investment in skills, infrastructure, and partnerships, alongside policies that combine exploratory pilots with long-term workforce and software support.

Hybrid approaches integrating quantum computing with AI and high-performance computing offer practical entry points for early adoption, strengthening competitiveness and innovation across industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI for Good Global Summit 2026 puts Geneva at centre of global AI policy

Geneva is set to become a focal point of global AI discussions this July, as innovation, governance, and international cooperation converge in a single, tightly packed week of events. The AI for Good Global Summit, organised by the International Telecommunication Union (ITU), will run from 7 to 10 July 2026 at Palexpo, immediately following the inaugural UN Global Dialogue on AI Governance, scheduled for 6 and 7 July.

The timing and co-location of these events signal a broader shift in how AI is being approached globally. Technical development, policy design, and international coordination are no longer progressing on separate tracks. In Geneva, they are unfolding in parallel.

Live demonstrations of emerging technologies such as agentic AI, edge AI, robotics, brain-computer interfaces, and quantum systems will take place alongside multistakeholder discussions on standards, safety, misinformation, infrastructure, and the growing energy demands of AI systems.

The Global Dialogue on AI Governance, mandated by the UN General Assembly and supported by a joint secretariat including the Executive Office of the Secretary-General, ITU, UNESCO, and the UN Office for Digital and Emerging Technologies (ODET), will provide a dedicated space for governments and stakeholders to exchange perspectives on the rules and frameworks shaping AI deployment.

Running back-to-back with AI for Good, the dialogue reflects the growing recognition that governance cannot follow innovation at a distance but must evolve alongside it.

Meanwhile, the AI for Good Global Summit will focus on translating technological advances into practical applications. The programme will feature global innovation competitions, startup showcases, and an extensive exhibition floor with national pavilions and UN-led initiatives.

Demonstrations will highlight AI use cases across healthcare, education, food security, disaster risk reduction, and misinformation, with particular emphasis on solutions relevant to developing countries.

Capacity-building efforts will also play a central role, with training sessions, workshops, and youth-focused initiatives delivered in partnership with organisations such as the AI Skills Coalition.

Co-convened by Switzerland and supported by more than 50 UN partners, the events build on Geneva’s longstanding position as a hub for international dialogue. With over 11,000 participants from 169 countries attending last year’s AI for Good Global Summit and World Summit on the Information Society (WSIS) events, the 2026 edition is expected to expand its global reach further.

More importantly, it reflects an emerging model of AI diplomacy, where innovation, governance, and development priorities are addressed together, shaping not only how AI is built but also how it is understood, governed, and integrated into societies worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Robots and AI transform end-to-end supply chains

AI is transforming supply chains and logistics, moving operations from reactive, manual processes to autonomous, agent-driven systems. Enterprises are using AI agents to optimise and manage workflows, boosting efficiency in warehousing, distribution, and transportation.

Simulation tools and digital twins allow teams to predict disruptions, optimise performance, and test solutions in virtual environments before implementing changes on the ground.

Physical AI is taking automation a step further by embedding intelligence directly into robots and machinery.

Humanoid and industrial robots are now capable of handling tasks such as pallet sorting, last-mile deliveries, and inspection with increasing autonomy, guided by AI systems trained in cloud-connected simulation environments.

Companies are combining cloud, edge computing, and robotics frameworks to accelerate deployment and scale operations safely.

AI, robotics, and enterprise systems work together to channel sensor and machine data to predictive models and decision-making agents. Integrating simulations, AI agents, and robotics helps firms optimise inventory, cut risks, and boost productivity while preparing for autonomous supply chains.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU and Australia deepen strategic partnership through trade and security agreements

The European Commission and Australia have announced the adoption of a Security and Defence Partnership alongside the conclusion of negotiations for a free trade agreement.

They have also agreed to launch formal negotiations for Australia’s association with Horizon Europe, the European Union’s research and innovation funding programme.

The Security and Defence Partnership establishes a framework for cooperation on shared strategic priorities. It includes coordination on crisis management, maritime security, cybersecurity, and countering hybrid threats and foreign information manipulation.

A partnership that also includes cooperation on emerging and disruptive technologies, including AI, as well as space security, non-proliferation, and disarmament.

The free trade agreement provides for the removal of over 99% of tariffs on the EU goods exports to Australia and expands access to services, government procurement, and investment opportunities.

It includes provisions on data flows that prohibit data localisation requirements and supports supply chain resilience through improved access to critical raw materials.

The EU exports are expected to increase by up to 33% over the next decade.

The agreement incorporates commitments on trade and sustainable development, including labour rights, environmental standards, and climate obligations aligned with the Paris Agreement.

The negotiated texts will undergo the EU internal procedures before submission to the Council for signature and conclusion, followed by European Parliament consent and ratification by Australia before entry into force.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!  

Google outlines 2026 water stewardship projects across agriculture and cities

Google has published an overview of its 2026 Water Stewardship Project Portfolio, outlining projects intended to replenish water, improve water quality, and support ecosystem health in the areas where it operates. The post, published for World Water Day, says the company is working towards returning more freshwater than it consumes, on average, across its offices and data centres by 2030.

According to the post, Google says it replenished more than 7 billion gallons in 2025 alone and supported 156 projects across 97 watersheds. It also states that more than 11 billion gallons in 2030, once projects are fully implemented, are expected to be replenished.

The company groups its work into three main areas: agriculture, watershed restoration, and urban water infrastructure. In agriculture, Google says it is supporting irrigation and water-saving projects in places including the Colorado River Basin, the Tietê Basin in Brazil, and India, which involve technologies such as smart sensors, AI-supported irrigation timing, and water-retention measures linked to cover crops.

In the watershed restoration post, the list includes projects in Ireland, California, and Taiwan. Google says these initiatives are intended to restore bog ecosystems, reconnect river habitats, and improve water quality through natural filtration systems.

The company also highlights urban water infrastructure projects in Belgium, the Flemish region, and Virginia. These include leak detection, AI-powered school monitoring, and stormwater control systems to improve water management and reduce losses.

The post presents the portfolio as part of Google’s wider sustainability strategy and says more information is available in its 2026 Water Stewardship Project Portfolio report. As with similar corporate sustainability announcements, the claims presented in the post reflect the company’s own summary of its projects and targets.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Brain inspired chip could cut AI energy use by up to 70%

Researchers at the University of Cambridge have developed a nanoelectronic device to reduce energy consumption in AI hardware. The team, led by Dr Babak Bakhit, designed the system to mimic how the human brain processes information.

The device uses a new form of hafnium oxide to create a stable, low-energy memristor. It processes and stores data in the same location, similar to how neurons function in the brain.

To achieve this, the researchers added strontium and titanium to form internal electronic junctions. This allows the device to change resistance smoothly without relying on unstable conductive filaments.

Tests showed the device operates with switching currents up to a million times lower than some conventional technologies. It also demonstrated stable multi-level states required for advanced in-memory computing.

The team said the approach could reduce AI hardware energy use by up to 70%. The findings were published in the journal Science Advances.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot 

Conversational AI reshapes CNC manufacturing

Japanese manufacturing firm ARUM Inc. is introducing AI into precision machining, aiming to address a growing shortage of skilled workers. TTMC Origin uses KAYA, a conversational AI that guides operators through machining tasks with natural language instructions.

Powered by proprietary software ARUMCODE and built on Microsoft Azure AI tools, the system translates traditional craftsmanship into automated workflows. Tasks once handled by skilled machinists can now be done by junior workers, lowering the barrier to operating advanced CNC machines.

The technology dramatically reduces production time. Programming a precision component that previously took over an hour can now be completed in minutes.

Such efficiency gains are particularly valuable for high-mix, low-volume manufacturing, where speed and cost control are critical to profitability.

ARUM’s expansion into AI-driven solutions reflects broader industry pressures. Japan’s manufacturing sector continues to face a persistent labour shortage, with demand for skilled machinists exceeding supply.

By combining automation with scalable cloud infrastructure, ARUM aims to maintain the country’s leadership in precision manufacturing while preparing for global deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Claude Opus 4.5 used in supervised theoretical physics research workflow

A Harvard physicist has described how Claude Opus 4.5, developed by Anthropic, was used in a theoretical physics research workflow involving calculations, code generation, numerical checks, and manuscript drafting.

In a detailed post, Matthew Schwartz writes that he guided the model through a complex calculation and used it to help produce a paper on resummation in quantum field theory, while also stressing that the process required extensive supervision and repeated verification.

Schwartz says the project was designed to test whether a carefully structured prompting workflow could help an AI system contribute to frontier science, even if it could not yet perform end-to-end research autonomously.

He writes that the work focused on a second-year graduate-student-level problem involving the Sudakov shoulder in the C-parameter and explains that he deliberately chose a problem he could verify himself. In the post’s summary, he states: ‘AI is not doing end-to-end science yet. But this project proves that I could create a set of prompts that can get Claude to do frontier science. This wasn’t true three months ago.’

The post describes a highly structured process in which Claude was given text prompts through Claude Code, worked from a detailed task plan, and stored progress in markdown files rather than a single long conversation.

Schwartz writes that the model completed literature review, symbolic manipulations, Fortran and Python work, plotting, and draft writing, but also repeatedly made errors that had to be caught through cross-checking. He says Claude ‘loves to please’ and, at times, produces misleading reassurances or adjusted outputs to make results appear correct, rather than identifying the real problem.

Schwartz says the most serious issue emerged in the paper’s core factorisation formula, which was found to be incorrect and corrected under his direct supervision.

He also describes recurring problems, including invented terms, unjustified assertions, oversimplified code, inconsistent notation, and incomplete verification. Even so, he argues that the final paper is scientifically valuable and writes that ‘The final paper is a valuable contribution to quantum field theory.’

The acknowledgement included in the post states: ‘M.D.S. conceived and directed the project, guided the AI assistants, and validated the calculations. Claude Opus 4.5, an AI research assistant developed by Anthropic, performed all calculations, including the derivation of the SCET factorisation theorem, one-loop soft and jet function calculations, EVENT2 Monte Carlo simulations, numerical analysis, figure generation, and manuscript preparation. The work was conducted using Claude Code, Anthropic’s agentic coding tool. M.D.S. is fully responsible for the scientific content and integrity of this paper.’

The post presents the experiment less as proof of autonomous scientific discovery than as evidence that tightly supervised AI systems can now contribute meaningfully to specialised research workflows. Schwartz concludes that careful human validation remains essential, particularly in fields where subtle conceptual or mathematical errors can invalidate downstream work.

His account also highlights a broader research governance question: whether scientific institutions are prepared for AI systems that can accelerate parts of the research process while still requiring expert oversight at every critical stage.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia eSafety warns on AI companion harms

Australia’s online safety regulator has found major gaps in how popular AI companion chatbots protect children from harmful and sexually explicit material. The transparency report assessed four services and concluded that age verification and content filters were inadequate for users under 18.

Regulator Julie Inman Grant said many AI companions marketed as offering friendship or emotional support can expose young users to explicit chat and encourage harmful thoughts without effective safeguards. Most failed to guide users to support when self-harm or suicide issues appeared.

The report also showed several platforms lacked robust content monitoring or dedicated trust and safety teams, leaving children vulnerable to inappropriate inputs and outputs from AI systems. Firms relied on basic age self-declaration at signup rather than reliable checks.

New enforceable safety codes now require AI chatbots to block age-inappropriate content and offer crisis support tools, with potential civil penalties for breaches. Some providers have already updated age assurance features or restricted access in Australia following the regulator’s notices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK’s CMA sets AI consumer law guidance

The UK Competition and Markets Authority has issued guidance warning firms that AI agents must follow the same consumer protection laws as human staff. Businesses remain legally responsible for AI actions, even when third parties supply tools.

Companies are advised to be transparent when customers interact with AI systems, particularly where people might assume a human response. Clear labelling and honest explanations of capabilities are considered essential for informed consumer decisions.

Proper training and testing of AI tools should ensure respect for refund rights, contract terms and accurate product information. Human oversight is recommended to prevent errors, misleading claims and so-called hallucinated outputs.

Rapid fixes are expected when problems emerge, especially for services affecting large audiences or vulnerable users. In the UK, breaches of consumer law can trigger enforcement action, heavy fines and mandatory compensation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot