UAE executes first government payment using Digital Dirham

The United Arab Emirates has completed its first government financial transaction using the Digital Dirham, marking a significant milestone in its transition towards a fully digital economy.

The Ministry of Finance and Dubai Finance carried out the transaction in collaboration with the Central Bank of the UAE, confirming the country’s leadership in advancing next-generation financial technologies.

Part of the Central Bank’s Financial Infrastructure Transformation Programme, the pilot phase of the Digital Dirham aims to accelerate digital payment adoption and strengthen the UAE’s position as a global financial hub.

Senior officials, including Sheikh Mansour bin Zayed Al Nahyan and Sheikh Maktoum bin Mohammed bin Rashid Al Maktoum, described the initiative as a strategic step toward improving transparency, efficiency, and integration across government financial systems.

The first pilot transaction was executed through the government payments platform mBridge, which facilitates instant settlements using central bank digital currencies.

The transaction was completed in under two minutes, demonstrating the system’s technical efficiency and reliability. The mBridge platform, fully integrated with the Digital Dirham initiative, enables secure, intermediary-free settlements, reducing costs while improving accuracy and transparency.

Officials emphasised that the Digital Dirham will serve as a cornerstone for a sustainable digital economy, reinforcing national financial stability and global competitiveness.

The initiative reflects the UAE’s commitment to adopting cutting-edge technologies that promote integration and innovation across the public and private sectors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU, UK and Australian regulators lead the global push to protect children in the digital world

Children today spend a significant amount of their time online, from learning and playing to communicating.

To protect them in an increasingly digital world, Australia’s eSafety Commissioner, the European Commission’s DG CNECT, and the UK’s Ofcom have joined forces to strengthen global cooperation on child online safety.

The partnership aims to ensure that online platforms take greater responsibility for protecting and empowering children, recognising their rights under the UN Convention on the Rights of the Child.

The three regulators will continue to enforce their online safety laws to ensure platforms properly assess and mitigate risks to children. They will promote privacy-preserving age verification technologies and collaborate with civil society and academics to ensure that regulations reflect real-world challenges.

By supporting digital literacy and critical thinking, they aim to provide children and families with safer and more confident online experiences.

To advance the work, a new trilateral technical group will be established to deepen collaboration on age assurance. It will study the interoperability and reliability of such systems, explore the latest technologies, and strengthen the evidence base for regulatory action.

Through closer cooperation, the regulators hope to create a more secure and empowering digital environment for young people worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic pledges $50 billion to expand US AI infrastructure

US AI safety and research company Anthropic has announced a $50 billion investment to expand AI computing infrastructure in the United States, partnering with Fluidstack to build data centres in Texas and New York, with additional sites planned.

These facilities are designed to optimise efficiency for Anthropic’s workloads, supporting frontier research and development in AI.

The project is expected to generate approximately 800 permanent jobs and 2,400 construction positions as sites come online throughout 2026.

The investment aligns with the Trump administration’s AI Action Plan, which aims to maintain US leadership in AI while strengthening domestic technology infrastructure and competitiveness.

Dario Amodei, CEO and co-founder of Anthropic, highlighted the importance of such infrastructure in developing AI systems capable of accelerating scientific discovery and solving complex problems.

The company serves over 300,000 business customers, with a sevenfold growth in large accounts over the past year, demonstrating strong market demand for its Claude AI platform.

Fluidstack was selected as Anthropic’s partner for its agility in rapidly deploying high-capacity infrastructure. The collaboration aims to provide cost-effective and capital-efficient solutions to meet growing demand, ensuring that research and development remain at the forefront of AI innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta expands AI infrastructure with $1 billion sustainable facility

The US tech giant, Meta, has announced the construction of its 30th data centre in Beaver Dam, Wisconsin, a $1 billion investment that will power the company’s growing AI infrastructure while benefiting the local community and environment.

The facility, designed to support Meta’s most demanding AI workloads, will run entirely on clean energy and create more than 100 permanent jobs alongside 1,000 construction roles.

The company will invest nearly $200 million in energy infrastructure and donate $15 million to Alliant Energy’s Hometown Care Energy Fund to assist families with home energy costs.

Meta will also launch community grants to fund schools and local organisations, strengthening technology education and digital skills while helping small businesses use AI tools more effectively.

Environmental responsibility remains central to the project. The data centre will use dry cooling, eliminating water demand during operation, and will restore 100% of the water it consumes to local watersheds.

In partnership with Ducks Unlimited, Meta will revitalise 570 acres of wetlands and prairie, transforming degraded habitats into thriving ecosystems. The facility is expected to achieve LEED Gold Certification, reflecting Meta’s ongoing commitment to sustainability and community-focused innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Police warn of scammers posing as AFP officers in crypto fraud

Cybercriminals are exploiting Australia’s national cybercrime reporting platform, ReportCyber, to trick people into handing over cryptocurrency. The AFP-led Joint Policing Cybercrime Coordination Centre (JPC3) warns that scammers are posing as police and using stolen data to file fake reports.

In one recent case, a victim was contacted by someone posing as an AFP officer and informed that their details had been found in a data breach linked to cryptocurrency. The impersonator provided an official reference number, which appeared genuine when checked on the ReportCyber portal.

A second caller, pretending to be from a crypto platform, then urged the target to transfer funds to a so-called ‘Cold Storage’ account. The victim realised the deception and ended the call before losing money.

Detective Superintendent Marie Andersson said the scam’s sophistication lay in its false sense of legitimacy and urgency. Criminals verify personal data and act quickly to pressure victims, she explained. However, growing awareness within the community has helped authorities detect such scams sooner.

Authorities are reminding the public that legitimate officers will never request access to wallets, bank accounts, or seed phrases. Australians should remain cautious, verify unexpected calls, and report any suspicious activity through official channels.

The AFP reaffirmed that ReportCyber remains a safe platform for genuine reports and continues to be a vital tool in tracking and preventing cybercrime nationwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK moves to curb AI-generated child abuse imagery with pre-release testing

The UK government plans to let approved organisations test AI models before release to ensure they cannot generate child sexual abuse material. The amendment to the Crime and Policing Bill aims to build safeguards into AI tools at the design stage rather than after deployment.

The Internet Watch Foundation reported 426 AI-related abuse cases this year, up from 199 in 2024. Chief Executive Kerry Smith said the move could make AI products safer before they are launched. The proposal also extends to detecting extreme pornography and non-consensual intimate images.

The NSPCC’s Rani Govender welcomed the reform but said testing should be mandatory to make child safety part of product design. Earlier this year, the Home Office introduced new offences for creating or distributing AI tools used to produce abusive imagery, punishable by up to five years in prison.

Technology Secretary Liz Kendall said the law would ensure that trusted groups can verify the safety of AI systems, while Safeguarding Minister Jess Phillips said it would help prevent predators from exploiting legitimate tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI system tracks tsunami through atmospheric ripples

Scientists have successfully tracked a tsunami in real time using ripples in Earth’s atmosphere for the first time.

The breakthrough came after a powerful 8.8 magnitude earthquake struck off Russia’s Kamchatka Peninsula in July 2025, sending waves racing across the Pacific and triggering NASA’s newly upgraded Guardian monitoring system.

Guardian uses AI to detect disruptions in satellite navigation signals caused by atmospheric ripples above the ocean.

These signals revealed the formation and movement of tsunami waves, allowing alerts to be issued up to 40 minutes before they reached Hawaii, potentially giving communities vital time to respond.
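
To make the detection idea concrete, here is a minimal Python sketch of the underlying signal-processing intuition: tsunami-driven ripples in the ionosphere perturb quantities derived from GNSS signals, such as total electron content (TEC), and sudden deviations from a rolling baseline can be flagged as anomalies. The synthetic data, function name, and simple z-score detector below are illustrative assumptions only; NASA’s Guardian applies trained AI models to live satellite navigation data, not this toy statistic.

```python
import numpy as np

def detect_tec_anomalies(tec, window=60, threshold=4.0):
    """Flag samples where TEC deviates sharply from a sliding baseline.

    tec: 1-D array of total electron content readings (one per second).
    window: baseline length in samples; threshold: z-score cutoff.
    Illustrative only: a real system would use far more robust models.
    """
    flags = np.zeros(len(tec), dtype=bool)
    for i in range(window, len(tec)):
        baseline = tec[i - window:i]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and abs(tec[i] - mu) / sigma > threshold:
            flags[i] = True
    return flags

# Synthetic demo: a quiet ionosphere plus a tsunami-like ripple at t = 300 s.
rng = np.random.default_rng(0)
t = np.arange(600)
tec = 10 + 0.01 * t + 0.05 * rng.standard_normal(600)
tec[300:360] += 0.6 * np.sin(2 * np.pi * (t[300:360] - 300) / 60)
print("first anomaly at t =", np.argmax(detect_tec_anomalies(tec)), "s")
```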

Researchers say the innovation could transform global disaster monitoring by enabling earlier warnings for tsunamis, volcanic eruptions, and even nuclear tests.

Although the system is still in development, scientists in Europe are working on similar models that could expand coverage and provide life-saving alerts to remote coastal regions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU and Switzerland deepen research ties through Horizon Europe agreement

Switzerland has formally joined Horizon Europe, the EU’s flagship research and innovation programme, together with Digital Europe and the Euratom Research and Training Programme.

The agreement, signed in Bern by Commissioner Ekaterina Zaharieva and Federal Councillor Guy Parmelin, grants Swiss researchers the same status as their EU counterparts.

They can now lead projects, receive EU funding, and access every thematic pillar, reinforcing cross-border collaboration in fields such as climate technology, digital transformation, and energy security.

The accord, effective from 1 January 2025, also enables Switzerland to become a member of Fusion for Energy in 2026, thereby integrating its researchers into ITER, the world’s largest fusion energy initiative.

Plans include Swiss participation in Erasmus+ from 2027 and in the EU4Health programme once a separate health agreement takes effect.

The development forms part of a broader package designed to deepen EU–Swiss relations and modernise cooperation frameworks across science, technology, and education.

The European Commission reaffirmed its commitment to finalising ratification of all related agreements, ensuring long-term collaboration and strengthening Europe’s position as a global leader in innovation and research.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

€5.5bn Google plan expands German data centres, carbon-free power and skills programmes

Google will invest €5.5bn in Germany from 2026 to 2029, adding a Dietzenbach data centre and expanding its Hanau facility. It will also expand offices in Berlin, Frankfurt, and Munich, launch skilling programmes, and start its first German heat-recovery project. Estimated impact: ~€1.016bn in GDP and ~9,000 jobs annually.

Dietzenbach will strengthen German cloud regions within Google’s 42-region network, used by firms such as Mercedes-Benz. Google Cloud highlights Vertex AI, Gemini, and sovereign options for local compliance. Continued Hanau investment supports low-latency AI workloads.

Google and Engie will extend 24/7 Carbon-Free Energy in Germany through 2030, adding new wind and solar. The portfolio will be optimised with storage and Ørsted’s Borkum Riffgrund 3. Operations are projected to be 85% carbon-free in 2026.

A partnership with Energieversorgung Offenbach will feed excess data centre heat into Dietzenbach’s district heating network, serving over 2,000 households. Water work includes wetland protection with NABU in Hesse’s Büttelborn Bruchwiesen. Google reiterates its 24/7 carbon-free goal.

Office expansion includes Munich’s Arnulfpost for up to 2,000 staff, Frankfurt’s Global Tower space, and additional floors in Berlin. Local partnerships will fund digital skills and STEM programmes. Officials and customers welcomed the move for its benefits to infrastructure, sovereignty, and innovation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The AI soldier and the ethics of war

The rise of the machine soldier

For decades, Western militaries have led technological revolutions on the battlefield. From bows to tanks to drones, technological innovation has disrupted and redefined warfare, for better or worse. However, the next evolution is not about weapons; it is about the soldier.

New AI-integrated systems such as Anduril’s EagleEye Helmet are transforming troops into data-driven nodes, capable of perceiving and responding with machine precision. This fusion of human and algorithmic capabilities is blurring the boundary between human roles and machine learning, redefining what it means to fight and to feel in war.

Today’s ‘AI soldiers’ are more than just enhanced. They are networked, monitored, and optimised. Soldiers now have 3D optical displays that give them a god’s-eye view of combat, while real-time ‘guardian angel’ systems make decisions faster than any human brain can process.

Yet in this pursuit of efficiency, the soldier’s humanity and the rules-based order of war risk being sidelined in favour of computational power.

From soldier to avatar

In the emerging AI battlefield, the soldier increasingly resembles a character in a first-person shooter video game. There is an eerie overlap between AI soldier systems and the interface of video games, like Metal Gear Solid, where augmented players blend technology, violence, and moral ambiguity. The more intuitive and immersive the tech becomes, the easier it is to forget that killing is not a simulation.

By framing war through a heads-up display, AI gives troops an almost cinematic sense of control, and in turn, a detachment from their humanity, emotions, and the physical toll of killing. Soldiers with AI-enhanced senses operate through layers of mediated perception, acting on algorithmic prompts rather than their own moral intuition. When soldiers view the world through the lens of a machine, they risk feeling less like humans and more like avatars, designed to win, not to weigh the cost.

The integration of generative AI into national defence systems creates vulnerabilities, ranging from hacking decision-making systems to misaligned AI agents capable of escalating conflicts without human oversight. Ironically, the same guardrails that prevent civilian AI from encouraging violence cannot apply to systems built for lethal missions.

The ethical cost

Generative AI has redefined the nature of warfare, introducing lethal autonomy that challenges the very notion of ethics in combat. In theory, AI systems can uphold Western values and ethical principles, but in practice, the line between assistance and automation is dangerously thin.

When militaries walk this line, outsourcing their decision-making to neural networks, accountability becomes blurred. Without basic principles and mechanisms of accountability in warfare, states risk eroding the very foundation of the rules-based order. AI may evolve the battlefield, but at the cost of diplomatic solutions and compliance with international law.

AI does not experience fear, hesitation, or empathy, the very qualities that restrain human cruelty. By building systems that increase efficiency and reduce the soldier’s workload through automated targeting and route planning, we risk erasing the psychological distinction that once separated human war from machine-enabled extermination. Ethics, in this new battlescape, become just another setting in the AI control panel. 

The new war industry 

The defence sector is not merely adapting to AI. It is being rebuilt around it. Anduril, Palantir, and other defence tech corporations now compete with traditional military contractors by promising faster innovation through software.

As Anduril’s founder, Palmer Luckey, puts it, the goal is not to give soldiers a tool, but ‘a new teammate.’ The phrasing is telling, as it shifts the moral axis of warfare from command to collaboration between humans and machines.

The human-machine partnership built for lethality suggests that the military-industrial complex is evolving into a military-intelligence complex, where data is the new weapon, and human experience is just another metric to optimise.

The future battlefield 

If the past century’s wars were fought with machines, the next will likely be fought through them. Soldiers are becoming both operators and operated, a shift that promises efficiency in war but comes at the cost of human empathy.

When soldiers see through AI’s lens, feel through sensors, and act through algorithms, they stop being fully human combatants and start becoming playable characters in a geopolitical simulation. The question is not whether this future is coming; it is already here. 

There is a clear policy path forward, as states remain tethered to their international obligations. Before AI blurs the line between soldier and system, international law could enshrine a human-in-the-loop requirement for all lethal actions, while defence firms are compelled to maintain high ethical transparency standards.

The question now is whether humanity can still recognise itself once war feels like a game, or whether, without safeguards, it will remain present in war at all.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!