Meta expands AI infrastructure with $1 billion sustainable facility

US tech giant Meta has announced the construction of its 30th data centre in Beaver Dam, Wisconsin, a $1 billion investment that will power the company’s growing AI infrastructure while benefiting the local community and environment.

The facility, designed to support Meta’s most demanding AI workloads, will run entirely on clean energy and create more than 100 permanent jobs, alongside around 1,000 construction roles.

The company will invest nearly $200 million in energy infrastructure and donate $15 million to Alliant Energy’s Hometown Care Energy Fund to assist families with home energy costs.

Meta will also launch community grants to fund schools and local organisations, strengthening technology education and digital skills while helping small businesses use AI tools more effectively.

Environmental responsibility remains central to the project. The data centre will use dry cooling, eliminating water demand during operation, and Meta will restore 100% of the water the site consumes to local watersheds.

In partnership with Ducks Unlimited, Meta will revitalise 570 acres of wetlands and prairie, transforming degraded habitats into thriving ecosystems. The facility is expected to achieve LEED Gold Certification, reflecting Meta’s ongoing commitment to sustainability and community-focused innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Police warn of scammers posing as AFP officers in crypto fraud

Cybercriminals are exploiting Australia’s national cybercrime reporting platform, ReportCyber, to trick people into handing over cryptocurrency. The AFP-led Joint Policing Cybercrime Coordination Centre (JPC3) warns that scammers are posing as police and using stolen data to file fake reports.

In one recent case, a victim was contacted by someone posing as an AFP officer and informed that their details had been found in a data breach linked to cryptocurrency. The impersonator provided an official reference number, which appeared genuine when checked on the ReportCyber portal.

A second caller, pretending to be from a crypto platform, then urged the target to transfer funds to a so-called ‘Cold Storage’ account. The victim realised the deception and ended the call before losing money.

Detective Superintendent Marie Andersson said the scam’s sophistication lay in its false sense of legitimacy and urgency. Criminals verify personal data and act quickly to pressure victims, she explained. However, growing awareness within the community has helped authorities detect such scams sooner.

Authorities are reminding the public that legitimate officers will never request access to wallets, bank accounts, or seed phrases. Australians should remain cautious, verify unexpected calls, and report any suspicious activity through official channels.

The AFP reaffirmed that ReportCyber remains a safe platform for genuine reports and continues to be a vital tool in tracking and preventing cybercrime nationwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK moves to curb AI-generated child abuse imagery with pre-release testing

The UK government plans to let approved organisations test AI models before release to ensure they cannot generate child sexual abuse material. The amendment to the Crime and Policing Bill aims to build safeguards into AI tools at the design stage rather than after deployment.

The Internet Watch Foundation reported 426 AI-related abuse cases this year, up from 199 in 2024. Chief Executive Kerry Smith said the move could make AI products safer before they are launched. The proposal also extends to detecting extreme pornography and non-consensual intimate images.

The NSPCC’s Rani Govender welcomed the reform but said testing should be mandatory to make child safety part of product design. Earlier this year, the Home Office introduced new offences for creating or distributing AI tools used to produce abusive imagery, punishable by up to five years in prison.

Technology Secretary Liz Kendall said the law would ensure that trusted groups can verify the safety of AI systems, while Safeguarding Minister Jess Phillips said it would help prevent predators from exploiting legitimate tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI system tracks tsunami through atmospheric ripples

For the first time, scientists have tracked a tsunami in real time using ripples in Earth’s atmosphere.

The breakthrough came after a powerful magnitude-8.8 earthquake struck off Russia’s Kamchatka Peninsula in July 2025, sending waves racing across the Pacific and triggering NASA’s newly upgraded Guardian monitoring system.

Guardian uses AI to detect disruptions in satellite navigation signals caused by atmospheric ripples above the ocean.

These signals revealed the formation and movement of tsunami waves, allowing alerts to be issued up to 40 minutes before they reached Hawaii, potentially giving communities vital time to respond.
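The underlying detection idea can be sketched in a few lines. The Python snippet below is a minimal, hypothetical illustration, not NASA’s Guardian pipeline: it flags a wave-like disturbance in a synthetic total electron content (TEC) series, the ionospheric quantity that tsunami-driven atmospheric ripples perturb and that satellite navigation receivers can sense. The function, thresholds, and all numbers are illustrative assumptions.

```python
# Hypothetical sketch, not NASA's Guardian code: flag a tsunami-driven
# ionospheric disturbance in a GNSS-derived TEC time series.
import numpy as np

def detect_disturbance(tec, sample_rate_hz=1.0, window_s=300, z_thresh=5.0):
    """Return sample indices where detrended TEC deviates strongly from noise."""
    win = int(window_s * sample_rate_hz)
    kernel = np.ones(win) / win
    trend = np.convolve(tec, kernel, mode="same")     # slow background variation
    residual = np.asarray(tec, dtype=float) - trend
    residual[:win] = 0.0                              # drop running-mean edge artefacts
    residual[-win:] = 0.0
    med = np.median(residual)
    mad = np.median(np.abs(residual - med)) + 1e-12   # robust noise scale
    z = 0.6745 * (residual - med) / mad               # MAD-based z-score
    return np.flatnonzero(np.abs(z) > z_thresh)

# Toy usage: drifting background TEC, sensor noise, and a brief wave train.
rng = np.random.default_rng(0)
t = np.arange(3600)                                   # one hour at 1 Hz
background = 20.0 + 0.001 * t
wave = 0.5 * np.sin(2 * np.pi * t / 600) * ((t > 1800) & (t < 2400))
noise = 0.02 * rng.standard_normal(t.size)
hits = detect_disturbance(background + wave + noise)
print("first flagged sample (s):", int(hits[0]) if hits.size else None)
```

A production system works with live satellite signal streams and trained models rather than a fixed threshold, but the detrend-and-flag structure above conveys the basic idea.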

Researchers say the innovation could transform global disaster monitoring by enabling earlier warnings for tsunamis, volcanic eruptions, and even nuclear tests.

Although the system is still in development, scientists in Europe are working on similar models that could expand coverage and provide life-saving alerts to remote coastal regions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU and Switzerland deepen research ties through Horizon Europe agreement

Switzerland has formally joined Horizon Europe, the EU’s flagship research and innovation programme, together with Digital Europe and the Euratom Research and Training Programme.

The agreement, signed in Bern by Commissioner Ekaterina Zaharieva and Federal Councillor Guy Parmelin, grants Swiss researchers the same status as their EU counterparts.

They can now lead projects, receive EU funding, and access every thematic pillar, reinforcing cross-border collaboration in fields such as climate technology, digital transformation, and energy security.

The accord, effective from 1 January 2025, also enables Switzerland to become a member of Fusion for Energy in 2026, thereby integrating its researchers into ITER, the world’s largest fusion energy initiative.

Plans include Swiss participation in Erasmus+ from 2027 and in the EU4Health programme once a separate health agreement takes effect.

The development forms part of a broader package designed to deepen EU–Swiss relations and modernise cooperation frameworks across science, technology, and education.

The European Commission reaffirmed its commitment to finalising ratification of all related agreements, ensuring long-term collaboration and strengthening Europe’s position as a global leader in innovation and research.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

€5.5bn Google plan expands German data centres, carbon-free power and skills programmes

Google will invest €5.5bn in Germany from 2026 to 2029, adding a data centre in Dietzenbach and expanding its Hanau facility. It will also expand its offices in Berlin, Frankfurt, and Munich, launch skills programmes, and start its first German heat-recovery project. Google estimates the plan will contribute about €1.016bn to GDP and support roughly 9,000 jobs annually.

Dietzenbach will strengthen German cloud regions within Google’s 42-region network, used by firms such as Mercedes-Benz. Google Cloud highlights Vertex AI, Gemini, and sovereign options for local compliance. Continued Hanau investment supports low-latency AI workloads.

Google and Engie will extend 24/7 Carbon-Free Energy in Germany through 2030, adding new wind and solar. The portfolio will be optimised with storage and Ørsted’s Borkum Riffgrund 3. Operations are projected to be 85% carbon-free in 2026.

A partnership with Energieversorgung Offenbach will feed excess data centre heat into Dietzenbach’s district heating network, serving over 2,000 households. On water, Google will support wetland protection with NABU in Hesse’s Büttelborn Bruchwiesen. Google reiterates its 24/7 carbon-free goal.

Office expansion includes Munich’s Arnulfpost for up to 2,000 staff, Frankfurt’s Global Tower space, and additional floors in Berlin. Local partnerships will fund digital skills and STEM programmes. Officials and customers welcomed the move for its benefits to infrastructure, sovereignty, and innovation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The AI soldier and the ethics of war

The rise of the machine soldier

For decades, Western militaries have led technological revolutions on the battlefield. From bows to tanks to drones, technological innovation has disrupted and redefined warfare, for better or worse. However, the next evolution is not about weapons; it is about the soldier.

New AI-integrated systems such as Anduril’s EagleEye Helmet are transforming troops into data-driven nodes, capable of perceiving and responding with machine precision. This fusion of human and algorithmic capability is blurring the boundary between human judgement and machine computation, redefining what it means to fight, and to feel, in war.

Today’s ‘AI soldiers’ are more than just enhanced: they are networked, monitored, and optimised. Soldiers now have 3D optical displays that give them a god’s-eye view of combat, while real-time ‘guardian angel’ systems make decisions faster than any human brain can process.

Yet in this pursuit of efficiency, the soldier’s humanity and the rules-based order of war risk being sidelined in favour of computational power.

From soldier to avatar

In the emerging AI battlefield, the soldier increasingly resembles a character in a first-person shooter video game. There is an eerie overlap between AI soldier systems and the interface of video games, like Metal Gear Solid, where augmented players blend technology, violence, and moral ambiguity. The more intuitive and immersive the tech becomes, the easier it is to forget that killing is not a simulation.

By framing war through a heads-up display, AI gives troops an almost cinematic sense of control, and in turn, a detachment from their humanity, emotions, and the physical toll of killing. Soldiers with AI-enhanced senses operate through layers of mediated perception, acting on algorithmic prompts rather than their own moral intuition. When soldiers view the world through the lens of a machine, they risk feeling less like humans and more like avatars, designed to win, not to weigh the cost.

The integration of generative AI into national defence systems creates vulnerabilities, ranging from hacking decision-making systems to misaligned AI agents capable of escalating conflicts without human oversight. Ironically, the same guardrails that prevent civilian AI from encouraging violence cannot apply to systems built for lethal missions.

The ethical cost

Generative AI has redefined the nature of warfare, introducing lethal autonomy that challenges the very notion of ethics in combat. In theory, AI systems can uphold Western values and ethical principles, but in practice, the line between assistance and automation is dangerously thin.

When militaries walk this line and outsource decision-making to neural networks, accountability becomes blurred. Without basic principles and mechanisms of accountability in warfare, states risk the very foundation of the rules-based order. AI may evolve the battlefield, but at the cost of diplomatic solutions and compliance with international law.

AI does not experience fear, hesitation, or empathy, the very qualities that restrain human cruelty. By building systems that increase efficiency and reduce the soldier’s workload through automated targeting and route planning, we risk erasing the psychological distinction that once separated human war from machine-enabled extermination. Ethics, in this new battlescape, become just another setting in the AI control panel. 

The new war industry 

The defence sector is not merely adapting to AI. It is being rebuilt around it. Anduril, Palantir, and other defence tech corporations now compete with traditional military contractors by promising faster innovation through software.

As Anduril’s founder, Palmer Luckey, puts it, the goal is not to give soldiers a tool, but ‘a new teammate.’ The phrasing is telling, as it shifts the moral axis of warfare from command to collaboration between humans and machines.

The human-machine partnership built for lethality suggests that the military-industrial complex is evolving into a military-intelligence complex, where data is the new weapon, and human experience is just another metric to optimise.

The future battlefield 

If the past century’s wars were fought with machines, the next will likely be fought through them. Soldiers are becoming both operators and operated, a shift that promises efficiency in war but comes at the cost of human empathy.

When soldiers see through AI’s lens, feel through sensors, and act through algorithms, they stop being fully human combatants and start becoming playable characters in a geopolitical simulation. The question is not whether this future is coming; it is already here. 

There is a clear policy path forward, as states remain tethered to their international obligations. Before AI blurs the line between soldier and system, international law could enshrine a human-in-the-loop requirement for all lethal actions, while defence firms are compelled to maintain high ethical transparency standards.

The question now is whether humanity can still recognise itself once war feels like a game, or whether, without safeguards, it will remain present in war at all.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Global AI adoption rises quickly but benefits remain unequal

Microsoft’s AI Economy Institute has released its 2025 AI Diffusion Report, detailing global AI adoption, innovation hubs, and the impact of digital infrastructure. AI has reached over 1.2 billion users in under three years, yet its benefits remain unevenly distributed.

Adoption rates in the Global North are roughly double those in the Global South, highlighting the risk of long-term inequalities.

AI adoption depends on strong foundational infrastructure, including electricity, data centres, internet connectivity, digital and AI skills, and language accessibility.

Countries with robust foundations, such as the UAE, Singapore, Norway, and Ireland, have seen rapid adoption, even without frontier-level model development. In contrast, regions with limited infrastructure and low-resource languages lag significantly, with adoption in some areas below 10%.

Ukraine, with an adoption rate of 9.1%, exemplifies the potential for rapid AI growth despite wartime disruption. Strategic investments in connectivity, AI skills, and language-inclusive solutions could accelerate recovery, strengthen resilience, and drive innovation.

AI is already supporting cybersecurity and helping businesses and organisations maintain operations amid ongoing challenges.

The concentration of AI infrastructure remains high, with the US and China hosting 86% of global data centre capacity. A few countries dominate frontier AI development, yet the performance gap between leading models is narrowing.

Coordinated efforts across infrastructure, skills, and policy are crucial to ensure equitable access and maximise AI’s potential worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MK1 joins AMD to accelerate enterprise AI and reasoning technologies

AMD has completed the acquisition of MK1, a California-based company specialising in high-speed inference and reasoning-based AI technologies.

The move marks a significant step in AMD’s strategy to strengthen AI performance and efficiency across hardware and software layers. MK1’s Flywheel and comprehension engines are optimised for AMD’s Instinct GPUs, offering scalable, accurate, and cost-efficient AI reasoning.

The MK1 team will join the AMD Artificial Intelligence Group, where their expertise will advance AMD’s enterprise AI software stack and inference capabilities.

Handling over one trillion tokens daily, MK1’s systems are already deployed at scale, providing traceable and efficient AI solutions for complex business processes.

By combining MK1’s advanced AI software innovation with AMD’s compute power, the acquisition enhances AMD’s position in the enterprise and generative AI markets, supporting its goal of delivering accessible, high-performance AI solutions globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK strengthens AI safeguards to protect children online

The UK government is introducing landmark legislation to prevent AI from being exploited to generate child sexual abuse material. The new law empowers authorised bodies, such as the Internet Watch Foundation, to test AI models and ensure safeguards prevent misuse.

Reports of AI-generated child abuse imagery have surged, with the IWF recording 426 cases in 2025, more than double the 199 cases reported in 2024. The data also reveals a sharp rise in images depicting infants, increasing from five in 2024 to 92 in 2025.

Officials say the measures will enable experts to identify vulnerabilities within AI systems, making it more difficult for offenders to exploit the technology.

The legislation will also require AI developers to build protections against non-consensual intimate images and extreme content. A group of experts in AI and child safety will be established to oversee secure testing and ensure the well-being of researchers.

Ministers emphasised that child safety must be built into AI systems from the start, not added as an afterthought.

By collaborating with the AI sector and child protection groups, the government aims to make the UK the safest place for children to be online. The approach strikes a balance between innovation and strong protections, thereby reinforcing public trust in AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!