EU and Japan deepen AI cooperation under new digital pact

In May 2025, the European Union and Japan formally reaffirmed their long-standing EU‑Japan Digital Partnership during the third Digital Partnership Council in Tokyo. Delegations agreed to deepen collaboration in pivotal digital technologies, most notably artificial intelligence, quantum computing, 5G/6G networks, semiconductors, cloud, and cybersecurity.

A joint statement committed to signing an administrative agreement on AI, aligned with principles from the Hiroshima AI Process. Shared initiatives include a €4 million EU-supported quantum R&D project named Q‑NEKO and the 6G MIRAI‑HARMONY research effort.

Both parties pledged to enhance data governance, digital identity interoperability, regulatory coordination across platforms, and secure connectivity via submarine cables and Arctic routes. The accord builds on the Strategic Partnership Agreement that entered into force in January 2025, reinforcing their mutual platform for rules-based, value-driven digital and innovation cooperation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI energy demand accelerates while clean power lags

Data centres are driving a sharp rise in electricity consumption, putting mounting pressure on power infrastructure that is already struggling to keep pace.

The rapid expansion of AI has led technology companies to invest heavily in AI-ready infrastructure, but the energy demands of these systems are outstripping available grid capacity.

The International Energy Agency projects that electricity use by data centres will more than double globally by 2030, reaching levels equivalent to the current consumption of Japan.

In the United States, data centres are expected to consume 580 TWh annually by 2028—about 12% of national electricity consumption. AI-specific data centres will account for much of this increase.
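As a back-of-envelope check on the projections above, the rough figures behind them can be compared directly. The numbers below are approximations drawn from the IEA's projections (about 415 TWh of global data-centre use in 2024, roughly 945 TWh projected for 2030, and around 940 TWh for Japan's current annual consumption); treat them as illustrative assumptions rather than authoritative data.

```python
# Approximate figures -- illustrative assumptions, not authoritative data.
dc_2024_twh = 415   # global data-centre electricity use, 2024 (approx.)
dc_2030_twh = 945   # projected global data-centre use, 2030 (approx.)
japan_twh = 940     # Japan's current annual electricity consumption (approx.)

growth = dc_2030_twh / dc_2024_twh
print(f"2024 -> 2030 growth: {growth:.1f}x")            # → 2.3x, i.e. more than double
print(f"2030 level vs Japan: {dc_2030_twh / japan_twh:.2f}x")  # → 1.01x, roughly Japan's total
```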

Despite this growth, clean energy deployment is lagging. Around two terawatts of clean energy projects remain stuck in interconnection queues, delaying the shift to sustainable power. The result is a paradox: firms pursuing carbon-free goals by 2035 now rely on gas and nuclear to power their expanding AI operations.

In response, tech companies and utilities are adopting short-term strategies to relieve grid pressure. Microsoft and Amazon are sourcing energy from nuclear plants, while Meta will rely on new gas-fired generation.

Data centre developers like CloudBurst are securing dedicated fuel supplies to ensure local power generation, bypassing grid limitations. Some utilities are introducing technologies to speed up grid upgrades, such as AI-driven efficiency tools and contracts that encourage flexible demand.

Behind-the-meter solutions—like microgrids, batteries and fuel cells—are also gaining traction. AEP’s 1-GW deal with Bloom Energy would mark the US’s largest fuel cell deployment.

Meanwhile, longer-term efforts aim to scale up nuclear, geothermal and even fusion energy. Google has partnered with Commonwealth Fusion Systems to source power by the early 2030s, while Fervo Energy is advancing geothermal projects.

National Grid and other providers are investing in modern transmission technologies to support clean generation. Cooling technology for data centre chips is another area of focus. Programmes like ARPA-E’s COOLERCHIPS are exploring ways to reduce energy intensity.

At the same time, outdated regulatory processes are slowing progress. Developers face unclear connection timelines and steep fees, sometimes pushing them toward off-grid alternatives.

The path forward will depend on how quickly industry and regulators can align. Without faster deployment of clean power and regulatory reform, the systems designed to power AI could become the bottleneck that stalls its growth.

Trump AI strategy targets China and cuts red tape

The Trump administration has revealed a sweeping new AI strategy to cement US dominance in the global AI race, particularly against China.

The 25-page ‘America’s AI Action Plan’ proposes 90 policy initiatives, including building new data centres nationwide, easing regulations, and expanding exports of AI tools to international allies.

White House officials stated the plan will boost AI development by scrapping federal rules seen as restrictive and speeding up construction permits for data infrastructure.

A key element involves monitoring Chinese AI models for alignment with Communist Party narratives, while promoting ‘ideologically neutral’ systems within the US. Critics argue the approach undermines efforts to reduce bias and favours politically motivated AI regulation.

The action plan also supports increased access to federal land for AI-related construction and seeks to reverse key environmental protections. Analysts have raised concerns over energy consumption and rising emissions linked to AI data centres.

While the White House claims AI will complement jobs rather than replace them, recent mass layoffs at Indeed and Salesforce suggest otherwise.

Despite the controversy, the announcement drew optimism from investors. AI stocks saw mixed trading, with NVIDIA, Palantir and Oracle gaining, while Alphabet slipped slightly. Analysts described the move as a ‘watershed moment’ for US tech, signalling an aggressive stance in the global AI arms race.

Meta CEO unveils plan to spend hundreds of billions on AI data centres

Mark Zuckerberg has pledged to invest hundreds of billions of dollars to build a network of massive data centres focused on superintelligent AI. The initiative forms part of Meta’s wider push to lead the race in developing machines capable of outperforming humans in complex tasks.

The first of these centres, called Prometheus, is set to launch in 2026. Another facility, Hyperion, is expected to scale up to 5 gigawatts. Zuckerberg said the company is building several more AI ‘titan clusters’, each one covering an area comparable to a significant part of Manhattan.

He also cited Meta’s strong advertising revenue as the reason it can afford such bold spending despite investor concerns.

Meta recently regrouped its AI projects under a new division, Superintelligence Labs, following internal setbacks and high-profile staff departures.

The company hopes the division will generate fresh revenue streams through Meta AI tools, video ad generators, and wearable smart devices. It is reportedly considering dropping its most powerful open-source model, Behemoth, in favour of a closed alternative.

The firm has raised its planned 2025 capital expenditure to as much as $72 billion and is actively hiring top talent, including former Scale AI CEO Alexandr Wang and ex-GitHub chief Nat Friedman.

Analysts say Meta’s AI investments are paying off in advertising but warn that the real return on long-term AI dominance will take time to emerge.

Nvidia’s container toolkit patched after critical bug

Cloud security researchers at Wiz have uncovered a critical misconfiguration in Nvidia’s Container Toolkit, used widely across managed AI services, that could allow a malicious container to break out and gain full root privileges on the host system.

The vulnerability, tracked as CVE‑2025‑23266 and nicknamed ‘NVIDIAScape’, arises from unsafe handling of OCI hooks. Attackers can bypass container boundaries with a simple three‑line Dockerfile, gaining access to server files, memory and GPU resources.

With Nvidia’s toolkit integral to GPU‑accelerated cloud offerings, the risk is systemic. A single compromised container could steal or corrupt sensitive data and AI models belonging to other tenants on the same infrastructure.

Nvidia has released a security advisory alongside updated toolkit versions. Users are strongly advised to apply patches immediately. Experts also recommend deploying additional isolation measures, such as virtual machines, to protect against container escape threats in multi-tenant AI environments.
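For administrators checking exposure, a quick version comparison can flag unpatched hosts. The sketch below assumes a Linux host with GNU coreutils, and takes 1.17.8 as the patched toolkit release; verify the exact fixed version against Nvidia's advisory for your platform before relying on it.

```shell
#!/bin/sh
# Sketch: compare the installed NVIDIA Container Toolkit version against the
# release that fixes CVE-2025-23266 (assumed 1.17.8 here -- double-check
# Nvidia's advisory for your distribution). Relies on GNU `sort -V`.
PATCHED="1.17.8"
installed="$(nvidia-ctk --version 2>/dev/null | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -n1)"

# version_lt A B -> success (0) if version A is strictly older than B
version_lt() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ] && [ "$1" != "$2" ]
}

if [ -z "$installed" ]; then
  echo "nvidia-ctk not found on this host"
elif version_lt "$installed" "$PATCHED"; then
  echo "VULNERABLE: toolkit $installed is older than $PATCHED -- update now"
else
  echo "OK: toolkit $installed includes the fix"
fi
```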

Power demands reshape future of data centres

As AI and cloud computing demand surges, Siemens is tackling critical energy and sustainability challenges facing the data centre industry. With power densities surpassing 100 kW per rack, traditional infrastructure is being pushed beyond its limits.

Siemens highlighted the urgent need for integrated digital solutions to address growing pressures such as delayed grid connections, rising costs, and speed of deployment. Operators are increasingly adopting microgrids and forming utility partnerships to ensure resilience and control over power access.

Siemens views data centres not just as energy consumers but as contributors to the grid, using stored energy to balance supply. The shift is pushing the industry to become more involved in grid stability and renewable integration.

While achieving net zero remains challenging, data centres are adopting on-site renewables, advanced cooling systems, and AI-driven management tools to boost efficiency.

Siemens’ own software, such as the Building X Suite, is helping reduce energy waste and predict maintenance needs, aligning operational effectiveness with sustainability goals.

xAI eyes data centre deal with Humain

Elon Musk’s AI venture, xAI, has entered early discussions with Humain to secure data centre capacity instead of relying solely on existing infrastructure.

According to Bloomberg, the arrangement could involve several gigawatts of capacity, although Humain has yet to start building its facilities, meaning any deal would take years to materialise.

Humain is backed by Saudi Arabia’s Crown Prince Mohammed bin Salman and the Public Investment Fund (PIF). xAI is reportedly considering a fresh funding round where PIF might also invest.

At the same time, xAI is negotiating with a smaller company constructing a 200-megawatt data centre, offering a more immediate solution while waiting for larger projects.

Rather than operating in isolation, xAI joins AI competitors like Google, Meta and Microsoft in racing to secure vast computing power for training large AI models. The push for massive data centre capacity reflects the escalating demands of advanced AI systems.

Netherlands urges EU to reduce reliance on US cloud providers

The Dutch government has released a policy paper urging the European Union to take coordinated action to reduce its heavy dependence on non-EU cloud providers, especially from the United States.

The document recommends that the European Commission introduce a clearer and harmonised approach at the EU level.

Key proposals include: creating a consistent definition of ‘cloud sovereignty’; adjusting public procurement rules so that sovereignty can be prioritised; promoting open-source technologies and standards; setting up a common European decision-making framework for cloud choices; and ensuring sufficient funding for the development and deployment of sovereign cloud technologies.

These measures aim to strengthen the EU’s digital independence and protect public administrations from external political or economic pressures.

A recent investigation found that over 20,000 Dutch institutions rely heavily on US cloud services, with Microsoft holding about 60% of the market.

The Dutch government warned this dependence risks national security and fundamental rights. Concerns escalated after Microsoft blocked the ICC prosecutor’s email following US sanctions, sparking political outrage.

In response, the Dutch parliament called for reducing reliance on American providers and urged the government to develop a roadmap to protect digital infrastructure and regain control.

South Korean firm unveils faster AI data centre architecture with CXL-over-Xlink

South Korean company Panmnesia has introduced a new architecture for AI data centres aimed at improving speed and efficiency.

Instead of using only PCIe or RDMA-based systems, its CXL-over-Xlink approach combines Compute Express Link (CXL) with fast accelerator links such as UALink and NVLink.

The company claims this design can deliver up to 5.3 times faster AI training and reduce inference latency sixfold. By allowing CPUs and GPUs to access large shared memory pools via the CXL fabric, AI workloads are no longer restricted by the fixed memory limits inside each GPU.

The design would enable data centres to scale compute and memory independently, adapting to changing workload demands without hardware overprovisioning.
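The independent-scaling point can be illustrated with a toy capacity model. All numbers below are hypothetical: the 80 GB of HBM per GPU and the workload sizes are assumptions for illustration, not Panmnesia or vendor figures.

```python
import math

# Toy model of the scaling trade-off described above. The 80 GB HBM figure
# and the workload sizes are hypothetical, not vendor specifications.
HBM_GB = 80  # local high-bandwidth memory per GPU (assumed)

def gpus_for_fixed_memory(mem_needed_gb: int) -> int:
    """Without a shared pool, extra memory only comes with whole extra GPUs."""
    return math.ceil(mem_needed_gb / HBM_GB)

def gpus_for_pooled_memory(compute_gpus_needed: int) -> int:
    """With a CXL memory pool, capacity comes from the pool, so the GPU
    count is set by compute demand alone."""
    return compute_gpus_needed

# A memory-heavy, compute-light workload: 2 TB of model/KV-cache state,
# but only enough compute work for 4 GPUs.
print(gpus_for_fixed_memory(2048))  # → 26 (GPUs bought just to hold state)
print(gpus_for_pooled_memory(4))    # → 4  (plus a ~2 TB CXL pool)
```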

Panmnesia’s system also reduces communication overhead using accelerator-optimised links for CXL traffic, helping maintain high throughput with sub-100ns latency.

The architecture incorporates a hierarchical memory model blending local high-bandwidth memory with pooled CXL memory, alongside scalable CXL 3.1 switches that connect hundreds of devices efficiently without bottlenecks.

Google expands NotebookLM with curated content and mobile access

While Gemini often dominates attention in Google’s AI portfolio, other innovative tools deserve the spotlight. One standout is NotebookLM, a virtual research assistant that helps users organise and interact with complex information across various subjects.

NotebookLM creates structured notebooks from curated materials, allowing meaningful engagement with the content. It supports dynamic features, including summaries and transformation options like Audio Overview, making research tasks more intuitive and efficient.

According to Google, featured notebooks are built using information from respected authors, academic institutions, and trusted nonprofits. Current topics include Shakespeare, Yellowstone National Park and more, offering a wide spectrum of well-sourced material.

Featured notebooks function just like regular ones, with added editorial quality. Users can navigate, explore, and repurpose content in ways that support individual learning and project needs. Google has confirmed the collection will grow over time.

NotebookLM remains in early development, yet the tool already shows potential for transforming everyday research tasks. Google also plans tighter integration with its other productivity tools, including Docs and Slides.

The tool significantly reduces the effort traditionally required for academic or creative research. Structured data presentation, combined with interactive features, makes information easier to consume and act upon.

NotebookLM was initially released on desktop but is now also available as a mobile app. Users can download it via the Google Play Store to create notebooks, add content, and stay productive from anywhere.