Google commits to long-term power deal as NextEra advances nuclear restart

NextEra Energy and Google have launched a major collaboration to accelerate nuclear energy deployment in the United States, anchored by the planned restart of the Duane Arnold Energy Center in Iowa. The plant has been offline since 2020 and is slated to return to service by early 2029.

Under their agreement, Google will purchase the plant’s energy output through a 25-year power purchase agreement (PPA). Additionally, NextEra plans to acquire the remaining minority stakes in Duane Arnold to gain full ownership.

Central Iowa Power Cooperative, which currently holds a minority stake in the facility, will secure its share of the output under the same terms.

As the energy needs of AI and cloud computing infrastructure surge, the Duane Arnold partnership positions nuclear power as a reliable, carbon-free baseload resource.

The revival is expected to bring substantial economic benefits: thousands of direct and indirect jobs during construction and operation, and over US$9 billion in regional economic impact.

Beyond Iowa, Google and NextEra will explore broader nuclear development opportunities across the US, including next-generation technologies to meet long-term demand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Amazon plans up to 30,000 corporate job cuts as AI automation expands

Beginning Tuesday, Amazon plans to cut up to 30,000 corporate roles, nearly 10% of its white-collar workforce, to reduce costs after over-hiring during the pandemic.

Cuts may hit human resources, operations, devices and services, and Amazon Web Services. According to people familiar with the policy, the company has also tightened office-attendance rules; employees who are not swiping in daily have been told they are considered to have resigned without severance.

Analysts say AI-driven productivity gains and the need to fund long-term AI infrastructure are key factors behind the reductions in staff. Executives have indicated that greater use of automation and AI to handle routine tasks will drive further reductions.

Internal planning papers reported in US media suggest the company could avoid hiring more than 500,000 US workers by 2033, yielding around $12.6 billion in savings between 2025 and 2027.

The scale and timing of the layoffs could change as financial priorities evolve. Separately, Amazon still expects a busy holiday period and plans to hire 250,000 seasonal workers for warehouses and fulfilment roles unrelated to the corporate cuts.


Nvidia and Deutsche Telekom plan €1 billion AI data centre in Germany

Plans are being rolled out for a €1 billion data centre in Germany to bolster Europe’s AI infrastructure, with Nvidia and Deutsche Telekom set to co-fund the project.

The facility is expected to serve enterprise customers, including SAP SE, Europe’s largest software company, and to deploy around 10,000 advanced graphics processing units (GPUs).

While significant for Europe, the build is modest compared with gigawatt-scale sites elsewhere, highlighting the region’s push to catch up with US and Chinese capacity.

An announcement is anticipated next month in Berlin alongside senior industry and government figures, with Munich identified as the planned location.

The move aligns with EU efforts to expand AI compute, including the €200 billion initiative announced in February to grow capacity over the next five to seven years.


Citi and Coinbase unite to boost digital asset payments

Citi and Coinbase have announced a strategic partnership to enhance digital asset payment capabilities for institutional clients. The collaboration will begin by streamlining fiat transactions and strengthening links between traditional banking and digital assets via Coinbase’s on/off-ramps.

Both firms plan to introduce further initiatives in the coming months aimed at simplifying global access to crypto payments.

According to Citi’s Head of Payments, Debopama Sen, the partnership supports Citi’s goal of creating a ‘network of networks’ that enables borderless payments. Operating across 94 markets and 300 networks, Citi sees the move as progress towards integrating blockchain into mainstream finance.

Coinbase’s Brian Foster said the partnership merges Citi’s payments expertise with Coinbase’s digital asset leadership. Together, they aim to build next-generation infrastructure enabling seamless, round-the-clock access to crypto services for institutional clients.

The partnership builds on Citi’s ongoing investment in digital finance, including its Citi Token Services and 24/7 USD Clearing system. By aligning with Coinbase, the bank reinforces its commitment to innovation and positions itself at the forefront of the evolving digital money landscape.


Deepfake videos raise environmental worries

Deepfake videos powered by AI are spreading across social media at an unprecedented pace, but their popularity carries a hidden environmental cost.

Creating realistic AI videos depends on vast data centres that consume enormous amounts of electricity and use fresh water to cool powerful servers. Each clip quietly produced adds to the rising energy demand and increasing pressure on local water supplies.

Apps such as Sora have made generating these videos almost effortless, resulting in millions of downloads and a constant stream of new content. Users are being urged to consider how frequently they produce and share such media, given the heavy energy and water footprint behind every video.


New ChatGPT model reduces unsafe replies by up to 80%

OpenAI has updated ChatGPT’s default model after working with more than 170 mental health clinicians to help the system better spot distress, de-escalate conversations and point users to real-world support.

The update routes sensitive exchanges to safer models, expands access to crisis hotlines and adds gentle prompts to take breaks, aiming to reduce harmful responses rather than simply offering more content.

Measured improvements are significant across three priority areas: severe mental health symptoms such as psychosis and mania, self-harm and suicide, and unhealthy emotional reliance on AI.

OpenAI reports that undesired responses fell between 65 and 80 percent in production traffic and that independent clinician reviews show significant gains compared with earlier models. At the same time, rare but high-risk scenarios remain a focus for further testing.

The company used a five-step process to shape the changes: define harms, measure them, validate approaches with experts, mitigate risks through post-training and product updates, and keep iterating.

Evaluations combine real-world traffic estimates with structured adversarial tests. Stronger ChatGPT safeguards are in place now, and further refinements are planned as understanding and measurement methods evolve.


AI200 and AI250 mark a rack-scale inference push from Qualcomm

Qualcomm unveiled AI200 and AI250 data-centre accelerators aimed at high-throughput, low-TCO generative AI inference. AI200 targets rack-level deployment with high performance per pound per watt and 768 GB LPDDR per card for large models.

AI250 introduces a near-memory architecture that delivers more than ten times higher effective memory bandwidth while lowering power draw. Qualcomm pitches the design for disaggregated serving, improving hardware utilisation across large fleets.

Both arrive as full racks with direct liquid cooling, PCIe for scale-up, Ethernet for scale-out, and confidential computing. Qualcomm quotes around 160 kW per rack for thermally efficient, dense inference.

A hyperscaler-grade software stack spans apps to system software with one-click onboarding of Hugging Face models. Support covers leading frameworks, inference engines, and optimisation techniques to simplify secure, scalable deployments.

Commercial timing splits the roadmap: AI200 in 2026 and AI250 in 2027. Qualcomm commits to an annual cadence for data-centre inference, aiming to lead in performance, energy efficiency, and total cost of ownership.


A generative AI model helps athletes avoid injuries and recover faster

Researchers at the University of California, San Diego, have developed a generative AI model designed to prevent sports injuries and assist rehabilitation.

The system, named BIGE (Biomechanics-informed GenAI for Exercise Science), integrates data on human motion with biomechanical constraints such as muscle force limits to create realistic training guidance.

BIGE can generate video demonstrations of optimal movements that athletes can imitate to enhance performance or avoid injury. It can also produce adaptive motions suited for athletes recovering from injuries, offering a personalised approach to rehabilitation.

The model merges generative AI with accurate modelling, overcoming limitations of previous systems that produced anatomically unrealistic results or required heavy computational resources.

To train BIGE, researchers used motion-capture data of athletes performing squats, converting them into 3D skeletal models with precise force calculations. The project’s next phase will expand to other types of movements and individualised training models.

Beyond sports, researchers suggest the tool could predict fall risks among the elderly. Professor Andrew McCulloch described the technology as ‘the future of exercise science’, while co-author Professor Rose Yu said its methods could be widely applied across healthcare and fitness.


FDA and patent law create dual hurdles for AI-enabled medical technologies

AI is reshaping healthcare by powering more precise and adaptive medical devices and diagnostic systems.

Yet, innovators face two significant challenges: navigating the US Food and Drug Administration’s evolving regulatory framework and overcoming legal uncertainty under US patent law.

These two systems, although interconnected, serve different goals. The FDA protects patients, while patent law rewards invention.

The FDA’s latest guidance seeks to adapt oversight for AI-enabled medical technologies that change over time. Its framework for predetermined change control plans allows developers to update AI models without resubmitting complete applications, provided updates stay within approved limits.

The approach promotes innovation while maintaining transparency, bias control and post-market safety. By clarifying how adaptive AI devices can evolve safely, the FDA aims to balance accountability with progress.

Patent protection remains more complex. US courts continue to exclude non-human inventors, creating tension when AI contributes to discoveries.

Legal precedents such as Thaler v. Vidal and Alice Corp. v. CLS Bank limit patent eligibility for algorithms or diagnostic methods that resemble abstract ideas or natural laws. Companies must show human-led innovation and technical improvement beyond routine computation to secure patents.

Aligning regulatory and intellectual property strategies is now essential. Developers who engage regulators early, design flexible change control plans and coordinate patent claims with development timelines can reduce risk and accelerate market entry.

Integrating these processes helps ensure AI technologies in healthcare advance safely while preserving inventors’ rights and innovation incentives.


AMD powers US AI factory supercomputers for national research

The US Department of Energy and AMD are joining forces to expand America’s AI and scientific computing power through two new supercomputers at Oak Ridge National Laboratory.

Named Lux and Discovery, the systems will drive the country’s sovereign AI strategy, combining public and private investment worth around $1 billion to strengthen research, innovation, and security infrastructure.

Lux, arriving in 2026, will become the nation’s first dedicated AI factory for science.

Built with AMD’s EPYC CPUs and Instinct GPUs alongside Oracle and HPE technologies, Lux will accelerate research across materials, medicine, and advanced manufacturing, supporting the US AI Action Plan and boosting the Department of Energy’s AI capacity.

Discovery, set for deployment in 2028, will deepen collaboration between the DOE, AMD, and HPE. Powered by AMD’s next-generation ‘Venice’ CPUs and MI430X GPUs, Discovery will train and deploy AI models on secure US-built systems, protecting national data and competitiveness.

It aims to deliver faster energy, biology, and national security breakthroughs while maintaining high efficiency and open standards.

AMD’s CEO, Dr Lisa Su, said the collaboration represents public-private partnership at its best, advancing the nation’s foundation for science and innovation.

US Energy Secretary Chris Wright described the initiative as proof that America leads when government and industry work together toward shared AI and scientific goals.
