Rare but real: mental health risks at ChatGPT scale

OpenAI says a small share of ChatGPT users show possible signs of mental health emergencies each week, including mania, psychosis, or suicidal thoughts. The company puts the figure at 0.07 percent of weekly users and says such conversations trigger safety prompts. Critics counter that even small percentages translate into large numbers at ChatGPT’s scale.

A further 0.15 percent of weekly users discuss explicit indicators of potential suicidal planning or intent. Updates aim to respond more safely and empathetically and to flag indirect signals of self-harm. Sensitive chats can be routed to safer models.
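To see why critics say small percentages still matter at this scale, a rough calculation helps. The sketch below assumes the roughly 800 million weekly active users OpenAI has publicly reported; that figure is not stated in this article, so treat the totals as illustrative.

```python
# Back-of-the-envelope scale of the reported shares. The weekly user count is
# an assumption based on the ~800 million weekly active users OpenAI has
# publicly cited; it is not a figure from this article.
weekly_users = 800_000_000
emergency_share = 0.0007   # 0.07%: possible signs of mania, psychosis or suicidal thoughts
planning_share = 0.0015    # 0.15%: explicit indicators of suicidal planning or intent

print(f"Emergency signals: ~{weekly_users * emergency_share:,.0f} users per week")  # ~560,000
print(f"Suicidal planning: ~{weekly_users * planning_share:,.0f} users per week")   # ~1,200,000
```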

More than 170 clinicians across 60 countries advise OpenAI on risk cues and responses. Guidance focuses on encouraging users to seek real-world support. Researchers warn vulnerable people may struggle to act on on-screen warnings.

External specialists see both value and limits. AI may widen access when services are stretched, yet automated advice can mislead. Risks include reinforcing delusions and misplaced trust in authoritative-sounding output.

Legal and public scrutiny is rising after high-profile cases linked to chatbot interactions. Families and campaigners want more transparent accountability and stronger guardrails. Regulators continue to debate transparency, escalation pathways, and duty of care.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Adobe Firefly expands with new AI tools for audio and video creation

Adobe has unveiled major updates to its Firefly creative AI studio, introducing advanced audio, video, and imaging tools at the Adobe MAX 2025 conference.

These new features include Generate Soundtrack for licensed music creation, Generate Speech for lifelike multilingual voiceovers, and a timeline-based video editor that integrates seamlessly with Firefly’s existing creative tools.

The company also launched the Firefly Image Model 5, which can produce photorealistic 4MP images with prompt-based editing. Firefly now includes partner models from Google, OpenAI, ElevenLabs, Topaz Labs, and others, bringing the industry’s top AI capabilities into one unified workspace.

Adobe also announced Firefly Custom Models, allowing users to train AI models to match their personal creative style.

In a preview of future developments, Adobe showcased Project Moonlight, a conversational AI assistant that connects across creative apps and social channels to help creators move from concept to content in minutes.

The system can offer tailored suggestions and automate parts of the creative process while keeping creators in complete control.

Adobe emphasised that Firefly is designed to enhance human creativity rather than replace it, offering responsible AI tools that respect intellectual property rights.

With this release, the company continues to integrate generative AI across its ecosystem, simplifying production and empowering creators at every stage of their workflow.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI and Microsoft sign new $135 billion agreement to deepen AI partnership

Microsoft and OpenAI have signed a new agreement that marks the next phase of their long-standing partnership, deepening ties first formed in 2019.

The updated deal builds on years of collaboration in advancing responsible AI, positioning both organisations for long-term success while introducing new structural and operational changes.

Under the new arrangement, Microsoft supports OpenAI’s transition into a public benefit corporation (PBC) and recapitalisation. The technology giant now holds an investment valued at around $135 billion, representing about 27 percent of OpenAI Group PBC on an as-converted diluted basis.

Before the recapitalisation, and despite dilution from OpenAI’s recent funding rounds, Microsoft held a 32.5 percent stake in the for-profit entity.
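For context, the reported $135 billion investment and roughly 27 percent stake imply an overall valuation. The calculation below is simple arithmetic on the numbers above, not a value disclosed by either company.

```python
# Implied valuation from the reported stake; illustrative only, since exact
# share counts and conversion terms are not given.
stake_value = 135_000_000_000   # Microsoft's investment, in US dollars
stake_fraction = 0.27           # ~27% of OpenAI Group PBC, as-converted diluted

implied_valuation = stake_value / stake_fraction
print(f"Implied OpenAI Group PBC valuation: ~${implied_valuation / 1e9:.0f} billion")  # ~$500 billion
```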

The partnership maintains Microsoft’s exclusive rights to OpenAI’s frontier models and Azure API until artificial general intelligence (AGI) is achieved, but it also introduces several new terms. Once OpenAI declares AGI, an independent expert panel will verify the claim.

Microsoft’s intellectual property rights are extended through 2032, including rights to models developed after AGI, subject to safety conditions. OpenAI may now co-develop certain products with third parties and retains the option to serve non-API products on any cloud provider.

OpenAI will purchase an additional $250 billion worth of Azure services, although Microsoft will no longer hold a right of first refusal on OpenAI’s compute supply. The new framework allows both organisations to innovate more freely, with Microsoft now permitted to pursue AGI alone or with other partners.

The updated agreement reflects a more flexible collaboration that balances independence, growth, and shared innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google commits to long-term power deal as NextEra advances nuclear restart

NextEra Energy and Google have launched a major collaboration to accelerate nuclear energy deployment in the United States, anchored by the planned restart of the Duane Arnold Energy Centre in Iowa. The plant has been offline since 2020 and is slated to be back online by early 2029.

Under their agreement, Google will purchase the plant’s energy output through a 25-year power purchase agreement (PPA). Additionally, NextEra plans to acquire the remaining minority stakes in Duane Arnold to gain full ownership.

Central Iowa Power Cooperative, which currently holds part of the facility, will secure the output under the same terms.

As the energy needs of AI and cloud computing infrastructure surge, the Duane Arnold partnership positions nuclear power as a reliable, carbon-free baseload resource.

The revival is expected to bring substantial economic benefits: thousands of direct and indirect jobs during construction and operation, and over US$9 billion in regional economic impact.

Beyond Iowa, Google and NextEra will explore broader nuclear development opportunities across the US, including next-generation technologies to meet long-term demand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Citi and Coinbase unite to boost digital asset payments

Citi and Coinbase have announced a strategic partnership to enhance digital asset payment capabilities for institutional clients. The collaboration will begin by streamlining fiat transactions and strengthening links between traditional banking and digital assets via Coinbase’s on/off-ramps.

Both firms plan to introduce further initiatives in the coming months aimed at simplifying global access to crypto payments.

According to Citi’s Head of Payments, Debopama Sen, the partnership supports Citi’s goal of creating a ‘network of networks’ that enables borderless payments. Operating across 94 markets and 300 networks, Citi sees the move as progress towards integrating blockchain into mainstream finance.

Coinbase’s Brian Foster said the partnership merges Citi’s payments expertise with Coinbase’s digital asset leadership. Together, they aim to build next-generation infrastructure enabling seamless, round-the-clock access to crypto services for institutional clients.

The partnership builds on Citi’s ongoing investment in digital finance, including its Citi Token Services and 24/7 USD Clearing system. By aligning with Coinbase, the bank reinforces its commitment to innovation and positions itself at the forefront of the evolving digital money landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfake videos raise environmental worries

Deepfake videos powered by AI are spreading across social media at an unprecedented pace, but their popularity carries a hidden environmental cost.

Creating realistic AI videos depends on vast data centres that consume enormous amounts of electricity and use fresh water to cool powerful servers. Every clip produced quietly adds to rising energy demand and to the pressure on local water supplies.

Apps such as Sora have made generating these videos almost effortless, resulting in millions of downloads and a constant stream of new content. Users are being urged to consider how frequently they produce and share such media, given the heavy energy and water footprint behind every video.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New ChatGPT model reduces unsafe replies by up to 80%

OpenAI has updated ChatGPT’s default model after working with more than 170 mental health clinicians to help the system better spot distress, de-escalate conversations and point users to real-world support.

The update routes sensitive exchanges to safer models, expands access to crisis hotlines and adds gentle prompts to take breaks, aiming to reduce harmful responses rather than simply offering more content.

Measured improvements are significant across three priority areas: severe mental health symptoms such as psychosis and mania, self-harm and suicide, and unhealthy emotional reliance on AI.

OpenAI reports that undesired responses fell by 65 to 80 percent in production traffic and that independent clinician reviews show substantial gains compared with earlier models. At the same time, rare but high-risk scenarios remain a focus for further testing.

The company used a five-step process to shape the changes: define harms, measure them, validate approaches with experts, mitigate risks through post-training and product updates, and keep iterating.

Evaluations combine real-world traffic estimates with structured adversarial tests. Stronger safeguards are already in place, and further refinements are planned as understanding and measurement methods evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

A generative AI model helps athletes avoid injuries and recover faster

Researchers at the University of California, San Diego, have developed a generative AI model designed to prevent sports injuries and assist rehabilitation.

The system, named BIGE (Biomechanics-informed GenAI for Exercise Science), integrates data on human motion with biomechanical constraints such as muscle force limits to create realistic training guidance.

BIGE can generate video demonstrations of optimal movements that athletes can imitate to enhance performance or avoid injury. It can also produce adaptive motions suited for athletes recovering from injuries, offering a personalised approach to rehabilitation.

The model merges generative AI with biomechanically accurate modelling, overcoming limitations of previous systems that produced anatomically unrealistic results or required heavy computational resources.

To train BIGE, researchers used motion-capture data of athletes performing squats, converting them into 3D skeletal models with precise force calculations. The project’s next phase will expand to other types of movements and individualised training models.
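The approach described here, generating motion while respecting biomechanical constraints such as muscle force limits, can be pictured as a standard generative training objective plus a constraint penalty. The sketch below is a minimal illustration of that idea under assumed names, limits and numbers; it is not the published BIGE model or its actual loss.

```python
import numpy as np

# Hypothetical peak force limit (newtons); a stand-in for the muscle force
# constraints described above, not a value from the BIGE work.
FORCE_LIMIT = 4000.0

def biomechanics_penalty(joint_forces: np.ndarray) -> float:
    """Quadratic penalty on estimated joint forces that exceed the limit."""
    excess = np.clip(np.abs(joint_forces) - FORCE_LIMIT, 0.0, None)
    return float(np.mean(excess ** 2))

def constrained_loss(reconstruction_error: float, joint_forces: np.ndarray,
                     constraint_weight: float = 1e-4) -> float:
    """Generative reconstruction term plus the weighted biomechanical penalty."""
    return reconstruction_error + constraint_weight * biomechanics_penalty(joint_forces)

# Toy usage with fabricated forces from a generated squat trajectory.
forces = np.array([1200.0, 3500.0, 4500.0, 3900.0])
print(constrained_loss(reconstruction_error=0.8, joint_forces=forces))  # penalised: one force exceeds the limit
```

The intent of a penalty like this is to steer generated motions toward physically plausible force ranges during training, rather than filtering implausible outputs afterwards.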

Beyond sports, researchers suggest the tool could predict fall risks among the elderly. Professor Andrew McCulloch described the technology as ‘the future of exercise science’, while co-author Professor Rose Yu said its methods could be widely applied across healthcare and fitness.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FDA and patent law create dual hurdles for AI-enabled medical technologies

AI reshapes healthcare by powering more precise and adaptive medical devices and diagnostic systems.

Yet, innovators face two significant challenges: navigating the US Food and Drug Administration’s evolving regulatory framework and overcoming legal uncertainty under US patent law.

These two systems, although interconnected, serve different goals. The FDA protects patients, while patent law rewards invention.

The FDA’s latest guidance seeks to adapt oversight for AI-enabled medical technologies that change over time. Its framework for predetermined change control plans allows developers to update AI models without resubmitting complete applications, provided updates stay within approved limits.

The approach promotes innovation while maintaining transparency, bias control and post-market safety. By clarifying how adaptive AI devices can evolve safely, the FDA aims to balance accountability with progress.

Patent protection remains more complex. US courts continue to exclude non-human inventors, creating tension when AI contributes to discoveries.

Legal precedents such as Thaler v. Vidal and Alice Corp. v. CLS Bank limit patent eligibility for algorithms or diagnostic methods that resemble abstract ideas or natural laws. Companies must show human-led innovation and technical improvement beyond routine computation to secure patents.

Aligning regulatory and intellectual property strategies is now essential. Developers who engage regulators early, design flexible change control plans and coordinate patent claims with development timelines can reduce risk and accelerate market entry.

Integrating these processes helps ensure AI technologies in healthcare advance safely while preserving inventors’ rights and innovation incentives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AMD powers US AI factory supercomputers for national research

The US Department of Energy and AMD are joining forces to expand America’s AI and scientific computing power through two new supercomputers at Oak Ridge National Laboratory.

Named Lux and Discovery, the systems will drive the country’s sovereign AI strategy, combining public and private investment worth around $1 billion to strengthen research, innovation, and security infrastructure.

Lux, arriving in 2026, will become the nation’s first dedicated AI factory for science.

Built with AMD’s EPYC CPUs and Instinct GPUs alongside Oracle and HPE technologies, Lux will accelerate research across materials, medicine, and advanced manufacturing, supporting the US AI Action Plan and boosting the Department of Energy’s AI capacity.

Discovery, set for deployment in 2028, will deepen collaboration between the DOE, AMD, and HPE. Powered by AMD’s next-generation ‘Venice’ CPUs and MI430X GPUs, Discovery will train and deploy AI models on secure US-built systems, protecting national data and competitiveness.

It aims to deliver faster energy, biology, and national security breakthroughs while maintaining high efficiency and open standards.

AMD’s CEO, Dr Lisa Su, said the collaboration represents the best of public-private partnership, advancing the nation’s foundation for science and innovation.

US Energy Secretary Chris Wright described the initiative as proof that America leads when government and industry work together toward shared AI and scientific goals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!