New ChatGPT model reduces unsafe replies by up to 80%

OpenAI has updated ChatGPT’s default model after working with more than 170 mental health clinicians to help the system better spot distress, de-escalate conversations and point users to real-world support.

The update routes sensitive exchanges to safer models, expands access to crisis hotlines and adds gentle prompts to take breaks, aiming to reduce harmful responses rather than simply offering more content.

Measured improvements are significant across three priority areas: severe mental health symptoms such as psychosis and mania, self-harm and suicide, and unhealthy emotional reliance on AI.

OpenAI reports that undesired responses fell between 65 and 80 percent in production traffic and that independent clinician reviews show significant gains compared with earlier models. At the same time, rare but high-risk scenarios remain a focus for further testing.

The company used a five-step process to shape the changes: define harms, measure them, validate approaches with experts, mitigate risks through post-training and product updates, and keep iterating.

Evaluations combine real-world traffic estimates with structured adversarial tests, so better ChatGPT safeguards are in place now, and further refinements are planned as understanding and measurement methods evolve.
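The reported reduction can be made concrete with a small sketch. Nothing below reflects OpenAI's actual data or grading pipeline; the labels, sample sizes, and numbers are purely illustrative of how a relative drop in undesired responses might be computed from clinician-graded samples.

```python
# Hypothetical sketch: estimating the drop in undesired responses
# between two model versions from labelled conversation samples.
# Labels and numbers are illustrative, not OpenAI's actual data.

def undesired_rate(labels):
    """Fraction of sampled replies graded as undesired (1) vs acceptable (0)."""
    return sum(labels) / len(labels)

def percent_reduction(old_rate, new_rate):
    """Relative reduction in the undesired-response rate."""
    return 100 * (old_rate - new_rate) / old_rate

# Illustrative grading results for the same prompts on two model versions.
old_model = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1]   # 60% graded undesired
new_model = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]   # 20% graded undesired

reduction = percent_reduction(undesired_rate(old_model), undesired_rate(new_model))
print(f"{reduction:.0f}% fewer undesired responses")  # prints "67% fewer undesired responses"
```

A relative measure like this is what a "65 to 80 percent" figure describes: the change in the rate of bad outputs, not their absolute count.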

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

A generative AI model helps athletes avoid injuries and recover faster

Researchers at the University of California, San Diego, have developed a generative AI model designed to prevent sports injuries and assist rehabilitation.

The system, named BIGE (Biomechanics-informed GenAI for Exercise Science), integrates data on human motion with biomechanical constraints such as muscle force limits to create realistic training guidance.

BIGE can generate video demonstrations of optimal movements that athletes can imitate to enhance performance or avoid injury. It can also produce adaptive motions suited for athletes recovering from injuries, offering a personalised approach to rehabilitation.

The model merges generative AI with physically grounded biomechanical modelling, overcoming the limitations of earlier systems that produced anatomically unrealistic motions or demanded heavy computational resources.

To train BIGE, researchers used motion-capture data of athletes performing squats, converting them into 3D skeletal models with precise force calculations. The project’s next phase will expand to other types of movements and individualised training models.
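The idea of constraining generated motion with biomechanics can be sketched minimally. The function, joint names, and torque limits below are hypothetical and simplified; BIGE's actual formulation embeds such constraints in the generative model itself rather than as a post-hoc clamp.

```python
# Illustrative sketch of a biomechanics-informed constraint: clamp a
# generated motion frame's per-joint torques to plausible force limits.
# Joint names and limits are hypothetical, not BIGE's actual values.

def constrain_motion(torques, limits):
    """Clip each joint torque in a generated frame to its biomechanical limit."""
    return [max(-lim, min(lim, t)) for t, lim in zip(torques, limits)]

# One generated frame: knee, hip, ankle torques (N·m), with safe limits.
frame = [250.0, -90.0, 400.0]
limits = [180.0, 150.0, 120.0]
print(constrain_motion(frame, limits))  # [180.0, -90.0, 120.0]
```

Enforcing limits like these is what keeps generated demonstrations anatomically realistic enough to imitate safely.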

Beyond sports, researchers suggest the tool could predict fall risks among the elderly. Professor Andrew McCulloch described the technology as ‘the future of exercise science’, while co-author Professor Rose Yu said its methods could be widely applied across healthcare and fitness.

FDA and patent law create dual hurdles for AI-enabled medical technologies

AI reshapes healthcare by powering more precise and adaptive medical devices and diagnostic systems.

Yet, innovators face two significant challenges: navigating the US Food and Drug Administration’s evolving regulatory framework and overcoming legal uncertainty under US patent law.

These two systems, although interconnected, serve different goals. The FDA protects patients, while patent law rewards invention.

The FDA’s latest guidance seeks to adapt oversight for AI-enabled medical technologies that change over time. Its framework for predetermined change control plans allows developers to update AI models without resubmitting complete applications, provided updates stay within approved limits.

The approach promotes innovation while maintaining transparency, bias control and post-market safety. By clarifying how adaptive AI devices can evolve safely, the FDA aims to balance accountability with progress.
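A predetermined change control plan works like a pre-approved envelope: an update ships without a new submission only if it stays inside agreed bounds. The sketch below is hypothetical; the metric names and thresholds are illustrative and are not FDA criteria.

```python
# Hypothetical sketch of a predetermined change control check: a model
# update is accepted only if its metrics stay inside the pre-approved
# envelope. Thresholds and metric names are illustrative, not FDA criteria.

APPROVED_ENVELOPE = {
    "sensitivity": (0.92, 1.00),   # (min, max) allowed after an update
    "specificity": (0.90, 1.00),
    "max_drift": (0.00, 0.05),     # allowed shift vs. the cleared baseline
}

def within_envelope(metrics, envelope=APPROVED_ENVELOPE):
    """Return True if every reported metric falls inside its approved range."""
    return all(lo <= metrics[name] <= hi for name, (lo, hi) in envelope.items())

update = {"sensitivity": 0.94, "specificity": 0.91, "max_drift": 0.03}
print(within_envelope(update))  # True
```

An update that drifts outside any bound would fall back to the full resubmission path.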

Patent protection remains more complex. US courts continue to exclude non-human inventors, creating tension when AI contributes to discoveries.

Legal precedents such as Thaler v. Vidal and Alice Corp. v. CLS Bank limit patent eligibility for algorithms or diagnostic methods that resemble abstract ideas or natural laws. Companies must show human-led innovation and technical improvement beyond routine computation to secure patents.

Aligning regulatory and intellectual property strategies is now essential. Developers who engage regulators early, design flexible change control plans and coordinate patent claims with development timelines can reduce risk and accelerate market entry.

Integrating these processes helps ensure AI technologies in healthcare advance safely while preserving inventors’ rights and innovation incentives.

AMD powers US AI factory supercomputers for national research

The US Department of Energy and AMD are joining forces to expand America’s AI and scientific computing power through two new supercomputers at Oak Ridge National Laboratory.

Named Lux and Discovery, the systems will drive the country’s sovereign AI strategy, combining public and private investment worth around $1 billion to strengthen research, innovation, and security infrastructure.

Lux, arriving in 2026, will become the nation’s first dedicated AI factory for science.

Built with AMD’s EPYC CPUs and Instinct GPUs alongside Oracle and HPE technologies, Lux will accelerate research across materials, medicine, and advanced manufacturing, supporting the US AI Action Plan and boosting the Department of Energy’s AI capacity.

Discovery, set for deployment in 2028, will deepen collaboration between the DOE, AMD, and HPE. Powered by AMD’s next-generation ‘Venice’ CPUs and MI430X GPUs, Discovery will train and deploy AI models on secure US-built systems, protecting national data and competitiveness.

It aims to deliver faster energy, biology, and national security breakthroughs while maintaining high efficiency and open standards.

AMD’s CEO, Dr Lisa Su, said the collaboration represents the best of public-private partnership, advancing the nation’s foundation for science and innovation.

US Energy Secretary Chris Wright described the initiative as proof that America leads when government and industry work together toward shared AI and scientific goals.

Virginia’s data centre boom divides residents and industry

Loudoun County in Virginia, known as Data Center Alley, now hosts nearly 200 data centres powering much of the world’s internet and AI infrastructure. Their growth has brought vast economic benefits but stirred concerns about noise, pollution, and rising energy bills for nearby residents.

The facilities occupy about 3% of the county’s land yet generate 40% of its tax revenue. Locals say the constant humming and industrial sprawl have driven away wildlife and inflated electricity costs, which have surged by over 250% in five years.
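A 250% rise over five years is easier to grasp as an annual rate. A quick back-of-envelope conversion (illustrative arithmetic only, not a claim about Loudoun County's actual tariff history):

```python
# Back-of-envelope: what compound annual growth rate does a 250% rise
# in electricity costs over five years imply? (Illustrative arithmetic.)

def annualised_growth(total_increase_pct, years):
    """Convert a cumulative percentage increase into a compound annual rate."""
    ratio = 1 + total_increase_pct / 100
    return (ratio ** (1 / years) - 1) * 100

rate = annualised_growth(250, 5)
print(f"~{rate:.1f}% per year")  # ~28.5% per year
```

In other words, bills compounding at roughly 28% a year, far ahead of general inflation.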

Despite opposition, new data centre projects in the US and worldwide continue to receive state support. The industry contributes $5.5 billion annually to Virginia’s economy and sustains around 74,000 jobs. Additionally, President Trump’s administration recently pledged to accelerate permits.

Residents like Emily Kasabian argue the expansion is eroding community life, replacing trees with concrete and machinery to fuel AI. Activists are now lobbying for construction pauses, warning that unchecked development threatens to transform affluent suburbs beyond recognition.

Qualcomm and HUMAIN power Saudi Arabia’s AI transformation

HUMAIN and Qualcomm Technologies have launched a collaboration to deploy advanced AI infrastructure in Saudi Arabia, aiming to position the Kingdom as a global hub for AI.

Announced ahead of the Future Investment Initiative conference, the project will deliver the world’s first fully optimised edge-to-cloud AI system, expanding Saudi Arabia’s regional and global inferencing services capabilities.

In 2026, HUMAIN plans to deploy 200 megawatts of Qualcomm’s AI200 and AI250 rack solutions to power large-scale AI inference services.

The partnership combines HUMAIN’s regional infrastructure and full AI stack with Qualcomm’s semiconductor expertise, creating a model for nations seeking to develop sovereign AI ecosystems.

The initiative will also integrate HUMAIN’s Saudi-developed ALLaM models with Qualcomm’s AI platforms, offering enterprise and government customers tailor-made solutions for industry-specific needs.

The collaboration supports Saudi Arabia’s strategy to drive economic growth through AI and semiconductor innovation, reinforcing its ambition to lead the next wave of global intelligent computing.

Qualcomm’s CEO Cristiano Amon said the partnership would help the Kingdom build a technology ecosystem to accelerate its AI ambitions.

HUMAIN CEO Tareq Amin added that combining local insight with Qualcomm’s product leadership will establish Saudi Arabia as a key player in global AI and semiconductor development.

UN cybercrime treaty signed in Hanoi amid rights concerns

Around 60 countries signed a landmark UN cybercrime convention in Hanoi, seeking faster cooperation against online crime. Leaders cited trillions in annual losses from scams, ransomware, and trafficking. The pact enters into force after 40 ratifications.

UN supporters say the treaty will streamline evidence sharing, extradition requests, and joint investigations. Provisions target phishing, ransomware, online exploitation, and hate speech. Backers frame the deal as a boost to global security.

Critics warn the text’s breadth could criminalise security research and dissent. The Cybersecurity Tech Accord called it a surveillance treaty. Activists fear expansive data sharing with weak safeguards.

The UNODC argues the agreement includes rights protections and space for legitimate research. Officials say oversight and due process remain essential. Implementation choices will decide outcomes on the ground.

The EU, Canada, and Russia signed in Hanoi, underscoring geopolitical buy-in. As host, Vietnam drew scrutiny over censorship and arrests. Officials there cast the treaty as a step toward resilience and stature.

Japan’s G-QuAT and Fujitsu sign pact to boost quantum competitiveness

Fujitsu and AIST’s G-QuAT have signed a collaboration to lift Japan’s quantum competitiveness, aligning roadmaps, labs, and funding toward commercialisation. The pact focuses on practical outcomes: industry-ready prototypes, interoperable tooling, and clear pathways from research to deployment.

The partners will pool superconducting know-how, shared fabs and test sites, and structured talent exchanges. Common testbeds will reduce duplication, lift throughput, and speed benchmarks. Joint governance will release reference designs, track milestones, and align on global standards.

Scaling quantum requires integrated systems, not just faster qubits. Priorities include full-stack validation across cryogenics and packaging, controls, and error mitigation. Demonstrations target reproducible, large-scale superconducting processors, with results for peer review and industry pilots.

G-QuAT will act as an international hub, convening suppliers, universities, and overseas labs for co-development. Fujitsu brings product engineering, supply chain, and quality systems to translate research into deployable hardware. External partners will be invited to run comparative trials.

AIST anchors the effort with Japan’s national research capacity and a mission to bridge lab and market. Fujitsu aligns commercialisation and service models to emerging standards. Near-term work packages include joint pilots and verification suites, followed by prototypes aimed at industrial adoption.

AI to improve forecasts and early warnings worldwide

The World Meteorological Organisation has highlighted the potential of AI to improve weather forecasts and early warning systems. The organisation urged the public, private, and academic sectors to use AI and machine learning to protect communities from extreme heat and rainfall.

The Extraordinary World Meteorological Congress approved resolutions to speed up Early Warnings for All, targeting universal coverage by 2027. AI will support, not replace, traditional forecasting, providing national meteorological services with ethical, transparent, and open-source tools.

Pilot projects, including a collaboration between Norway and Malawi, have already improved local forecasts.

Congress stressed helping low- and middle-income countries, least developed countries, and small island states access AI technology. The WMO Integrated Processing and Prediction System (WIPPS) will use AI to provide advanced forecasts for better preparation against extreme weather and environmental events.

Congress also advanced the Global Greenhouse Gas Watch, WMO’s first Youth Action Plan, and reforms to boost efficiency amid financial constraints. The WMO continues underlining its essential role in resilient development, disaster risk reduction, and global economic stability.

AI deepfake videos spark ethical and environmental concerns

Deepfake videos created by AI platforms like OpenAI’s Sora have gone viral, generating hyper-realistic clips of deceased celebrities and historical figures in often offensive scenarios.

Families of figures like Dr Martin Luther King Jr have publicly appealed to AI firms to prevent using their loved ones’ likenesses, highlighting ethical concerns around the technology.

Beyond the emotional impact, Dr Kevin Grecksch of Oxford University warns that producing deepfakes carries a significant environmental footprint. Instead of occurring on phones, video generation happens in data centres that consume vast amounts of electricity and water for cooling, often at industrial scales.

The surge in deepfake content has been rapid, with Sora downloaded over a million times in five days. Dr Grecksch urges users to consider the environmental cost, suggesting more integrated thinking about where data centres are built and how they are cooled to minimise their impact.

As governments promote AI growth zones in areas such as South Oxfordshire, questions remain over sustainable infrastructure. Users are encouraged to balance technological enthusiasm with environmental mindfulness, recognising the hidden costs behind creating and sharing AI-generated media.
