New ChatGPT model reduces unsafe replies by up to 80%

OpenAI has updated ChatGPT’s default model after working with more than 170 mental health clinicians to help the system better spot distress, de-escalate conversations and point users to real-world support.

The update routes sensitive exchanges to safer models, expands access to crisis hotlines and adds gentle prompts to take breaks, aiming to reduce harmful responses rather than simply offering more content.

Measured improvements are significant across three priority areas: severe mental health symptoms such as psychosis and mania, self-harm and suicide, and unhealthy emotional reliance on AI.

OpenAI reports that undesired responses fell between 65 and 80 percent in production traffic and that independent clinician reviews show significant gains compared with earlier models. At the same time, rare but high-risk scenarios remain a focus for further testing.

The company used a five-step process to shape the changes: define harms, measure them, validate approaches with experts, mitigate risks through post-training and product updates, and keep iterating.

Evaluations combine real-world traffic estimates with structured adversarial tests, so better ChatGPT safeguards are in place now, and further refinements are planned as understanding and measurement methods evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI200 and AI250 spearhead a rack-scale inference push from Qualcomm

Qualcomm unveiled AI200 and AI250 data-centre accelerators aimed at high-throughput, low-TCO generative AI inference. AI200 targets rack-level deployment with high performance per dollar per watt and 768 GB LPDDR per card for large models.

AI250 introduces a near-memory architecture that boosts effective memory bandwidth by over tenfold while lowering power draw. Qualcomm pitches the design for disaggregated serving, improving hardware utilisation across large fleets.
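For memory-bound LLM decoding, token throughput scales roughly with effective memory bandwidth, which is why a tenfold bandwidth gain matters for inference. A rough roofline-style illustration (all figures below are hypothetical placeholders, not published AI200/AI250 specifications):

```python
# Roofline-style sketch: memory-bound decode throughput scales with
# effective memory bandwidth. All numbers are illustrative placeholders,
# not Qualcomm specifications.

def decode_tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Each generated token streams the full weight set once (batch size 1,
    KV-cache traffic ignored), so throughput ~= bandwidth / model size."""
    return bandwidth_gb_s / model_size_gb

baseline_bw = 500.0                # GB/s, hypothetical DRAM-attached baseline
near_memory_bw = baseline_bw * 10  # the >10x effective-bandwidth claim
model_size = 70.0                  # GB, e.g. a ~70B-parameter model at 8-bit weights

print(decode_tokens_per_second(baseline_bw, model_size))     # ~7 tokens/s
print(decode_tokens_per_second(near_memory_bw, model_size))  # ~71 tokens/s
```

Under this simple model, the same hardware budget serves an order of magnitude more decode traffic, which is the basis of Qualcomm's low-TCO pitch.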

Both arrive as full racks with direct liquid cooling, PCIe for scale-up, Ethernet for scale-out, and confidential computing. Qualcomm quotes around 160 kW per rack for thermally efficient, dense inference.

A hyperscaler-grade software stack spans apps to system software with one-click onboarding of Hugging Face models. Support covers leading frameworks, inference engines, and optimisation techniques to simplify secure, scalable deployments.

Commercial timing splits the roadmap: AI200 in 2026 and AI250 in 2027. Qualcomm commits to an annual cadence for data-centre inference, aiming to lead in performance, energy efficiency, and total cost of ownership.


A generative AI model helps athletes avoid injuries and recover faster

Researchers at the University of California, San Diego, have developed a generative AI model designed to prevent sports injuries and assist rehabilitation.

The system, named BIGE (Biomechanics-informed GenAI for Exercise Science), integrates data on human motion with biomechanical constraints such as muscle force limits to create realistic training guidance.

BIGE can generate video demonstrations of optimal movements that athletes can imitate to enhance performance or avoid injury. It can also produce adaptive motions suited for athletes recovering from injuries, offering a personalised approach to rehabilitation.

The model merges generative AI with physically accurate biomechanical modelling, overcoming limitations of previous systems that produced anatomically unrealistic results or required heavy computational resources.

To train BIGE, researchers used motion-capture data of athletes performing squats, converting them into 3D skeletal models with precise force calculations. The project’s next phase will expand to other types of movements and individualised training models.
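The biomechanics-informed idea can be sketched as constrained generation: propose candidate motions, score them with a physics surrogate, and keep only those within force limits. A toy sketch (the sampler, force model, and limit are stand-ins for illustration, not the BIGE implementation):

```python
import random

# Toy sketch of biomechanics-constrained generation: propose candidate
# motions, estimate peak muscle force with a surrogate model, and reject
# candidates that violate the force limit. The "motion" here is a single
# knee-flexion depth; BIGE itself works with full 3D skeletal models.

FORCE_LIMIT_N = 4000.0  # hypothetical peak muscle-force limit, in newtons

def propose_squat_depth(rng: random.Random) -> float:
    """Candidate squat depth in degrees of knee flexion."""
    return rng.uniform(60.0, 140.0)

def estimated_peak_force(depth_deg: float) -> float:
    """Stand-in surrogate: deeper squats load the muscles more."""
    return 1500.0 + 25.0 * depth_deg

def generate_safe_motion(seed: int = 0, max_tries: int = 1000) -> float:
    rng = random.Random(seed)
    for _ in range(max_tries):
        depth = propose_squat_depth(rng)
        if estimated_peak_force(depth) <= FORCE_LIMIT_N:
            return depth  # accepted: within the biomechanical limit
    raise RuntimeError("no feasible motion found")

depth = generate_safe_motion()
print(f"suggested depth: {depth:.1f} deg, "
      f"peak force: {estimated_peak_force(depth):.0f} N")
```

Lowering the force limit for a recovering athlete yields gentler accepted motions, which is the personalised-rehabilitation idea in miniature.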

Beyond sports, researchers suggest the tool could predict fall risks among the elderly. Professor Andrew McCulloch described the technology as ‘the future of exercise science’, while co-author Professor Rose Yu said its methods could be widely applied across healthcare and fitness.


FDA and patent law create dual hurdles for AI-enabled medical technologies

AI reshapes healthcare by powering more precise and adaptive medical devices and diagnostic systems.

Yet, innovators face two significant challenges: navigating the US Food and Drug Administration’s evolving regulatory framework and overcoming legal uncertainty under US patent law.

These two systems, although interconnected, serve different goals. The FDA protects patients, while patent law rewards invention.

The FDA’s latest guidance seeks to adapt oversight for AI-enabled medical technologies that change over time. Its framework for predetermined change control plans allows developers to update AI models without resubmitting complete applications, provided updates stay within approved limits.

The approach promotes innovation while maintaining transparency, bias control and post-market safety. By clarifying how adaptive AI devices can evolve safely, the FDA aims to balance accountability with progress.

Patent protection remains more complex. US courts continue to exclude non-human inventors, creating tension when AI contributes to discoveries.

Legal precedents such as Thaler v. Vidal and Alice Corp. v. CLS Bank limit patent eligibility for algorithms or diagnostic methods that resemble abstract ideas or natural laws. Companies must show human-led innovation and technical improvement beyond routine computation to secure patents.

Aligning regulatory and intellectual property strategies is now essential. Developers who engage regulators early, design flexible change control plans and coordinate patent claims with development timelines can reduce risk and accelerate market entry.

Integrating these processes helps ensure AI technologies in healthcare advance safely while preserving inventors’ rights and innovation incentives.


AMD powers US AI factory supercomputers for national research

The US Department of Energy and AMD are joining forces to expand America’s AI and scientific computing power through two new supercomputers at Oak Ridge National Laboratory.

Named Lux and Discovery, the systems will drive the country’s sovereign AI strategy, combining public and private investment worth around $1 billion to strengthen research, innovation, and security infrastructure.

Lux, arriving in 2026, will become the nation’s first dedicated AI factory for science.

Built with AMD’s EPYC CPUs and Instinct GPUs alongside Oracle and HPE technologies, Lux will accelerate research across materials, medicine, and advanced manufacturing, supporting the US AI Action Plan and boosting the Department of Energy’s AI capacity.

Discovery, set for deployment in 2028, will deepen collaboration between the DOE, AMD, and HPE. Powered by AMD’s next-generation ‘Venice’ CPUs and MI430X GPUs, Discovery will train and deploy AI models on secure US-built systems, protecting national data and competitiveness.

It aims to deliver faster energy, biology, and national security breakthroughs while maintaining high efficiency and open standards.

AMD’s CEO, Dr Lisa Su, said the collaboration represents the best of public-private partnership, advancing the nation’s foundation for science and innovation.

US Energy Secretary Chris Wright described the initiative as proof that America leads when government and industry work together toward shared AI and scientific goals.


Celebrity estates push back on Sora as app surges to No.1

OpenAI’s short-video app Sora topped one million downloads in under a week, then ran headlong into a likeness-rights firestorm. Celebrity families and studios demanded stricter controls. Estates for figures like Martin Luther King Jr. sought blocks on unauthorised cameos.

Users showcased hyperreal mashups that blurred satire and deception, from cartoon crossovers to dead celebrities in improbable scenes. All clips are AI-made, yet reposting across platforms spread confusion. Viewers faced a constant real-or-fake dilemma.

Rights holders pressed for consent, compensation, and veto power over characters and personas. OpenAI shifted toward opt-in for copyrighted properties and enabled estate requests to restrict cameos. Policy language on who qualifies as a public figure remains fuzzy.

Agencies and unions amplified pressure, warning of exploitation and reputational risks. Detection firms reported a surge in takedown requests for unauthorised impersonations. Watermarks exist, but removal tools undercut provenance and complicate enforcement.

Researchers warned about a growing fog of doubt as realistic fakes multiply. Every day, people are placed in deceptive scenarios, while bad actors exploit deniability. OpenAI promised stronger guardrails as Sora scales within tighter rules.


Qualcomm and HUMAIN power Saudi Arabia’s AI transformation

HUMAIN and Qualcomm Technologies have launched a collaboration to deploy advanced AI infrastructure in Saudi Arabia, aiming to position the Kingdom as a global hub for AI.

Announced ahead of the Future Investment Initiative conference, the project will deliver the world’s first fully optimised edge-to-cloud AI system, expanding Saudi Arabia’s regional and global inferencing services capabilities.

In 2026, HUMAIN plans to deploy 200 megawatts of Qualcomm’s AI200 and AI250 rack solutions to power large-scale AI inference services.
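At the roughly 160 kW per rack Qualcomm quotes for these systems, the scale of a 200-megawatt commitment is easy to back out. A first-order estimate (IT load only, ignoring cooling, power-delivery losses, and facility overhead):

```python
# First-order sizing of the 200 MW HUMAIN deployment, using the ~160 kW
# per-rack figure Qualcomm quotes for its AI200/AI250 racks. Ignoring
# cooling and facility overhead makes this an upper bound on rack count
# per megawatt of delivered power.

deployment_mw = 200
rack_kw = 160

racks = deployment_mw * 1000 // rack_kw
print(racks)  # 1250 racks
```

On the order of a thousand-plus racks, in other words: a hyperscale-class build-out rather than a pilot.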

The partnership combines HUMAIN’s regional infrastructure and full AI stack with Qualcomm’s semiconductor expertise, creating a model for nations seeking to develop sovereign AI ecosystems.

The initiative will also integrate HUMAIN’s Saudi-developed ALLaM models with Qualcomm’s AI platforms, offering enterprise and government customers tailor-made solutions for industry-specific needs.

The collaboration supports Saudi Arabia’s strategy to drive economic growth through AI and semiconductor innovation, reinforcing its ambition to lead the next wave of global intelligent computing.

Qualcomm’s CEO Cristiano Amon said the partnership would help the Kingdom build a technology ecosystem to accelerate its AI ambitions.

HUMAIN CEO Tareq Amin added that combining local insight with Qualcomm’s product leadership will establish Saudi Arabia as a key player in global AI and semiconductor development.


UN cybercrime treaty signed in Hanoi amid rights concerns

Some 73 countries signed a landmark UN cybercrime convention in Hanoi, seeking faster cooperation against online crime. Leaders cited trillions in annual losses from scams, ransomware, and trafficking. The pact enters into force after 40 ratifications.

UN supporters say the treaty will streamline evidence sharing, extradition requests, and joint investigations. Provisions target phishing, ransomware, online exploitation, and hate speech. Backers frame the deal as a boost to global security.

Critics warn the text’s breadth could criminalise security research and dissent. The Cybersecurity Tech Accord called it a surveillance treaty. Activists fear expansive data sharing with weak safeguards.

The UNODC argues the agreement includes rights protections and space for legitimate research. Officials say oversight and due process remain essential. Implementation choices will decide outcomes on the ground.


MLK estate pushback prompts new Sora 2 guardrails at OpenAI

OpenAI paused the ability to re-create Martin Luther King Jr. in Sora 2 after Bernice King objected to user videos. Company leaders issued a joint statement with the King estate. New guardrails will govern depictions of historical figures on the app.

OpenAI said families and authorised estates should control how likenesses appear. Representatives can request removal or opt-outs. Free speech was acknowledged, but respectful use and consent were emphasised.

Policy scope remains unsettled, including who counts as a public figure. Case-by-case requests may dominate early enforcement. Transparency commitments arrived without full definitions or timelines.

Industry pressure intensified as major talent agencies opted their clients out. CAA and UTA cited exploitation and legal exposure. Some creators welcomed the tool, showing a split among public figures.

User appetite for realistic cameos continues to test boundaries. Rights of publicity and postmortem controls vary by state. OpenAI promised stronger safeguards while Sora 2 evolves.


EU MiCA greenlight turns Blockchain.com’s Malta base into hub

Blockchain.com received a MiCA licence from the Malta Financial Services Authority, enabling passported crypto services across all 30 EEA countries under one EU framework. Leaders called it a step toward safer, consistent access.

Malta becomes the hub for scaling operations, citing regulatory clarity and cross-border support. Under the authorisation, teams will expand secure custody and wallets, enterprise treasury tools, and localised products for EU consumers.

A unified licence streamlines go-to-market and accelerates launches in priority jurisdictions. Institutions gain clearer expectations on safeguarding, disclosures, and governance, while retail users benefit from standardised protections and stronger redress.

Fiorentina D’Amore, who brings deep fintech experience, will lead the EU strategy. Plans include phased rollouts, supervisor engagement, and controls aligned to MiCA’s conduct and prudential requirements across key markets.

Since 2011, Blockchain.com says it has processed over one trillion dollars and serves more than 90 million wallets. Expansion under MiCA adds scalable infrastructure, robust custody, and clearer disclosures for users and institutions.
