New ChatGPT model reduces unsafe replies by up to 80%

OpenAI has updated ChatGPT’s default model after working with more than 170 mental health clinicians to help the system better spot distress, de-escalate conversations and point users to real-world support.

The update routes sensitive exchanges to safer models, expands access to crisis hotlines and adds gentle prompts to take breaks, aiming to reduce harmful responses rather than simply offering more content.
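The routing behaviour described above can be sketched as a toy example. OpenAI's actual system is not public; the marker list, model names and thresholds below are all invented for illustration only.

```python
# Hypothetical sketch of the routing idea: a lightweight check flags
# sensitive messages and routes them to a safer model, attaching crisis
# resources and break prompts. All names and values here are invented.

DISTRESS_MARKERS = {"hopeless", "self-harm", "can't go on"}

def route(message: str) -> dict:
    """Decide which model handles a message and which safeguards to attach."""
    sensitive = any(marker in message.lower() for marker in DISTRESS_MARKERS)
    return {
        "model": "safer-model" if sensitive else "default-model",
        "attach_hotline": sensitive,           # surface crisis hotline info
        "suggest_break": len(message) > 500,   # gentle break prompt on long input
    }

print(route("I feel hopeless lately"))
# routes to the safer model with hotline resources attached
```

In practice such routing would rely on a trained classifier rather than keyword matching; the sketch only shows the shape of the decision.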

Measured improvements are significant across three priority areas: severe mental health symptoms such as psychosis and mania, self-harm and suicide, and unhealthy emotional reliance on AI.

OpenAI reports that undesired responses fell between 65 and 80 percent in production traffic and that independent clinician reviews show significant gains compared with earlier models. At the same time, rare but high-risk scenarios remain a focus for further testing.

The company used a five-step process to shape the changes: define harms, measure them, validate approaches with experts, mitigate risks through post-training and product updates, and keep iterating.

Evaluations combine real-world traffic estimates with structured adversarial tests, so better ChatGPT safeguards are in place now, and further refinements are planned as understanding and measurement methods evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Italian political elite targeted in hacking scandal using stolen state data

Italian authorities have uncovered a vast hacking operation that built detailed dossiers on politicians and business leaders using data siphoned from state databases. Prosecutors say the group, operating under the name Equalize, tried to use the information to manipulate Italy’s political class.

The network, allegedly led by former police inspector Carmine Gallo, businessman Enrico Pazzali and cybersecurity expert Samuele Calamucci, created a system called Beyond to compile thousands of records from state systems, including confidential financial and criminal records.

Police wiretaps captured suspects boasting they could operate all over Italy. Targets included senior officials such as former Prime Minister Matteo Renzi and the president of the Senate, Ignazio La Russa.

Investigators say the gang presented itself as a corporate intelligence firm while illegally accessing phones, computers and government databases. The group allegedly sold reputational dossiers to clients, including major firms such as Eni, Barilla and Heineken, which have all denied wrongdoing or said they were unaware of any illegal activity.

The probe began when police monitoring a northern Italian gangster uncovered links to Gallo. Gallo, who helped solve cases including the 1995 murder of Maurizio Gucci, leveraged contacts in law enforcement and intelligence to arrange unlawful data searches for Equalize.

The operation collapsed in autumn 2024, with four arrests and dozens questioned. After months of questioning and plea bargaining, 15 defendants are due to enter pleas this month. Officials warn the case shows how hackers can weaponise state data, calling it ‘a real and actual attack on democracy’.

FDA and patent law create dual hurdles for AI-enabled medical technologies

AI reshapes healthcare by powering more precise and adaptive medical devices and diagnostic systems.

Yet, innovators face two significant challenges: navigating the US Food and Drug Administration’s evolving regulatory framework and overcoming legal uncertainty under US patent law.

These two systems, although interconnected, serve different goals. The FDA protects patients, while patent law rewards invention.

The FDA’s latest guidance seeks to adapt oversight for AI-enabled medical technologies that change over time. Its framework for predetermined change control plans allows developers to update AI models without resubmitting complete applications, provided updates stay within approved limits.

This approach promotes innovation while maintaining transparency, bias control and post-market safety. By clarifying how adaptive AI devices can evolve safely, the FDA aims to balance accountability with progress.

Patent protection remains more complex. US courts continue to exclude non-human inventors, creating tension when AI contributes to discoveries.

Legal precedents such as Thaler v. Vidal and Alice Corp. v. CLS Bank limit patent eligibility for algorithms or diagnostic methods that resemble abstract ideas or natural laws. Companies must show human-led innovation and technical improvement beyond routine computation to secure patents.

Aligning regulatory and intellectual property strategies is now essential. Developers who engage regulators early, design flexible change control plans and coordinate patent claims with development timelines can reduce risk and accelerate market entry.

Integrating these processes helps ensure AI technologies in healthcare advance safely while preserving inventors’ rights and innovation incentives.

AMD powers US AI factory supercomputers for national research

The US Department of Energy and AMD are joining forces to expand America’s AI and scientific computing power through two new supercomputers at Oak Ridge National Laboratory.

Named Lux and Discovery, the systems will drive the country’s sovereign AI strategy, combining public and private investment worth around $1 billion to strengthen research, innovation, and security infrastructure.

Lux, arriving in 2026, will become the nation’s first dedicated AI factory for science.

Built with AMD’s EPYC CPUs and Instinct GPUs alongside Oracle and HPE technologies, Lux will accelerate research across materials, medicine, and advanced manufacturing, supporting the US AI Action Plan and boosting the Department of Energy’s AI capacity.

Discovery, set for deployment in 2028, will deepen collaboration between the DOE, AMD, and HPE. Powered by AMD’s next-generation ‘Venice’ CPUs and MI430X GPUs, Discovery will train and deploy AI models on secure US-built systems, protecting national data and competitiveness.

It aims to deliver faster energy, biology, and national security breakthroughs while maintaining high efficiency and open standards.

AMD’s CEO, Dr Lisa Su, said the collaboration represents the best of public-private partnership, advancing the nation’s foundation for science and innovation.

US Energy Secretary Chris Wright described the initiative as proof that America leads when government and industry work together toward shared AI and scientific goals.

Virginia’s data centre boom divides residents and industry

Loudoun County in Virginia, known as Data Center Alley, now hosts nearly 200 data centres powering much of the world’s internet and AI infrastructure. Their growth has brought vast economic benefits but stirred concerns about noise, pollution, and rising energy bills for nearby residents.

The facilities occupy about 3% of the county’s land yet generate 40% of its tax revenue. Locals say the constant humming and industrial sprawl have driven away wildlife and inflated electricity costs, which have surged by over 250% in five years.

Despite opposition, new US and global data centre projects continue to receive state support. The industry contributes $5.5 billion annually to Virginia’s economy and sustains around 74,000 jobs. Additionally, President Trump’s administration recently pledged to accelerate permits.

Residents like Emily Kasabian argue the expansion is eroding community life, replacing trees with concrete and machinery to fuel AI. Activists are now lobbying for construction pauses, warning that unchecked development threatens to transform affluent suburbs beyond recognition.

Qualcomm and HUMAIN power Saudi Arabia’s AI transformation

HUMAIN and Qualcomm Technologies have launched a collaboration to deploy advanced AI infrastructure in Saudi Arabia, aiming to position the Kingdom as a global hub for AI.

Announced ahead of the Future Investment Initiative conference, the project will deliver the world’s first fully optimised edge-to-cloud AI system, expanding Saudi Arabia’s regional and global inferencing services capabilities.

In 2026, HUMAIN plans to deploy 200 megawatts of Qualcomm’s AI200 and AI250 rack solutions to power large-scale AI inference services.

The partnership combines HUMAIN’s regional infrastructure and full AI stack with Qualcomm’s semiconductor expertise, creating a model for nations seeking to develop sovereign AI ecosystems.

The initiative will also integrate HUMAIN’s Saudi-developed ALLaM models with Qualcomm’s AI platforms, offering enterprise and government customers tailor-made solutions for industry-specific needs.

The collaboration supports Saudi Arabia’s strategy to drive economic growth through AI and semiconductor innovation, reinforcing its ambition to lead the next wave of global intelligent computing.

Qualcomm’s CEO Cristiano Amon said the partnership would help the Kingdom build a technology ecosystem to accelerate its AI ambitions.

HUMAIN CEO Tareq Amin added that combining local insight with Qualcomm’s product leadership will establish Saudi Arabia as a key player in global AI and semiconductor development.

UN cybercrime treaty signed in Hanoi amid rights concerns

Around 60 countries signed a landmark UN cybercrime convention in Hanoi, seeking faster cooperation against online crime. Leaders cited trillions in annual losses from scams, ransomware, and trafficking. The pact enters into force after 40 ratifications.

UN supporters say the treaty will streamline evidence sharing, extradition requests, and joint investigations. Provisions target phishing, ransomware, online exploitation, and hate speech. Backers frame the deal as a boost to global security.

Critics warn the text’s breadth could criminalise security research and dissent. The Cybersecurity Tech Accord called it a surveillance treaty. Activists fear expansive data sharing with weak safeguards.

The UNODC argues the agreement includes rights protections and space for legitimate research. Officials say oversight and due process remain essential. Implementation choices will decide outcomes on the ground.

The EU, Canada, and Russia signed in Hanoi, underscoring geopolitical buy-in. Vietnam, as host, drew scrutiny over censorship and arrests. Officials there cast the treaty as a step toward resilience and stature.

Diella 2.0 set to deliver 83 new AI assistants to aid Albania’s MPs

Albania’s AI minister Diella will ‘give birth’ to 83 virtual assistants for ruling-party MPs, Prime Minister Edi Rama said, framing a quirky rollout of parliamentary copilots that record debates and propose responses.

Diella began in January as a public-service chatbot on e-Albania, then ‘Diella 2.0’ added voice and an avatar in traditional dress. Built with Microsoft by the National Agency for Information Society, it now oversees specific state tech contracts.

The legality is murky: the constitution of Albania requires ministers to be natural persons. A presidential decree left it to Rama to define the role, setting up likely court challenges from opposition lawmakers.

Rama says the ‘children’ will brief MPs, summarise absences, and suggest counterarguments through 2026, experimenting with automating the day-to-day legislative grind without replacing elected officials.

Reactions range from table-thumping scepticism to cautious curiosity, as other governments debate AI personhood and limits; Diella could become a template, or a cautionary tale for ‘ministerial’ bots.

AI deepfake videos spark ethical and environmental concerns

Deepfake videos created by AI platforms like OpenAI’s Sora have gone viral, generating hyper-realistic clips of deceased celebrities and historical figures in often offensive scenarios.

Families of figures like Dr Martin Luther King Jr have publicly appealed to AI firms to prevent using their loved ones’ likenesses, highlighting ethical concerns around the technology.

Beyond the emotional impact, Dr Kevin Grecksch of Oxford University warns that producing deepfakes carries a significant environmental footprint. Instead of occurring on phones, video generation happens in data centres that consume vast amounts of electricity and water for cooling, often at industrial scales.

The surge in deepfake content has been rapid, with Sora downloaded over a million times in five days. Dr Grecksch urges users to consider the environmental cost, suggesting more integrated thinking about where data centres are built and how they are cooled to minimise their impact.

As governments promote AI growth in areas like South Oxfordshire, questions remain over sustainable infrastructure. Users are encouraged to balance technological enthusiasm with environmental mindfulness, recognising the hidden costs behind creating and sharing AI-generated media.

Google expands Earth AI for disaster response and environmental monitoring

US tech giant Google has expanded access to Earth AI, a platform that combines decades of geospatial modelling with Gemini’s advanced reasoning.

Enterprises, cities, and nonprofits can now rapidly analyse environmental and disaster-related data, enabling faster, informed decisions to protect communities.

During the 2025 California wildfires, Google’s AI helped alert millions and guide them to safety, showing the potential of Earth AI in crisis response.

A key feature, Geospatial Reasoning, allows the AI to connect multiple models (such as satellite imagery, population maps, and weather forecasts) to assess which communities and infrastructure are most at risk.

Instead of manual data analysis, organisations can now identify vulnerable areas and prioritise relief efforts in minutes.
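The layering idea described above can be sketched with a toy example: several normalised geospatial signals are combined into a single per-cell risk score. The function, weights and data here are invented for illustration and bear no relation to Google's actual models or APIs.

```python
import numpy as np

# Illustrative only: mimics the idea of layering geospatial signals
# (hazard forecast, population density, infrastructure) to rank risk.
# All weights and data below are made up for the sketch.

def rank_risk(hazard, population, infrastructure, weights=(0.5, 0.3, 0.2)):
    """Combine normalised raster layers into one risk score per grid cell."""
    layers = [np.asarray(l, dtype=float) for l in (hazard, population, infrastructure)]
    # Normalise each layer to [0, 1] so the weights are comparable.
    norm = [(l - l.min()) / (l.max() - l.min() + 1e-9) for l in layers]
    return sum(w * l for w, l in zip(weights, norm))

# Toy 2x2 grid: cell (0, 0) has high hazard, population and infrastructure.
hazard = [[0.9, 0.1], [0.2, 0.3]]
population = [[5000, 200], [800, 100]]
infrastructure = [[3, 1], [2, 0]]

score = rank_risk(hazard, population, infrastructure)
worst = np.unravel_index(np.argmax(score), score.shape)
print(worst)  # the grid cell with the highest combined risk
```

Real geospatial reasoning would operate on large rasters with learned models rather than fixed weights; the sketch only shows why combining layers lets risk be ranked in one pass instead of by manual analysis.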

Earth AI now includes tools to detect patterns in satellite imagery, such as drying rivers, harmful algae blooms, or vegetation encroachment on infrastructure. These insights support environmental monitoring and early warnings, letting authorities respond before disasters escalate.

The models are available on Google Cloud to Trusted Testers, allowing integration with external datasets for tailored analysis.

Several organisations are already leveraging Earth AI for the public good. WHO AFRO uses it to monitor cholera risks in the Democratic Republic of Congo, while Planet and Airbus analyse satellite imagery for deforestation and power line safety.

Bellwether uses Earth AI for hurricane prediction, enabling faster insurance claim processing and recovery. Google aims to make these tools broadly accessible to support global crisis management, public health, and environmental protection.
