Stronger safeguards arrive with OpenAI’s GPT-5.2 release

OpenAI has launched GPT-5.2, highlighting improved safety performance in conversations involving mental health. The company said the update strengthens how its models respond to signs of suicide, self-harm, emotional distress, and emotional reliance on the chatbot.

The release follows criticism and legal challenges accusing ChatGPT of contributing to psychosis, paranoia, and delusional thinking in some users. Several cases have highlighted the risks of prolonged emotional engagement with AI systems.

In response to a wrongful death lawsuit involving a US teenager, OpenAI denied responsibility while stating that ChatGPT encouraged the user to seek help. The company also committed to improving responses when users display warning signs of mental health crises.

OpenAI said GPT-5.2 produces fewer undesirable responses in sensitive situations than earlier versions. According to the company, the model scores higher on internal safety tests related to self-harm, emotional reliance, and mental health.

The update builds on OpenAI’s use of a training approach known as safe completion, which aims to balance helpfulness and safety. Detailed performance information has been published in the GPT-5.2 system card.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI reshaped European healthcare in 2025

Europe’s healthcare systems turned increasingly to AI in 2025, using new tools to predict disease, speed diagnosis, and reduce administrative workloads.

Countries including Finland, Estonia and Spain adopted AI to train staff, analyse medical data and detect illness earlier, while hospitals introduced AI scribes to free up doctors’ time with patients.

Researchers also advanced AI models able to forecast more than a thousand conditions many years before diagnosis, including heart disease, diabetes and certain cancers.

Further tools detected heart problems in seconds, flagged prostate cancer risks more quickly and monitored patients recovering from stent procedures instead of relying only on manual checks.

Experts warned that AI should support clinicians rather than replace them, as doctors continue to outperform AI in emergency care and chatbots struggle with mental health needs.

Security specialists also cautioned that extremists could try to exploit AI to develop biological threats, prompting calls for stronger safeguards.

Despite such risks, AI-driven approaches are now embedded across European medicine, from combating antibiotic-resistant bacteria to streamlining routine paperwork. Policymakers and health leaders are increasingly focused on how to scale innovation safely instead of simply chasing rapid deployment.

Bitcoin adoption remains uneven across US states

A recent SmartAsset study based on IRS tax return data highlights sharp regional differences in Bitcoin participation across the US. Crypto engagement is concentrated in certain states, driven by income, tech adoption, and local economic culture.

Washington leads the rankings, with 2.43 per cent of taxpayers reporting crypto transactions, followed by Utah, California, Colorado and New Jersey. These states have strong tech sectors, higher incomes, and populations familiar with digital financial tools.

New Jersey’s position also shows that crypto interest extends beyond traditional tech hubs in the West. At the opposite end, states such as West Virginia, Mississippi, Kentucky, Louisiana and Alabama record participation close to or below one per cent.

Lower household incomes, smaller tech industries and a preference for conventional financial products appear to limit reported crypto activity, although some low-level holdings may not surface in tax data.

The data also reflects crypto’s sensitivity to market cycles. Participation surged during the 2021 bull run before declining sharply in 2022 as prices fell.

Higher-income households remain far more active than middle-income earners, reinforcing the view that Bitcoin adoption in the US is still largely speculative and unevenly distributed.

AI is changing how Europeans work and learn

Generative AI has become an everyday tool across Europe, with millions using platforms such as ChatGPT, Gemini and Grok for personal, work, and educational purposes. Eurostat data shows that around a third of people aged 16–74 tried AI tools at least once in 2025.

Adoption varies widely across the continent. Norway leads with 56 percent of the population using AI, while Turkey records only 17 percent.

Within the EU, Denmark tops usage at 48 percent, and Romania lags at 18 percent. Northern and digitally advanced countries dominate, while southern, central-eastern, and Balkan nations show lower engagement.

Researchers attribute these differences to general digital literacy, internet use, and familiarity with technology rather than to government policy alone. AI tools are used more for personal purposes than for work.

Across the EU, 25 percent use AI for personal tasks, compared with 15 percent for professional applications.

Usage in education is even lower, with only 9 percent employing AI in formal learning, peaking at 21 percent in Sweden and Switzerland and dropping to just 1 percent in Hungary.

Experts stress that while access is essential, understanding how to apply AI effectively remains a key barrier. Countries with strong digital foundations adopt AI more, while limited awareness and skills restrict use, emphasising the need for AI literacy and infrastructure.

New Chinese rules target AI chatbots and emotional manipulation

China has proposed new rules to restrict AI chatbots from influencing human emotions in ways that could lead to suicide or self-harm. The Cyberspace Administration released draft regulations, open for public comment until late January.

The measures target human-like interactive AI services, including emotionally responsive chatbots that simulate personality and engage users through text, images, audio, or video. Officials say the proposals signal a shift from content safety towards emotional safety as AI companions gain popularity.

Under the draft rules, AI chatbot services would be barred from encouraging self-harm, engaging in emotional manipulation, or generating obscene, violent, or gambling-related content. Providers would be required to involve human moderators if users express suicidal intent.

Additional provisions would strengthen safeguards for minors, including guardian consent and usage limits for emotionally interactive systems. Platforms would also face security assessments and interaction reminders when operating services with large user bases.

Experts say the proposals could mark the world’s first attempt to regulate emotionally responsive AI systems. The move comes as China-based chatbot firms pursue public listings and as global scrutiny grows over how conversational AI affects mental health and user behaviour.

Germany considers age limits after Australian social media ban

Digital Minister Karsten Wildberger has indicated support for stricter age limits on social media after Australia banned teenagers under 16 from using major online platforms. He said age restrictions were more than justified and that the policy had clear merit.

Australia’s new rules require companies to remove accounts belonging to users under 16 and prevent new ones from being created. Officials argued that the measure aims to reduce cyberbullying, grooming and mental health harm instead of relying only on parental supervision.

European Commission President Ursula von der Leyen said she was inspired by the move, although social media companies and civil liberties groups have criticised it.

Germany has already appointed an expert commission to examine child and youth protection in the digital era. The panel is expected to publish recommendations by summer 2025, which could include policies on social media access and potential restrictions on mobile phone use in schools.

ZhiCube showcases new approach to embodied AI deployment

Chinese robotics firm AI² Robotics has launched ZhiCube, described as a modular embodied AI service space integrating humanoid robots into public venues. The concept debuted in Beijing and Shenzhen, with initial installations in a city park and a shopping mall.

ZhiCube places the company’s AlphaBot 2 humanoid robot inside a modular unit designed for service delivery. The system supports multiple service functions, including coffee and ice cream sales, entertainment, and retail, which can be combined based on location and demand.

At the core of the platform is a human–robot collaboration model powered by the company’s embodied AI system, GOVLA. The robot can perceive its surroundings, understand tasks, and adapt its role dynamically during daily operations.

AI² Robotics says the system adjusts work patterns based on foot traffic, allocating tasks between robots and human staff as demand fluctuates. Robots handle standardised services, while humans focus on creative or complex activities.

The company plans to deploy 1,000 ZhiCube units across China over the next three years. It aims to position the platform as a scalable urban infrastructure, supported by in-house manufacturing and long-term operational data from multiple industries.

AI chatbots spreading rumours raise new risks

Researchers warn that AI chatbots are spreading rumours about real people without human oversight. Unlike human gossip, bot-to-bot exchanges can escalate unchecked, growing more extreme as they move through AI networks.

Philosophers Joel Krueger and Lucy Osler from the University of Exeter describe this phenomenon as ‘feral gossip.’ It involves negative evaluations about absent third parties and can persist undetected across platforms.

Real-world examples include tech reporter Kevin Roose, who encountered hostile AI-generated assessments of his work from multiple chatbots, seemingly amplified as the content filtered through training data.

The researchers highlight that AI systems lack the social checks humans provide, allowing rumours to intensify unchecked. Chatbots are designed to appear trustworthy and personal, so negative statements can seem credible.

Such misinformation has already affected journalists, academics, and public officials, sometimes prompting legal action. Technosocial harms from AI gossip extend beyond embarrassment. False claims can damage reputations, influence decisions, and persist online and offline.

While chatbots are not conscious, their prioritisation of conversational fluency over factual accuracy can make the rumours they spread difficult to detect and correct.

Sberbank issues Russia’s first crypto-backed loan

Sberbank has issued Russia’s first crypto-backed loan, providing financing to Intelion Data, one of the country’s largest Bitcoin miners. The bank did not disclose the loan size or the cryptocurrency used as collateral but described the move as a pilot project.

The loan leveraged Sberbank’s own cryptocurrency custody solution, Rutoken, ensuring the digital assets’ safety throughout the loan period. The bank plans to offer similar loans and collaborate with the Central Bank on regulatory frameworks.

Intelion Data welcomed the deal, calling it a milestone for Russia’s crypto mining sector and a potential model for scaling similar financing across the industry. The company is expanding with a mining centre near the Kalinin Nuclear Power Plant and a gas power station.

Sberbank has also been testing decentralised finance tools and supports gradual legalisation of cryptocurrencies in Russia. VTB and other banks are preparing to support crypto transactions, while the Central Bank may allow limited retail trading.

Korean Air employee data breach exposes 30,000 records after cyberattack

Investigators are examining a major data breach involving Korean Air after personal records for around 30,000 employees were exposed in a cyberattack on a former subsidiary.

The incident affected KC&D Service, which handled in-flight catering before being sold to private equity firm Hahn and Company in 2020.

The leaked information is understood to include employee names and bank account numbers. Korean Air said customer records were not affected and that it carried out emergency security checks without waiting for the intrusion to be confirmed.

Korean Air also reported the breach to the relevant authorities.

Executives said the company is focusing on identifying the full scope of the breach and who has been affected, while urging KC&D to strengthen controls and prevent any recurrence. Korean Air also plans to upgrade internal data protection measures.

The attack follows a similar case at Asiana Airlines last week, where details of about 10,000 employees were compromised, raising wider concerns over cybersecurity resilience across South Korea's aviation sector.
