Stronger safeguards arrive with OpenAI’s GPT-5.2 release

OpenAI has launched GPT-5.2, highlighting improved safety performance in conversations involving mental health. The company said the update strengthens how its models respond to signs of suicide risk, self-harm, emotional distress, and unhealthy reliance on the chatbot.

The release follows criticism and legal challenges accusing ChatGPT of contributing to psychosis, paranoia, and delusional thinking in some users. Several cases have highlighted the risks of prolonged emotional engagement with AI systems.

In response to a wrongful death lawsuit involving a US teenager, OpenAI denied responsibility while stating that ChatGPT encouraged the user to seek help. The company also committed to improving responses when users display warning signs of mental health crises.

OpenAI said GPT-5.2 produces fewer undesirable responses in sensitive situations than earlier versions. According to the company, the model scores higher on internal safety tests related to self-harm, emotional reliance, and mental health.

The update builds on OpenAI’s use of a training approach known as safe completions, which aims to balance helpfulness and safety. Detailed performance information has been published in the GPT-5.2 system card.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI reshaped European healthcare in 2025

Europe’s healthcare systems turned increasingly to AI in 2025, using new tools to predict disease, speed diagnosis, and reduce administrative workloads.

Countries including Finland, Estonia and Spain adopted AI to train staff, analyse medical data and detect illness earlier, while hospitals introduced AI scribes to free up doctors’ time with patients.

Researchers also advanced AI models able to forecast more than a thousand conditions many years before diagnosis, including heart disease, diabetes and certain cancers.

Further tools detected heart problems in seconds, flagged prostate cancer risks more quickly and monitored patients recovering from stent procedures instead of relying only on manual checks.

Experts warned that AI should support clinicians rather than replace them, as doctors continue to outperform AI in emergency care and chatbots struggle with mental health needs.

Security specialists also cautioned that extremists could try to exploit AI to develop biological threats, prompting calls for stronger safeguards.

Despite such risks, AI-driven approaches are now embedded across European medicine, from combating antibiotic-resistant bacteria to streamlining routine paperwork. Policymakers and health leaders are increasingly focused on how to scale innovation safely instead of simply chasing rapid deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI is changing how Europeans work and learn

Generative AI has become an everyday tool across Europe, with millions using platforms such as ChatGPT, Gemini and Grok for personal, work and educational purposes. Eurostat data shows that around a third of people aged 16–74 tried AI tools at least once in 2025.

Adoption varies widely across the continent. Norway leads with 56 percent of the population using AI, while Turkey records only 17 percent.

Within the EU, Denmark tops usage at 48 percent, and Romania lags at 18 percent. Northern and digitally advanced countries dominate, while southern, central-eastern, and Balkan nations show lower engagement.

Researchers attribute this to general digital literacy, internet use, and familiarity with technology rather than government policy alone. AI tools are used more for personal purposes than for work.

Across the EU, 25 percent use AI for personal tasks, compared with 15 percent for professional applications.

Usage in education is even lower, with only 9 percent employing AI in formal learning, peaking at 21 percent in Sweden and Switzerland, and dropping to just 1 percent in Hungary.

Experts stress that while access is essential, understanding how to apply AI effectively remains a key barrier. Countries with strong digital foundations adopt AI more, while limited awareness and skills restrict use, emphasising the need for AI literacy and infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New Chinese rules target AI chatbots and emotional manipulation

China has proposed new rules to restrict AI chatbots from influencing human emotions in ways that could lead to suicide or self-harm. The Cyberspace Administration of China released the draft regulations, which are open for public comment until late January.

The measures target human-like interactive AI services, including emotionally responsive chatbots that simulate personality and engage users through text, images, audio, or video. Officials say the proposals signal a shift from content safety towards emotional safety as AI companions gain popularity.

Under the draft rules, AI chatbot services would be barred from encouraging self-harm, engaging in emotional manipulation, or serving obscene, violent, or gambling-related content. Providers would be required to involve human moderators if users express suicidal intent.

Additional provisions would strengthen safeguards for minors, including guardian consent and usage limits for emotionally interactive systems. Platforms would also face security assessments and interaction reminders when operating services with large user bases.

Experts say the proposals could mark the world’s first attempt to regulate emotionally responsive AI systems. The move comes as China-based chatbot firms pursue public listings and as global scrutiny grows over how conversational AI affects mental health and user behaviour.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ZhiCube showcases new approach to embodied AI deployment

Chinese robotics firm AI² Robotics has launched ZhiCube, described as a modular embodied AI service space integrating humanoid robots into public venues. The concept debuted in Beijing and Shenzhen, with initial installations in a city park and a shopping mall.

ZhiCube places the company’s AlphaBot 2 humanoid robot inside a modular unit designed for service delivery. The system supports multiple functions, including coffee and ice cream service, entertainment, and retail, which can be combined based on location and demand.

At the core of the platform is a human–robot collaboration model powered by the company’s embodied AI system, GOVLA. The robot can perceive its surroundings, understand tasks, and adapt its role dynamically during daily operations.

AI² Robotics says the system adjusts work patterns based on foot traffic, allocating tasks between robots and human staff as demand fluctuates. Robots handle standardised services, while humans focus on creative or complex activities.

The company plans to deploy 1,000 ZhiCube units across China over the next three years. It aims to position the platform as a scalable urban infrastructure, supported by in-house manufacturing and long-term operational data from multiple industries.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots spreading rumours raise new risks

Researchers warn AI chatbots are spreading rumours about real people without human oversight. Unlike human gossip, bot-to-bot exchanges can escalate unchecked, growing more extreme as they move through AI networks.

Philosophers Joel Krueger and Lucy Osler from the University of Exeter describe this phenomenon as ‘feral gossip’. It involves negative evaluations about absent third parties and can persist undetected across platforms.

Real-world examples include tech reporter Kevin Roose, who encountered hostile AI-generated assessments of his work from multiple chatbots, seemingly amplified as the content filtered through training data.

The researchers highlight that AI systems lack the social checks humans provide, allowing rumours to intensify unchecked. Chatbots are designed to appear trustworthy and personal, so negative statements can seem credible.

Such misinformation has already affected journalists, academics, and public officials, sometimes prompting legal action. Technosocial harms from AI gossip extend beyond embarrassment. False claims can damage reputations, influence decisions, and persist online and offline.

While chatbots are not conscious, their prioritisation of conversational fluency over factual accuracy can make the rumours they spread difficult to detect and correct.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI in education receives growing attention across the EU

A recent Flash Eurobarometer survey shows that EU citizens consider digital skills essential for all levels of education. Nearly nine in ten respondents believe schools should teach students to manage the effects of technology on mental and physical health.

Most also agree that digital skills deserve the same focus as traditional subjects such as reading, mathematics and science.

The survey highlights growing interest in AI in education. Over half of respondents see AI as both beneficial and challenging, emphasising the need for careful assessment. Citizens also expect teachers to be trained in AI use, including generative AI, to guide students effectively.

While many support smartphone bans in schools, there is strong backing for digital learning tools, with 87 percent in favour of promoting technology designed specifically for education. Teachers, parents and families are seen as key in fostering safe and responsible technology use.

Overall, EU citizens advocate for a balanced approach that combines digital literacy, responsible use of technology, and the professional support of educators and families to foster a healthy learning environment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SK Telecom introduces South Korea’s first hyperscale AI model

South Korean telecommunications firm SK Telecom is preparing to unveil A.X K1, the country’s first hyperscale language model, built with 519 billion parameters.

Only around 33 billion parameters are activated during inference, so the model can maintain strong performance without demanding excessive computing power. The project is part of a national initiative involving universities and industry partners.
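
The article does not spell out the architecture, but a model with 519 billion total parameters that activates only around 33 billion per query is characteristic of sparse mixture-of-experts designs. The toy Python sketch below illustrates that general idea under this assumption: a router selects a few ‘experts’ per input, so most parameters sit idle on any single forward pass. All sizes and names here are illustrative, not details of A.X K1.

```python
# Minimal sketch of sparse mixture-of-experts routing (assumed, not
# confirmed, to be how a 519B-total / ~33B-active model works).
# Expert counts, dimensions, and top-k are toy values for illustration.
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS, TOP_K = 16, 2           # route each token to 2 of 16 experts
D_MODEL, D_HIDDEN = 64, 256        # toy dimensions

# Router plus one weight matrix per expert: total params >> active params.
router_w = rng.normal(size=(D_MODEL, N_EXPERTS))
experts = [rng.normal(size=(D_MODEL, D_HIDDEN)) for _ in range(N_EXPERTS)]

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Forward one token vector x through only its top-k experts."""
    logits = x @ router_w                      # score every expert
    top = np.argsort(logits)[-TOP_K:]          # indices of the k best
    gates = np.exp(logits[top])
    gates /= gates.sum()                       # softmax over chosen experts
    # Only TOP_K expert matrices are touched; the rest stay idle.
    return sum(g * np.maximum(x @ experts[i], 0.0)
               for g, i in zip(gates, top))

token = rng.normal(size=D_MODEL)
out = moe_forward(token)
print(f"output shape {out.shape}, ~{TOP_K / N_EXPERTS:.0%} of expert params active")
```

With 2 of 16 experts active, an eighth of the expert parameters participate in each forward pass, loosely mirroring the 33-of-519-billion ratio described above.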

The company expects A.X K1 to outperform smaller systems in complex reasoning, mathematics and multilingual understanding, while also supporting code generation and autonomous AI agents.

At this scale, the model can also operate as a teacher system, transferring knowledge to smaller, domain-specific models that can directly improve daily services and industrial processes.
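
Teacher-to-student transfer of this kind is usually done with knowledge distillation, where a compact model learns to match the large model’s softened output distribution rather than only its single top answer. The sketch below shows the core loss under that generic reading; it illustrates the standard technique, not SK Telecom’s published method, and the logits and temperature are made up.

```python
# Minimal sketch of knowledge distillation: a student is trained to match
# a teacher's temperature-softened output distribution. Generic technique,
# not SK Telecom's actual recipe; all values below are illustrative.
import numpy as np

def softmax(z: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    z = z / temperature
    z = z - z.max()                  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions."""
    p = softmax(np.asarray(teacher_logits, dtype=float), temperature)
    q = softmax(np.asarray(student_logits, dtype=float), temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Toy logits over a 4-way vocabulary: the student is nudged toward the
# teacher's full distribution, including its softer second choices.
teacher = [4.0, 1.0, 0.5, -2.0]
student = [2.0, 2.0, 0.0, -1.0]
print(f"distillation loss: {distillation_loss(teacher, student):.4f}")
```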

Unlike many global models trained mainly in English, A.X K1 has been trained in Korean from the outset so it naturally understands local language, culture and context.

SK Telecom plans to deploy the model through its AI service Adot, which already has more than 10 million subscribers, allowing access via calls, messages, the web and mobile apps.

The company foresees applications in workplace productivity, manufacturing optimisation, gaming dialogue, robotics and semiconductor performance testing.

Research will continue so the model can support South Korea’s wider AI ecosystem, and SK Telecom plans to open-source A.X K1, along with an API, to help local developers create new AI agents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The AI terms that shaped debate and disruption in 2025

AI continued to dominate public debate in 2025, not only through new products and investment rounds, but also through a rapidly evolving vocabulary that captured both promise and unease.

From ambitious visions of superintelligence to cultural shorthand like ‘slop’, language became a lens through which society processed another turbulent year for AI.

Several terms reflected the industry’s technical ambitions. Concepts such as superintelligence, reasoning models, world models and physical intelligence pointed to efforts to push AI beyond text generation towards deeper problem-solving and real-world interaction.

Developments by companies including Meta, OpenAI, DeepSeek and Google DeepMind reinforced the sense that scale, efficiency and new training approaches are now competing pathways to progress, rather than sheer computing power alone.

Other expressions highlighted growing social and economic tensions. Words like hyperscalers, bubble and distillation entered mainstream debate as data centres expanded, valuations rose, and cheaper model-building methods disrupted established players.

At the same time, legal and ethical debates intensified around fair use, chatbot behaviour and the psychological impact of prolonged AI interaction, underscoring the gap between innovation speed and regulatory clarity.

Cultural reactions also influenced the development of the AI lexicon. Terms such as vibe coding, agentic and sycophancy revealed how generative systems are reshaping work, creativity and user trust, while ‘slop’ emerged as a blunt critique of low-quality, AI-generated content flooding online spaces.

Together, these phrases chart a year in which AI moved further into everyday life, leaving society to wrestle with what should be encouraged, controlled or questioned.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tsinghua University is emerging as a cornerstone of China’s AI strategy

China’s Tsinghua University has emerged as a central hub in the country’s push to become a global leader in AI. The campus hosts a high level of research activity, with students and faculty working across disciplines related to AI development.

Momentum has been boosted by the success of DeepSeek, an AI startup founded by alumni of Tsinghua University. The company reinforced confidence that Chinese teams can compete with leading international laboratories.

The university’s rise is closely aligned with Beijing’s national technology strategy. Government backing has included subsidies, tax incentives, and policy support, as well as public endorsements of AI entrepreneurs affiliated with Tsinghua.

Patent and publication data highlight the scale of output. Tsinghua has filed thousands of AI-related patents and ranks among the world’s most cited institutions in AI research, reflecting China’s rapidly expanding share of global AI innovation.

Despite this growth, the United States continues to lead in influential patents and top-performing models. Analysts note, however, that a narrowing gap is expected, as China produces a growing share of elite AI researchers and expands AI education from schools to advanced research.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!