Stronger safeguards arrive with OpenAI’s GPT-5.2 release

OpenAI has launched GPT-5.2, highlighting improved safety performance in conversations involving mental health. The company said the update strengthens how its models respond to signs of suicide, self-harm, emotional distress, and emotional reliance on the chatbot.

The release follows criticism and legal challenges accusing ChatGPT of contributing to psychosis, paranoia, and delusional thinking in some users. Several cases have highlighted the risks of prolonged emotional engagement with AI systems.

In response to a wrongful death lawsuit involving a US teenager, OpenAI denied responsibility while stating that ChatGPT encouraged the user to seek help. The company also committed to improving responses when users display warning signs of mental health crises.

OpenAI said GPT-5.2 produces fewer undesirable responses in sensitive situations than earlier versions. According to the company, the model scores higher on internal safety tests related to self-harm, emotional reliance, and mental health.

The update builds on OpenAI’s use of a training approach known as safe completion, which aims to balance helpfulness and safety. Detailed performance information has been published in the GPT-5.2 system card.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI reshaped European healthcare in 2025

Europe’s healthcare systems turned increasingly to AI in 2025, using new tools to predict disease, speed diagnosis, and reduce administrative workloads.

Countries including Finland, Estonia and Spain adopted AI to train staff, analyse medical data and detect illness earlier, while hospitals introduced AI scribes to free up doctors’ time with patients.

Researchers also advanced AI models able to forecast more than a thousand conditions many years before diagnosis, including heart disease, diabetes and certain cancers.

Further tools detected heart problems in seconds, flagged prostate cancer risks more quickly and monitored patients recovering from stent procedures instead of relying only on manual checks.

Experts warned that AI should support clinicians rather than replace them, as doctors continue to outperform AI in emergency care and chatbots struggle with mental health needs.

Security specialists also cautioned that extremists could try to exploit AI to develop biological threats, prompting calls for stronger safeguards.

Despite such risks, AI-driven approaches are now embedded across European medicine, from combating antibiotic-resistant bacteria to streamlining routine paperwork. Policymakers and health leaders are increasingly focused on how to scale innovation safely instead of simply chasing rapid deployment.


New Chinese rules target AI chatbots and emotional manipulation

China has proposed new rules to restrict AI chatbots from influencing human emotions in ways that could lead to suicide or self-harm. The Cyberspace Administration released draft regulations, open for public comment until late January.

The measures target human-like interactive AI services, including emotionally responsive AI chatbots, that simulate personality and engage users through text, images, audio, or video. Officials say the proposals signal a shift from content safety towards emotional safety as AI companions gain popularity.

Under the draft rules, AI chatbot services would be barred from encouraging self-harm, engaging in emotional manipulation, or generating obscene, violent, or gambling-related content. Providers would be required to involve human moderators if users express suicidal intent.

Additional provisions would strengthen safeguards for minors, including guardian consent and usage limits for emotionally interactive systems. Platforms would also face security assessments and interaction reminders when operating services with large user bases.

Experts say the proposals could mark the world’s first attempt to regulate emotionally responsive AI systems. The move comes as China-based chatbot firms pursue public listings and as global scrutiny grows over how conversational AI affects mental health and user behaviour.


ZhiCube showcases new approach to embodied AI deployment

Chinese robotics firm AI² Robotics has launched ZhiCube, described as a modular embodied AI service space integrating humanoid robots into public venues. The concept debuted in Beijing and Shenzhen, with initial installations in a city park and a shopping mall.

ZhiCube places the company’s AlphaBot 2 humanoid robot inside a modular unit designed for service delivery. The system supports multiple service functions, including coffee, ice cream, entertainment, and retail, which can be combined based on location and demand.

At the core of the platform is a human–robot collaboration model powered by the company’s embodied AI system, GOVLA. The robot can perceive its surroundings, understand tasks, and adapt its role dynamically during daily operations.

AI² Robotics says the system adjusts work patterns based on foot traffic, allocating tasks between robots and human staff as demand fluctuates. Robots handle standardised services, while humans focus on creative or complex activities.

The company plans to deploy 1,000 ZhiCube units across China over the next three years. It aims to position the platform as a scalable urban infrastructure, supported by in-house manufacturing and long-term operational data from multiple industries.


AI in education receives growing attention across the EU

A recent Flash Eurobarometer survey shows that EU citizens consider digital skills essential for all levels of education. Nearly nine in ten respondents believe schools should teach students to manage the effects of technology on mental and physical health.

Most also agree that digital skills deserve the same focus as traditional subjects such as reading, mathematics and science.

The survey highlights growing interest in AI in education. Over half of respondents see AI as both beneficial and challenging, emphasising the need for careful assessment. Citizens also expect teachers to be trained in AI use, including generative AI, to guide students effectively.

While many support smartphone bans in schools, there is strong backing for digital learning tools, with 87% in favour of promoting technology designed specifically for education. Teachers, parents and families are seen as key in fostering safe and responsible technology use.

Overall, EU citizens advocate for a balanced approach that combines digital literacy, responsible use of technology, and the professional support of educators and families to foster a healthy learning environment.


New York orders warning labels on social media features

Authorities in New York State have approved a new law requiring social media platforms to display warning labels when users engage with features that encourage prolonged use.

Labels will appear when people interact with elements such as infinite scrolling, auto-play, like counters or algorithm-driven feeds. The rule applies whenever these services are accessed from within New York.

Governor Kathy Hochul said the move is intended to safeguard young people against potential mental health harms linked to excessive social media use. Warnings will show the first time a user activates one of the targeted features and will then reappear at intervals.

Concerns about the impact on children and teenagers have prompted wider government action. California is considering similar steps, while Australia has already banned social media for under-16s and Denmark plans to follow. The US surgeon general has also called for clearer health warnings.

Researchers continue to examine how social media use relates to anxiety and depression among young users. Platforms now face growing pressure to balance engagement features with stronger protections instead of relying purely on self-regulation.


SK Telecom introduces South Korea’s first hyperscale AI model

Telecommunications firm SK Telecom is preparing to unveil A.X K1, South Korea’s first hyperscale language model, built with 519 billion parameters.

Only around 33 billion parameters are activated during inference, allowing the model to maintain strong performance without demanding excessive computing power. The project is part of a national initiative involving universities and industry partners.
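SK Telecom has not published A.X K1’s architecture in detail, but activating only a subset of parameters per inference is characteristic of mixture-of-experts designs, where a router selects a few expert blocks per token. A minimal illustrative sketch, with all expert counts and sizes hypothetical:

```python
import random

# Illustrative mixture-of-experts routing (hypothetical numbers, not
# confirmed details of A.X K1): of NUM_EXPERTS expert blocks, only the
# TOP_K highest-scoring ones run for each token.
NUM_EXPERTS = 16
TOP_K = 2

def route(token_scores, k=TOP_K):
    """Return indices of the k highest-scoring experts for one token."""
    return sorted(range(len(token_scores)),
                  key=lambda i: token_scores[i], reverse=True)[:k]

def active_fraction(num_experts=NUM_EXPERTS, k=TOP_K):
    """Fraction of expert blocks that run for a single token."""
    return k / num_experts

# The article's reported figures imply a similar sparsity ratio:
# ~33B active of 519B total, i.e. only ~6% of weights per inference.
reported_ratio = 33 / 519

scores = [random.random() for _ in range(NUM_EXPERTS)]
chosen = route(scores)
print(f"experts chosen: {chosen}, "
      f"active fraction: {active_fraction():.3f}, "
      f"reported ratio: {reported_ratio:.3f}")
```

The key design point is that total capacity (all experts) and per-token compute (only the routed experts) are decoupled, which is how a 519-billion-parameter model can run with roughly 33 billion parameters active.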

The company expects A.X K1 to outperform smaller systems in complex reasoning, mathematics and multilingual understanding, while also supporting code generation and autonomous AI agents.

At such a scale, the model can operate as a teacher system that transfers knowledge to smaller, domain-specific tools that might directly improve daily services and industrial processes.
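The teacher-to-student transfer described above is commonly implemented as knowledge distillation, in which a smaller model is trained to match the larger model’s softened output distribution rather than hard labels. A minimal sketch of the standard distillation loss, with illustrative logits not tied to A.X K1:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature gives softer targets."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened outputs.

    Minimising this pushes the student to mimic the teacher's full
    output distribution, not just its single top prediction.
    """
    p = softmax(teacher_logits, temperature)   # soft teacher targets
    q = softmax(student_logits, temperature)   # student predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.2]   # hypothetical logits from the large model
student = [3.5, 1.2, 0.1]   # hypothetical logits from the small model
print(f"distillation loss: {distillation_loss(teacher, student):.4f}")
```

In practice this loss term is combined with ordinary supervised training of the student; the sketch only shows the teacher-matching component.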

Unlike many global models trained mainly in English, A.X K1 has been trained in Korean from the outset so it naturally understands local language, culture and context.

SK Telecom plans to deploy the model through its AI service Adot, which already has more than 10 million subscribers, allowing access via calls, messages, the web and mobile apps.

The company foresees applications in workplace productivity, manufacturing optimisation, gaming dialogue, robotics and semiconductor performance testing.

Research will continue so the model can support the wider AI ecosystem of South Korea, and SK Telecom plans to open-source A.X K1 along with an API to help local developers create new AI agents.


The AI terms that shaped debate and disruption in 2025

AI continued to dominate public debate in 2025, not only through new products and investment rounds, but also through a rapidly evolving vocabulary that captured both promise and unease.

From ambitious visions of superintelligence to cultural shorthand like ‘slop’, language became a lens through which society processed another turbulent year for AI.

Several terms reflected the industry’s technical ambitions. Concepts such as superintelligence, reasoning models, world models and physical intelligence pointed to efforts to push AI beyond text generation towards deeper problem-solving and real-world interaction.

Developments by companies including Meta, OpenAI, DeepSeek and Google DeepMind reinforced the sense that scale, efficiency and new training approaches are now competing pathways to progress, rather than sheer computing power alone.

Other expressions highlighted growing social and economic tensions. Words like hyperscalers, bubble and distillation entered mainstream debate as data centres expanded, valuations rose, and cheaper model-building methods disrupted established players.

At the same time, legal and ethical debates intensified around fair use, chatbot behaviour and the psychological impact of prolonged AI interaction, underscoring the gap between innovation speed and regulatory clarity.

Cultural reactions also influenced the development of the AI lexicon. Terms such as vibe coding, agentic and sycophancy revealed how generative systems are reshaping work, creativity and user trust, while ‘slop’ emerged as a blunt critique of low-quality, AI-generated content flooding online spaces.

Together, these phrases chart a year in which AI moved further into everyday life, leaving society to wrestle with what should be encouraged, controlled or questioned.


Quantum computing milestone achieved by Chinese researchers

Chinese researchers have reported a significant advance in quantum computing using a superconducting system. The Zuchongzhi 3.2 computer reached the fault-tolerant threshold, the point at which error correction begins to improve, rather than degrade, system stability.

The research, led by Pan Jianwei, marks only the second time globally that this threshold has been achieved, following earlier work by Google. The result positions China as the first country outside the United States to demonstrate fault tolerance in a superconducting quantum system.

Unlike Google’s approach, which relies on extensive hardware redundancy, the Chinese team used microwave-based control to suppress errors. Researchers say this method may offer a more efficient path towards scalable quantum computing by reducing system complexity.

The breakthrough addresses a central challenge in quantum computing: qubit instability and the accumulation of undetected errors. Effective error management is crucial for developing larger systems that can maintain reliable quantum states over time.

While practical applications remain distant, researchers describe the experiment as a significant step in solving a foundational problem in quantum system design. The results highlight the growing international competition in the quest for scalable, fault-tolerant quantum computers.


Tsinghua University is emerging as a cornerstone of China’s AI strategy

China’s Tsinghua University has emerged as a central hub in the country’s push to become a global leader in AI. The campus hosts a high level of research activity, with students and faculty working across disciplines related to AI development.

Momentum has been boosted by the success of DeepSeek, an AI startup founded by alumni of Tsinghua University. The company reinforced confidence that Chinese teams can compete with leading international laboratories.

The university’s rise is closely aligned with Beijing’s national technology strategy. Government backing has included subsidies, tax incentives, and policy support, as well as public endorsements of AI entrepreneurs affiliated with Tsinghua.

Patent and publication data highlight the scale of output. Tsinghua has filed thousands of AI-related patents and ranks among the world’s most cited institutions in AI research, reflecting China’s rapidly expanding share of global AI innovation.

Despite this growth, the United States continues to lead in influential patents and top-performing models. Analysts note, however, that a narrowing gap is expected, as China produces a growing share of elite AI researchers and expands AI education from schools to advanced research.
