Stronger safeguards arrive with OpenAI’s GPT-5.2 release

OpenAI has launched GPT-5.2, highlighting improved safety performance in conversations involving mental health. The company said the update strengthens how its models respond to signs of suicide, self-harm, emotional distress, and reliance on the chatbot.

The release follows criticism and legal challenges accusing ChatGPT of contributing to psychosis, paranoia, and delusional thinking in some users. Several cases have highlighted the risks of prolonged emotional engagement with AI systems.

In response to a wrongful death lawsuit involving a US teenager, OpenAI denied responsibility while stating that ChatGPT encouraged the user to seek help. The company also committed to improving responses when users display warning signs of mental health crises.

OpenAI said GPT-5.2 produces fewer undesirable responses in sensitive situations than earlier versions. According to the company, the model scores higher on internal safety tests related to self-harm, emotional reliance, and mental health.

The update builds on OpenAI’s use of a training approach known as safe completion, which aims to balance helpfulness and safety. Detailed performance information has been published in the GPT-5.2 system card.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI reshaped European healthcare in 2025

Europe’s healthcare systems turned increasingly to AI in 2025, using new tools to predict disease, speed diagnosis, and reduce administrative workloads.

Countries including Finland, Estonia and Spain adopted AI to train staff, analyse medical data and detect illness earlier, while hospitals introduced AI scribes to free up doctors’ time with patients.

Researchers also advanced AI models able to forecast more than a thousand conditions many years before diagnosis, including heart disease, diabetes and certain cancers.

Further tools detected heart problems in seconds, flagged prostate cancer risks more quickly and monitored patients recovering from stent procedures instead of relying only on manual checks.

Experts warned that AI should support clinicians rather than replace them, as doctors continue to outperform AI in emergency care and chatbots struggle with mental health needs.

Security specialists also cautioned that extremists could try to exploit AI to develop biological threats, prompting calls for stronger safeguards.

Despite such risks, AI-driven approaches are now embedded across European medicine, from combating antibiotic-resistant bacteria to streamlining routine paperwork. Policymakers and health leaders are increasingly focused on how to scale innovation safely instead of simply chasing rapid deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bitcoin adoption remains uneven across US states

A recent SmartAsset study based on IRS tax return data highlights sharp regional differences in Bitcoin participation across the US. Crypto engagement is concentrated in certain states, driven by income, tech adoption, and local economic culture.

Washington leads the rankings, with 2.43 per cent of taxpayers reporting crypto transactions, followed by Utah, California, Colorado and New Jersey. These states have strong tech sectors, higher incomes, and populations familiar with digital financial tools.

New Jersey’s position also shows that crypto interest extends beyond traditional tech hubs in the West. At the opposite end, states such as West Virginia, Mississippi, Kentucky, Louisiana and Alabama record participation close to or below one per cent.

Lower household incomes, smaller tech industries and a preference for conventional financial products appear to limit reported crypto activity, although some low-level holdings may not surface in tax data.

The data also reflects crypto’s sensitivity to market cycles. Participation surged during the 2021 bull run before declining sharply in 2022 as prices fell.

Higher-income households remain far more active than middle-income earners, reinforcing the view that Bitcoin adoption in the US is still largely speculative and unevenly distributed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New Chinese rules target AI chatbots and emotional manipulation

China has proposed new rules to restrict AI chatbots from influencing human emotions in ways that could lead to suicide or self-harm. The Cyberspace Administration released draft regulations, open for public comment until late January.

The measures target human-like interactive AI services, including emotionally responsive AI chatbots, that simulate personality and engage users through text, images, audio, or video. Officials say the proposals signal a shift from content safety towards emotional safety as AI companions gain popularity.

Under the draft rules, AI chatbot services would be barred from encouraging self-harm, engaging in emotional manipulation, or generating obscene, violent, or gambling-related content. Providers would be required to involve human moderators if users express suicidal intent.

Additional provisions would strengthen safeguards for minors, including guardian consent and usage limits for emotionally interactive systems. Platforms would also face security assessments and interaction reminders when operating services with large user bases.

Experts say the proposals could mark the world’s first attempt to regulate emotionally responsive AI systems. The move comes as China-based chatbot firms pursue public listings and as global scrutiny grows over how conversational AI affects mental health and user behaviour.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Germany considers age limits after Australian social media ban

Digital Minister Karsten Wildberger has indicated support for stricter age limits on social media after Australia banned teenagers under 16 from using major online platforms. He said age restrictions were more than justified and that the policy had clear merit.

Australia’s new rules require companies to remove the profiles of users under 16 and stop new ones from being created. Officials argued that the measure aims to reduce cyberbullying, grooming and mental health harm rather than relying only on parental supervision.

The European Commission President said she was inspired by the move, although social media companies and civil liberties groups have criticised it.

Germany has already appointed an expert commission to examine child and youth protection in the digital era. The panel is expected to publish recommendations by summer 2025, which could include policies on social media access and potential restrictions on mobile phone use in schools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ZhiCube showcases new approach to embodied AI deployment

Chinese robotics firm AI² Robotics has launched ZhiCube, described as a modular embodied AI service space integrating humanoid robots into public venues. The concept debuted in Beijing and Shenzhen, with initial installations in a city park and a shopping mall.

ZhiCube places the company’s AlphaBot 2 humanoid robot inside a modular unit designed for service delivery. The system supports multiple functions, including coffee, ice cream, entertainment, and retail, which can be combined based on location and demand.

At the core of the platform is a human–robot collaboration model powered by the company’s embodied AI system, GOVLA. The robot can perceive its surroundings, understand tasks, and adapt its role dynamically during daily operations.

AI² Robotics says the system adjusts work patterns based on foot traffic, allocating tasks between robots and human staff as demand fluctuates. Robots handle standardised services, while humans focus on creative or complex activities.

The company plans to deploy 1,000 ZhiCube units across China over the next three years. It aims to position the platform as a scalable urban infrastructure, supported by in-house manufacturing and long-term operational data from multiple industries.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Korean Air employee data breach exposes 30,000 records after cyberattack

Investigators are examining a major data breach involving Korean Air after personal records for around 30,000 employees were exposed in a cyberattack on a former subsidiary.

The incident affected KC&D Service, which handled in-flight catering for the airline before being sold to private equity firm Hahn and Company in 2020.

The leaked information is understood to include employee names and bank account numbers. Korean Air said customer records were not affected and that emergency security checks were carried out without waiting for confirmation of the intrusion.

Korean Air also reported the breach to the relevant authorities.

Executives said the company is focusing on identifying the full scope of the breach and who has been affected, while urging KC&D to strengthen controls and prevent any recurrence. Korean Air also plans to upgrade internal data protection measures.

The attack follows a similar case at Asiana Airlines last week, where details of about 10,000 employees were compromised, raising wider concerns over cybersecurity resilience across South Korea’s aviation sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New York orders warning labels on social media features

Authorities in New York State have approved a new law requiring social media platforms to display warning labels when users engage with features that encourage prolonged use.

Labels will appear when people interact with elements such as infinite scrolling, auto-play, like counters or algorithm-driven feeds. The rule applies whenever these services are accessed from within New York.

Governor Kathy Hochul said the move is intended to safeguard young people against potential mental health harms linked to excessive social media use. Warnings will show the first time a user activates one of the targeted features and will then reappear at intervals.

Concerns about the impact on children and teenagers have prompted wider government action. California is considering similar steps, while Australia has already banned social media for under-16s and Denmark plans to follow. The US surgeon general has also called for clearer health warnings.

Researchers continue to examine how social media use relates to anxiety and depression among young users. Platforms now face growing pressure to balance engagement features with stronger protections instead of relying purely on self-regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SK Telecom introduces South Korea’s first hyperscale AI model

Telecommunications firm SK Telecom is preparing to unveil A.X K1, South Korea’s first hyperscale language model, built with 519 billion parameters.

Only around 33 billion parameters are activated during inference, allowing the model to maintain strong performance without demanding excessive computing power. The project is part of a national initiative involving universities and industry partners.
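SK Telecom has not detailed the architecture, but a ratio of roughly 33 billion active to 519 billion total parameters is characteristic of sparse designs such as mixture-of-experts, in which a router activates only a few expert sub-networks for each token. The minimal sketch below assumes a generic top-k routing scheme; all names and sizes are illustrative and not taken from A.X K1.

```python
import numpy as np

# Toy sparse mixture-of-experts layer: only top_k of n_experts sub-networks run
# for each token, so the parameters used per token are a small fraction of the
# total. All sizes here are illustrative, not A.X K1's real configuration.
rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 16, 2

router_w = rng.normal(size=(d_model, n_experts))               # routing weights
experts = [rng.normal(size=(d_model, d_model)) * 0.02          # one weight matrix
           for _ in range(n_experts)]                          # per expert

def moe_forward(x):
    """x: (d_model,) token embedding -> (d_model,) output from the top-k experts."""
    logits = x @ router_w                                      # score each expert
    chosen = np.argsort(logits)[-top_k:]                       # pick the top-k experts
    gates = np.exp(logits[chosen] - logits[chosen].max())
    gates /= gates.sum()                                       # softmax over chosen experts
    return sum(g * (x @ experts[i]) for g, i in zip(gates, chosen))

token = rng.normal(size=d_model)
print("output shape:", moe_forward(token).shape)

active = top_k * d_model * d_model + d_model * n_experts       # parameters touched per token
total = n_experts * d_model * d_model + d_model * n_experts
print(f"active/total parameters: {active}/{total} ({active / total:.0%})")
```

Because only the selected experts run, per-token compute grows with the number of active parameters rather than the total, which is how a model can scale its capacity without a proportional increase in inference cost.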

The company expects A.X K1 to outperform smaller systems in complex reasoning, mathematics and multilingual understanding, while also supporting code generation and autonomous AI agents.

At such a scale, the model can operate as a teacher system that transfers knowledge to smaller, domain-specific tools that might directly improve daily services and industrial processes.
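This teacher role corresponds to knowledge distillation, in which a smaller student model is trained to match the larger model's output distribution. A minimal sketch of a soft-target distillation loss follows, assuming temperature-scaled softmax outputs; SK Telecom has not disclosed its actual recipe, so the code is purely illustrative.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """Mean KL divergence from the softened teacher distribution to the student's.
    Minimising this trains the student to imitate the teacher's behaviour."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return np.mean(np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1))

# Illustrative logits for a 5-class toy task (not real model outputs).
rng = np.random.default_rng(1)
teacher_logits = rng.normal(size=(4, 5)) * 3.0   # confident large-model outputs
student_logits = rng.normal(size=(4, 5))         # smaller model's current outputs
print(f"distillation loss: {distillation_loss(teacher_logits, student_logits):.3f}")
```

In practice a term like this is usually combined with the student's ordinary task loss, so the smaller, domain-specific model inherits the large model's behaviour while remaining cheap enough to deploy in everyday services.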

Unlike many global models trained mainly in English, A.X K1 has been trained in Korean from the outset so it naturally understands local language, culture and context.

SK Telecom plans to deploy the model through its AI service Adot, which already has more than 10 million subscribers, allowing access via calls, messages, the web and mobile apps.

The company foresees applications in workplace productivity, manufacturing optimisation, gaming dialogue, robotics and semiconductor performance testing.

Research will continue so the model can support South Korea’s wider AI ecosystem, and SK Telecom plans to open-source A.X K1 along with an API to help local developers create new AI agents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Quantum computing milestone achieved by Chinese researchers

Chinese researchers have reported a significant advance in quantum computing using a superconducting system. The Zuchongzhi 3.2 computer reached the fault-tolerance threshold, the point at which adding error correction improves stability instead of degrading it.

The research, led by Pan Jianwei, marks only the second time globally that this threshold has been achieved, following earlier work by Google. The result positions China as the first country outside the United States to demonstrate fault tolerance in a superconducting quantum system.

Unlike Google’s approach, which relies on extensive hardware redundancy, the Chinese team used microwave-based control to suppress errors. Researchers say this method may offer a more efficient path towards scalable quantum computing by reducing system complexity.

The breakthrough addresses a central challenge in quantum computing: qubit instability and the accumulation of undetected errors. Effective error management is crucial for developing larger systems that can maintain reliable quantum states over time.
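The threshold behaviour can be illustrated with the simplest possible code, a bit-flip repetition code in which a logical error requires a majority of physical qubits to fail: below a critical physical error rate, enlarging the code suppresses logical errors, while above it extra qubits make matters worse. This toy model is far simpler than the error correction used in superconducting experiments such as Zuchongzhi 3.2 and is offered only as an illustration of why the threshold matters.

```python
from math import comb

def logical_error_rate(p, d):
    """Probability that a distance-d bit-flip repetition code fails, i.e. that
    more than half of its d physical qubits flip when each flips with rate p."""
    return sum(comb(d, k) * p**k * (1 - p)**(d - k) for k in range(d // 2 + 1, d + 1))

# Below this toy code's threshold (50% for pure bit flips), growing the code
# suppresses logical errors; above it, adding qubits degrades stability.
# Thresholds for real superconducting hardware codes are far lower, roughly the 1% scale.
for p in (0.05, 0.60):
    rates = [logical_error_rate(p, d) for d in (3, 5, 7, 9)]
    trend = "improves" if rates[-1] < rates[0] else "degrades"
    detail = ", ".join(f"d={d}: {r:.4f}" for d, r in zip((3, 5, 7, 9), rates))
    print(f"physical error rate {p:.2f} -> {detail} ({trend} with code size)")
```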

While practical applications remain distant, researchers describe the experiment as a significant step in solving a foundational problem in quantum system design. The results highlight the growing international competition in the quest for scalable, fault-tolerant quantum computers.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!