Stronger safeguards arrive with OpenAI’s GPT-5.2 release

OpenAI has launched GPT-5.2, highlighting improved safety performance in conversations involving mental health. The company said the update strengthens how its models respond to signs of suicide, self-harm, emotional distress, and reliance on the chatbot.

The release follows criticism and legal challenges accusing ChatGPT of contributing to psychosis, paranoia, and delusional thinking in some users. Several cases have highlighted the risks of prolonged emotional engagement with AI systems.

In response to a wrongful death lawsuit involving a US teenager, OpenAI denied responsibility while stating that ChatGPT encouraged the user to seek help. The company also committed to improving responses when users display warning signs of mental health crises.

OpenAI said GPT-5.2 produces fewer undesirable responses in sensitive situations than earlier versions. According to the company, the model scores higher on internal safety tests related to self-harm, emotional reliance, and mental health.

The update builds on OpenAI’s use of a training approach known as safe completion, which aims to balance helpfulness and safety. Detailed performance information has been published in the GPT-5.2 system card.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New Chinese rules target AI chatbots and emotional manipulation

China has proposed new rules to restrict AI chatbots from influencing human emotions in ways that could lead to suicide or self-harm. The Cyberspace Administration released draft regulations, open for public comment until late January.

The measures target human-like interactive AI services, including emotionally responsive AI chatbots, that simulate personality and engage users through text, images, audio, or video. Officials say the proposals signal a shift from content safety towards emotional safety as AI companions gain popularity.

Under the draft rules, AI chatbot services would be barred from encouraging self-harm, emotional manipulation, or obscene, violent, or gambling-related content. Providers would be required to involve human moderators if users express suicidal intent.

Additional provisions would strengthen safeguards for minors, including guardian consent and usage limits for emotionally interactive systems. Platforms would also face security assessments and interaction reminders when operating services with large user bases.

Experts say the proposals could mark the world’s first attempt to regulate emotionally responsive AI systems. The move comes as China-based chatbot firms pursue public listings and as global scrutiny grows over how conversational AI affects mental health and user behaviour.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The AI terms that shaped debate and disruption in 2025

AI continued to dominate public debate in 2025, not only through new products and investment rounds, but also through a rapidly evolving vocabulary that captured both promise and unease.

From ambitious visions of superintelligence to cultural shorthand like ‘slop’, language became a lens through which society processed another turbulent year for AI.

Several terms reflected the industry’s technical ambitions. Concepts such as superintelligence, reasoning models, world models and physical intelligence pointed to efforts to push AI beyond text generation towards deeper problem-solving and real-world interaction.

Developments by companies including Meta, OpenAI, DeepSeek and Google DeepMind reinforced the sense that scale, efficiency and new training approaches are now competing pathways to progress, rather than sheer computing power alone.

Other expressions highlighted growing social and economic tensions. Words like hyperscalers, bubble and distillation entered mainstream debate as data centres expanded, valuations rose, and cheaper model-building methods disrupted established players.

At the same time, legal and ethical debates intensified around fair use, chatbot behaviour and the psychological impact of prolonged AI interaction, underscoring the gap between innovation speed and regulatory clarity.

Cultural reactions also influenced the development of the AI lexicon. Terms such as vibe coding, agentic and sycophancy revealed how generative systems are reshaping work, creativity and user trust, while ‘slop’ emerged as a blunt critique of low-quality, AI-generated content flooding online spaces.

Together, these phrases chart a year in which AI moved further into everyday life, leaving society to wrestle with what should be encouraged, controlled or questioned.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Digital rules dispute deepens as US administration avoids trade retaliation

The US administration is criticising foreign digital regulations affecting major online platforms while avoiding trade measures that could disrupt the US economy. Officials say the rules disproportionately impact American technology companies.

US officials have paused or cancelled trade discussions with the UK, the EU, and South Korea. Current negotiations are focused on rolling back digital taxes, privacy rules, and platform regulations that Washington views as unfair barriers to US firms.

US administration officials describe the moves as a negotiating tactic rather than an escalation toward tariffs. While trade investigations into digital practices have been raised as a possibility, officials have stressed that the goal remains a negotiated outcome rather than a renewed trade conflict.

Technology companies have pressed for firmer action, though some industry figures warn that aggressive retaliation could trigger a wider digital trade war. Officials acknowledge that prolonged disputes with major partners could ultimately harm both US firms and global markets.

Despite rhetorical escalation and targeted threats against European companies, the US administration has so far avoided dismantling existing trade agreements. Analysts say mounting pressure may soon force Washington to choose between compromise and more concrete enforcement measures.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots exploited to create nonconsensual bikini deepfakes

Users of popular AI chatbots are generating bikini deepfakes by manipulating photos of fully clothed women, often without consent. Online discussions show how generative AI tools can be misused to create sexually suggestive deepfakes from ordinary images, raising concerns about image-based abuse.

A now-deleted Reddit thread shared prompts for using Google’s Gemini to alter clothing in photographs. One post asked for a woman’s traditional dress to be changed to a bikini. Reddit removed the content and later banned the subreddit over deepfake-related harassment.

Researchers and digital rights advocates warn that nonconsensual deepfakes remain a persistent form of online harassment. Millions of users have visited AI-powered websites designed to undress people in photos. The trend reflects growing harm enabled by increasingly realistic image generation tools.

Most mainstream AI chatbots prohibit the creation of explicit images and apply safeguards to prevent abuse. However, recent advances in image-editing models have made it easier for users to bypass guardrails using simple prompts, according to limited testing and expert assessments.

Technology companies say their policies ban altering a person’s likeness without consent, with penalties including account suspensions. Legal experts argue that deepfakes involving sexualised imagery represent a core risk of generative AI and that accountability must extend to both users and platforms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Santa Tracker services add new features on Christmas Eve

AI-powered tools are adding new features to long-running Santa Tracker services used by families on Christmas Eve. Platforms run by NORAD and Google allow users to follow Father Christmas’s journey through their Santa Tracker tools, which also introduce interactive and personalised digital experiences.

NORAD’s Santa Tracker, first launched in 1955, now features games, videos, music, and stories in addition to its live tracking map. This year, the service introduced AI-powered features that generate elf-style avatars, create toy ideas, and produce personalised holiday stories for families.

The Santa Tracker presents Santa’s journey on a 3D globe built using open-source mapping technology and satellite imagery. Users can also watch short videos on Santa Cam, featuring Santa travelling to destinations around the world.

Google’s rendition offers similar features, including a live map, estimated arrival times, and interactive activities available throughout December. Santa’s Village includes games, animations, and beginner-friendly coding activities designed for children.

Google Assistant introduces a voice-based experience to its service, enabling users to ask about Santa’s location or receive updates from the North Pole. Both platforms aim to blend tradition with digital tools to create a seamless and engaging holiday experience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Aflac confirms large-scale data breach following cyber incident

US insurance firm Aflac has confirmed that a cyberattack disclosed in June affected around 22.65 million people. The breach involved the theft of sensitive personal and health information; however, the company initially did not specify the number of individuals affected.

In filings with the Texas attorney general, Aflac said the compromised data includes names, dates of birth, home addresses, government-issued identification numbers, driving licence details, and Social Security numbers. Medical and health insurance information was also accessed during the incident.

A separate filing with the Iowa attorney general suggested the attackers may be linked to a known cybercriminal organisation. Federal law enforcement and external cybersecurity specialists indicated the group had been targeting the insurance sector more broadly.

Security researchers have linked a wave of recent insurance-sector breaches to Scattered Spider, a loosely organised group of predominantly young, English-speaking hackers. The timing and targeting of the Aflac incident align with the group’s activity.

The US company stated that it has begun notifying the affected individuals. The company, which reports having around 50 million customers, did not respond to requests for comment. Other insurers, including Erie Insurance and Philadelphia Insurance Companies, reported breaches during the same period.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Fake weight loss adverts removed from TikTok

TikTok removed fake adverts for weight loss drugs after a company impersonating UK retailer Boots used AI-generated videos. The clips falsely showed healthcare professionals promoting prescription-only medicines.

Boots said it contacted TikTok after becoming aware of the misleading adverts circulating on the platform. TikTok confirmed the videos were removed for breaching its rules on deceptive and harmful advertising.

BBC reporting found the account was briefly able to repost the same videos before being taken down. The account appeared to be based in Hong Kong and directed users to a website selling the drugs.

UK health regulators warned that prescription-only weight loss medicines must only be supplied by registered pharmacies. TikTok stated that it continues to strengthen its detection systems and bans the promotion of controlled substances.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Atlas agent mode fortifies OpenAI’s ChatGPT security

ChatGPT Atlas has introduced an agent mode that allows an AI browser agent to view webpages and perform actions directly. The feature supports everyday workflows using the same context as a human user. Expanded capability also increases security exposure.

Prompt injection has emerged as a key threat to browser-based agents, targeting AI behaviour rather than software flaws. Malicious instructions embedded in content can redirect an agent from the user’s intended action. Successful attacks may trigger unauthorised actions.

To address the risk, OpenAI has deployed a security update to Atlas. The update includes an adversarially trained model and strengthened safeguards. It followed internal automated red teaming.

Automated red teaming uses reinforcement learning to train AI attackers that search for complex exploits. Simulations test how agents respond to injected prompts. Findings are used to harden models and system-level defences.

Prompt injection is expected to remain a long-term security challenge for AI agents. Continued investment in testing, training, and rapid mitigation aims to reduce real-world risk. The goal is to achieve reliable and secure AI assistance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI drives Vietnam’s smart city expansion

AI is becoming central to Vietnam’s urban development as major cities adopt data-led systems. Leaders at the Vietnam–Asia Smart City Summit said AI now shapes planning, service delivery and daily operations nationwide.

Experts noted rising pressure on cities, with congestion, pollution and population growth driving demand for more innovative governance. AI is helping authorities shift towards proactive management, using forecasting tools, shared data platforms and real-time supervision.

Speakers highlighted deployments across transport control, environmental monitoring, disaster alerts and administrative oversight. Hanoi and Da Nang presented advanced models, with Da Nang recognised again for achievements in green development and digital operations.

Delegates agreed that long-term progress depends on strong data foundations, closer coordination and clear strategic roadmaps in Vietnam. Many stressed that technology must prioritise public benefit, with citizens placed at the centre of smart-city design.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot