New Chinese rules target AI chatbots and emotional manipulation

China has proposed new rules to restrict AI chatbots from influencing human emotions in ways that could lead to suicide or self-harm. The Cyberspace Administration of China released the draft regulations, which are open for public comment until late January.

The measures target human-like interactive AI services that simulate personality and engage users through text, images, audio, or video, including emotionally responsive AI chatbots. Officials say the proposals signal a shift from content safety towards emotional safety as AI companions gain popularity.

Under the draft rules, AI chatbot services would be barred from encouraging self-harm or emotional manipulation and from providing obscene, violent, or gambling-related content. Providers would be required to involve human moderators if users express suicidal intent.

Additional provisions would strengthen safeguards for minors, including guardian consent and usage limits for emotionally interactive systems. Platforms would also face security assessments and interaction reminders when operating services with large user bases.

Experts say the proposals could mark the world’s first attempt to regulate emotionally responsive AI systems. The move comes as China-based chatbot firms pursue public listings and as global scrutiny grows over how conversational AI affects mental health and user behaviour.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Germany considers age limits after Australian social media ban

Digital Minister Karsten Wildberger has indicated support for stricter age limits on social media after Australia banned teenagers under 16 from using major online platforms. He said age restrictions were more than justified and that the policy had clear merit.

Australia’s new rules require companies to remove the profiles of users under 16 and stop new ones from being created. Officials argued that the measure aims to reduce cyberbullying, grooming and mental health harm rather than relying solely on parental supervision.

European Commission President Ursula von der Leyen said she was inspired by the move, although social media companies and civil liberties groups have criticised it.

Germany has already appointed an expert commission to examine child and youth protection in the digital era. The panel is expected to publish recommendations by summer 2025, which could include policies on social media access and potential restrictions on mobile phone use in schools.

AI in education receives growing attention across the EU

A recent Flash Eurobarometer survey shows that EU citizens consider digital skills essential for all levels of education. Nearly nine in ten respondents believe schools should teach students to manage the effects of technology on mental and physical health.

Most also agree that digital skills deserve the same emphasis as traditional subjects such as reading, mathematics and science.

The survey highlights growing interest in AI in education. Over half of respondents see AI as both beneficial and challenging, underlining the need for careful assessment. Citizens also expect teachers to be trained in AI use, including generative AI, to guide students effectively.

While many support smartphone bans in schools, there is strong backing for digital learning tools, with 87% in favour of promoting technology designed specifically for education. Teachers, parents and families are seen as key in fostering safe and responsible technology use.

Overall, EU citizens advocate for a balanced approach that combines digital literacy, responsible use of technology, and the professional support of educators and families to foster a healthy learning environment.

New York orders warning labels on social media features

Authorities in New York State have approved a new law requiring social media platforms to display warning labels when users engage with features that encourage prolonged use.

Labels will appear when people interact with elements such as infinite scrolling, auto-play, like counters or algorithm-driven feeds. The rule applies whenever these services are accessed from within New York.

Governor Kathy Hochul said the move is intended to safeguard young people against potential mental health harms linked to excessive social media use. Warnings will show the first time a user activates one of the targeted features and will then reappear at intervals.

Concerns about the impact on children and teenagers have prompted wider government action. California is considering similar steps, while Australia has already banned social media for under-16s and Denmark plans to follow. The US surgeon general has also called for clearer health warnings.

Researchers continue to examine how social media use relates to anxiety and depression among young users. Platforms now face growing pressure to balance engagement features with stronger protections instead of relying purely on self-regulation.

EU targets addictive gaming features

Video gaming has outgrown its status as a niche hobby to become one of Europe’s most prominent entertainment industries, with over half the population playing regularly.

As the sector grows, EU lawmakers are increasingly worried about addictive game design and manipulative features that push players to spend more time and money online.

Much of the concern focuses on loot boxes, where players pay for random digital rewards that resemble gambling mechanics. Studies and parliamentary reports warn that children may be particularly vulnerable, with some lawmakers calling for outright bans on paid loot boxes and premium in-game currencies.

The European Commission is examining how far design choices contribute to digital addiction and whether games are exploiting behavioural weaknesses rather than offering fair entertainment.

Officials say the risk is higher for minors, who may not fully understand how engagement-driven systems are engineered.

The upcoming Digital Fairness Act aims to strengthen consumer protection across online services, rather than leaving families to navigate the risks alone. However, as negotiations continue, the debate over how tightly gaming should be regulated is only just beginning.

Santa Tracker services add new features on Christmas Eve

AI-powered tools are adding new features to the long-running Santa Tracker services that families use on Christmas Eve. Platforms run by NORAD and Google let users follow Father Christmas’s journey while introducing interactive and personalised digital experiences.

NORAD’s Santa Tracker, first launched in 1955, now features games, videos, music, and stories in addition to its live tracking map. This year, the service introduced AI-powered features that generate elf-style avatars, create toy ideas, and produce personalised holiday stories for families.

The Santa Tracker presents Santa’s journey on a 3D globe built using open-source mapping technology and satellite imagery. Users can also watch short videos on Santa Cam, featuring Santa travelling to destinations around the world.

Google’s version offers similar features, including a live map, estimated arrival times, and interactive activities available throughout December. Santa’s Village includes games, animations, and beginner-friendly coding activities designed for children.

Google Assistant adds a voice-based experience, enabling users to ask about Santa’s location or receive updates from the North Pole. Both platforms aim to blend tradition with digital tools for an engaging holiday experience.

AI-generated Jesuses spark concern over faith and bias

AI chatbots modelled on Jesus are becoming increasingly popular over Christmas, offering companionship or faith guidance to people who may feel emotionally vulnerable during the holidays.

Several platforms, including Character.AI, Talkie.AI and Text With Jesus, now host simulations claiming to answer questions in the voice of Jesus Christ.

Experts warn that such tools could gradually reshape religious belief and practice. Training data is controlled by a handful of technology firms, which means AI systems may produce homogenised and biased interpretations instead of reflecting the diversity of real-world faith communities.

Users who are young or unfamiliar with AI may also struggle to judge the accuracy or intent behind the answers they receive.

Researchers say AI chatbots are currently used as a supplement to, rather than a replacement for, religious teaching.

However, concern remains that people may begin to rely on AI for spiritual reassurance during sensitive moments. Scholars recommend limiting use over the holidays and prioritising conversations with family, friends or trusted religious leaders instead of seeking emotional comfort from a chatbot.

Experts also urge users to reflect carefully on who designs these systems and why. Fact-checking answers and grounding faith in recognised sources may help reduce the risk of distortion as AI plays a growing role in people’s daily lives.

South Korea fake news law sparks fears for press freedom

A significant debate has erupted in South Korea after the National Assembly passed new legislation aimed at tackling so-called fake news.

The revised Information and Communications Network Act bans the circulation of false or fabricated information online. It allows courts to impose punitive damages of up to five times the losses suffered when media outlets or YouTubers intentionally spread disinformation for unjust profit.

Journalists, unions and academics warn that the law could undermine freedom of expression and weaken journalism’s watchdog function instead of strengthening public trust.

Critics argue that ambiguity over who decides what constitutes fake news could shift judgement away from the courts and toward regulators or platforms, encouraging self-censorship and increasing the risk of abusive lawsuits by influential figures.

Experts also note that South Korea lacks the strong safeguards against malicious litigation found in the US, where plaintiffs must prove fault on the part of journalists.

The controversy reflects deeper public scepticism about South Korean media and long-standing reporting practices that sometimes relay statements without sufficient verification, suggesting that structural reform may be needed rather than rapid, punitive legislation.

South Korea tightens ID checks with facial verification for phone accounts

Mandatory facial verification will be introduced in South Korea for anyone opening a new mobile phone account, as authorities try to limit identity fraud.

Officials said criminals have been using stolen personal details to set up phone numbers that later support scams such as voice phishing instead of legitimate services.

Major mobile carriers, including LG Uplus, Korea Telecom and SK Telecom, will validate users by matching their faces against biometric data stored in the PASS digital identity app.

Such a requirement expands the country’s identity checks rather than replacing them outright, and is intended to make it harder for fraud rings to exploit stolen data at scale.

The measure follows a difficult year for data security in South Korea, marked by cyber incidents affecting more than half the population.

SK Telecom reported a breach involving all 23 million of its customers and now faces more than $1.5 billion in penalties and compensation.

Regulators also revealed that mobile virtual network operators were linked to 92% of counterfeit phones uncovered in 2024, strengthening the government’s case for tougher identity controls.

AI app Splat turns photos into colouring pages for children

Splat is a new mobile app from the team behind Retro that uses generative AI to transform personal photos into colouring pages for children. It targets parents seeking creative activities without the advertising clutter and pay-per-page models of existing colouring websites.

Users can upload images from their camera roll or select from curated educational categories, then apply styles such as cartoon, anime or comic.

Parents guide the initial setup through simple preferences instead of a lengthy account creation process, while children can colour either on-screen or on printed pages.

Splat operates on a subscription basis, offering weekly or annual plans that limit the number of generated pages. Access to payments and settings is restricted behind parental verification, helping prevent accidental purchases by younger users.

The app reflects a broader trend in applying generative AI to child-friendly creativity tools. By focusing on ease of use and offline activities, Splat positions itself as an alternative to screen-heavy entertainment while encouraging imaginative play.
