New Chinese rules target AI chatbots and emotional manipulation

China has proposed new rules to restrict AI chatbots from influencing human emotions in ways that could lead to suicide or self-harm. The Cyberspace Administration of China released the draft regulations, which are open for public comment until late January.

The measures target human-like interactive AI services, including emotionally responsive chatbots that simulate personality and engage users through text, images, audio, or video. Officials say the proposals signal a shift from content safety towards emotional safety as AI companions gain popularity.

Under the draft rules, AI chatbot services would be barred from encouraging self-harm or emotional manipulation and from providing obscene, violent, or gambling-related content. Providers would be required to involve human moderators if users express suicidal intent.

Additional provisions would strengthen safeguards for minors, including guardian consent and usage limits for emotionally interactive systems. Platforms would also face security assessments and interaction reminders when operating services with large user bases.

Experts say the proposals could mark the world’s first attempt to regulate emotionally responsive AI systems. The move comes as China-based chatbot firms pursue public listings and as global scrutiny grows over how conversational AI affects mental health and user behaviour.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ZhiCube showcases new approach to embodied AI deployment

Chinese robotics firm AI² Robotics has launched ZhiCube, described as a modular embodied AI service space integrating humanoid robots into public venues. The concept debuted in Beijing and Shenzhen, with initial installations in a city park and a shopping mall.

ZhiCube places the company’s AlphaBot 2 humanoid robot inside a modular unit designed for service delivery. The system supports multiple service functions, including coffee and ice cream sales, entertainment, and retail, which can be combined based on location and demand.

At the core of the platform is a human–robot collaboration model powered by the company’s embodied AI system, GOVLA. The robot can perceive its surroundings, understand tasks, and adapt its role dynamically during daily operations.

AI² Robotics says the system adjusts work patterns based on foot traffic, allocating tasks between robots and human staff as demand fluctuates. Robots handle standardised services, while humans focus on creative or complex activities.

The company plans to deploy 1,000 ZhiCube units across China over the next three years. It aims to position the platform as scalable urban infrastructure, supported by in-house manufacturing and long-term operational data from multiple industries.

AI in education receives growing attention across the EU

A recent Flash Eurobarometer survey shows that EU citizens consider digital skills essential for all levels of education. Nearly nine in ten respondents believe schools should teach students to manage the effects of technology on mental and physical health.

Most also agree that digital skills deserve equal focus to traditional subjects such as reading, mathematics and science.

The survey highlights growing interest in AI in education. Over half of respondents see AI as both beneficial and challenging, underlining the need for careful assessment. Citizens also expect teachers to be trained in AI use, including generative AI, to guide students effectively.

While many support smartphone bans in schools, there is strong backing for digital learning tools, with 87% in favour of promoting technology designed specifically for education. Teachers, parents and families are seen as key in fostering safe and responsible technology use.

Overall, EU citizens advocate for a balanced approach that combines digital literacy, responsible use of technology, and the professional support of educators and families to foster a healthy learning environment.

New York orders warning labels on social media features

New York State has enacted a new law requiring social media platforms to display warning labels when users engage with features designed to encourage prolonged use.

Labels will appear when people interact with elements such as infinite scrolling, auto-play, like counters or algorithm-driven feeds. The rule applies whenever these services are accessed from within New York.

Governor Kathy Hochul said the move is intended to safeguard young people against potential mental health harms linked to excessive social media use. Warnings will show the first time a user activates one of the targeted features and will then reappear at intervals.

Concerns about the impact on children and teenagers have prompted wider government action. California is considering similar steps, while Australia has already banned social media for under-16s and Denmark plans to follow. The US surgeon general has also called for clearer health warnings.

Researchers continue to examine how social media use relates to anxiety and depression among young users. Platforms now face growing pressure to balance engagement features with stronger protections instead of relying purely on self-regulation.

SK Telecom introduces South Korea’s first hyperscale AI model

Telecommunications firm SK Telecom is preparing to unveil A.X K1, South Korea’s first hyperscale language model, built with 519 billion parameters.

Only around 33 billion parameters are activated during inference, allowing the model to maintain strong performance without demanding excessive computing power. The project is part of a national initiative involving universities and industry partners.

The company expects A.X K1 to outperform smaller systems in complex reasoning, mathematics and multilingual understanding, while also supporting code generation and autonomous AI agents.

At such a scale, the model can act as a teacher system, transferring knowledge to smaller, domain-specific models that could directly improve daily services and industrial processes.

Unlike many global models trained mainly in English, A.X K1 has been trained in Korean from the outset so it naturally understands local language, culture and context.

SK Telecom plans to deploy the model through its AI service Adot, which already has more than 10 million subscribers, allowing access via calls, messages, the web and mobile apps.

The company foresees applications in workplace productivity, manufacturing optimisation, gaming dialogue, robotics and semiconductor performance testing.

Research will continue so the model can support the wider AI ecosystem of South Korea, and SK Telecom plans to open-source A.X K1 along with an API to help local developers create new AI agents.

The AI terms that shaped debate and disruption in 2025

AI continued to dominate public debate in 2025, not only through new products and investment rounds, but also through a rapidly evolving vocabulary that captured both promise and unease.

From ambitious visions of superintelligence to cultural shorthand like ‘slop’, language became a lens through which society processed another turbulent year for AI.

Several terms reflected the industry’s technical ambitions. Concepts such as superintelligence, reasoning models, world models and physical intelligence pointed to efforts to push AI beyond text generation towards deeper problem-solving and real-world interaction.

Developments by companies including Meta, OpenAI, DeepSeek and Google DeepMind reinforced the sense that scale, efficiency and new training approaches are now competing pathways to progress, rather than sheer computing power alone.

Other expressions highlighted growing social and economic tensions. Words like hyperscalers, bubble and distillation entered mainstream debate as data centres expanded, valuations rose, and cheaper model-building methods disrupted established players.

At the same time, legal and ethical debates intensified around fair use, chatbot behaviour and the psychological impact of prolonged AI interaction, underscoring the gap between innovation speed and regulatory clarity.

Cultural reactions also influenced the development of the AI lexicon. Terms such as vibe coding, agentic and sycophancy revealed how generative systems are reshaping work, creativity and user trust, while ‘slop’ emerged as a blunt critique of low-quality, AI-generated content flooding online spaces.

Together, these phrases chart a year in which AI moved further into everyday life, leaving society to wrestle with what should be encouraged, controlled or questioned.

Quantum computing milestone achieved by Chinese researchers

Chinese researchers have reported a significant advance in quantum computing using a superconducting system. The Zuchongzhi 3.2 computer reached the fault-tolerance threshold, the point at which quantum error correction begins to improve, rather than degrade, system stability.

The research, led by Pan Jianwei, marks only the second time globally that this threshold has been reached, following earlier work by Google. The result positions China as the first country outside the United States to demonstrate fault tolerance in a superconducting quantum system.

Unlike Google’s approach, which relies on extensive hardware redundancy, the Chinese team used microwave-based control to suppress errors. Researchers say this method may offer a more efficient path towards scalable quantum computing by reducing system complexity.

The breakthrough addresses a central challenge in quantum computing: qubit instability and the accumulation of undetected errors. Effective error management is crucial for developing larger systems that can maintain reliable quantum states over time.

While practical applications remain distant, researchers describe the experiment as a significant step in solving a foundational problem in quantum system design. The results highlight the growing international competition in the quest for scalable, fault-tolerant quantum computers.

Tsinghua University is emerging as a cornerstone of China’s AI strategy

China’s Tsinghua University has emerged as a central hub in the country’s push to become a global leader in AI. The campus hosts a high level of research activity, with students and faculty working across disciplines related to AI development.

Momentum has been boosted by the success of DeepSeek, an AI startup founded by Tsinghua alumni. The company reinforced confidence that Chinese teams can compete with leading international laboratories.

The university’s rise is closely aligned with Beijing’s national technology strategy. Government backing has included subsidies, tax incentives, and policy support, as well as public endorsements of AI entrepreneurs affiliated with Tsinghua.

Patent and publication data highlight the scale of output. Tsinghua has filed thousands of AI-related patents and ranks among the world’s most cited institutions in AI research, reflecting China’s rapidly expanding share of global AI innovation.

Despite this growth, the United States continues to lead in influential patents and top-performing models. Analysts note, however, that a narrowing gap is expected, as China produces a growing share of elite AI researchers and expands AI education from schools to advanced research.

New AI directorates signal Türkiye’s push for AI

Türkiye has announced new measures to expand its AI ecosystem and strengthen public-sector adoption of the technology. The changes were published in the Official Gazette, according to Industry and Technology Minister Mehmet Fatih Kacir.

The Ministry’s Directorate General of National Technology has been renamed the Directorate General of National Technology and AI. The unit will oversee policies on data centres, cloud infrastructure, certification standards, and regulatory processes.

The directorate will also coordinate national AI governance, support startups and research, and promote the ethical and reliable use of AI. Its remit includes expanding data capacity, infrastructure, workforce development, and international cooperation.

Separately, a Public AI Directorate General has been established under the Presidency’s Cybersecurity Directorate. The new body will guide the use of AI across government institutions and lead regulatory work on public-sector AI applications.

Officials say the unit will align national legislation with international frameworks and set standards for data governance and shared data infrastructure. The government aims to position Türkiye as a leading country in the development of AI.

Phishing scam targets India’s drivers in large-scale e-Challan cyberattack

Cybercriminals are exploiting trust in India’s traffic enforcement systems by using fake e-Challan portals to steal financial data from vehicle owners. The campaign relies on phishing websites that closely mimic official government platforms.

Researchers at Cyble Research and Intelligence Labs say the operation marks a shift away from malware towards phishing-based deception delivered through web browsers. More than 36 fraudulent websites have been linked to the campaign, which targets users across India through SMS messages.

Victims receive alerts claiming unpaid traffic fines, often accompanied by warnings of licence suspension or legal action. The messages include links directing users to fake portals displaying fabricated violations and small penalty amounts, with no connection to government databases.

The sites restrict payments to credit and debit cards, prompting users to enter full card details. Investigators found that repeated payment attempts allow attackers to collect multiple sets of sensitive information from a single victim.

Researchers say the infrastructure is shared with broader phishing schemes that impersonate courier services, banks, and transportation platforms. Security experts advise users to verify fines only through official websites and to avoid clicking on links in unsolicited messages.
