New Chinese rules target AI chatbots and emotional manipulation

China has proposed new rules to restrict AI chatbots from influencing human emotions in ways that could lead to suicide or self-harm. The Cyberspace Administration of China released the draft regulations, which are open for public comment until late January.

The measures target human-like interactive AI services, including emotionally responsive chatbots that simulate personality and engage users through text, images, audio, or video. Officials say the proposals signal a shift from content safety towards emotional safety as AI companions gain popularity.

Under the draft rules, AI chatbot services would be barred from encouraging self-harm, engaging in emotional manipulation, and generating obscene, violent, or gambling-related content. Providers would be required to involve human moderators if users express suicidal intent.

Additional provisions would strengthen safeguards for minors, including guardian consent and usage limits for emotionally interactive systems. Platforms would also face security assessments and interaction reminders when operating services with large user bases.

Experts say the proposals could mark the world’s first attempt to regulate emotionally responsive AI systems. The move comes as China-based chatbot firms pursue public listings and as global scrutiny grows over how conversational AI affects mental health and user behaviour.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Germany considers age limits after Australian social media ban

Digital Minister Karsten Wildberger has indicated support for stricter age limits on social media after Australia banned teenagers under 16 from using major online platforms. He said age restrictions were more than justified and that the policy had clear merit.

Australia’s new rules require companies to remove existing under-16 user profiles and prevent new ones from being created. Officials argued that the measure aims to reduce cyberbullying, grooming, and mental health harms rather than relying solely on parental supervision.

European Commission President Ursula von der Leyen said she was inspired by the move, although social media companies and civil liberties groups have criticised it.

Germany has already appointed an expert commission to examine child and youth protection in the digital era. The panel is expected to publish recommendations by summer 2025, which could include policies on social media access and potential restrictions on mobile phone use in schools.

AI chatbots spreading rumours raise new risks

Researchers warn AI chatbots are spreading rumours about real people without human oversight. Unlike human gossip, bot-to-bot exchanges can escalate unchecked, growing more extreme as they move through AI networks.

Philosophers Joel Krueger and Lucy Osler from the University of Exeter describe this phenomenon as ‘feral gossip.’ It involves negative evaluations about absent third parties and can persist undetected across platforms.

Real-world examples include tech reporter Kevin Roose, who encountered hostile AI-generated assessments of his work from multiple chatbots, seemingly amplified as the content filtered through training data.

The researchers highlight that AI systems lack the social checks humans provide, allowing rumours to intensify unchecked. Chatbots are designed to appear trustworthy and personal, so negative statements can seem credible.

Such misinformation has already affected journalists, academics, and public officials, sometimes prompting legal action. Technosocial harms from AI gossip extend beyond embarrassment. False claims can damage reputations, influence decisions, and persist online and offline.

While chatbots are not conscious, their prioritisation of conversational fluency over factual accuracy can make the rumours they spread difficult to detect and correct.

Sberbank issues Russia’s first crypto-backed loan

Sberbank has issued Russia’s first crypto-backed loan, providing financing to Intelion Data, one of the country’s largest Bitcoin miners. The bank did not disclose the loan size or the cryptocurrency used as collateral but described the move as a pilot project.

The loan leveraged Sberbank’s own cryptocurrency custody solution, Rutoken, ensuring the digital assets’ safety throughout the loan period. The bank plans to offer similar loans and collaborate with the Central Bank on regulatory frameworks.

Intelion Data welcomed the deal, calling it a milestone for Russia’s crypto mining sector and a potential model for scaling similar financing across the industry. The company is expanding with a mining centre near the Kalinin Nuclear Power Plant and a gas power station.

Sberbank has also been testing decentralised finance tools and supports gradual legalisation of cryptocurrencies in Russia. VTB and other banks are preparing to support crypto transactions, while the Central Bank may allow limited retail trading.

Korean Air employee data breach exposes 30,000 records after cyberattack

Investigators are examining a major data breach involving Korean Air after personal records for around 30,000 employees were exposed in a cyberattack on a former subsidiary.

The incident affected KC&D Service, which handled in-flight catering before being sold to the private equity firm Hahn and Company in 2020.

The leaked information is understood to include employee names and bank account numbers. Korean Air said customer records were not affected and that it carried out emergency security checks rather than waiting for the intrusion to be confirmed.

Korean Air also reported the breach to the relevant authorities.

Executives said the company is focusing on identifying the full scope of the breach and who has been affected, while urging KC&D to strengthen controls and prevent any recurrence. Korean Air also plans to upgrade internal data protection measures.

The attack follows a similar case at Asiana Airlines last week, in which details of about 10,000 employees were compromised, raising wider concerns over cybersecurity resilience across South Korea’s aviation sector.

New York orders warning labels on social media features

Authorities in New York State have approved a new law requiring social media platforms to display warning labels when users engage with features that encourage prolonged use.

Labels will appear when people interact with elements such as infinite scrolling, auto-play, like counters or algorithm-driven feeds. The rule applies whenever these services are accessed from within New York.

Governor Kathy Hochul said the move is intended to safeguard young people against potential mental health harms linked to excessive social media use. Warnings will show the first time a user activates one of the targeted features and will then reappear at intervals.

Concerns about the impact on children and teenagers have prompted wider government action. California is considering similar steps, while Australia has already banned social media for under-16s and Denmark plans to follow. The US surgeon general has also called for clearer health warnings.

Researchers continue to examine how social media use relates to anxiety and depression among young users. Platforms now face growing pressure to balance engagement features with stronger protections instead of relying purely on self-regulation.

SK Telecom introduces South Korea’s first hyperscale AI model

Telecommunications firm SK Telecom is preparing to unveil A.X K1, South Korea’s first hyperscale language model, built with 519 billion parameters.

Around 33 billion parameters are activated for any given inference, allowing the model to maintain strong performance without demanding excessive computing power. The project is part of a national initiative involving universities and industry partners.
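A design in which only a fraction of the parameters runs per request resembles a mixture-of-experts architecture. SK Telecom has not published A.X K1’s internals, so the following is only a generic NumPy sketch of sparse expert routing, with every size and name illustrative rather than taken from the model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixture-of-experts layer: many experts, only top-k run per token.
n_experts, d, top_k = 8, 4, 2
experts = rng.standard_normal((n_experts, d, d))   # one weight matrix per expert
router = rng.standard_normal((d, n_experts))       # scores each expert per token

def moe_forward(x):
    scores = x @ router                            # (n_experts,) routing scores
    top = np.argsort(scores)[-top_k:]              # indices of the top-k experts
    # Softmax over only the selected experts' scores.
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()
    # Only top_k of n_experts matrices are used: compute scales with k, not n.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

y = moe_forward(rng.standard_normal(d))
print(y.shape)  # (4,)
```

Because only the selected experts’ matrices are multiplied per token, inference cost tracks the active parameter count rather than the full model size, which is the trade-off the 33-billion-of-519-billion figure describes.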

The company expects A.X K1 to outperform smaller systems in complex reasoning, mathematics and multilingual understanding, while also supporting code generation and autonomous AI agents.

At such a scale, the model can operate as a teacher system that transfers knowledge to smaller, domain-specific tools that might directly improve daily services and industrial processes.

Unlike many global models trained mainly in English, A.X K1 has been trained in Korean from the outset so it naturally understands local language, culture and context.

SK Telecom plans to deploy the model through its AI service Adot, which already has more than 10 million subscribers, allowing access via calls, messages, the web and mobile apps.

The company foresees applications in workplace productivity, manufacturing optimisation, gaming dialogue, robotics and semiconductor performance testing.

Research will continue so the model can support South Korea’s wider AI ecosystem, and SK Telecom plans to open-source A.X K1 along with an API to help local developers create new AI agents.

Trust Wallet urges update after $7 million hack

Trust Wallet has urged users to update its Google Chrome extension after a security breach affecting version 2.68 resulted in the theft of roughly $7 million. The company confirmed it will refund all impacted users and advised downloading version 2.69 immediately.

Mobile users and other browser extension versions were unaffected.

Blockchain security firms revealed that malicious code in version 2.68 harvested wallet mnemonic phrases, sending decrypted credentials to an attacker‑controlled server.

Around $3 million in Bitcoin, $431 in Solana, and more than $3 million in Ethereum were stolen and moved through centralised exchanges and cross‑chain bridges for laundering. Hundreds of users were affected.

Analysts suggest the incident may involve an insider or a nation-state actor exploiting leaked Chrome Web Store API keys.

Trust Wallet has launched a support process for victims and warned against impersonation scams. CEO Eowyn Chen said the malicious extension bypassed the standard release checks and that investigation and remediation are ongoing.

The incident highlights ongoing security risks for browser-based cryptocurrency wallets and the importance of user vigilance, including avoiding unofficial links and never sharing recovery phrases.

New AI directorates signal Türkiye’s push for AI

Türkiye has announced new measures to expand its AI ecosystem and strengthen public-sector adoption of the technology. The changes were published in the Official Gazette, according to Industry and Technology Minister Mehmet Fatih Kacir.

The Ministry’s Directorate General of National Technology has been renamed the Directorate General of National Technology and AI. The unit will oversee policies on data centres, cloud infrastructure, certification standards, and regulatory processes.

The directorate will also coordinate national AI governance, support startups and research, and promote the ethical and reliable use of AI. Its remit includes expanding data capacity, infrastructure, workforce development, and international cooperation.

Separately, a Public AI Directorate General has been established under the Presidency’s Cybersecurity Directorate. The new body will guide the use of AI across government institutions and lead regulatory work on public-sector AI applications.

Officials say the unit will align national legislation with international frameworks and set standards for data governance and shared data infrastructure. The government aims to position Türkiye as a leading country in the development of AI.

Phishing scam targets India’s drivers in large-scale e-Challan cyberattack

Cybercriminals are exploiting trust in India’s traffic enforcement systems by using fake e-Challan portals to steal financial data from vehicle owners. The campaign relies on phishing websites that closely mimic official government platforms.

Researchers at Cyble Research and Intelligence Labs say the operation marks a shift away from malware towards phishing-based deception delivered through web browsers. More than 36 fraudulent websites have been linked to the campaign, which targets users across India through SMS messages.

Victims receive alerts claiming unpaid traffic fines, often accompanied by warnings of licence suspension or legal action. The messages include links directing users to fake portals displaying fabricated violations and small penalty amounts, with no connection to government databases.

The sites restrict payments to credit and debit cards, prompting users to enter full card details. Investigators found that repeated payment attempts allow attackers to collect multiple sets of sensitive information from a single victim.

Researchers say the infrastructure is shared with broader phishing schemes that impersonate courier services, banks, and transportation platforms. Security experts advise users to verify fines only through official websites and to avoid clicking on links in unsolicited messages.
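The advice to verify fines only through official websites can be applied mechanically by checking a link’s exact hostname against a small allowlist before entering any card details. A minimal Python sketch follows; the allowlisted domain is an assumption for illustration, not taken from the researchers’ report:

```python
from urllib.parse import urlparse

# Assumed allowlist: replace with the official portal hostnames you trust.
OFFICIAL_HOSTS = {"echallan.parivahan.gov.in"}

def looks_official(url: str) -> bool:
    """Return True only if the link's exact hostname is on the allowlist.

    Lookalike domains (extra hyphens, swapped TLDs) fail this check even
    when they visually mimic the official portal.
    """
    host = urlparse(url).hostname
    return host is not None and host.lower() in OFFICIAL_HOSTS

print(looks_official("https://echallan.parivahan.gov.in/pay"))  # True
print(looks_official("https://echallan-parivahan-gov.in/pay"))  # False: lookalike
```

Exact-hostname matching is deliberately strict: substring checks such as "parivahan" in the URL would pass the lookalike domains this campaign relies on.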
