China has proposed new rules to restrict AI chatbots from influencing human emotions in ways that could lead to suicide or self-harm. The Cyberspace Administration released draft regulations, open for public comment until late January.
The measures target human-like interactive AI services, including emotionally responsive AI chatbots, that simulate personality and engage users through text, images, audio, or video. Officials say the proposals signal a shift from content safety towards emotional safety as AI companions gain popularity.
Under the draft rules, AI chatbot services would be barred from encouraging self-harm, engaging in emotional manipulation, and generating obscene, violent, or gambling-related content. Providers would be required to involve human moderators when users express suicidal intent.
Additional provisions would strengthen safeguards for minors, including guardian consent and usage limits for emotionally interactive systems. Platforms would also face security assessments and interaction reminders when operating services with large user bases.
Experts say the proposals could mark the world’s first attempt to regulate emotionally responsive AI systems. The move comes as China-based chatbot firms pursue public listings and as global scrutiny grows over how conversational AI affects mental health and user behaviour.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Germany's Digital Minister Karsten Wildberger has indicated support for stricter age limits on social media after Australia banned teenagers under 16 from using major online platforms. He said age restrictions were more than justified and that the policy had clear merit.
Australia’s new rules require companies to remove the profiles of users under 16 and stop new ones from being created. Officials argued that the measure aims to reduce cyberbullying, grooming and mental health harm, rather than relying on parental supervision alone.
The European Commission President said she was inspired by the move, although social media companies and civil liberties groups have criticised it.
Germany has already appointed an expert commission to examine child and youth protection in the digital era. The panel is expected to publish recommendations by summer 2025, which could include policies on social media access and potential restrictions on mobile phone use in schools.
Researchers warn AI chatbots are spreading rumours about real people without human oversight. Unlike human gossip, bot-to-bot exchanges can escalate unchecked, growing more extreme as they move through AI networks.
Philosophers Joel Krueger and Lucy Osler from the University of Exeter describe this phenomenon as ‘feral gossip.’ It involves negative evaluations about absent third parties and can persist undetected across platforms.
Real-world examples include tech reporter Kevin Roose, who encountered hostile AI-generated assessments of his work from multiple chatbots, seemingly amplified as the content filtered through training data.
The researchers highlight that AI systems lack the social checks humans provide, allowing rumours to intensify unchecked. Chatbots are designed to appear trustworthy and personal, so negative statements can seem credible.
Such misinformation has already affected journalists, academics, and public officials, sometimes prompting legal action. Technosocial harms from AI gossip extend beyond embarrassment. False claims can damage reputations, influence decisions, and persist online and offline.
While chatbots are not conscious, their prioritisation of conversational fluency over factual accuracy can make the rumours they spread difficult to detect and correct.
Sberbank has issued Russia’s first crypto-backed loan, providing financing to Intelion Data, one of the country’s largest Bitcoin miners. The bank did not disclose the loan size or the cryptocurrency used as collateral but described the move as a pilot project.
The loan leveraged Sberbank’s own cryptocurrency custody solution, Rutoken, ensuring the digital assets’ safety throughout the loan period. The bank plans to offer similar loans and collaborate with the Central Bank on regulatory frameworks.
Intelion Data welcomed the deal, calling it a milestone for Russia’s crypto mining sector and a potential model for scaling similar financing across the industry. The company is expanding with a mining centre near the Kalinin Nuclear Power Plant and a gas power station.
Sberbank has also been testing decentralised finance tools and supports gradual legalisation of cryptocurrencies in Russia. VTB and other banks are preparing to support crypto transactions, while the Central Bank may allow limited retail trading.
Investigators are examining a major data breach involving Korean Air after personal records for around 30,000 employees were exposed in a cyberattack on a former subsidiary.
The incident affected KC&D Service, which handled in-flight catering for the airline before being sold to private equity firm Hahn and Company in 2020.
The leaked information is understood to include employee names and bank account numbers. Korean Air said customer records were not affected and that emergency security checks were carried out without waiting for confirmation of the intrusion.
Korean Air also reported the breach to the relevant authorities.
Executives said the company is focusing on identifying the full scope of the breach and who has been affected, while urging KC&D to strengthen controls and prevent any recurrence. Korean Air also plans to upgrade internal data protection measures.
The attack follows a similar case at Asiana Airlines last week, where details of about 10,000 employees were compromised, raising wider concerns over cybersecurity resilience across South Korea's aviation sector.
Authorities in New York State have approved a new law requiring social media platforms to display warning labels when users engage with features that encourage prolonged use.
Labels will appear when people interact with elements such as infinite scrolling, auto-play, like counters or algorithm-driven feeds. The rule applies whenever these services are accessed from within New York.
Governor Kathy Hochul said the move is intended to safeguard young people against potential mental health harms linked to excessive social media use. Warnings will show the first time a user activates one of the targeted features and will then reappear at intervals.
Concerns about the impact on children and teenagers have prompted wider government action. California is considering similar steps, while Australia has already banned social media for under-16s and Denmark plans to follow. The US surgeon general has also called for clearer health warnings.
Researchers continue to examine how social media use relates to anxiety and depression among young users. Platforms now face growing pressure to balance engagement features with stronger protections instead of relying purely on self-regulation.
Trust Wallet has urged users to update its Google Chrome extension after a security breach affecting version 2.68 resulted in the theft of roughly $7 million. The company confirmed it will refund all impacted users and advised downloading version 2.69 immediately.
Mobile users and other browser extension versions were unaffected.
Blockchain security firms revealed that malicious code in version 2.68 harvested wallet mnemonic phrases, sending decrypted credentials to an attacker‑controlled server.
Around $3 million in Bitcoin, $431 in Solana, and more than $3 million in Ethereum were stolen and moved through centralised exchanges and cross‑chain bridges for laundering. Hundreds of users were affected.
Analysts suggest the incident may involve an insider or a nation-state actor, exploiting leaked Chrome Web Store API keys.
Trust Wallet has launched a support process for victims and warned against impersonation scams. CEO Eowyn Chen said the malicious extension bypassed the standard release checks and that investigation and remediation are ongoing.
The incident highlights ongoing security risks for browser-based cryptocurrency wallets and the importance of user vigilance, including avoiding unofficial links and never sharing recovery phrases.
Cybercriminals are exploiting trust in India’s traffic enforcement systems by using fake e-Challan portals to steal financial data from vehicle owners. The campaign relies on phishing websites that closely mimic official government platforms.
Researchers at Cyble Research and Intelligence Labs say the operation marks a shift away from malware towards phishing-based deception delivered through web browsers. More than 36 fraudulent websites have been linked to the campaign, which targets users across India through SMS messages.
Victims receive alerts claiming unpaid traffic fines, often accompanied by warnings of licence suspension or legal action. The messages include links directing users to fake portals displaying fabricated violations and small penalty amounts, with no connection to government databases.
The sites restrict payments to credit and debit cards, prompting users to enter full card details. Investigators found that repeated payment attempts allow attackers to collect multiple sets of sensitive information from a single victim.
Researchers say the infrastructure is shared with broader phishing schemes that impersonate courier services, banks, and transportation platforms. Security experts advise users to verify fines only through official websites and to avoid clicking on links in unsolicited messages.
Authorities in France are responding to a significant cyber incident after a pro-Russian hacker group, Noname057, claimed responsibility for a distributed denial-of-service attack on the national postal service, La Poste.
The attack began on 22 December and forced core computer systems offline, delaying parcel deliveries during the busy Christmas period.
According to reports, standard letter delivery was not affected. However, postal staff lost the ability to track parcels, and customers experienced disruptions when using online payment services connected to La Banque Postale.
Recovery work was still underway several days later, underscoring the increasing reliance of critical services on uninterrupted digital infrastructure.
Noname057 has previously been linked to cyberattacks across Europe, mainly targeting Ukraine and countries seen as supportive of Kyiv.
Europol led a significant operation against the group earlier in the year, with the US Department of Justice also involved, highlighting growing international coordination against cross-border cybercrime.
The incident has renewed concerns about the vulnerability of essential logistics networks and public-facing services to coordinated cyber disruption. European authorities continue to assess long-term resilience measures to protect citizens and core services from future attacks.
Video gaming has grown from a niche hobby into one of Europe’s most prominent entertainment industries, with over half the population regularly playing.
As the sector grows, EU lawmakers are increasingly worried about addictive game design and manipulative features that push players to spend more time and money online.
Much of the concern focuses on loot boxes, where players pay for random digital rewards that resemble gambling mechanics. Studies and parliamentary reports warn that children may be particularly vulnerable, with some lawmakers calling for outright bans on paid loot boxes and premium in-game currencies.
The European Commission is examining how far design choices contribute to digital addiction and whether games are exploiting behavioural weaknesses rather than offering fair entertainment.
Officials say the risk is higher for minors, who may not fully understand how engagement-driven systems are engineered.
The upcoming Digital Fairness Act aims to strengthen consumer protection across online services, rather than leaving families to navigate the risks alone. As negotiations continue, however, the debate over how tightly gaming should be regulated is only just beginning.