China’s cyberspace regulator has proposed new limits on AI ‘boyfriend’ and ‘girlfriend’ chatbots, tightening oversight of emotionally interactive artificial intelligence services.
Draft rules released on 27 December would require platforms to intervene when users express suicidal or self-harm tendencies, while strengthening protections for minors and restricting harmful content.
The regulator defines the services as AI systems that simulate human personality traits and emotional interaction. The proposals are open for public consultation until 25 January.
The draft bans chatbots from encouraging suicide, engaging in emotional manipulation, or producing obscene, violent, or gambling-related content. Minors would need guardian consent to access AI companionship.
Platforms would also be required to disclose clearly that users are interacting with AI rather than humans. Legal experts in China warn that enforcement may be challenging, particularly in identifying suicidal intent through language cues alone.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
UK Technology Secretary Liz Kendall has urged Elon Musk’s X to act urgently after reports that its AI chatbot Grok was used to generate non-consensual sexualised deepfake images of women and girls.
The BBC identified multiple examples on X where users prompted Grok to digitally alter images, including requests to make people appear undressed or place them in sexualised scenarios without consent.
Kendall described the content as ‘absolutely appalling’ and said the government would not allow the spread of degrading images. She added that Ofcom had her full backing to take enforcement action where necessary.
The UK media regulator confirmed it had made urgent contact with xAI and was investigating concerns that Grok had produced undressed images of individuals. X has been approached for comment.
Kendall said the issue was about enforcing the law rather than limiting speech, noting that intimate image abuse, including AI-generated content, is now a priority offence under the Online Safety Act.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Residents in California now have a simpler way to force data brokers to delete their personal information.
The state has launched the Delete Requests and Opt-Out Platform, known as DROP, allowing residents to submit one verified deletion request that applies to every registered data broker instead of contacting each company individually.
The system follows the Delete Act, passed in 2023, and is intended to create a single control point for consumer data removal.
Once a resident submits a request, data brokers must begin processing it from August 2026 and will have 90 days to act. If their data is not deleted, residents may need to provide additional identifying details.
First-party data collected directly by companies can still be retained, while data from public records, such as voter rolls, remains exempt. Highly sensitive data may fall under separate legal protections such as HIPAA.
The California Privacy Protection Agency argues that broader data deletion could reduce identity theft, AI-driven impersonation, fraud risk and unwanted marketing contact.
Penalties for non-compliance include daily fines for brokers who fail to register or ignore deletion orders. The state hopes the tool will make data rights meaningful instead of purely theoretical.
The launch comes as regulators worldwide examine how personal data is used, traded and exploited.
California is positioning itself as a leader in consumer privacy enforcement, while questions continue about how effectively DROP will operate when the deadline arrives in 2026.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Scrutiny has intensified around X after its Grok chatbot was found generating non-consensual explicit images when prompted by users.
Grok had been positioned as a creative AI assistant, yet regulators reacted swiftly once altered photos were linked to content involving minors. Governments and rights groups renewed pressure on platforms to prevent abusive use of generative AI.
India’s Ministry of Electronics and IT issued a notice to X demanding an Action Taken Report within 72 hours, citing failure to restrict unlawful content.
Authorities in France referred similar cases to prosecutors and urged enforcement under the EU’s Digital Services Act, signalling growing international resolve to control AI misuse.
Elon Musk responded by stating that users, rather than Grok, would be legally responsible for illegal material generated through prompts. The company said offenders would face permanent bans and cooperation with law enforcement.
In its post, the company stated: ‘We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary. Anyone using or prompting Grok to make illegal content will suffer the…’ https://t.co/93kiIBTCYO
Critics argue that transferring liability to users does not remove the platform’s duty to embed stronger safeguards.
Independent reports suggest Grok has previously been used for deepfake creation, fuelling a wider debate about accountability in the AI sector. The outcome could shape expectations worldwide regarding how platforms design and police powerful AI tools.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Hardware maker Plaud has introduced a new AI notetaking pin called the Plaud NotePin S alongside a Mac desktop app for digital meeting notes ahead of CES in Las Vegas.
The wearable device costs 179 dollars and arrives with several accessories so users can attach or wear it in different ways. A physical button allows quick control of recordings and can be tapped to highlight key moments during conversations.
The NotePin S keeps the same core specifications as the earlier model, including 64GB of storage and up to 20 hours of continuous recording.
Two MEMS microphones capture speech clearly within roughly three metres. Owners receive 300 minutes of transcription each month without extra cost. Apple Find My support is also included, so users can locate the device easily instead of worrying about misplacing it.
Compared with the larger Note Pro, the new pin offers a shorter recording range and battery life, but the small size makes it easier to wear while travelling or working on the go.
Plaud says the device suits users who rely on frequent in-person conversations rather than long seated meetings.
Plaud has now sold more than 1.5 million notetaking devices. The company also aims to enter the AI meeting assistant market with a Mac desktop client that detects when a meeting is active and prompts users to capture audio.
The software records system sound and uses AI to organise the transcript into structured notes. Users can also add typed notes and images instead of relying only on audio.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Reddit has quietly overtaken TikTok to become Britain’s fourth most-visited social media platform, marking a major shift in how people search for information and share opinions online.
Use of the platform among UK internet users has risen sharply over the past two years, driven strongly by younger audiences who are increasingly drawn to open discussion instead of polished influencer content.
Google’s algorithm changes have helped accelerate Reddit’s rise by prioritising forum-based conversations in search results. Partnership deals with major AI companies have reinforced visibility further, as AI tools increasingly cite Reddit threads.
Younger users in the UK appear to value unfiltered and experience-based conversations, creating strong growth across lifestyle, beauty, parenting and relationship communities, alongside major expansion in football-related discussion.
Women now make up more than half of Reddit’s UK audience, signalling a major demographic shift for a platform once associated mainly with male users. Government departments, including ministers, are also using Reddit for direct engagement through public Q&A sessions.
Tension remains part of the platform’s culture, yet company leaders argue that community moderation and voting systems help manage behaviour.
Reddit is now encouraging users to visit directly instead of arriving via search or AI summaries, positioning the platform as a human alternative to automated answers.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Grok, the AI chatbot built into Elon Musk’s social platform X, has been used to produce sexualised ‘edited’ images of real people, including material that appeared to involve children. In a statement cited in the report, Grok attributed some of the outputs to gaps in its safeguards that allowed images showing ‘minors in minimal clothing,’ and said changes were being made to prevent repeat incidents.
One case described a Rio de Janeiro musician, Julie Yukari, who posted a New Year’s Eve photo on X and then noticed other users tagging Grok with requests to alter her image into a bikini-style version. She said she assumed the bot would refuse, but AI-generated, near-nude edits of her image later spread on the platform.
The report suggested that the misuse was widespread and rapidly evolving. In a brief midday snapshot of public prompts, it counted more than 100 attempts in 10 minutes to get Grok to swap people’s clothing for bikinis or more revealing outfits. In dozens of cases, the tool complied wholly or partly, including instances involving people who appeared to be minors.
The episode has also drawn attention from officials outside the US. French ministers said they referred the content to prosecutors and also flagged it to the country’s media regulator, asking for an assessment under the EU’s Digital Services Act. India’s IT ministry, meanwhile, wrote to X’s local operation saying the platform had failed to stop the tool being used to generate and circulate obscene, sexually explicit material.
Specialists quoted in the report argued the backlash was predictable: ‘nudification’ tools have existed for years, but placing a powerful image editor inside a significant social network drastically lowers the effort needed to misuse it and helps harmful content spread. They said civil-society and child-safety groups had warned xAI about likely abuse, while Musk reacted online with joking posts about bikini-style AI edits, and xAI previously brushed off related coverage with the phrase ‘Legacy Media Lies.’
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Face-to-face interviews and oral verification could become a routine part of third-level assessments under new recommendations aimed at addressing the improper use of AI. Institutions are being encouraged to redesign assessment methods to ensure student work is authentic.
The proposals are set out in new guidelines published by the Higher Education Authority (HEA) of Ireland, which regulates universities and other third-level institutions. The report argues that assessment systems must evolve to reflect the growing use of generative AI in education.
While encouraging institutions to embrace AI’s potential, the report stresses the need to ensure students are demonstrating genuine learning. Academics have raised concerns that AI-generated assignments are increasingly difficult to distinguish from original student work.
To address this, the report recommends redesigning assessments to prioritise student authorship and human judgement. Suggested measures include oral verification, process-based learning, and, where appropriate, a renewed reliance on written exams conducted without technology.
The authors also caution against relying on AI detection tools, arguing that integrity processes should be based on dialogue and evidence. They call for clearer policies, staff and student training, and safeguards around data use and equitable access to AI tools.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Growing concern over data privacy and subscription fatigue has led an independent developer to create WitNote, an AI note-taking tool that runs entirely offline.
The software allows users to process notes locally on Windows and macOS rather than relying on cloud-based services where personal information may be exposed.
WitNote supports lightweight language models such as Qwen2.5-0.5B, which can run on machines with limited storage. Users may also connect to external models through API keys if preferred.
Core functions include rewriting, summarising and extending content, while a WYSIWYG Markdown editor provides a familiar workflow without the network delays of web-based interfaces.
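To make the offline workflow concrete, the sketch below shows how a note could be summarised locally with a small instruction-tuned model such as Qwen2.5-0.5B-Instruct, loaded through the open-source Hugging Face transformers library. The model name, prompt and helper function are illustrative assumptions, not WitNote’s actual implementation.

```python
# Illustrative sketch only: summarising a note fully offline with a small
# local model. Model choice and prompt wording are assumptions, not WitNote code.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen2.5-0.5B-Instruct"  # small model suited to limited storage

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

def summarise(note_text: str) -> str:
    """Summarise a note locally; nothing is sent over the network."""
    messages = [
        {"role": "system", "content": "Summarise the user's note in a few bullet points."},
        {"role": "user", "content": note_text},
    ]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, skipping the prompt.
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

if __name__ == "__main__":
    print(summarise("Met the design team; agreed to ship the beta next Friday."))
```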
Another key feature is direct integration with Obsidian Markdown files, allowing notes to be imported instantly and managed in one place.
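In practice, an Obsidian vault is simply a folder of Markdown files on disk, so an import along these lines could be as simple as the hypothetical sketch below; the vault path and function name are assumptions for illustration rather than WitNote’s actual code.

```python
# Hypothetical sketch: importing notes from an Obsidian vault, i.e. a folder
# of Markdown (.md) files on disk. Not WitNote's actual implementation.
from pathlib import Path

def import_obsidian_notes(vault_path: str) -> dict[str, str]:
    """Read every .md file in the vault (recursively) into a title -> text map."""
    vault = Path(vault_path).expanduser()
    return {
        md_file.stem: md_file.read_text(encoding="utf-8")
        for md_file in vault.rglob("*.md")
    }

if __name__ == "__main__":
    notes = import_obsidian_notes("~/Documents/MyVault")  # example vault location
    print(f"Imported {len(notes)} notes")
```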
The developer says the project remains a work in progress but commits to ongoing updates and user-driven improvements, even joining Apple’s developer programme personally to support smoother installation.
For users seeking AI assistance while protecting privacy and avoiding monthly fees, WitNote positions itself as an appealing offline alternative that keeps full control of data on the local machine.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The booming influencer economy of Belgium is colliding with an advertising rulebook that many creators say belongs to another era.
Different obligations across federal, regional and local authorities mean that wording acceptable in one region may trigger a reprimand in another. Some influencers have even faced large fines for administrative breaches such as failing to publish business details on their profiles.
In response, the Influencer Marketing Alliance in Belgium has launched a certification scheme designed to help creators navigate the legal maze instead of risking unintentional violations.
Influencers complete an online course on advertising and consumer law and must pass a final exam before being listed in a public registry monitored by the Jury for Ethical Practices.
Major brands, including L’Oréal and Coca-Cola, already prefer to collaborate with certified creators to ensure compliance and credibility.
Not everyone is convinced.
Some Belgian influencers argue that certification adds more bureaucracy at a time when they already struggle to understand overlapping rules. Others see value in the scheme as a structured reminder that content creators remain legally responsible for commercial communication shared with followers.
The alliance is also pushing lawmakers to involve influencers more closely when drafting future rules, including taxation and safeguards for child creators.
Consumer groups such as BEUC support clearer definitions and obligations under the forthcoming EU Digital Fairness Act, arguing that influencer advertising should follow the same standards as other media instead of remaining in a grey zone.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!