Scrutiny has intensified around X after its Grok chatbot was found generating non-consensual explicit images when prompted by users.
Grok had been positioned as a creative AI assistant, yet regulators reacted swiftly once altered photos were linked to content involving minors. Governments and rights groups renewed pressure on platforms to prevent abusive use of generative AI.
India’s Ministry of Electronics and IT issued a notice to X demanding an Action Taken Report within 72 hours, citing failure to restrict unlawful content.
Authorities in France referred similar cases to prosecutors and urged enforcement under the EU’s Digital Services Act, signalling growing international resolve to control AI misuse.
Elon Musk responded by stating that users, not Grok, would be legally responsible for illegal material generated through prompts. The company said offenders would face permanent bans and cooperation with law enforcement.
We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.
Anyone using or prompting Grok to make illegal content will suffer the… https://t.co/93kiIBTCYO
Critics argue that transferring liability to users does not remove the platform’s duty to embed stronger safeguards.
Independent reports suggest Grok has previously been used to create deepfakes, fuelling a wider debate about accountability in the AI sector. The outcome could shape expectations worldwide regarding how platforms design and police powerful AI tools.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Hardware maker Plaud has introduced a new AI notetaking pin called the Plaud NotePin S alongside a Mac desktop app for digital meeting notes ahead of CES in Las Vegas.
The wearable device costs $179 and ships with several accessories so users can attach or wear it in different ways. A physical button allows quick control of recordings and can be tapped to highlight key moments during conversations.
The NotePin S keeps the same core specifications as the earlier model, including 64GB of storage and up to 20 hours of continuous recording.
Two MEMS microphones capture speech clearly within roughly three metres. Owners receive 300 minutes of transcription each month without extra cost. Apple Find My support is also included, so users can locate the device easily instead of worrying about misplacing it.
Compared with the larger Note Pro, the new pin offers a shorter recording range and battery life, but the small size makes it easier to wear while travelling or working on the go.
Plaud says the device suits users who rely on frequent in-person conversations rather than long seated meetings.
Plaud has now sold more than 1.5 million notetaking devices. The company also aims to enter the AI meeting assistant market with a Mac desktop client that detects when a meeting is active and prompts users to capture audio.
The software records system sound and uses AI to organise the transcript into structured notes. Users can also add typed notes and images instead of relying only on audio.
In the UK, Reddit has quietly overtaken TikTok to become Britain’s fourth most-visited social media platform, marking a major shift in how people search for information and share opinions online.
Use of the platform among UK internet users has risen sharply over the past two years, driven strongly by younger audiences who are increasingly drawn to open discussion instead of polished influencer content.
Google’s algorithm changes have helped accelerate Reddit’s rise by prioritising forum-based conversations in search results. Partnership deals with major AI companies have reinforced visibility further, as AI tools increasingly cite Reddit threads.
Younger users in the UK appear to value unfiltered and experience-based conversations, creating strong growth across lifestyle, beauty, parenting and relationship communities, alongside major expansion in football-related discussion.
Women now make up more than half of Reddit’s UK audience, signalling a major demographic shift for a platform once associated mainly with male users. Government departments, including ministers, are also using Reddit for direct engagement through public Q&A sessions.
Tension remains part of the platform’s culture, yet company leaders argue that community moderation and voting systems help manage behaviour.
Reddit is now encouraging users to visit directly instead of arriving via search or AI summaries, positioning the platform as a human alternative to automated answers.
Grok, the AI chatbot built into Elon Musk’s social platform X, has been used to produce sexualised ‘edited’ images of real people, including material that appeared to involve children. In a statement cited in the report, Grok attributed some of the outputs to gaps in its safeguards that allowed images showing ‘minors in minimal clothing,’ and said changes were being made to prevent repeat incidents.
One case described a Rio de Janeiro musician, Julie Yukari, who posted a New Year’s Eve photo on X and then noticed other users tagging Grok with requests to alter her image into a bikini-style version. She said she assumed the bot would refuse, but AI-generated, near-nude edits of her image later spread on the platform.
The report suggested that the misuse was widespread and rapidly evolving. In a brief midday snapshot of public prompts, it counted more than 100 attempts in 10 minutes to get Grok to swap people’s clothing for bikinis or more revealing outfits. In dozens of cases, the tool complied wholly or partly, including instances involving people who appeared to be minors.
The episode has also drawn attention from officials outside the US. French ministers said they referred the content to prosecutors and also flagged it to the country’s media regulator, asking for an assessment under the EU’s Digital Services Act. India’s IT ministry, meanwhile, wrote to X’s local operation saying the platform had failed to stop the tool being used to generate and circulate obscene, sexually explicit material.
Specialists quoted in the report argued the backlash was predictable: ‘nudification’ tools have existed for years, but placing a powerful image editor inside a significant social network drastically lowers the effort needed to misuse it and helps harmful content spread. They said civil-society and child-safety groups had warned xAI about likely abuse, while Musk reacted online with joking posts about bikini-style AI edits, and xAI previously brushed off related coverage with the phrase ‘Legacy Media Lies.’
Face-to-face interviews and oral verification could become a routine part of third-level assessments under new recommendations aimed at addressing the improper use of AI. Institutions are being encouraged to redesign assessment methods to ensure student work is authentic.
The proposals are set out in new guidelines published by the Higher Education Authority (HEA) of Ireland, which regulates universities and other third-level institutions. The report argues that assessment systems must evolve to reflect the growing use of generative AI in education.
While encouraging institutions to embrace AI’s potential, the report stresses the need to ensure students are demonstrating genuine learning. Academics have raised concerns that AI-generated assignments are increasingly difficult to distinguish from original student work.
To address this, the report recommends redesigning assessments to prioritise student authorship and human judgement. Suggested measures include oral verification, process-based learning, and, where appropriate, a renewed reliance on written exams conducted without technology.
The authors also caution against relying on AI detection tools, arguing that integrity processes should be based on dialogue and evidence. They call for clearer policies, staff and student training, and safeguards around data use and equitable access to AI tools.
Growing concern over data privacy and subscription fatigue has led an independent developer to create WitNote, an AI note-taking tool that runs entirely offline.
The software allows users to process notes locally on Windows and macOS rather than relying on cloud-based services where personal information may be exposed.
WitNote supports lightweight language models such as Qwen2.5-0.5B that can run with limited storage requirements. Users may also connect to external models through API keys if preferred.
Core functions include rewriting, summarising and extending content, while a WYSIWYG Markdown editor provides a familiar, responsive workflow without the network delays of web-based interfaces.
Another key feature is direct integration with Obsidian Markdown files, allowing notes to be imported instantly and managed in one place.
The developer says the project remains a work in progress but commits to ongoing updates and user-driven improvements, even joining Apple’s developer programme personally to support smoother installation.
For users seeking AI assistance while protecting privacy and avoiding monthly fees, WitNote positions itself as an appealing offline alternative that keeps full control of data on the local machine.
The booming influencer economy of Belgium is colliding with an advertising rulebook that many creators say belongs to another era.
Different obligations across federal, regional and local authorities mean that wording acceptable in one region may trigger a reprimand in another. Some influencers have even faced large fines for administrative breaches such as failing to publish business details on their profiles.
In response, the Influencer Marketing Alliance in Belgium has launched a certification scheme designed to help creators navigate the legal maze instead of risking unintentional violations.
Influencers complete an online course on advertising and consumer law and must pass a final exam before being listed in a public registry monitored by the Jury for Ethical Practices.
Major brands, including L’Oréal and Coca-Cola, already prefer to collaborate with certified creators to ensure compliance and credibility.
Not everyone is convinced.
Some Belgian influencers argue that certification adds more bureaucracy at a time when they already struggle to understand overlapping rules. Others see value in it as a structured reminder that content creators remain legally responsible for commercial communication shared with followers.
The alliance is also pushing lawmakers to involve influencers more closely when drafting future rules, including taxation and safeguards for child creators.
Consumer groups such as BEUC support clearer definitions and obligations under the forthcoming EU Digital Fairness Act, arguing that influencer advertising should follow the same standards as other media instead of remaining in a grey zone.
Polish authorities have urged the European Commission to investigate TikTok over AI-generated content advocating Poland’s exit from the European Union. Officials say the videos pose risks to democratic processes and public order.
Deputy Minister for Digitalisation Dariusz Standerski highlighted that the narratives, distribution patterns, and synthetic audiovisual material suggest TikTok may not be fulfilling its obligations under the EU Digital Services Act for Very Large Online Platforms.
The associated TikTok account has since disappeared from the platform.
The Digital Services Act requires platforms to address systemic risks, including disinformation, and allows fines of up to 6% of a company’s global annual turnover for non-compliance. TikTok and the Commission did not immediately comment.
Authorities emphasised that the investigation could set an important precedent for how EU countries address AI-driven disinformation on major social media platforms.
AI dictation has finally matured after years of patchy performance and frustrating inaccuracies.
Advances in speech-to-text engines and large language models now allow modern dictation tools to recognise everyday speech more reliably while keeping enough context to format sentences automatically instead of producing raw transcripts that require heavy editing.
Several leading apps have emerged with different strengths. Wispr Flow focuses on flexibility with style options and custom vocabulary, while Willow blends automation with privacy by storing transcripts locally.
Monologue also prioritises privacy by allowing users to download the model and run transcription entirely on their own machines. Superwhisper caters for power users by supporting multiple downloadable models and transcription from audio or video files.
Other tools take different approaches. VoiceTypr offers an offline-first design with lifetime licensing, Aqua promotes speed and phrase-based shortcuts, Handy provides a simple, free and open-source starting point, and Typeless gives one of the most generous free allowances while promising strong data protection.
Each reflects a wider trend where developers try to balance convenience, privacy, control and affordability.
Users now benefit from cleaner, more natural-sounding transcripts instead of the rigid audio typing tools of previous years. AI dictation has become faster, more accurate and far more usable for everyday note-taking, messaging and work tasks.
China has proposed stringent new rules for AI aimed at protecting children and preventing chatbots from providing advice that could lead to self-harm, violence, or gambling.
The draft regulations, published by the Cyberspace Administration of China (CAC), require developers to include personalised settings, time limits, and parental consent for services offering emotional companionship.
High-risk chats involving self-harm or suicide must be passed to a human operator, with guardians or emergency contacts alerted. AI providers must not produce content that threatens national security, harms national honour, or undermines national unity.
The rules come as AI usage surges, with platforms such as DeepSeek, Z.ai, and Minimax attracting millions of users in China and abroad. The CAC says it supports safe AI use, including tools that promote local culture and provide companionship for the elderly.
The move reflects growing global concerns over AI’s impact on human behaviour. Notably, OpenAI has faced legal challenges over alleged chatbot-related harm, prompting the company to create roles focused on tracking AI risks to mental health and cybersecurity.
China’s draft rules signal a firm approach to regulating AI technology as its influence expands rapidly.