Grok, the AI chatbot built into Elon Musk’s social platform X, has been used to produce sexualised ‘edited’ images of real people, including material that appeared to involve children. In a statement cited in the report, Grok attributed some of the outputs to lapses in its safeguards that allowed it to generate images showing ‘minors in minimal clothing’, and said changes were being made to prevent repeat incidents.
One case described a Rio de Janeiro musician, Julie Yukari, who posted a New Year’s Eve photo on X and then noticed other users tagging Grok with requests to alter her image into a bikini-style version. She said she assumed the bot would refuse, but AI-generated, near-nude edits of her image later spread on the platform.
The report suggested that the misuse was widespread and rapidly evolving. In a brief midday snapshot of public prompts, it counted more than 100 attempts in 10 minutes to get Grok to swap people’s clothing for bikinis or more revealing outfits. In dozens of cases, the tool complied wholly or partly, including instances involving people who appeared to be minors.
The episode has also drawn attention from officials outside the US. French ministers said they referred the content to prosecutors and also flagged it to the country’s media regulator, asking for an assessment under the EU’s Digital Services Act. India’s IT ministry, meanwhile, wrote to X’s local operation saying the platform had failed to stop the tool being used to generate and circulate obscene, sexually explicit material.
Specialists quoted in the report argued the backlash was predictable: ‘nudification’ tools have existed for years, but placing a powerful image editor inside a significant social network drastically lowers the effort needed to misuse it and helps harmful content spread. They said civil-society and child-safety groups had warned xAI about likely abuse, while Musk reacted online with joking posts about bikini-style AI edits, and xAI previously brushed off related coverage with the phrase ‘Legacy Media Lies.’
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Face-to-face interviews and oral verification could become a routine part of third-level assessments under new recommendations aimed at addressing the improper use of AI. Institutions are being encouraged to redesign assessment methods to ensure student work is authentic.
The proposals are set out in new guidelines published by the Higher Education Authority (HEA) of Ireland, which regulates universities and other third-level institutions. The report argues that assessment systems must evolve to reflect the growing use of generative AI in education.
While encouraging institutions to embrace AI’s potential, the report stresses the need to ensure students are demonstrating genuine learning. Academics have raised concerns that AI-generated assignments are increasingly difficult to distinguish from original student work.
To address this, the report recommends redesigning assessments to prioritise student authorship and human judgement. Suggested measures include oral verification, process-based learning, and, where appropriate, a renewed reliance on written exams conducted without technology.
The authors also caution against relying on AI detection tools, arguing that integrity processes should be based on dialogue and evidence. They call for clearer policies, staff and student training, and safeguards around data use and equitable access to AI tools.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Growing concern over data privacy and subscription fatigue has led an independent developer to create WitNote, an AI note-taking tool that runs entirely offline.
The software allows users to process notes locally on Windows and macOS rather than relying on cloud-based services where personal information may be exposed.
WitNote supports lightweight language models such as Qwen2.5-0.5B, which have modest storage and memory requirements. Users can also connect to external models through API keys if they prefer.
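As a rough illustration of the approach, the short Python sketch below runs a comparably lightweight model fully on-device to summarise a note. It assumes the Hugging Face transformers library and the public Qwen/Qwen2.5-0.5B-Instruct checkpoint; WitNote’s actual internals are not documented here, and the sample note is invented.

```python
# A minimal sketch of a fully local note-summarisation pipeline, assuming
# the Hugging Face transformers library and the public
# Qwen/Qwen2.5-0.5B-Instruct checkpoint (WitNote's internals may differ).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen2.5-0.5B-Instruct"  # ~1 GB download, then fully offline
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

note = ("Met the design team today. Agreed to ship the beta on Friday "
        "and to collect user feedback through next week.")
messages = [
    {"role": "system", "content": "Summarise the user's note in one sentence."},
    {"role": "user", "content": note},
]

# Render the chat into the model's prompt format and generate on-device.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=60)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

After the one-off model download, nothing leaves the machine, which is the privacy argument behind tools of this kind.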
Core functions include rewriting, summarising and extending content, while a WYSIWYG Markdown editor provides a familiar workflow without the network delays of web-based interfaces.
Another key feature is direct integration with Obsidian Markdown files, allowing notes to be imported instantly and managed in one place.
The developer says the project remains a work in progress but has committed to ongoing updates and user-driven improvements, even enrolling personally in Apple’s developer programme to support smoother installation.
For users seeking AI assistance while protecting privacy and avoiding monthly fees, WitNote positions itself as an appealing offline alternative that keeps full control of data on the local machine.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Belgium’s booming influencer economy is colliding with an advertising rulebook that many creators say belongs to another era.
Different obligations across federal, regional and local authorities mean that wording acceptable in one region may trigger a reprimand in another. Some influencers have even faced large fines for administrative breaches such as failing to publish business details on their profiles.
In response, the Influencer Marketing Alliance in Belgium has launched a certification scheme designed to help creators navigate the legal maze instead of risking unintentional violations.
Influencers complete an online course on advertising and consumer law and must pass a final exam before being listed in a public registry monitored by the Jury for Ethical Practices.
Major brands, including L’Oréal and Coca-Cola, already prefer to collaborate with certified creators to ensure compliance and credibility.
Not everyone is convinced.
Some Belgian influencers argue that certification adds more bureaucracy at a time when they already struggle to understand overlapping rules. Others see value in it as a structured reminder that creators remain legally responsible for the commercial communications they share with followers.
The alliance is also pushing lawmakers to involve influencers more closely when drafting future rules, including taxation and safeguards for child creators.
Consumer groups such as BEUC support clearer definitions and obligations under the forthcoming EU Digital Fairness Act, arguing that influencer advertising should follow the same standards as other media instead of remaining in a grey zone.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Polish authorities have urged the European Commission to investigate TikTok over AI-generated content advocating Poland’s exit from the European Union. Officials say the videos pose risks to democratic processes and public order.
Deputy Minister for Digitalisation Dariusz Standerski highlighted that the narratives, distribution patterns, and synthetic audiovisual material suggest TikTok may not be fulfilling its obligations under the EU Digital Services Act for Very Large Online Platforms.
The associated TikTok account has since disappeared from the platform.
The Digital Services Act requires platforms to address systemic risks, including disinformation, and allows fines of up to 6% of a company’s global annual turnover for non-compliance. Neither TikTok nor the Commission provided immediate comment.
Authorities emphasised that the investigation could set an important precedent for how EU countries address AI-driven disinformation on major social media platforms.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI dictation has finally matured after years of patchy performance and frustrating inaccuracies.
Advances in speech-to-text engines and large language models now allow modern dictation tools to recognise everyday speech more reliably while keeping enough context to format sentences automatically instead of producing raw transcripts that require heavy editing.
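The speech-to-text half of that pipeline can be reproduced locally in a few lines. The Python sketch below uses the open-source Whisper library (openai-whisper on PyPI); which engine any given commercial app actually uses is an assumption, and the audio file name is hypothetical.

```python
# A minimal local speech-to-text sketch using the open-source Whisper
# library (pip install openai-whisper; requires ffmpeg on the system).
import whisper

model = whisper.load_model("base")           # small checkpoint, CPU-friendly
result = model.transcribe("voice_memo.mp3")  # hypothetical file, runs on-device

# Whisper returns a raw transcript; the newer dictation apps then pass
# text like this through a language model for punctuation and formatting.
print(result["text"])
```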
Several leading apps have emerged with different strengths. Wispr Flow focuses on flexibility with style options and custom vocabulary, while Willow blends automation with privacy by storing transcripts locally.
Monologue also prioritises privacy by allowing users to download the model and run transcription entirely on their own machines. Superwhisper caters for power users by supporting multiple downloadable models and transcription from audio or video files.
Other tools take different approaches. VoiceTypr offers an offline-first design with lifetime licensing, Aqua promotes speed and phrase-based shortcuts, Handy provides a simple, free, open-source starting point, and Typeless gives one of the most generous free allowances while promising strong data protection.
Each reflects a wider trend in which developers try to balance convenience, privacy, control and affordability.
Users now benefit from cleaner, more natural-sounding transcripts instead of the rigid audio typing tools of previous years. AI dictation has become faster, more accurate and far more usable for everyday note-taking, messaging and work tasks.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
China has proposed stringent new rules for AI aimed at protecting children and preventing chatbots from providing advice that could lead to self-harm, violence, or gambling.
The draft regulations, published by the Cyberspace Administration of China (CAC), require developers to include personalised settings, time limits, and parental consent for services offering emotional companionship.
High-risk chats involving self-harm or suicide must be passed to a human operator, with guardians or emergency contacts alerted. AI providers must not produce content that threatens national security, harms national honour, or undermines national unity.
The rules come as AI usage surges, with platforms such as DeepSeek, Z.ai, and Minimax attracting millions of users in China and abroad. The CAC supports safe AI use, including tools for local culture and elderly companionship.
The move reflects growing global concerns over AI’s impact on human behaviour. Notably, OpenAI has faced legal challenges over alleged chatbot-related harm, prompting the company to create roles focused on tracking AI risks to mental health and cybersecurity.
China’s draft rules signal a firm approach to regulating AI technology as its influence expands rapidly.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Many visually impaired gamers find mainstream video games difficult due to limited accessibility features. Support groups enable players to share tips, recommend titles, and connect with others who face similar challenges.
Audio and text‑based mobile games are popular, yet console and PC titles often lack voiceovers or screen reader support. Adjustable visual presets could make mainstream games more accessible for partially sighted players.
UK industry bodies acknowledge progress, but barriers remain for millions of visually impaired players. Communities offer social support and give developers feedback that helps make games more inclusive.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
China is proposing new rules requiring users to consent before AI companies can use chat logs for training. The draft measures aim to balance innovation with safety and public interest.
Platforms would need to inform users when interacting with AI and provide options to access or delete their chat history. For minors, guardian consent is required before sharing or storing any data.
Analysts say the rules may slow improvements to AI chatbots but offer guidance on responsible development. The measures signal that some user conversations are too sensitive to be used freely as training data.
The draft rules are open for public consultation with feedback due in late January. China encourages expanding human-like AI applications once safety and reliability are demonstrated.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Association of Chartered Certified Accountants (ACCA) has announced it will largely end remote examinations in the UK from March 2026, requiring students to sit tests in person unless exceptional circumstances apply.
The decision aims to address a surge in cheating, particularly facilitated by AI tools.
Remote testing was introduced during the Covid-19 pandemic to allow students to continue qualifying when in-person exams were impossible. The ACCA said online assessments have now become too difficult to monitor effectively, despite efforts to strengthen safeguards against misconduct.
Investigations show cheating has affected major auditing firms, including the Big Four and other top companies. High-profile cases, such as EY’s $100m (£74m) settlement in the US, highlight the risks posed by compromised professional examinations.
While other accounting bodies, including the Institute of Chartered Accountants in England and Wales, continue to allow some online exams, the ACCA has indicated that high-stakes assessments must now be conducted in person to maintain credibility and integrity.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!