Malta plans tougher laws against deepfake abuse

Malta’s government is preparing new legal measures to curb the abusive use of deepfake technology, with existing laws now under review. The planned reforms aim to introduce penalties for the misuse of AI in cases of harassment, blackmail, and bullying.

The move mirrors earlier cyberbullying and cyberstalking laws, extending similar protections to AI-generated content. Authorities are promoting AI while stressing the need for strong public safety and legal safeguards.

AI and youth participation were the main themes discussed during the National Youth Parliament meeting, where Prime Minister Robert Abela highlighted the role of young people in shaping Malta’s long-term development strategy, Vision Malta 2050.

The strategy focuses on the next 25 years and directly affects those entering the workforce or starting families.

Young people were described as key drivers of national policy in areas such as fertility, environmental protection, and work-life balance. Senior officials and members of the Youth Advisory Forum attended the meeting.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Indonesia and Malaysia restrict access to Grok AI over content safeguards

Malaysia and Indonesia have restricted access to Grok, the AI chatbot available through the X platform, following concerns about its image generation capabilities.

Authorities said the tool had been used to create manipulated images depicting real individuals in sexually explicit contexts.

Regulatory bodies in Malaysia and Indonesia stated that the decision was based on the absence of sufficient safeguards to prevent misuse.

Requests for additional risk mitigation measures were communicated to the platform operator, with access expected to remain limited until further protections are introduced.

The move has drawn attention from regulators in other regions, where online safety frameworks allow intervention when digital services fail to address harmful content. Discussions have focused on platform responsibility, content moderation standards, and compliance with existing legal obligations.

EU instructs X to keep all Grok chatbot records

The European Commission has ordered X to retain all internal documents and data on its AI chatbot Grok until the end of 2026. The order falls under the Digital Services Act and follows concerns that Grok’s ‘spicy’ mode enabled sexualised deepfakes of minors.

The move continues EU oversight, recalling a January 2025 order to preserve X’s recommender system documents amid claims it amplified far-right content during German elections. EU regulators emphasised that platforms must manage the content generated by their AI responsibly.

Earlier this week, X submitted responses to the Commission regarding Grok’s outputs following concerns over Holocaust denial content. While the deepfake scandal has prompted calls for further action, the Commission has not launched a formal investigation into Grok.

Regulators reiterated that it remains X’s responsibility to ensure the chatbot’s outputs meet European standards, and retention of all internal records is crucial for ongoing monitoring and accountability.

X restricts Grok image editing after deepfake backlash

Elon Musk’s platform X has restricted image editing with its AI chatbot Grok to paying users, following widespread criticism over the creation of non-consensual sexualised deepfakes.

The move comes after Grok allowed users to digitally alter images of people, including removing clothing without consent. While free users can still access image tools through Grok’s separate app and website, image editing within X now requires a paid subscription linked to verified user details.

Legal experts and child protection groups said the change does not address the underlying harm. Professor Clare McGlynn said limiting access fails to prevent abuse, while the Internet Watch Foundation warned that unsafe tools should never have been released without proper safeguards.

UK government officials urged regulator Ofcom to use its full powers under the Online Safety Act, including possible financial restrictions on X. Prime Minister Sir Keir Starmer described the creation of sexualised AI images involving adults and children as unlawful and unacceptable.

The controversy has renewed pressure on X to introduce stronger ethical guardrails for Grok. Critics argue that restricting features to subscribers does not prevent misuse, and that meaningful protections are needed to stop AI tools from enabling image-based abuse.

Reddit overtakes TikTok in the UK social media race

Reddit has quietly overtaken TikTok to become Britain’s fourth most-visited social media platform, marking a major shift in how people search for information and share opinions online.

Use of the platform among UK internet users has risen sharply over the past two years, driven strongly by younger audiences who are increasingly drawn to open discussion instead of polished influencer content.

Google’s algorithm changes have helped accelerate Reddit’s rise by prioritising forum-based conversations in search results. Partnership deals with major AI companies have reinforced visibility further, as AI tools increasingly cite Reddit threads.

Younger users in the UK appear to value unfiltered and experience-based conversations, creating strong growth across lifestyle, beauty, parenting and relationship communities, alongside major expansion in football-related discussion.

Women now make up more than half of Reddit’s UK audience, signalling a major demographic shift for a platform once associated mainly with male users. Government departments, including ministers, are also using Reddit for direct engagement through public Q&A sessions.

Tension remains part of the platform’s culture, yet company leaders argue that community moderation and voting systems help manage behaviour.

Reddit is now encouraging users to visit directly instead of arriving via search or AI summaries, positioning the platform as a human alternative to automated answers.

AI chatbots spreading rumours raise new risks

Researchers warn AI chatbots are spreading rumours about real people without human oversight. Unlike human gossip, bot-to-bot exchanges can escalate unchecked, growing more extreme as they move through AI networks.

Philosophers Joel Krueger and Lucy Osler from the University of Exeter describe this phenomenon as ‘feral gossip.’ It involves negative evaluations about absent third parties and can persist undetected across platforms.

Real-world examples include tech reporter Kevin Roose, who encountered hostile AI-generated assessments of his work from multiple chatbots, seemingly amplified as the content filtered through training data.

The researchers highlight that AI systems lack the social checks humans provide, allowing rumours to intensify unchecked. Chatbots are designed to appear trustworthy and personal, so negative statements can seem credible.

Such misinformation has already affected journalists, academics, and public officials, sometimes prompting legal action. Technosocial harms from AI gossip extend beyond embarrassment. False claims can damage reputations, influence decisions, and persist online and offline.

While chatbots are not conscious, their prioritisation of conversational fluency over factual accuracy can make the rumours they spread difficult to detect and correct.

AI fuels online abuse of women in public life

Generative AI is increasingly being weaponised to harass women in public roles, according to a new report commissioned by UN Women. Journalists, activists, and human rights defenders face AI-assisted abuse that endangers personal safety and democratic freedoms.

The study surveyed 641 women from 119 countries and found that nearly one in four of those experiencing online violence reported AI-generated or amplified abuse.

Writers, communicators, and influencers reported the highest exposure, with human rights defenders and journalists also at significant risk. Rapidly developing AI tools, including deepfakes, facilitate the creation of harmful content that spreads quickly on social media.

Online attacks often escalate into offline harm, with 41% of women linking online abuse to physical harassment, stalking, or intimidation. Female journalists are particularly affected, with offline attacks more than doubling over five years.

Experts warn that such violence threatens freedom of expression and democratic processes, particularly in authoritarian contexts.

Researchers call for urgent legal frameworks, platform accountability, and technological safeguards to prevent AI-assisted attacks on women. They advocate for human rights-focused AI design and stronger support systems to protect women in public life.

UK plans ban on deepfake AI nudification apps

Britain plans to ban AI-nudification apps that digitally remove clothing from images. Creating or supplying these tools would become illegal under new proposals.

The offence would build on existing UK laws covering non-consensual sexual deepfakes and intimate image abuse. Technology Secretary Liz Kendall said developers and distributors would face harsh penalties.

Experts warn that nudification apps cause serious harm, particularly when used to create child sexual abuse material. Children’s Commissioner Dame Rachel de Souza has called for a total ban on the technology.

Child protection charities welcomed the move but want more decisive action from tech firms. The government said it would work with companies to stop children from creating or sharing nude images.

UK launches taskforce to boost women in tech

The UK government has formed a Women in Tech taskforce to help more women enter, stay in, and lead within the technology sector. Technology Secretary Liz Kendall will guide the group alongside industry figures determined to narrow long-standing representation gaps highlighted by recent BCS data.

Members include Anne-Marie Imafidon, Allison Kirkby and Francesca Carlesi, who will advise ministers on boosting diversity and supporting economic growth. Leaders stress that better representation enables more inclusive decision-making and encourages technology built with wider perspectives in mind.

The taskforce plans to address barriers affecting women’s progression, ranging from career access to investment opportunities. Organisations such as techUK and the Royal Academy of Engineering argue that gender imbalance limits innovation, particularly as the UK pursues ambitious AI goals.

UK officials expect working groups to develop proposals over the coming months, focusing on practical steps that broaden the talent pool. Advocates say the initiative arrives at a crucial moment as emerging technologies reshape employment and demand more inclusive leadership.

EU faces new battles over digital rights

EU policy debates intensified after Denmark abandoned plans for mandatory mass scanning in the draft Child Sexual Abuse Regulation. Advocates welcomed the shift yet warned that new age checks and potential app bans still threaten privacy.

France and the UK advanced consultations on good practice guidelines for cyber intrusion firms, seeking more explicit rules for industry responsibility. Civil society groups also marked two years of the Digital Services Act by reflecting on enforcement experience and future challenges.

Campaigners highlighted rising concerns about tech-facilitated gender violence during the 16 Days initiative. The Centre for Democracy and Technology launched fresh resources stressing encryption protection, effective remedies and more decisive action against gendered misinformation.

CDT Europe also criticised the Commission’s digital omnibus package for weakening safeguards under existing laws, including the AI Act. The group urged firm enforcement of current frameworks while exploring better redress options for AI-related harms in EU legislation.
