EU warns X over Grok AI image abuse

The European Commission has warned X to address issues related to its Grok AI tool. Regulators say new features enabled the creation of sexualised images, including those of children.

EU Tech Sovereignty Commissioner Henna Virkkunen has stated that investigators have already taken action under the Digital Services Act. Failure to comply could trigger enforcement measures against the platform.

X recently restricted Grok’s image editing functions to paying users after criticism from regulators and campaigners. Irish and EU media watchdogs are now engaging with Brussels on the issue.

UK ministers also plan laws banning non-consensual intimate images and tools enabling their creation. Several digital rights groups argue that existing laws already permit criminal investigations and fines.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

New Spanish bill targets AI misuse of images and voices

Spain’s government has approved draft legislation that would tighten consent rules for AI-generated content, aiming to curb deepfakes and strengthen protections for the use of people’s images and voices. The proposal responds to growing concerns in Europe about AI being used to create harmful material, especially sexual content produced without the subject’s permission.

Under the draft, the minimum age to consent to the use of one’s own image would be set at 16, and stricter limits would apply to reusing images found online or reproducing a person’s voice or likeness through AI without authorisation. Spain’s Justice Minister Félix Bolaños warned that sharing personal photos on social media should not be treated as blanket approval for others to reuse them in different contexts.

The reform explicitly targets commercial misuse by classifying the use of AI-generated images or voices for advertising or other business purposes without consent as illegitimate. At the same time, it would still allow creative, satirical, or fictional uses involving public figures, so long as the material is clearly labelled as AI-generated.

Spain’s move aligns with broader EU efforts, as the bloc is working toward rules that would require member states to criminalise non-consensual sexual deepfakes by 2027. The push comes amid rising scrutiny of AI tools and real-world cases that have intensified calls for more precise legal boundaries, including a recent request by the Spanish government for prosecutors to review whether specific AI-generated material could fall under child pornography laws.

The bill is not yet final. It must go through a public consultation process before returning to the government for final approval and then heading to parliament.

Malta plans tougher laws against deepfake abuse

Malta’s government is preparing new legal measures to curb the abusive use of deepfake technology, with existing laws now under review. The planned reforms aim to introduce penalties for the misuse of AI in cases of harassment, blackmail, and bullying.

The move mirrors earlier cyberbullying and cyberstalking laws, extending similar protections to AI-generated content. Authorities are promoting AI while stressing the need for strong public safety and legal safeguards.

AI and youth participation were the main themes discussed during the National Youth Parliament meeting, where Prime Minister Robert Abela highlighted the role of young people in shaping Malta’s long-term development strategy, Vision Malta 2050.

The strategy focuses on the next 25 years and directly affects those entering the workforce or starting families.

Young people were described as key drivers of national policy in areas such as fertility, environmental protection, and work-life balance. Senior officials and members of the Youth Advisory Forum attended the meeting.

Australia raises concerns over AI misuse on X

The eSafety regulator in Australia has expressed concern over the misuse of the generative AI system Grok on social media platform X, following reports involving sexualised or exploitative content, particularly affecting children.

Although overall report numbers remain low, Australian authorities have observed an increase in recent weeks.

The regulator confirmed that enforcement powers under the Online Safety Act remain available where content meets defined legal thresholds.

X and other services are subject to systemic obligations requiring the detection and removal of child sexual exploitation material, alongside broader industry codes and safety standards.

eSafety has formally requested further information from X regarding safeguards designed to prevent misuse of generative AI features and to ensure compliance with existing obligations.

Previous enforcement actions taken in 2025 against similar AI services resulted in their withdrawal from the Australian market.

Additional mandatory safety codes will take effect in March 2026, introducing new obligations for AI services to limit children’s exposure to sexually explicit, violent and self-harm-related material.

Authorities emphasised the importance of Safety by Design measures and continued international cooperation among online safety regulators.

AI-powered toys navigate safety concerns after early missteps

Toy makers at the Consumer Electronics Show highlighted efforts to improve AI in playthings following troubling early reports of chatbots giving unsuitable responses to children’s questions.

A recent Public Interest Research Group report found that some AI toys, such as an AI-enabled teddy bear, produced inappropriate advice, prompting companies like FoloToy to update their models and suspend problematic products.

Among newer devices, Curio’s Grok toy, which refuses to answer questions deemed inappropriate and allows parental overrides, has earned independent safety certification. However, concerns remain about continuous listening and data privacy.

Experts advise parents to be cautious about toys that retain information over time or engage in ongoing interactions with young users.

Some manufacturers are positioning AI toys as educational tools, such as language-learning companions with time-limited, guided chat interactions, while others have built in flags to alert parents when inappropriate content arises.

Despite these advances, critics argue that self-regulation is insufficient and call for clearer guardrails and possible regulation to protect children in AI-toy environments.

Teen victim turns deepfake experience into education

A US teenager targeted by explicit deepfake images has helped create a new training course. The programme aims to support students, parents and school staff facing online abuse.

The course explains how AI tools are used to create sexualised fake images. It also outlines legal rights, reporting steps and available victim support resources.

Research shows deepfake abuse is spreading among teenagers despite stronger laws. One in eight US teens knows someone targeted by non-consensual fake images.

Developers say education remains critical as AI tools become easier to access. Schools are encouraged to adopt training to protect students and prevent harm.

UK considers regulatory action after Grok’s deepfake images on X

UK Prime Minister Keir Starmer is consulting Canada and Australia on a coordinated response to concerns surrounding social media platform X, after its AI assistant Grok was used to generate sexualised deepfake images of women and children.

The discussions focus on shared regulatory approaches rather than immediate bans.

X acknowledged weaknesses in its AI safeguards and limited image generation to paying users. Lawmakers in several countries have stated that further regulatory scrutiny may be required, while Canada has clarified that no prohibition is currently under consideration, despite concerns over platform responsibility.

In the UK, media regulator Ofcom is examining potential breaches of online safety obligations. Technology Secretary Liz Kendall confirmed that enforcement mechanisms remain available if legal requirements are not met.

Australian Prime Minister Anthony Albanese also raised broader concerns about social responsibility in the use of generative AI.

X owner Elon Musk rejected accusations of non-compliance, describing potential restrictions as censorship and suppression of free speech.

European authorities requested the preservation of internal records for possible investigations, while Indonesia and Malaysia have already blocked access to the platform.

Indonesia and Malaysia restrict access to Grok AI over content safeguards

Malaysia and Indonesia have restricted access to Grok, the AI chatbot available through the X platform, following concerns about its image generation capabilities.

Authorities said the tool had been used to create manipulated images depicting real individuals in sexually explicit contexts.

Regulators in both countries said the decision was based on the absence of sufficient safeguards to prevent misuse.

Requests for additional risk mitigation measures were communicated to the platform operator, with access expected to remain limited until further protections are introduced.

The move has drawn attention from regulators in other regions, where online safety frameworks allow intervention when digital services fail to address harmful content. Discussions have focused on platform responsibility, content moderation standards, and compliance with existing legal obligations.

Instagram responds to claims of user data exposure

Reports published by cybersecurity researchers indicated that data linked to approximately 17.5 million Instagram accounts had been offered for sale on underground forums.

The dataset reportedly includes usernames, contact details and physical address information, raising broader concerns around digital privacy and data aggregation.

Instagram responded within hours, stating that no breach of its internal systems had occurred. According to the company, some users received password reset emails after an external party abused a feature that has since been fixed.

The platform said affected accounts remained secure, with no unauthorised access recorded.

Security analysts have noted that risks arise when online identifiers are combined with external datasets, rather than originating from a single platform.

Such aggregation can increase exposure to targeted fraud, impersonation and harassment, reinforcing the importance of cautious digital security practices across social media ecosystems.

BBC launches media literacy series for teenagers

BBC Children’s and Education has launched Solve The Story, a new series tackling online misinformation among teenagers. The six-part programme is designed for classroom use across UK schools.

The series follows research showing that teachers lack the resources to teach critical thinking effectively. Surveys found that teenagers struggle with the volume of online content, while one in three teachers finds media literacy difficult to deliver.

Solve The Story uses mystery-style storytelling to help pupils question sources, spot deepfakes and challenge viral claims. Each episode includes practical classroom guides supporting teachers and lesson planning.

BBC figures show that two-thirds of teenagers worry about fake news causing confusion and stress. Educators argue that AI-driven misinformation makes structured media literacy support increasingly urgent.
