EU confronts Grok abuse as Brussels tests its digital power

The European Commission has opened a formal investigation into Grok after the tool produced millions of sexualised images of women and children.

The scrutiny centres on whether X failed to carry out adequate risk assessments before releasing the undressing feature on the European market. The case arrives as ministers, including Sweden’s deputy prime minister, publicly reveal that they have been targeted by the technology.

Brussels is preparing to use its strongest digital laws instead of deferring to US pressure. The Digital Services Act allows the European Commission to fine major platforms or force compliance measures when systemic harms emerge.

Experts argue the Grok investigation represents an important test of European resolve, particularly as the bloc tries to show it can hold powerful companies to account.

Concerns remain about the willingness of the EU to act decisively. Reports suggest the opening of the probe was delayed because of a tariff dispute with Washington, raising questions about whether geopolitical considerations slowed the enforcement response.

Several lawmakers say the delay undermined confidence in the bloc’s commitment to protecting fundamental rights.

The investigation could last months and may have wider implications for content ranking systems already under scrutiny.

Critics say financial penalties may not be enough to change behaviour at X, yet the case is still viewed as a pivotal moment for European digital governance. Observers believe a firm outcome would demonstrate that emerging harms linked to synthetic media cannot be ignored.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI reduces late breast cancer diagnoses by 12% in landmark study

AI in breast cancer screening reduced late diagnoses by 12% and increased early detection rates in the largest trial of its kind. The Swedish study involved 100,000 women randomly assigned to AI-supported screening or standard radiologist readings between April 2021 and December 2022.

The AI system analysed mammograms and assigned low-risk cases to single readings and high-risk cases to double readings by radiologists.
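
As a rough illustration of this kind of triage, the routing decision can be sketched as follows. This is a minimal sketch only: the risk score scale, the threshold and the data structure are assumptions for illustration, not details taken from the study.

```python
# Minimal sketch of risk-based triage for screening reads.
# Assumptions (not from the study): the AI produces a risk score in [0, 1]
# and a single cut-off decides between one and two radiologist readings.

from dataclasses import dataclass


@dataclass
class Mammogram:
    exam_id: str
    ai_risk_score: float  # hypothetical score produced by the AI system


def assign_readings(exam: Mammogram, high_risk_threshold: float = 0.8) -> int:
    """Return how many independent radiologist readings the exam receives."""
    # Low-risk exams get a single reading; high-risk exams get a double reading.
    return 2 if exam.ai_risk_score >= high_risk_threshold else 1


if __name__ == "__main__":
    for exam in (Mammogram("A-001", 0.12), Mammogram("A-002", 0.91)):
        print(exam.exam_id, "->", assign_readings(exam), "reading(s)")
```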

Results published in The Lancet showed 1.55 cancers per 1,000 women in the AI group versus 1.76 in the control group, with 81% detected at the screening stage, compared with 74% in the control group.

Dr Kristina Lång from Lund University said AI-supported mammography could reduce radiologist workload pressures and improve early detection, but cautioned that implementation must be done carefully with continuous monitoring.

Researchers stressed that screening still requires at least one human radiologist working alongside the AI, rather than the AI replacing radiologists. Cancer Research UK’s Dr Sowmiya Moorthie called the findings promising but noted that more research is needed to confirm the life-saving potential.

Breast Cancer Now’s Simon Vincent highlighted the significant potential for AI to support radiologists, emphasising that earlier diagnosis improves treatment outcomes for a disease that affects over 2 million people globally each year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Non-consensual deepfakes, consent, and power in synthetic media

AI has reshaped almost every domain of digital life, from creativity and productivity to surveillance and governance.

One of the most controversial and ethically fraught areas of AI deployment involves pornography, particularly where generative systems are used to create, manipulate, or simulate sexual content involving real individuals without consent.

What was once a marginal issue confined to niche online forums has evolved into a global policy concern, driven by the rapid spread of AI-powered nudity applications, deepfake pornography, and image-editing tools integrated into mainstream platforms.

Recent controversies surrounding AI-powered nudity apps and the image-generation capabilities of Elon Musk’s Grok have accelerated public debate and regulatory scrutiny.

Governments, regulators, and civil society organisations increasingly treat AI-generated sexual content not as a matter of taste or morality, but as an issue of digital harm, gender-based violence, child safety, and fundamental rights.

Legislative initiatives such as the US Take It Down Act illustrate a broader shift toward recognising non-consensual synthetic sexual content as a distinct and urgent category of abuse.

Our analysis examines how AI has transformed pornography, why AI-generated nudity represents a qualitative break from earlier forms of online sexual content, and how governments worldwide are attempting to respond.

It also explores the limits of current legal frameworks and the broader societal implications of delegating sexual representation to machines.

From online pornography to synthetic sexuality

Pornography has long been intertwined with technological change. From photography and film to VHS tapes, DVDs, and streaming platforms, sexual content has often been among the earliest adopters of new media technologies.

The transition from traditional pornography to AI-generated sexual content, however, marks a deeper shift than earlier format changes.

Conventional online pornography relies on human performers, production processes, and contractual relationships, even where exploitation or coercion exists. AI-generated pornography, instead of depicting real sexual acts, simulates them using algorithmic inference.

Faces, bodies, voices, and identities can be reconstructed or fabricated at scale, often without the knowledge or consent of the individuals whose likenesses are used.

AI nudity apps exemplify such a transformation. These tools allow users to upload images of real people and generate artificial nude versions, frequently marketed as entertainment or novelty applications.

The underlying technology relies on diffusion models trained on vast datasets of human bodies and sexual imagery, enabling increasingly realistic outputs. Unlike traditional pornography, the subject of the image may never have participated in any sexual act, yet the resulting content can be indistinguishable from authentic photography.

This transformation carries profound ethical implications. Rather than consuming representations of consensual adult sexuality, users generate simulated sexual imagery of real individuals who have not consented to being sexualised.

The distinction between fantasy and violation blurs, particularly when such content is shared publicly or used for harassment.

AI nudity apps and the normalisation of non-consensual sexual content

The recent proliferation of AI nudity applications has intensified concerns around consent and harm. These apps are frequently marketed through euphemistic language, emphasising humour, experimentation, or artistic exploration instead of sexual exploitation.

Their core functionality, however, centres on digitally removing clothing from images of real people.

Regulators and advocacy groups increasingly argue that such tools normalise a culture in which consent is irrelevant. The ability to undress someone digitally, without personal involvement, reflects a broader pattern of technological power asymmetry, where the subject of the image lacks meaningful control over how personal likeness is used.

The ongoing Grok controversy illustrates how quickly the associated harms can scale when AI tools are embedded within major platforms. Reports that Grok can generate or modify images of women and children in sexualised ways have triggered backlash from governments, regulators, and victims’ rights organisations.

Even where companies claim that safeguards are in place, the repeated emergence of abusive outputs suggests systemic design failures rather than isolated misuse.

What distinguishes AI-generated sexual content from earlier forms of online abuse lies not only in realism but also in replicability. Once an image or model exists, reproduction can occur endlessly, with the content shared across jurisdictions and recontextualised in new forms. Victims often face a permanent loss of control over digital identity, with limited avenues for redress.

Gendered harm and child protection

The impact of AI-generated pornography remains unevenly distributed. Research and reporting consistently show that women and girls are disproportionately targeted by non-consensual synthetic sexual content.

Public figures, journalists, politicians, and private individuals alike have found themselves subjected to sexualised deepfakes designed to humiliate, intimidate, or silence them.

Children face even greater risk. AI tools capable of generating nudified or sexualised images of minors raise alarm across legal and ethical frameworks. Even where no real child experiences physical abuse during content creation, the resulting imagery may still constitute child sexual abuse material under many legal definitions.

The existence of such content contributes to harmful sexualisation and may fuel exploitative behaviour. AI complicates traditional child protection frameworks because the abuse occurs at the level of representation, not physical contact.

Legal systems built around evidentiary standards tied to real-world acts struggle to categorise synthetic material, particularly where perpetrators argue that no real person suffered harm during production.

Regulators increasingly reject such reasoning, recognising that harm arises through exposure, distribution, and psychological impact rather than physical contact alone.

Platform responsibility and the limits of self-regulation

Technology companies have historically relied on self-regulation to address harmful content. In the context of AI-generated pornography, such an approach has demonstrated clear limitations.

Platform policies banning non-consensual sexual content often lag behind technological capabilities, while enforcement remains inconsistent and opaque.

The Grok case highlights these challenges. Even where companies announce restrictions or safeguards, questions remain regarding enforcement, detection accuracy, and accountability.

AI systems struggle to reliably determine whether an image depicts a real person, whether consent exists, or whether local laws apply. Technical uncertainty frequently serves as justification for delayed action.

Commercial incentives further complicate moderation efforts. AI image tools drive user engagement, subscriptions, and publicity. Restricting capabilities may conflict with business objectives, particularly in competitive markets.

As a result, companies tend to act only after public backlash or regulatory intervention, instead of proactively addressing foreseeable harm.

Such patterns have contributed to growing calls for legally enforceable obligations rather than voluntary guidelines. Regulators increasingly argue that platforms deploying generative AI systems should bear responsibility for foreseeable misuse, particularly where sexual harm is involved.

Legal responses and the emergence of targeted legislation

Governments worldwide are beginning to address AI-generated pornography through a combination of existing laws and new legislative initiatives. The Take It Down Act represents one of the most prominent attempts to directly confront non-consensual intimate imagery, including AI-generated content.

The Act strengthens platforms’ obligations to remove intimate images shared without consent, regardless of whether the content is authentic or synthetic. Victims’ rights to request takedowns are expanded, while procedural barriers that previously left individuals navigating complex reporting systems are reduced.

Crucially, the law recognises that harm does not depend on image authenticity, but on the impact experienced by the individual depicted.

Within the EU, debates around AI nudity apps intersect with the AI Act and the Digital Services Act (DSA). While the AI Act categorises certain uses of AI as prohibited or high-risk, lawmakers continue to question whether nudity applications fall clearly within existing bans.

Calls to explicitly prohibit AI-powered nudity tools reflect concern that legal ambiguity creates enforcement gaps.

Other jurisdictions, including Australia, the UK, and parts of Southeast Asia, are exploring regulatory approaches combining platform obligations, criminal penalties, and child protection frameworks.

Such efforts signal a growing international consensus that AI-generated sexual abuse requires specific legal recognition rather than fragmented treatment.

Enforcement challenges and jurisdictional fragmentation

Despite legislative progress, enforcement remains a significant challenge. AI-generated pornography operates inherently across borders. Applications may be developed in one country, hosted in another, and used globally. Content can be shared instantly across platforms, subject to different legal regimes.

Jurisdictional fragmentation complicates takedown requests and criminal investigations. Victims often face complex reporting systems, language barriers, and inconsistent legal standards. Even where a platform complies with local law in one jurisdiction, identical material may remain accessible elsewhere.

Technical enforcement presents additional difficulties. Automated detection systems struggle to distinguish consensual adult content from non-consensual synthetic imagery. Over-reliance on automation risks false positives and censorship, while under-enforcement leaves victims unprotected.

Balancing accuracy, privacy, and freedom of expression remains unresolved.

Broader societal implications

Beyond legal and technical concerns, AI-generated pornography raises deeper questions about sexuality, power, and digital identity.

The ability to fabricate sexual representations of others undermines traditional understandings of bodily autonomy and consent. Sexual imagery becomes detached from lived experience, transformed into manipulable data.

Such shifts risk normalising the perception of individuals as visual assets rather than autonomous subjects. When sexual access can be simulated without consent, the social meaning of consent itself may weaken.

Critics argue that such technologies reinforce misogynistic and exploitative norms, particularly where women’s bodies are treated as endlessly modifiable digital material.

At the same time, defenders of generative AI warn of moral panic and excessive regulation. Arguments persist that not all AI-generated sexual content is harmful, particularly where fictional or consenting adult representations are involved.

The central challenge lies in distinguishing legitimate creative expression from abuse without enabling exploitative practices.

In conclusion, AI has fundamentally altered the landscape of pornography, transforming sexual representation into a synthetic, scalable, and increasingly detached process.

AI nudity apps and controversies surrounding AI tools demonstrate how existing social norms and legal frameworks remain poorly equipped to address non-consensual synthetic sexual content.

Global responses indicate a growing recognition that AI-generated pornography constitutes a distinct category of digital harm. Regulation alone, however, will not resolve the issue.

Effective responses require legal clarity, platform accountability, technical safeguards, and cultural change, with education playing a central role.

As AI systems become more powerful and accessible, societies must confront difficult questions about consent, identity, and responsibility in the digital age.

The challenge lies not merely in restricting technology, but in defining ethical boundaries that protect human dignity while preserving legitimate innovation.

In the months ahead, decisions taken by governments, platforms, and communities will shape the relationship between AI and human autonomy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN warns of rising AI-driven threats to child safety

UN agencies have issued a stark warning over the accelerating risks AI poses to children online, citing rising cases of grooming, deepfakes, cyberbullying and sexual extortion.

A joint statement published on 19 January urges urgent global action, highlighting how AI tools increasingly enable predators to target vulnerable children with unprecedented precision.

Recent data underscores the scale of the threat, with technology-facilitated child abuse cases in the US surging from 4,700 in 2023 to more than 67,000 in 2024.

During the COVID-19 pandemic, online exploitation intensified, particularly affecting girls and young women, with digital abuse frequently translating into real-world harm, according to officials from the International Telecommunication Union.

Governments are tightening policies, led by Australia’s social media ban for under-16s, as the UK, France and Canada consider similar measures. UN agencies urged tech firms to prioritise child safety and called for stronger AI literacy across society.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Analysis reveals Grok generated 3 million sexualised images

A new analysis found Grok generated an estimated three million sexualised images in 11 days, including around 23,000 appearing to depict children. The findings raise serious concerns over safeguards, content moderation, and platform responsibility.

The surge followed the launch of Grok’s one-click image editing feature in late December, which quickly gained traction among users. Restrictions were later introduced, including paid access limits and technical measures to prevent image undressing.

Researchers based their estimates on a random sample of 20,000 images, extrapolating from these results to more than 4.6 million images generated during the study period. Automated tools and manual review identified sexualised content and confirmed cases involving individuals appearing under 18.
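
As a back-of-the-envelope illustration of how such an extrapolation works, the calculation simply scales a sample proportion up to the full volume. The per-sample counts below are hypothetical placeholders chosen only to be consistent with the reported totals; they are not the researchers’ published figures.

```python
# Minimal sketch of extrapolating prevalence from a random sample.
# Only the totals (~4.6 million images generated, 20,000 sampled) come from
# the reporting; the per-sample counts below are hypothetical placeholders.

def extrapolate(flagged_in_sample: int, sample_size: int, total_images: int) -> float:
    """Scale a sample proportion up to the full volume of generated images."""
    return (flagged_in_sample / sample_size) * total_images


TOTAL_IMAGES = 4_600_000   # images generated during the study period (reported)
SAMPLE_SIZE = 20_000       # randomly sampled images reviewed (reported)

sexualised_in_sample = 13_000   # hypothetical count, chosen for illustration
minors_in_sample = 100          # hypothetical count, chosen for illustration

print(f"Estimated sexualised images: {extrapolate(sexualised_in_sample, SAMPLE_SIZE, TOTAL_IMAGES):,.0f}")
print(f"Estimated images depicting minors: {extrapolate(minors_in_sample, SAMPLE_SIZE, TOTAL_IMAGES):,.0f}")
```

With these placeholder counts, the calculation reproduces roughly three million sexualised images and about 23,000 images appearing to depict minors, matching the scale of the reported estimates.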

Campaigners have warned that the findings expose significant gaps in AI safety controls, particularly in protecting children. Calls are growing for stricter oversight, stronger accountability, and more robust safeguards before large-scale AI image deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Japan arrests suspect over AI deepfake pornography

Police in Japan have arrested a man accused of creating and selling non-consensual deepfake pornography using AI tools. The Tokyo Metropolitan Police Department said thousands of manipulated images of female celebrities were distributed through paid websites.

Investigators in Japan allege the suspect generated hundreds of thousands of images over two years using freely available generative AI software. Authorities say the content was promoted on social media before being sold via subscription platforms.

The arrest follows earlier cases in Japan and reflects growing concern among police worldwide. In South Korea, law enforcement has reported hundreds of arrests linked to deepfake sexual crimes, while cases have also emerged in the UK.

European agencies, including Europol, have also coordinated arrests tied to AI-generated abuse material. Law enforcement bodies say the spread of accessible AI tools is forcing rapid changes in forensic investigation and in the handling of digital evidence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU considers further action against Grok over AI nudification concerns

The European Commission has signalled readiness to escalate action against Elon Musk’s AI chatbot Grok, following concerns over the spread of non-consensual sexualised images on the social media platform X.

EU tech chief Henna Virkkunen told Members of the European Parliament that existing digital rules allow regulators to respond to risks linked to AI-driven nudification tools.

Grok has been associated with the circulation of digitally altered images depicting real people, including women and children, without consent. Virkkunen described such practices as unacceptable and stressed that protecting minors online remains a central priority for EU enforcement under the Digital Services Act.

While no formal investigation has yet been launched, the Commission is examining whether X may have breached the DSA and has already ordered the platform to retain internal information related to Grok until the end of 2026.

Commission President Ursula von der Leyen has also publicly condemned the creation of sexualised AI images without consent.

The controversy has intensified calls from EU lawmakers to strengthen regulation, with several urging an explicit ban on AI-powered nudification under the forthcoming AI Act.

The debate reflects wider international pressure on governments to address the misuse of generative AI technologies and reinforce safeguards across digital platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyberviolence against women rises across Europe amid deepfake abuse

Digital violence targeting women and girls is spreading across Europe, according to new research highlighting cyberstalking, surveillance and online threats as the most common reported abuses.

Digital tools have expanded opportunities for communication, yet online environments increasingly expose women to persistent harassment instead of safety and accountability.

Image-based abuse has grown sharply, with deepfake pornography now dominating synthetic sexual content and almost exclusively targeting women.

More than half of European countries report rising cases of non-consensual intimate image sharing, while national data show women forming a clear majority of cyberstalking and online threat victims.

Algorithmic systems accelerate the circulation of misogynistic material, creating enclosed digital spaces where abuse is normalised rather than challenged. Researchers warn that automated recommendation mechanisms can quickly spread harmful narratives, particularly among younger audiences.

Recent generative technologies have further intensified concerns by enabling sexualised image manipulation with limited safeguards.

Investigations into chatbot-generated images prompted new restrictions, yet women’s rights groups argue that enforcement and prevention still lag behind the scale of online harm.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

California moves to halt X AI deepfakes

California has ordered Elon Musk’s AI company xAI to stop creating and sharing non-consensual sexual deepfakes immediately. The move follows a surge in explicit AI-generated images circulating on X.

Attorney General Rob Bonta said xAI’s Grok tool enabled the manipulation of images of women and children without consent. Authorities argue that such activity breaches state decency laws and a new deepfake pornography ban.

The California investigation began after researchers found that Grok users shared more non-consensual sexual imagery than users of other platforms. xAI introduced partial restrictions, though regulators said the real-world impact remains unclear.

Lawmakers say the case highlights growing risks linked to AI image tools. California officials warned companies could face significant penalties if deepfake creation and distribution continue unchecked.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Council of Europe highlights legal frameworks for AI fairness

The Council of Europe recently hosted an online event to examine the challenges posed by algorithmic discrimination and explore ways to strengthen governance frameworks for AI and automated decision-making (ADM) systems.

Two new publications were presented, focusing on legal protections against algorithmic bias and policy guidelines for equality bodies and human rights institutions.

Algorithmic bias has been shown to exacerbate existing social inequalities. In employment, AI systems trained on historical data may unfairly favour male candidates or disadvantage minority groups.
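
To make that mechanism concrete, the deliberately simplified sketch below uses entirely synthetic hiring records and a toy per-group rate “model”; all names and numbers are illustrative assumptions, not drawn from the publications presented at the event.

```python
# Deliberately simplified sketch of bias inherited from historical data.
# The records and the per-group "model" are synthetic and purely illustrative.

from collections import defaultdict

# Synthetic historical records: (gender, qualified, was_hired).
history = (
    [("male", True, True)] * 80 + [("male", True, False)] * 20 +
    [("female", True, True)] * 50 + [("female", True, False)] * 50
)

# "Training": estimate the hire rate per gender among qualified candidates.
counts = defaultdict(lambda: [0, 0])  # gender -> [hired, total]
for gender, qualified, hired in history:
    if qualified:
        counts[gender][0] += int(hired)
        counts[gender][1] += 1


def predicted_hire_probability(gender: str) -> float:
    hired, total = counts[gender]
    return hired / total


# Equally qualified candidates receive different scores purely because the
# model has absorbed the historical pattern.
print("qualified male:  ", predicted_hire_probability("male"))    # 0.8
print("qualified female:", predicted_hire_probability("female"))  # 0.5
```

Any real system would use a more sophisticated statistical model, but the mechanism is the same: whatever skew exists in the training records resurfaces in the predictions.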

Public authorities also use AI in law enforcement, migration, welfare, justice, education, and healthcare, where profiling, facial recognition, and other automated tools can carry discriminatory risks. Private-sector applications in banking, insurance, and personnel services similarly raise concerns.

Legal frameworks such as the EU AI Act (2024/1689) and the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law aim to mitigate these risks. The publications review how regulations protect against algorithmic discrimination and highlight remaining gaps.

National equality bodies and human rights structures play a key role in monitoring AI/ADM systems, ensuring compliance, and promoting human rights-based deployment.

The webinar highlighted practical guidance and examples for applying EU and Council of Europe rules to public sector AI initiatives, fostering more equitable and accountable systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot