Non-consensual deepfakes, consent, and power in synthetic media

AI has reshaped almost every domain of digital life, from creativity and productivity to surveillance and governance.

One of the most controversial and ethically fraught areas of AI deployment involves pornography, particularly where generative systems are used to create, manipulate, or simulate sexual content involving real individuals without consent.

What was once a marginal issue confined to niche online forums has evolved into a global policy concern, driven by the rapid spread of AI-powered nudity applications, deepfake pornography, and image-editing tools integrated into mainstream platforms.

Recent controversies surrounding AI-powered nudity apps and the image-generation capabilities of Elon Musk’s Grok have accelerated public debate and regulatory scrutiny.


Governments, regulators, and civil society organisations increasingly treat AI-generated sexual content not as a matter of taste or morality, but as an issue of digital harm, gender-based violence, child safety, and fundamental rights.

Legislative initiatives such as the US Take It Down Act illustrate a broader shift toward recognising non-consensual synthetic sexual content as a distinct and urgent category of abuse.

Our analysis examines how AI has transformed pornography, why AI-generated nudity represents a qualitative break from earlier forms of online sexual content, and how governments worldwide are attempting to respond.

It also explores the limits of current legal frameworks and the broader societal implications of delegating sexual representation to machines.

From online pornography to synthetic sexuality

Pornography has long been intertwined with technological change. From photography and film to VHS tapes, DVDs, and streaming platforms, sexual content has often been among the earliest adopters of new media technologies.

The transition from traditional pornography to AI-generated sexual content, however, marks a deeper shift than earlier format changes.

Conventional online pornography relies on human performers, production processes, and contractual relationships, even where exploitation or coercion exists. AI-generated pornography, instead of depicting real sexual acts, simulates them using algorithmic inference.

Faces, bodies, voices, and identities can be reconstructed or fabricated at scale, often without the knowledge or consent of the individuals whose likenesses are used.

AI nudity apps exemplify this transformation. These tools, frequently marketed as entertainment or novelty applications, allow users to upload images of real people and generate artificial nude versions.


The underlying technology relies on diffusion models trained on vast datasets of human bodies and sexual imagery, enabling increasingly realistic outputs. Unlike in traditional pornography, the subject of the image may never have participated in any sexual act, yet the resulting content can be indistinguishable from authentic photography.

This transformation carries profound ethical implications. Instead of consuming representations of consensual adult sexuality, users engage with simulated sexual depictions of real individuals who have not consented to being sexualised.

The distinction between fantasy and violation blurs, particularly when such content is shared publicly or used for harassment.

AI nudity apps and the normalisation of non-consensual sexual content

The recent proliferation of AI nudity applications has intensified concerns around consent and harm. These apps are frequently marketed through euphemistic language, emphasising humour, experimentation, or artistic exploration instead of sexual exploitation.

Their core functionality, however, centres on digitally removing clothing from images of real people.

Regulators and advocacy groups increasingly argue that such tools normalise a culture in which consent is irrelevant. The ability to undress someone digitally, without that person’s knowledge or involvement, reflects a broader pattern of technological power asymmetry, in which the subject of the image lacks meaningful control over how their likeness is used.

The ongoing Grok controversy illustrates how quickly the associated harms can scale when AI tools are embedded within major platforms. Reports that Grok can generate or modify images of women and children in sexualised ways have triggered backlash from governments, regulators, and victims’ rights organisations.


Even where companies claim that safeguards are in place, the repeated emergence of abusive outputs suggests systemic design failures rather than isolated misuse.

AI-generated sexual content differs from earlier forms of online abuse not only in its realism but also in its replicability. Once an image or model exists, reproduction can occur endlessly, with the content shared across jurisdictions and recontextualised in new forms. Victims often face a permanent loss of control over their digital identity, with limited avenues for redress.

Gendered harm and child protection

The impact of AI-generated pornography remains unevenly distributed. Research and reporting consistently show that women and girls are disproportionately targeted by non-consensual synthetic sexual content.

Public figures, journalists, politicians, and private individuals alike have found themselves subjected to sexualised deepfakes designed to humiliate, intimidate, or silence them.


Children face even greater risk. AI tools capable of generating nudified or sexualised images of minors raise alarm across legal and ethical frameworks. Even where no real child experiences physical abuse during content creation, the resulting imagery may still constitute child sexual abuse material under many legal definitions.

The existence of such content contributes to harmful sexualisation and may fuel exploitative behaviour. AI complicates traditional child protection frameworks because the abuse occurs at the level of representation, not physical contact.

Legal systems built around evidentiary standards tied to real-world acts struggle to categorise synthetic material, particularly where perpetrators argue that no real person suffered harm during production.

Regulators increasingly reject such reasoning, recognising that harm arises through exposure, distribution, and psychological impact rather than physical contact alone.

Platform responsibility and the limits of self-regulation

Technology companies have historically relied on self-regulation to address harmful content. In the context of AI-generated pornography, such an approach has demonstrated clear limitations.

Platform policies banning non-consensual sexual content often lag behind technological capabilities, while enforcement remains inconsistent and opaque.

The Grok case highlights these challenges. Even where companies announce restrictions or safeguards, questions remain regarding enforcement, detection accuracy, and accountability.

AI systems struggle to reliably determine whether an image depicts a real person, whether consent exists, or whether local laws apply. Technical uncertainty frequently serves as justification for delayed action.

Commercial incentives further complicate moderation efforts. AI image tools drive user engagement, subscriptions, and publicity. Restricting capabilities may conflict with business objectives, particularly in competitive markets.

As a result, companies tend to act only after public backlash or regulatory intervention, instead of proactively addressing foreseeable harm.

Such patterns have contributed to growing calls for legally enforceable obligations rather than voluntary guidelines. Regulators increasingly argue that platforms deploying generative AI systems should bear responsibility for foreseeable misuse, particularly where sexual harm is involved.

Legal responses and the emergence of targeted legislation

Governments worldwide are beginning to address AI-generated pornography through a combination of existing laws and new legislative initiatives. The Take It Down Act represents one of the most prominent attempts to directly confront non-consensual intimate imagery, including AI-generated content.

The Act strengthens platforms’ obligations to remove intimate images shared without consent, regardless of whether the content is authentic or synthetic. Victims’ rights to request takedowns are expanded, while procedural barriers that previously left individuals navigating complex reporting systems are reduced.

Crucially, the law recognises that harm does not depend on image authenticity, but on the impact experienced by the individual depicted.

Within the EU, debates around AI nudity apps intersect with the AI Act and the Digital Services Act (DSA). While the AI Act categorises certain uses of AI as prohibited or high-risk, lawmakers continue to question whether nudity applications fall clearly within existing bans.


Calls to explicitly prohibit AI-powered nudity tools reflect concern that legal ambiguity creates enforcement gaps.

Other jurisdictions, including Australia, the UK, and parts of Southeast Asia, are exploring regulatory approaches combining platform obligations, criminal penalties, and child protection frameworks.

Such efforts signal a growing international consensus that AI-generated sexual abuse requires specific legal recognition rather than fragmented treatment.

Enforcement challenges and jurisdictional fragmentation

Despite legislative progress, enforcement remains a significant challenge. AI-generated pornography operates inherently across borders. Applications may be developed in one country, hosted in another, and used globally. Content can be shared instantly across platforms, subject to different legal regimes.

Jurisdictional fragmentation complicates takedown requests and criminal investigations. Victims often face complex reporting systems, language barriers, and inconsistent legal standards. Even where a platform complies with local law in one jurisdiction, identical material may remain accessible elsewhere.

Technical enforcement presents additional difficulties. Automated detection systems struggle to distinguish consensual adult content from non-consensual synthetic imagery. Over-reliance on automation risks false positives and censorship, while under-enforcement leaves victims unprotected.

Balancing accuracy, privacy, and freedom of expression remains unresolved.
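To make that accuracy trade-off concrete, the sketch below illustrates one widely used form of automated detection: perceptual-hash matching of uploads against a database of previously reported images, broadly the approach behind initiatives such as StopNCII. It is a minimal illustration under assumed library, function names, and thresholds, not a description of any specific platform’s system, but it shows why detection works for re-uploads of known material while newly generated synthetic images evade it.

```python
# Illustrative sketch only: matching an uploaded image against hashes of
# previously reported imagery. The library (ImageHash), function names, file
# names, and the distance threshold are assumptions chosen for illustration,
# not details from the article or any platform's actual implementation.
from PIL import Image
import imagehash  # pip install ImageHash Pillow


def matches_known_imagery(image_path, known_hashes, max_distance=5):
    """Return True if the image is a near-duplicate of a previously reported one.

    known_hashes: iterable of imagehash.ImageHash values computed from images
    that victims or moderators have already reported. A newly generated
    synthetic image matches nothing here, which is why hash matching alone
    cannot catch novel deepfakes.
    """
    candidate = imagehash.phash(Image.open(image_path))  # perceptual hash, robust to resizing/compression
    return any(candidate - known <= max_distance for known in known_hashes)


# Hypothetical usage with placeholder file names.
known = {imagehash.phash(Image.open("reported.jpg"))}
print(matches_known_imagery("upload.jpg", known))
```

The threshold choice embodies the trade-off noted above: a tighter distance limit reduces false positives but lets lightly edited copies slip through, while a looser one risks flagging legitimate content.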

Broader societal implications

Beyond legal and technical concerns, AI-generated pornography raises deeper questions about sexuality, power, and digital identity.

The ability to fabricate sexual representations of others undermines traditional understandings of bodily autonomy and consent. Sexual imagery becomes detached from lived experience, transformed into manipulable data.

Such shifts risk normalising the perception of individuals as visual assets rather than autonomous subjects. When sexual access can be simulated without consent, the social meaning of consent itself may weaken.

Critics argue that such technologies reinforce misogynistic and exploitative norms, particularly where women’s bodies are treated as endlessly modifiable digital material.


At the same time, defenders of generative AI warn of moral panic and excessive regulation. Arguments persist that not all AI-generated sexual content is harmful, particularly where fictional or consenting adult representations are involved.

The central challenge lies in distinguishing legitimate creative expression from abuse without enabling exploitative practices.

In conclusion, AI has fundamentally altered the landscape of pornography, transforming sexual representation into a synthetic, scalable, and increasingly detached process.

AI nudity apps and controversies surrounding AI tools demonstrate how existing social norms and legal frameworks remain poorly equipped to address non-consensual synthetic sexual content.

Global responses indicate a growing recognition that AI-generated pornography constitutes a distinct category of digital harm. Regulation alone, however, will not resolve the issue.

Effective responses require legal clarity, platform accountability, technical safeguards, and cultural change, not least through education.

As AI systems become more powerful and accessible, societies must confront difficult questions about consent, identity, and responsibility in the digital age.

The challenge lies not merely in restricting technology, but in defining ethical boundaries that protect human dignity while preserving legitimate innovation.

In the months ahead, decisions taken by governments, platforms, and communities will shape the relationship between AI and human autonomy.


Autonomous AI fails most tasks in virtual company experiment

Researchers at Carnegie Mellon University created a virtual company staffed solely by AI ‘employees’ built on large language models from vendors including Anthropic, OpenAI, and Google, assigning them roles such as financial analyst and software engineer.

In this simulated work environment, the AI agents struggled to complete most tasks, with even the best-performing model only completing about a quarter of its assignments.

The experiment highlighted key weaknesses in current AI systems, including difficulty interpreting nuanced instructions, managing web navigation with pop-ups, and coordinating multi-step workflows without human intervention.

These gaps suggest that human judgement, adaptability and collaboration remain essential in real workplaces for the foreseeable future.


The UK labour market feels a sharper impact from AI use

Companies in the UK are reporting net job losses linked to AI adoption, with research showing a sharper impact than in other major economies. A Morgan Stanley survey found that UK firms using the technology for at least a year cut more roles than they created.

The study covered sectors including retail, real estate, transport, healthcare equipment and automotive manufacturing, showing an average productivity increase of 11.5% among UK businesses. Comparable firms in the United States reported similar efficiency gains but continued to expand employment overall.

Researchers pointed to higher operating costs and tax pressures as factors amplifying the employment impact in Britain. Unemployment has reached a four-year high, while increases in the minimum wage and employer national insurance contributions have tightened hiring across industries.

Public concern over AI-driven displacement is also rising, with more than a quarter of UK workers fearing their roles could disappear within five years, according to recruitment firm Randstad. Younger workers expressed the highest anxiety, while older generations showed greater confidence in adapting.

Political leaders warn that unmanaged AI-driven change could disrupt labour markets. London mayor Sadiq Khan said the technology may cut many white-collar jobs, calling for action to create replacement roles.


EU classifies WhatsApp as Very Large Online Platform

WhatsApp has been formally designated a Very Large Online Platform under the EU Digital Services Act, triggering the bloc’s most stringent digital oversight regime.

The classification follows confirmation that the messaging service has exceeded 51 million monthly users in the EU, above the 45 million threshold at which the DSA’s strictest obligations apply.

As a VLOP, WhatsApp must take active steps to limit the spread of disinformation and reduce risks linked to the manipulation of public debate. The platform is also expected to strengthen safeguards for users’ mental health, with particular attention placed on the protection of minors and younger audiences.

The European Commission will oversee compliance directly and may impose financial penalties of up to 6 percent of WhatsApp’s global annual turnover if violations are identified. The company has until mid-May to align its systems, policies and risk assessments with the DSA’s requirements.

WhatsApp joins a growing list of major platforms already subject to similar obligations, including Facebook, Instagram, YouTube and X. The move reflects the Commission’s broader effort to apply the Digital Services Act across social media, messaging services and content platforms linked to systemic online risks.


France proposes EU tools to map foreign tech dependence

France has unveiled a new push to reduce Europe’s dependence on US and Chinese technology suppliers, placing digital sovereignty back at the centre of EU policy debates.

Speaking in Paris, France’s minister for AI and digital affairs, Anne Le Hénanff, presented initiatives to expose and address the structural reliance on non-EU technologies across public administrations and private companies.

Central to the strategy is the creation of a Digital Sovereignty Observatory, which will map foreign technology dependencies and assess organisational exposure to geopolitical and supply-chain risks.

The body, led by former Europe minister Clément Beaune, is intended to provide the evidence base needed for coordinated action rather than symbolic declarations of autonomy.

France is also advancing a Digital Resilience Index, expected to publish its first findings in early 2026. The index will measure reliance on foreign digital services and products, identifying vulnerabilities linked to cloud infrastructure, AI, cybersecurity and emerging technologies.

Industry data suggests Europe’s dependence on external tech providers costs the continent hundreds of billions of euros annually.

Paris is using the initiative to renew calls for a European preference in public-sector digital procurement and for a standard EU definition of European digital services.

Such proposals remain contentious among member states, yet France argues they are essential for restoring strategic control over critical digital infrastructure.


TikTok outages spark fears over data control and censorship in the US

Widespread TikTok disruptions affected users across the US as snowstorms triggered power outages and technical failures, with reports of malfunctioning algorithms and missing content features.

Problems persisted for some users beyond the initial incident, adding to uncertainty surrounding the platform’s stability.

The outage coincided with the creation of a new US-based TikTok joint venture following government concerns over potential Chinese access to user data. TikTok stated that a power failure at a domestic data centre caused the disruption, rather than ownership restructuring or policy changes.

Suspicion grew among users due to overlapping political events, including large-scale protests in Minneapolis and reports of difficulties searching for related content. Fears of censorship spread online, although TikTok attributed all disruptions to infrastructure failure.

The incident also resurfaced concerns over TikTok’s privacy policy, which outlines the collection of sensitive personal data. While some disclosures predated the ownership deal, the timing reinforced broader anxieties over social media surveillance during periods of political tension.


Siri set for major AI overhaul through Google’s Gemini partnership

Apple is preparing a major AI upgrade for Siri powered by Google’s Gemini models, expected in the second half of February, according to Bloomberg. The update will run on Apple’s Private Cloud Compute infrastructure using high-end Mac chips.

The iOS 26.4 release is set to introduce ‘World Knowledge Answers’, enabling Siri to provide web-based summaries with citations similar to ChatGPT and Perplexity. Deeper integration across core apps such as Mail, Photos, Music, TV, and Xcode is also planned.

Expanded voice controls are expected to let users search for and edit photos by spoken description, as well as generate emails based on calendar activity. Bloomberg also reported Apple is paying Google around $1 billion annually to access Gemini’s underlying AI technology.

Market reaction to the news pushed Apple shares higher, while Alphabet stock also rose following confirmation of the partnership. A spokesperson for Apple declined to comment on the reported developments.

Looking ahead, Apple is developing a chatbot-style assistant known internally as ‘Campos’ to eventually replace the current Siri interface. The system would analyse on-screen activity, suggest actions, and expand device control across future operating systems.


AI Overviews leans heavily on YouTube for health information

Google’s health-related search results increasingly draw on YouTube rather than hospitals, government agencies, or academic institutions, as new research reveals how AI Overviews select citation sources in automated results.

An analysis by SEO platform SE Ranking reviewed more than 50,000 German-language health queries and found AI Overviews appeared on over 82% of searches, making healthcare one of the most AI-influenced information categories on Google.

Across all cited sources, YouTube ranked first by a wide margin, accounting for more than 20,000 references and surpassing medical publishers, hospital websites, and public health authorities.

Academic journals and research institutions accounted for less than 1% of citations, while national and international government health bodies accounted for under 0.5%, highlighting a sharp imbalance in source authority.

Researchers warn that when platform-scale content outweighs evidence-based medical sources, the risk extends beyond misinformation to long-term erosion of trust in AI-powered search systems.


Google.org backs AI-led science breakthroughs

Google.org, the company’s philanthropic arm, has selected twelve recipients for its $20 million AI for Science fund, which aims to accelerate research in health, agriculture, biodiversity, and climate.

The initiative backs academic, nonprofit, and startup teams using AI to turn scientific insights into real-world solutions. In health and life sciences, projects target genetic decoding, neural mapping, disease prediction, and faster detection of drug resistance.

Research groups are applying advanced AI models to unlock hidden regions of the human genome, simulate disease pathways, and dramatically reduce detection times for life-threatening pathogens, shifting medicine towards earlier intervention and prevention.

Agriculture and food systems are another focus, using AI to breed resistant crops, improve nutrition, and cut livestock methane emissions. Projects seek to strengthen food security, boost sustainability, and support climate resilience.

Biodiversity and clean energy efforts target species mapping, conservation planning, fusion research, and large-scale carbon capture. Open science principles ensure datasets and tools remain accessible, scalable, and capable of driving wider breakthroughs.


Google fixes Gmail bug that sent spam into primary inboxes

Gmail experienced widespread email filtering issues on Saturday, sending spam into primary inboxes and mislabelling legitimate messages as suspicious, according to Google’s Workspace status dashboard.

Problems began around 5 a.m. Pacific time, with users reporting disrupted inbox categories, unexpected spam warnings and delays in email delivery. Many said promotional and social emails appeared in primary folders, while trusted senders were flagged as potential threats.

Google acknowledged the malfunction throughout the day, noting ongoing efforts to restore normal service as complaints spread across social media platforms.

By Saturday evening, the company confirmed the issue had been fully resolved for all users, although some misclassified messages and spam warnings may remain visible for emails received before the fix.

Google said it is conducting an internal investigation and will publish a detailed incident analysis to explain what caused the disruption.
