TikTok accused of breaching EU digital safety rules

The European Commission has concluded that TikTok’s design breaches the Digital Services Act by encouraging compulsive use and failing to protect users, particularly children and teenagers.

Preliminary findings say the platform relies heavily on features such as infinite scroll, which automatically delivers new videos and makes disengagement difficult.

Regulators argue that such mechanisms place users into habitual patterns of repeated viewing rather than supporting conscious choice. EU officials found that safeguards introduced by TikTok do not adequately reduce the risks linked to excessive screen time.

Daily screen time limits were described as ineffective because alerts are easy to dismiss, even for younger users who receive automatic restrictions. Parental control tools were also criticised for requiring significant effort, technical knowledge and ongoing involvement from parents.

Henna Virkkunen, the Commission’s executive vice-president for tech sovereignty, security and democracy, said addictive social media design can harm the development of young people. European law, she said, makes platforms responsible for the effects their services have on users.

Regulators concluded that compliance with the Digital Services Act would require TikTok to alter core elements of its product, including changes to infinite scroll, recommendation systems and screen break features.

TikTok rejected the findings, calling them inaccurate and saying the company would challenge the assessment. The platform argues that it already offers a range of tools, including sleep reminders and wellbeing features, to help users manage their time.

The investigation remains ongoing and no penalties have yet been imposed. A final decision could still result in enforcement measures, including fines of up to six per cent of TikTok’s global annual turnover.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Slovenia plans social media ban for children under 15

Slovenia is joining a growing number of countries moving to ban access to social media platforms for children under the age of 15, as the government prepares draft legislation aimed at protecting minors online.

Deputy Prime Minister Matej Arčon said the Education Ministry initiated the proposal, which would be developed with input from professionals.

The planned law would apply to major social networks where user-generated content is shared, including TikTok, Snapchat and Instagram. Arčon said the initiative reflects growing international concern over the impact of social media on children’s mental health, privacy and exposure to addictive design features.

Slovenia’s move follows similar debates and proposals across Europe and beyond. Countries such as Italy, France, Spain, the UK, Greece and Austria have considered restrictions, while Australia has already introduced a nationwide minimum age for social media use.

Spain’s prime minister recently defended proposed limits, arguing that technology companies should not influence democratic decision-making.

Critics of such bans warn of potential unintended consequences. Telegram founder Pavel Durov has argued that age-based restrictions could lead to broader data collection and increased state control over online content.

Despite these concerns, Slovenia’s government appears determined to proceed, positioning the measure as part of a broader effort to strengthen child protection in the digital space.


EU split widens over ban on AI nudification apps

European lawmakers remain divided over whether AI tools that generate non-consensual sexual images should face an explicit ban in EU legislation.

The split emerged as debate over the AI simplification package intensified while the measure moves through Parliament and the Council, rather than remaining confined to earlier negotiations.

Concerns escalated after Grok was used to create images that digitally undressed women and children.

EU regulators responded by launching an investigation under the Digital Services Act, and the Commission described the behaviour as illegal under existing European rules. Several lawmakers argue that the AI Act should name nudification apps directly instead of relying on broader legal provisions.

Lead MEPs did not include a ban in their initial draft of the Parliament’s position, prompting other groups to consider adding amendments. Negotiations continue as parties explore how such a restriction could be framed without creating inconsistencies within the broader AI framework.

The Commission appears open to strengthening the law and has hinted that the AI omnibus could be an appropriate moment to act. Lawmakers now have a limited time to decide whether an explicit prohibition can secure political agreement before the amendment deadline passes.


Under 16 social media ban proposed in Spain

Spain is preparing legislation to ban social media access for users under 16, with the proposal expected to be introduced within days. Prime Minister Pedro Sánchez framed the move as a child-protection measure aimed at reducing exposure to harmful online environments.

Government plans include mandatory age-verification systems for platforms, designed to serve as practical barriers rather than symbolic safeguards. Officials argue that minors face escalating risks online, including addiction, exploitation, violent content, and manipulation.

Additional provisions could hold technology executives legally accountable for unlawful or hateful content that remains online. The proposal reflects a broader regulatory shift toward platform responsibility and stricter enforcement standards.

Momentum for youth restrictions is building across Europe. France and Denmark are pursuing similar controls, while the EU Digital Services Act guidelines allow member states to define a national ‘digital majority age’.

The European Commission is also testing an age verification app, with wider deployment expected next year.


Ofcom expands scrutiny of X over Grok deepfake concerns

The British regulator, Ofcom, has released an update on its investigation into X after reports that the Grok chatbot had generated sexual deepfakes of real people, including minors.

In response, the regulator initiated a formal inquiry to assess whether X took adequate steps to manage the spread of such material and to remove it swiftly.

X has since introduced measures to limit the distribution of manipulated images, while the ICO and regulators abroad have opened parallel investigations.

The Online Safety Act does not cover all chatbot services, as regulation depends on whether a system enables user interactions, provides search functionality, or produces pornographic material.

Many AI chatbots fall partly or entirely outside the Act’s scope, limiting regulators’ ability to act when harmful content is created during one-to-one interactions.

Ofcom cannot currently investigate the standalone Grok service for producing illegal images because the Act does not cover that form of generation.

Evidence-gathering from X continues, with legally binding information requests issued to the company. Ofcom will offer X a full opportunity to present representations before any provisional findings are published.

Enforcement actions take several months, since regulators must follow strict procedural safeguards to ensure decisions are robust and defensible.

Ofcom added that people who encounter harmful or illegal content online are encouraged to report it directly to the relevant platforms. Incidents involving intimate images can be reported to dedicated services for adults or support schemes for minors.

Material that may constitute child sexual abuse should be reported to the Internet Watch Foundation.


France targets X over algorithm abuse allegations

The cybercrime unit of the Paris prosecutor has raided the French office of X as part of an expanding investigation into alleged algorithm manipulation and illicit data extraction.

Authorities said the probe began in 2025 after a lawmaker warned that biased algorithms on the platform might have interfered with automated data systems. Europol supported the operation together with national cybercrime officers.

Prosecutors confirmed that the investigation now includes allegations of complicity in circulating child sex abuse material, sexually explicit deepfakes and denial of crimes against humanity.

Elon Musk and former chief executive Linda Yaccarino have been summoned for questioning in April in their roles as senior figures of the company at the time.

The prosecutor’s office also announced its departure from X in favour of LinkedIn and Instagram, rather than continuing to use the platform under scrutiny.

X strongly rejected the accusations and described the raid as politically motivated. Musk claimed authorities should focus on pursuing sex offenders instead of targeting the company.

The platform’s government affairs team said the investigation amounted to law enforcement theatre rather than a legitimate examination of serious offences.

Regulatory pressure increased further as the UK data watchdog opened inquiries into both X and xAI over concerns about Grok producing sexualised deepfakes. Ofcom is already conducting a separate investigation that is expected to take months.

The widening scrutiny reflects growing unease around alleged harmful content, political interference and the broader risks linked to large-scale AI systems.


Grok returns to Indonesia as X agrees to tightened oversight

Indonesia has restored access to Grok after receiving guarantees from X that stronger safeguards will be introduced to prevent further misuse of the AI tool.

Authorities suspended the service last month following the spread of sexualised images on the platform, making Indonesia the first country to block the system.

Officials from the Ministry of Communications and Digital Affairs said that access had been reinstated on a conditional basis after X submitted a written commitment outlining concrete measures to strengthen compliance with national law.

The ministry emphasised that the document serves as a starting point for evaluation instead of signalling the end of supervision.

However, the government warned that restrictions could return if Grok fails to meet local standards or if new violations emerge. Indonesian regulators stressed that monitoring would remain continuous, and access could be withdrawn immediately should inconsistencies be detected.

The decision marks a cautious reopening rather than a full reinstatement, reflecting Indonesia’s wider efforts to demand greater accountability from global platforms deploying advanced AI systems within its borders.


EU confronts Grok abuse as Brussels tests its digital power

The European Commission has opened a formal investigation into Grok after the tool produced millions of sexualised images of women and children.

The scrutiny centres on whether X failed to carry out adequate risk assessments before releasing the undressing feature in the European market. The case arrives as ministers, including Sweden’s deputy prime minister, publicly reveal having been targeted by the technology.

Brussels is preparing to use its strongest digital laws instead of deferring to US pressure. The Digital Services Act allows the European Commission to fine major platforms or force compliance measures when systemic harms emerge.

Experts argue the Grok investigation represents an important test of European resolve, particularly as the bloc tries to show it can hold powerful companies to account.

Concerns remain about the willingness of the EU to act decisively. Reports suggest the opening of the probe was delayed because of a tariff dispute with Washington, raising questions about whether geopolitical considerations slowed the enforcement response.

Several lawmakers say the delay undermined confidence in the bloc’s commitment to protecting fundamental rights.

The investigation could last months and may have wider implications for content ranking systems already under scrutiny.

Critics say financial penalties may not be enough to change behaviour at X, yet the case is still viewed as a pivotal moment for European digital governance. Observers believe a firm outcome would demonstrate that emerging harms linked to synthetic media cannot be ignored.


AI reduces late breast cancer diagnoses by 12% in landmark study

AI in breast cancer screening reduced late diagnoses by 12% and increased early detection rates in the largest trial of its kind. The Swedish study involved 100,000 women randomly assigned to AI-supported screening or standard radiologist readings between April 2021 and December 2022.

The AI system analysed mammograms and assigned low-risk cases to single readings and high-risk cases to double readings by radiologists.

Results published in The Lancet showed 1.55 cancers per 1,000 women in the AI group versus 1.76 in the control group, with 81% detected at the screening stage, compared with 74% in the control group.

Dr Kristina Lång from Lund University said AI-supported mammography could reduce radiologist workload pressures and improve early detection, but cautioned that implementation must be done carefully with continuous monitoring.

Researchers stressed that screening still requires at least one human radiologist working alongside AI, rather than AI replacing human radiologists. Cancer Research UK’s Dr Sowmiya Moorthie called the findings promising but noted more research is needed to confirm life-saving potential.

Breast Cancer Now’s Simon Vincent highlighted the significant potential for AI to support radiologists, emphasising that earlier diagnosis improves treatment outcomes for a disease that affects over 2 million people globally each year.


Non-consensual deepfakes, consent, and power in synthetic media

AI has reshaped almost every domain of digital life, from creativity and productivity to surveillance and governance.

One of the most controversial and ethically fraught areas of AI deployment involves pornography, particularly where generative systems are used to create, manipulate, or simulate sexual content involving real individuals without consent.

What was once a marginal issue confined to niche online forums has evolved into a global policy concern, driven by the rapid spread of AI-powered nudity applications, deepfake pornography, and image-editing tools integrated into mainstream platforms.

Recent controversies surrounding AI-powered nudity apps and the image-generation capabilities of Elon Musk’s Grok have accelerated public debate and regulatory scrutiny.


Governments, regulators, and civil society organisations increasingly treat AI-generated sexual content not as a matter of taste or morality, but as an issue of digital harm, gender-based violence, child safety, and fundamental rights.

Legislative initiatives such as the US Take It Down Act illustrate a broader shift toward recognising non-consensual synthetic sexual content as a distinct and urgent category of abuse.

Our analysis examines how AI has transformed pornography, why AI-generated nudity represents a qualitative break from earlier forms of online sexual content, and how governments worldwide are attempting to respond.

It also explores the limits of current legal frameworks and the broader societal implications of delegating sexual representation to machines.

From online pornography to synthetic sexuality

Pornography has long been intertwined with technological change. From photography and film to VHS tapes, DVDs, and streaming platforms, sexual content has often been among the earliest adopters of new media technologies.

The transition from traditional pornography to AI-generated sexual content, however, marks a deeper shift than earlier format changes.

Conventional online pornography relies on human performers, production processes, and contractual relationships, even where exploitation or coercion exists. AI-generated pornography, instead of depicting real sexual acts, simulates them using algorithmic inference.

Faces, bodies, voices, and identities can be reconstructed or fabricated at scale, often without the knowledge or consent of the individuals whose likenesses are used.

AI nudity apps exemplify such a transformation. These tools allow users to upload images of real people and generate artificial nude versions, frequently marketed as entertainment or novelty applications.


The underlying technology relies on diffusion models trained on vast datasets of human bodies and sexual imagery, enabling increasingly realistic outputs. Unlike traditional pornography, the subject of the image may never have participated in any sexual act, yet the resulting content can be indistinguishable from authentic photography.

Such a transformation carries profound ethical implications. Instead of consuming representations of consensual adult sexuality, users often engage with simulated sexual imagery of real individuals who have not consented to being sexualised.

The distinction between fantasy and violation becomes blurred, particularly when such content is shared publicly or used for harassment.

AI nudity apps and the normalisation of non-consensual sexual content

The recent proliferation of AI nudity applications has intensified concerns around consent and harm. These apps are frequently marketed through euphemistic language, emphasising humour, experimentation, or artistic exploration instead of sexual exploitation.

Their core functionality, however, centres on digitally removing clothing from images of real people.

Regulators and advocacy groups increasingly argue that such tools normalise a culture in which consent is irrelevant. The ability to undress someone digitally, without personal involvement, reflects a broader pattern of technological power asymmetry, where the subject of the image lacks meaningful control over how personal likeness is used.

The ongoing Grok controversy illustrates how quickly the associated harms can scale when AI tools are embedded within major platforms. Reports that Grok can generate or modify images of women and children in sexualised ways have triggered backlash from governments, regulators, and victims’ rights organisations.


Even where companies claim that safeguards are in place, the repeated emergence of abusive outputs suggests systemic design failures rather than isolated misuse.

What distinguishes AI-generated sexual content from earlier forms of online abuse lies not only in realism but also in replicability. Once an image or model exists, reproduction can occur endlessly, with the content shared across jurisdictions and recontextualised in new forms. Victims often face a permanent loss of control over digital identity, with limited avenues for redress.

Gendered harm and child protection

The impact of AI-generated pornography remains unevenly distributed. Research and reporting consistently show that women and girls are disproportionately targeted by non-consensual synthetic sexual content.

Public figures, journalists, politicians, and private individuals alike have found themselves subjected to sexualised deepfakes designed to humiliate, intimidate, or silence them.


Children face even greater risk. AI tools capable of generating nudified or sexualised images of minors raise alarm across legal and ethical frameworks. Even where no real child experiences physical abuse during content creation, the resulting imagery may still constitute child sexual abuse material under many legal definitions.

The existence of such content contributes to harmful sexualisation and may fuel exploitative behaviour. AI complicates traditional child protection frameworks because the abuse occurs at the level of representation, not physical contact.

Legal systems built around evidentiary standards tied to real-world acts struggle to categorise synthetic material, particularly where perpetrators argue that no real person suffered harm during production.

Regulators increasingly reject such reasoning, recognising that harm arises through exposure, distribution, and psychological impact rather than physical contact alone.

Platform responsibility and the limits of self-regulation

Technology companies have historically relied on self-regulation to address harmful content. In the context of AI-generated pornography, such an approach has demonstrated clear limitations.

Platform policies banning non-consensual sexual content often lag behind technological capabilities, while enforcement remains inconsistent and opaque.

The Grok case highlights these challenges. Even where companies announce restrictions or safeguards, questions remain regarding enforcement, detection accuracy, and accountability.

AI systems struggle to reliably determine whether an image depicts a real person, whether consent exists, or whether local laws apply. Technical uncertainty frequently serves as justification for delayed action.

Commercial incentives further complicate moderation efforts. AI image tools drive user engagement, subscriptions, and publicity. Restricting capabilities may conflict with business objectives, particularly in competitive markets.

As a result, companies tend to act only after public backlash or regulatory intervention, instead of proactively addressing foreseeable harm.

Such patterns have contributed to growing calls for legally enforceable obligations rather than voluntary guidelines. Regulators increasingly argue that platforms deploying generative AI systems should bear responsibility for foreseeable misuse, particularly where sexual harm is involved.

Legal responses and the emergence of targeted legislation

Governments worldwide are beginning to address AI-generated pornography through a combination of existing laws and new legislative initiatives. The Take It Down Act represents one of the most prominent attempts to directly confront non-consensual intimate imagery, including AI-generated content.

The Act strengthens platforms’ obligations to remove intimate images shared without consent, regardless of whether the content is authentic or synthetic. Victims’ rights to request takedowns are expanded, while procedural barriers that previously left individuals navigating complex reporting systems are reduced.

Crucially, the law recognises that harm does not depend on image authenticity, but on the impact experienced by the individual depicted.

Within the EU, debates around AI nudity apps intersect with the AI Act and the Digital Services Act (DSA). While the AI Act categorises certain uses of AI as prohibited or high-risk, lawmakers continue to question whether nudity applications fall clearly within existing bans.


Calls to explicitly prohibit AI-powered nudity tools reflect concern that legal ambiguity creates enforcement gaps.

Other jurisdictions, including Australia, the UK, and parts of Southeast Asia, are exploring regulatory approaches combining platform obligations, criminal penalties, and child protection frameworks.

Such efforts signal a growing international consensus that AI-generated sexual abuse requires specific legal recognition rather than fragmented treatment.

Enforcement challenges and jurisdictional fragmentation

Despite legislative progress, enforcement remains a significant challenge. AI-generated pornography operates inherently across borders. Applications may be developed in one country, hosted in another, and used globally. Content can be shared instantly across platforms, subject to different legal regimes.

Jurisdictional fragmentation complicates takedown requests and criminal investigations. Victims often face complex reporting systems, language barriers, and inconsistent legal standards. Even where a platform complies with local law in one jurisdiction, identical material may remain accessible elsewhere.

Technical enforcement presents additional difficulties. Automated detection systems struggle to distinguish consensual adult content from non-consensual synthetic imagery. Over-reliance on automation risks false positives and censorship, while under-enforcement leaves victims unprotected.

Balancing accuracy, privacy, and freedom of expression remains unresolved.

Broader societal implications

Beyond legal and technical concerns, AI-generated pornography raises deeper questions about sexuality, power, and digital identity.

The ability to fabricate sexual representations of others undermines traditional understandings of bodily autonomy and consent. Sexual imagery becomes detached from lived experience, transformed into manipulable data.

Such shifts risk normalising the perception of individuals as visual assets rather than autonomous subjects. When sexual access can be simulated without consent, the social meaning of consent itself may weaken.

Critics argue that such technologies reinforce misogynistic and exploitative norms, particularly where women’s bodies are treated as endlessly modifiable digital material.


At the same time, defenders of generative AI warn of moral panic and excessive regulation. Arguments persist that not all AI-generated sexual content is harmful, particularly where fictional or consenting adult representations are involved.

The central challenge lies in distinguishing legitimate creative expression from abuse without enabling exploitative practices.

In conclusion, AI has fundamentally altered the landscape of pornography, transforming sexual representation into a synthetic, scalable, and increasingly detached process.

AI nudity apps and controversies surrounding AI tools demonstrate how existing social norms and legal frameworks remain poorly equipped to address non-consensual synthetic sexual content.

Global responses indicate a growing recognition that AI-generated pornography constitutes a distinct category of digital harm. Regulation alone, however, will not resolve the issue.

Effective responses require legal clarity, platform accountability, technical safeguards, and cultural change, supported by education.

As AI systems become more powerful and accessible, societies must confront difficult questions about consent, identity, and responsibility in the digital age.

The challenge lies not merely in restricting technology, but in defining ethical boundaries that protect our human dignity while preserving legitimate innovation.

In the months ahead, decisions taken by governments, platforms, and communities will shape the future relationship between AI and human autonomy.
