Snap faces new AI training lawsuit in California

A group of YouTubers has filed a copyright lawsuit against Snap in the US, alleging their videos were used to train AI systems without permission. The case was lodged in a federal court in California and targets AI features used within Snapchat.

The creators claim that Snap relied on large-scale video-language datasets originally intended for academic research. According to the filing, accessing the material required bypassing YouTube safeguards and licence restrictions on commercial use.

The lawsuit seeks statutory damages and a permanent injunction to block further use of the content. The case is led by the creators behind the h3h3 channel, alongside two smaller US-based golf channels.

The action adds Snap to a growing list of tech companies facing similar claims in the US. Courts in California and elsewhere continue to weigh how copyright law applies to AI training practices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK government makes bold move with AI tutoring trials for 450,000 pupils

The government plans to trial AI tutoring tools in secondary schools, with nationwide availability targeted for the end of 2027. The tools will be developed through a government-led tender, bringing together teachers, AI labs, and technology companies to co-create solutions aligned with classroom needs.

The initiative aims to provide personalised, one-to-one-style learning support, adapting to individual pupils’ needs and helping them catch up where they struggle. A central objective is to reduce educational inequality, with up to 450,000 disadvantaged pupils in years 9–11 potentially benefiting each year, particularly those eligible for free school meals.

AI tutoring tools are intended to complement, not replace, face-to-face teaching. Teachers will play a key role in co-designing, testing, and refining the tools, ensuring they support high-quality teaching, provide targeted help to struggling pupils, and stretch higher-performing students.

Safety and quality are positioned as non-negotiable. The tools will be rigorously tested to ensure they are safe, reliable, and aligned with the National Curriculum, and clear benchmarks will be developed for use in schools. Trials beginning later this year will generate evidence to guide wider rollout, alongside practical training for teachers and school staff to support confident and responsible use of AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Zurich researchers link AI with spirituality studies

Researchers at the University of Zurich have received a Postdoc Team Award for SpiritRAG, an AI system designed to analyse religion and spirituality in United Nations documents. The interdisciplinary project brings together expertise in computer science, linguistics, education and spiritual care.

SpiritRAG connects large language models with more than 7,500 UN texts, allowing users to ask context-sensitive questions grounded in the original sources. The system addresses challenges where meaning varies across cultures, history and political settings.

The Zurich-based team presented SpiritRAG at EMNLP 2025 in Suzhou, China, and later at the AI+X Summit in Zurich. Interest from organisations outside Zurich highlights demand for transparent AI tools supporting research and policy analysis.

Designed as open source infrastructure, SpiritRAG allows deployment with different datasets while using limited resources. Researchers in Zurich say the approach supports responsible AI use in complex domains where accuracy and context remain critical.
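
For readers curious about the underlying pattern, the sketch below shows a minimal retrieval-augmented generation (RAG) pipeline of the kind SpiritRAG is described as using: documents are embedded once, the passages most relevant to a question are retrieved, and a language model is asked to answer only from those sources. It is a generic illustration rather than the project's actual code; the toy corpus, the all-MiniLM-L6-v2 embedding model and the call_llm stub are assumptions made for the example.

```python
# Minimal retrieval-augmented generation (RAG) sketch; a generic illustration,
# not SpiritRAG's actual code. Assumes the corpus is available as plain-text
# strings and that sentence-transformers and numpy are installed.
from sentence_transformers import SentenceTransformer
import numpy as np

documents = [
    "Placeholder UN resolution text on freedom of religion or belief ...",
    "Placeholder report on spiritual care in humanitarian settings ...",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # any embedding model works here
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(question: str, k: int = 3) -> list[str]:
    """Return the k passages most similar to the question (cosine similarity)."""
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q                          # vectors are normalised, so dot = cosine
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

def call_llm(prompt: str) -> str:
    """Stub standing in for whichever LLM backend is actually used."""
    return "[answer grounded in the retrieved passages]"

def answer(question: str) -> str:
    """Build a source-grounded prompt and hand it to the language model."""
    context = "\n---\n".join(retrieve(question))
    prompt = (f"Answer using only the UN sources below.\n\n{context}\n\n"
              f"Question: {question}")
    return call_llm(prompt)

print(answer("How do UN documents frame spirituality in health care?"))
```

Swapping in a different corpus or embedding model changes nothing structurally, which is what makes this kind of pipeline straightforward to redeploy with other datasets on limited resources.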

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Data privacy shifts from breaches to authorised surveillance

Data Privacy Week has returned at a time when personal information is increasingly collected by default rather than through breaches. Campaigns urge awareness, yet privacy is being reshaped by lawful, large-scale data gathering driven by corporate and government systems.

In the US, companies now collect, retain and combine data with AI tools under legal authority, often without meaningful consent. Platforms such as TikTok illustrate how vast datasets are harvested regardless of ownership, shifting debates towards who controls data rather than how much is taken.

US policy responses have focused on national security rather than limiting surveillance itself. Pressure on TikTok to separate from Chinese ownership left data collection intact, while border authorities in the US are seeking broader access to travellers’ digital and biometric information.

Across the US technology sector, privacy increasingly centres on agency rather than secrecy. Data Privacy Week highlights growing concern that once information is gathered, control is lost, leaving accountability lagging behind capability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AfricAI positions Africa for large-scale adoption of intelligent machines

Through an exclusive rights agreement with Micropolis Robotics, AfricAI positions itself as the gateway to autonomous systems in Africa. The partnership deploys advanced robotics into industry, security, logistics, and regional infrastructure, establishing a single entry point for high-tech automation and sustainable growth.

Micropolis will not pursue direct sales or other distributors in Africa, leaving the pan-African AI and tech platform responsible for localisation, regulation, and market rollout across the continent.

Company leaders described the partnership as a shift from software-focused AI to intelligent machines in real-world environments. According to Micropolis CEO Fareed Aljawhari, AfricAI is becoming the exclusive route for the company’s robotics expansion across the continent.

The agreement allows AfricAI to integrate autonomous robotics with its broader AI infrastructure stack, supporting security systems, smart cities, automated logistics, and industrial operations adapted to local conditions. Initial deployments will begin in security and infrastructure.

Analysts say the deal positions AfricAI as one of Africa’s first large-scale robotics gatekeepers, potentially accelerating industrial transformation through autonomous technologies. Both firms highlighted commitments to responsible innovation and sustainable technology ecosystems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Facial recognition expansion anchors UK policing reforms driven by AI

UK authorities have unveiled a major policing reform programme that places AI and facial recognition at the centre of future law enforcement strategy. The plans include expanding the use of Live Facial Recognition and creating a national hub to scale AI tools across police forces.

The Home Office will fund 40 new facial recognition vans for town centres across England and Wales, significantly increasing real-time biometric surveillance capacity. Officials say the rollout responds to crime that increasingly involves digital activity.

The UK government will also invest £115 million over three years in a National Centre for AI in Policing, known as Police.AI. The centre will focus on speeding up investigations, reducing paperwork and improving crime detection.

New governance measures will regulate police use of facial recognition and introduce a public register of deployed AI systems. National data standards aim to strengthen accountability and coordination across forces.

Structural reforms include creating a National Police Service to tackle serious crime and terrorism. Predictive analytics, deepfake detection and digital forensics will play a larger operational role.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA invests $2 billion as CoreWeave expands AI factory network

CoreWeave has deepened its long-running partnership with NVIDIA to accelerate AI infrastructure deployment, including ambitious plans for multi-gigawatt AI factory capacity by 2030.

As part of the agreement, NVIDIA is investing $2 billion in CoreWeave through the purchase of Class A common stock, signalling strong confidence in CoreWeave’s growth strategy and AI-focused cloud platform.

Both companies aim to deepen alignment across infrastructure, software and platform development, with CoreWeave building and operating AI factories using NVIDIA’s accelerated computing technologies and early access to upcoming architectures such as Rubin, Vera CPUs and BlueField systems.

The collaboration will also test and integrate CoreWeave’s AI-native software and reference designs into NVIDIA’s broader cloud and enterprise ecosystem, while NVIDIA supports faster site development through financial backing for land and power procurement.

Executives from both firms described the expansion as a response to surging global demand for AI computing, positioning large-scale AI factories as the backbone of future industrial AI deployment.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Non-consensual deepfakes, consent, and power in synthetic media

AI has reshaped almost every domain of digital life, from creativity and productivity to surveillance and governance.

One of the most controversial and ethically fraught areas of AI deployment involves pornography, particularly where generative systems are used to create, manipulate, or simulate sexual content involving real individuals without consent.

What was once a marginal issue confined to niche online forums has evolved into a global policy concern, driven by the rapid spread of AI-powered nudity applications, deepfake pornography, and image-editing tools integrated into mainstream platforms.

Recent controversies surrounding AI-powered nudity apps and the image-generation capabilities of Elon Musk’s Grok have accelerated public debate and regulatory scrutiny.

Governments, regulators, and civil society organisations increasingly treat AI-generated sexual content not as a matter of taste or morality, but as an issue of digital harm, gender-based violence, child safety, and fundamental rights.

Legislative initiatives such as the US Take It Down Act illustrate a broader shift toward recognising non-consensual synthetic sexual content as a distinct and urgent category of abuse.

Our analysis examines how AI has transformed pornography, why AI-generated nudity represents a qualitative break from earlier forms of online sexual content, and how governments worldwide are attempting to respond.

It also explores the limits of current legal frameworks and the broader societal implications of delegating sexual representation to machines.

From online pornography to synthetic sexuality

Pornography has long been intertwined with technological change. From photography and film to VHS tapes, DVDs, and streaming platforms, sexual content has often been among the earliest adopters of new media technologies.

The transition from traditional pornography to AI-generated sexual content, however, marks a deeper shift than earlier format changes.

Conventional online pornography relies on human performers, production processes, and contractual relationships, even where exploitation or coercion exists. AI-generated pornography, instead of depicting real sexual acts, simulates them using algorithmic inference.

Faces, bodies, voices, and identities can be reconstructed or fabricated at scale, often without the knowledge or consent of the individuals whose likenesses are used.

AI nudity apps exemplify such a transformation. These tools allow users to upload images of real people and generate artificial nude versions, frequently marketed as entertainment or novelty applications.

The underlying technology relies on diffusion models trained on vast datasets of human bodies and sexual imagery, enabling increasingly realistic outputs. Unlike traditional pornography, the subject of the image may never have participated in any sexual act, yet the resulting content can be indistinguishable from authentic photography.

This transformation carries profound ethical implications. Instead of consuming representations of consensual adult sexuality, users often engage with simulated sexual depictions of real individuals who have not consented to being sexualised.

The distinction between fantasy and violation becomes blurred, particularly when such content is shared publicly or used for harassment.

AI nudity apps and the normalisation of non-consensual sexual content

The recent proliferation of AI nudity applications has intensified concerns around consent and harm. These apps are frequently marketed through euphemistic language, emphasising humour, experimentation, or artistic exploration instead of sexual exploitation.

Their core functionality, however, centres on digitally removing clothing from images of real people.

Regulators and advocacy groups increasingly argue that such tools normalise a culture in which consent is irrelevant. The ability to undress someone digitally, without personal involvement, reflects a broader pattern of technological power asymmetry, where the subject of the image lacks meaningful control over how personal likeness is used.

The ongoing Grok controversy illustrates how quickly the associated harms can scale when AI tools are embedded within major platforms. Reports that Grok can generate or modify images of women and children in sexualised ways have triggered backlash from governments, regulators, and victims’ rights organisations.

Even where companies claim that safeguards are in place, the repeated emergence of abusive outputs suggests systemic design failures rather than isolated misuse.

What distinguishes AI-generated sexual content from earlier forms of online abuse lies not only in realism but also in replicability. Once an image or model exists, reproduction can occur endlessly, with the content shared across jurisdictions and recontextualised in new forms. Victims often face a permanent loss of control over digital identity, with limited avenues for redress.

Gendered harm and child protection

The impact of AI-generated pornography remains unevenly distributed. Research and reporting consistently show that women and girls are disproportionately targeted by non-consensual synthetic sexual content.

Public figures, journalists, politicians, and private individuals alike have found themselves subjected to sexualised deepfakes designed to humiliate, intimidate, or silence them.

Children face even greater risk. AI tools capable of generating nudified or sexualised images of minors raise alarm across legal and ethical frameworks. Even where no real child experiences physical abuse during content creation, the resulting imagery may still constitute child sexual abuse material under many legal definitions.

The existence of such content contributes to harmful sexualisation and may fuel exploitative behaviour. AI complicates traditional child protection frameworks because the abuse occurs at the level of representation, not physical contact.

Legal systems built around evidentiary standards tied to real-world acts struggle to categorise synthetic material, particularly where perpetrators argue that no real person suffered harm during production.

Regulators increasingly reject such reasoning, recognising that harm arises through exposure, distribution, and psychological impact rather than physical contact alone.

Platform responsibility and the limits of self-regulation

Technology companies have historically relied on self-regulation to address harmful content. In the context of AI-generated pornography, such an approach has demonstrated clear limitations.

Platform policies banning non-consensual sexual content often lag behind technological capabilities, while enforcement remains inconsistent and opaque.

The Grok case highlights these challenges. Even where companies announce restrictions or safeguards, questions remain regarding enforcement, detection accuracy, and accountability.

AI systems struggle to reliably determine whether an image depicts a real person, whether consent exists, or whether local laws apply. Technical uncertainty frequently serves as justification for delayed action.

Commercial incentives further complicate moderation efforts. AI image tools drive user engagement, subscriptions, and publicity. Restricting capabilities may conflict with business objectives, particularly in competitive markets.

As a result, companies tend to act only after public backlash or regulatory intervention, instead of proactively addressing foreseeable harm.

Such patterns have contributed to growing calls for legally enforceable obligations rather than voluntary guidelines. Regulators increasingly argue that platforms deploying generative AI systems should bear responsibility for foreseeable misuse, particularly where sexual harm is involved.

Legal responses and the emergence of targeted legislation

Governments worldwide are beginning to address AI-generated pornography through a combination of existing laws and new legislative initiatives. The Take It Down Act represents one of the most prominent attempts to directly confront non-consensual intimate imagery, including AI-generated content.

The Act strengthens platforms’ obligations to remove intimate images shared without consent, regardless of whether the content is authentic or synthetic. Victims’ rights to request takedowns are expanded, while procedural barriers that previously left individuals navigating complex reporting systems are reduced.

Crucially, the law recognises that harm does not depend on image authenticity, but on the impact experienced by the individual depicted.

Within the EU, debates around AI nudity apps intersect with the AI Act and the Digital Services Act (DSA). While the AI Act categorises certain uses of AI as prohibited or high-risk, lawmakers continue to question whether nudity applications fall clearly within existing bans.

Calls to explicitly prohibit AI-powered nudity tools reflect concern that legal ambiguity creates enforcement gaps.

Other jurisdictions, including Australia, the UK, and parts of Southeast Asia, are exploring regulatory approaches combining platform obligations, criminal penalties, and child protection frameworks.

Such efforts signal a growing international consensus that AI-generated sexual abuse requires specific legal recognition rather than fragmented treatment.

Enforcement challenges and jurisdictional fragmentation

Despite legislative progress, enforcement remains a significant challenge. AI-generated pornography operates inherently across borders. Applications may be developed in one country, hosted in another, and used globally. Content can be shared instantly across platforms, subject to different legal regimes.

Jurisdictional fragmentation complicates takedown requests and criminal investigations. Victims often face complex reporting systems, language barriers, and inconsistent legal standards. Even where a platform complies with local law in one jurisdiction, identical material may remain accessible elsewhere.

Technical enforcement presents additional difficulties. Automated detection systems struggle to distinguish consensual adult content from non-consensual synthetic imagery. Over-reliance on automation risks false positives and censorship, while under-enforcement leaves victims unprotected.

Balancing accuracy, privacy, and freedom of expression remains unresolved.

Broader societal implications

Beyond legal and technical concerns, AI-generated pornography raises deeper questions about sexuality, power, and digital identity.

The ability to fabricate sexual representations of others undermines traditional understandings of bodily autonomy and consent. Sexual imagery becomes detached from lived experience, transformed into manipulable data.

Such shifts risk normalising the perception of individuals as visual assets rather than autonomous subjects. When sexual access can be simulated without consent, the social meaning of consent itself may weaken.

Critics argue that such technologies reinforce misogynistic and exploitative norms, particularly where women’s bodies are treated as endlessly modifiable digital material.

At the same time, defenders of generative AI warn of moral panic and excessive regulation. Arguments persist that not all AI-generated sexual content is harmful, particularly where fictional or consenting adult representations are involved.

The central challenge lies in distinguishing legitimate creative expression from abuse without enabling exploitative practices.

In conclusion, AI has fundamentally altered the landscape of pornography, transforming sexual representation into a synthetic, scalable, and increasingly detached process.

AI nudity apps and controversies surrounding AI tools demonstrate how existing social norms and legal frameworks remain poorly equipped to address non-consensual synthetic sexual content.

Global responses indicate a growing recognition that AI-generated pornography constitutes a distinct category of digital harm. Regulation alone, however, will not resolve the issue.

Effective responses require legal clarity, platform accountability, technical safeguards, and cultural change, supported in particular by education.

As AI systems become more powerful and accessible, societies must confront difficult questions about consent, identity, and responsibility in the digital age.

The challenge lies not merely in restricting technology, but in defining ethical boundaries that protect human dignity while preserving legitimate innovation.

In the weeks and months ahead, decisions taken by governments, platforms, and communities will shape the future relationship between AI and human autonomy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The EU Commission opens DMA proceedings on Google interoperability and search data

The European Commission has opened two specification proceedings to spell out how Google should meet key obligations under the EU’s Digital Markets Act (DMA), focusing on Android’s AI-related features and access to Google Search data for competitors.

The first proceeding targets the DMA’s interoperability requirement for Android. In practical terms, Brussels wants to clarify how third-party AI services can gain free and effective access to the same Android hardware and software functionalities that power Google’s own AI offerings, including Gemini, so that rivals can compete on a more equal footing on mobile devices.

The second proceeding addresses Google’s obligation to provide rival search engines access to anonymised search data (such as ranking, query, click, and view data) on fair, reasonable, and non-discriminatory terms. The Commission is also considering whether AI chatbot providers should qualify for that access, an essential question as ‘search’ increasingly blurs with conversational AI.

These proceedings are designed to define how compliance should work rather than immediately sanction Google. The Commission is expected to wrap them up within six months, with draft measures and preliminary findings shared earlier in the process, and with scope for third-party feedback. A separate non-compliance track could still follow later, and DMA penalties for breaches can reach up to 10% of global turnover.

Google, for its part, says Android is ‘open by design’ and argues it is already licensing Search data, while warning that additional requirements, especially those it views as competitor-driven, could undermine user privacy, security, and innovation.

Why does it matter?

The EU is trying to prevent dominant platforms from turning control over operating systems and data into an ‘unfair advantage’ in the next wave of consumer tech, particularly as AI assistants become built into phones and as search data becomes fuel for competing discovery tools. The move also sits within a broader DMA enforcement push: the Commission has already opened DMA-related proceedings into Alphabet in other areas, signalling that Brussels sees gatekeeper compliance as an ongoing, hands-on exercise rather than a one-off checkbox.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Autonomous AI fails most tasks in virtual company experiment

Researchers at Carnegie Mellon University created a virtual company staffed solely by AI ‘employees’ trained on large language models from vendors including Anthropic, OpenAI, and Google, assigning them roles such as financial analyst and software engineer.

In this simulated work environment, the AI agents struggled to complete most tasks, with even the best-performing model only completing about a quarter of its assignments.

The experiment highlighted key weaknesses in current AI systems, including difficulty interpreting nuanced instructions, managing web navigation with pop-ups, and coordinating multi-step workflows without human intervention.

These gaps suggest that human judgement, adaptability and collaboration remain essential in real workplaces for the foreseeable future.
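
To make the setup concrete, the sketch below shows a generic agent loop of the kind such simulated-workplace experiments evaluate: the model is given a role and a task, repeatedly picks a tool action, observes the result, and either declares the task finished or runs out of steps. It is an illustrative sketch only; the call_model stub and the toy tools stand in for whatever model APIs and office software the researchers actually used.

```python
# Generic agent loop of the kind used in simulated-workplace benchmarks.
# Illustrative only: call_model is a stub for a real LLM API, and the
# "tools" are toy stand-ins for the software an agent would operate.

def call_model(role: str, task: str, history: list[str]) -> str:
    """Placeholder for a real LLM call: a real agent sends the role, the task
    and the transcript so far to a model and parses the chosen action."""
    if not history:
        return f"tool:search_docs {task}"   # first step: look something up
    return "finish: summary drafted"        # then claim the task is complete

TOOLS = {
    "search_docs": lambda query: f"3 documents matched '{query}'",
    "send_message": lambda text: f"message sent: {text}",
}

def run_agent(role: str, task: str, max_steps: int = 10) -> bool:
    """Run one agent on one task; returns True if it declares completion."""
    history: list[str] = []
    for _ in range(max_steps):
        action = call_model(role, task, history)
        if action.startswith("finish:"):
            return True                     # agent believes the task is done
        name, _, arg = action.partition(" ")
        name = name.removeprefix("tool:")
        observation = TOOLS.get(name, lambda a: "error: unknown tool")(arg)
        history.append(f"{action} -> {observation}")
    return False                            # step budget exhausted

completed = run_agent("financial analyst", "quarterly revenue figures")
print("task completed:", completed)
```

Even this toy loop hints at the failure modes the study reports: every step depends on the model correctly interpreting instructions and tool output, so small errors compound quickly across a multi-step workflow.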

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!