Thousands of writers have joined a symbolic protest against AI companies by publishing a book that contains no traditional content.
The work, titled “Don’t Steal This Book,” lists only the names of roughly 10,000 contributors who oppose the use of their writing to train AI systems without their permission.
The initiative was organised by composer and campaigner Ed Newton-Rex, and the book was distributed during the London Book Fair. Contributors include prominent authors such as Kazuo Ishiguro, Philippa Gregory and Richard Osman, along with thousands of other writers and creative professionals.
Campaigners argue that generative AI systems are trained on vast collections of copyrighted material gathered from the internet without authorisation or compensation.
According to organisers, such practices allow AI tools to compete with the creators whose works were used to develop them.
The protest arrives as the UK Government prepares an economic assessment of potential copyright reforms related to AI. Proposals under discussion include allowing AI developers to use copyrighted material unless rights holders explicitly opt out.
Many writers and artists oppose that approach and demand stronger copyright protections. In parallel, the publishing sector is preparing a licensing initiative through Publishers’ Licensing Services to provide AI developers with legal access to books while ensuring authors receive compensation.
The dispute reflects a growing global debate over how copyright law should apply to generative AI systems that rely on massive datasets to develop chatbots and other digital tools.
YouTube is expanding its likeness-detection technology designed to identify AI-generated deepfakes, extending access to a pilot group of government officials, political candidates, and journalists.
The tool allows participants to detect unauthorised AI-generated videos that simulate their faces and request removal if the content violates YouTube policies. The system builds on technology launched last year for around four million creators in the YouTube Partner Program.
Similar to YouTube’s Content ID system, which detects copyrighted material in uploaded videos, the likeness detection feature scans for AI-generated faces created with deepfake tools. Such technologies are increasingly used to spread misinformation or manipulate public perception by making prominent figures appear to say or do things they never did.
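YouTube has not disclosed how the matching works, but likeness detection of this kind is typically described in terms of face embeddings: a verified reference image is mapped to a vector, and faces extracted from uploaded videos are compared against it. The sketch below is a minimal illustration of that general idea, not YouTube’s implementation; the `flag_possible_likeness` helper, the 512-dimensional embeddings, and the 0.85 threshold are all illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_possible_likeness(reference: np.ndarray,
                           candidates: list[np.ndarray],
                           threshold: float = 0.85) -> list[int]:
    """Return indices of candidate embeddings that closely match a
    participant's verified reference embedding."""
    return [i for i, emb in enumerate(candidates)
            if cosine_similarity(reference, emb) >= threshold]

# Illustrative usage: real embeddings would come from a face-recognition
# model applied to video frames; random vectors stand in here.
rng = np.random.default_rng(0)
ref = rng.normal(size=512)
frames = [ref + rng.normal(scale=0.1, size=512),  # near-duplicate face
          rng.normal(size=512)]                   # unrelated face
print(flag_possible_likeness(ref, frames))        # -> [0]
```

In a real system the hard problems sit around this comparison, not inside it: extracting faces at scale, choosing thresholds that avoid false accusations, and distinguishing permitted uses such as parody.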
According to YouTube, the pilot programme aims to balance free expression with safeguards against AI impersonation, particularly in sensitive civic contexts.
‘This expansion is really about the integrity of the public conversation,’ said Leslie Miller, YouTube’s vice president of Government Affairs and Public Policy. ‘We know that the risks of AI impersonation are particularly high for those in the civic space. But while we are providing this new shield, we’re also being careful about how we use it.’
Removal requests will be assessed individually under YouTube’s privacy policy rules to determine whether the content constitutes parody or political critique, which remain protected forms of expression. Participants must verify their identity by uploading a selfie and a government-issued ID before accessing the tool. Once verified, they can review detected matches and submit removal requests for content they believe violates policy.
YouTube also said it supports the proposed NO FAKES Act in the United States, which aims to regulate the unauthorised use of an individual’s voice or visual likeness in AI-generated media. AI-generated videos on the platform are already labelled, though label placement varies depending on the topic’s sensitivity.
‘There’s a lot of content that’s produced with AI, but that distinction’s actually not material to the content itself,’ said Amjad Hanif, YouTube’s vice president of Creator Products. The company said it plans to expand the technology over time to detect AI-generated voices and other intellectual property.
Researchers and policymakers are raising concerns about how new technologies may put women at risk online, despite existing EU rules designed to ensure safer digital spaces.
AI-powered tools and smart devices have been linked to incidents of harassment and the creation of non-consensual sexualised imagery, highlighting gaps in enforcement and compliance.
Investigations into tools such as Elon Musk’s Grok AI and Meta’s Ray-Ban smart glasses have drawn attention to how digital platforms and wearable technologies can be misused, even where legal frameworks like the Digital Services Act (DSA) are in place.
Experts emphasise that while the EU’s rules offer a foundation for regulating online content, significant challenges remain. Advocates and lawmakers say enforcement gaps allow harmful AI features, such as so-called nudification tools, to persist.
Commissioners have stressed ongoing cooperation with tech companies and pointed to upcoming guidelines that would prioritise content flagged by independent organisations, as part of efforts to address gender-based cyber violence.
Authorities are also monitoring new technologies closely. In the case of wearable devices, regulators are considering how users and bystanders are informed about recording features.
Ongoing discussions aim to strengthen compliance under existing legislation and ensure that digital spaces become safer and more accountable for all users.
Courts across Europe are examining how copyright law applies to AI systems trained on large datasets, with judges reviewing whether existing rules allow developers to use copyrighted books, music and journalism without permission.
One closely watched dispute involves a publisher challenging Google over summaries produced by its Gemini chatbot. The case, before the EU court in Luxembourg, could test how press publishers’ rights apply to AI-generated outputs.
Legal experts warn that the Luxembourg ruling may not resolve wider questions about AI training data, as many European disputes centre on the EU copyright directive and its text and data mining exception.
Additional lawsuits, including those involving music rights group GEMA and OpenAI, are expected to continue for years. Policymakers across Europe are also considering updates to copyright rules as the technology expands.
Concerns about AI copyright are rising after a House of Lords committee report. The report warns that unlicensed use of creative works for AI training threatens the UK’s creative industries.
Large AI systems rely on vast amounts of human-created content, often used without clear consent or compensation. Such developments have intensified debates around AI copyright protections.
The committee argues that the key issue is not the copyright framework itself, but the widespread unlicensed use of protected works and AI developers’ lack of transparency.
This opacity prevents rightsholders from knowing whether their works are being used, and from enforcing their rights, raising critical questions about how AI copyright rules apply in practice.
The report urges the government to reject the proposed commercial text and data mining exception, introduce stronger protections against unauthorised digital replicas, and safeguard against AI outputs that imitate a creator’s style, voice, or identity.
The committee also calls for transparency over AI training data, backs the development of a licensing market, and urges standards for rights reservation, data provenance and the labelling of AI-generated content, alongside support for UK-governed AI models within a robust AI copyright framework.
Baroness Keeley, committee chair, warned: ‘Our creative industries face a clear and present danger from uncredited and unremunerated use of copyrighted material to train AI models.
Photographers, musicians, authors, and publishers are seeing their work fed into AI models, which then produce imitations that take employment and earning opportunities from original creators.’
Keeley added: ‘AI may contribute to our future economic growth, but the UK creative industries create jobs and economic value now.
In 2023, the creative industries delivered £124 billion of economic value to the UK, and this is set to grow to £141 billion by 2030. Watering down the protections in our existing copyright regime to lure the biggest US tech companies is a race to the bottom that does not serve UK interests. We should not sacrifice our creative industries for the AI jam tomorrow.’
When Hayao Miyazaki dismissed early AI-generated animation as ‘an insult to life itself’ in 2016, the technology felt distant from mainstream creative work. Less than a decade later, generative AI tools produce images and text in seconds, reviving debate over authorship, copyright, and artistic identity.
In Japan, debate reflects both anxiety and ambition. Illustrators question the use of their work in training data, while policymakers and corporations see AI as vital to easing a projected labour shortfall by 2040. Legal provisions allowing data use for analysis have intensified calls for safeguards.
Public sentiment in Japan remains broadly favourable toward AI adoption. Surveys indicate relatively high levels of trust, with many viewing AI as part of long-term structural adjustment rather than an immediate threat. Economic expectations often outweigh concerns about disruption.
Workplace implementation, however, remains limited. OECD research shows only a small share of employees actively use AI tools, citing skills shortages and cautious corporate culture. Analysts describe a paradox: AI could ease labour pressures, yet adoption is constrained by limited expertise.
Creative professionals report more immediate effects. Surveys highlight income pressures and uncertainty among illustrators and freelancers. As deployment expands, Japan faces the task of balancing economic necessity with cultural preservation and fair access to emerging technologies.
AI is reshaping cultural debates, with music emerging as a key area of concern. Discussions reflect broader tensions about AI’s impact on creativity, labour, and ownership.
In the music industry, AI-generated tracks and automated playlists have raised fears about competition and income loss. Artists are concerned that their catalogues are being data-mined to train systems without consent.
Copyright and compensation are central to the debate. Composer Ed Newton-Rex organised the protest album Is This What We Want?, supported by artists including James MacMillan and Kate Bush, to oppose the unauthorised use of music for AI training.
Advocates argue that lawmakers can still introduce safeguards to prevent unregulated exploitation. The discussion focuses on whether governments will establish clear rules or allow broad data harvesting to continue.
Some observers highlight AI’s potential as a creative tool. Like previous music technologies, it could help composers explore new sounds rather than replace human musicians.
Ultimately, music is described as rooted in human emotion, interpretation, and shared experience. These qualities are presented as central to musical culture and difficult for AI to replicate.
Microsoft and OpenAI have reaffirmed their long-standing collaboration after new funding and partnerships raised speculation about their relationship.
Both firms stressed that recent announcements leave their original agreements intact, preserving a framework built on technical integration, trust and shared ambitions for AI development.
Microsoft’s exclusive licence to OpenAI’s intellectual property remains untouched, as does its position as the sole cloud provider for stateless APIs powering OpenAI models.
These APIs can be accessed through either company. Yet all such calls, including those arising from third-party partnerships such as OpenAI’s work with Amazon, continue to run on Azure rather than on alternative clouds. OpenAI’s own products, including Frontier, also stay hosted on Azure.
Revenue-sharing arrangements are unchanged, alongside the contractual definition and evaluation process for artificial general intelligence.
OpenAI retains the freedom to secure additional compute capacity elsewhere, supported by large-scale initiatives such as the Stargate project.
Even with broader collaborations emerging across the industry, both firms present their alliance as central to advancing responsible AI and expanding access to powerful tools worldwide.
Speaking at the Bengaluru GAFX Conference, a major event for the Animation, Visual Effects, Gaming, Comics and Extended Reality (AVGC-XR) sector, Karnataka Chief Minister Siddaramaiah positioned AI as a tool to augment artistic work rather than replace human creators.
He highlighted the importance of ethical AI adoption, respect for intellectual property, data privacy, and ensuring fair compensation for artists and creative professionals as the sector grows.
Siddaramaiah underscored that the ‘soul of storytelling’ and human emotion cannot be fully replicated by algorithms, stressing that technology should amplify human potential without erasing it.
He also urged industry leaders to invest in original content, educational institutions to modernise curricula, and global partners to collaborate with Karnataka’s burgeoning creative ecosystem.
The remarks came amid efforts to develop the AVGC-XR sector through policy support, infrastructure, skill development, and the creation of digital creative clusters beyond Bengaluru in cities like Mysuru, Mangaluru and Hubballi-Dharwad.
Siddaramaiah framed this approach as both an economic and cultural opportunity that must be inclusive and ethically grounded.
In December 2025, the Macquarie Dictionary, Merriam-Webster, and the American Dialect Society named ‘slop’ as the Word of the Year, reflecting a widespread reaction to AI-generated content online, often referred to as ‘AI slop.’ By choosing ‘slop’, typically associated with unappetising animal feed, they captured unease about the digital clutter created by AI tools.
As LLMs and AI tools became accessible to more people, many saw them as opportunities for profit, whether by creating artificial content for marketing or entertainment or by manipulating social media algorithms. However, despite advances in video and image generation, there is a growing gap between perceived quality and actual detection: many overestimate how easily AI content evades notice, fuelling scepticism about its value online.
As generative AI systems expand, the debate goes beyond digital clutter to deeper concerns about trust, market incentives, and regulatory resilience. How will societies manage the social, economic, and governance impacts of an information ecosystem increasingly shaped by automated abundance? Put simply, is AI slop more than a digital nuisance, or are we needlessly worrying about a passing trend that will eventually fade?
The social aspect of AI slop’s influence
The most visible effects of AI slop emerge on large social media platforms such as YouTube, TikTok, and Instagram. Users frequently encounter AI-generated images and videos that appropriate celebrity likenesses without consent, depict fabricated events, or present sensational and misleading scenarios. Comment sections often become informal verification spaces, where some users identify visual inconsistencies and warn others, while many remain uncertain about the content’s authenticity.
However, no platform has suffered the AI slop effect as much as Facebook, and a glance at its demographics helps explain why. According to multiple studies, Facebook’s largest cohort is adults aged 25-34, but users over 55 make up nearly 24 percent of the total. While seniors are not (yet) the majority, younger generations have been steadily migrating to platforms such as TikTok, Instagram, and X, leaving the world’s most popular social network increasingly shaped by its older users.
Due to factors such as cognitive decline, positivity bias, and digital (il)literacy, older social media users are more likely to fall for scams and fraud. These conditions make Facebook an ideal place for spreading low-quality AI slop and false information: scammers use AI tools to fabricate images and videos of invented crises and solicit donations for causes that do not exist.
The lack of regulation on Meta’s side is the most glaring sore spot, evidenced by the company’s pushback against the EU’s Digital Services Act (DSA) and Digital Markets Act (DMA), which it views as ‘overreaching’ and stifling innovation. The math is simple: content generates engagement, and engagement generates revenue for Facebook and Meta’s other platforms. Whether that content is authentic and high-quality or low-effort AI slop, the numbers don’t care.
The economics behind AI slop
At its core, AI content is not just a social media phenomenon, but an economic one as well. GenAI tools drastically reduce the cost and time required to produce all types of content, and when production approaches zero marginal cost, the incentive to churn out AI slop seems too good to ignore. Even minimal engagement can generate positive returns through advertising, affiliate marketing, or platform monetisation schemes.
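As a rough, back-of-the-envelope illustration of how small that marginal cost can be, the sketch below estimates the cost of one generated article from per-token prices. The rates and token counts are illustrative assumptions, not any provider’s actual pricing.

```python
# Back-of-the-envelope cost of one AI-generated article.
# Token prices are illustrative assumptions (USD per million tokens),
# not any provider's actual rates.
PRICE_IN_PER_M = 0.50    # prompt (input) tokens
PRICE_OUT_PER_M = 1.50   # completion (output) tokens

def article_cost(prompt_tokens: int, output_tokens: int) -> float:
    return (prompt_tokens * PRICE_IN_PER_M
            + output_tokens * PRICE_OUT_PER_M) / 1_000_000

# A ~1,000-word article is roughly 1,300 output tokens.
cost = article_cost(prompt_tokens=200, output_tokens=1300)
print(f"~${cost:.4f} per article")                   # ~$0.0021
print(f"~${cost * 10_000:.2f} for 10,000 articles")  # ~$20.50
```

At prices in that ballpark, a single dollar buys hundreds of articles, which is why even trivial ad or affiliate revenue per page can make the economics work.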
AI content production goes beyond exploiting social media algorithms and monetisation policies. SEO can now be automated at scale, generating thousands of keyword-optimised articles within hours, while affiliate link farming lets creators monetise product recommendations with minimal editorial input.
On video platforms like TikTok and YouTube, synthetic voice-overs and AI-generated visuals are on full display, riding trending topics and using AI-generated thumbnails to attract views. With AI tools, creators can publish topical content within minutes, jumping on the hottest subjects and driving clicks faster than any authentic production method allows.
To rub salt in the wound, many YouTube creators feel they are victims of the platform’s double standards in enforcing its strict community guidelines. Even the largest channels are regularly flagged for breaches ranging from copyright claims to depictions of dangerous or illegal activities and harmful speech. AI slop videos, on the other hand, seem to fly under YouTube’s radar, breeding further resentment towards AI-generated content.
Businesses that market their services online are also finding generative AI to be the way to go, since most users make little effort to distinguish authentic content and attach little importance to the difference. Instead of paying voice-over artists and illustrators, it is far cheaper to generate a desired post in minutes, adding fuel to an already raging fire. Some might call it AI slop, but again, the numbers are what truly matter.
The regulatory challenge of AI slop
AI slop is not only a social and economic issue, but also a regulatory one. The problem is not a single AI-generated post that promotes harmful behaviour or misleading information, but the sheer scale of synthetic content entering digital platforms. When large volumes of low-value or deceptive material circulate on the web, they can distort information ecosystems and make moderation a tough challenge. Such a predicament shifts the focus from individual violations to broader systemic effects.
In the EU, the DSA requires very large online platforms to assess and mitigate the systemic risks linked to their services. While the DSA does not specifically target AI slop, its provisions on transparency, content recommendation algorithms, and risk mitigation could apply if AI content significantly affects public discourse or enables fraud. The challenge lies in defining the point at which sheer volume overwhelms quality control and turns isolated misuse into a systemic issue.
Debates around labelling and transparency also play a large role. Policymakers and platforms have explored ways to flag AI-generated content through disclosures or watermarking; for example, OpenAI’s Sora generates videos with a faint Sora watermark, although it is barely visible to an uninitiated user. Yet labelling alone may not address deeper concerns if recommendation systems continue to prioritise engagement above all else: the issue is not only whether users know that content is AI-generated, but how such content is ranked, amplified, and monetised.
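To make the fragility of simple labels concrete, here is a toy sketch of least-significant-bit (LSB) marking, one of the most basic watermarking schemes; it bears no relation to Sora’s actual watermark, and the final lines show how ordinary re-encoding noise erases the mark.

```python
import numpy as np

def embed_lsb_mark(pixels: np.ndarray) -> np.ndarray:
    """Toy watermark: set every pixel's least significant bit to 1."""
    return (pixels & 0xFE) | 1

def mark_present(pixels: np.ndarray) -> bool:
    """The toy mark 'survives' only if every LSB is still 1."""
    return bool(np.all(pixels & 1))

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
marked = embed_lsb_mark(img)
print(mark_present(marked))      # True: label is present

# Simulate lossy re-encoding with tiny pixel noise (+/- 2 levels).
noise = np.random.randint(-2, 3, marked.shape)
reencoded = np.clip(marked.astype(int) + noise, 0, 255).astype(np.uint8)
print(mark_present(reencoded))   # Almost certainly False: label gone
```

Production watermarks are far more robust than this toy scheme, but the underlying tension is the same: any label can be degraded or stripped, which is why labelling tends to be discussed alongside provenance metadata and platform-level ranking rather than as a fix on its own.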
More broadly, AI slop highlights the limits of traditional content moderation. As generative tools make production faster and cheaper, enforcement systems may struggle to keep pace. Regulation, therefore, faces a structural question: can existing digital governance frameworks preserve information quality in an environment where automated content production continues to grow?
Building resilience in the era of AI slop
Humans are considered the most adaptable species on Earth, and for good reason. While AI slop has exposed weaknesses in platform design, monetisation models, and moderation systems, it may also serve as a catalyst for adaptation. Unless regulatory bodies unite under one banner and agree to ban AI content for good, it is safe to say that synthetic content is here to stay. However, sooner or later, systemic regulations will evolve to address this new AI craze and mitigate its negative effects.
The AI slop bubble is bound to burst at some point, as online users come to favour meticulously crafted content, whether authentic or artificial, over low-quality output. Incentives may evolve along with content saturation, shifting the focus from quantity to quality. Advertisers and brands often prioritise credibility and brand safety, which could encourage platforms to refine their ranking systems to reward originality, reliability, and verified creators.
Transparency requirements, systemic risk assessments, and discussions around provenance disclosure mechanisms imply that governance is responding to the realities of generative AI. Instead of marking the deterioration of digital spaces, AI slop may represent a transitional phase in which platforms, policymakers, and users are challenged to adjust their expectations and norms accordingly.
Finally, the long-term outcome will depend entirely on whether innovation, market incentives, and governance structures can converge around information quality and resilience. In that sense, AI slop may ultimately function less as a permanent state of affairs and more as a stress test to separate the wheat from the chaff. In the upcoming struggle between user experience and generative AI tools, the former will have the final say, which is an encouraging thought.