AI news needs ‘nutrition labels’, UK think tank says amid concerns over gatekeepers

A leading British think tank has urged the government to introduce ‘nutrition labels’ for AI-generated news, arguing that clearer rules are needed as AI becomes a dominant source of information.

The Institute for Public Policy Research said AI firms are increasingly acting as new gatekeepers of the internet and must pay publishers for the journalism that shapes their output.

The group recommended standardised labels showing which sources underpin AI-generated answers, instead of leaving users unsure about the origin or reliability of the material they read.

It also called for a formal licensing system in the UK that would allow publishers to negotiate directly with technology companies over the use of their content. The move comes as a growing share of the public turns to AI for news, while Google’s AI summaries reach billions each month.

IPPR’s study found that some AI platforms rely heavily on content from outlets with licensing agreements, such as the Guardian and the Financial Times, while others, like the BBC, appear far less often due to restrictions on scraping.

The think tank warned that such patterns could weaken media plurality by sidelining local and smaller publishers instead of supporting a balanced ecosystem. It added that Google’s search summaries have already reduced traffic to news websites by providing answers before users click through.

The report said public funding should help sustain investigative and local journalism as AI tools expand. OpenAI responded that its products highlight sources and provide links to publishers, arguing that careful design can strengthen trust in the information people see online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deezer opens AI detection tool to rivals

French streaming platform Deezer has opened access to its AI music detection tool for rival services, including Spotify. The move follows mounting concern in France and across the industry over the rapid rise of synthetic music uploads.

Deezer said around 60,000 AI-generated tracks are uploaded daily, with 13.4 million detected in 2025. In France, the company has already demonetised 85% of AI-generated streams to redirect royalties to human artists.

The tool automatically tags fully AI-generated tracks, removes them from recommendations and flags fraudulent streaming activity. Spotify, which also operates widely in France, has introduced its own measures but relies more heavily on creator disclosure.

Challenges remain for Deezer in France and beyond, as the system struggles to identify hybrid tracks mixing human and AI elements. Industry pressure continues to grow for shared standards that balance innovation, transparency and fair payment.

Google faces new UK rules over AI summaries and publisher rights

The UK competition watchdog has proposed new rules that would force Google to give publishers greater control over how their content is used in search and AI tools.

The Competition and Markets Authority (CMA) plans to require opt-outs for AI-generated summaries and model training, marking the first major intervention under Britain’s new digital markets regime.

Publishers argue that generative AI threatens traffic and revenue by answering queries directly instead of sending users to the original sources.

The CMA proposal would also require clearer attribution of publisher content in AI results and stronger transparency around search rankings, including AI Overviews and conversational search features.

Additional measures under consultation include search engine choice screens on Android and Chrome, alongside stricter data portability obligations. The regulator says tailored obligations would give businesses and users more choice while supporting innovation in digital markets.

Google has warned that overly rigid controls could damage the user experience, describing the relationship between AI and search as complex.

The consultation runs until late February, with the outcome expected to shape how AI-powered search operates in the UK.

Artists and writers say no to generative AI

Creative communities are pushing back against generative AI in literature and art. The Science Fiction and Fantasy Writers Association now bars works created wholly or partly with large language models after criticism of earlier, more permissive rules.

San Diego Comic-Con faced controversy when it initially allowed AI-generated art in its exhibition, but not for sale. Artists argued that the rules threatened originality, prompting organisers to ban all AI-created material.

Authors warn that generative AI undermines the creative process. Some point out that large language model tools are already embedded in research and writing software, raising concerns about accidental disqualification from awards.

Fans and members welcomed SFWA’s decision, but questions remain about how broadly AI usage will be defined. Many creators insist that machines cannot replicate storytelling and artistic skill.

Industry observers expect other cultural organisations to follow similar policies this year. The debate continues over ethics, fairness, and technology’s role in arts and literature.

Writers challenge troubling AI assumptions about language and style

A growing unease among writers is emerging as AI tools reshape how language is produced and perceived. Long-established habits, including the use of em dashes and semicolons, are increasingly being viewed with suspicion as machine-generated text becomes more common.

The concern is not opposition to AI itself, but the blurring of boundaries between human expression and automated output. Writers whose work was used to train large language models without consent say stylistic traits developed over decades are now being misread as algorithmic authorship.

Academic and editorial norms are also shifting under this pressure. Teaching practices that once valued rhythm, voice, and individual cadence are increasingly challenged by stricter stylistic rules, sometimes framed as safeguards against sloppy or machine-like writing rather than as matters of taste or craft.

At the same time, productivity tools embedded into mainstream software continue to intervene in the writing process, offering substitutions and revisions that prioritise clarity and efficiency over nuance. Such interventions risk flattening language and discouraging the idiosyncrasies that define human authorship.

As AI becomes embedded in publishing, education, and professional writing, the debate is shifting from detection to preservation. Many writers warn that protecting human voice and stylistic diversity is essential, arguing that affectless, uniform prose would erode creativity and trust.

Hollywood figures back anti-AI campaign

More than 800 creatives in the US have signed an anti-AI campaign statement accusing big technology companies of exploiting human work. High-profile figures from film and television have backed the initiative, which argues that training AI on creative content without consent amounts to theft.

The campaign was launched by the Human Artistry Campaign, a coalition representing creators, unions and industry groups. Supporters say AI systems should not be allowed to use artistic work without permission and fair compensation.

Actors and filmmakers in the US warned that unchecked AI adoption threatens livelihoods across film, television and music. Campaign organisers said innovation should not come at the expense of creators’ rights or ownership of their work.

The statement adds to growing pressure on lawmakers and technology firms in the US. Creative workers are calling for clearer rules on how AI can be developed and deployed across the entertainment industry.

YouTube’s 2026 strategy places AI at the heart of moderation and monetisation

As announced yesterday, YouTube is expanding its response to synthetic media by introducing experimental likeness detection tools that allow creators to identify videos where their face appears altered or generated by AI.

The system, modelled conceptually on Content ID, scans newly uploaded videos for visual matches linked to enrolled creators, enabling them to review content and pursue privacy or copyright complaints when misuse is detected.

Participation requires identity verification through government-issued identification and a biometric reference video, positioning facial data as both a protective and governance mechanism.

While the platform stresses consent and limited scope, the approach reflects a broader shift towards biometric enforcement as platforms attempt to manage deepfakes, impersonation, and unauthorised synthetic content at scale.

Alongside likeness detection, YouTube’s 2026 strategy places AI at the centre of content moderation, creator monetisation, and audience experience.

AI tools already shape recommendation systems, content labelling, and automated enforcement, while new features aim to give creators greater control over how their image, voice, and output are reused in synthetic formats.

The move highlights growing tensions between creative empowerment and platform authority, as safeguards against AI misuse increasingly rely on surveillance, verification, and centralised decision-making.

As regulators debate digital identity, biometric data, and synthetic media governance, YouTube’s model signals how private platforms may effectively set standards ahead of formal legislation.

European Parliament moves to force AI companies to pay news publishers

Lawmakers in the EU are moving closer to forcing technology companies to pay news publishers for the use of journalistic material in model training, according to a draft copyright report circulating in the European Parliament.

The text forms part of a broader effort to update copyright enforcement as automated content systems expand across media and information markets.

Compromise amendments also widen the scope beyond payment obligations, bringing AI-generated deepfakes and synthetic manipulation into sharper focus.

MEPs argue that existing legal tools fail to offer sufficient protection for publishers, journalists and citizens when automated systems reproduce or distort original reporting.

The report reflects growing concern that platform-driven content extraction undermines the sustainability of professional journalism. Lawmakers are increasingly framing compensation mechanisms as a corrective measure rather than as voluntary licensing or opaque commercial arrangements.

If adopted, the position of the Parliament would add further regulatory pressure on large technology firms already facing tighter scrutiny under the Digital Markets Act and related digital legislation, reinforcing Europe’s push to assert control over data use, content value and democratic safeguards.

AI firms fall short of EU transparency rules on training data

Several major AI companies appear slow to meet EU transparency obligations, raising concerns over compliance with the AI Act.

Under the regulation, developers of large foundation models must disclose information about training data sources, allowing creators to assess whether copyrighted material has been used.

Such disclosures are intended to offer a minimal baseline of transparency, covering the use of public datasets, licensed material and scraped websites.

While open-source providers such as Hugging Face have already published detailed templates, leading commercial developers have so far provided only broad descriptions of data usage instead of specific sources.

Formal enforcement of the rules will not begin until later in the year, extending a grace period for companies that released models after August 2025.

The European Commission has indicated willingness to impose fines if necessary, although it continues to assess whether newer models fall under immediate obligations.

The issue is likely to become politically sensitive, as stricter enforcement could affect US-based technology firms and intensify transatlantic tensions over digital regulation.

Transparency under the AI Act may therefore test both regulatory resolve and international relations as implementation moves closer.

Questions mount over AI-generated artist

An artist called Sienna Rose has drawn millions of streams on Spotify, despite strong evidence suggesting she is AI-generated. Several of her jazz-influenced soul tracks have gone viral, with one surpassing five million plays.

Streaming platform Deezer says many of her songs have been flagged as AI-made by detection tools that identify technical artefacts in the audio. Signs include an unusually high volume of releases, generic sound patterns and a complete absence of live performances or online presence.

The mystery intensified after pop star Selena Gomez briefly shared one of Rose’s tracks on social media, only for it to be removed amid growing scrutiny. Record labels linked to Rose have declined to clarify whether a human performer exists.

The case highlights mounting concern across the industry as AI music floods streaming services. Artists including Raye and Paul McCartney have urged audiences to keep valuing emotional authenticity over algorithmic output.
