AI news needs ‘nutrition labels’, UK think tank says amid concerns over gatekeepers

A leading British think tank has urged the government to introduce ‘nutrition labels’ for AI-generated news, arguing that clearer rules are needed as AI becomes a dominant source of information.

The Institute for Public Policy Research said AI firms are increasingly acting as new gatekeepers of the internet and must pay publishers for the journalism that shapes their output.

The group recommended standardised labels showing which sources underpin AI-generated answers, instead of leaving users unsure about the origin or reliability of the material they read.

It also called for a formal licensing system in the UK that would allow publishers to negotiate directly with technology companies over the use of their content. The move comes as a growing share of the public turns to AI for news, while Google’s AI summaries reach billions of users each month.

IPPR’s study found that some AI platforms rely heavily on content from outlets with licensing agreements, such as the Guardian and the Financial Times, while others, like the BBC, appear far less often due to restrictions on scraping.

The think tank warned that such patterns could weaken media plurality by sidelining local and smaller publishers instead of supporting a balanced ecosystem. It added that Google’s search summaries have already reduced traffic to news websites by providing answers before users click through.

The report said public funding should help sustain investigative and local journalism as AI tools expand. OpenAI responded that its products highlight sources and provide links to publishers, arguing that careful design can strengthen trust in the information people see online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Best moments from MoltBook archives

A new ‘Best of MoltBook’ post on Astral Codex Ten has renewed debate over how AI-assisted writing is being presented and understood. The collection highlights selected excerpts from MoltBook, a public notebook used to explore ideas with the help of AI tools.

MoltBook is framed as a space for experimentation rather than finished analysis, with short-form entries reflecting drafts, prompts and revisions. Human judgement remains central, with outputs curated, edited or discarded rather than treated as autonomous reasoning.

Some readers have questioned descriptions of the work as ‘agentic AI’, arguing the label exaggerates the technology’s role. The AI involved responds to instructions but does not act independently, plan goals or retain long-term memory.

The discussion reflects wider scepticism about inflated claims around AI capability. MoltBook is increasingly viewed as an example of AI as a productivity aid for thinking, rather than evidence of a new form of independent intelligence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Chinese court limits liability for AI hallucinations

A court in China has ruled that AI developers are not automatically liable for hallucinations produced by their systems. The decision was issued by the Hangzhou Internet Court in eastern China and sets an early legal precedent.

Judges found that AI-generated content should be treated as a service rather than a product in such cases. Claimants must therefore prove developer fault and show concrete harm caused by the erroneous output.

The case involved a user in China who relied on AI-generated information about a university campus that did not exist. The court ruled no damages were owed, citing a lack of demonstrable harm and no authorisation for the AI to make binding promises.

The Hangzhou Internet Court warned that strict liability could hinder innovation in China’s AI sector. Legal experts say the ruling clarifies expectations for developers while reinforcing the need for user warnings about AI limitations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok returns to Indonesia as X agrees to tightened oversight

Indonesia has restored access to Grok after receiving guarantees from X that stronger safeguards will be introduced to prevent further misuse of the AI tool.

Authorities suspended the service last month following the spread of sexualised images on the platform, making Indonesia the first country to block the system.

Officials from the Ministry of Communications and Digital Affairs said that access had been reinstated on a conditional basis after X submitted a written commitment outlining concrete measures to strengthen compliance with national law.

The ministry emphasised that the document serves as a starting point for evaluation instead of signalling the end of supervision.

However, the government warned that restrictions could return if Grok fails to meet local standards or if new violations emerge. Indonesian regulators stressed that monitoring would remain continuous, and access could be withdrawn immediately should inconsistencies be detected.

The decision marks a cautious reopening rather than a full reinstatement, reflecting Indonesia’s wider efforts to demand greater accountability from global platforms deploying advanced AI systems within its borders.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Why smaller AI models may be the smarter choice

Most everyday jobs do not actually need the most powerful, cutting-edge AI models, argues Jovan Kurbalija in his blog post ‘Do we really need frontier AI for everyday work?’. While frontier AI systems dominate headlines with ever-growing capabilities, their real-world value for routine professional tasks is often limited. For many people, much of daily work remains simple, repetitive, and predictable.

Kurbalija points out that large parts of professional life, from administration and law to healthcare and corporate management, operate within narrow linguistic and cognitive boundaries. Daily communication relies on a small working vocabulary, and most decision-making follows familiar mental patterns.

In this context, highly complex AI models are often unnecessary. Smaller, specialised systems can handle these tasks more efficiently, at lower cost and with fewer risks.

Using frontier AI for routine work, the author suggests, is like using a sledgehammer to crack a nut. These large models are designed to handle almost anything, but that breadth comes with higher costs, heavier governance requirements, and stronger dependence on major technology platforms.

In contrast, small language models tailored to specific tasks or organisations can be faster, cheaper, and easier to control, while still delivering strong results.

Kurbalija compares this to professional expertise itself. Most jobs never required having the Encyclopaedia Britannica open on the desk. Real expertise lives in procedures, institutions, and communities, not in massive collections of general knowledge.

Similarly, the most useful AI tools are often those designed to draft standard documents, summarise meetings, classify requests, or answer questions based on a defined body of organisational knowledge.

Diplomacy, an area Kurbalija knows well, illustrates both the strengths and limits of AI. Many diplomatic tasks are highly ritualised and can be automated using rules-based systems or smaller models. But core diplomatic skills, such as negotiation, persuasion, empathy, and trust-building, remain deeply human and resistant to automation. The lesson, he argues, is to automate routines while recognising where AI should stop.

The broader paradox is that large AI platforms may benefit more from users than users benefit from frontier AI. By sitting at the centre of workflows, these platforms collect valuable data and organisational knowledge, even when their advanced capabilities are not truly needed.

As Kurbalija concludes, a more common-sense approach would prioritise smaller, specialised models for everyday work, reserving frontier AI for genuinely complex tasks, and moving beyond the assumption that bigger AI is always better.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deezer opens AI detection tool to rivals

French streaming platform Deezer has opened access to its AI music detection tool for rival services, including Spotify. The move follows mounting concern in France and across the industry over the rapid rise of synthetic music uploads.

Deezer said around 60,000 AI-generated tracks are uploaded daily, with 13.4 million detected in 2025. In France, the company has already demonetised 85% of AI-generated streams to redirect royalties to human artists.

The tool automatically tags fully AI-generated tracks, removes them from recommendations and flags fraudulent streaming activity. Spotify has introduced its own measures but relies more heavily on creator disclosure.

Challenges remain for Deezer in France and beyond, as the system struggles to identify hybrid tracks mixing human and AI elements. Industry pressure continues to grow for shared standards that balance innovation, transparency and fair payment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic challenges Pentagon over military AI use

Pentagon officials are at odds with AI developer Anthropic over restrictions designed to prevent autonomous weapons targeting and domestic surveillance. The disagreement has stalled discussions under a $200 million contract.

Anthropic has expressed concern about its tools being used in ways that could harm civilians or breach privacy. The company emphasises that human oversight is essential for national security applications.

The dispute reflects broader tensions between Silicon Valley firms and government use of AI. Pentagon officials argue that commercial AI can be deployed as long as it follows US law, regardless of corporate guidelines.

Anthropic’s stance may affect its Pentagon contracts as the firm prepares for a public offering. The company continues to engage with officials while advocating for ethical AI deployment in defence operations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Darren Aronofsky and Google DeepMind reimagine the American Revolution with AI

Director Darren Aronofsky’s creative studio, Primordial Soup, has released the first episodes of On This Day… 1776, a short-form animated series that uses generative AI technology from Google DeepMind to visualise pivotal events from the American Revolution ahead of the 250th anniversary of the Declaration of Independence.

Episodes are published weekly on TIME’s YouTube channel throughout 2026, with each one focusing on a specific date in 1776.

The project combines AI-generated visuals with traditional post-production elements, including colour grading and voice performances by SAG-AFTRA actors, to expand narrative possibilities while retaining human creative input.

Aronofsky and collaborators describe the series as an example of how thoughtful, artist-led AI use can enhance storytelling rather than replace artistic craft.

The initiative is part of a broader trend in entertainment where AI tools are being explored as creative accelerators, though reactions have been mixed on social media, with some viewers questioning the quality and artistic decisions in early episodes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI streamlines data analysis with in-house AI agent

OpenAI has developed an internal AI data agent designed to help employees move from complex questions to reliable insights in minutes. The tool allows teams to analyse vast datasets using natural language instead of manual SQL-heavy workflows.

Across engineering, finance, research and product teams, the agent reduces friction by locating the right tables, running queries and validating results automatically. Built on GPT-5.2, it adapts as it works, correcting errors and refining its approach without constant human input.

Context plays a central role in the system’s accuracy, combining metadata, human annotations, code-level insights and institutional knowledge. A built-in memory function stores non-obvious corrections, helping the agent improve over time and avoid repeated mistakes.

To maintain trust, OpenAI evaluates the agent continuously using automated tests that compare generated results with verified benchmarks. Strong access controls and transparent reasoning ensure the system remains secure, reliable and aligned with existing data permissions.
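The general pattern described above — draft a query from a natural-language question, run it, self-correct on failure, and memorise the fix — can be sketched in a few lines. This is purely an illustrative toy, not OpenAI’s actual tool: the function names, the stubbed model call, and the in-memory correction store are all assumptions made for the example.

```python
import sqlite3

# Illustrative sketch of the draft -> run -> self-correct -> memorise loop.
# draft_sql() stands in for a model call; MEMORY stands in for the agent's
# store of non-obvious corrections, so repeated mistakes are avoided.

MEMORY = {}  # question -> previously corrected SQL

def draft_sql(question):
    """Stand-in for a model call that turns a question into SQL."""
    if question in MEMORY:  # apply a remembered correction first
        return MEMORY[question]
    # Naive first draft, deliberately broken for the demo:
    # 'order' is a reserved word in SQL and must be quoted.
    return "SELECT COUNT(*) FROM order"

def run_query(conn, sql):
    """Execute SQL, returning (result, error_message)."""
    try:
        return conn.execute(sql).fetchone()[0], None
    except sqlite3.Error as exc:
        return None, str(exc)

def answer(conn, question):
    """Draft, run, self-correct on error, and memorise the fix."""
    sql = draft_sql(question)
    result, error = run_query(conn, sql)
    if error:
        # In the real agent a second model call would redraft the query;
        # here the corrected SQL is hard-coded to keep the sketch runnable.
        sql = 'SELECT COUNT(*) FROM "order"'
        MEMORY[question] = sql  # remember the correction for next time
        result, error = run_query(conn, sql)
    return result

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "order" (id INTEGER)')
conn.executemany('INSERT INTO "order" VALUES (?)', [(1,), (2,), (3,)])

print(answer(conn, "How many orders are there?"))  # first call self-corrects
print(answer(conn, "How many orders are there?"))  # second call hits MEMORY
```

The memory store mirrors the article’s point about avoiding repeated mistakes: once a correction is recorded, later drafts skip the failing query entirely.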

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI reshapes customer experience, survey finds

A survey of contact centre and customer experience (CX) leaders finds that AI has become ‘non-negotiable’ for organisations seeking to deliver efficient, personalised, and data-driven customer service.

Respondents reported widespread use of AI-enabled tools such as chatbots, virtual agents, and conversational analytics to handle routine queries, triage requests and surface insights from large volumes of interaction data.

CX leaders emphasised AI’s ability to boost service quality and reduce operational costs, enabling faster response times and better outcomes across channels.

Many organisations are investing in AI platforms that integrate with existing systems to automate workflows, assist human agents, and personalise interactions based on real-time customer context.

Despite optimism, leaders also noted challenges, including data quality, governance, skills gaps and maintaining human oversight, and stressed that AI should augment, not replace, human agents.

The article underscores that today’s competitive CX landscape increasingly depends on strategic AI adoption rather than optional experimentation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!