Libraries lead UK government push to improve digital inclusion and AI confidence

Libraries Connected, supported by a £310,400 grant from the UK Government’s Digital Inclusion Innovation Fund administered by the Department for Science, Industry and Technology (DSIT), is launching Innovating in Trusted Spaces: Libraries Advancing the Digital Inclusion Action Plan.

The programme will run from November 2025 to March 2026 across 121 library branches in Newcastle, Northumberland, Nottingham City and Nottinghamshire, targeting older people, low-income families and individuals with disabilities to ensure they are not left behind amid rapid digital and AI-driven change.

Public libraries are already a leading provider of free internet access and basic digital skills support, offering tens of thousands of public computers and learning opportunities each year. However, only around 27 percent of UK adults currently feel confident in recognising AI-generated content online, underscoring the need for improved digital and media literacy.

The project will create and test a new digital inclusion guide for library staff, focusing on the benefits and risks of AI tools, misinformation and emerging technologies, as well as building a national network of practice for sharing insights.

Partners in the programme include Good Things Foundation and WSA Community, which will help co-design materials and evaluate the initiative’s impact to inform future digital inclusion efforts across communities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google secures approval for major UK data centre at former RAF airfield

Local councillors have approved Google’s plans to build a large data centre campus at North Weald Airfield near Harlow, marking a major expansion of the company’s UK digital infrastructure.

The development is expected to create up to 780 local jobs, including approximately 200 direct roles, and contribute an estimated £79 million annually to the local economy and £319 million nationally.

The project involves demolishing existing buildings at the former RAF airfield and constructing two data centre facilities alongside offices, roads and parking.

While UK councillors largely welcomed the investment, the council acknowledged potential downsides, including a reduction in stalls at the long-running North Weald Market and pending Section 106 contributions to mitigate infrastructure impacts, such as upgrades to nearby transport links.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI content flood drives ‘slop’ to word of the year

Merriam-Webster has chosen ‘slop’ as its 2025 word of the year, reflecting the rise of low-quality digital content produced by AI. The term originally meant soft mud, but now describes absurd or fake online material.

Greg Barlow, Merriam-Webster’s president, said the word captures how AI-generated content has fascinated, annoyed and sometimes alarmed people. Tools like AI video generators can produce deepfakes and manipulated clips in seconds.

The spike in searches for ‘slop’ shows growing public awareness of poor-quality content and a desire for authenticity. People want real, genuine material rather than AI-driven junk content.

AI-generated slop includes everything from absurd videos to fake news and junky digital books. Merriam-Webster selects its word of the year by analysing search trends and cultural relevance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Streaming platforms face pressure over AI-generated music

Musicians are raising the alarm over AI-generated tracks appearing on their profiles without consent, presenting fraudulent work as their own. British folk artist Emily Portman discovered an AI-generated album, Orca, on Spotify and Apple Music, which copied her folk style and lyrics.

Fans initially congratulated her on a release she had not made since 2022.

Australian musician Paul Bender reported a similar experience, with four ‘bizarrely bad’ AI tracks appearing under his band, The Sweet Enoughs. Both artists said that weak distributor security allows scammers to easily upload content, calling it ‘the easiest scam in the world.’

A petition launched by Bender garnered tens of thousands of signatures, urging platforms to strengthen their protections.

AI-generated music has become increasingly sophisticated, making it nearly impossible for listeners to distinguish from genuine tracks. While revenues from such fraudulent streams are low individually, bots and repeated listening can significantly increase payouts.

Industry representatives note that the primary motive is to collect royalties from unsuspecting users.

Despite the threat of impersonation, Portman is continuing her creative work, emphasising human collaboration and authentic artistry. Spotify and Apple Music have pledged to collaborate with distributors to enhance the detection and prevention of AI-generated fraud.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI generated podcasts flood platforms and disrupt the audio industry

Podcasts generated by AI are rapidly reshaping the audio industry, with automated shows flooding platforms such as Spotify, Apple Podcasts and YouTube.

Advances in voice cloning and speech synthesis have enabled the production to large volumes of content at minimal cost, allowing AI hosts to compete directly with human creators in an already crowded market.

Some established podcasters are experimenting cautiously, using cloned voices for translation, post-production edits or emergency replacements. Others have embraced full automation, launching synthetic personalities designed to deliver commentary, biographies and niche updates at speed.

Studios, such as Los Angeles-based Inception Point AI, have scaled the model to scale, producing hundreds of thousands of episodes by targeting micro-audiences and trending searches instead of premium advertising slots.

The rapid expansion is fuelling concern across the industry, where trust and human connection remain central to listener loyalty.

Researchers and networks warn that large-scale automation risks devaluing premium content, while creators and audiences question how far AI voices can replace authenticity without undermining the medium itself.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Zoom launches AI Companion 3.0 with expanded features

Zoom has unveiled AI Companion 3.0, its latest AI assistant, which extends functionality beyond meetings with a new web interface, workflow tools, and agentic search. Select features are now accessible to free Zoom Workplace Basic users, while full access is available via a paid add-on.

Free users can generate meeting summaries, action item lists, and insights, albeit with usage limitations.

The updated AI Companion introduces agentic retrieval, enabling searches across meeting summaries, transcripts, and connected services, such as Google Drive and Microsoft OneDrive, with Gmail and Outlook support planned.

Users can automatically generate follow-up tasks and draft emails using a post-meeting template, while the Daily Reflection Report summarises tasks and updates to help prioritise work.

A new agentic writing mode allows drafting, editing, and refining business documents in a canvas-style interface, and AI-created content can be exported in multiple formats, including Markdown, PDF, Word, and Zoom Docs.

Additional tools include AI-based brainstorming and, for Custom AI Companion users, a deep research mode consolidating insights from multiple meetings and documents.

Basic plan users get limited access for up to three meetings per month, including automated summaries, in-meeting queries, and AI-generated notes. Up to 20 prompts are included via the side panel and web interface, while broader access requires a subscription priced at Rs 1,080 per month.

The new web interface also offers built-in prompts to guide users in exploring the assistant’s capabilities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI’s rise signals a shift in frontier tech investment

OpenAI overtook SpaceX as the world’s most valuable private company in October after a secondary share sale valued the AI firm at $500 billion. The deal put Sam Altman’s company about $100 billion ahead of Elon Musk’s space venture.

That lead may prove short-lived, as SpaceX is now planning its own secondary share sale that could value the company at around $800 billion. An internal letter seen by multiple outlets suggests Musk would reclaim the top spot within months.

The clash is the latest chapter in a rivalry that dates back to OpenAI’s founding in 2015, before Musk left the organisation in 2018 and later launched the startup xAI. Since then, lawsuits and public criticism have marked a sharp breakdown in relations.

Musk also confirmed on X that SpaceX is exploring a major initial public offering, while OpenAI’s recent restructuring allows it to pursue an IPO in the future. The valuation battle reflects soaring investor appetite for frontier technologies, as AI, space, robotics and defence startups attract unprecedented private funding.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Study warns that LLMs are vulnerable to minimal tampering

Researchers from Anthropic, the UK AI Security Institute and the Alan Turing Institute have shown that only a few hundred crafted samples can poison LLM models. The tests revealed that around 250 malicious entries could embed a backdoor that triggers gibberish responses when a specific phrase appears.

Models ranging from 600 million to 13 billion parameters (such as Pythia) were affected, highlighting the scale-independent nature of the weakness. A planted phrase such as ‘sudo’ caused output collapse, raising concerns about targeted disruption and the ease of manipulating widely trained systems.

Security specialists note that denial-of-service effects are worrying, yet deceptive outputs pose far greater risk. Prior studies already demonstrated that medical and safety-critical models can be destabilised by tiny quantities of misleading data, heightening the urgency for robust dataset controls.

Researchers warn that open ecosystems and scraped corpora make silent data poisoning increasingly feasible. Developers are urged to adopt stronger provenance checks and continuous auditing, as reliance on LLMs continues to expand for AI purposes across technical and everyday applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Google boosts Translate with Gemini upgrades

Google is rolling out a major Translate upgrade powered by Gemini to improve text and speech translation. The update enhances contextual understanding so idioms, tone and intent are interpreted more naturally.

A beta feature for live headphone translation enables real-time speech-to-speech output. Gemini processes audio directly, preserving cadence and emphasis to improve conversations and lectures. Android users in the US, Mexico and India gain early access, with wider availability planned for 2026.

Translate is also gaining expanded language-learning tools for speaking practice and progress tracking. Additional language pairs, including English to German and Portuguese, broaden support for learners worldwide.

Google aims to reduce friction in global communication by focusing on meaning rather than literal phrasing. Engineers expect user feedback to shape the AI live translation beta across platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Building trustworthy AI for humanitarian response

A new vision for Humanitarian AI is emerging around a simple idea, and that is that technology should grow from local knowledge if it is to work everywhere. Drawing on the IFRC’s slogan ‘Local, everywhere,’ this approach argues that AI should not be driven by hype or raw computing power, but by the lived experience of communities and humanitarian workers on the ground. With millions of volunteers and staff worldwide, the Red Cross and Red Crescent Movement holds a vast reservoir of practical knowledge that AI can help preserve, organise, and share for more effective crisis response.

In a recent blog post, Jovan Kurbalija explains that this bottom-up approach is not only practical but also ethically sound. AI systems grounded in local humanitarian knowledge can better reflect cultural and social contexts, reduce bias and misinformation, and strengthen trust by being governed by humanitarian organisations rather than opaque commercial platforms. Trust, he argues, lies in people and institutions behind the technology, not in algorithms themselves.

Kurbalija also notes that developing such AI is technically and financially realistic. Open-source models, mobile and edge computing, and domain-specific AI tools enable the deployment to functional systems even in low-resource environments. Most humanitarian tasks, from decision support to translation or volunteer guidance, do not require massive infrastructure, but high-quality, well-structured knowledge rooted in real-world experience.

If developed carefully, Humanitarian AI could also support the IFRC’s broader renewal goals, from strengthening local accountability and collaboration to safeguarding independence and humanitarian principles. Starting with small pilot projects and scaling up gradually, the Movement could transform AI into a shared public good that not only enhances responses to today’s crises but also preserves critical knowledge for future generations.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!