New project expands AI access for African languages

Africa is working to close the AI language gap, as most global tools remain trained on English, Chinese, and European languages.

The African Next Voices project has created the continent’s largest dataset of spoken African languages, covering 18 tongues across Kenya, Nigeria, and South Africa. Supported by a $2.2m Gates Foundation grant, the dataset includes 9,000 hours of speech in farming, health, and education settings.

Languages such as Hausa, Yoruba, isiZulu, and Kikuyu are now available for developers to build translation, transcription, and conversational AI tools. Farmers like South Africa’s Kelebogile Mosime already use local-language apps to solve agricultural challenges.

Start-ups, including Lelapa AI, are building products in African languages for banks and telecoms. Researchers warn that without such initiatives, millions risk exclusion from vital services and cultural knowledge could be lost.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK publishers fear Google AI summaries hit revenues

UK publishers warn that Google’s AI Overviews significantly cut website traffic, threatening fragile online revenues.

Reach, owner of the Mirror and Daily Express, said readers often settle for the AI summary instead of visiting its sites. DMG Media told regulators that click-through rates had fallen by up to 89% since the rollout.

Publishers argue that they provide accurate reporting that fuels Google’s search results, yet they see no financial return when users no longer click through. Concerns are growing over Google’s conversational AI Mode, which displays even fewer links.

Google insists that search traffic has remained stable year-on-year and claims that AI Overviews offer users more opportunities to find quality links. Still, a coalition of publishers has filed a complaint with the UK Competition and Markets Authority, alleging misuse of their content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google boosts Gemini with audio uploads and NotebookLM upgrades

The US tech giant has expanded the capabilities of its Gemini app by allowing users to upload audio files for AI analysis across Android, iOS, and the web. The upgrade enables transcription of interviews, voice memos and lecture recordings instead of relying solely on typed or spoken prompts.

Free-tier users can upload clips of up to ten minutes with five prompts daily, while paid subscribers have access to three hours of uploads across multiple files. According to Gemini vice president Josh Woodward, the feature is designed to make the platform more versatile and practical for everyday tasks.

Google has also enhanced its Search AI mode with five new languages, including Hindi, Japanese and Korean, extending its multilingual reach.

NotebookLM, the company’s research assistant powered by Gemini, can now generate structured reports such as quizzes, study guides and blog posts from uploaded content, available in more than 80 languages.

These improvements underline Google’s ambition to integrate AI more deeply into everyday applications instead of leaving the technology confined to experimental tools. They also highlight growing competition in the AI market, with Google using Gemini 2.5 to expand its services for global users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Momenta set for first European robotaxi rollout with Uber in Germany

Uber and Chinese startup Momenta will begin robotaxi testing in Munich in 2026, marking their first public deployment in continental Europe. The trials will start with human safety operators, with plans to expand across additional European cities.

Founded in 2016, Momenta is one of China’s leading autonomous vehicle companies, having tested self-driving cars since 2018. The company is already collaborating with automakers such as Mercedes-Benz and BMW to integrate advanced driver assistance systems.

Uber is broadening its global AV network, which already spans 20 partners across mobility, delivery, and freight. In the US, Waymo robotaxis operate via Uber’s app, while international partnerships include WeRide in the Gulf and Wayve in London.

Competition in Europe is intensifying. Baidu from China and Lyft plan to roll out robotaxis in Germany and the UK next year, while Uber has chosen Munich as its engineering base for its strong automotive ecosystem.

German regulators must still certify Momenta’s technology and approve geo-fenced operating areas. If successful, Munich will become Momenta’s first European launchpad, building on its Shanghai robotaxi service and global ADAS deployment.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI Mode in Google Search adds support for Hindi and four more languages

Google has announced an expansion of AI Mode in Search to five new languages: Hindi, Indonesian, Japanese, Korean and Brazilian Portuguese. The feature was first introduced in English in March and aims to compete with AI-powered search platforms such as ChatGPT Search and Perplexity AI.

The company highlighted that building a global search experience requires more than translation. Google’s custom version of Gemini 2.5 uses advanced reasoning and multimodal capabilities to provide locally relevant and useful search results instead of offering generic answers.

AI Mode now also supports agentic tasks such as booking restaurant reservations, with plans to include local service appointments and event ticketing.

Currently, these advanced functions are available to Google AI Ultra subscribers in the US, while the language expansion began rolling out in India in July.

These developments reinforce Google’s strategy to integrate AI deeply into its search ecosystem, enhancing user experience across diverse regions instead of limiting sophisticated AI tools to English-language users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Social media authenticity questioned as Altman points to bot-like behaviour

Sam Altman, X enthusiast and Reddit shareholder, has expressed doubts over whether social media content can still be distinguished from bot activity. His remarks followed an influx of praise for OpenAI Codex on Reddit, where users questioned whether such posts were genuine.

Altman noted that humans are increasingly adopting quirks of AI-generated language, blurring the line between authentic and synthetic speech. He also pointed to factors such as social media optimisation for engagement and astroturfing campaigns, which amplify suspicions of fakery.

The comments follow the backlash OpenAI faced over the rollout of GPT-5, which saw Reddit communities shift from celebratory to critical. Altman acknowledged flaws in a Reddit AMA, but the fallout left lasting scepticism and dampened enthusiasm among AI users.

Underlying this debate is the wider reality that bots dominate much of the online environment. Imperva estimates that more than half of 2024’s internet traffic was non-human, while X’s own Grok chatbot admitted to hundreds of millions of bots on the platform.

Some observers suggest Altman’s comments may foreshadow an OpenAI-backed social media venture. Whether such a project could avoid the same bot-related challenges remains uncertain, with research suggesting that even bot-only networks eventually create echo chambers of their own.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Superconducting qubits power Stanford’s quantum router advance

Quantum computers could become more efficient with a new quantum router that directs data more quickly within machines. Researchers at Stanford have built the component, which could eventually form the backbone of quantum random access memory (QRAM).

The router utilises superconducting qubits, controlled by electromagnetic pulses, to transmit information to quantum addresses. Unlike classical routers, it can encode addresses in superposition, allowing data to be stored in two places simultaneously.

In tests with three qubits, the router achieved a fidelity of around 95%. If integrated into QRAM, it could unlock new algorithms by placing information into quantum states where locations remain indeterminate.

Experts say the advance could benefit areas such as quantum machine learning and database searches. It may also support future ideas, such as quantum IP addresses, although more reliable designs with larger qubit counts are still required.

The Stanford team acknowledges the device needs refinement to reduce errors. But with further development, the quantum router could be a vital step toward practical QRAM and more powerful quantum computing applications.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic AI faces legal setback in authors’ piracy lawsuit

A federal judge has rejected the $1.5 billion settlement Anthropic agreed to in a piracy lawsuit filed by authors.

Judge William Alsup expressed concerns that the deal was ‘nowhere close to complete’ and could be forced on writers without proper input.

The lawsuit involves around 500,000 authors whose works were allegedly used without permission to train Anthropic’s large language models. The proposed settlement would have granted $3,000 per work, a sum far exceeding previous copyright recoveries.

However, the judge criticised the lack of clarity regarding the list of works, authors, notification process, and claim forms.

Alsup instructed the lawyers to provide clear notice to class members and allow them to opt in or out. He also emphasised that Anthropic must be shielded from future claims on the same issue. The court set deadlines for a final list of works by September 15 and approval of all related documents by October 10.

The ruling highlights ongoing legal challenges for AI companies using copyrighted material for training large language models instead of relying solely on licensed or public-domain data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Orson Welles lost film reconstructed with AI

More than 80 years after Orson Welles’ The Magnificent Ambersons was cut and lost, AI is being used to restore 43 missing minutes of the film.

Amazon-backed Showrunner, led by Edward Saatchi, is experimenting with AI technology to rebuild the destroyed sequences as part of a broader push to reimagine how Hollywood might use AI in storytelling.

The project is not intended for commercial release, since Showrunner has not secured rights from Warner Bros. or Concord, but instead aims to explore what could have been the director’s original vision.

The initiative marks a shift in the role of AI in filmmaking. Rather than serving only as a tool for effects, dubbing or storyboarding, it is being positioned as a foundation for long-form narrative creation.

Showrunner is developing AI models capable of sustaining complex plots, with the goal of eventually generating entire films. Saatchi envisions the platform as a type of ‘Netflix of AI,’ where audiences might one day interact with intellectual property and generate their own stories.

To reconstruct The Magnificent Ambersons, the company is combining traditional techniques with AI tools. New sequences will be shot with actors, while AI will be used for face and pose transfer to replicate the original cast.

Thousands of archival set photographs are being used to digitally recreate the film’s environments.

Filmmaker Brian Rose, who has rebuilt 30,000 missing frames over five years, has reconstructed set movements and timing to match the lost scenes, while VFX expert Tom Clive will assist in refining the likenesses of the original actors.

The project underlines both the creative possibilities and the ethical tensions surrounding AI in cinema. While the reconstructed footage will not be commercially exploited, it raises questions about the use of copyrighted material in training AI and the risk of replacing human creators.

For many, however, the experiment offers a glimpse of what Welles’ ambitious work might have looked like had it survived intact.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI study links AI hallucinations to flawed testing incentives

OpenAI researchers say large language models continue to hallucinate because current evaluation methods encourage them to guess rather than admit uncertainty.

Hallucinations, defined as confident but false statements, persist despite advances in models such as GPT-5. Low-frequency facts, like specific dates or names, are particularly vulnerable.

The study argues that while pretraining predicts the next word without true or false labels, the real problem lies in accuracy-based testing. Evaluations that reward lucky guesses discourage models from saying ‘I don’t know’.

Researchers suggest penalising confident errors more heavily than uncertainty, and awarding partial credit when AI models acknowledge limits in knowledge. They argue that only by reforming evaluation methods can hallucinations be meaningfully reduced.
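The incentive problem the researchers describe can be illustrated with a small sketch. The scoring functions and penalty values below are hypothetical, chosen only to show the mechanism: under plain accuracy scoring, a model that always guesses ties or beats one that admits uncertainty, whereas a scheme that penalises confident errors and gives partial credit for abstaining rewards the honest model.

```python
def accuracy_score(answers):
    """Plain accuracy: one point per correct answer, nothing otherwise."""
    return sum(1 for a in answers if a == "correct")

def penalised_score(answers, wrong_penalty=2.0, abstain_credit=0.5):
    """Illustrative alternative: confident errors cost more than abstaining."""
    score = 0.0
    for a in answers:
        if a == "correct":
            score += 1.0
        elif a == "abstain":          # the model says 'I don't know'
            score += abstain_credit
        else:                         # confident but wrong
            score -= wrong_penalty
    return score

# A model that guesses on every question and is right half the time,
# versus one that answers only when sure and abstains otherwise.
guesser = ["correct", "wrong", "correct", "wrong"]
honest = ["correct", "abstain", "correct", "abstain"]

print(accuracy_score(guesser), accuracy_score(honest))    # guesser ties
print(penalised_score(guesser), penalised_score(honest))  # honest model wins
```

The exact penalty and credit values are arbitrary; the point is that any scheme in which a wrong answer costs more than an abstention flips the incentive from guessing to acknowledging uncertainty.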

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!