YouTube expands AI dubbing to millions of creators

Real-time translation is becoming a standard feature across consumer tech, with Samsung, Google, and Apple all introducing new tools. Apple’s recently announced Live Translation on AirPods demonstrates the utility of such features, particularly for travellers.

YouTube has joined the trend, expanding its multi-language audio feature to millions of creators worldwide. The tool, powered by Google’s Gemini AI, lets creators add dubbed audio tracks in multiple languages while replicating the original tone and emotion.

The feature was first tested with creators like MrBeast, Mark Rober, and Jamie Oliver. YouTube reports that Jamie Oliver’s channel saw its views triple, while over 25% of the watch time came from non-primary languages.

Mark Rober’s channel now supports more than 30 languages per video, helping creators reach audiences far beyond their native markets. YouTube states that this expansion should make content more accessible to global viewers and increase overall engagement.

Subtitles will still be vital for people with hearing difficulties, but AI-powered dubbing could reduce reliance on them for language translation. For creators, it marks a significant step towards making content truly global.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Educators rethink assignments as AI becomes widespread

Educators are confronting a new reality as AI tools like ChatGPT become widespread among students. Traditional take-home assignments and essays are increasingly at risk as students commonly use AI chatbots to complete schoolwork.

Schools are responding by moving more writing tasks into the classroom and monitoring student activity. Teachers are also integrating AI into lessons, teaching students how to use it responsibly for research, summarising readings, or improving drafts, rather than as a shortcut to cheat.

Policies on AI use still vary widely. Some classrooms allow AI tools for grammar checks or study aids, while others enforce strict bans. Teachers are shifting away from take-home essays, adopting in-class tests, lockdown browsers, or flipped classrooms to manage AI’s impact better. 

The inconsistency often leaves students unsure about acceptable use and challenges educators to uphold academic integrity.

Institutions like the University of California, Berkeley, and Carnegie Mellon have implemented policies promoting ‘AI literacy,’ explaining when and how AI can be used, and adjusting assessments to prevent misuse.

As AI continues improving, educators seek a balance between embracing technology’s potential and safeguarding academic standards. Teachers emphasise guidance, structured use, and supervision to ensure AI supports learning rather than undermining it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

M&S technology chief steps down after cyberattack

Marks & Spencer’s technology chief, Rachel Higham, has stepped down less than 18 months after joining the retailer from BT.

Her departure comes months after a cyberattack in April by Scattered Spider disrupted systems and cost the company around £300 million. Online operations, including click-and-collect, were temporarily halted before being gradually restored.

In a memo to staff, the company described Higham as a steady hand during a turbulent period and wished her well. M&S has said it does not intend to replace her role directly, leaving open questions over succession.

The retailer expects part of the financial hit to be offset by insurance. It has declined to comment further on whether Higham will receive a payoff.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Claude AI gains powerful file editing tools for documents and spreadsheets

Anthropic’s Claude has expanded its role as a leading AI assistant by adding advanced tools for creating and editing files. Instead of manually working with different programs, users can now describe their needs in plain language and let the AI produce or update Word, Excel, PowerPoint, and PDF files.

The feature supports uploads of CSV and TSV data and can generate charts, graphs, or images where needed, with a 30MB size limit applying to both uploads and downloads.

The real breakthrough lies in editing. Instead of opening a document or spreadsheet, users can simply type instructions such as replacing text, changing currencies, or updating job titles. Claude processes the prompt and makes all the changes in one pass, preserving the original formatting.

This positions Claude as more efficient than rivals such as Gemini, which can export reports but cannot directly modify existing files.

The feature preview is available on web and desktop for subscribers on Max, Team, or Enterprise plans. Analysts suggest the update could reshape productivity tools, especially after reports that Microsoft has partnered with Anthropic to explore using Claude for Office 365 functions.

By removing repetitive tasks and making file handling conversational, Claude is pushing productivity software into a new phase of automation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NotebookLM turns notes into flashcards, podcasts and quizzes

Google’s learning-focused AI tool NotebookLM has gained a major update, making studying and teaching more interactive.

Instead of offering only static summaries, it now generates flashcards that condense key information into easy-to-remember notes, helping users recall knowledge more effectively.

Reports can also be transformed into quizzes with customisable topics and difficulty, which can then be shared with friends or colleagues through a simple link.

The update extends to audio learning, where NotebookLM’s podcast-style Audio Overviews are evolving with new formats. Instead of a single style, users can now create Brief, Debate, or Critique episodes, giving greater flexibility in how material is explained or discussed.

Google is also strengthening its teaching tools. A new Blog Post format offers contextual suggestions such as strategy papers or explainers, while the ability to create custom report formats allows users to design study resources tailored to their needs.

The most significant addition, however, is the Learning Guide. Acting like a personal tutor, it promotes deeper understanding by asking open-ended questions, breaking problems into smaller steps, and adapting explanations to suit each learner.

With these features, NotebookLM is moving closer to becoming a comprehensive learning assistant, offering a mix of interactive study aids and adaptable teaching methods that go beyond simple note-taking.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Growing concern over AI fatigue among students and teachers

Experts say growing exposure to AI is leaving many people exhausted, a phenomenon increasingly described as ‘AI fatigue’.

Educators and policymakers note that AI adoption surged before society had time to thoroughly weigh its ethical or social effects. The technology now underpins tasks from homework writing to digital art, leaving some feeling overwhelmed or displaced.

University students are among those most affected, with many relying heavily on AI for assignments. Teachers say it has become challenging to identify AI-generated work, as detection tools often produce inconsistent results.

Some educators are experimenting with low-tech classrooms, banning phones and requiring handwritten work. They report deeper conversations and stronger engagement when distractions are removed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Teens turn to AI chatbots for support, raising mental health concerns

Mental health experts in Iowa have warned that teenagers are increasingly turning to AI chatbots instead of seeking human connection, raising concerns about misinformation and harmful advice.

The issue comes into focus on National Suicide Prevention Day, shortly after a lawsuit was filed against OpenAI, the maker of ChatGPT, over a teenager’s suicide.

Jessica Bartz, a therapy supervisor at Vera French Duck Creek, said young people are at a vulnerable stage of identity formation while family communication often breaks down.

She noted that some teens use chatbot tools like ChatGPT, Genius and Copilot to self-diagnose, which can reinforce inaccurate or damaging ideas.

‘Sometimes AI can validate the wrong things,’ Bartz said, stressing that algorithms only reflect the limited information users provide.

Without human guidance, young people risk misinterpreting results and worsening their struggles.

Experts recommend that parents and trusted adults engage directly with teenagers, offering empathy and open communication instead of leaving them dependent on technology.

Bartz emphasised that nothing can replace a caring person noticing warning signs and intervening to protect a child’s well-being.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI export rules tighten as the US opens global opportunities

President Trump has signed an Executive Order to promote American leadership in AI exports, marking a significant policy shift. The move creates new global opportunities for US businesses but also introduces stricter compliance responsibilities.

The order establishes the American AI Exports Program, overseen by the Department of Commerce, to develop and deploy ‘full-stack’ AI export packages.

These packages cover everything from chips and cloud infrastructure to AI models and cybersecurity safeguards. Industry consortia will be invited to submit proposals, outlining hardware origins, export targets, business models, and federal support requests.

A central element of the initiative is ensuring compliance with US export control regimes. Companies must align with the Export Control Reform Act and the Export Administration Regulations, with special attention to restrictions on advanced computing chips.

New guidance warns against potential violations linked to hardware and highlights red flags for illegal diversion of sensitive technology.

Commerce stresses that participation requires robust export compliance plans and rigorous end user screening.

Legal teams are urged to review policies on AI exports, as regulators focus on preventing misuse of advanced computing systems in military or weapons programmes abroad.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI Mode in Google Search adds support for Hindi and four more languages

Google has announced an expansion of AI Mode in Search to five new languages: Hindi, Indonesian, Japanese, Korean and Brazilian Portuguese. The feature was first introduced in English in March and aims to compete with AI-powered search platforms such as ChatGPT Search and Perplexity AI.

The company highlighted that building a global search experience requires more than translation. Google’s custom version of Gemini 2.5 uses advanced reasoning and multimodal capabilities to provide locally relevant and useful search results instead of offering generic answers.

AI Mode now also supports agentic tasks such as booking restaurant reservations, with plans to include local service appointments and event ticketing.

Currently, these advanced functions are available to Google AI Ultra subscribers in the US, while the feature began rolling out in India in July.

These developments reinforce Google’s strategy to integrate AI deeply into its search ecosystem, enhancing user experience across diverse regions instead of limiting sophisticated AI tools to English-language users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic AI faces legal setback in authors’ piracy lawsuit

A federal judge has rejected the $1.5 billion settlement Anthropic agreed to in a piracy lawsuit filed by authors.

Judge William Alsup expressed concerns that the deal was ‘nowhere close to complete’ and could be forced on writers without proper input.

The lawsuit involves around 500,000 authors whose works were allegedly used without permission to train Anthropic’s large language models. The proposed settlement would have granted $3,000 per work, a sum far exceeding previous copyright recoveries.

However, the judge criticised the lack of clarity regarding the list of works, authors, notification process, and claim forms.

Alsup instructed the lawyers to provide clear notice to class members and allow them to opt in or out. He also emphasised that Anthropic must be shielded from future claims on the same issue. The court set deadlines for a final list of works by September 15 and approval of all related documents by October 10.

The ruling highlights ongoing legal challenges for AI companies using copyrighted material for training large language models instead of relying solely on licensed or public-domain data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!