China cracks down on Kuaishou and Weibo over alleged online content violations

China’s internet watchdog, the Cyberspace Administration of China (CAC), has warned Kuaishou Technology and Weibo for failing to curb celebrity gossip and other harmful content on their platforms.

The CAC issued formal warnings, citing damage to the ‘online ecosystem’ and demanding corrective action. Both firms pledged compliance, with Kuaishou forming a task force and Weibo promising self-reflection.

The move follows similar disciplinary action against lifestyle app RedNote and is part of a broader two-month campaign targeting content that ‘viciously stimulates negative emotions.’

Separately, Kuaishou is under investigation by the State Administration for Market Regulation for alleged malpractice in live-streaming e-commerce.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta and Google to block political ads in EU under new regulations

Broadcasters and advertisers seek clarity before the EU’s political advertising rules become fully applicable on 10 October. The European Commission has promised further guidance, but details on what qualifies as political advertising remain vague.

Meta and Google will block political, election, and social-issue ads in the EU when the rules take effect, citing operational challenges and legal uncertainty. The regulation, aimed at curbing disinformation and foreign interference, requires ads to display labels identifying sponsors, payments, and targeting.

Publishers fear they lack the technical means to comply or block non-compliant programmatic ads, risking legal exposure. They call for clear sponsor identification procedures, standardised declaration formats, and robust verification processes to ensure authenticity.

Advertisers warn that the rules’ broad definition of political actors may be hard to implement. At the same time, broadcasters fear issue-based campaigns – such as environmental awareness drives – could unintentionally fall under the scope of political advertising.

The Dutch parliamentary election on 29 October will be the first to take place under the fully applicable rules, making clarity from Brussels urgent for media and advertisers across the bloc.

Anthropic introduces memory feature to Claude AI for workplace productivity

The AI startup Anthropic has added a memory feature to its Claude AI, designed to automatically recall details from earlier conversations, such as project information and team preferences.

Initially, the upgrade is only available to Team and Enterprise subscribers, who can manage, edit, or delete the content that the system retains.

Anthropic presents the tool as a way to improve workplace efficiency by sparing users from repeating instructions. Enterprise administrators have additional controls, including turning memory off entirely.

Privacy safeguards are included, such as an ‘incognito mode’ for conversations that are not stored.

Analysts view the step as an effort to catch up with competitors like ChatGPT and Gemini, which already offer similar functions. Memory also links with Claude’s newer tools for creating spreadsheets, presentations, and PDFs, allowing past information to be reused in future documents.

Anthropic plans a wider release after testing the feature with businesses. Experts suggest the approach could strengthen the company’s position in the AI market by offering both continuity and security, which appeal to enterprises handling sensitive data.

YouTube expands AI dubbing to millions of creators

Real-time translation is becoming a standard feature across consumer tech, with Samsung, Google, and Apple all introducing new tools. Apple’s recently announced Live Translation on AirPods demonstrates the utility of such features, particularly for travellers.

YouTube has joined the trend, expanding its multi-language audio feature to millions of creators worldwide. The tool enables creators to add dubbed audio tracks in multiple languages, powered by Google’s Gemini AI, replicating tone and emotion.

The feature was first tested with creators like MrBeast, Mark Rober, and Jamie Oliver. YouTube reports that Jamie Oliver’s channel saw its views triple, while over 25% of the watch time came from non-primary languages.

Mark Rober’s channel now supports more than 30 languages per video, helping creators reach audiences far beyond their native markets. YouTube states that this expansion should make content more accessible to global viewers and increase overall engagement.

Subtitles will still be vital for people with hearing difficulties, but AI-powered dubbing could reduce reliance on them for language translation. For creators, it marks a significant step towards making content truly global.

FTC opens inquiry into AI chatbots and child safety

The US Federal Trade Commission has launched an inquiry into AI chatbots that act as digital companions, raising concerns about their impact on children and teenagers.

Seven firms, including Alphabet, Meta, OpenAI and Snap, have been asked to provide information about how they address risks linked to AI chatbots designed to mimic human relationships.

Chairman Andrew Ferguson said protecting children online was a top priority, stressing the need to balance safety with maintaining US leadership in AI. Regulators fear minors may be particularly vulnerable to forming emotional bonds with AI chatbots that simulate friendship and empathy.

The inquiry will investigate how companies develop AI chatbot personalities, monetise user interactions and enforce age restrictions. It will also assess how personal information from conversations is handled and whether privacy laws are being respected.

Other companies receiving orders include Character.AI and Elon Musk’s xAI.

The probe follows growing public concern over the psychological effects of generative AI on young people.

Last month, the parents of a 16-year-old who died by suicide sued OpenAI, alleging ChatGPT provided harmful instructions. The company later pledged corrective measures, admitting its chatbot does not always recommend mental health support during prolonged conversations.

AI-generated film sparks copyright battle as it heads to Cannes

OpenAI has taken a significant step into entertainment by backing Critterz, the first animated feature film generated with GPT models.

Human artists sketch characters and scenes, while AI transforms them into moving images. The $30 million project, expected to finish in nine months, is far cheaper and faster than traditional animation and could debut at the Cannes Film Festival in 2026.

Yet the film has triggered a fierce copyright debate in India and beyond. Under India’s Copyright Act of 1957, only human works are protected.

Legal experts argue that while AI can be used as a tool when human skill and judgement are clearly applied, autonomously generated outputs may not qualify for copyright at all.

The uncertainty carries significant risks. Producers may struggle to combat piracy or unauthorised remakes, while streaming platforms and investors could hesitate to support projects without clear ownership rights.

A recent case in which an AI tool was credited as co-author of a painting, a credit later revoked, shows how untested the law remains.

Global approaches vary. The US and the EU require human creativity for copyright, while the UK recognises computer-generated works under certain conditions.

In India, lawyers suggest contracts provide the safest path until the law evolves, with detailed agreements on ownership, revenue sharing and disclosure of AI input.

The government has already set up an expert panel to review the Copyright Act, even as AI-driven projects and trailers rapidly gain popularity.

AI chatbot mixes up Dutch political party policies

Dutch voters have been warned not to rely on AI chatbots for political advice after Google’s NotebookLM mixed up VVD and PVV policies.

When asked about Ukrainian refugees, the tool attributed a PVV proposal to send men back to Ukraine to the VVD programme. Similar confusions reportedly occurred when others used the system.

Google acknowledged the mistake and said it would investigate whether the error was a hallucination, the term for incorrect AI-generated output.

Experts caution that language models predict patterns rather than facts, making errors unavoidable. Voting guide StemWijzer stressed that reliable political advice requires up-to-date and verified information.

Professor Claes de Vreese said chatbots might be helpful as supplementary tools but should never replace reading actual party programmes. He also urged stricter regulation to avoid undue influence on election choices.

Canadian news publishers clash with OpenAI in landmark copyright case

OpenAI is set to argue in an Ontario court that a copyright lawsuit by Canadian news publishers should be heard in the United States. The case, the first of its kind in Canada, alleges that OpenAI scraped Canadian news content to train ChatGPT without permission or payment.

The coalition of publishers, including CBC/Radio-Canada, The Globe and Mail, and Postmedia, says the material was created and hosted in Ontario, making the province the proper venue. They warn that accepting OpenAI’s stance would undermine Canadian sovereignty in the digital economy.

OpenAI, however, says the training of its models and web crawling occurred outside Canada and that the Copyright Act cannot apply extraterritorially. It argues the publishers are politicising the case by framing it as a matter of sovereignty rather than jurisdiction.

The dispute reflects a broader global clash over how generative AI systems use copyrighted works. US courts are already handling several similar cases, though no clear precedent has been established on whether such use qualifies as fair use.

Publishers argue Canadian courts must decide the matter domestically, while OpenAI insists it belongs in US courts. The outcome could shape how copyright laws apply to AI training and digital content across borders.

Nepal lifts social media ban after protests

The Nepali government has lifted its ban on major social media platforms following days of nationwide protests led largely by youth demanding action against corruption.

The ban, which blocked access to 26 social media sites including WhatsApp, Facebook, Instagram, LinkedIn, and YouTube, was introduced in an effort to curb misinformation, online fraud, and hate speech, according to government officials.

However, critics accused the administration of using the restrictions to stifle dissent and silence public outrage.

Thousands of demonstrators took to the streets in Kathmandu and other major cities in Nepal, voicing frustration over rising unemployment, inflation, and what they described as a lack of accountability among political leaders.

The protests quickly gained momentum, with digital freedom becoming a central theme alongside anti-corruption demands.

The Office of the UN High Commissioner for Human Rights addressed the situation, stating: “We have received several deeply worrying allegations of unnecessary or disproportionate use of force by security forces during protests organized by youth groups demonstrating against corruption and the recent Government ban on social media platforms.”

New project expands AI access for African languages

Africa is working to close the AI language gap, as most global tools remain trained on English, Chinese, and European languages.

The African Next Voices project has created the continent’s largest dataset of spoken African languages, covering 18 languages across Kenya, Nigeria, and South Africa. Supported by a $2.2m Gates Foundation grant, the dataset includes 9,000 hours of speech recorded in farming, health, and education settings.

Languages such as Hausa, Yoruba, isiZulu, and Kikuyu are now available for developers to build translation, transcription, and conversational AI tools. Farmers like South Africa’s Kelebogile Mosime already use local-language apps to solve agricultural challenges.

Start-ups, including Lelapa AI, are building products in African languages for banks and telecoms. Researchers warn that without such initiatives, millions risk exclusion from vital services and cultural knowledge could be lost.
