Mexico drafts law to regulate AI in dubbing and animation

The Mexican government is preparing a law to regulate the use of AI in dubbing, animation, and voiceovers to prevent unauthorised voice cloning and safeguard creative rights.

Working with the National Copyright Institute and more than 128 associations, it aims to reform copyright legislation before the end of the year.

The plan would strengthen protections for actors, voiceover artists, and creative workers, while addressing contract conditions and establishing a ‘Made in Mexico’ seal for cultural industries.

The bill is expected to prohibit synthetic dubbing without consent, impose penalties for misuse, and recognise voice and image as biometric data.

Industry voices warn that AI has already disrupted work opportunities. Several dubbing firms in Los Angeles have closed, with their projects taken over by companies specialising in AI-driven dubbing.

Startups such as Deepdub and TrueSync have advanced the technology, dubbing films and television content across languages at scale.

Unions and creative groups argue that regulation is vital to protect both jobs and culture. While AI offers efficiency in translation and production, it cannot yet replicate the emotional depth of human performance.

The law is seen as Mexico’s first attempt to balance technological innovation with the rights of workers and creators.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta faces fines in Netherlands over algorithm-first timelines

A Dutch court has ordered Meta to give Facebook and Instagram users in the Netherlands the right to set a chronological feed as their default.

The ruling follows a case brought by digital rights group Bits of Freedom, which argued that Meta’s design undermines user autonomy under the European Digital Services Act.

Although a chronological feed is already available, it is buried in the settings and does not persist between sessions. The court said Meta must make the setting accessible on the homepage and in the Reels section, and ensure it stays in place when the apps are restarted.

If Meta does not comply within two weeks, it faces a fine of €100,000 per day, capped at €5 million.

Bits of Freedom argued that algorithmic feeds threaten democracy, particularly before elections. The court agreed the change must apply permanently rather than temporarily during campaigns.

The group welcomed the ruling but stressed it was only a small step in tackling the influence of tech giants on public debate.

Meta has not yet responded to the decision, which applies only in the Netherlands despite being based on EU law. Campaigners say the case highlights the need for more vigorous enforcement to ensure digital platforms respect user choice and democratic values.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta to use AI interactions for content and ad recommendations

Meta has announced that from 16 December 2025 it will begin personalising content and ad recommendations on Facebook, Instagram and other apps using users’ interactions with its generative AI features.

The update means that if you chat with Meta’s AI about a topic, such as hiking, the system may infer your interests and show related content, including posts from hiking groups or ads for boots. Meta emphasises that content and ad recommendations already use signals like likes, shares and follows, but the new change adds AI interactions as another signal.

Meta will notify users from 7 October via in-app messages and emails. Users will retain access to settings such as Ads Preferences and feed controls to adjust what they see. Meta says it will not use sensitive AI chat content (religion, health, political beliefs, etc.) to personalise ads.

AI interactions on a given account will be used for cross-account personalisation only if users have linked those accounts in Meta’s Accounts Centre. Likewise, unless a WhatsApp account is added to the same Accounts Centre, AI interactions there won’t influence the experience in other apps.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China’s new K visa sparks public backlash

China’s new K visa, aimed at foreign professionals in science and technology, has sparked heated debate and online backlash. The scheme, announced in August and launched this week, has been compared by Indian media to the US H-1B visa.

Tens of thousands of social media users in China have voiced fears that the programme will worsen job competition in an already difficult market. Comments also included xenophobic remarks, particularly directed at Indian nationals.

State media outlets have stepped in, defending the policy as a sign of China’s openness while stressing that it is not a simple work permit or immigration pathway. Officials say the visa is designed to attract graduates and researchers from top institutions in STEM fields.

The government has yet to clarify whether the visa allows foreign professionals to work, adding to uncertainty. Analysts note that language barriers, cultural differences, and China’s political environment may pose challenges for newcomers despite Beijing’s drive to attract global talent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NIST pushes longer passphrases and MFA over strict rules

The US National Institute of Standards and Technology (NIST) has updated its password guidelines, urging organisations to drop strict complexity rules. NIST states that requirements such as mandatory symbols and frequent resets often harm usability without significantly improving security.

Instead, the agency recommends using blocklists for breached or commonly used passwords, implementing hashed storage, and rate limiting to resist brute-force attacks. Multi-factor authentication and password managers are encouraged as additional safeguards.

Password length remains essential. Short strings are easily cracked, but users should be allowed to create longer passphrases. NIST recommends limiting only extremely long passwords that slow down hashing.

The new approach replaces mandatory resets with changes triggered only after suspected compromise, such as a data breach. NIST argues this method reduces fatigue while improving overall account protection.

Businesses adopting these guidelines will need to audit their existing policies, reconfigure authentication systems, deploy blocklists, and retrain employees on the new approach. Clear communication of the changes will be key to ensuring compliance.
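The shift away from composition rules can be sketched in code. Below is a minimal, illustrative password check in the spirit of the NIST guidance: it enforces length and a breached-password blocklist, but imposes no symbol or digit requirements. The length limits and blocklist entries are assumptions for the example, not values taken from the guidelines.

```python
# Illustrative NIST-style password check: length plus a blocklist,
# with no composition rules (symbols, digits, mandatory resets).

MIN_LENGTH = 8    # assumed minimum for the example
MAX_LENGTH = 64   # generous cap; only extremely long inputs are rejected

# Hypothetical blocklist; in practice this would come from a breach corpus.
BLOCKLIST = {"password", "123456", "qwerty", "letmein"}

def is_acceptable(password: str) -> tuple[bool, str]:
    """Return (accepted, reason). Long passphrases are explicitly allowed."""
    if len(password) < MIN_LENGTH:
        return False, "too short"
    if len(password) > MAX_LENGTH:
        return False, "too long"
    if password.lower() in BLOCKLIST:
        return False, "commonly used or breached"
    return True, "ok"
```

Under this approach a long, memorable passphrase passes, while a short or breached string is rejected regardless of how many symbols it contains.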

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New Gmail phishing attack hides malware inside fake PDFs

Researchers have uncovered a phishing toolkit disguised as a PDF attachment to bypass Gmail’s defences. Known as MatrixPDF, the technique blurs document text, embeds prompts, and uses hidden JavaScript to redirect victims to malicious sites.

The method exploits Gmail’s preview function, slipping past filters because the PDF contains no visible links. Users are lured into clicking a fake button to ‘open secure document,’ triggering the attack and fetching malware outside Gmail’s sandbox.

A second variation embeds scripts that connect directly to payload URLs when PDFs are opened in desktop or browser readers. Victims see permission prompts that appear legitimate, but allowing access launches downloads that compromise devices.

Experts warn that PDFs are trusted more than other file types, making this a dangerous evolution of social engineering. Once inside a network, attackers can move laterally, escalate privileges, and plant further malware.

Security leaders recommend restricting personal email access on corporate devices, increasing sandboxing capabilities, and expanding employee training initiatives. Analysts emphasise that awareness and recognition of suspicious files remain crucial in countering this new phishing threat.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Grok controversies shadow Musk’s new Grokipedia project

Elon Musk has announced that his company xAI is developing Grokipedia, a planned Wikipedia rival powered by its Grok AI chatbot. He described the project as a step towards achieving xAI’s mission of understanding the universe.

In a post on X, Musk called Grokipedia a ‘necessary improvement over Wikipedia,’ renewing his criticism of the platform’s funding model and what he views as ideological bias. He has long accused Wikimedia of leaning left and reflecting ‘woke’ influence.

Despite Musk’s efforts to position Grok as a solution to bias, the chatbot has occasionally turned on its creator. Earlier this year, it named Musk among the people doing the most harm to the US, alongside Donald Trump and Vice President JD Vance.

The Grok 4 update also drew controversy when users reported that the chatbot praised and adopted the surname of a controversial historical figure in its responses, sparking criticism of its safety. Such incidents raised questions about the limits of Musk’s oversight.

Grok is already integrated into X as a conversational assistant, providing context and explanations in real time. Musk has said it will power the platform’s recommendation algorithm by late 2025, allowing users to customise their feeds dynamically through direct requests.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Europe urged to seize AI opportunity through action

Europe faces a pivotal moment to lead in AI, potentially boosting GDP by over €1.2 trillion, according to Google’s Kent Walker. Urgent action is needed to close the gap between ambition and implementation.

Complex EU regulations, with over 100 new digital rules since 2019, hinder businesses, costing an estimated €124 billion annually. Simplifying these, as suggested by Mario Draghi’s report, could unlock €450 billion in AI-driven growth.

Focused, balanced policies must prioritise real-world AI impacts without stifling progress.

Skilling Europe’s workforce is crucial for AI adoption, with only 14% of EU firms using generative AI compared to 83% in China. Google’s initiatives, like its €15 million AI Opportunity Fund, support digital training. Public-private partnerships can scale these efforts, creating new job categories.

Scaling AI demands secure, dependable tools and ongoing momentum. Google’s AlphaFold and GNoME fuel advances in biology and materials science, while partnerships with European companies safeguard data sovereignty. Joint efforts will help Europe lead globally in AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok 4 launches on Azure with advanced reasoning features

Microsoft has announced that Grok 4, the latest large language model from Elon Musk’s xAI, is now available in Azure AI Foundry. The collaboration aims to deliver frontier-level reasoning capabilities with enterprise-grade safety and control.

Grok 4 features a 128,000-token context window, integrated web search, and native tool use. According to Microsoft, it excels at first-principles reasoning, handling complex tasks in science, maths, and logic. The model was trained on xAI’s Colossus supercomputer.

Azure says the model can analyse long documents, code repositories, and academic texts simultaneously, reducing the need to split inputs. It also incorporates external data for real-time responses, though Microsoft cautions that outputs should be verified against reliable sources.

The platform includes Azure AI Content Safety by default, and Microsoft stresses responsible use with ongoing monitoring. Pricing starts at $5.5 per million input tokens and $27.5 per million output tokens.
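To make the listed rates concrete, the sketch below estimates the cost of a single request from the per-million-token prices quoted above ($5.5 input, $27.5 output). The token counts in the example are assumptions chosen to match the model’s 128,000-token context window, not figures from Microsoft.

```python
# Cost estimate from the quoted Grok 4 rates on Azure AI Foundry:
# $5.5 per million input tokens, $27.5 per million output tokens.

INPUT_RATE = 5.5 / 1_000_000    # USD per input token
OUTPUT_RATE = 27.5 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Filling the full 128,000-token context and receiving a 2,000-token reply
# costs roughly $0.70 for input plus $0.06 for output.
print(round(estimate_cost(128_000, 2_000), 3))
```

The tenfold gap between output and input pricing is worth noting: for long-document analysis, the reply, not the document, can dominate the bill.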

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Spotify removes 75 million tracks in AI crackdown

Spotify has confirmed that it removed 75 million tracks in the past year as part of a crackdown on AI-generated spam, deepfakes, and fake artist uploads. The purge, almost half of its total archive, highlights the scale of the problem facing music streaming.

Executives say they are not banning AI outright. Instead, the company is targeting misuse, such as cloned voices of real artists without permission, fake profiles, and mass-uploaded spam designed to siphon royalties.

New measures include a music spam filter, stricter rules on vocal deepfakes, and tools allowing artists to flag impersonation before publication. Spotify is also testing the DDEX disclosure system so creators can indicate whether and how AI was used in their work.

Despite the scale of removals, Spotify insists AI music engagement remains minimal and has not significantly impacted human artists’ revenue. The platform now faces the challenge of balancing innovation with transparency, while protecting both listeners and musicians.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!