Evaluations covered three DeepSeek and four leading US models, including OpenAI’s GPT-5 series and Anthropic’s Opus 4, across 19 benchmarks.
US AI models outperformed DeepSeek across nearly all benchmarks, with the most significant gaps in software engineering and cybersecurity tasks. The US Center for AI Standards and Innovation (CAISI) found DeepSeek models costlier and far more vulnerable to hijacking and jailbreaking, posing risks to developers, consumers, and national security.
DeepSeek models were observed to echo inaccurate Chinese Communist Party narratives four times more often than US reference models. Despite weaknesses, DeepSeek model adoption has surged, with downloads rising nearly 1,000% since January 2025.
CAISI is a key contact for industry collaboration on AI standards and security. The evaluation aligns with the US government’s AI Action Plan, which aims to assess the capabilities and risks of foreign AI while securing American leadership in the field.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Bombay High Court has granted ad-interim relief to Asha Bhosle, barring AI platforms and sellers from cloning her voice or likeness without consent. The 90-year-old playback singer, whose career spans eight decades, approached the court to protect her identity from unauthorised commercial use.
Bhosle filed the suit after discovering platforms offering AI-generated voice clones mimicking her singing. Her plea argued that such misuse damages her reputation and goodwill. Justice Arif S. Doctor found a strong prima facie case and stated that such actions would cause irreparable harm.
The order restrains defendants, including US-based Mayk Inc, from using machine learning, face-morphing, or generative AI to imitate her voice or likeness. Google, also named in the case, has agreed to take down specific URLs identified by Bhosle’s team.
Defendants are required to share subscriber information, IP logs, and payment details to assist in identifying infringers. The court emphasised that cloning the voices of cultural icons risks misleading the public and infringing on individuals’ rights to their identity.
The ruling builds on recent cases in India affirming personality rights and sets an important precedent in the age of generative AI. The matter is scheduled to return to court on 13 October 2025.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Alibaba’s $250 billion rebound has turned it into China’s hottest AI stock, with analysts saying the rally may still have room to run.
The group’s US-listed shares have more than doubled this year as Beijing pushes for greater technological self-reliance. Despite the surge, the stock remains 65% below its 2020 peak, keeping valuations attractive compared with US giants like Microsoft and Amazon.
Fund managers say global investors still hold relatively minor positions in Alibaba, creating scope for further gains. Some caution remains, however, with Chinese short bets rising last month and price wars in food delivery threatening to dent margins.
Alibaba trades at roughly 22 times estimated forward earnings in Hong Kong, in line with the Hang Seng Tech Index but below its historic peak and US peers. Investors say its valuation looks reasonable given its AI push and improving sentiment.
Shares touched their highest level since August 2021 on Friday, standing out against declines in the broader Hong Kong market. The key test will be whether Alibaba can convert its AI ambitions into mainstream revenues.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The EU Innovation Hub for Internal Security’s AI Cluster gathered in Tallinn on 25–26 September for a workshop focused on AI and its implications for security and rights.
The European Union Agency for Fundamental Rights (FRA) played a central role, presenting its Fundamental Rights Impact Assessment framework under the AI Act and highlighting its ongoing project on assessing high-risk AI.
The workshop also provided an opportunity for FRA to give an update on its internal and external work in the AI field, reflecting the growing need to balance technological innovation with rights-based safeguards.
AI-driven systems in security and policing are increasingly under scrutiny, with regulators and agencies seeking to ensure compliance with EU rules on privacy, transparency and accountability.
In collaboration with Europol, FRA also introduced plans for a panel discussion on ‘The right to explanation of AI-driven individual decision-making’. Scheduled for 19 November in Brussels, the session will form part of the Annual Event of the EU Innovation Hub for Internal Security.
It is expected to draw policymakers, law enforcement representatives and rights advocates into dialogue about transparency obligations in AI use for security contexts.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Mexican government is preparing a law to regulate the use of AI in dubbing, animation, and voiceovers to prevent unauthorised voice cloning and safeguard creative rights.
Working with the National Copyright Institute and more than 128 associations, it aims to reform copyright legislation before the end of the year.
The plan would strengthen protections for actors, voiceover artists, and creative workers, while addressing contract conditions and establishing a ‘Made in Mexico’ seal for cultural industries.
The bill is expected to prohibit synthetic dubbing without consent, impose penalties for misuse, and recognise voice and image as biometric data.
Industry voices warn that AI has already disrupted work opportunities. Several dubbing firms in Los Angeles have closed, with their projects taken over by companies specialising in AI-driven dubbing.
Startups such as Deepdub and TrueSync have advanced the technology, dubbing films and television content across languages at scale.
Unions and creative groups argue that regulation is vital to protect both jobs and culture. While AI offers efficiency in translation and production, it cannot yet replicate the emotional depth of human performance.
The law is seen as Mexico's first attempt to balance technological innovation with the rights of workers and creators.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A Dutch court has ordered Meta to give Facebook and Instagram users in the Netherlands the right to set a chronological feed as their default.
The ruling follows a case brought by digital rights group Bits of Freedom, which argued that Meta’s design undermines user autonomy under the European Digital Services Act.
Although a chronological feed is already available, it is hidden and cannot be set permanently. The court said Meta must make the setting accessible on the homepage and Reels section and ensure it stays in place when the apps are restarted.
If Meta does not comply within two weeks, it faces a fine of €100,000 per day, capped at €5 million.
Bits of Freedom argued that algorithmic feeds threaten democracy, particularly before elections. The court agreed the change must apply permanently rather than temporarily during campaigns.
The group welcomed the ruling but stressed it was only a small step in tackling the influence of tech giants on public debate.
Meta has not yet responded to the decision, which applies only in the Netherlands despite being based on EU law. Campaigners say the case highlights the need for more vigorous enforcement to ensure digital platforms respect user choice and democratic values.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The update means that if you chat with Meta’s AI about a topic, such as hiking, the system may infer your interests and show related content, including posts from hiking groups or ads for boots. Meta emphasises that content and ad recommendations already use signals like likes, shares and follows, but the new change adds AI interactions as another signal.
Meta will begin notifying users on 7 October via in-app messages and emails. Users will retain access to settings such as Ads Preferences and feed controls to adjust what they see. Meta says it will not use sensitive AI chat content (religion, health, political beliefs, etc.) to personalise ads.
AI interactions will be used for cross-account personalisation only where users have linked those accounts in Meta's Accounts Centre. Likewise, unless a WhatsApp account is added to the same Accounts Centre, WhatsApp AI interactions won't influence the experience in other apps.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
China’s new K visa, aimed at foreign professionals in science and technology, has sparked heated debate and online backlash. The scheme, announced in August and launched this week, has been compared by Indian media to the US H-1B visa.
Tens of thousands of social media users in China have voiced fears that the programme will worsen job competition in an already difficult market. Comments also included xenophobic remarks, particularly directed at Indian nationals.
State media outlets have stepped in, defending the policy as a sign of China’s openness while stressing that it is not a simple work permit or immigration pathway. Officials say the visa is designed to attract graduates and researchers from top institutions in STEM fields.
The government has yet to clarify whether the visa allows foreign professionals to work, adding to uncertainty. Analysts note that language barriers, cultural differences, and China’s political environment may pose challenges for newcomers despite Beijing’s drive to attract global talent.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The US National Institute of Standards and Technology (NIST) has updated its password guidelines, urging organisations to drop strict complexity rules. NIST states that requirements such as mandatory symbols and frequent resets often harm usability without significantly improving security.
Instead, the agency recommends using blocklists for breached or commonly used passwords, implementing hashed storage, and rate limiting to resist brute-force attacks. Multi-factor authentication and password managers are encouraged as additional safeguards.
Password length remains essential. Short strings are easily cracked, and users should be allowed to create longer passphrases. NIST recommends capping length only where extremely long passwords would slow down hashing.
The new approach replaces mandatory resets with changes triggered only after suspected compromise, such as a data breach. NIST argues this method reduces fatigue while improving overall account protection.
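As an illustrative sketch, the checks above reduce to length limits plus a blocklist, with no composition rules at all. The thresholds and the tiny blocklist here are assumptions standing in for a real breached-password corpus, not NIST's own reference code:

```python
# Hypothetical password check in the spirit of NIST's guidance:
# enforce a minimum length, allow long passphrases, screen against
# a blocklist of common/breached passwords, and deliberately skip
# symbol/digit composition rules and scheduled resets.

COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

MIN_LENGTH = 8      # floor for user-chosen secrets
MAX_LENGTH = 256    # generous cap; only guards hashing cost

def check_password(candidate: str) -> tuple[bool, str]:
    """Return (accepted, reason). No complexity requirements on purpose."""
    if len(candidate) < MIN_LENGTH:
        return False, "too short: use a longer passphrase"
    if len(candidate) > MAX_LENGTH:
        return False, "too long: exceeds hashing limit"
    if candidate.lower() in COMMON_PASSWORDS:
        return False, "appears on the blocklist of common passwords"
    return True, "ok"
```

In a real deployment the blocklist would be loaded from a breached-password dataset, and the accepted secret would be salted and hashed rather than stored directly.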
Businesses adopting these guidelines must audit their existing policies, reconfigure authentication systems, deploy blocklists, and train employees to adapt accordingly. Clear communication of the changes will be key to ensuring compliance.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Researchers have uncovered a phishing toolkit disguised as a PDF attachment to bypass Gmail’s defences. Known as MatrixPDF, the technique blurs document text, embeds prompts, and uses hidden JavaScript to redirect victims to malicious sites.
The method exploits Gmail’s preview function, slipping past filters because the PDF contains no visible links. Users are lured into clicking a fake button to ‘open secure document,’ triggering the attack and fetching malware outside Gmail’s sandbox.
A second variation embeds scripts that connect directly to payload URLs when PDFs are opened in desktop or browser readers. Victims see permission prompts that appear legitimate, but allowing access launches downloads that compromise devices.
Experts warn that PDFs are trusted more than other file types, making this a dangerous evolution of social engineering. Once inside a network, attackers can move laterally, escalate privileges, and plant further malware.
Security leaders recommend restricting personal email access on corporate devices, increasing sandboxing capabilities, and expanding employee training initiatives. Analysts emphasise that awareness and recognition of suspicious files remain crucial in countering this new phishing threat.
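As a rough illustration of the defensive side, a triage check can flag PDFs that embed scripting or auto-open actions, the features this class of attack relies on. The sketch below simply greps raw bytes for well-known PDF markers; real scanners parse the PDF object tree, so treat this as a hypothetical first-pass filter, not a detector:

```python
# Simplistic heuristic: flag PDF byte streams containing markers
# associated with embedded JavaScript or automatic actions.
# False positives and negatives are expected; this only illustrates
# what a mail-gateway triage rule might look for.

SUSPICIOUS_MARKERS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/AA", b"/Launch"]

def flag_pdf_bytes(data: bytes) -> list[str]:
    """Return the suspicious PDF markers found in the raw bytes."""
    return [m.decode() for m in SUSPICIOUS_MARKERS if m in data]
```

A file that triggers any marker would then be routed to sandboxed detonation or blocked outright, rather than previewed inline.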
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!