Labour market remains stable despite rapid AI adoption

Surveys show persistent anxiety about AI-driven job losses. Nearly three years after ChatGPT’s launch, labour data indicate that these fears have not materialised. Researchers examined shifts in the US occupational mix since late 2022, comparing them to earlier technological transitions.

Their analysis found that shifts in job composition have been modest, resembling the gradual changes seen during the rise of computers and the internet. The overall pace of occupational change has not accelerated substantially, suggesting that widespread job losses due to AI have not yet occurred.
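
To make the comparison concrete, the snippet below computes a standard dissimilarity index, half the sum of absolute changes in occupational employment shares, which is the kind of summary measure such studies use to gauge the pace of occupational change. The occupation categories and shares here are invented for illustration, not taken from the study.

```python
# Minimal sketch of an occupational dissimilarity index.
# Shares are hypothetical, not real labour-market data.

shares_2022 = {"software": 0.04, "admin": 0.12, "sales": 0.10, "other": 0.74}
shares_2025 = {"software": 0.05, "admin": 0.10, "sales": 0.10, "other": 0.75}

def dissimilarity(a: dict[str, float], b: dict[str, float]) -> float:
    """Half the sum of absolute share changes: the fraction of workers
    who would have to switch occupations for distribution a to match b."""
    occupations = set(a) | set(b)
    return 0.5 * sum(abs(a.get(o, 0.0) - b.get(o, 0.0)) for o in occupations)

print(f"Dissimilarity index: {dissimilarity(shares_2022, shares_2025):.3f}")
# Prints 0.020 for these toy numbers: only 2% of workers would need to
# change occupation, i.e. a modest shift in the job mix.
```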

Industry-level data shows limited impact. High-exposure sectors, such as Information and Professional Services, have seen shifts, but many predate the introduction of ChatGPT. Overall, labour market volatility remains below the levels of historical periods of major change.

To better gauge AI’s impact, the study compared OpenAI’s occupational exposure data with Anthropic’s usage data from Claude. The two datasets show only limited correlation, indicating that high exposure does not always translate into widespread real-world use, especially outside software and quantitative roles.
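
As a rough sketch of what such a comparison involves, the snippet below correlates per-occupation exposure scores with usage shares; all figures are invented for illustration and do not come from the OpenAI or Anthropic datasets.

```python
# Sketch: correlating model-based "exposure" scores with observed usage.
# Every number below is hypothetical.
import statistics

exposure = {"admin": 0.8, "construction": 0.1, "legal": 0.9,
            "nursing": 0.2, "software": 0.7}
usage = {"admin": 0.05, "construction": 0.01, "legal": 0.03,
         "nursing": 0.02, "software": 0.55}

occs = sorted(exposure)
x = [exposure[o] for o in occs]
y = [usage[o] for o in occs]

# Pearson's r (statistics.correlation requires Python 3.10+).
r = statistics.correlation(x, y)
print(f"Exposure vs usage: r = {r:.2f}")
# With these toy numbers r is about 0.3: occupations rated highly
# "exposed" (e.g. legal) are not the ones where usage concentrates
# (software), which is the gap the comparison is meant to surface.
```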

Researchers caution that significant labour effects may take longer to emerge, as seen with past technologies. They argue that transparent, comprehensive usage data from major AI providers will be essential to monitor real impacts over time.

Thousands affected by AI-linked data breach in New South Wales

A major data breach has affected the Northern Rivers Resilient Homes Program in New South Wales.

Authorities confirmed that personal information was exposed after a former contractor uploaded data to the AI platform ChatGPT between 12 and 15 March 2025.

The leaked file contained over 12,000 records, with details including names, addresses, contact information and health data. Up to 3,000 individuals may be impacted.

While there is no evidence yet that the information has been accessed by third parties, the NSW Reconstruction Authority (RA) and Cyber Security NSW have launched a forensic investigation.

Officials apologised for the breach and pledged to notify all affected individuals in the coming week. ID Support NSW is offering free advice and resources, while compensation will be provided for any costs linked to replacing compromised identity documents.

The RA has also strengthened its internal policies to prevent unauthorised use of AI platforms. An independent review of the incident is underway to determine how the breach occurred and why notification took several months.

Nintendo denies lobbying the Japanese government over generative AI

Video game company Nintendo has denied reports that it lobbied the Japanese government over the use of generative AI. The company issued an official statement on its Japanese X account, clarifying that it has had no contact with authorities.

The rumour originated from a post by Satoshi Asano, a member of Japan’s House of Representatives, who suggested that private companies had pressed the government on intellectual property protection concerning AI.

After Nintendo’s statement, Asano retracted his remarks and apologised for spreading misinformation.

Nintendo stressed that it would continue to protect its intellectual property against infringement, whether AI was involved or not. The company reaffirmed its cautious approach toward generative AI in game development, focusing on safeguarding creative rights rather than political lobbying.

The episode underscores the sensitivity around AI in Japan’s creative industries, where concerns about copyright and technological disruption are fuelling debate. Nintendo’s swift clarification signals how seriously it takes both misinformation and the protection of its brand.

EU kicks off cybersecurity awareness campaign against phishing threats

European Cybersecurity Month (ECSM) 2025 has kicked off, with this year’s campaign centring on the growing threat of phishing attacks.

The initiative, driven by the EU Agency for Cybersecurity (ENISA) and the European Commission, seeks to raise awareness and provide practical guidance to European citizens and organisations.

Phishing remains the primary vector through which threat actors launch social engineering attacks, and this year’s ECSM materials expand the scope to include variants such as SMS phishing (smishing), QR code phishing (quishing), voice phishing (vishing) and business email compromise (BEC).

ENISA warns that, as of early 2025, over 80 percent of observed social engineering activity involves AI, with language models enabling more convincing and scalable scams.

To support the campaign, actors at every level, from individual citizens to large organisations, are encouraged to engage in training, simulations, awareness sessions and public outreach under the banner #ThinkB4UClick.

A cross-institutional kick-off event is also scheduled, bringing together the EU institutions, member states and civil society to align messaging and launch coordinated activities.

US AI models outperform Chinese rival DeepSeek

The National Institute of Standards and Technology’s Center for AI Standards and Innovation (CAISI) found that AI models from Chinese developer DeepSeek trail US models in performance, cost, security and adoption.

Evaluations covered three DeepSeek and four leading US models, including OpenAI’s GPT-5 series and Anthropic’s Opus 4, across 19 benchmarks.

US AI models outperformed DeepSeek across nearly all benchmarks, with the most significant gaps in software engineering and cybersecurity tasks. CAISI found DeepSeek models costlier and far more vulnerable to hijacking and jailbreaking, posing risks to developers, consumers, and national security.

DeepSeek models were observed to echo inaccurate Chinese Communist Party narratives four times more often than US reference models. Despite these weaknesses, adoption of DeepSeek models has surged, with downloads rising nearly 1,000% since January 2025.

CAISI serves as a key point of contact for industry collaboration on AI standards and security. The evaluation aligns with the US government’s AI Action Plan, which aims to assess the capabilities and risks of foreign AI while securing American leadership in the field.

AI platforms barred from cloning Asha Bhosle’s voice without consent

The Bombay High Court has granted ad-interim relief to Asha Bhosle, barring AI platforms and sellers from cloning her voice or likeness without consent. The 90-year-old playback singer, whose career spans eight decades, approached the court to protect her identity from unauthorised commercial use.

Bhosle filed the suit after discovering platforms offering AI-generated voice clones mimicking her singing. Her plea argued that such misuse damages her reputation and goodwill. Justice Arif S. Doctor found a strong prima facie case and stated that such actions would cause irreparable harm.

The order restrains defendants, including US-based Mayk Inc, from using machine learning, face-morphing, or generative AI to imitate her voice or likeness. Google, also named in the case, has agreed to take down specific URLs identified by Bhosle’s team.

Defendants are required to share subscriber information, IP logs, and payment details to assist in identifying infringers. The court emphasised that cloning the voices of cultural icons risks misleading the public and infringing on individuals’ rights to their identity.

The ruling builds on recent cases in India affirming personality rights and sets an important precedent in the age of generative AI. The matter is scheduled to return to court on 13 October 2025.

FRA presents rights framework at EU Innovation Hub AI Cluster workshop in Tallinn

The EU Innovation Hub for Internal Security’s AI Cluster gathered in Tallinn on 25–26 September for a workshop focused on AI and its implications for security and rights.

The European Union Agency for Fundamental Rights (FRA) played a central role, presenting its Fundamental Rights Impact Assessment framework under the AI Act and highlighting its ongoing project on assessing high-risk AI.

The workshop also provided an opportunity for FRA to give an update on its internal and external work in the AI field, reflecting the growing need to balance technological innovation with rights-based safeguards.

AI-driven systems in security and policing are increasingly under scrutiny, with regulators and agencies seeking to ensure compliance with EU rules on privacy, transparency and accountability.

In collaboration with Europol, FRA also introduced plans for a panel discussion on ‘The right to explanation of AI-driven individual decision-making’. Scheduled for 19 November in Brussels, the session will form part of the Annual Event of the EU Innovation Hub for Internal Security.

It is expected to draw policymakers, law enforcement representatives and rights advocates into dialogue about transparency obligations in AI use for security contexts.

Mexico drafts law to regulate AI in dubbing and animation

The Mexican government is preparing a law to regulate the use of AI in dubbing, animation, and voiceovers to prevent unauthorised voice cloning and safeguard creative rights.

Working with the National Copyright Institute and more than 128 associations, the government aims to reform copyright legislation before the end of the year.

The plan would strengthen protections for actors, voiceover artists, and creative workers, while addressing contract conditions and establishing a ‘Made in Mexico’ seal for cultural industries.

The bill is expected to prohibit synthetic dubbing without consent, impose penalties for misuse, and recognise voice and image as biometric data.

Industry voices warn that AI has already disrupted work opportunities. Several dubbing firms in Los Angeles have closed, with their projects taken over by companies specialising in AI-driven dubbing.

Startups such as Deepdub and TrueSync have advanced the technology, dubbing films and television content across languages at scale.

Unions and creative groups argue that regulation is vital to protect both jobs and culture. While AI offers efficiency in translation and production, it cannot yet replicate the emotional depth of human performance.

The law is seen as Mexico’s first attempt to balance technological innovation with the rights of workers and creators.

Meta faces fines in Netherlands over algorithm-first timelines

A Dutch court has ordered Meta to give Facebook and Instagram users in the Netherlands the right to set a chronological feed as their default.

The ruling follows a case brought by digital rights group Bits of Freedom, which argued that Meta’s design undermines user autonomy under the European Digital Services Act.

Although a chronological feed is already available, it is hidden and cannot be set as the permanent default. The court said Meta must make the setting accessible on the homepage and in the Reels section, and ensure it stays in place when the apps are restarted.

If Meta does not comply within two weeks, it faces a fine of €100,000 per day, capped at €5 million.

Bits of Freedom argued that algorithmic feeds threaten democracy, particularly before elections. The court agreed the change must apply permanently rather than temporarily during campaigns.

The group welcomed the ruling but stressed it was only a small step in tackling the influence of tech giants on public debate.

Meta has not yet responded to the decision, which applies only in the Netherlands despite being based on EU law. Campaigners say the case highlights the need for more vigorous enforcement to ensure digital platforms respect user choice and democratic values.

Oracle systems targeted in unverified data theft claims, Google warns

Google has warned that hackers are emailing company executives, claiming to have stolen sensitive data from Oracle business applications. The group behind the campaign identifies itself as affiliated with the Cl0p ransomware gang.

In a statement, Google said the attackers target executives at multiple organisations with extortion emails linked to Oracle’s E-Business Suite. Google added that it lacks sufficient evidence to verify the claims or confirm whether any data has been taken.

Neither Cl0p nor Oracle responded to requests for comment. Google did not provide additional information about the scale or specific campaign targets.

The Cl0p ransomware gang has been involved in several high-profile extortion cases, often using claims of data theft to pressure organisations into paying ransoms, even when breaches remain unverified.

Google advised recipients to treat such messages cautiously and report any suspicious emails to security teams while investigations continue.
