Labour market stability persists despite the rise of AI

Public fears of AI rapidly displacing workers have not yet materialised in the US labour market.

A new study finds that the overall occupational mix has shifted only slightly since the launch of generative AI in November 2022, with changes resembling past technological transitions such as the rise of computers and the internet.

The pace of disruption is not significantly faster than historical benchmarks.

Industry-level data show some variation, particularly in information services, finance, and professional sectors, but trends were already underway before AI tools became widely available.

Similarly, younger workers have not seen a dramatic divergence in opportunities compared with older colleagues, suggesting that AI’s impact on early careers remains modest and difficult to isolate.

Exposure, automation, and augmentation metrics offer little evidence of widespread displacement. OpenAI’s exposure data and Anthropic’s usage data suggest that the share of workers most affected by AI has remained stable, including among the unemployed.

Even in roles theoretically vulnerable to automation, there has been no measurable increase in job losses.

The study concludes that AI’s labour effects are gradual rather than immediate. Historical precedent suggests that large-scale workforce disruption unfolds over decades, not months. Researchers plan to monitor the data to track whether AI’s influence becomes more visible over time.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU kicks off cybersecurity awareness campaign against phishing threats

European Cybersecurity Month (ECSM) 2025 has kicked off, with this year’s campaign centring on the growing threat of phishing attacks.

The initiative, driven by the EU Agency for Cybersecurity (ENISA) and the European Commission, seeks to raise awareness and provide practical guidance to European citizens and organisations.

Phishing remains the primary vector through which threat actors launch social engineering attacks, and this year’s ECSM materials expand the scope to include variants such as SMS phishing (smishing), QR code phishing (quishing), voice phishing (vishing), and business email compromise (BEC).

ENISA warns that, as of early 2025, over 80 percent of observed social engineering activity involves AI, with language models enabling more convincing and scalable scams.

To support the campaign, actors at every level, from individual citizens to large organisations, are encouraged to engage in training, simulations, awareness sessions and public outreach under the banner #ThinkB4UClick.

A cross-institutional kick-off event is also scheduled, bringing together the EU institutions, member states and civil society to align messaging and launch coordinated activities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

What a Hollywood AI actor can teach CEOs about the future of work

Tilly Norwood, a fully AI-created actor, has become the centre of a heated debate in Hollywood after her creator revealed that talent agents were interested in representing her.

The actors’ union responded swiftly, warning that Tilly was trained on the work of countless performers without their consent or compensation. It also reminded producers that hiring her would involve dealing with the union.

The episode highlights two key lessons for business leaders in any industry. First, never assume that a technology’s current limitations are permanent. Some commentators, including Whoopi Goldberg, have argued that AI actors pose little threat because their physical movements still appear noticeably artificial.

Yet history shows that early limitations often disappear over time. Once-dismissed technologies like machine translation and chess software have since far surpassed human abilities. Similarly, AI-generated performers may eventually become indistinguishable from human actors.

The second lesson concerns human behaviour. People are often irrational, and their preferences can upend even the most carefully planned strategies. In Hollywood’s early years, producers avoided publicising actors’ names to maintain control.

Audiences, however, demanded to know everything about the stars they admired, forcing studios to adapt. This human attachment created the star system that shaped the industry. Whether audiences will embrace AI performers like Tilly remains uncertain, but cultural and emotional factors will play a decisive role.

Hollywood offers a high-profile glimpse of the challenges and opportunities of advanced AI. As other sectors face similar disruptions, business leaders may find that technology alone does not determine outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DualEntry raises $90m to scale AI-first ERP platform

New York ERP startup DualEntry has emerged from stealth with $90 million in Series A funding, co-led by Lightspeed and Khosla Ventures. Investors include GV, Contrary, and Vesey Ventures, bringing the total funding to more than $100 million within 18 months of the company’s founding.

The capital will accelerate the growth of its AI-native ERP platform, which has processed $100 billion in journal entries. The platform targets mid-market finance teams, aiming to automate up to 90% of manual tasks and scale without external IT support or add-ons.

Early adopters include fintech firm Slash, which runs its $100M+ ARR operation with a single finance employee. DualEntry offers a comprehensive ERP suite that covers general ledger, accounts receivable, accounts payable, audit controls, FP&A, and live bank connections.

The company’s NextDay Migration tool enables complete onboarding within 24 hours, securely transferring all data, including subledgers and attachments. With more than 13,000 integrations across banking, CRM, and HR systems, DualEntry establishes a centralised source of accounting information.

Founded in 2024 by Benedict Dohmen and Santiago Nestares, the startup positions itself as a faster, more flexible alternative to legacy systems such as NetSuite, Sage Intacct, and Microsoft Dynamics, while supporting starter tools like QuickBooks and Xero.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI transcription tool aims to speed up police report writing

The Washington County Sheriff’s Office in Oregon is testing an AI transcription service to speed up police report writing. The tool, Draft One, analyses Axon body-worn camera footage to generate draft reports for specific calls, including theft, trespassing, and DUII incidents.

Corporal David Huey said the technology is designed to give deputies more time in the field. He noted that reports that once took around 90 minutes can now be completed in 15 to 20 minutes, freeing officers to focus on policing rather than paperwork.

Deputies in the 60-day pilot must review and edit all AI-generated drafts, and at least 20 percent of each report must be manually adjusted to ensure accuracy. Huey explained that the system deliberately inserts minor errors so that officers remain engaged with the content.

He added that human judgement remains essential for interpreting emotional cues, such as tense body language, which AI cannot detect solely from transcripts. All data generated by Draft One is securely stored within Axon’s network.

After the pilot concludes, the sheriff’s office and the district attorney will determine whether to adopt the system permanently. If successful, the tool could mark a significant step in integrating AI into everyday law enforcement operations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Japan targets AI leadership through new Nvidia–Fujitsu collaboration

Nvidia and Fujitsu have partnered to build AI infrastructure in Japan, focusing on robotics and advanced computing. The project will combine Nvidia’s GPUs with Fujitsu’s expertise to support healthcare, manufacturing, environmental work, and customer services, with completion targeted for 2030.

Speaking in Tokyo, Nvidia CEO Jensen Huang said Japan could lead the world in AI and robotics. He described the initiative as part of the ongoing AI industrial revolution, calling infrastructure development essential in Japan and globally.

The infrastructure will initially target the Japanese market but may later expand internationally. Although specific projects and investment figures were not disclosed, collaboration with robotics firm Yaskawa Electric was mentioned as a possible example.

Fujitsu and Nvidia have previously collaborated on digital twins and robotics to address Japan’s labour shortages. Both companies state that AI systems will continually evolve and adapt over time.

Fujitsu CEO Takahito Tokita said the partnership takes a human-centric approach to keep Japan competitive. He added that the companies aim to create unprecedented technologies and tackle serious societal challenges.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US AI models outperform Chinese rival DeepSeek

The National Institute of Standards and Technology’s Center for AI Standards and Innovation (CAISI) found that AI models from Chinese developer DeepSeek trail US models in performance, cost, security, and adoption.

Evaluations covered three DeepSeek and four leading US models, including OpenAI’s GPT-5 series and Anthropic’s Opus 4, across 19 benchmarks.

US AI models outperformed DeepSeek across nearly all benchmarks, with the most significant gaps in software engineering and cybersecurity tasks. CAISI found DeepSeek models costlier and far more vulnerable to hijacking and jailbreaking, posing risks to developers, consumers, and national security.

DeepSeek models were observed to echo inaccurate Chinese Communist Party narratives four times more often than US reference models. Despite weaknesses, DeepSeek model adoption has surged, with downloads rising nearly 1,000% since January 2025.

CAISI serves as a key point of contact for industry collaboration on AI standards and security. The evaluation aligns with the US government’s AI Action Plan, which aims to assess the capabilities and risks of foreign AI while securing American leadership in the field.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI platforms barred from cloning Asha Bhosle’s voice without consent

The Bombay High Court has granted ad-interim relief to Asha Bhosle, barring AI platforms and sellers from cloning her voice or likeness without consent. The 90-year-old playback singer, whose career spans eight decades, approached the court to protect her identity from unauthorised commercial use.

Bhosle filed the suit after discovering platforms offering AI-generated voice clones mimicking her singing. Her plea argued that such misuse damages her reputation and goodwill. Justice Arif S. Doctor found a strong prima facie case and stated that such actions would cause irreparable harm.

The order restrains defendants, including US-based Mayk Inc, from using machine learning, face-morphing, or generative AI to imitate her voice or likeness. Google, also named in the case, has agreed to take down specific URLs identified by Bhosle’s team.

Defendants are required to share subscriber information, IP logs, and payment details to assist in identifying infringers. The court emphasised that cloning the voices of cultural icons risks misleading the public and infringing on individuals’ rights to their identity.

The ruling builds on recent cases in India affirming personality rights and sets an important precedent in the age of generative AI. The matter is scheduled to return to court on 13 October 2025.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Alibaba shares climb to highest since 2021

Alibaba’s $250 billion rebound has turned it into China’s hottest AI stock, with analysts saying the rally may still have room to run.

The group’s US-listed shares have more than doubled this year as Beijing pushes for greater technological self-reliance. Despite the surge, the stock remains 65% below its 2020 peak, keeping valuations attractive compared with US giants like Microsoft and Amazon.

Fund managers say global investors still hold relatively minor positions in Alibaba, creating scope for further gains. Some caution remains, however, with Chinese short bets rising last month and price wars in food delivery threatening to dent margins.

Alibaba trades at roughly 22 times estimated forward earnings in Hong Kong, in line with the Hang Seng Tech Index but below its historic peak and US peers. Investors say its valuation looks reasonable given its AI push and improving sentiment.

Shares touched their highest level since August 2021 on Friday, standing out against declines in the broader Hong Kong market. The key test will be whether Alibaba can convert its AI ambitions into mainstream revenues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FRA presents rights framework at EU Innovation Hub AI Cluster workshop in Tallinn

The EU Innovation Hub for Internal Security’s AI Cluster gathered in Tallinn on 25–26 September for a workshop focused on AI and its implications for security and rights.

The European Union Agency for Fundamental Rights (FRA) played a central role, presenting its Fundamental Rights Impact Assessment framework under the AI Act and highlighting its ongoing project on assessing high-risk AI.

The workshop also provided an opportunity for FRA to give an update on its internal and external work in the AI field, reflecting the growing need to balance technological innovation with rights-based safeguards.

AI-driven systems in security and policing are increasingly under scrutiny, with regulators and agencies seeking to ensure compliance with EU rules on privacy, transparency and accountability.

In collaboration with Europol, FRA also introduced plans for a panel discussion on ‘The right to explanation of AI-driven individual decision-making’. Scheduled for 19 November in Brussels, the session will form part of the Annual Event of the EU Innovation Hub for Internal Security.

It is expected to draw policymakers, law enforcement representatives and rights advocates into dialogue about transparency obligations in AI use for security contexts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!