Future of work shaped by AI, flexible ecosystems and soft retirement

As technology reshapes workplaces, how we work is set for significant change in the decade’s second half. Seven key trends are expected to drive this transformation, shaped by technological shifts, evolving employee expectations, and new organisational realities.

AI will continue to play a growing role in 2026. Beyond simply automating tasks, companies will increasingly design AI-native workflows built from the ground up to automate, predict, and support decision-making.

Hybrid and remote work will solidify into flexible ecosystems of tools, networks, and spaces that support employees wherever they are. The trend emphasises seamless experiences, global talent access, and stronger links between remote workers and company culture.

The job landscape will continue to change as AI affects hiring in clerical, administrative, and managerial roles, while sectors such as healthcare, education, and construction grow. Human skills, such as empathy, communication, and leadership, will become increasingly valuable.

Data-driven people management will replace intuition-based approaches, with AI used to find patterns and support evidence-based decisions. Employee experience will also become a key differentiator, reflecting customer-focused strategies to attract and retain talent.

An emerging ‘soft retirement’ trend will see healthier older workers reduce hours rather than stop altogether, offering businesses valuable expertise. Those who adapt early to these trends will be better positioned to thrive in the future of work.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Nintendo denies lobbying the Japanese government over generative AI

Video game company Nintendo has denied reports that it lobbied the Japanese government over the use of generative AI. The company issued an official statement on its Japanese X account, clarifying that it has had no contact with authorities.

The rumour originated from a post by Satoshi Asano, a member of Japan’s House of Representatives, who suggested that private companies had pressed the government on intellectual property protection concerning AI.

After Nintendo’s statement, Asano retracted his remarks and apologised for spreading misinformation.

Nintendo stressed that it would continue to protect its intellectual property against infringement, whether AI was involved or not. The company reaffirmed its cautious approach toward generative AI in game development, focusing on safeguarding creative rights rather than political lobbying.

The episode underscores the sensitivity around AI in the creative industries of Japan, where concerns about copyright and technological disruption are fuelling debate. Nintendo’s swift clarification signals how seriously it takes misinformation and protects its brand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Labour market stability persists despite the rise of AI

Public fears of AI rapidly displacing workers have not yet materialised in the US labour market.

A new study finds that the overall occupational mix has shifted only slightly since the launch of generative AI in November 2022, with changes resembling past technological transitions such as the rise of computers and the internet.

The pace of disruption is not significantly faster than historical benchmarks.

Industry-level data show some variation, particularly in information services, finance, and professional sectors, but trends were already underway before AI tools became widely available.

Similarly, younger workers have not seen a dramatic divergence in opportunities compared with older graduates, suggesting that AI’s impact on early careers remains modest and difficult to isolate.

Exposure, automation, and augmentation metrics offer little evidence of widespread displacement. OpenAI’s exposure data and Anthropic’s usage data suggest stability in the proportion of workers most affected by AI, including among the unemployed.

Even in roles theoretically vulnerable to automation, there has been no measurable increase in job losses.

The study concludes that AI’s labour effects are gradual rather than immediate. Historical precedent suggests that large-scale workforce disruption unfolds over decades, not months. Researchers plan to monitor the data to track whether AI’s influence becomes more visible over time.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU kicks off cybersecurity awareness campaign against phishing threats

European Cybersecurity Month (ECSM) 2025 has kicked off, with this year’s campaign centring on the growing threat of phishing attacks.

The initiative, driven by the EU Agency for Cybersecurity (ENISA) and the European Commission, seeks to raise awareness and provide practical guidance to European citizens and organisations.

Phishing remains the primary vector through which threat actors launch social engineering attacks. This year’s ECSM materials expand the scope to include variants such as SMS phishing (smishing), QR code phishing (quishing), voice phishing (vishing), and business email compromise (BEC).

ENISA warns that, as of early 2025, over 80 percent of observed social engineering activity involves AI, with language models enabling more convincing and scalable scams.

To support the campaign, actors at every level, from individual citizens to large organisations, are encouraged to engage in training, simulations, awareness sessions, and public outreach under the banner #ThinkB4UClick.

A cross-institutional kick-off event is also scheduled, bringing together the EU institutions, member states and civil society to align messaging and launch coordinated activities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

What a Hollywood AI actor can teach CEOs about the future of work

Tilly Norwood, a fully AI-created actor, has become the centre of a heated debate in Hollywood after her creator revealed that talent agents were interested in representing her.

The actors’ union responded swiftly, warning that Tilly was trained on the work of countless performers without their consent or compensation. It also reminded producers that hiring her would involve dealing with the union.

The episode highlights two key lessons for business leaders in any industry. First, never assume a technology’s current limitations will remain permanent. Some commentators, including Whoopi Goldberg, have argued that AI actors pose little threat because their physical movements still appear noticeably artificial.

Yet history shows that early limitations often disappear over time. Once-dismissed technologies like machine translation and chess software have since far surpassed human abilities. Similarly, AI-generated performers may eventually become indistinguishable from human actors.

The second lesson concerns human behaviour. People are often irrational; their preferences can upend even the most carefully planned strategies. In Hollywood’s early years, producers avoided publicising actors’ names to maintain control.

Audiences, however, demanded to know everything about the stars they admired, forcing studios to adapt. This human attachment created the star system that shaped the industry. Whether audiences will embrace AI performers like Tilly remains uncertain, but cultural and emotional factors will play a decisive role.

Hollywood offers a high-profile glimpse of the challenges and opportunities of advanced AI. As other sectors face similar disruptions, business leaders may find that technology alone does not determine outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Diag2Diag brings fusion reactors closer to commercial viability

Researchers have developed an AI tool that could make fusion power more reliable and affordable. Diag2Diag reconstructs missing sensor data to give scientists a clearer view of plasma, helping address one of fusion energy’s biggest challenges.

Developed through a collaboration led by Princeton University and the US Department of Energy’s Princeton Plasma Physics Laboratory, Diag2Diag analyses multiple diagnostics in real time to generate synthetic, high-resolution data. It improves plasma control and cuts reliance on costly hardware.

A key use of Diag2Diag is improving the study of the plasma pedestal, the fuel’s outer layer. Current methods miss sudden changes or lack detail. The AI fills these gaps without new instruments, helping researchers fine-tune stability.

The system has also advanced research into edge-localised modes, or ELMs, which are bursts of energy that can damage reactor walls. It revealed how magnetic perturbations create ‘magnetic islands’ that flatten plasma temperature and density, supporting a leading theory on ELM suppression.

Although designed for fusion, Diag2Diag could also enhance reliability in fields such as spacecraft monitoring and robotic surgery. For fusion specifically, it supports smaller, cheaper, and more dependable reactors, bringing the prospect of clean, round-the-clock power closer to reality.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI transcription tool aims to speed up police report writing

The Washington County Sheriff’s Office in Oregon is testing an AI transcription service to speed up police report writing. The tool, Draft One, analyses Axon body-worn camera footage to generate draft reports for specific calls, including theft, trespassing, and DUII incidents.

Corporal David Huey stated that the technology is designed to give deputies more time in the field. He noted that reports that once took around 90 minutes can now be completed in 15 to 20 minutes, freeing officers to focus on policing rather than paperwork.

Deputies in the 60-day pilot must review and edit all AI-generated drafts. At least 20 percent of each report must be manually adjusted to ensure accuracy. Huey explained that the system deliberately inserts minor errors to ensure officers remain engaged with the content.

He added that human judgement remains essential for interpreting emotional cues, such as tense body language, which AI cannot detect solely from transcripts. All data generated by Draft One is securely stored within Axon’s network.

After the pilot concludes, the sheriff’s office and the district attorney will determine whether to adopt the system permanently. If successful, the tool could mark a significant step in integrating AI into everyday law enforcement operations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Japan targets AI leadership through new Nvidia–Fujitsu collaboration

Nvidia and Fujitsu have partnered to build AI infrastructure in Japan, focusing on robotics and advanced computing. The project will utilise Nvidia’s GPUs and Fujitsu’s expertise to support healthcare, manufacturing, environmental work, and customer services, with completion targeted for 2030.

Speaking in Tokyo, Nvidia CEO Jensen Huang said Japan could lead the world in AI and robotics. He described the initiative as part of the ongoing AI industrial revolution, calling infrastructure development essential in Japan and globally.

The infrastructure will initially target the Japanese market but may later expand internationally. Although specific projects and investment figures were not disclosed, collaboration with robotics firm Yaskawa Electric was mentioned as a possible example.

Fujitsu and Nvidia have previously collaborated on digital twins and robotics to address Japan’s labour shortages. Both companies state that AI systems will continually evolve and adapt over time.

Fujitsu CEO Takahito Tokita said the partnership takes a human-centric approach to keep Japan competitive. He added that the companies aim to create unprecedented technologies and tackle serious societal challenges.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US AI models outperform Chinese rival DeepSeek

The National Institute of Standards and Technology’s Center for AI Standards and Innovation (CAISI) found that AI models from Chinese developer DeepSeek trail US models in performance, cost, security, and adoption.

Evaluations covered three DeepSeek and four leading US models, including OpenAI’s GPT-5 series and Anthropic’s Opus 4, across 19 benchmarks.

US AI models outperformed DeepSeek across nearly all benchmarks, with the most significant gaps in software engineering and cybersecurity tasks. CAISI found DeepSeek models costlier and far more vulnerable to hijacking and jailbreaking, posing risks to developers, consumers, and national security.

DeepSeek models were observed to echo inaccurate Chinese Communist Party narratives four times more often than US reference models. Despite weaknesses, DeepSeek model adoption has surged, with downloads rising nearly 1,000% since January 2025.

CAISI is a key contact for industry collaboration on AI standards and security. The evaluation aligns with the US government’s AI Action Plan, which aims to assess the capabilities and risks of foreign AI while securing American leadership in the field.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI platforms barred from cloning Asha Bhosle’s voice without consent

The Bombay High Court has granted ad-interim relief to Asha Bhosle, barring AI platforms and sellers from cloning her voice or likeness without consent. The 90-year-old playback singer, whose career spans eight decades, approached the court to protect her identity from unauthorised commercial use.

Bhosle filed the suit after discovering platforms offering AI-generated voice clones mimicking her singing. Her plea argued that such misuse damages her reputation and goodwill. Justice Arif S. Doctor found a strong prima facie case and stated that such actions would cause irreparable harm.

The order restrains defendants, including US-based Mayk Inc, from using machine learning, face-morphing, or generative AI to imitate her voice or likeness. Google, also named in the case, has agreed to take down specific URLs identified by Bhosle’s team.

Defendants are required to share subscriber information, IP logs, and payment details to assist in identifying infringers. The court emphasised that cloning the voices of cultural icons risks misleading the public and infringing on individuals’ rights to their identity.

The ruling builds on recent cases in India affirming personality rights and sets an important precedent in the age of generative AI. The matter is scheduled to return to court on 13 October 2025.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!