AI sparks worry over job loss and skill decline

A 2025 survey by Statistics Netherlands (CBS) shows that 41% of employees think AI could perform part of their job, while 4% fear full replacement. Higher-educated workers and young adults are most likely to believe their tasks could be automated.

Among those using AI at work, 56% expect it could partly or fully do their jobs, compared with 37% of non-users. Almost half of the workers who see AI as a potential replacement expressed concern, with women slightly more worried than men.

Most adults anticipate that AI will lead to job losses (75%), a decline in workforce skills (64%), and less interesting work (48%). Despite these concerns, 57% believe AI could boost productivity by speeding up tasks.

Fewer respondents think AI will solve labour shortages (46%) or replace unsafe jobs (41%). The findings highlight both the opportunities and anxieties surrounding AI adoption in the workplace.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

New AI tool predicts post-mortem time with precision

Researchers at Linköping University and the Swedish National Board of Forensic Medicine have developed an AI tool that estimates time of death from blood metabolites. The model, trained on thousands of samples, provides greater accuracy than traditional forensic methods.

Traditional indicators such as body temperature, rigor mortis, or potassium levels in the eye become unreliable after a few days. AI analysis of blood metabolites estimates time of death to within about one day for up to 13 days post-mortem.

The project uses a unique data resource of over 45,000 autopsies, with 4,876 samples used to train the AI. Researchers say the method works globally, even in labs with smaller datasets, making it useful for forensic investigations.

Next steps aim to increase precision, allowing models to estimate not only the day but also the specific time of death. Experts say the tool can improve investigations by guiding law enforcement and aiding complex cases.

ŌURA launches AI model tailored to women’s physiology with privacy-first design

Guidance for women’s health is entering a new phase as ŌURA introduces a proprietary large language model designed specifically for reproductive and hormonal wellbeing.

The model sits within Oura Advisor and is available for testing through Oura Labs, drawing on clinical standards, peer-reviewed evidence and biometric signals collected through the Oura Ring to create personalised and context-aware responses.

The system interprets questions through women’s physiology instead of depending on general-purpose models that miss critical hormonal and life-stage variables.

It supports the full spectrum of reproductive health, from the earliest menstrual patterns to menopause, and is intentionally tuned to be non-dismissive and emotionally supportive.

By combining longitudinal sleep, activity, stress, cycle and pregnancy data with clinician-reviewed research, the model aims to strengthen understanding and preparation ahead of medical appointments.

Privacy forms the centre of the architecture, with all processing hosted on infrastructure controlled entirely by the company. Conversations are neither shared nor sold, reflecting ŌURA’s broader push for private AI.

Oura Labs operates as an opt-in experimental environment where new features are tested in collaboration with members who can leave at any time.

Women who take part influence the model’s evolution by contributing feedback that informs future development.

These interactions help refine personalised insights across fertility, cycle irregularities, pregnancy changes and other hormonal shifts, marking a significant step in how the Finnish company advances preventive, data-guided care for its global community.

NVIDIA healthcare survey shows surge in AI adoption and strong ROI

AI is reshaping healthcare as organisations shift from trial projects to large-scale deployment.

The latest industry survey from NVIDIA shows widespread adoption across digital healthcare, biotechnology, pharmaceuticals and medical technology, signalling a sector that is now executing rather than experimenting.

Uptake is expanding rapidly, with generative AI and large language models becoming central tools for clinical and operational tasks.

The report highlights how medical imaging, drug discovery and clinical decision support are among the most prominent applications. Radiologists are using AI to accelerate image analysis, while research teams apply advanced models to speed early-stage drug development.

Organisations are replacing manual administrative routines with optimised workflows, with many citing improvements in patient coordination, documentation and coding.

Open-source models are increasingly important, with most respondents considering them vital for domain-specific development.

Experts argue that open-source innovation will guide exploration, whereas deployment in clinical environments will demand rigorous validation and accountability rather than unrestricted experimentation.

Agentic AI is emerging as a new capability for knowledge retrieval and literature analysis.

Evidence of return on investment is clear, prompting 85% of organisations to expand their AI budgets. Many report higher revenue, reduced costs and significant gains in back-office productivity.

Evaluation is becoming a core operational requirement, ensuring AI continues to improve safety, quality and overall clinical performance over time.

New Relic advances AI agents for enterprise observability

New Relic is expanding into enterprise AI with a no-code platform that lets companies build and supervise their own observability agents.

The system assembles AI-driven monitors designed to detect bugs and performance problems before they affect users, instead of leaving teams to rely on manual tracking.

It also supports the Model Context Protocol so organisations can link external data sources to the agents and integrate them with existing New Relic tools.

The company stresses that the platform is intended to complement other agent systems rather than replace them.

As AI agent software spreads across the market, enterprises are searching for ways to manage risk when giving automated tools access to internal systems.

Industry players such as Salesforce and OpenAI have already introduced their own agent platforms, and assessments from Gartner describe these frameworks as essential infrastructure for wider AI adoption.

New Relic also introduced new tools for the OpenTelemetry framework to remove friction around observability standards.

Its application performance monitoring agents now support OTel data, allowing enterprises to manage these streams in one place instead of operating separate collectors.

The update aims to reduce fragmentation that has slowed OTel deployment across large organisations and to simplify how engineering teams handle diverse observability pipelines.

CrowdStrike warns of faster AI-driven threats

Cyber adversaries increasingly used AI to accelerate attacks and evade detection in 2025, according to CrowdStrike’s 2026 Global Threat Report. The company described the period as the year of the evasive adversary, marked by subtle and rapid intrusions.

The average breakout time for financially motivated cybercrime fell to 29 minutes, with the fastest intrusion recorded at 27 seconds. CrowdStrike observed an 89 percent rise in attacks by AI-enabled threat actors compared with 2024.

Attackers also targeted AI systems themselves, exploiting GenAI tools at more than 90 organisations through malicious prompt injection. Supply chain compromises and the abuse of valid credentials enabled intrusions to blend into legitimate activity, with most detections classified as malware-free.

China-linked activity rose by 38 percent across sectors, while North Korea-linked incidents increased by 130 percent. CrowdStrike tracked more than 281 adversaries in total, warning that speed, credential abuse, and AI fluency now define the modern threat landscape.

OpenClaw vulnerabilities exposed by AI-powered code scanner

Researchers at Endor Labs identified six high- to critical-severity vulnerabilities in the open-source AI agent framework OpenClaw, using an AI-powered static application security testing (SAST) engine to trace untrusted data flows. The flaws included server-side request forgery (SSRF), authentication bypass, and path traversal.

The bugs affected multiple components of the agentic system, which integrates large language models with external tools and web services. Several SSRF issues were found in the gateway and authentication modules, potentially exposing internal services or cloud metadata depending on the deployment context.
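The SSRF class described above typically arises when a service fetches a user-supplied URL without restricting where it may point. A minimal sketch of the kind of guard that mitigates it; the function name and blocked ranges are illustrative, not taken from OpenClaw's code:

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Illustrative only: an internal-range check of the kind that prevents
# SSRF against loopback, private networks, and cloud metadata endpoints.
BLOCKED_NETS = [
    ipaddress.ip_network("127.0.0.0/8"),     # loopback
    ipaddress.ip_network("10.0.0.0/8"),      # private
    ipaddress.ip_network("172.16.0.0/12"),   # private
    ipaddress.ip_network("192.168.0.0/16"),  # private
    ipaddress.ip_network("169.254.0.0/16"),  # link-local / cloud metadata
]

def is_safe_url(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        # Resolve the host first: a benign-looking name can point at an
        # internal address, so the check must run on the resolved IP.
        addr = ipaddress.ip_address(socket.gethostbyname(parsed.hostname))
    except (socket.gaierror, ValueError):
        return False
    return not any(addr in net for net in BLOCKED_NETS)
```

Checking the resolved address rather than the hostname matters because DNS rebinding can otherwise route a permitted domain to an internal service.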

Access control failures were also found in OpenClaw. A webhook handler lacked proper verification, enabling forged requests, while another flaw allowed unauthenticated access to protected functionality. Researchers confirmed exploitability with proof-of-concept demonstrations.
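Webhook forgery of the kind described is normally prevented by verifying an HMAC signature before trusting a request. A minimal sketch of that check, with a hypothetical secret and payload rather than OpenClaw's actual API:

```python
import hashlib
import hmac

# Illustrative only: the constant-time signature check a webhook handler
# should perform on every incoming request before acting on it.
def verify_webhook(secret: bytes, payload: bytes, signature: str) -> bool:
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking the signature via timing differences.
    return hmac.compare_digest(expected, signature)

secret, payload = b"s3cret", b'{"event": "ping"}'
good = hmac.new(secret, payload, hashlib.sha256).hexdigest()
print(verify_webhook(secret, payload, good))      # genuine request
print(verify_webhook(secret, payload, "f" * 64))  # forged request
```

A handler that skips this step, as the vulnerable one reportedly did, will accept any request whose body merely looks well-formed.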

The team said that traditional static analysis tools struggle with modern AI software stacks, where inputs undergo multiple transformations before reaching sensitive operations. Their AI-based SAST engine preserved context across layers, tracing untrusted data from entry points to critical functions.

OpenClaw maintainers were notified through responsible disclosure and have since issued patches and advisories. Researchers argue that as AI agent frameworks expand into enterprise environments, security analysis must adapt to address both conventional vulnerabilities and AI-specific attack surfaces.

Sony targets AI music copyright use

Sony Group has developed technology designed to identify the original sources of music generated by AI. The move comes amid growing concern over the unauthorised use of copyrighted works in AI training.

According to Sony Group, the system can extract data from an underlying AI model and compare generated tracks with original compositions. The process aims to quantify how much specific works contributed to the output.

Composers, songwriters and publishers could use the technology to seek compensation from AI developers if their material was used without permission. Sony said the goal is to help ensure creators are properly rewarded.

Efforts to safeguard intellectual property have intensified across the music industry. Sony Music Entertainment in the US previously filed a copyright infringement lawsuit in 2024 over AI-generated music, underscoring wider tensions around AI and creative rights.

Enterprises rethink cloud amid digital sovereignty push

Digital sovereignty has moved to the boardroom as geopolitical tensions rise and cloud adoption accelerates. Organisations are reassessing infrastructure to protect autonomy, ensure compliance, and manage jurisdictional risk. Cloud strategy is increasingly shaped by data location, control, and resilience.

Regulations such as NIS2, DORA, and national data laws have intensified scrutiny of cross-border dependencies. Sovereignty concerns now extend beyond governments to sectors such as healthcare and finance. Vendor selection increasingly prioritises sovereign regions and stricter data controls.

Hybrid cloud remains dominant. Organisations place sensitive workloads on private platforms to strengthen oversight while retaining public cloud innovation. Large-scale repatriation is rare due to cost and complexity, though compliance pressures are driving broader multicloud diversification.

Government investment and oversight are reinforcing the shift. Sovereignty is becoming part of national resilience policy, prompting stricter audits and governance expectations. Enterprises face growing pressure to demonstrate control over critical systems, supply chains, and data flows.

A pragmatic approach, often described as minimum viable sovereignty, helps reduce exposure without unnecessary complexity. Organisations can identify critical workloads, secure enforceable vendor commitments, and plan for disruption. Early adaptation supports resilience and long-term flexibility.

AI-generated film removed from cinemas after public backlash

A prize-winning AI-generated short film has been pulled from cinemas following criticism from audiences. Thanksgiving Day, created by filmmaker Igor Alferov, was due to screen in selected theatres before feature presentations.

Concerns emerged after news of the screening spread online, prompting complaints directed at AMC Theatres. The chain stated it had not programmed the film and that pre-show advertising partner Screenvision Media had arranged the placement.

AMC confirmed it would not participate in the initiative, meaning the AI film will no longer appear in its locations. The animated short, produced using Google’s Gemini 3.1 and Nano Banana Pro tools, had recently won an AI film festival award.

The episode comes amid broader debate about artificial intelligence in Hollywood. Industry insiders suggest studios are quietly increasing AI use in production, even as concerns grow over job losses and economic uncertainty within Los Angeles’ entertainment sector.
