Enterprises rethink cloud amid digital sovereignty push

Digital sovereignty has moved to the boardroom as geopolitical tensions rise and cloud adoption accelerates. Organisations are reassessing infrastructure to protect autonomy, ensure compliance, and manage jurisdictional risk. Cloud strategy is increasingly shaped by data location, control, and resilience.

Regulations such as NIS2, DORA, and national data laws have intensified scrutiny of cross-border dependencies. Sovereignty concerns now extend beyond governments to sectors such as healthcare and finance. Vendor selection increasingly prioritises sovereign regions and stricter data controls.

Hybrid cloud remains dominant. Organisations place sensitive workloads on private platforms to strengthen oversight while retaining public cloud innovation. Large-scale repatriation is rare due to cost and complexity, though compliance pressures are driving broader multicloud diversification.

Government investment and oversight are reinforcing the shift. Sovereignty is becoming part of national resilience policy, prompting stricter audits and governance expectations. Enterprises face growing pressure to demonstrate control over critical systems, supply chains, and data flows.

A pragmatic approach, often described as minimum viable sovereignty, helps reduce exposure without unnecessary complexity. Organisations can identify critical workloads, secure enforceable vendor commitments, and plan for disruption. Early adaptation supports resilience and long-term flexibility.
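The workload triage step can be sketched as a simple classification pass. The field names, categories, and placement rules below are purely illustrative assumptions for a hypothetical review, not any standard or vendor framework:

```python
# Hypothetical triage helper for a "minimum viable sovereignty" review:
# classify workloads by data sensitivity and jurisdictional exposure.
# Field names, tiers, and rules are illustrative assumptions only.

WORKLOADS = [
    {"name": "hr-payroll",  "data": "personal",  "cross_border": True},
    {"name": "public-site", "data": "public",    "cross_border": True},
    {"name": "patient-db",  "data": "sensitive", "cross_border": False},
]

def sovereignty_tier(workload):
    # Sensitive or personal data that crosses borders gets top priority
    # for sovereign-region placement and enforceable vendor commitments.
    sensitive = workload["data"] in ("personal", "sensitive")
    if sensitive and workload["cross_border"]:
        return "migrate-to-sovereign-region"
    if sensitive:
        return "keep-private-with-audit"
    return "public-cloud-ok"

plan = {w["name"]: sovereignty_tier(w) for w in WORKLOADS}
```

In practice the inputs would come from an asset inventory and the rules from legal review; the point is only that the triage is mechanical once criticality and jurisdiction are recorded.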

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Romania’s job market faces structural change as AI and automation rise

A Think by ING analysis finds that Romania’s recent macroeconomic slowdown reflects structural change rather than merely cyclical weakness.

After years of robust consumption-led expansion, fiscal tightening and weak domestic demand have curbed growth, while firms increasingly invest in automation and AI to boost productivity rather than expand headcount.

Industrial employment has declined: manufacturing jobs fell by around 25,000 in late 2025, and hiring has shifted toward defensive, replacement-only patterns.

Firms are integrating robotics, automated assembly lines and intelligent logistics systems, and service-sector work is also being reshaped by AI tools, even where formal adoption is still emerging.

A recent survey suggests that 68% of people in Romania have used AI tools, and 44% rely on them for work tasks such as administrative support and analysis, signalling rising informal use ahead of widespread enterprise deployment.

While automation and AI can raise productivity and output without proportional employment growth, they also tilt the labour market: high-skill specialised roles (e.g. AI, engineering, advanced management) are expected to remain resilient or grow, while routine roles, including some entry-level tech positions, call-centre jobs and administrative tasks, face stagnation or decline.

Together, these shifts can create a ‘barbell’ labour market, with growth chiefly at the high and low ends and limited opportunities in mid-skill roles.

Real wage erosion, tight hiring and demographic trends (including a shrinking workforce) add to short-term challenges. In the near term, employment may remain subdued even as economic output recovers modestly by 2027.

Over the longer term, the economy’s shift toward capital-intensive, productivity-driven growth could support stronger output without generating broad employment, underscoring the need for education, reskilling and policy strategies that help workers adapt to AI-driven labour demand.

Study warns AI chatbots can reinforce delusions and mania

AI chatbots may pose serious risks for people with severe mental illnesses, according to a new study from Acta Psychiatrica Scandinavica. Researchers found that tools such as ChatGPT can worsen psychiatric conditions by reinforcing users’ delusions, paranoia, mania, suicidal thoughts, and eating disorders.

The team examined health records from more than 54,000 patients and identified dozens of cases where AI interactions appeared to exacerbate symptoms. Experts warn that the actual number of affected individuals is likely far higher.

Because chatbots are designed to follow and validate a user’s input, they can unintentionally strengthen delusional thinking, turning digital assistants into echo chambers for psychosis.

Despite potential benefits for psychoeducation or alleviating loneliness, experts caution against using AI as a substitute for trained therapists. Chatbots should be tested in rigorous clinical trials before any therapeutic use, says Professor Søren Dinesen Østergaard.

The researchers urge healthcare providers to discuss AI chatbot use with patients, particularly those with severe mental illnesses, and call for central regulation of the technology. They argue that lessons from social media show that early oversight is essential to protect vulnerable populations.

OpenClaw users face account suspensions under Google AI rules

Google has suspended access to its Antigravity AI platform for numerous OpenClaw users, citing violations of its terms of service. Developers had used OpenClaw’s OAuth plugin to access subsidised Gemini model tokens, triggering backend strain and service degradation.

OpenClaw, launched in November 2025, gained more than 219,000 GitHub stars by enabling local AI agents for tasks such as email management and web browsing. Users authenticated through Antigravity to access advanced Gemini models at reduced cost, bypassing official distribution channels.

Google said the third-party integration powered unauthorised products on Antigravity infrastructure, generating usage that was flagged as malicious. In February 2026, AI Ultra subscribers reported 403 errors and account restrictions, with some citing temporary disruptions to Gmail and Workspace.

Varun Mohan of Google DeepMind said the surge had degraded service quality and that enforcement prioritised legitimate users. Limited reinstatement options were offered to users unaware of the violations, with capacity constraints cited as the reason for the restrictions.

The move follows similar restrictions by Anthropic on third-party OAuth usage. Developers are shifting to alternative forks, as debate intensifies over open tooling, platform control, and the risks of agentic AI ecosystems.

IQM puts Finland on Europe’s quantum computing map

Finland is emerging as a key hub in Europe’s quantum computing landscape as startup IQM prepares to become one of the continent’s first publicly listed quantum firms.

The company is developing full-stack, open-architecture quantum systems designed for on-premise deployment or cloud access. It aims to advance the practical use of quantum computing across research and industry.

Founded in 2018, IQM has already delivered 21 quantum systems to 13 customers, highlighting growing European interest in commercial quantum technologies.

Analysts note that while challenges remain, meaningful breakthroughs are now occurring, signalling that quantum computing is shifting from purely experimental science to an operational industry.

IQM’s technology could support advancements in medicine, science, and computational research, helping to solve complex problems far beyond the reach of classical computers.

The firm exemplifies Europe’s ambition to build quantum capabilities independently of larger players in the US and China, positioning Finland as a strategic hub for next-generation computing.

The company’s work aligns with broader European efforts to foster innovation in quantum technologies.

By combining domestic expertise with open-access systems, IQM demonstrates how Finland is contributing to the continent’s emerging quantum ecosystem, bridging academic research and industrial application.

AI-driven physics speeds up industrial innovation

PhysicsX, a London-based startup founded by former F1 engineers and AI experts, is redefining engineering with its AI-driven physics platform.

Design and testing cycles are reduced from weeks or months to seconds. Engineers can now iterate rapidly and optimise systems across multiple industries, including aerospace, automotive, semiconductors, energy, and materials.

The technology enables teams to evaluate thousands of design variations simultaneously. Semiconductor firms speed up prototype development, electronics makers improve thermal performance, and mining companies boost copper recovery for renewable energy and AI data centres.
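The broad pattern, sweeping many design variants through a fast evaluator, can be sketched with a toy surrogate function. The function below is a smooth stand-in with a known minimum, not PhysicsX’s actual Large Physics Models, and the parameter names are illustrative:

```python
import itertools

def surrogate_drag(span, chord, thickness):
    # Toy stand-in for a learned physics surrogate: a smooth function
    # with a known optimum, NOT a real aerodynamic model.
    return (span - 2.0) ** 2 + (chord - 0.5) ** 2 + 2.0 * (thickness - 0.12) ** 2

# Sweep a grid of design parameters: 40 * 20 * 8 = 6,400 variants
# evaluated in one pass, illustrating the pattern (not the model)
# the article describes.
spans = [1.0 + 0.05 * i for i in range(40)]
chords = [0.3 + 0.02 * i for i in range(20)]
thicknesses = [0.08 + 0.01 * i for i in range(8)]

best = min(itertools.product(spans, chords, thicknesses),
           key=lambda p: surrogate_drag(*p))
```

The speed-up the article describes comes from replacing each expensive simulation call with a cheap learned approximation, so sweeps like this finish in seconds rather than weeks.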

PhysicsX achieves this using Large Physics Models and Large Geometry Models that base design evaluation on real-world physics rather than assumptions.

Predictive reasoning lets engineers simulate multiple parameter changes before acting. The approach shifts control from reactive adjustments to proactive optimisation, helping teams make faster, better-informed decisions.

PhysicsX also bridges disciplinary divides, enabling aerodynamics, structural, and thermal considerations to be optimised together rather than in isolation.

By combining speed, system-level insight, and predictive control, PhysicsX is shrinking the gap between cutting-edge research and practical industrial impact. The platform uses physics-based AI to improve efficiency, drive innovation, and support sustainable growth.

AI drives faster modernisation of legacy COBOL systems

Critical to finance, airlines, and government, COBOL handles about 95% of US ATM transactions. Despite its ubiquity, the pool of developers able to read and maintain COBOL is shrinking as seasoned engineers retire and universities offer limited instruction.

Institutional knowledge is now embedded in decades-old code, and documentation often lags.

Modernising COBOL differs from typical software updates. It requires untangling intricate dependencies and reverse-engineering business logic that has evolved over decades.

Traditional modernisation efforts have relied on large teams of consultants working for years, resulting in high costs and lengthy timelines. AI tools are changing that paradigm by automating the most labour-intensive tasks.

AI-driven solutions like Claude Code map code dependencies, trace execution paths, document workflows, and identify risks. They provide teams with actionable insights for prioritisation, risk management, and refactoring, dramatically shortening modernisation timelines from years to months.

Human experts remain essential to reviewing AI recommendations, ensuring regulatory compliance, and making strategic decisions about which components to modernise first.

Implementation follows an incremental approach. AI translates COBOL logic into modern languages, creates integration scaffolding, and supports side-by-side operation with legacy components.

Continuous validation at each step reduces risk, allowing teams to build confidence as complex parts of the system are modernised. AI automation combined with expert oversight makes large-scale COBOL modernisation feasible.
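The side-by-side operation and continuous validation described above amount to shadow testing: run the legacy and modernised code paths on the same inputs and diff the results before cutting over. A minimal sketch, in which the interest rule and function names are hypothetical and not taken from any real COBOL system:

```python
# Sketch of side-by-side (shadow) validation during an incremental
# migration. The business rule below is hypothetical.

def legacy_interest(balance_cents: int) -> int:
    # Stand-in for behaviour extracted from a COBOL paragraph:
    # 2% simple interest, truncated to whole cents.
    return balance_cents * 2 // 100

def modern_interest(balance_cents: int) -> int:
    # Re-implementation in the target language; must match exactly,
    # including integer truncation behaviour.
    return (balance_cents * 2) // 100

def shadow_compare(inputs):
    # Run both implementations on the same inputs and collect any
    # divergences before routing live traffic to the new code path.
    mismatches = []
    for balance in inputs:
        old, new = legacy_interest(balance), modern_interest(balance)
        if old != new:
            mismatches.append((balance, old, new))
    return mismatches

assert shadow_compare([0, 99, 100, 12_345, 10**9]) == []
```

An empty mismatch list on representative traffic is what lets teams "build confidence" component by component; any divergence pinpoints exactly which extracted rule was misread.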

NVIDIA drives a new era of industrial AI cybersecurity

AI-driven defences are moving deeper into operational technology as NVIDIA leads a shift toward embedded cybersecurity across critical infrastructure.

The company is partnering with firms such as Akamai Technologies, Forescout, Palo Alto Networks, Siemens and Xage Security to protect energy, manufacturing and transport systems that increasingly operate through cloud-linked environments.

Modernisation has expanded capabilities across these sectors, yet it has widened the gap between evolving threats and ageing industrial defences.

Zero-trust adoption in operational environments is gaining momentum as Forescout and NVIDIA develop real-time verification models tailored to legacy devices and safety-critical processes.

Security workloads run on NVIDIA BlueField hardware to keep protection isolated from industrial systems and avoid any interference with essential operations. That approach enables more precise control over lateral movement across networks without disrupting performance.

Industrial automation is also adapting through Siemens and Palo Alto Networks, which are moving security enforcement closer to workloads at the edge. AI-enabled inspection via BlueField enhances visibility in highly time-sensitive environments, improving reliability and uptime.

Akamai and Xage are extending similar models to energy infrastructure and large-scale operational networks, embedding segmentation and identity-based controls where resilience is most critical.

A coordinated architecture is now emerging in which edge-generated operational data feeds central AI analysis, while enforcement remains local to maintain continuity.

The result is a security model designed to meet the pressures of cyber-physical systems, enabling operators to detect threats faster, reinforce operational stability and protect infrastructure that supports global AI expansion.

OCC approval moves Crypto.com closer to US trust bank

Crypto.com has secured conditional approval from the Office of the Comptroller of the Currency to move ahead with plans to launch a federally regulated national trust bank in the United States.

The approval marks a notable step in the firm’s regulatory roadmap. It also signals continued alignment with US supervisory expectations as the digital asset sector seeks deeper integration with traditional financial infrastructure.

Plans focus on establishing Foris Dax National Trust Bank. The entity is designed to provide a consolidated suite of services, including digital asset custody, staking across multiple blockchain ecosystems such as Cronos, and trade settlement.

Full approval would place the entity under direct federal oversight, positioning it to serve institutional clients that require qualified custodians operating within a clear regulatory perimeter.

Leadership described the decision as recognition of its compliance and risk management framework. Executives said the structure would offer institutions a single regulated gateway to digital asset infrastructure and strengthen market confidence.

Existing operations at Crypto.com Custody Trust Company in New Hampshire will continue without interruption. Final authorisation will determine the timeline for launching the national trust bank and expanding federally supervised US services.

Anthropic uncovers large-scale AI model theft operations

Three AI laboratories have been found conducting large-scale illicit campaigns to extract capabilities from Anthropic’s Claude AI, the company revealed.

DeepSeek, Moonshot, and MiniMax used around 24,000 fraudulent accounts to generate more than 16 million interactions, violating terms of service and regional access restrictions. The technique, called distillation, trains a weaker model on outputs from a stronger one, speeding AI development.
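At its core, distillation trains a student model to match the teacher’s output distribution rather than hard labels. A minimal sketch of the soft-label loss (the logits and temperature below are illustrative values, not anything from the models named above):

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalise into probabilities.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Cross-entropy of the student's distribution against the teacher's
    # softened distribution: the core objective of knowledge distillation.
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

teacher = [4.0, 1.0, 0.2]    # hypothetical teacher logits for one example
aligned = [3.8, 1.1, 0.3]    # student close to the teacher: low loss
divergent = [0.2, 1.0, 4.0]  # student far from the teacher: high loss

assert distillation_loss(aligned, teacher) < distillation_loss(divergent, teacher)
```

A higher temperature softens the teacher’s distribution so the student also learns the relative ranking of wrong answers, which is why large volumes of teacher outputs, like the 16 million interactions cited, are so valuable to an extracting lab.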

Distilled models obtained in this manner often lack critical safeguards, creating serious national security concerns. Without protections, these capabilities could be integrated into military, intelligence, surveillance, or cyber operations, potentially by authoritarian governments.

The attacks also undermine export controls designed to preserve the competitive edge of US AI technology and could give a misleading impression of foreign labs’ independent AI progress.

Each lab followed coordinated playbooks using proxy networks and large-scale automated prompts to target specific capabilities such as agentic reasoning, coding, and tool use.

Anthropic attributed the campaigns using request metadata, infrastructure indicators, and corroborating observations from industry partners. The investigation detailed how distillation attacks operate from data generation to model launch.
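As a rough illustration only (the article does not disclose Anthropic’s actual detection signals), attribution from request metadata can look like clustering high-volume accounts on shared infrastructure. Every field, value, and threshold below is a made-up assumption:

```python
from collections import defaultdict

# Hypothetical request records: (account_id, proxy_subnet, request_count).
# All values and thresholds are illustrative, not Anthropic's signals.
requests = [
    ("acct-1", "198.51.100.0/24", 4000),
    ("acct-2", "198.51.100.0/24", 3800),
    ("acct-3", "198.51.100.0/24", 4100),
    ("acct-9", "203.0.113.0/24", 12),
]

def flag_coordinated(records, min_accounts=3, min_requests=1000):
    # Group accounts by shared network infrastructure, then flag
    # subnets where several high-volume accounts cluster together,
    # a pattern consistent with coordinated automated extraction.
    by_subnet = defaultdict(list)
    for acct, subnet, count in records:
        if count >= min_requests:
            by_subnet[subnet].append(acct)
    return {s: accts for s, accts in by_subnet.items()
            if len(accts) >= min_accounts}

flagged = flag_coordinated(requests)
```

Real attribution would combine many such weak signals (infrastructure, prompt patterns, timing) with partner corroboration, as the article notes; no single heuristic like this would suffice on its own.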

In response, Anthropic has strengthened detection systems, implemented stricter access controls, shared intelligence with other labs and authorities, and introduced countermeasures to reduce the effectiveness of illicit distillation.

The company emphasises that addressing these attacks will require coordinated action across the AI industry, cloud providers, and policymakers to protect frontier AI capabilities.
