Microsoft expands Sovereign Cloud with secure offline support for large AI models

Digital sovereignty is gaining urgency as organisations seek infrastructure that remains secure and reliable under strict regulatory conditions.

Microsoft is expanding its Sovereign Cloud to help public bodies, regulated industries and enterprises maintain control of data and operations even when environments must operate without external connectivity.

The updated portfolio allows customers to choose how each workload is governed, rather than relying on a single deployment model.

Azure Local now supports disconnected operations, keeping mission-critical systems running with full Azure governance within sovereign boundaries. Management, policies and workloads stay entirely on site, so services continue during periods of isolation.

Microsoft 365 Local extends the resilience to the productivity layer by enabling Exchange Server, SharePoint Server and Skype for Business Server to run locally, giving teams secure collaboration within the same protected boundary as their infrastructure.

Support for large multimodal AI models is delivered through Foundry Local, which enables advanced inference on customer-controlled hardware using technology from partners such as NVIDIA.

Such an approach helps organisations bring modern AI capabilities into highly restricted environments while preserving control over data, identities and operational procedures.

Microsoft positions it as a unified stack that works across connected, hybrid and fully disconnected modes without increasing operational complexity.

These additions create a framework designed for governments and regulated industries that regard sovereignty as a strategic priority.

With global availability for qualified customers, the Sovereign Cloud aims to preserve continuity, reinforce governance and expand AI capability while keeping every layer of the environment within local control.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

OURA launches AI model tailored to women’s physiology with privacy-first design

Guidance for women’s health is entering a new phase as ŌURA introduces a proprietary large language model designed specifically for reproductive and hormonal wellbeing.

The model sits within Oura Advisor and is available for testing through Oura Labs, drawing on clinical standards, peer-reviewed evidence and biometric signals collected through the Oura Ring to create personalised and context-aware responses.

The system interprets questions through women’s physiology instead of depending on general-purpose models that miss critical hormonal and life-stage variables.

It supports the full spectrum of reproductive health, from the earliest menstrual patterns to menopause, and is intentionally tuned to be non-dismissive and emotionally supportive.

By combining longitudinal sleep, activity, stress, cycle and pregnancy data with clinician-reviewed research, the model aims to strengthen understanding and preparation ahead of medical appointments.

Privacy forms the centre of the architecture, with all processing hosted on infrastructure controlled entirely by the company. Conversations are neither shared nor sold, reflecting ŌURA’s broader push for private AI.

Oura Labs operates as an opt-in experimental environment where new features are tested in collaboration with members who can leave at any time.

Women who take part influence the model’s evolution by contributing feedback that informs future development.

These interactions help refine personalised insights across fertility, cycle irregularities, pregnancy changes and other hormonal shifts, marking a significant step in how the Finland-founded company advances preventive, data-guided care for its global community.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

NVIDIA healthcare survey shows surge in AI adoption and strong ROI

AI is reshaping healthcare as organisations shift from trial projects to large-scale deployment.

The latest industry survey from NVIDIA shows widespread adoption across digital healthcare, biotechnology, pharmaceuticals and medical technology, signalling a sector that is now executing rather than experimenting.

Uptake is expanding rapidly, with generative AI and large language models becoming central tools for clinical and operational tasks.

The report highlights how medical imaging, drug discovery and clinical decision support are among the most prominent applications. Radiologists are using AI to accelerate image analysis, while research teams apply advanced models to speed early-stage drug development.

Organisations benefit from workflow optimisation instead of relying on manual administrative routines, with many citing improvements in patient coordination, documentation and coding.

Open-source models are increasingly important, with most respondents considering them vital for domain-specific development.

Experts argue that open-source innovation will guide exploration, whereas deployment in clinical environments will demand rigorous validation and accountability rather than unrestricted experimentation.

Agentic AI is emerging as a new capability for knowledge retrieval and literature analysis.

Evidence of return on investment is clear, prompting 85% of organisations to expand their AI budgets. Many report higher revenue, reduced costs and significant gains in back-office productivity.

Evaluation is becoming a core operational requirement, ensuring AI continues to improve safety, quality and overall clinical performance over time.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

New Relic advances AI agents for enterprise observability

The expansion into enterprise AI comes with a no-code platform from New Relic that allows companies to build and supervise their own observability agents.

A system that assembles AI-driven monitors designed to detect bugs and performance problems before they affect users, instead of leaving teams to rely on manual tracking.

It also supports the Model Context Protocol so organisations can link external data sources to the agents and integrate them with existing New Relic tools.

The company stresses that the platform is intended to complement other agent systems rather than replace them.

As AI agent software spreads across the market, enterprises are searching for ways to manage risk when giving automated tools access to internal systems.

Industry players such as Salesforce and OpenAI have already introduced their own agent platforms, and assessments from Gartner describe these frameworks as essential infrastructure for wider AI adoption.

New Relic also introduced new tools for the OpenTelemetry framework to remove friction around observability standards.

Its application performance monitoring agents now support OTel data, allowing enterprises to manage these streams in one place instead of operating separate collectors.

The update aims to reduce fragmentation that has slowed OTel deployment across large organisations and to simplify how engineering teams handle diverse observability pipelines.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

CrowdStrike warns of faster AI driven threats

Cyber adversaries increasingly used AI to accelerate attacks and evade detection in 2025, according to CrowdStrike’s 2026 Global Threat Report. The company described the period as the year of the evasive adversary, marked by subtle and rapid intrusions.

The average time to a financially motivated online crime breakout fell to 29 minutes, with the fastest recorded at 27 seconds. CrowdStrike observed an 89 percent rise in attacks by AI-enabled threat actors compared with 2024.

Attackers also targeted AI systems themselves, exploiting GenAI tools at more than 90 organisations through malicious prompt injection. Supply chain compromises and the abuse of valid credentials enabled intrusions to blend into legitimate activity, with most detections classified as malware-free.

China linked activity rose by 38 percent across sectors, while North Korea linked incidents increased by 130 percent. CrowdStrike tracked more than 281 adversaries in total, warning that speed, credential abuse, and AI fluency now define the modern threat landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Sony targets AI music copyright use

Sony Group has developed technology designed to identify the original sources of music generated by AI. The move comes amid growing concern over the unauthorised use of copyrighted works in AI training.

According to Sony Group, the system can extract data from an underlying AI model and compare generated tracks with original compositions. The process aims to quantify how much specific works contributed to the output.

Composers, songwriters and publishers could use the technology to seek compensation from AI developers if their material was used without permission. Sony said the goal is to help ensure creators are properly rewarded.

Efforts to safeguard intellectual property have intensified across the music industry. Sony Music Entertainment in the US previously filed a copyright infringement lawsuit in 2024 over AI-generated music, underscoring wider tensions around AI and creative rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Commission delays high risk AI guidance

The European Commission has confirmed it will again delay publishing guidance on high-risk AI systems under the EU AI Act. The guidelines were due by 2 February 2026, but will now follow a revised timeline.

According to Euractiv, the document is intended to clarify which AI systems fall into the high-risk category and therefore face stricter obligations. Officials said more time is needed to incorporate significant stakeholder feedback.

The delay marks the second missed deadline and adds to broader implementation setbacks surrounding the EU AI Act. Several member states have yet to designate national enforcement bodies, complicating oversight preparations.

Brussels is also considering postponing the application of high-risk rules through a digital simplification package. Parliament and Council appear supportive of moving the August deadline back by more than a year, easing pressure on companies awaiting guidance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenClaw users face account suspensions under Google AI rules

Google has suspended access to its Antigravity AI platform for numerous OpenClaw users, citing violations of its terms of service. Developers had used OpenClaw’s OAuth plugin to access subsidised Gemini model tokens, triggering backend strain and service degradation.

OpenClaw, launched in November 2025, gained more than 219,000 GitHub stars by enabling local AI agents for tasks such as email management and web browsing. Users authenticated through Antigravity to access advanced Gemini models at reduced cost, bypassing official distribution channels.

Google said the third-party integration powered non-authorised products on Antigravity infrastructure, triggering usage flagged as malicious. In February 2026, AI Ultra subscribers reported 403 errors and account restrictions, with some citing temporary disruptions to Gmail and Workspace.

Varun Mohan of Google DeepMind said the surge had degraded service quality and that enforcement prioritised legitimate users. Limited reinstatement options were offered to those unaware of violations, while capacity constraints were cited as the reason.

The move follows similar restrictions by Anthropic on third-party OAuth usage. Developers are shifting to alternative forks, as debate intensifies over open tooling, platform control, and the risks of agentic AI ecosystems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA drives a new era of industrial AI cybersecurity

AI-driven defences are moving deeper into operational technology as NVIDIA leads a shift toward embedded cybersecurity across critical infrastructure.

The company is partnering with firms such as Akamai Technologies, Forescout, Palo Alto Networks, Siemens and Xage Security to protect energy, manufacturing and transport systems that increasingly operate through cloud-linked environments.

Modernisation has expanded capabilities across these sectors, yet it has widened the gap between evolving threats and ageing industrial defences.

Zero-trust adoption in operational environments is gaining momentum as Forescout and NVIDIA develop real-time verification models tailored to legacy devices and safety-critical processes.

Security workloads run on NVIDIA BlueField hardware to keep protection isolated from industrial systems and avoid any interference with essential operations. That approach enables more precise control over lateral movement across networks without disrupting performance.

Industrial automation is also adapting through Siemens and Palo Alto Networks, which are moving security enforcement closer to workloads at the edge. AI-enabled inspection via BlueField enhances visibility in highly time-sensitive environments, improving reliability and uptime.

Akamai and Xage are extending similar models to energy infrastructure and large-scale operational networks, embedding segmentation and identity-based controls where resilience is most critical.

A coordinated architecture is now emerging in which edge-generated operational data feeds central AI analysis, while enforcement remains local to maintain continuity.

The result is a security model designed to meet the pressures of cyber-physical systems, enabling operators to detect threats faster, reinforce operational stability and protect infrastructure that supports global AI expansion.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!  

Global privacy regulators warn of rising AI deepfake harms

Privacy regulators from around the world have issued a joint warning about the rise of AI-generated deepfakes, arguing that the spread of non-consensual images poses a global risk instead of remaining a problem confined to individual countries.

Sixty-one authorities endorsed a declaration that draws attention to AI images and videos depicting real people without their knowledge or consent.

The signatories highlight the rapid growth of intimate deepfakes, particularly those targeting children and individuals from vulnerable communities. They note that such material often circulates widely on social platforms and may fuel exploitation or cyberbullying.

The declaration argues that the scale of the threat requires coordinated action rather than isolated national responses.

European authorities, including the European Data Protection Board and the European Data Protection Supervisor, support the effort to build global cooperation.

Regulators say that only joint oversight can limit the harms caused by AI systems that generate false depictions, rather than protecting individuals’ privacy as required under frameworks such as the General Data Protection Regulation.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!