Galaxy S26 series brings powerful AI and privacy features

Samsung Electronics has unveiled the Galaxy S26 series, featuring advanced AI experiences, powerful performance, and an industry-leading camera system designed to simplify everyday smartphone tasks.

The series, which includes the Galaxy S26, S26+, and S26 Ultra, handles complex processes in the background, allowing users to focus on results rather than device operations.

The Galaxy S26 Ultra introduces the world’s first built-in Privacy Display, a redesigned chipset, and improved thermal management. Together, these upgrades enhance AI performance, graphics, and CPU efficiency, while ensuring faster, cooler, and more reliable operation throughout the day.

Photography and videography are also upgraded with wider apertures, Nightography Video, Super Steady video, and AI-powered editing tools that make professional-quality content accessible to all users.

Galaxy AI streamlines daily experiences by proactively suggesting actions, organising information, and automating tasks. Features such as Now Nudge, Now Brief, Circle to Search, and upgraded Bixby allow users to interact naturally with their devices.

Integrated AI agents, including Gemini and Perplexity, support multi-step tasks across apps, from booking services to advanced searches, all with minimal input.

Samsung has embedded multiple layers of security and privacy in the Galaxy S26 series. From AI-powered Call Screening and Privacy Alerts to Knox Vault, Knox Matrix, and post-quantum cryptography, users can control data access and protect personal information.

With long-term security updates, seamless software, and Galaxy Buds4 integration, the S26 series aims to combine performance, convenience, and safety in a single, intuitive device.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Meta's flood of unusable AI abuse tips overwhelms US investigators

Investigators in the US say that AI used by Meta is flooding child protection units with large volumes of unhelpful reports, thereby draining resources rather than assisting ongoing cases.

Officers in the Internet Crimes Against Children network told a New Mexico court that most alerts generated by the company’s platforms lack essential evidence or contain material that is not criminal, leaving teams unable to progress investigations.

Meta rejects the claim that it prioritises profit, stressing its cooperation with law enforcement and highlighting rapid response times to emergency requests.

Its position is challenged by officers who say the volume of AI-generated alerts has doubled since 2024, particularly after the Report Act broadened reporting obligations.

They argue that adolescent conversations and incomplete data now form a sizeable portion of the alerts, while genuine cases of child sexual abuse material are becoming harder to detect.

Internal company documents disclosed at trial show Meta executives raising concerns as early as 2019 about the impact of end-to-end encryption on the firm’s ability to identify child exploitation and support investigators.

Child safety groups have long warned that encryption could limit early detection, even though Meta says it has introduced new tools designed to operate safely within encrypted environments.

The growing influx of unusable tips is taking a heavy toll on investigative teams. Officers in the US say each report must still be reviewed manually, despite the low likelihood of actionable evidence, and this backlog is diminishing morale at a time when they say resources have not kept pace with demand.

They warn that meaningful cases risk being delayed as units struggle with a workload swollen by AI systems tuned to avoid regulatory penalties rather than investigative value.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK enforces mandatory ETA as digital border era begins

Non-visa nationals can no longer enter the UK without digital permission, as the country has begun enforcing the mandatory Electronic Travel Authorisation.

Travellers from 85 nations, including the US, Canada and France, must obtain an ETA before departure; otherwise, airlines will prevent them from boarding rather than allow last-minute checks at the border. The authorisation costs £16 and remains valid for two years or until a passport expires.

British and Irish citizens remain exempt but must present valid proof of status when travelling. Authorities say the scheme brings the UK into line with similar systems used by the US and the EU.

The Home Office emphasises that the measure strengthens border security and supports a modern, efficient entry process designed to benefit both visitors and the wider public.

The requirement also applies to travellers passing through the UK to take connecting flights, reinforcing the shift toward a fully digital immigration system.

Over 19 million people have already used the ETA since its launch in 2023, generating significant revenue that is being reinvested in broader border improvements. Officials argue that the momentum paves the way for a future contactless border, supported by the steady transition from physical documents to eVisas.

From 26 February, Certificates of Entitlement will also be issued digitally, creating a single record that no longer expires with a passport.

Most ETA applications are processed automatically within minutes, allowing short-notice trips to remain possible. However, authorities still recommend applying at least three working days in advance to avoid delays for the small number of cases that require additional review.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UAE builds sovereign financial cloud

The Central Bank of the UAE has partnered with Abu Dhabi-based AI company Core42 to develop a sovereign financial cloud infrastructure in the UAE. The system is designed to ensure data sovereignty and strengthen protection against cyber threats.

According to the Central Bank of the UAE, the platform will operate on a centralised, highly secure and isolated infrastructure. It aims to support continuous financial services while boosting operational agility across the UAE.

The infrastructure will be powered by AI and provide automation and real-time data analysis for licensed institutions in the UAE. It will also enable unified management of multi-cloud services within a single regulatory framework.

Core42, established by G42 in 2023, said finance must remain sovereign as it relies on digital infrastructure. The Central Bank of the UAE described the project as a key pillar of its financial infrastructure transformation programme.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

National security concerns reshape US data policy

US policymakers are increasingly treating personal data as a dual-use asset that carries both economic value and national security risks. Regulators have raised concerns about sensitive information, including geolocation data linked to military personnel.

Measures such as the Protecting Americans' Data from Foreign Adversaries Act of 2024 and the Department of Justice Data Security Program aim to curb misuse by designated foreign adversaries. Both frameworks impose broad restrictions on cross-border data transfers.

Experts warn that compliance remains complex and uncertain, with companies adapting in what one adviser described as 'a fog'. Enforcement signals have already emerged, including a draft noncompliance letter from the Federal Trade Commission and litigation.

Organisations are being urged to integrate national security expertise into privacy and cybersecurity teams. Observers say early preparation is essential as selective enforcement risks increase under strict but evolving US data protection regimes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Microsoft expands Sovereign Cloud with secure offline support for large AI models

Digital sovereignty is gaining urgency as organisations seek infrastructure that remains secure and reliable under strict regulatory conditions.

Microsoft is expanding its Sovereign Cloud to help public bodies, regulated industries and enterprises maintain control of data and operations even when environments must operate without external connectivity.

The updated portfolio allows customers to choose how each workload is governed, rather than relying on a single deployment model.

Azure Local now supports disconnected operations, keeping mission-critical systems running with full Azure governance within sovereign boundaries. Management, policies and workloads stay entirely on site, so services continue during periods of isolation.

Microsoft 365 Local extends the resilience to the productivity layer by enabling Exchange Server, SharePoint Server and Skype for Business Server to run locally, giving teams secure collaboration within the same protected boundary as their infrastructure.

Support for large multimodal AI models is delivered through Foundry Local, which enables advanced inference on customer-controlled hardware using technology from partners such as NVIDIA.

Such an approach helps organisations bring modern AI capabilities into highly restricted environments while preserving control over data, identities and operational procedures.

Microsoft positions it as a unified stack that works across connected, hybrid and fully disconnected modes without increasing operational complexity.

These additions create a framework designed for governments and regulated industries that regard sovereignty as a strategic priority.

With global availability for qualified customers, the Sovereign Cloud aims to preserve continuity, reinforce governance and expand AI capability while keeping every layer of the environment within local control.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

CrowdStrike warns of faster AI driven threats

Cyber adversaries increasingly used AI to accelerate attacks and evade detection in 2025, according to CrowdStrike’s 2026 Global Threat Report. The company described the period as 'the year of the evasive adversary', marked by subtle and rapid intrusions.

The average breakout time for financially motivated intrusions fell to 29 minutes, with the fastest recorded at 27 seconds. CrowdStrike observed an 89 percent rise in attacks by AI-enabled threat actors compared with 2024.

Attackers also targeted AI systems themselves, exploiting GenAI tools at more than 90 organisations through malicious prompt injection. Supply chain compromises and the abuse of valid credentials enabled intrusions to blend into legitimate activity, with most detections classified as malware-free.

China-linked activity rose by 38 percent across sectors, while North Korea-linked incidents increased by 130 percent. CrowdStrike tracked more than 281 adversaries in total, warning that speed, credential abuse, and AI fluency now define the modern threat landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

NVIDIA drives a new era of industrial AI cybersecurity

AI-driven defences are moving deeper into operational technology as NVIDIA leads a shift toward embedded cybersecurity across critical infrastructure.

The company is partnering with firms such as Akamai Technologies, Forescout, Palo Alto Networks, Siemens and Xage Security to protect energy, manufacturing and transport systems that increasingly operate through cloud-linked environments.

Modernisation has expanded capabilities across these sectors, yet it has widened the gap between evolving threats and ageing industrial defences.

Zero-trust adoption in operational environments is gaining momentum as Forescout and NVIDIA develop real-time verification models tailored to legacy devices and safety-critical processes.

Security workloads run on NVIDIA BlueField hardware to keep protection isolated from industrial systems and avoid any interference with essential operations. That approach enables more precise control over lateral movement across networks without disrupting performance.

Industrial automation is also adapting through Siemens and Palo Alto Networks, which are moving security enforcement closer to workloads at the edge. AI-enabled inspection via BlueField enhances visibility in highly time-sensitive environments, improving reliability and uptime.

Akamai and Xage are extending similar models to energy infrastructure and large-scale operational networks, embedding segmentation and identity-based controls where resilience is most critical.

A coordinated architecture is now emerging in which edge-generated operational data feeds central AI analysis, while enforcement remains local to maintain continuity.

The result is a security model designed to meet the pressures of cyber-physical systems, enabling operators to detect threats faster, reinforce operational stability and protect infrastructure that supports global AI expansion.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Anthropic uncovers large-scale AI model theft operations

Three AI laboratories have been found conducting large-scale illicit campaigns to extract capabilities from Anthropic’s Claude AI, the company revealed.

DeepSeek, Moonshot, and MiniMax used around 24,000 fraudulent accounts to generate more than 16 million interactions, violating terms of service and regional access restrictions. The technique, called distillation, trains a weaker model on outputs from a stronger one, speeding AI development.
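The mechanics of distillation are simple to illustrate. The toy sketch below (purely illustrative, with invented data, and unrelated to any lab's actual pipeline) trains a small 'student' classifier to match a frozen 'teacher' model's temperature-softened outputs, which is why large volumes of queried outputs are valuable training data:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax; higher T gives softer targets."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))        # stand-in for queried inputs
W_teacher = rng.normal(size=(8, 4))  # frozen "stronger" model
W_student = np.zeros((8, 4))         # "weaker" model to train

T = 2.0
targets = softmax(X @ W_teacher, T)  # teacher outputs become training labels

losses = []
for step in range(200):
    probs = softmax(X @ W_student, T)
    # Cross-entropy between student probabilities and soft teacher targets
    loss = -np.mean(np.sum(targets * np.log(probs + 1e-12), axis=1))
    losses.append(loss)
    grad = X.T @ (probs - targets) / len(X)  # gradient w.r.t. student weights
    W_student -= 0.5 * grad                  # plain gradient descent step
```

The student never sees ground-truth labels at all; it learns only from the teacher's outputs, which is why distilled models can inherit capabilities without inheriting the original model's safeguards.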

Distilled models obtained in this manner often lack critical safeguards, creating serious national security concerns. Without protections, these capabilities could be integrated into military, intelligence, surveillance, or cyber operations, potentially by authoritarian governments.

The attacks also undermine export controls designed to preserve the competitive edge of US AI technology and could give a misleading impression of foreign labs’ independent AI progress.

Each lab followed coordinated playbooks using proxy networks and large-scale automated prompts to target specific capabilities such as agentic reasoning, coding, and tool use.

Anthropic attributed the campaigns using request metadata, infrastructure indicators, and corroborating observations from industry partners. The investigation detailed how distillation attacks operate from data generation to model launch.

In response, Anthropic has strengthened detection systems, implemented stricter access controls, shared intelligence with other labs and authorities, and introduced countermeasures to reduce the effectiveness of illicit distillation.

The company emphasises that addressing these attacks will require coordinated action across the AI industry, cloud providers, and policymakers to protect frontier AI capabilities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Wikipedia removes Archive.today links

Wikipedia editors have voted to remove all links to Archive.today, citing allegations that the web archive was involved in a distributed denial of service attack.

Editors said Archive.today, which also operates under domains such as archive.is and archive.ph, should not be linked because it allegedly used visitors’ browsers to target blogger Jani Patokallio. The site has also been accused of altering archived pages, raising concerns about reliability.

Archive.today had previously been blacklisted in 2013 before being reinstated in 2016. Wikipedia’s latest guidance calls for replacing Archive.today links with original sources or alternative archives such as the Wayback Machine.

The apparent owner of Archive.today denied wrongdoing in posts linked from the site and suggested the controversy had been exaggerated. Wikipedia editors nevertheless concluded that readers should not be directed to a service facing such allegations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot