Shared code, shared risk: How are security responsibilities allocated?

Cyber stability is increasingly tested by geopolitical fragmentation, rapid technological change, and tightly coupled digital supply chains. Open source software sits at the centre of these dynamics: widely embedded in critical digital infrastructure, globally developed, and governed through models that were not designed for today’s security, policy, and geopolitical pressures.

In 2026, the Geneva Dialogue will focus on stress-testing cybersecurity practices and agreed cyber norms under real-world conditions. Through a scenario-based engagement framework, the Dialogue brings together policymakers, private sector actors, technical communities, and civil society to examine how responsibilities, incentives, and governance arrangements hold up when systems are under strain, with insights from Costin G. Raiu, Mika Lauhde, and Roman Zhukov.

Cybercriminals shift to stolen credentials and AI-enabled attacks

Ransomware attacks are increasingly relying on stolen passwords rather than traditional malware, according to Cloudflare’s latest annual threat report. Attackers now exploit legitimate account credentials to blend into regular traffic, making breaches harder to detect and contain.

Manufacturing and critical infrastructure organisations account for over half of targeted attacks, reflecting their high operational stakes.

Cloudflare highlighted that AI is enabling attackers to prioritise speed and scale over technical sophistication. Generative AI lets criminals automate fraud, hijacking email threads and targeting a ~$49,000 sweet spot to maximise profit while avoiding scrutiny.

Nation-state actors also leverage legitimate platforms for command-and-control operations, with Russia, China, Iran, and North Korea each following distinct cyber strategies.

Researchers warned that modern ransomware is less a malware crisis and more an identity and access challenge. Attackers using authorised credentials can bypass defences and execute high-impact extortion, marking a significant shift in global threat vectors.

The report urges businesses to strengthen identity security, monitor access, and defend against AI-driven attacks that exploit impersonation and automation at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

ClawJacked flaw let attackers hijack AI agents through the browser

A high-severity vulnerability dubbed ‘ClawJacked’ has been discovered in OpenClaw, an open-source AI agent framework that lets developers run autonomous AI assistants locally.

The flaw, uncovered by Oasis Security, allowed malicious websites to silently hijack a user’s local AI agent instance and steal sensitive data, all triggered by a single browser visit.

The attack exploited OpenClaw's local WebSocket gateway, which assumed that traffic from localhost could be trusted. A malicious website could open a WebSocket connection to the gateway, brute-force the password at hundreds of guesses per second (no rate limiting was applied to local connections), and then silently register as a trusted device without any user prompt.
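The trust gap described above is that any web page the user visits can open a connection to 127.0.0.1, so a local gateway cannot treat "the connection came from localhost" as authentication. A minimal sketch of the missing checks (the function and constant names are illustrative, not OpenClaw's actual API):

```python
import time

# Only the agent's own UI origin may connect; any other web page
# sends its own Origin header, which we reject outright.
ALLOWED_ORIGINS = {"http://localhost:3000"}  # assumption: the agent UI's origin
MAX_ATTEMPTS = 5       # password guesses allowed per window
WINDOW_SECONDS = 60

_attempts: dict[str, list[float]] = {}

def handshake_allowed(origin: str, client_ip: str, password_ok: bool) -> bool:
    """Reject cross-origin pages and rate-limit password guesses,
    even when the TCP connection arrives from 127.0.0.1."""
    if origin not in ALLOWED_ORIGINS:
        return False
    now = time.monotonic()
    # Keep only recent attempts from this client, then throttle:
    # hundreds of guesses per second becomes five per minute.
    recent = [t for t in _attempts.get(client_ip, []) if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_ATTEMPTS:
        return False
    recent.append(now)
    _attempts[client_ip] = recent
    return password_ok
```

Either check alone would have blocked the single-visit hijack: the Origin check stops a malicious page from connecting at all, and the rate limit makes brute-forcing the password impractical.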

Once inside, attackers gained admin-level access to the AI agent, connected devices, logs, and configuration data. Oasis Security responsibly disclosed the flaw, and OpenClaw issued a patch within 24 hours, releasing version 2026.2.26.

Security experts are urging organisations to update immediately, audit the permissions held by their AI agents, and apply strict governance policies, treating AI agents as non-human identities that require the same oversight as human users or service accounts.

Why detecting deepfakes is no longer enough to stay secure

Deepfakes and injection attacks are no longer just tools for misinformation; they are now being deployed to break the identity verification systems that underpin banking, hiring, and account access.

Bad actors are targeting the critical moments when a system determines whether someone is a real person, from customer onboarding at banks to remote hiring and account recovery workflows.

Attackers exploit verification systems in two main ways: by using increasingly convincing synthetic faces and voice clones to mimic real people, and by launching injection attacks that substitute fraudulent video into the capture pipeline before it ever reaches the detection system.

According to the Entrust 2026 Identity Fraud Report, deepfakes are now linked to one in five biometric fraud attempts, with injection attacks rising 40% year-on-year.

Experts warn that detecting deepfakes alone is no longer sufficient. Enterprises must validate the whole session, including device integrity and behavioural signals, in real time.
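The layered approach experts describe can be sketched as a policy that requires every signal to pass, so defeating the deepfake detector alone is not enough. The signal names and thresholds below are illustrative assumptions, not figures from the report:

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    liveness_score: float    # 0..1, from face/voice presentation checks
    device_integrity: bool   # e.g. no virtual camera or injection detected
    behaviour_score: float   # 0..1, interaction-pattern consistency

def verify_session(s: SessionSignals) -> bool:
    """Pass only when every layer agrees: an injection attack that
    fools the liveness model still fails the device-integrity check."""
    if not s.device_integrity:
        return False
    return s.liveness_score >= 0.9 and s.behaviour_score >= 0.7
```

The design point is conjunction rather than a single score: each independent layer forces the attacker to defeat a different part of the pipeline.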

Gartner predicts that by 2026, 30% of enterprises will no longer consider face-based identity verification reliable in isolation, given the pace of AI-generated deepfake attacks.

Deepfake scams target Indian and global executives

A deepfake video of Bombay Stock Exchange chief executive Sundararaman Ramamurthy circulated on social media in India, falsely offering stock advice to investors. The exchange moved quickly to report and remove the content, warning the public not to trust fake investment clips.

Cybersecurity experts say such cases are rising sharply, with one US firm estimating a 3,000 percent increase in deepfake incidents over two years. Executives in the US and the UK have also been impersonated using AI-generated audio and video.

In Hong Kong, police said a UK engineering firm lost $25m after an employee joined a video call featuring deepfake versions of senior colleagues. The transfer was made to multiple accounts before the fraud was discovered.

Security companies in the US and the UK are developing detection tools that analyse facial movement and blood flow patterns to identify AI-generated footage. Analysts warn that as costs fall and tools improve, businesses in India, Hong Kong and beyond face an escalating arms race against digital fraud.

Chrome Gemini vulnerability allowed camera and file access

A high-severity vulnerability in Chrome’s integrated Gemini AI assistant exposed users to the potential activation of the camera and microphone, local file access, and phishing attacks. The issue, tracked as CVE-2026-0628, was disclosed by Palo Alto Networks’ Unit 42 and patched by Google in January 2026.

Gemini Live operates as a privileged AI panel embedded within the browser, capable of web page summarisation and task automation. To enable multimodal functionality, the panel is granted elevated permissions, including access to screenshots, local files, and device hardware.

Researchers identified inconsistent handling of the declarativeNetRequest API when gemini.google.com was loaded inside the AI side panel rather than a standard browser tab. While extensions could inject JavaScript in both cases, the panel context inherited browser-level privileges.

A malicious extension exploiting this distinction could hijack the trusted panel and execute arbitrary code with elevated access. Potential impacts included silent activation of a camera or microphone, screenshot capture, local file exfiltration, and high-credibility phishing attacks.

Google released a fix on 5 January 2026 following responsible disclosure. Users running the latest version of Chrome are protected, and organisations are advised to ensure updates are applied across all endpoints.

Europe pressed to slow digital age-verification push amid privacy fears

Hundreds of academics have urged governments to halt plans for mandatory age checks on social media rather than accelerate deployment without assessing the risks.

The warning arrives as several European states consider restrictions on children’s access to online platforms and as companies promote verification tools such as live selfies or uploads of government-issued IDs.

Researchers argue that current systems expose people to privacy breaches, security vulnerabilities and malicious sites that ignore verification rules instead of offering meaningful protection.

They say scientific consensus has not yet formed on the benefits or harms of age-assurance technologies, making large-scale implementation premature and potentially discriminatory.

The letter stresses that any credible system would require cryptographic safeguards for every query, protecting data in transit rather than leaving identity checks to platforms without robust technical guarantees.

Academics believe such infrastructure would be complex to build globally and would create friction that many providers may refuse to adopt.

Concern escalated after early deployments in Italy and France, where verification is already mandatory.

Signatories, including Ronald Rivest and Bart Preneel, warn that governments risk introducing a socially unacceptable system that increases exposure to data misuse instead of ensuring children’s safety online.

Microsoft reveals OAuth redirection abuse powering new phishing attempts

Researchers at Microsoft have identified phishing activity that abuses legitimate OAuth redirection behaviour instead of relying on credential theft.

Threat actors create malicious applications within attacker-controlled tenants and configure redirect pages that lead victims from trusted authentication domains to malware-delivery sites.

The technique has been used against government and public-sector organisations and is designed to bypass email and browser defences by embedding URLs that appear genuine.

The attack begins with lures themed around documents, financial matters or meeting requests, each containing OAuth URLs crafted to trigger silent authentication.

Validation errors, session checks and Conditional Access evaluations provide attackers with information about session status without granting access to tokens, yet still deliver the victim to a malicious landing page.

Once redirected, victims encounter phishing frameworks or are served ZIP files containing shortcut files and HTML-based loaders, which run PowerShell commands that launch system discovery and extract files used for DLL side-loading.

Executing a legitimate process allows a malicious DLL to load unseen, decrypt the final payload and establish a connection to a remote command-and-control server for hands-on keyboard activity.
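Because the flow above abuses legitimate OAuth parameters rather than a software bug, one practical defence is triaging authorization URLs seen in mail or proxy logs. A minimal sketch: flag links whose `redirect_uri` leaves an organisation's known hosts, or that request silent authentication via `prompt=none` (an OpenID Connect parameter). The allow-list and URL here are hypothetical examples:

```python
from urllib.parse import urlparse, parse_qs

# Assumption: the set of redirect hosts the organisation's own apps use.
TRUSTED_REDIRECT_HOSTS = {"portal.example.com"}

def flag_oauth_url(url: str) -> list[str]:
    """Return reasons an OAuth authorization URL looks suspicious."""
    params = parse_qs(urlparse(url).query)
    reasons = []
    redirect = params.get("redirect_uri", [""])[0]
    if redirect and urlparse(redirect).hostname not in TRUSTED_REDIRECT_HOSTS:
        # The link shows a trusted login domain, but the victim is
        # forwarded somewhere else after authentication completes.
        reasons.append(f"redirect_uri leaves trusted hosts: {redirect}")
    if params.get("prompt", [""])[0] == "none":
        reasons.append("silent authentication requested (prompt=none)")
    return reasons
```

This catches the pattern Microsoft describes at the lure stage, before the redirect chain delivers any payload, and complements the consent and governance controls mentioned below.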

Microsoft Entra has removed identified malicious OAuth applications, although related activity continues to appear.

Microsoft emphasises that OAuth redirection follows standards such as RFC 6749 and RFC 9700, meaning attackers are abusing normal protocol behaviour rather than exploiting software vulnerabilities.

Stronger governance of OAuth applications, tighter consent controls and cross-domain monitoring are required to prevent trusted authentication flows from being turned into delivery paths for phishing and malware.

Claude AI experiences temporary global outage

Anthropic’s AI chatbot, Claude, experienced a global outage, leaving users unable to access the platform. Visitors reported error messages indicating the system had broken down, though the company said it was working to resolve the issue.

The Claude API, used by other websites to integrate the chatbot, remained operational. Anthropic confirmed that the outage was limited to the Claude web interface and did not affect other integrations, emphasising that engineers were actively resolving the issue.

The outage, tracked by Down Detector, began around noon in the UK and affected users worldwide. Messages on the platform reassured users that Claude would return soon and that the problem had been identified and was being fixed.

The interruption comes at a sensitive time for Anthropic, as the company navigates heightened attention surrounding access to its Claude AI system. The situation unfolds amid broader discussions about the role of advanced AI tools in defence contexts, with industry players facing increasing scrutiny over their policies and partnerships.

Quantum-safe security upgrades SIM and eSIM cards

Thales has successfully demonstrated a world-first capability that prepares 5G networks for the era of quantum computing. The test proved that SIM and eSIM cards can be remotely upgraded to support post-quantum cryptography, boosting security without disrupting services or user experience.

The breakthrough highlights the potential of crypto-agile networks to evolve securely as quantum threats emerge.

Replacing millions of devices is impractical, so Thales enables operators to deploy quantum-safe algorithms directly to existing devices. Remote upgrades preserve data and connectivity while instantly boosting security, keeping 5G networks resilient and trusted.

The demonstration reinforces Thales’ leadership in post-quantum cryptography, with dedicated research teams developing quantum-resistant methods and contributing to international standards, including NIST initiatives.

Operators can now protect long-term investments, secure critical services, and prepare for the next generation of quantum computing without operational disruptions.

Thales’ approach offers a practical roadmap for telecoms to adopt quantum-safe security today, ensuring continuity, trust, and resilience across mobile networks as digital threats evolve.
