OpenClaw exploits spark a major security alert

A wave of coordinated attacks has targeted OpenClaw, the autonomous AI framework that gained rapid popularity after its release in January.

Multiple hacking groups have exploited severe vulnerabilities to steal API keys, extract persistent memory data, and push information-stealing malware to the platform’s expanding user base.

Security analysts have linked more than 30,000 compromised instances to campaigns that intercept messages and deploy malicious payloads through channels such as Telegram.

Much of the damage stems from flaws such as the Remote Code Execution vulnerability CVE-2026-25253, supply chain poisoning, and exposed administrative interfaces. Early attacks centred on the ‘ClawHavoc’ campaign, which disguised malware as legitimate installation tools.

Users who downloaded these scripts inadvertently installed stealers capable of full system compromise, giving attackers the ability to move laterally across enterprise networks rather than remaining confined to a single device.

Further incidents emerged on the OpenClaw marketplace, where backdoored ‘skills’ were published from accounts that appeared reliable. These updates executed remote commands that allowed attackers to siphon OAuth tokens, passwords, and API keys in real time.

A Shodan scan later identified more than 312,000 OpenClaw instances running on a default port with little or no protection, while honeypots recorded hostile activity within minutes of appearing online.

Security researchers argue that the surge in attacks marks a decisive moment for autonomous AI frameworks. As organisations experiment with agents capable of independent decision-making, the absence of security-by-design safeguards is creating opportunities for organised threat groups.

Flare’s advisory urges companies to secure credentials and isolate AI workloads instead of relying on default configurations that expose high-privilege systems to the internet.
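The advisory’s point about internet-exposed defaults can be verified from the outside. Below is a minimal sketch of such a check; the `port_is_exposed` helper is illustrative and not part of any OpenClaw or Flare tooling, and the port number would need to be replaced with the service’s actual listening port:

```python
import socket


def port_is_exposed(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if host:port accepts a plain TCP connection.

    A True result for a public-facing address suggests the service is
    reachable without network-level isolation; it says nothing about
    whether application-level authentication is enabled.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False
```

Running such a probe against a workload’s public IP from an external network is a quick sanity check that a default configuration has not left a high-privilege interface open to the internet.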

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU DSA fine against X heads to court in key test case

X Corp., owned by Elon Musk, has filed an appeal with the General Court of the European Union against a €120 million fine imposed by the European Commission for breaching the Digital Services Act. The penalty, issued in December, marks the first enforcement action under the 2022 law.

The Commission concluded that X violated transparency obligations and misled users through its verification design, arguing that paid blue checkmarks made it harder to assess account authenticity. Officials also cited concerns about advertising transparency and researchers’ access to platform data.

Henna Virkkunen, the EU’s executive vice-president for tech sovereignty, security, and democracy, said deceptive verification and opaque advertising had no place online. The Commission opened its probe in December 2023, examining risk management, moderation practices, and alleged dark patterns.

X Corp. argued that the decision followed an incomplete investigation and a flawed reading of the DSA, citing procedural errors and due-process concerns. It said the appeal could shape future enforcement standards and penalty calculations under the regulation.

The EU is also assessing whether X mitigated systemic risks, including deepfaked content and child sexual abuse material linked to its Grok chatbot. US critics describe DSA enforcement as a threat to free speech, while EU officials say it strengthens accountability across the digital single market.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Claude Code Security by Anthropic aims to detect and patch complex vulnerabilities

Anthropic has introduced Claude Code Security, an AI-powered service that scans software codebases for vulnerabilities and recommends targeted fixes. Built into Claude Code, the capability is rolling out in a limited research preview for Enterprise and Team customers.

The tool analyses code beyond traditional rule-based scanners, examining data flows and component interactions to identify complex, high-severity vulnerabilities. Findings undergo multi-stage verification, receive severity and confidence ratings, and are presented in a dashboard for human review.

Anthropic said the system re-examines its own results to reduce false positives before surfacing them to analysts. Teams can prioritise remediation based on severity ratings and iterate on suggested patches within familiar development workflows.

Claude Code Security builds on more than a year of cybersecurity research. Using Claude Opus 4.6, Anthropic reported discovering more than 500 long-undetected bugs in open-source projects through testing and external partnerships.

The company said AI will increasingly be used to scan global codebases, warning that attackers and defenders alike are adopting advanced models. Open-source maintainers can apply for expedited access as Anthropic expands the preview.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU–US draft data pact allows automated decisions on travellers

A draft data-sharing agreement between the EU and the US Department of Homeland Security would allow automated decisions about European travellers to continue under certain conditions, despite attempts to tighten protections.

The text permits such decisions when authorised under domestic law and relies on safeguards that let individuals request human intervention instead of leaving outcomes entirely to algorithms.

A deal designed to preserve visa-free travel would require national authorities to grant access to biometric databases containing fingerprints and facial scans.

Negotiators are attempting to reconcile the framework with the General Data Protection Regulation, even though the draft states that the new rules would supplement and supersede earlier bilateral arrangements.

Sensitive information, including political views, trade union membership and biometric identifiers, could be transferred as long as protective conditions are applied.

EU countries face a deadline at the end of 2026 to conclude individual agreements, and failure to do so could result in suspension from the US Visa Waiver Program.

A separate clause keeps disputes firmly outside judicial scrutiny by requiring disagreements to be resolved through a Joint Committee instead of national or international courts.

The draft also restricts onward sharing, obliging US authorities to seek explicit consent before passing European-supplied data to third parties.

Further negotiations are expected, with the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs preparing to hold a closed-door review of the talks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU drops revised GDPR personal data definition amid regulatory pressure

Governments across the EU have withdrawn the revised definition of personal data from the GDPR omnibus package, softening earlier proposals that had prompted strong resistance from regulators and civil society.

The decision signals a preference for maintaining the original scope of the General Data Protection Regulation rather than reopening sensitive debates that risked weakening long-standing protections.

Greater attention is now placed on the forthcoming pseudonymisation guidelines prepared by the European Data Protection Board. These guidelines are expected to shape how organisations interpret key safeguards, offering practical direction instead of altering the legal definition of personal data.

The renewed prominence given to the guidance reflects a broader trend within the Council towards regulatory clarity rather than legislative redesign.

The compromise text also maintains links with the wider review of the ePrivacy Directive, keeping future updates aligned with existing digital-rights rules.

Member states appear increasingly cautious about reopening foundational privacy concepts, opting to strengthen enforcement through guidance and implementation rather than altering core definitions in law.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Data breach at PayPal prompts password resets and transaction refunds

PayPal has notified some customers of a data breach linked to its Working Capital loan application, after unauthorised access between 1 July and 12 December 2025 exposed personal information. Letters dated 10 February confirm that around 100 customers were potentially affected.

The incident was linked to an error in the Working Capital application, described as a ‘code change’. PayPal said it ‘terminated the unauthorised access to PayPal’s systems’ after discovery. A spokesperson later stated that systems were not compromised, leaving the extent of exposure unclear.

Data potentially accessed includes names, email addresses, phone numbers, business addresses, Social Security numbers, and dates of birth. PayPal confirmed a small number of unauthorised transactions and said refunds were issued. Affected users had passwords reset and were offered credit monitoring.

Previous incidents include a 2023 credential stuffing attack that affected nearly 35,000 accounts and phishing campaigns that abused legitimate infrastructure. The company said it continues to use manual investigations and automated tools to mitigate fraud.

Customers are advised to use unique passwords, avoid unsolicited links, verify urgent messages directly via their accounts, and enable passkeys where available. Even limited breaches can heighten risks of targeted phishing and identity theft, especially for small businesses.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Turkey reviews children’s data handling as identity checks planned for social platforms

The data protection authority of Turkey has opened a new review into how major social media platforms manage children’s personal data.

The decision places scrutiny on TikTok, Instagram, Facebook, YouTube, X and Discord as Ankara prepares legislation that would expand state authority over digital activity beyond existing rules.

Regulators aim to assess safeguards for children and ensure stronger compliance with local standards.

The ruling party is expected to introduce a family package that would require identity verification for every account through phone numbers or the e-Devlet system. Children under 15 would not be allowed to create profiles and further limits could apply to users under 18.

The proposal would also allow authorities to order the rapid removal of content deemed unlawful without waiting for court approval, while platforms that fail to comply could face penalties such as phased bandwidth reductions.

Rights advocates warn that mandatory verification and broader enforcement powers could reshape online speech across the country. Some argue that linking accounts to verified identities threatens anonymity and could restrict legitimate expression instead of fostering safety.

Turkey has already expanded online oversight since 2016 through laws that increased the government’s ability to block websites, require content removal and oblige major platforms to maintain a legal presence in the country.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Phishing messages target IndiaAI and Impact Summit 2026 participants

IndiaAI has issued an urgent advisory warning of a phishing campaign targeting attendees of the India AI Impact Summit 2026. Fraudulent SMS and WhatsApp messages claim refunds are pending and request sensitive financial details.

Organisers said the messages are not official and have not been authorised. The fraudulent messages urge recipients to click links and provide full card numbers, WhatsApp numbers, and other contact information to ‘process’ refunds.

IndiaAI advised participants not to click suspicious links or share personal or banking information with unverified sources. Attendees in India are encouraged to delete such messages immediately and block the sender’s number.

Anyone who may have submitted details through a suspicious link should contact their bank without delay to secure their accounts. Organisers stressed that event-related communication will only be shared through official channels.

The advisory was issued under the AI Impact Summit 2026 banner, themed ‘Welfare for All | Happiness of All’, as authorities seek to prevent financial fraud linked to the high-profile gathering.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Strict ban on crypto references introduced by OpenClaw

OpenClaw has introduced a firm community rule prohibiting any reference to Bitcoin or other cryptocurrencies on its Discord server, according to its creator, Peter Steinberger.

Enforcement drew attention after a user was removed for mentioning Bitcoin block height as a timing method in a benchmark, with the developer later offering to restore access.

The policy follows a rebrand scare in which scammers hijacked old accounts to promote a fake Solana token. The token’s market value spiked and then plunged after Steinberger denied involvement, warning that no official token would be issued.

The ban contrasts with wider industry momentum linking AI agents and digital assets, even as the open-source project has attracted a large developer base within weeks of launch.

Leaders such as Jeremy Allaire of Circle argue stablecoins could become default payment rails for autonomous software, while Coinbase is already rolling out infrastructure enabling agents to transact on-chain.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI presents the biggest data-risk challenge in history

Cybersecurity specialists warn that generative AI systems, such as large language models, are creating a data risk frontier far larger than that posed by previous digital innovations.

Because these models are trained on extensive datasets drawn from web pages, internal documents, email corpora and proprietary sources, they can unintentionally memorise or regenerate sensitive information, increasing the risk of exposure.

The article highlights several core concerns. The first is data leakage and memorisation: AI models can repeat or infer private data if training processes are not tightly controlled.

The second is the amplification of poor hygiene: generative tools can magnify the reach of bad actors by automating phishing, social engineering, and malware generation at scale.

The third is compounding breach impact: if an AI model is trained on stolen or leaked data, it could internalise and regurgitate that information without detection, entrenching the harm. Finally, gaps in cloud and access governance mean organisations that adopt AI without robust access controls and encryption may widen their attack surface.

The author calls for revised data governance frameworks, including strict training data provenance, auditability, encryption, minimisation and purpose limitation, to mitigate what is described as ‘the biggest data risk in history.’
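The minimisation principle can be made concrete at the point where text enters a training corpus. The sketch below is purely illustrative (the `minimise` helper and its two regex patterns are assumptions, not a production PII detector, which would need far more robust matching): it replaces obvious identifiers with typed placeholders before data is stored:

```python
import re

# Illustrative patterns only -- a real pipeline would need far broader
# and more carefully validated PII detection than two regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def minimise(text: str) -> str:
    """Replace matched identifiers with typed placeholders so the raw
    values never enter the training corpus."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Applied before ingestion, a step like this reduces what a model can memorise in the first place, complementing the provenance, auditability, and encryption measures the author recommends.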

Recommendations also include accountability measures for models, continuous monitoring, and legislative action to align AI development with privacy and security principles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!