Claude Code Security by Anthropic aims to detect and patch complex vulnerabilities

Anthropic has introduced Claude Code Security, an AI-powered service that scans software codebases for vulnerabilities and recommends targeted fixes. Built into Claude Code, the capability is rolling out in a limited research preview for Enterprise and Team customers.

The tool analyses code more thoroughly than traditional rule-based scanners, examining data flows and component interactions to identify complex, high-severity vulnerabilities. Findings undergo multi-stage verification, receive severity and confidence ratings, and are presented in a dashboard for human review.

Anthropic said the system re-examines its own results to reduce false positives before surfacing them to analysts. Teams can prioritise remediation based on severity ratings and iterate on suggested patches within familiar development workflows.

Claude Code Security builds on more than a year of cybersecurity research. Using Claude Opus 4.6, Anthropic reported discovering more than 500 long-undetected bugs in open-source projects through testing and external partnerships.

The company said AI will increasingly be used to scan global codebases, warning that attackers and defenders alike are adopting advanced models. Open-source maintainers can apply for expedited access as Anthropic expands the preview.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU–US draft data pact allows automated decisions on travellers

A draft data-sharing agreement between the EU and the US Department of Homeland Security would allow automated decisions about European travellers to continue under certain conditions, despite attempts to tighten protections.

The text permits such decisions when authorised under domestic law and relies on safeguards that let individuals request human intervention instead of leaving outcomes entirely to algorithms.

The deal, designed to preserve visa-free travel, would require national authorities to grant access to biometric databases containing fingerprints and facial scans.

Negotiators are attempting to reconcile the framework with the General Data Protection Regulation, even though the draft states that the new rules would supplement and supersede earlier bilateral arrangements.

Sensitive information, including political views, trade union membership and biometric identifiers, could be transferred as long as protective conditions are applied.

EU countries face a deadline at the end of 2026 to conclude individual agreements, and failure to do so could result in suspension from the US Visa Waiver Program.

A separate clause keeps disputes firmly outside judicial scrutiny by requiring disagreements to be resolved through a Joint Committee instead of national or international courts.

The draft also restricts onward sharing, obliging US authorities to seek explicit consent before passing European-supplied data to third parties.

Further negotiations are expected, with the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs preparing to hold a closed-door review of the talks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU drops revised GDPR personal data definition amid regulatory pressure

Governments across the EU have withdrawn the revised definition of personal data from the GDPR omnibus package, softening earlier proposals that had prompted strong resistance from regulators and civil society.

The decision signals a preference for maintaining the original scope of the General Data Protection Regulation instead of reopening sensitive debates that risked weakening long-standing protections.

Greater attention is now placed on the forthcoming pseudonymisation guidelines prepared by the European Data Protection Board. These guidelines are expected to shape how organisations interpret key safeguards, offering practical direction instead of altering the legal definition of personal data.

The greater prominence given to the guidance reflects a broader trend within the Council towards regulatory clarity rather than legislative redesign.

The compromise text also maintains links with the wider review of the ePrivacy Directive, keeping future updates aligned with existing digital-rights rules.

Member states appear increasingly cautious about reopening foundational privacy concepts, opting to strengthen enforcement through guidance and implementation rather than altering core definitions in law.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Data breach at PayPal prompts password resets and transaction refunds

PayPal has notified some customers of a data breach linked to its Working Capital loan application, after unauthorised access between 1 July and 12 December 2025 exposed personal information. Letters dated 10 February confirm that around 100 customers were potentially affected.

The incident was linked to an error in the Working Capital application, described as a ‘code change’. PayPal said it ‘terminated the unauthorised access to PayPal’s systems’ after discovery.

In a statement sent following publication, a PayPal spokesperson said ‘When there is a potential exposure of customer information, PayPal is required to notify affected customers. In this case, PayPal’s systems were not compromised. As such, we contacted the approximately 100 customers who were potentially impacted to provide awareness on this matter.’

Data potentially accessed includes names, email addresses, phone numbers, business addresses, Social Security numbers, and dates of birth. PayPal confirmed a small number of unauthorised transactions and said refunds were issued. Affected users had passwords reset and were offered credit monitoring.

Previous incidents include a 2023 credential stuffing attack that affected nearly 35,000 accounts and phishing campaigns that abused legitimate infrastructure. The company said it continues to use manual investigations and automated tools to mitigate fraud.

Customers are advised to use unique passwords, avoid unsolicited links, verify urgent messages directly via their accounts, and enable passkeys where available. Even limited breaches can heighten risks of targeted phishing and identity theft, especially for small businesses.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Turkey reviews children’s data handling as identity checks planned for social platforms

The data protection authority of Turkey has opened a new review into how major social media platforms manage children’s personal data.

The decision places scrutiny on TikTok, Instagram, Facebook, YouTube, X and Discord as Ankara prepares legislation that would expand state authority over digital activity instead of relying on existing rules alone.

Regulators aim to assess safeguards for children and ensure stronger compliance with local standards.

The ruling party is expected to introduce a family package that would require identity verification for every account through phone numbers or the e-Devlet system. Children under 15 would not be allowed to create profiles and further limits could apply to users under 18.

The proposal would also allow authorities to order the rapid removal of content deemed unlawful without waiting for court approval, while platforms that fail to comply may face penalties such as phased bandwidth reductions.

Rights advocates warn that mandatory verification and broader enforcement powers could reshape online speech across the country. Some argue that linking accounts to verified identities threatens anonymity and could restrict legitimate expression instead of fostering safety.

Turkey has already expanded online oversight since 2016 through laws that increased the government’s ability to block websites, require content removal and oblige major platforms to maintain a legal presence in the country.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare outage causes global internet disruption after an internal error

A major outage on 20 February disrupted global internet traffic after an internal configuration failure at Cloudflare caused the unintended withdrawal of customer BGP routes.

The incident lasted just over six hours and left numerous services unreachable; despite early fears, no cyberattack was involved. An internal update led to the systematic deletion of more than a thousand Bring Your Own IP prefixes, pushing many connections into BGP path hunting instead of stable routing.

Engineers traced the disruption to an error in the company’s Addressing API, introduced during an automated cleanup task under the Code Orange resilience programme.

A flawed query interpreted an empty value as an instruction to delete all returned prefixes, removing essential bindings for hundreds of customers. Some users restored connectivity through the dashboard, while others required manual reconstruction carried out across the edge network.
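The failure mode described above, an empty filter treated as "match everything", is a classic API pitfall. The sketch below is hypothetical illustration, not Cloudflare's actual Addressing API code: it shows how a deletion query built from an empty id list can select every row, and the obvious guard that refuses the bulk delete instead.

```python
def select_for_deletion_buggy(rows, stale_ids):
    """Naive filter: an empty stale_ids set makes the condition
    vacuously true, so every prefix is queued for withdrawal."""
    return [r for r in rows if not stale_ids or r["id"] in stale_ids]

def select_for_deletion_safe(rows, stale_ids):
    """Guarded variant: an empty filter should delete nothing,
    and an attempt to run one should fail loudly."""
    if not stale_ids:
        raise ValueError("refusing bulk delete: empty id filter")
    return [r for r in rows if r["id"] in stale_ids]
```

Cloudflare's planned circuit breakers for abnormal deletion patterns are a systemic version of the same guard: a cleanup job that suddenly matches orders of magnitude more records than usual should halt rather than proceed.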

The outage affected a series of core offerings, including content delivery, security layers, dedicated egress and network protection services. Restoration took several hours because the withdrawn prefixes varied in severity, demanding different recovery methods instead of a uniform reinstatement process.

The error triggered widespread timeouts on dependent websites and applications, along with 403 responses on the 1.1.1.1 DNS resolver.

Cloudflare plans to introduce stricter API validation, circuit breakers for abnormal deletion patterns, and improved configuration separation. It has also issued a public apology for a failure that undermined its assurances of network resilience.

The event reaffirmed the risks posed by internal automation faults when they interact with critical internet infrastructure.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Strict ban on crypto references introduced by OpenClaw

OpenClaw has introduced a firm community rule prohibiting any reference to Bitcoin or other cryptocurrencies on its Discord server, according to its creator, Peter Steinberger.

Enforcement drew attention after a user was removed for mentioning Bitcoin block height as a timing method in a benchmark, with the developer later offering to restore access.

The policy follows a rebrand scare in which scammers hijacked old accounts to promote a fake Solana token. The token's market value spiked, then plunged after Steinberger denied involvement and warned that no official token would be issued.

Rapid growth of the open-source project, which has attracted a large developer base within weeks of launch, contrasts with wider industry momentum linking AI agents and digital assets.

Leaders such as Jeremy Allaire of Circle argue stablecoins could become default payment rails for autonomous software, while Coinbase is already rolling out infrastructure enabling agents to transact on-chain.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Fake Google Forms phishing campaign targets job seekers

A phishing campaign is targeting job seekers with fake Google Forms pages designed to harvest account credentials. Attackers are using a spoofed domain, forms.google.ss-o[.]com, to mimic the legitimate Google Forms service and trick victims into signing in.

The fraudulent pages advertise a Customer Support Executive role and prompt applicants to enter personal details before clicking a ‘Sign in’ button. Victims are then redirected to id-v4[.]com/generation.php, a domain previously linked to credential harvesting campaigns.

Researchers identified the operation as part of a broader wave of job-themed phishing attacks. The attackers used a script called generation_form.php to create personalised tracking links and implemented redirects to evade security analysis by sending suspicious visitors to local Google search pages.

Security experts warn that the campaign relies on domain impersonation techniques, including the use of ‘ss-o’ to resemble ‘single sign-on’. The fake site reproduces Google branding elements and standard disclaimers to increase credibility.
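The "ss-o" trick works because only the registrable tail of a hostname identifies its owner: forms.google.ss-o[.]com is controlled by whoever registered ss-o.com, not by Google, however convincing the leading labels look. A simplified check of this idea is sketched below (a hypothetical helper; a robust implementation would consult the Public Suffix List rather than assume the trusted domain is a two-label suffix):

```python
from urllib.parse import urlparse

def belongs_to(url: str, trusted_domain: str = "google.com") -> bool:
    """Return True only if the URL's host is the trusted domain itself
    or a genuine subdomain of it. Lookalike labels earlier in the host
    (e.g. 'forms.google.' in forms.google.ss-o.com) do not count."""
    host = (urlparse(url).hostname or "").lower()
    return host == trusted_domain or host.endswith("." + trusted_domain)
```

With this check, a URL whose host ends in ss-o.com fails even though it begins with "forms.google.", while a real forms.google.com address passes.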

Users are advised to avoid clicking unsolicited job links, verify opportunities through official channels, and enable multi-factor authentication. Password managers and real-time anti-malware tools can also reduce exposure to credential theft.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK sets 48-hour deadline for removing intimate images

The UK government plans to require technology platforms to remove intimate images shared without consent within forty-eight hours instead of allowing such content to remain online for days.

Through an amendment to the Crime and Policing Bill, firms that fail to comply could face fines amounting to ten percent of their global revenue or risk having their services blocked in the UK.

The move reflects ministers’ commitment to treating intimate image abuse with the same seriousness as child sexual abuse material and extremist content.

The action follows mounting concern after non-consensual sexual deepfakes produced by Grok circulated widely, prompting investigations by Ofcom and political pressure on platforms owned by Elon Musk.

The government now intends victims to report an image once instead of repeating the process across multiple services. Once flagged, the content should disappear across all platforms and be blocked automatically on future uploads through hash-matching or similar detection tools.
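Hash-matching works by fingerprinting a reported image once and comparing every subsequent upload against the stored fingerprints. The sketch below is a deliberately simplified illustration: production systems use perceptual hashes (such as PhotoDNA or PDQ) that survive resizing and re-encoding, whereas the plain cryptographic hash used here only catches byte-identical copies.

```python
import hashlib

# Fingerprints of images reported once by a victim (hypothetical store;
# real deployments share such lists across platforms).
blocked_hashes: set[str] = set()

def report_image(image_bytes: bytes) -> None:
    """Record a fingerprint so future uploads of the image are blocked."""
    blocked_hashes.add(hashlib.sha256(image_bytes).hexdigest())

def upload_allowed(image_bytes: bytes) -> bool:
    """Reject any upload whose fingerprint matches a reported image."""
    return hashlib.sha256(image_bytes).hexdigest() not in blocked_hashes
```

Because matching happens at upload time, a single report can block re-uploads automatically, which is exactly the "report once, blocked everywhere" outcome the government describes.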

Ministers also aim to address content hosted outside the reach of the Online Safety Act by issuing guidance requiring internet providers to block access to sites that refuse to comply.

Keir Starmer, Liz Kendall and Alex Davies-Jones emphasised that no woman should be forced to pursue platform after platform to secure removal and that the online environment must offer safety and respect.

The package of reforms forms part of a broader pledge to halve violence against women and girls during the next decade.

Alongside tackling intimate image abuse, the government is legislating against nudification tools and ensuring AI chatbots fall within regulatory scope, using this agenda to reshape online safety instead of relying on voluntary compliance from large technology firms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Reload launches Epic to bring shared memory and structure to AI agents

Founders of the Reload platform say AI is moving from simple automation toward something closer to teamwork.

Newton Asare and Kiran Das noticed that AI agents were completing tasks normally handled by employees, which pushed them to design a system that treats digital workers as part of a company’s structure instead of disposable tools.

Their platform, Reload, offers a way for organisations to manage these agents across departments, assign responsibilities and monitor performance. The firm has secured $2.275 million in new funding led by Anthemis, with several other investors joining the round.

The shift toward agent-driven development exposed a recurring limitation. Most agents retain only short-term memory, which means they often lose context about a product or forget why a task matters.

Reload’s answer is Epic, a new product built on its platform that acts as an architect alongside coding agents. Epic defines requirements and constraints at the start of a project, then continuously preserves the shared understanding that agents need as software evolves.

Epic integrates with popular AI-assisted code editors such as Cursor and Windsurf, allowing developers to keep a consistent system memory without changing their workflow.

The tool generates key project artefacts from the outset, including data models and technical decisions, then carries them forward even when teams switch agents. It creates a single source of truth so that engineers and digital workers develop against the same structure.

Competing systems such as LangChain and CrewAI also offer support for managing agents, but Reload argues that Epic’s ability to maintain project-level context sets it apart.

Asare and Das, who already built and sold a previous company together, plan to use the fresh capital to grow their team and expand the infrastructure needed for a future in which human workers manage AI employees instead of the other way around.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!