Data breach at PayPal prompts password resets and transaction refunds

PayPal has notified some customers of a data breach linked to its Working Capital loan application, after unauthorised access between 1 July and 12 December 2025 exposed personal information. Letters dated 10 February confirm that around 100 customers were potentially affected.

The incident was linked to an error in the Working Capital application, described as a ‘code change’. PayPal said it ‘terminated the unauthorised access to PayPal’s systems’ after discovery.

In a statement sent following publication, a PayPal spokesperson said ‘When there is a potential exposure of customer information, PayPal is required to notify affected customers. In this case, PayPal’s systems were not compromised. As such, we contacted the approximately 100 customers who were potentially impacted to provide awareness on this matter.’

Data potentially accessed includes names, email addresses, phone numbers, business addresses, Social Security numbers, and dates of birth. PayPal confirmed a small number of unauthorised transactions and said refunds were issued. Affected users had passwords reset and were offered credit monitoring.

Previous incidents include a 2023 credential stuffing attack that affected nearly 35,000 accounts and phishing campaigns that abused legitimate infrastructure. The company said it continues to use manual investigations and automated tools to mitigate fraud.

Customers are advised to use unique passwords, avoid unsolicited links, verify urgent messages directly via their accounts, and enable passkeys where available. Even limited breaches can heighten risks of targeted phishing and identity theft, especially for small businesses.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Turkey reviews children’s data handling as identity checks planned for social platforms

The data protection authority of Turkey has opened a new review into how major social media platforms manage children’s personal data.

The decision places scrutiny on TikTok, Instagram, Facebook, YouTube, X and Discord as Ankara prepares legislation that would expand state authority over digital activity beyond existing rules.

Regulators aim to assess safeguards for children and ensure stronger compliance with local standards.

The ruling party is expected to introduce a family package that would require identity verification for every account through phone numbers or the e-Devlet system. Children under 15 would not be allowed to create profiles and further limits could apply to users under 18.

The proposal would also allow authorities to order the rapid removal of content deemed unlawful without waiting for court approval, while platforms that fail to comply could face penalties such as phased bandwidth reductions.

Rights advocates warn that mandatory verification and broader enforcement powers could reshape online speech across the country. Some argue that linking accounts to verified identities threatens anonymity and could restrict legitimate expression instead of fostering safety.

Turkey has already expanded online oversight since 2016 through laws that increased the government’s ability to block websites, require content removal and oblige major platforms to maintain a legal presence in the country.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare outage causes global internet disruption after an internal error

A major outage on 20 February disrupted global internet traffic after an internal configuration failure at Cloudflare caused the unintended withdrawal of customer BGP routes.

The incident lasted just over six hours and left numerous services unreachable, despite early fears of a cyberattack. An internal update led to the systematic deletion of more than a thousand Bring Your Own IP prefixes, which pushed many connections into BGP path hunting instead of stable routing.

Engineers traced the disruption to an error in the company’s Addressing API, introduced during an automated cleanup task under the Code Orange resilience programme.

A flawed query interpreted an empty value as an instruction to delete all returned prefixes, removing essential bindings for hundreds of customers. Some users restored connectivity through the dashboard, while others required manual reconstruction carried out across the edge network.
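The failure mode described here, an empty selection treated as "match everything", is a common API pitfall. The sketch below is hypothetical and does not reflect Cloudflare's actual Addressing API code; it only illustrates how an empty input can silently become a bulk delete, and how a simple guard prevents it:

```python
def delete_prefixes(bindings, prefixes_to_remove):
    """Buggy variant: an empty selection list deletes every binding.

    Hypothetical illustration of the failure mode described above,
    not Cloudflare's actual code.
    """
    if not prefixes_to_remove:
        # BUG: empty input is interpreted as "all prefixes selected"
        return []
    return [b for b in bindings if b not in prefixes_to_remove]


def delete_prefixes_safe(bindings, prefixes_to_remove):
    """Guarded variant: refuse to run a bulk delete on an empty selection."""
    if not prefixes_to_remove:
        raise ValueError("refusing bulk delete: empty prefix selection")
    return [b for b in bindings if b not in prefixes_to_remove]
```

The guard is the kind of "circuit breaker for abnormal deletion patterns" the company says it now plans to add: destructive operations should fail closed when their selection criteria are empty or unexpectedly broad.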

The outage affected a series of core offerings, including content delivery, security layers, dedicated egress and network protection services. Restoration took several hours because the withdrawn prefixes varied in severity, requiring different recovery methods rather than a uniform reinstatement process.

The error triggered widespread timeouts on dependent websites and applications, along with 403 responses on the 1.1.1.1 DNS resolver.

Cloudflare plans to introduce stricter API validation, circuit breakers for abnormal deletion patterns, and improved configuration separation. It has also issued a public apology for a failure that undermined its assurances of network resilience.

The event reaffirmed the risks posed by internal automation faults when they interact with critical internet infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Phishing messages target IndiaAI and Impact Summit 2026 participants

IndiaAI has issued an urgent advisory warning of a phishing campaign targeting attendees of the India AI Impact Summit 2026. Fraudulent SMS and WhatsApp messages claim refunds are pending and request sensitive financial details.

Organisers said the messages are not official and have not been authorised. The fraudulent messages urge recipients to click links and provide full card numbers, WhatsApp numbers, and other contact information to ‘process’ refunds.

IndiaAI advised participants not to click suspicious links or share personal or banking information with unverified sources. Attendees in India are encouraged to delete such messages immediately and block the sender’s number.

Anyone who may have submitted details through a suspicious link should contact their bank without delay to secure their accounts. Organisers stressed that event-related communication will only be shared through official channels.

The advisory was issued under the AI Impact Summit 2026 banner, themed ‘Welfare for All | Happiness of All’, as authorities seek to prevent financial fraud linked to the high-profile gathering.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Saudi Arabia steps into global AI leadership to shape AI future

The Global Partnership on Artificial Intelligence (GPAI), a multilateral initiative hosted by the OECD and launched by the G7, has officially welcomed Saudi Arabia as a new member. The move reflects the Kingdom’s commitment to shaping global AI governance and ethical technology use.

Accession is led by the Saudi Data and Artificial Intelligence Authority and supported by Crown Prince Mohammed bin Salman. Joining GPAI aligns with Vision 2030, which aims to localise advanced technologies and boost the digital economy’s contribution to GDP.

Through membership in GPAI, which unites over 40 countries, Saudi Arabia will help establish international AI standards, promote human-centric and responsible AI development, and strengthen global cooperation in the sector.

Officials also anticipate that the move will attract high-quality international investment, leveraging the Kingdom’s expanding regulatory framework and growing AI and data ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI presents the biggest data-risk challenge in history

Cybersecurity specialists warn that generative AI systems, such as large language models, are creating a data risk frontier far larger than that posed by previous digital innovations.

Because these models are trained on extensive datasets drawn from web pages, internal documents, email corpora and proprietary sources, they can unintentionally memorise or regenerate sensitive information, increasing the risk of exposure.

The article highlights several core concerns. The first is data leakage and memorisation: AI models can repeat or infer private data if training processes are not tightly controlled.

The second is amplification of poor hygiene: generative tools can magnify the reach of bad actors by automating phishing, social engineering, and malware generation at scale.

The third is compounding breach impact: if an AI model is trained on stolen or leaked data, it could internalise and regurgitate that information without detection, entrenching harm. Finally, cloud and access governance gaps widen an organisation's attack surface when AI is adopted without robust access controls and encryption.

The author calls for revised data governance frameworks, including strict training data provenance, auditability, encryption, minimisation and purpose limitation, to mitigate what is described as ‘the biggest data risk in history.’

Recommendations also include accountability measures for models, continuous monitoring, and legislative action to align AI development with privacy and security principles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Altman urges urgent AI regulation

OpenAI chief Sam Altman has called for urgent global regulation of AI, speaking at the AI Impact Summit in New Delhi. Addressing leaders and executives, he said the rapid pace of development demands coordinated international oversight.

Altman suggested creating a body similar to the International Atomic Energy Agency to oversee advanced AI systems. He warned that highly capable open-source biomodels could pose serious biosecurity risks if misused.

He argued that democratising AI is essential to prevent power from being concentrated in a single company or country, and added that safeguards are urgently required even as the technology continues to disrupt labour markets.

During the summit, Altman said ChatGPT has 100 million weekly users in India, more than a third of them students. OpenAI also announced plans with Tata Consultancy Services to build data centre infrastructure in India.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fake Google Forms phishing campaign targets job seekers

A phishing campaign is targeting job seekers with fake Google Forms pages designed to harvest account credentials. Attackers are using a spoofed domain, forms.google.ss-o[.]com, to mimic the legitimate Google Forms service and trick victims into signing in.

The fraudulent pages advertise a Customer Support Executive role and prompt applicants to enter personal details before clicking a ‘Sign in’ button. Victims are then redirected to id-v4[.]com/generation.php, a domain previously linked to credential harvesting campaigns.

Researchers identified the operation as part of a broader wave of job-themed phishing attacks. The attackers used a script called generation_form.php to create personalised tracking links and implemented redirects to evade security analysis by sending suspicious visitors to local Google search pages.

Security experts warn that the campaign relies on domain impersonation techniques, including the use of ‘ss-o’ to resemble ‘single sign-on’. The fake site reproduces Google branding elements and standard disclaimers to increase credibility.
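The impersonation works because casual checks look for a familiar brand name anywhere in the address, while the browser actually connects to the registered domain at the end of the hostname. A minimal sketch (the allow-list of hostnames here is illustrative, not an official Google list) shows why exact hostname comparison defeats the lookalike while a substring check does not:

```python
from urllib.parse import urlparse

# Illustrative allow-list; production code should match against known-good
# hostnames or use a maintained public-suffix library, not substring checks.
LEGITIMATE_HOSTS = {"docs.google.com", "forms.gle", "accounts.google.com"}


def looks_legitimate(url: str) -> bool:
    """Exact hostname comparison against an allow-list."""
    host = (urlparse(url).hostname or "").lower()
    return host in LEGITIMATE_HOSTS


def naive_check(url: str) -> bool:
    """Substring check, easily fooled by lookalike subdomains."""
    return "google" in url
```

The spoofed address contains "forms.google" as a subdomain, but its registered domain is ss-o.com, so the naive check passes it while the exact-host check rejects it.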

Users are advised to avoid clicking unsolicited job links, verify opportunities through official channels, and enable multi-factor authentication. Password managers and real-time anti-malware tools can also reduce exposure to credential theft.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EVMbench from OpenAI, Paradigm and OtterSec measures AI smart contract risks

OpenAI, with Paradigm and OtterSec, introduced EVMbench to test how AI agents detect, patch, and exploit smart contract flaws. The benchmark draws on 120 real vulnerabilities from 40 blockchain projects to better reflect live conditions.

Researchers report that leading agents can now discover and exploit vulnerabilities end-to-end in live blockchain instances. Over six months, exploit success rates rose sharply, prompting both praise for improved auditing capabilities and concern over the rapid scaling of offensive skills.

EVMbench evaluates agents across three modes: detect, patch, and exploit. Each stage reflects increasing technical complexity and mirrors the responsibilities faced in production blockchain environments, where contracts are often immutable, and errors can lead to irreversible losses.

Recent incidents underline the stakes. A vulnerability in AI-generated Solidity code reportedly mispriced an asset, triggering liquidations and losses. Such cases highlight the risks of deploying AI-written financial logic without rigorous human review and governance safeguards.

While EVMbench advances the measurement of AI capabilities, it remains limited to curated vulnerabilities and sandboxed conditions. As blockchain adoption expands and criminal misuse evolves, researchers stress the need for responsible AI development alongside stronger smart contract security practices.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Lithuania selects Swiss firm Procivis for national eIDAS 2.0 wallet sandbox

Swiss firm Procivis has secured a contract to deliver Lithuania’s end-to-end Digital Identity Wallet sandbox, supporting the country’s preparations under eIDAS 2.0. The project will establish a national testbed for digital ID use cases and interoperability across the European Union.

Selected by Lithuania’s digitalisation agency, Procivis will build a platform for public authorities and relying parties to test secure digital wallet use cases. The sandbox will validate readiness ahead of the EU’s 2027 digital identity wallet deadline.

The updated eIDAS 2.0 technical framework sets out how wallets will store and share trusted digital credentials and electronic identification. Governments and private organisations will be able to integrate services into the wallets, streamlining authentication, onboarding, and cross-border access.

Across Lithuania and the EU, testbeds and large-scale pilots have been central to turning regulatory requirements into interoperable infrastructure. Lithuania’s sandbox will also support activities under the EU’s LSP Aptitude consortium, which is testing cross-sector digital identity solutions.

Procivis said the collaboration aims to accelerate practical validation while ensuring compliance with European standards on security, interoperability, and data protection. The company stated that supporting a timely, budget-aligned implementation of eIDAS 2.0 remains central to its mission.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!