Data breach at PayPal prompts password resets and transaction refunds

PayPal has notified some customers of a data breach linked to its Working Capital loan application, after unauthorised access between 1 July and 12 December 2025 exposed personal information. Letters dated 10 February confirm that around 100 customers were potentially affected.

PayPal attributed the incident to an error in the Working Capital application introduced by a ‘code change’. The company said it ‘terminated the unauthorised access to PayPal’s systems’ after discovery.

In a statement sent following publication, a PayPal spokesperson said ‘When there is a potential exposure of customer information, PayPal is required to notify affected customers. In this case, PayPal’s systems were not compromised. As such, we contacted the approximately 100 customers who were potentially impacted to provide awareness on this matter.’

Data potentially accessed includes names, email addresses, phone numbers, business addresses, Social Security numbers, and dates of birth. PayPal confirmed a small number of unauthorised transactions and said refunds were issued. Affected users had passwords reset and were offered credit monitoring.

Previous incidents include a 2023 credential stuffing attack that affected nearly 35,000 accounts and phishing campaigns that abused legitimate infrastructure. The company said it continues to use manual investigations and automated tools to mitigate fraud.

Customers are advised to use unique passwords, avoid unsolicited links, verify urgent messages directly via their accounts, and enable passkeys where available. Even limited breaches can heighten risks of targeted phishing and identity theft, especially for small businesses.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Phishing messages target IndiaAI and Impact Summit 2026 participants

IndiaAI has issued an urgent advisory warning of a phishing campaign targeting attendees of the India AI Impact Summit 2026. Fraudulent SMS and WhatsApp messages claim refunds are pending and request sensitive financial details.

Organisers said the messages are not official and have not been authorised. The fraudulent messages urge recipients to click links and provide full card numbers, WhatsApp numbers, and other contact information to ‘process’ refunds.

IndiaAI advised participants not to click suspicious links or share personal or banking information with unverified sources. Attendees in India are encouraged to delete such messages immediately and block the sender’s number.

Anyone who may have submitted details through a suspicious link should contact their bank without delay to secure their accounts. Organisers stressed that event-related communication will only be shared through official channels.

The advisory was issued under the AI Impact Summit 2026 banner, themed ‘Welfare for All | Happiness of All’, as authorities seek to prevent financial fraud linked to the high-profile gathering.

Digital addiction in Italy sparks debate over social media bans

Italy has warned that digital addiction among teenagers is rising sharply, as health authorities link excessive social media and gaming use to family and educational challenges. Officials say bans alone will not resolve the issue.

According to Italy’s National Institute of Health, about 100,000 young people aged 15 to 18 are at risk of social media addiction. A further 500,000 are estimated to suffer from gaming disorder, recognised by the World Health Organisation as a medical condition.

A survey by digital ethics group Social Warning found that 77 percent of Italian teenagers consider themselves addicted to their devices. However, many say they lack the tools or support to change their behaviour.

Research by ‘Con i Bambini’, which funds projects tackling educational poverty in Italy, links digital dependency to isolation and strained parental relationships. The organisation says legislative measures can protect minors but cannot replace structured education and family support.

The debate extends across the EU. The European Parliament has called for a minimum age of 16 for social media platforms, while France, Italy, and Spain are considering national restrictions. Experts argue that prevention and digital literacy must complement regulation.

Social media ban for children gains momentum in Germany

Germany’s coalition government is weighing new restrictions on children’s access to social media as both governing parties draft proposals to tighten online safeguards. The debate comes amid broader economic pressures, with industry reporting significant job losses last year.

The conservative bloc and the centre-left Social Democrats are examining measures that could curb or block social media access for minors. Proposals under discussion include age-based restrictions and stronger platform accountability.

The Social Democrats have proposed banning access for children under 14 and introducing dedicated youth versions of platforms for users aged 14 to 16. Supporters argue that clearer age thresholds could reduce exposure to harmful content and addictive design features.

The discussions align with a growing European trend toward stricter digital child protection rules. Several governments are exploring tougher age verification and content moderation standards, reflecting mounting concerns over online safety and mental health.

The policy debate unfolded as German industry reported cutting 124,100 jobs in 2025 amid ongoing economic headwinds. Lawmakers face the dual challenge of safeguarding younger users while navigating wider structural pressures affecting Europe’s largest economy.

Study says China AI governance not purely state-driven

New research challenges the view that China’s AI controls are solely the product of authoritarian rule, arguing instead that governance emerges from interaction between the state, private sector and society.

A study by Xuechen Chen of Northeastern University London and Lu Xu of Lancaster University argues that China’s AI governance is not purely top-down. Published in the Computer Law & Security Review, it says safeguards are shaped by regulators, companies and social actors, not only the central government.

Chen calls claims that Beijing’s AI oversight is entirely state-driven a ‘stereotypical narrative’. Although the Cyberspace Administration of China leads regulation, firms such as ByteDance and DeepSeek help shape guardrails through self-regulation and commercial strategy.

China was the first country to introduce rules specific to generative AI. Systems must avoid unlawful or vulgar content, and updated legislation strengthens minor protection, limiting children’s online activity and requiring child-friendly device modes.

Market incentives also reinforce compliance. As Chinese AI firms expand globally, consumer expectations and cultural norms encourage content moderation. The study concludes that governance reflects interaction between state authority, market forces and society.

Study warns against using AI for Valentine’s messages

Psychologists have urged caution over using AI to write Valentine’s Day messages, after research suggested people judge such use negatively in intimate contexts.

A University of Kent study surveyed 4,000 participants about their perceptions of people who relied on AI to complete various tasks. Respondents viewed AI use more negatively when it was applied to writing love letters, apologies, and wedding vows.

According to the findings, people who used AI for personal messages were seen as less caring, less authentic, less trustworthy, and lazier, even when the writing quality was high and the AI use was disclosed.

The research forms part of the Trust in Moral Machines project, supported by the University of Exeter. Lead researcher Dr Scott Claessens said people judge not only outcomes, but also the process behind them, particularly in socially meaningful tasks.

Dr Jim Everett, also from the University of Kent, said relying on AI for relationship-focused communication risks signalling lower effort and care. He added that AI could not replace the personal investment that underpins close human relationships.

EU decision regulates researcher access to data under the DSA

A document released by the Republican-led House Judiciary Committee has revived claims that the EU’s digital rules amount to censorship. The document concerns a €120 million fine against X under the Digital Services Act and was framed as a ‘secret censorship ruling’, despite the DSA’s publication requirements.

The document provides insight into how the European Commission interprets Article 40 of the DSA, which governs researcher access to platform data. The rule requires very large online platforms to grant qualified researchers access to publicly accessible data needed to study systemic risks in the EU.

Investigators found that X failed to comply with Article 40(12), in force since 2023 and covering public data access. The Commission said X applied restrictive eligibility rules, delayed reviews, imposed tight quotas, and blocked independent researcher access, including scraping.

The decision confirms platforms cannot price access to restrict research, deny access based on affiliation or location, or ban scraping by contract. The European Commission also rejected X’s narrow reading of ‘systemic risk’, allowing broader research contexts.

The ruling also highlights weak internal processes and limited staffing for handling access requests. X must submit an action plan by mid-April 2026, with the decision expected to shape future enforcement of researcher access across major platforms.

ChatGPT starts limited advertising rollout in the US

OpenAI has begun rolling out advertising inside ChatGPT, marking a shift for a service that has largely operated without traditional ads since its launch in 2022.

OpenAI said it is testing ads for logged-in Free and Go users in the United States, while paid tiers remain ad-free. The company said the test aims to fund broader access to advanced AI tools.

Ads appear outside ChatGPT responses and are clearly labelled as sponsored content, with no influence on answers. Placement is based on broad topics, with restrictions around sensitive areas such as health or politics.

Free users can opt out of ads by upgrading to a paid plan or by accepting fewer daily free messages in exchange for an ad-free experience. Users who allow ads can also opt out of ad personalisation, prevent past chats from being used for ad selection, and delete all ad-related history and data.

The rollout follows months of speculation after screenshots suggested that ads appeared in ChatGPT responses, which OpenAI described as suggestions. Rivals, including Anthropic, have contrasted their approach, promoting Claude as free from in-chat advertising.

X given deadline by Brazil to curb Grok sexualised outputs

Brazil has ordered X to immediately stop its chatbot Grok from generating sexually explicit images, escalating international pressure on the platform over the misuse of generative AI tools.

The order, issued on 11 February by Brazil’s National Data Protection Agency and National Consumer Rights Bureau, requires X to prevent the creation of sexualised content involving children, adolescents, or non-consenting adults. Authorities gave the company five days to comply or face legal action and fines.

Officials in Brazil said X claimed to have removed thousands of posts and suspended hundreds of accounts after a January warning. However, follow-up checks found Grok users were still able to generate sexualised deepfakes. Regulators criticised the platform for a lack of transparency in its response.

The move follows growing scrutiny after Indonesia blocked Grok in January, while the UK and France signalled continued pressure. Concerns increased after Grok’s ‘spicy mode’ enabled users to generate explicit images using simple prompts.

According to the Centre for Countering Digital Hate, Grok generated millions of sexualised images within days. X and its parent company, xAI, announced measures in mid-January to restrict such outputs in certain jurisdictions, but regulators said it remains unclear where those safeguards apply.

Codex growth prompts OpenAI to expand access

OpenAI said its new Codex Mac app has surpassed one million downloads just over a week after launch, with overall Codex usage rising by 60% following the release of GPT-5.3-Codex.

The strong uptake has prompted OpenAI to extend free access to Codex for Free and Go users beyond the initial launch promotion. Sam Altman said usage limits for lower tiers may be tightened, but access would remain available so more users can experiment and build.

Separately, OpenAI released a YouTube video showcasing a redesigned Deep Research interface, introducing a full-screen report viewer that opens research outputs in a separate window from the chat interface.

The updated layout includes a table of contents for navigation, hyperlinks, and anchor tags within reports, and a dedicated source panel for verification. Users can also download reports as PDF or Word files, while new controls allow research scopes and sources to be adjusted during generation.

The Deep Research updates are available to Plus and Pro users, with broader access expected soon. OpenAI also confirmed the changes in ChatGPT release notes on 10 February and announced a minor GPT-5.2 update focused on more measured responses.
