French public office hit with €5 million CNIL fine after massive data leak

France's data protection authority, CNIL, has imposed a €5 million penalty on France Travail after a massive data breach exposed sensitive personal information collected over two decades.

The leaked data included social security numbers, email addresses, phone numbers and home addresses of an estimated 36.8 million people who had used the public employment service. CNIL said adequate security measures would have made access far more difficult for the attackers.

The investigation found that cybercriminals exploited employees through social engineering instead of breaking in through technical vulnerabilities.

CNIL highlighted France Travail's failure to adequately secure such data, a breach of security requirements under the General Data Protection Regulation. The watchdog also noted that the size of the fine reflects the fact that France Travail operates with public funding.

France Travail has taken corrective steps since the breach, yet CNIL has ordered additional security improvements.

The authority set a deadline for these measures and warned that non-compliance would trigger a daily €5,000 penalty until France Travail meets its GDPR obligations. The case underlines growing pressure on public institutions to reinforce cybersecurity amid rising threats.


Netherlands faces rising digital sovereignty threat, data authority warns

The Dutch data protection authority has urged the government to act swiftly to protect the country’s digital sovereignty, warning that dependence on overseas technology firms could expose vital public services to significant risk.

Concern has intensified after DigiD, the national digital identity system, appeared set for acquisition by a US company, raising questions about long-term control of key infrastructure.

The watchdog argues that the Netherlands relies heavily on a small group of non-European cloud and IT providers, and stresses that public bodies lack clear exit strategies if foreign ownership suddenly shifts.

Additionally, the watchdog criticises the government for treating digital autonomy as an academic exercise rather than recognising its immediate implications for communication between the state and citizens.

In a letter to the economy minister, the authority calls for a unified national approach rather than fragmented decisions by individual public bodies.

It proposes sovereignty criteria for all government contracts and suggests termination clauses that enable the state to withdraw immediately if a provider is sold abroad. It also notes the importance of designing public services to allow smooth provider changes when required.

The watchdog urges the government to strengthen European capacity by investing in scalable domestic alternatives, including a Dutch-controlled government cloud. The economy ministry has declined to comment.


Fake AI assistant steals OpenAI credentials from thousands of Chrome users

A Chrome browser extension posing as an AI assistant has stolen OpenAI credentials from more than 10,000 users. Cybersecurity platform Obsidian identified the malicious software, known as H-Chat Assistant, which secretly harvested API keys and transmitted user data to hacker-controlled servers.

The extension, initially called ChatGPT Extension, appeared to function normally after users provided their OpenAI API keys. Analysts discovered that the theft was triggered when users deleted chats or logged out, at which point the extension transmitted the keys via hardcoded Telegram bot credentials.

At least 459 unique API keys were exfiltrated to a Telegram channel in the months before the theft was discovered in January 2025.

Researchers believe the malicious activity began in July 2024 and continued undetected for months. Following disclosure to OpenAI on 13 January, the company revoked compromised API keys, though the extension reportedly remained available in the Chrome Web Store.

Security analysts identified 16 related extensions sharing identical developer fingerprints, suggesting a coordinated campaign by a single threat actor.

LayerX Security consultant Natalie Zargarov warned that whilst current download numbers remain relatively low, AI-focused browser extensions could rapidly surge in popularity.

The malicious extensions exploit vulnerabilities in web-based authentication processes, creating, as researchers describe, a ‘materially expanded browser attack surface’ through deep integration with authenticated web applications.


OpenAI biometric social platform plans spark Worldcoin surge

Worldcoin jumped 40% after reports that OpenAI is developing a biometric social platform to verify users and eliminate bots. The proposed network would reportedly integrate AI tools while relying on biometric identification to ensure proof of personhood.

Sources cited by Forbes claim the project aims to create a humans-only platform, differentiating itself from existing social networks, including X. Development is said to be led by a small internal team, with work reportedly underway since early 2025.

Biometric verification could involve Apple’s Face ID or the World Orb scanner, a device linked to the World project co-founded by OpenAI chief executive Sam Altman.

The report sparked a sharp rally in Worldcoin, though part of the gains later reversed amid wider market weakness. Despite the brief surge, Worldcoin has remained sharply lower over the past year amid weak market sentiment and ongoing privacy concerns.


SoundCloud breach exposes nearly 30 million users

SoundCloud disclosed a major data breach in December 2025, confirming that around 29.8 million global user accounts were affected. The incident represents one of the largest security failures involving a global music streaming platform.

The privacy breach exposed email addresses alongside public profile information, including usernames, display names and follower data. SoundCloud said passwords and payment details were not accessed, but the combined data increases the risk of phishing.

SoundCloud detected unauthorised activity in December 2025 and launched an internal investigation. Attackers reportedly exploited a flaw that linked public profile data with private email addresses at scale.

After SoundCloud refused an extortion demand, the stolen dataset was released publicly. SoundCloud has urged users worldwide to monitor accounts closely and enable stronger security protections.


Class-action claims challenge WhatsApp end-to-end encryption practices

WhatsApp has rejected the claims in a class-action lawsuit accusing Meta of accessing encrypted messages, calling them false. The company reaffirmed that chats remain protected by device-based Signal protocol encryption.

Filed in a US federal court in California, the complaint alleges Meta misleads more than two billion users by promoting unbreakable encryption while internally storing and analysing message content. Plaintiffs from several countries claim employees can access chats through internal requests.

WhatsApp said no technical evidence accompanies the accusations and stressed that encryption occurs on users’ devices before messages are sent. According to the company, only recipients hold the keys required to decrypt content, which are never accessible to Meta.
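
WhatsApp itself relies on the Signal protocol, which layers prekeys and a double ratchet on top of this idea, but the device-side principle the company describes can be illustrated with a minimal, hypothetical Python sketch using the PyNaCl library: the sender encrypts with the recipient's public key, so a relaying server never holds the key needed to read the message.

```python
# Minimal sketch of device-side end-to-end encryption using PyNaCl
# (X25519 + XSalsa20-Poly1305). Illustrative only: WhatsApp uses the
# Signal protocol, which adds forward secrecy via a double ratchet.
from nacl.public import PrivateKey, Box

# Each device generates its own key pair; private keys never leave the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Only the public keys are shared (e.g. via the service's key directory).
alice_public = alice_private.public_key
bob_public = bob_private.public_key

# Alice encrypts on her device with her private key and Bob's public key.
sending_box = Box(alice_private, bob_public)
ciphertext = sending_box.encrypt(b"meet at noon")

# The server only ever relays `ciphertext`; without Bob's private key it
# cannot recover the plaintext.
receiving_box = Box(bob_private, alice_public)
plaintext = receiving_box.decrypt(ciphertext)
assert plaintext == b"meet at noon"
```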

The firm described the lawsuit as frivolous and said it will seek sanctions against the legal teams involved. Meta spokespersons reiterated that WhatsApp has relied on independently audited encryption standards for over a decade.

The case highlights ongoing debates about encryption and security, but so far, no evidence has shown that message content has been exposed.


Council presidency launches talks on AI deepfakes and cyberattacks

EU member states are preparing to open formal discussions on the risks posed by AI-powered deepfakes and their use in cyberattacks, following an initiative by the current Council presidency.

The talks are intended to assess how synthetic media may undermine democratic processes and public trust across the bloc.

According to sources, capitals will also begin coordinated exchanges on the proposed Democracy Shield, a framework aimed at strengthening resilience against foreign interference and digitally enabled manipulation.

Deepfakes are increasingly viewed as a cross-cutting threat, combining disinformation, cyber operations and influence campaigns.

The timeline set out by the presidency foresees structured discussions among national experts before escalating the issue to the ministerial level. The approach reflects growing concern that existing cyber and media rules are insufficient to address rapidly advancing AI-generated content.

The initiative signals a broader shift within the Council towards treating deepfakes not only as a content moderation challenge, but also as a security risk with implications for elections, governance and institutional stability.


Experts debate when quantum computers could break modern encryption

Scientists are divided over when quantum computers will become powerful enough to break today’s digital encryption, a moment widely referred to as ‘Q-Day’.

While predictions range from just two years to several decades, experts agree that governments and companies must begin preparing urgently for a future where conventional security systems may fail.

Quantum computers exploit quantum-mechanical behaviour to solve certain classes of problem far faster than classical machines, which would allow them to break much of the public-key encryption protecting information currently considered secure.

Financial systems, healthcare data, government communications, and military networks could all become vulnerable as advanced quantum machines emerge.

Major technology firms have already made breakthroughs, accelerating concerns that encryption safeguards could be overwhelmed sooner than expected.

Several cybersecurity specialists warn that sensitive data is already being harvested and stored for future decryption, a strategy known as ‘harvest now, decrypt later’.

Regulators in the UK and the US have set timelines for shifting to post-quantum cryptography, aiming for full migration by 2030-2035. However, engineering challenges and unresolved technical barriers continue to cast uncertainty over the pace of progress.
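
For illustration only, the sketch below shows what the key-exchange side of that migration can look like, assuming the open-source liboqs-python bindings (imported as oqs) are installed; the mechanism name used here, ML-KEM-768, is the NIST-standardised Kyber variant and may appear under a different name in older liboqs builds.

```python
# Hypothetical sketch of a post-quantum key exchange using the liboqs-python
# bindings ("oqs" module). Assumes liboqs is installed and that the chosen
# KEM name is enabled in that build.
import oqs

KEM_NAME = "ML-KEM-768"  # name may differ depending on the liboqs version

# Receiver: generate a key pair and publish the public key.
with oqs.KeyEncapsulation(KEM_NAME) as receiver:
    public_key = receiver.generate_keypair()

    # Sender: encapsulate a fresh shared secret against the public key.
    with oqs.KeyEncapsulation(KEM_NAME) as sender:
        ciphertext, shared_secret_sender = sender.encap_secret(public_key)

    # Receiver: recover the same shared secret from the ciphertext.
    shared_secret_receiver = receiver.decap_secret(ciphertext)

assert shared_secret_sender == shared_secret_receiver
# The shared secret would then key a symmetric cipher (e.g. AES-GCM),
# replacing the key exchanges that 'harvest now, decrypt later' targets.
```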

Despite scepticism over timelines, early preparation is widely seen as the safest approach. Specialists stress that education, infrastructure upgrades, and global cooperation are vital to prevent disruption as quantum technology advances.


Google faces new UK rules over AI summaries and publisher rights

The UK competition watchdog has proposed new rules that would force Google to give publishers greater control over how their content is used in search and AI tools.

The Competition and Markets Authority (CMA) plans to require opt-outs for AI-generated summaries and model training, marking the first major intervention under Britain’s new digital markets regime.

Publishers argue that generative AI threatens traffic and revenue by answering queries directly instead of sending users to the original sources.

The CMA proposal would also require clearer attribution of publisher content in AI results and stronger transparency around search rankings, including AI Overviews and conversational search features.

Additional measures under consultation include search engine choice screens on Android and Chrome, alongside stricter data portability obligations. The regulator says tailored obligations would give businesses and users more choice while supporting innovation in digital markets.

Google has warned that overly rigid controls could damage the user experience, describing the relationship between AI and search as complex.

The consultation runs until late February, with the outcome expected to shape how AI-powered search operates in the UK.


Canada’s Cyber Centre flags rising ransomware risks for 2025 to 2027

The national cyber authority of Canada has warned that ransomware will remain one of the country’s most serious cyber threats through 2027, as attacks become faster, cheaper and harder to detect.

The Canadian Centre for Cyber Security, part of Communications Security Establishment Canada, says ransomware now operates as a highly interconnected criminal ecosystem driven by financial motives and opportunistic targeting.

According to the outlook, threat actors are increasingly using AI and cryptocurrency while expanding extortion techniques beyond simple data encryption.

Businesses, public institutions and critical infrastructure in Canada remain at risk, with attackers continuously adapting their tactics, techniques and procedures to maximise financial returns.

The Cyber Centre stresses that basic cyber hygiene still provides strong protection. Regular software updates, multi-factor authentication and vigilance against phishing attempts significantly reduce exposure, even as attack methods evolve.
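
As a small illustration of the multi-factor authentication step, the hedged Python sketch below uses the pyotp library to generate and verify time-based one-time passwords, the mechanism behind most authenticator apps; the account and issuer names are placeholders.

```python
# Minimal TOTP (RFC 6238) sketch using the pyotp library, illustrating the
# one-time-password factor behind most authenticator apps. Account and
# issuer names are placeholders.
import pyotp

# Enrolment: the service generates a per-user secret and shares it once,
# typically as a QR code encoding this provisioning URI.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="user@example.org", issuer_name="ExampleService"))

# Login: the user's authenticator app derives a 6-digit code from the
# secret and the current time; the server verifies it the same way.
code = totp.now()
print("Valid:", totp.verify(code))  # True within the 30-second window
```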

The report also highlights the importance of cooperation between government bodies, law enforcement, private organisations and the public.

Officials conclude that while ransomware threats will intensify over the next two years, early warnings, shared intelligence and preventive measures can limit damage.

Canada’s cyber authorities say continued investment in partnerships and guidance remains central to building national digital resilience.
