The European Commission has opened a formal investigation into Grok after the tool produced millions of sexualised images of women and children.
The inquiry centres on whether X failed to carry out adequate risk assessments before releasing the undressing feature in the European market. The case arrives as ministers, including Sweden’s deputy prime minister, publicly reveal being targeted by the technology.
Brussels is preparing to use its strongest digital laws instead of deferring to US pressure. The Digital Services Act allows the European Commission to fine major platforms or force compliance measures when systemic harms emerge.
Experts argue the Grok investigation represents an important test of European resolve, particularly as the bloc tries to show it can hold powerful companies to account.
Concerns remain about the willingness of the EU to act decisively. Reports suggest the opening of the probe was delayed because of a tariff dispute with Washington, raising questions about whether geopolitical considerations slowed the enforcement response.
Several lawmakers say the delay undermined confidence in the bloc’s commitment to protecting fundamental rights.
The investigation could last months and may have wider implications for content ranking systems already under scrutiny.
Critics say financial penalties may not be enough to change behaviour at X, yet the case is still viewed as a pivotal moment for European digital governance. Observers believe a firm outcome would demonstrate that emerging harms linked to synthetic media cannot be ignored.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A global wave of deepfake abuse is spreading across Telegram as millions of users generate and share sexualised images of women without consent.
Researchers have identified at least 150 active channels offering AI-generated nudes of celebrities, influencers and ordinary women, often for payment. The widespread availability of advanced AI tools has turned intimate digital abuse into an industrialised activity.
Telegram states that deepfake pornography is banned and says moderators removed nearly one million violating posts in 2025. Yet new channels appear immediately after old ones are shut, enabling users to exchange tips on how to bypass safety controls.
The rise of nudification apps on major app stores, downloaded more than 700 million times, adds further momentum to an expanding ecosystem that encourages harassment rather than accountability.
Experts argue that the celebration of such content reflects entrenched misogyny instead of simple technological misuse. Women targeted by deepfakes face isolation, blackmail, family rejection and lost employment opportunities.
Legal protections remain minimal in much of the world, with fewer than 40% of countries having laws that address cyber-harassment or stalking.
Campaigners warn that women in low-income regions face the most significant risks due to poor digital literacy, limited resources and inadequate regulatory frameworks.
The damage inflicted on victims is often permanent: deepfake images circulate indefinitely across platforms and are nearly impossible to remove, comprehensively undermining safety, dignity and long-term opportunities.
The data protection authority of France has imposed a €5 million penalty on France Travail after a massive data breach exposed sensitive personal information collected over two decades.
The leak included social security numbers, email addresses, phone numbers and home addresses of an estimated 36.8 million people who had used the public employment service. CNIL said adequate security measures would have made access far more difficult for the attackers.
The investigation found that cybercriminals exploited employees through social engineering instead of breaking in through technical vulnerabilities.
CNIL highlighted the failure to meet data security requirements under the General Data Protection Regulation. The watchdog also noted that the size of the fine reflects the fact that France Travail operates with public funding.
France Travail has taken corrective steps since the breach, yet CNIL has ordered additional security improvements.
The authority set a deadline for these measures and warned that non-compliance would trigger a daily €5,000 penalty until France Travail meets GDPR obligations. The case underlines growing pressure on public institutions to reinforce cybersecurity amid rising threats.
A Chrome browser extension posing as an AI assistant has stolen OpenAI credentials from more than 10,000 users. Cybersecurity platform Obsidian identified the malicious software, known as H-Chat Assistant, which secretly harvested API keys and transmitted user data to hacker-controlled servers.
The extension, initially called ChatGPT Extension, appeared to function normally after users provided their OpenAI API keys. Analysts discovered that the theft occurred when users deleted chats or logged out, which triggered transmission of the stolen keys through a hardcoded Telegram bot.
At least 459 unique API keys were exfiltrated to a Telegram channel months before they were discovered in January 2025.
Researchers believe the malicious activity began in July 2024 and continued undetected for months. Following disclosure to OpenAI on 13 January, the company revoked compromised API keys, though the extension reportedly remained available in the Chrome Web Store.
Security analysts identified 16 related extensions sharing identical developer fingerprints, suggesting a coordinated campaign by a single threat actor.
LayerX Security consultant Natalie Zargarov warned that whilst current download numbers remain relatively low, AI-focused browser extensions could rapidly surge in popularity.
The malicious extensions exploit vulnerabilities in web-based authentication processes, creating, as researchers describe, a ‘materially expanded browser attack surface’ through deep integration with authenticated web applications.
Worldcoin jumped 40% after reports that OpenAI is developing a biometric social platform to verify users and eliminate bots. The proposed network would reportedly integrate AI tools while relying on biometric identification to ensure proof of personhood.
Sources cited by Forbes claim the project aims to create a humans-only platform, differentiating itself from existing social networks, including X. Development is said to be led by a small internal team, with work reportedly underway since early 2025.
Biometric verification could involve Apple’s Face ID or the World Orb scanner, a device linked to the World project co-founded by OpenAI chief executive Sam Altman.
The report sparked a sharp rally in Worldcoin, though part of the gains later reversed amid wider market weakness. Despite the brief surge, Worldcoin has remained sharply lower over the past year amid weak market sentiment and ongoing privacy concerns.
WhatsApp has pushed back against a class-action lawsuit accusing Meta of accessing encrypted messages, calling the claims false. The company reaffirmed that chats remain protected by device-based Signal protocol encryption.
Filed in a US federal court in California, the complaint alleges Meta misleads more than two billion users by promoting unbreakable encryption while internally storing and analysing message content. Plaintiffs from several countries claim employees can access chats through internal requests.
WhatsApp said no technical evidence accompanies the accusations and stressed that encryption occurs on users’ devices before messages are sent. According to the company, only recipients hold the keys required to decrypt content, which are never accessible to Meta.
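The device-side key model WhatsApp describes can be illustrated with a toy Diffie-Hellman exchange. This is a simplified sketch with a small prime and hypothetical parameter choices, not the Signal protocol itself, which uses X25519 elliptic-curve key agreement plus key ratcheting:

```python
import secrets

# Toy Diffie-Hellman: each endpoint derives the same shared secret from
# its own private key and the peer's PUBLIC value, so a relay server that
# only ever sees the public values cannot learn the key.
# Illustrative only -- real end-to-end messaging uses X25519 and more.

P = 2**127 - 1   # toy Mersenne prime; real systems use vetted groups/curves
G = 5            # generator (toy choice)

def keypair():
    priv = secrets.randbelow(P - 2) + 1   # private key never leaves device
    return priv, pow(G, priv, P)          # public value may be relayed

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Each side combines its own private key with the other's public value.
alice_secret = pow(bob_pub, alice_priv, P)
bob_secret = pow(alice_pub, bob_priv, P)

assert alice_secret == bob_secret   # identical secret, never transmitted
```

The point the sketch makes is structural: because only public values travel across the network, an operator relaying messages holds nothing that allows decryption, which is the claim WhatsApp makes about its servers.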
The firm described the lawsuit as frivolous and said it will seek sanctions against the legal teams involved. Meta spokespersons reiterated that WhatsApp has relied on independently audited encryption standards for over a decade.
The case highlights ongoing debates about encryption and security, but so far, no evidence has shown that message content has been exposed.
Swiss technology and privacy expert Anna Zeiter is leading the development of W Social, a new European-built social media network designed as an alternative to X. The project aims to reduce reliance on US tech and strengthen European digital sovereignty.
W Social will require users to verify their identity and provide a photo to ensure genuine human accounts, tackling the fake profiles and bot-driven disinformation that critics link to existing platforms. Zeiter said the name W stands for ‘We’, as well as for values and verification.
The platform’s infrastructure will be hosted in Europe under strict EU data protection laws, with decentralised storage and offices planned in Berlin and Paris. Early support comes from European political and tech figures, signalling interest beyond Silicon Valley.
W Social could launch a beta version as early as February, with broader public access planned by year-end. Backers hope the network will foster more positive dialogue and provide a European alternative to US-based social media influence.
StackAdapt has secured EU–US Data Privacy Framework certification, strengthening GDPR compliance and enabling cross-border data transfers between the EU and the US.
The certification allows the advertising technology firm to manage personal data without relying on additional transfer mechanisms.
The framework, adopted in 2023, provides a legal basis for EU-to-US data flows while strengthening oversight and accountability. Certification requires organisations to meet strict standards on data minimisation, security, transparency, and individual rights.
By joining the framework, StackAdapt enhances its ability to support advertisers, publishers, and partners through seamless international data processing.
The move also reduces regulatory complexity for European customers while reinforcing the company’s broader commitment to privacy-by-design and responsible data use.
EU member states are preparing to open formal discussions on the risks posed by AI-powered deepfakes and their use in cyberattacks, following an initiative by the current Council presidency.
The talks are intended to assess how synthetic media may undermine democratic processes and public trust across the bloc.
According to sources, capitals will also begin coordinated exchanges on the proposed Democracy Shield, a framework aimed at strengthening resilience against foreign interference and digitally enabled manipulation.
Deepfakes are increasingly viewed as a cross-cutting threat, combining disinformation, cyber operations and influence campaigns.
The timeline set out by the presidency foresees structured discussions among national experts before escalating the issue to the ministerial level. The approach reflects growing concern that existing cyber and media rules are insufficient to address rapidly advancing AI-generated content.
The initiative signals a broader shift within the Council towards treating deepfakes not only as a content moderation challenge, but as a security risk with implications for elections, governance and institutional stability.
Scientists are divided over when quantum computers will become powerful enough to break today’s digital encryption, a moment widely referred to as ‘Q-Day’.
While predictions range from just two years to several decades, experts agree that governments and companies must begin preparing urgently for a future where conventional security systems may fail.
Quantum computing uses subatomic behaviour to process data far faster than classical machines, enabling rapid decryption of information once considered secure.
Financial systems, healthcare data, government communications, and military networks could all become vulnerable as advanced quantum machines emerge.
Major technology firms have already made breakthroughs, accelerating concerns that encryption safeguards could be overwhelmed sooner than expected.
Several cybersecurity specialists warn that sensitive data is already being harvested and stored for future decryption, a strategy known as ‘harvest now, decrypt later’.
Regulators in the UK and the US have set timelines for shifting to post-quantum cryptography, aiming for full migration by 2030-2035. However, engineering challenges and unresolved technical barriers continue to cast uncertainty over the pace of progress.
Despite scepticism over timelines, experts agree that early preparation remains the safest approach, stressing that education, infrastructure upgrades and global cooperation are vital to prevent disruption as quantum technology advances.