Moltbook AI vulnerability exposes user data and API keys

A critical security flaw has emerged in Moltbook, a new AI agent social network launched by Octane AI.

The vulnerability allowed unauthenticated access to user profiles, exposing email addresses, login tokens, and API keys for registered agents. The platform’s rapid growth, claimed to have 1.5 million users, was largely artificial, as a single agent reportedly created hundreds of thousands of fake accounts.

Moltbook enables AI agents to post, comment, and form sub-communities, fostering interactions that range from AI debates to token-related activities.

Analysts warned that prompt injections and unregulated agent interactions could lead to credential theft or destructive actions, including data exfiltration or account hijacking. Experts described the platform as both a milestone in scale and a serious security concern.

Developers have not confirmed any patches, leaving users and enterprises exposed. Security specialists advised revoking API keys, sandboxing AI agents, and auditing potential exposures.

The lack of safeguards on the platform highlights the risks of unchecked AI agent networks, particularly for organisations that may rely on them without proper oversight.

An incident that underscores the growing need for stronger governance in AI-powered social networks. Experts stress that without enforced security protocols, such platforms could be exploited at scale, affecting both individual users and corporate systems.

The Moltbook case serves as a warning about prioritising hype over security in emerging AI applications.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Roblox faces new dutch scrutiny under EU digital rules

Regulators in the Netherlands have opened a formal investigation into Roblox over concerns about inadequate protections for children using the popular gaming platform.

The national authority responsible for enforcing digital rules is examining whether the company has implemented the safeguards required under the Digital Services Act rather than relying solely on voluntary measures.

Officials say children may have been exposed to harmful environments, including violent or sexualised material, as well as manipulative interfaces encouraging more extended play.

The concerns intensify pressure on the EU authorities to monitor social platforms that attract younger users, even when they do not meet the threshold for huge online platforms.

Roblox says it has worked with Dutch regulators for months and recently introduced age checks for users who want to use chat. The company argues that it has invested in systems designed to reinforce privacy, security and safety features for minors.

The Dutch authority plans to conclude the investigation within a year. The outcome could include fines or broader compliance requirements and is likely to influence upcoming European rules on gaming and consumer protection, due later in the decade.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

GDPR violation reports surge across Europe in 2025, study finds

European data protection authorities recorded a sharp rise in GDPR violation reports in 2025, according to a new study by law firm DLA Piper, signalling growing regulatory pressure across the European Union.

Average daily reports surpassed 400 for the first time since the regulation entered force in 2018, reaching 443 incidents per day, a 22% increase compared with the previous year. The firm noted that expanding digital systems, new breach reporting laws, and geopolitical cyber risks may be driving the surge.

Despite the higher number of cases in the EU, total fines remained broadly stable at around €1.2 billion for the year, pushing cumulative GDPR penalties since 2018 to €7.1 billion, underlining regulators’ continued willingness to impose major sanctions.

Ireland once again led enforcement figures, with fines imposed by its Data Protection Commission totaling €4.04 billion, reflecting the presence of major technology firms headquartered there, including Meta, Google, and Apple.

Recent headline penalties included a €1.2 billion fine against Meta and a €530 million sanction against TikTok over data transfers to China, while courts across Europe increasingly consider compensation claims linked to GDPR violations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU confronts Grok abuse as Brussels tests its digital power

The European Commission has opened a formal investigation into Grok after the tool produced millions of sexualised images of women and children.

A scrutiny that centres on whether X failed to carry out adequate risk assessments before releasing the undressing feature in the European market. The case arrives as ministers, including Sweden’s deputy prime minister, publicly reveal being targeted by the technology.

Brussels is preparing to use its strongest digital laws instead of deferring to US pressure. The Digital Services Act allows the European Commission to fine major platforms or force compliance measures when systemic harms emerge.

Experts argue the Grok investigation represents an important test of European resolve, particularly as the bloc tries to show it can hold powerful companies to account.

Concerns remain about the willingness of the EU to act decisively. Reports suggest the opening of the probe was delayed because of a tariff dispute with Washington, raising questions about whether geopolitical considerations slowed the enforcement response.

Several lawmakers say the delay undermined confidence in the bloc’s commitment to protecting fundamental rights.

The investigation could last months and may have wider implications for content ranking systems already under scrutiny.

Critics say financial penalties may not be enough to change behaviour at X, yet the case is still viewed as a pivotal moment for European digital governance. Observers believe a firm outcome would demonstrate that emerging harms linked to synthetic media cannot be ignored.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Millions use Telegram to create AI deepfake nudes as digital abuse escalates

A global wave of deepfake abuse is spreading across Telegram as millions of users generate and share sexualised images of women without consent.

Researchers have identified at least 150 active channels offering AI-generated nudes of celebrities, influencers and ordinary women, often for payment. The widespread availability of advanced AI tools has turned intimate digital abuse into an industrialised activity.

Telegram states that deepfake pornography is banned and says moderators removed nearly one million violating posts in 2025. Yet new channels appear immediately after old ones are shut, enabling users to exchange tips on how to bypass safety controls.

The rise of nudification apps on major app stores, downloaded more than 700 million times, adds further momentum to an expanding ecosystem that encourages harassment rather than accountability.

Experts argue that the celebration of such content reflects entrenched misogyny instead of simple technological misuse. Women targeted by deepfakes face isolation, blackmail, family rejection and lost employment opportunities.

Legal protections remain minimal in much of the world, with fewer than 40% of countries having laws that address cyber-harassment or stalking.

Campaigners warn that women in low-income regions face the most significant risks due to poor digital literacy, limited resources and inadequate regulatory frameworks.

The damage inflicted on victims is often permanent, as deepfake images circulate indefinitely across platforms and are impossible to remove, undermining safety, dignity and long-term opportunities comprehensively.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Fake AI assistant steals OpenAI credentials from thousands of Chrome users

A Chrome browser extension posing as an AI assistant has stolen OpenAI credentials from more than 10,000 users. Cybersecurity platform Obsidian identified the malicious software, known as H-Chat Assistant, which secretly harvested API keys and transmitted user data to hacker-controlled servers.

The extension, initially called ChatGPT Extension, appeared to function normally after users provided their OpenAI API keys. Analysts discovered that the theft occurred when users deleted chats or logged out, triggering the transmission of credentials via hardcoded Telegram bot credentials.

At least 459 unique API keys were exfiltrated to a Telegram channel months before they were discovered in January 2025.

Researchers believe the malicious activity began in July 2024 and continued undetected for months. Following disclosure to OpenAI on 13 January, the company revoked compromised API keys, though the extension reportedly remained available in the Chrome Web Store.

Security analysts identified 16 related extensions sharing the identical developer fingerprints, suggesting a coordinated campaign by a single threat actor.

LayerX Security consultant Natalie Zargarov warned that whilst current download numbers remain relatively low, AI-focused browser extensions could rapidly surge in popularity.

The malicious extensions exploit vulnerabilities in web-based authentication processes, creating, as researchers describe, a ‘materially expanded browser attack surface’ through deep integration with authenticated web applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

SoundCloud breach exposes nearly 30 million users

SoundCloud disclosed a major data breach in December 2025, confirming that around 29.8 million global user accounts were affected. The incident represents one of the largest security failures involving a global music streaming platform.

The privacy breach exposed email addresses alongside public profile information, including usernames, display names and follower data. SoundCloud said passwords and payment details were not accessed, but the combined data increases the risk of phishing.

SoundCloud detected unauthorised activity in December 2025 and launched an internal investigation. Attackers reportedly exploited a flaw that linked public profile data with private email addresses at scale.

After SoundCloud refused an extortion demand, the stolen dataset was released publicly. SoundCloud has urged users worldwide to monitor accounts closely and enable stronger security protections.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Class-action claims challenge WhatsApp end-to-end encryption practices

WhatsApp rejected a class-action lawsuit accusing Meta of accessing encrypted messages, calling such claims false. The company reaffirmed that chats remain protected by device-based Signal protocol encryption.

Filed in a US federal court in California, the complaint alleges Meta misleads more than two billion users by promoting unbreakable encryption while internally storing and analysing message content. Plaintiffs from several countries claim employees can access chats through internal requests.

WhatsApp said no technical evidence accompanies the accusations and stressed that encryption occurs on users’ devices before messages are sent. According to the company, only recipients hold the keys required to decrypt content, which are never accessible to Meta.

The firm described the lawsuit as frivolous and said it will seek sanctions against the legal teams involved. Meta spokespersons reiterated that WhatsApp has relied on independently audited encryption standards for over a decade.

The case highlights ongoing debates about encryption and security, but so far, no evidence has shown that message content has been exposed.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Council presidency launches talks on AI deepfakes and cyberattacks

EU member states are preparing to open formal discussions on the risks posed by AI-powered deepfakes and their use in cyberattacks, following an initiative by the current Council presidency.

The talks are intended to assess how synthetic media may undermine democratic processes and public trust across the bloc.

According to sources, capitals will also begin coordinated exchanges on the proposed Democracy Shield, a framework aimed at strengthening resilience against foreign interference and digitally enabled manipulation.

Deepfakes are increasingly viewed as a cross-cutting threat, combining disinformation, cyber operations and influence campaigns.

The timeline set out by the presidency foresees structured discussions among national experts before escalating the issue to the ministerial level. The approach reflects growing concern that existing cyber and media rules are insufficient to address rapidly advancing AI-generated content.

An initiative that signals a broader shift within the Council towards treating deepfakes not only as a content moderation challenge, but as a security risk with implications for elections, governance and institutional stability.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Experts debate when quantum computers could break modern encryption

Scientists are divided over when quantum computers will become powerful enough to break today’s digital encryption, a moment widely referred to as ‘Q–Day’.

While predictions range from just two years to several decades, experts agree that governments and companies must begin preparing urgently for a future where conventional security systems may fail.

Quantum computing uses subatomic behaviour to process data far faster than classical machines, enabling rapid decryption of information once considered secure.

Financial systems, healthcare data, government communications, and military networks could all become vulnerable as advanced quantum machines emerge.

Major technology firms have already made breakthroughs, accelerating concerns that encryption safeguards could be overwhelmed sooner than expected.

Several cybersecurity specialists warn that sensitive data is already being harvested and stored for future decryption, a strategy known as ‘harvest now, decrypt later’.

Regulators in the UK and the US have set timelines for shifting to post-quantum cryptography, aiming for full migration by 2030-2035. However, engineering challenges and unresolved technical barriers continue to cast uncertainty over the pace of progress.

Despite scepticism over timelines, experts agree that early preparation remains the safest approach. Experts stress that education, infrastructure upgrades, and global cooperation are vital to prevent disruption as quantum technology advances.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot