Critical AI toy security failure exposes children’s data

The exposure of more than 50,000 children’s chat logs by AI toy company Bondu highlights serious gaps in child data protection. Sensitive personal information, including names, birth dates, and family details, was accessible through a poorly secured parental portal, raising immediate concerns about children’s privacy and safety.

The incident also exposes the absence of mandatory security-by-design standards for children's AI products, with weak safeguards enabling unauthorised access and leaving vulnerable users open to serious risks.

Beyond the specific flaw, the case raises wider concerns about AI toys used by children. Researchers warned that the exposed data could be misused, strengthening calls for stricter rules and closer oversight of AI systems designed for minors.

Concerns also extend to transparency around data handling and AI supply chains. Uncertainty over whether children’s data was shared with third-party AI model providers points to the need for clearer rules on data flows, accountability, and consent in AI ecosystems.

Finally, the incident has added momentum to policy discussions on restricting or pausing the sale of interactive AI toys. Lawmakers are increasingly considering precautionary measures while more robust child-focused AI safety frameworks are developed.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

GDPR violation reports surge across Europe in 2025, study finds

European data protection authorities recorded a sharp rise in GDPR violation reports in 2025, according to a new study by law firm DLA Piper, signalling growing regulatory pressure across the European Union.

Average daily reports surpassed 400 for the first time since the regulation entered into force in 2018, reaching 443 incidents per day, a 22% increase compared with the previous year. The firm noted that expanding digital systems, new breach reporting laws, and geopolitical cyber risks may be driving the surge.

Despite the higher number of cases in the EU, total fines remained broadly stable at around €1.2 billion for the year, pushing cumulative GDPR penalties since 2018 to €7.1 billion, underlining regulators’ continued willingness to impose major sanctions.
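As a rough sanity check, the headline figures hang together; the short sketch below derives the implied 2024 daily average and the implied 2025 annual total from the numbers reported above (these derived values are illustrative estimates, not figures from the DLA Piper study).

```python
# Back-of-the-envelope check of the reported GDPR breach-report figures.
# Inputs (from the article): 443 reports per day in 2025, a 22% rise on 2024.
# Outputs are derived estimates, not numbers taken from the study itself.

daily_2025 = 443                   # average breach reports per day in 2025
daily_2024 = daily_2025 / 1.22     # implied previous-year daily average (+22%)
annual_2025 = daily_2025 * 365     # implied total reports over 2025

print(round(daily_2024))   # roughly 363 reports per day in 2024
print(annual_2025)         # roughly 161,695 reports across 2025
```

The implied 2024 average of roughly 363 per day sits just below the 400 threshold the article says was crossed for the first time in 2025, which is consistent with the reported trend.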

Ireland once again topped the enforcement figures, with fines imposed by its Data Protection Commission totalling €4.04 billion, reflecting the presence of major technology firms headquartered there, including Meta, Google, and Apple.

Recent headline penalties included a €1.2 billion fine against Meta and a €530 million sanction against TikTok over data transfers to China, while courts across Europe increasingly consider compensation claims linked to GDPR violations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Conversational advertising arrives as OpenAI integrates sponsored content into ChatGPT

OpenAI has begun testing advertising placements inside ChatGPT, marking a shift toward monetising one of the world’s most widely used AI platforms. Sponsored content now appears below chatbot responses for free and low-cost users, integrating promotions directly into conversational queries.

Ads remain separate from organic answers, with OpenAI saying commercial content will not influence AI-generated responses. Users can see why specific ads appear, dismiss irrelevant placements, and disable personalisation. Advertising is excluded for younger users and sensitive topics.

Initial access is limited to enterprise partners, with broader availability expected later. Premium subscription tiers continue without ads, reflecting a freemium model similar to streaming platforms offering both paid and ad-supported options.

Pricing places ChatGPT ads among the most expensive digital formats. The value lies in reaching users at high-intent moments, such as during product research and purchase decisions. Measurement tools remain basic, tracking only impressions and clicks.

OpenAI’s move into advertising signals a broader shift as conversational AI reshapes how people discover information. Future performance data and targeting features will determine whether ChatGPT becomes a core ad channel or a premium niche format.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU confronts Grok abuse as Brussels tests its digital power

The European Commission has opened a formal investigation into Grok after the tool produced millions of sexualised images of women and children.

The scrutiny centres on whether X failed to carry out adequate risk assessments before releasing the undressing feature on the European market. The case arrives as ministers, including Sweden's deputy prime minister, publicly reveal being targeted by the technology.

Brussels is preparing to use its strongest digital laws instead of deferring to US pressure. The Digital Services Act allows the European Commission to fine major platforms or force compliance measures when systemic harms emerge.

Experts argue the Grok investigation represents an important test of European resolve, particularly as the bloc tries to show it can hold powerful companies to account.

Concerns remain about the willingness of the EU to act decisively. Reports suggest the opening of the probe was delayed because of a tariff dispute with Washington, raising questions about whether geopolitical considerations slowed the enforcement response.

Several lawmakers say the delay undermined confidence in the bloc’s commitment to protecting fundamental rights.

The investigation could last months and may have wider implications for content ranking systems already under scrutiny.

Critics say financial penalties may not be enough to change behaviour at X, yet the case is still viewed as a pivotal moment for European digital governance. Observers believe a firm outcome would demonstrate that emerging harms linked to synthetic media cannot be ignored.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Millions use Telegram to create AI deepfake nudes as digital abuse escalates

A global wave of deepfake abuse is spreading across Telegram as millions of users generate and share sexualised images of women without consent.

Researchers have identified at least 150 active channels offering AI-generated nudes of celebrities, influencers and ordinary women, often for payment. The widespread availability of advanced AI tools has turned intimate digital abuse into an industrialised activity.

Telegram states that deepfake pornography is banned and says moderators removed nearly one million violating posts in 2025. Yet new channels appear immediately after old ones are shut down, and users exchange tips on how to bypass safety controls.

The rise of nudification apps on major app stores, downloaded more than 700 million times, adds further momentum to an expanding ecosystem that encourages harassment rather than accountability.

Experts argue that the celebration of such content reflects entrenched misogyny instead of simple technological misuse. Women targeted by deepfakes face isolation, blackmail, family rejection and lost employment opportunities.

Legal protections remain minimal in much of the world, with fewer than 40% of countries having laws that address cyber-harassment or stalking.

Campaigners warn that women in low-income regions face the most significant risks due to poor digital literacy, limited resources and inadequate regulatory frameworks.

The damage inflicted on victims is often permanent, as deepfake images circulate indefinitely across platforms and are effectively impossible to remove, undermining safety, dignity and long-term opportunities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

French public office hit with €5 million CNIL fine after massive data leak

The data protection authority of France has imposed a €5 million penalty on France Travail after a massive data breach exposed sensitive personal information collected over two decades.

The leak included social security numbers, email addresses, phone numbers and home addresses of an estimated 36.8 million people who had used the public employment service. CNIL said adequate security measures would have made access far more difficult for the attackers.

The investigation found that cybercriminals exploited employees through social engineering instead of breaking in through technical vulnerabilities.

CNIL highlighted France Travail's failure to meet the data security requirements of the General Data Protection Regulation. The watchdog also noted that the size of the fine reflects the fact that France Travail operates with public funding.

France Travail has taken corrective steps since the breach, yet CNIL has ordered additional security improvements.

The authority set a deadline for these measures and warned that non-compliance would trigger a daily €5,000 penalty until France Travail meets its GDPR obligations. The case underlines growing pressure on public institutions to reinforce cybersecurity amid rising threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google brings AI agent to Chrome in the US

Google is rolling out an AI-powered browsing agent inside Chrome, allowing users to automate routine online tasks. The feature is being introduced in the US for AI Pro and AI Ultra subscribers.

The Gemini agent can interact directly with websites, including opening pages, clicking buttons and completing complex online forms. Testers reported successful use for tasks such as tax paperwork and licence renewals.

Google said Gemini AI integrates with password management tools while requiring user confirmation for payments and final transactions. Security safeguards and fraud detection systems have been built into Chrome for US users.

The update reflects Alphabet’s strategy to reposition Chrome in the US as an intelligent operating agent. Google aims to move beyond search toward AI-driven personal task management.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Netherlands faces rising digital sovereignty threat, data authority warns

The Dutch data protection authority has urged the government to act swiftly to protect the country’s digital sovereignty, warning that dependence on overseas technology firms could expose vital public services to significant risk.

Concern has intensified after DigiD, the national digital identity system, appeared set for acquisition by a US company, raising questions about long-term control of key infrastructure.

The watchdog argues that the Netherlands relies heavily on a small group of non-European cloud and IT providers, and stresses that public bodies lack clear exit strategies if foreign ownership suddenly shifts.

Additionally, the watchdog criticises the government for treating digital autonomy as an academic exercise rather than recognising its immediate implications for communication between the state and citizens.

In a letter to the economy minister, the authority calls for a unified national approach rather than fragmented decisions by individual public bodies.

It proposes sovereignty criteria for all government contracts and suggests termination clauses that enable the state to withdraw immediately if a provider is sold abroad. It also notes the importance of designing public services to allow smooth provider changes when required.

The watchdog urges the government to strengthen European capacity by investing in scalable domestic alternatives, including a Dutch-controlled government cloud. The economy ministry has declined to comment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fake AI assistant steals OpenAI credentials from thousands of Chrome users

A Chrome browser extension posing as an AI assistant has stolen OpenAI credentials from more than 10,000 users. Cybersecurity platform Obsidian identified the malicious software, known as H-Chat Assistant, which secretly harvested API keys and transmitted user data to hacker-controlled servers.

The extension, initially called ChatGPT Extension, appeared to function normally after users provided their OpenAI API keys. Analysts discovered that the theft occurred when users deleted chats or logged out, triggering the transmission of credentials via hardcoded Telegram bot credentials.

At least 459 unique API keys were exfiltrated to a Telegram channel in the months before the scheme was discovered in January 2025.

Researchers believe the malicious activity began in July 2024 and continued undetected for months. Following disclosure to OpenAI on 13 January, the company revoked compromised API keys, though the extension reportedly remained available in the Chrome Web Store.

Security analysts identified 16 related extensions sharing identical developer fingerprints, suggesting a coordinated campaign by a single threat actor.

LayerX Security consultant Natalie Zargarov warned that whilst current download numbers remain relatively low, AI-focused browser extensions could rapidly surge in popularity.

The malicious extensions exploit vulnerabilities in web-based authentication processes, creating, as researchers describe, a ‘materially expanded browser attack surface’ through deep integration with authenticated web applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI biometric social platform plans spark Worldcoin surge

Worldcoin jumped 40% after reports that OpenAI is developing a biometric social platform to verify users and eliminate bots. The proposed network would reportedly integrate AI tools while relying on biometric identification to ensure proof of personhood.

Sources cited by Forbes claim the project aims to create a humans-only platform, differentiating itself from existing social networks, including X. Development is said to be led by a small internal team, with work reportedly underway since early 2025.

Biometric verification could involve Apple’s Face ID or the World Orb scanner, a device linked to the World project co-founded by OpenAI chief executive Sam Altman.

The report sparked a sharp rally in Worldcoin, though part of the gains later reversed amid wider market weakness. Despite the brief surge, Worldcoin has remained sharply lower over the past year amid weak market sentiment and ongoing privacy concerns.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!