EU classifies WhatsApp as Very Large Online Platform

WhatsApp has been formally designated a Very Large Online Platform under the EU Digital Services Act, triggering the bloc’s most stringent digital oversight regime.

The classification follows confirmation that the messaging service has exceeded 51 million monthly users in the EU, surpassing the 45-million-user threshold the DSA sets for designation.

As a VLOP, WhatsApp must take active steps to limit the spread of disinformation and reduce risks linked to the manipulation of public debate. The platform is also expected to strengthen safeguards for users’ mental health, with particular attention to the protection of minors.

The European Commission will oversee compliance directly and may impose financial penalties of up to 6 percent of WhatsApp’s global annual turnover if violations are identified. The company has until mid-May to align its systems, policies and risk assessments with the DSA’s requirements.

WhatsApp joins a growing list of major platforms already subject to similar obligations, including Facebook, Instagram, YouTube and X. The move reflects the Commission’s broader effort to apply the Digital Services Act across social media, messaging services and content platforms linked to systemic online risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok outages spark fears over data control and censorship in the US

Widespread TikTok disruptions affected users across the US as snowstorms triggered power outages and technical failures, with reports of malfunctioning algorithms and missing content features.

Problems persisted for some users beyond the initial incident, adding to uncertainty surrounding the platform’s stability.

The outage coincided with the creation of a new US-based TikTok joint venture following government concerns over potential Chinese access to user data. TikTok stated that a power failure at a domestic data centre caused the disruption, rather than ownership restructuring or policy changes.

Suspicion grew among users due to overlapping political events, including large-scale protests in Minneapolis and reports of difficulties searching for related content. Fears of censorship spread online, although TikTok attributed all disruptions to infrastructure failure.

The incident also resurfaced concerns over TikTok’s privacy policy, which outlines the collection of sensitive personal data. While some disclosures predated the ownership deal, the timing reinforced broader anxieties over social media surveillance during periods of political tension.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France’s National Assembly backs under-15 social media ban

France’s National Assembly has backed a bill that would bar children under 15 from accessing social media, citing rising concern over cyberbullying and mental-health harms. MPs approved the text late Monday by 116 votes to 23, sending it next to the Senate before it returns to the lower house for a final vote.

As drafted, the proposal would cover both standalone social networks and ‘social networking’ features embedded inside wider platforms, and it would rely on age checks that comply with EU rules. The package also extends France’s existing smartphone restrictions in schools to include high schools, and lawmakers have discussed additional guardrails, such as limits on practices deemed harmful to minors (including advertising and recommendation systems).

President Emmanuel Macron has urged lawmakers to move quickly, arguing that platforms are not neutral spaces for adolescents and linking social media to broader concerns about youth violence and well-being. Support for stricter limits is broad across parties, and polling has pointed in the same direction, but the bill still faces the practical question of how reliably platforms can keep underage users out.

Australia set the pace in December 2025, when its world-first ban on under-16s holding accounts on major platforms came into force, an approach now closely watched abroad. Early experience there has highlighted the same tension France faces, between political clarity (‘no accounts under the age line’) and the messy reality of age assurance and workarounds.

France’s debate is also unfolding in a broader European push to tighten child online safety rules. The European Parliament has called for an EU-wide ‘digital minimum age’ of 16 (with parental consent options for 13–16), while the European Commission has issued guidance for platforms and developed a prototype age-verification tool designed to preserve privacy, signalling that Brussels is trying to square protection with data-minimisation.

Why does it matter?

Beyond the child-safety rationale, the move reflects a broader push to curb platform power, with youth protection framed as a test case for stronger state oversight of Big Tech. At the same time, critics warn that strict age-verification regimes can expand online identification and surveillance, raising privacy and rights concerns, and may push teens toward smaller or less regulated spaces rather than offline life.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Monnett highlights EU digital sovereignty in social media

Monnett is a European-built social media platform designed to give people control over their online feeds. Users can choose exactly what they see, prioritise friends’ posts, and opt out of surveillance-style recommendation systems that dominate other networks.

Unlike mainstream platforms, Monnett places privacy first, with no profiling or sale of user data, and private chats protected without being mined for advertising. The platform also avoids ‘AI slop’ or generative AI content shaping people’s feeds, emphasising human-centred interaction.

Built in Luxembourg, at the heart of Europe, Monnett reflects a growing push for digital sovereignty in the European Union, where citizens, regulators and developers want more control over how their digital spaces are governed and how personal data is treated.

Core features include full customisation of your algorithm, no shadowbans, strong privacy safeguards, and a focus on genuine social connection. Monnett aims to win users who prefer meaningful online interaction over addictive feeds and opaque data practices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta pauses teen access to AI characters

Meta Platforms has announced a temporary pause on teenagers’ access to AI characters across its platforms, including Instagram and WhatsApp. Meta said the pause will allow it to review and rebuild the feature for younger users.

Meta said the restriction will apply to users identified as minors based on declared ages or internal age-prediction systems. Teenagers will still be able to use Meta’s core AI assistant, though interactive AI characters will be unavailable.

The move comes ahead of a major child safety trial in Los Angeles involving Meta, TikTok and YouTube, which centres on allegations that social media platforms harm children through addictive and unsafe digital features.

Concerns about AI chatbots and minors have grown across the US, prompting similar action by other companies, and regulators and courts are increasingly scrutinising how AI interactions affect young users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia’s social media ban raises concern among social media companies

Australia’s social media ban for under-16s is worrying social media companies. According to the country’s eSafety Commissioner, major platforms complied only reluctantly, fearing that similar rules could spread internationally and set a global trend of banning such apps.

The ban has already led to the closure of 4.7 million child-linked accounts across platforms including Instagram, TikTok and Snapchat. Authorities argue the measures are necessary to protect children from harmful algorithms and addictive design.

Social media companies operating in Australia, including Meta, say stronger safeguards are needed but oppose a blanket ban. Critics have warned of privacy risks, while regulators insist early data shows limited migration to alternative platforms.

Australia is now working with partners such as the UK to push for tougher global standards on online child safety. Fines of up to A$49.5m may be imposed on companies that fail to enforce the rules effectively.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN warns of rising AI-driven threats to child safety

UN agencies have issued a stark warning over the accelerating risks AI poses to children online, citing rising cases of grooming, deepfakes, cyberbullying and sexual extortion.

A joint statement published on 19 January urges urgent global action, highlighting how AI tools increasingly enable predators to target vulnerable children with unprecedented precision.

Recent data underscores the scale of the threat, with technology-facilitated child abuse cases in the US surging from 4,700 in 2023 to more than 67,000 in 2024.

During the COVID-19 pandemic, online exploitation intensified, particularly affecting girls and young women, with digital abuse frequently translating into real-world harm, according to officials from the International Telecommunication Union.

Governments are tightening policies, led by Australia’s social media ban for under-16s, as the UK, France and Canada consider similar measures. UN agencies urged tech firms to prioritise child safety and called for stronger AI literacy across society.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok faces regulatory scrutiny in South Korea over explicit AI content

South Korea has moved towards regulatory action against Grok, the generative AI chatbot developed by xAI, following allegations that the system was used to generate and distribute sexually exploitative deepfake images.

The country’s Personal Information Protection Commission has launched a preliminary fact-finding review to assess whether violations occurred and whether the matter falls within its legal remit.

The review follows international reports accusing Grok of facilitating the creation of explicit and non-consensual images of real individuals, including minors.

Under the Personal Information Protection Act of South Korea, generating or altering sexual images of identifiable people without consent may constitute unlawful handling of personal data, exposing providers to enforcement action.

Concerns have intensified after civil society groups estimated that millions of explicit images were produced through Grok over a short period, with thousands involving children.

Several governments, including those in the US, Europe and Canada, have opened inquiries, while parts of Southeast Asia have opted to block access to the service altogether.

In response, xAI has introduced technical restrictions preventing users from generating or editing images of real people. Korean regulators have also demanded stronger youth protection measures from X, warning that failure to address criminal content involving minors could result in administrative penalties.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France fast-tracks social media ban for under-15s

French President Emmanuel Macron has called for an accelerated legislative process to introduce a nationwide ban on social media for children under 15 by September.

Speaking in a televised address, Macron said the proposal would move rapidly through parliament so that explicit rules are in place before the new school year begins.

Macron framed the initiative as a matter of child protection and digital sovereignty, arguing that foreign platforms or algorithmic incentives should not shape young people’s cognitive and emotional development.

He linked excessive social media use to manipulation, commercial exploitation and growing psychological harm among teenagers.

Data from France’s health watchdog show that almost half of teenagers spend between two and five hours a day on their smartphones, with the vast majority accessing social networks daily.

Regulators have associated such patterns with reduced self-esteem and increased exposure to content linked to self-harm, drug use and suicide, prompting legal action by families against major platforms.

The proposal from France follows similar debates in the UK and Australia, where age-based access restrictions have already been introduced.

The French government argues that decisive national action is necessary instead of waiting for a slower Europe-wide consensus, although Macron has reiterated support for a broader EU approach.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI startup secures $5M to transform children’s digital learning

AI education start-up Sparkli has raised $5 million in seed funding to develop an ‘anti-chatbot’ AI platform to transform how children engage with digital content.

Unlike traditional chatbots that focus on general conversation, Sparkli positions its AI as an interactive learning companion, guiding children through topics such as maths, science and language skills in a dynamic, age-appropriate format.

The funding will support product development, content creation and expansion into new markets. Founders say the platform addresses increasing concerns about passive screen time by offering educational interactions that blend AI responsiveness with curriculum-aligned activities.

The company emphasises safe design and parental controls to ensure technology supports learning outcomes rather than distraction.

Investors backing Sparkli see demand for responsible AI applications for children that can enhance cognition and motivation while preserving digital well-being. As schools and homes increasingly integrate AI tools, Sparkli aims to position itself at the intersection of educational technology and child-centred innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!