EU weighs social media age rules to protect children

The European Commission has signalled that it may propose EU-level rules on delaying children’s access to social media, as concerns grow over addictive platform design, harmful content and AI-enabled risks for minors.

In a keynote address at the European Summit on Artificial Intelligence and Children in Copenhagen, European Commission President Ursula von der Leyen said the EU must consider whether young people should be given more time before using social media. She said the question was not whether young people should have access to social media, but ‘whether social media should have access to young people’.

Von der Leyen said almost all the EU member states had called for an assessment of whether a minimum age is needed, while Denmark and nine other member states want to introduce one. She added that the Commission’s expert panel on child safety online is advising on the issue, and that a legal proposal could follow this summer, depending on its findings.

Von der Leyen linked the debate to wider concerns about platform business models. She argued that children’s attention was being treated as a commodity through addictive design, advertising, algorithmic recommendation systems and content that can harm mental health. She also pointed to risks linked to AI-generated sexualised images and child sexual abuse material.

The Commission President cited enforcement under the Digital Services Act, including actions involving TikTok, Meta and X, as well as investigations into platforms over whether children are being drawn into harmful content. She said the EU had created strong tools through the Digital Services Act and the Digital Markets Act, and that platforms breaking the rules would be held accountable.

Von der Leyen said that any age restriction model would depend on reliable age verification. She said the EU had developed an open-source age verification app that would soon be available, including a rollout in Denmark by summer, and that the Union was working with member states to integrate it into digital wallets.

The speech also framed child online safety as a matter of platform responsibility, not just parental control. Von der Leyen said social media companies should be responsible for product safety in the same way other industries are, adding that ‘safety by design’ protections should be strengthened and expanded. She also pointed to the forthcoming Digital Fairness Act, which is expected to address addictive and harmful design practices.

Why does it matter?

The speech suggests that EU child online safety policy may be moving from platform accountability after harm occurs towards more structural controls over access, design and age verification. A possible social media delay would mark a major shift in how the EU approaches children’s participation online, raising questions about privacy-preserving age checks, children’s rights, parental responsibility, platform duties and the balance between protection and digital inclusion.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Texas lawsuit targets Netflix data practices

The Attorney General of Texas has filed a lawsuit against Netflix, alleging the company unlawfully collected user data without consent. The case claims the platform tracked extensive behavioural information from both adults and children while presenting itself as privacy-conscious.

According to the lawsuit, Netflix allegedly logged viewing habits, device usage and other interactions, turning user activity into monetised data. The lawsuit further claims that this data was shared with brokers and advertising technology firms to build detailed consumer profiles.

The Attorney General also argues that Netflix designed features to increase engagement, including autoplay, which allegedly encouraged prolonged viewing, particularly among younger users. These practices allegedly contradict the platform’s public messaging about being ad-free and family-friendly.

Texas’s complaint quoted a statement from Netflix co-founder and Chairman Reed Hastings, who allegedly said the company did not collect user data. He sought to distinguish Netflix’s approach from other major technology platforms with regard to data collection.

The Attorney General also claims that Netflix’s alleged surveillance violates the Texas Deceptive Trade Practices Act. The legal action seeks to halt the alleged data practices, introduce stricter controls, such as disabling autoplay for children, and impose penalties under consumer protection law, including civil fines of $10,000 per violation. The case highlights ongoing scrutiny of data practices by major technology platforms in the USA.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Child safety concerns dominate Europe’s digital agenda

A growing majority of Europeans believe stronger online protections for children and young people should remain a top policy priority, according to new findings from the Special Eurobarometer on the Digital Decade.

The European Commission said 92% of Europeans consider further action to protect children and young people online a top priority, reflecting sustained concern over the impact of digital platforms on younger users.

Mental health risks linked to social media ranked among the biggest concerns, with 93% of respondents calling for stronger protections. Cyberbullying, online harassment, and better age-restriction mechanisms for inappropriate content were also highlighted by 92% of respondents.

Concerns over AI and online manipulation also remain high. The survey found that 39% of respondents cited privacy or data protection as a barrier to using AI, followed by accuracy or incorrect information at 36% and ethical issues or misuse of generative AI tools at 32%.

Around 87% of Europeans agreed that online manipulation, including disinformation, foreign interference, AI-generated content and deepfakes, poses a threat to democratic processes. Another 80% said AI development should be carefully regulated to ensure safety, even if oversight places constraints on developers.

The findings also show continuing concern over online platforms. Europeans reported being personally affected by fake news and disinformation, misuse of personal data and insufficient protections for minors, with concerns over fake news and child protection showing the sharpest increases since 2024.

Why does it matter?

The findings show that public concern over digital technologies in Europe is increasingly centred on safety, rights and accountability, particularly for children and young people. They also suggest that trust in platforms and AI systems will depend not only on innovation and access, but also on visible safeguards against manipulation, harmful content, privacy risks, and weak protections for minors.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK’s Ofcom prioritises child protection and AI moderation under Online Safety Act

The UK’s Ofcom has outlined its main online safety priorities for 2026–27, signalling tougher oversight of digital platforms under the UK’s Online Safety Act. The regulator said it will continue focusing heavily on child protection while expanding enforcement efforts against illegal hate speech, terrorism-related material, intimate image abuse, and AI-generated harms.

The regulator confirmed that more than 100,000 online services now fall within the scope of the legislation, creating major compliance and enforcement challenges. Ofcom said it will continue investigating platforms that fail to prevent harmful or illegal content, while also preparing new rules linked to additional UK legislation covering cyberflashing, non-consensual intimate imagery, and generative AI services.

Ofcom stated that major online platforms have already introduced broader age verification measures under regulatory pressure. Services including gaming, dating, social media, and pornography platforms have implemented stronger age checks and child safety protections.

Furthermore, the regulator said it will expand supervision of large technology companies and publish updated safety codes later this year, including guidance on AI-powered moderation systems.

According to Ofcom, future compliance work will increasingly focus on the effectiveness of platform moderation systems rather than relying solely on reactive content removal. The regulator also plans to strengthen protections for women and girls online through new technical standards designed to block the spread of non-consensual intimate images and sexual deepfakes at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FTC guidance sets out platform duties under Take It Down Act

The US Federal Trade Commission has issued guidance for online platforms on compliance with Section 3 of the Take It Down Act, which takes effect on 19 May 2026 and requires covered platforms to remove non-consensual intimate photos or videos within 48 hours of receiving a valid request.

The FTC says the law applies to a broad range of online platforms, including websites, apps, social media, messaging, image and video sharing, and gaming services. Platforms may fall under the law if they primarily provide a forum for user-generated content or regularly publish, curate, host, or furnish intimate content shared without consent.

Covered platforms must provide clear and conspicuous plain-language information about how people can submit removal requests for intimate photos or videos shared without consent. The FTC says platforms should make the process easy to use, including for people who do not have an account on the service.

The law also covers ‘digital forgeries’, including intimate images that were digitally created or altered using software, apps, or AI. Platforms that receive a valid request must remove the reported content and make reasonable efforts to locate and remove known identical copies within 48 hours.

The FTC also encourages platforms to help prevent removed images from spreading further, including through hashing technology and, where appropriate, by sharing hashes with services such as the National Center for Missing and Exploited Children’s Take It Down service or StopNCII.org.
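The matching step the FTC describes can be sketched with cryptographic hashes, which catch byte-identical copies only; services such as StopNCII.org and NCMEC’s Take It Down instead rely on perceptual hashing, which tolerates resizing and re-encoding. This is a minimal illustration under that simplifying assumption, and the block list and byte strings here are hypothetical:

```python
import hashlib


def file_hash(data: bytes) -> str:
    """Hash the raw bytes of an uploaded file (exact-duplicate matching only)."""
    return hashlib.sha256(data).hexdigest()


# Hypothetical block list: hashes of images removed after valid requests.
blocked_hashes = {file_hash(b"reported-image-bytes")}


def is_known_copy(upload: bytes) -> bool:
    """Check an incoming upload against the block list of removed images."""
    return file_hash(upload) in blocked_hashes
```

In practice a platform would share perceptual hashes, not raw images, with partner services, so that known content can be blocked at upload time without redistributing it.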

Violations of the Take It Down Act will be enforced by the FTC and treated as violations of an FTC rule. The agency says platforms that breach the law may face civil penalties of $53,088 per violation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Canada issues age assurance guidance

The Office of the Privacy Commissioner of Canada has issued guidance on how organisations should assess and implement age assurance tools for websites and online services.

The OPC states that age assurance should only be used where there is a clear legal requirement or a demonstrable risk of harm to children. It emphasises that organisations must evaluate whether alternative, less intrusive measures could address these risks before adopting such systems.

The guidance highlights that any age assurance approach, including those that use AI, must be proportionate, limit personal data collection, and operate in a privacy-protective manner. It also warns against using collected data for other purposes or linking user activity across sessions.

The OPC adds that organisations must provide user choice with respect to the type of personal information they would prefer to use in an age-assurance process, provide appeal mechanisms, and minimise repeated verification. The framework aims to balance child protection with privacy rights, with the guidance applying to online services in Canada.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MEPs consider stronger EU measures on cyberbullying and online harassment

The European Parliament has voted on a resolution calling for targeted criminal provisions and stronger platform responsibility to address cyberbullying and online harassment, following a debate with the Commission.

The debate focused on whether EU law should go further in addressing harmful online behaviour, including through targeted criminal provisions and stronger obligations for platforms. Parliament’s plenary briefing said MEPs were expected to press the Commission on what more can be done beyond existing Digital Services Act protections.

Draft resolution texts tabled in Parliament say MEPs want the Commission to consider making cyberbullying a criminal offence under EU law and to address legal gaps in the current framework.

The vote followed the Commission’s recent action plan against cyberbullying, which Parliament said is built around a support app, coordination of national approaches, and the promotion of safer digital practices.

The debate also comes after MEPs heard testimony earlier this year from Jackie Fox, whose daughter Coco’s case led to Ireland’s Harassment, Harmful Communications and Related Offences Act 2020, known as Coco’s Law. Parliament’s briefing notes that while EU initiatives address parts of the issue, there is still no EU-wide anti-online bullying law or commonly agreed definition at the European or international level.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK House of Commons backs amendments in lieu on Children’s Wellbeing Bill with online safety provisions

The UK House of Commons has backed government amendments in lieu on the Children’s Wellbeing and Schools Bill, after insisting on its disagreement with the Lords’ amendments and proposing its own amendments in lieu. In the debate, ministers said the Children’s Wellbeing and Schools Bill will place a statutory duty on the Secretary of State to act following the consultation, changing the wording from ‘may’ to ‘must’.

Education minister Olivia Bailey told MPs that the government is consulting on the mechanism, but that ‘under any outcome’ it will impose ‘some form of age or functionality restrictions for children under 16’. She added that curfews would be considered in addition to, not instead of, those restrictions.

Bailey said the Children’s Wellbeing and Schools Bill now requires a statutory progress report three months after Royal Assent, with regulations to be laid within 12 months after that. She said the government intends to move faster and aims to lay the regulations by the end of the year, while describing any further six-month extension as a backstop for ‘exceptional and unforeseen circumstances’ only.

Opposition MPs and Liberal Democrats argued that the timetable remained too slow. Conservative frontbencher Laura Trott said the revised proposal was ‘a huge step forward’ but warned that ‘every month of delay just leaves children more exposed to the harms of social media online’.

Liberal Democrat spokesperson Munira Wilson said the overall timeline could still amount to 21 months before action. The House later voted by 272 to 64 to insist on its disagreement with the Lords’ amendments and to approve the government’s amendments in lieu. Lords amendment 105C was also agreed to, allowing the Children’s Wellbeing and Schools Bill to move forward with the revised online safety provisions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta faces EU Digital Services Act breach finding over under-13 access

The European Commission has preliminarily found Meta’s Instagram and Facebook in breach of the Digital Services Act over failures to adequately prevent children under 13 from accessing the platforms. The finding remains provisional and does not prejudge the outcome of the investigation.

According to the Commission, Meta’s existing measures do not effectively enforce its own minimum age requirement of 13. The preliminary findings say children below that age can still create accounts by entering false birth dates, while the company’s reporting tool for underage users is difficult to use and often does not result in effective follow-up.

The Commission also considers Meta’s risk assessment to be incomplete and arbitrary. It says the company failed to properly identify and assess the risks posed to children under 13 who access Instagram and Facebook, despite evidence from across the EU suggesting that a significant share of children under 13 use one or both services.

At this stage, the Commission says Meta must revise its risk assessment methodology and strengthen its measures to prevent, detect, and remove children under 13 from the platforms. It also says the company must better counter and mitigate the risks those children may face and ensure a high level of privacy, safety, and security for minors.

The preliminary findings form part of formal proceedings opened against Meta in May 2024 under the DSA. The Commission says the investigation has included analysis of Meta’s risk assessment reports, internal data and documents, and the company’s responses to requests for information, with support from civil society organisations and child protection experts across the EU.

If the Commission’s preliminary view is confirmed, it may adopt a non-compliance decision and impose a fine of up to 6% of the provider’s total worldwide annual turnover, as well as periodic penalty payments. Meta now has the opportunity to reply before any final decision is taken.

Henna Virkkunen, Executive Vice President for Tech Sovereignty, Security and Democracy, said Meta’s own terms and conditions already state that its services are not intended for children under 13, but that the company appears to be doing too little in practice to prevent them from gaining access.

Why does it matter?

The case matters because it goes to the heart of how the Digital Services Act is expected to work in practice: not only by requiring large platforms to set rules for child safety, but by obliging them to enforce those rules effectively. If the Commission’s preliminary view is confirmed, the Meta case could become an important benchmark for how the EU treats age assurance, risk assessments, and platform accountability in cases involving minors, with wider implications for other services that rely on self-declared age checks and weak reporting tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

IWF and Immaterialism expand efforts to combat child abuse content online

Immaterialism has joined the Internet Watch Foundation to strengthen efforts against the spread of child sexual abuse material online.

The partnership introduces IWF tools designed to accelerate the identification of harmful domains and enable faster intervention when abusive activity is detected. By adopting Registrar Alerts and related datasets, the registrar aims to improve its ability to respond to criminal content across the domains under its management.

The collaboration reflects a broader shift towards more proactive action at the domain infrastructure layer. By integrating intelligence tools into operational processes, the initiative aims to disrupt both the deliberate distribution of abusive material and the continued availability of domains linked to it.

The IWF says the volume of detected child sexual abuse material continues to rise, reinforcing the need for coordinated responses between safety organisations and private-sector actors. In that sense, the partnership points to closer alignment between domain service providers and specialist online safety groups working to strengthen protections for children online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!