EU demands stronger age verification from adult websites

The European Commission has preliminarily found that several major adult platforms, including Pornhub, Stripchat, XNXX, and XVideos, may be in breach of the Digital Services Act for failing to adequately protect minors from accessing harmful content.

These findings reflect concerns that children can easily access such platforms because robust safeguards to keep them out are lacking.

The Commission’s investigation indicates that the platforms’ risk assessments were insufficient. In several cases, companies focused on reputational or business risks instead of fully addressing societal harms to minors.

Authorities also raised concerns that some platforms did not adequately consider input from civil society organisations specialising in children’s rights and age-assurance technologies, undermining the reliability of their evaluations.

On risk mitigation, the Commission found existing measures ineffective. Simple self-declaration systems, in which users merely confirm they are over 18, were deemed inadequate, while additional features such as warnings, labels, or blurred previews failed to stop minors from reaching harmful content.

The Commission considers that stronger, privacy-preserving age-verification solutions are necessary to ensure meaningful protection of children’s rights and well-being online.

The companies involved now have the opportunity to respond and propose corrective measures, while consultations with the European Board for Digital Services continue.

If the preliminary findings are confirmed, the Commission may impose fines of up to 6 percent of global annual turnover, alongside periodic penalty payments to enforce compliance.

The case forms part of broader efforts to enforce the Digital Services Act and strengthen online safety across the EU, rather than relying on voluntary measures by platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europol warns legal gaps could weaken child abuse detection online

Efforts to combat online child sexual exploitation could be severely weakened, Europol has warned, if legal frameworks supporting detection and reporting are disrupted.

Executive Director Catherine De Bolle highlighted growing concerns over the increasing volume of harmful content online and stressed that protecting children remains a top priority for European law enforcement.

Authorities rely heavily on reports submitted by online service providers, which play a central role in identifying victims and supporting investigations, rather than relying solely on traditional policing methods.

Europol processed around 1.1 million CyberTips in a single year, many originating from the National Center for Missing & Exploited Children and shared across 24 European countries.

These CyberTips include critical evidence such as images, videos, and other digital data used to track criminal activity.

Europol cautioned that removing the legal basis allowing voluntary detection by platforms could significantly reduce the number of reports submitted to authorities. A decline in CyberTips would limit investigative leads, making it harder to identify victims and disrupt online criminal networks.

Such a development could undermine broader security efforts and weaken the protection of minors across the EU instead of strengthening safeguards.

The agency emphasised that maintaining online service providers’ ability to detect and report suspected abuse is essential to effective law enforcement.

Ensuring continued cooperation between platforms and authorities remains a key factor in safeguarding children and addressing the growing threat of online exploitation.

EU opens probe into Snapchat child safety compliance

The European Commission has launched formal proceedings to assess whether Snapchat is complying with child protection obligations under the Digital Services Act. The investigation focuses on whether the platform ensures adequate safety, privacy, and security for minors.

Authorities suspect Snapchat may have failed to prevent exposure of children to grooming attempts, recruitment for criminal activity, and content linked to illegal goods such as drugs, vapes, and alcohol.

Concerns also include whether minors can be effectively prevented from accessing the platform or interacting with adults posing as peers.

The inquiry will examine age assurance methods, default account settings, reporting tools, and the spread of illegal content. Regulators argue that self-declared age may be insufficient, while default settings and recommendations may expose minors to risks.

The Commission will now gather further evidence through information requests, inspections, and interviews, and may take enforcement actions, including interim measures or penalties.

National regulators will support the investigation as part of coordinated oversight under the Digital Services Act.

Open letter targets Meta ad practices

A coalition of civil society and industry groups has urged the European Commission to enforce the Digital Markets Act more rigorously, warning that major tech firms continue to exploit compliance gaps. The appeal centres on concerns over data use and online advertising practices.

Organisations including noyb, Check My Ads, and the Irish Council for Civil Liberties argue that current models fail to offer users genuine choice. Critics say consent mechanisms tied to payment or tracking undermine the intent of the EU digital rules.

The letter against Meta calls for clearer standards, including equal options for personalised and non-personalised advertising, as well as stricter limits on design practices that influence user decisions. Campaigners also want stronger coordination between regulators to ensure consistent enforcement.

The push reflects wider frustration among European organisations, with several recent letters demanding faster action against dominant platforms. Observers warn that delayed enforcement risks weakening the credibility of the EU digital regulation.

UNESCO and Tecnológico de Monterrey partner on AI in education initiative

UNESCO and Tecnológico de Monterrey have signed an agreement to collaborate on advancing the use of AI in education, as digital transformation reshapes learning systems and workforce skills across Latin America and the Caribbean.

The agreement establishes a framework for joint work on generating evidence, developing standards and formulating public policy recommendations on AI in education, and supports the launch of a Regional Observatory on Artificial Intelligence in Education.

A financial contribution of $90,000 will support the Observatory’s implementation, following months of technical coordination and institutional validation between the two organisations.

After the signing, technical teams reviewed the operational plan for the first year, including methodological frameworks on teachers’ digital competencies and AI ethics, as well as pilot projects in Chile, El Salvador and Mexico.

According to Esther Kuisch Laroche, the initiative aims to ensure AI contributes to more inclusive, ethical and relevant education systems, while moving from principles to practical solutions.

New Mexico wins major case against Meta

A jury has found Meta Platforms liable for misleading consumers and endangering children in a landmark case brought by the New Mexico Department of Justice. The verdict marks the first successful trial by a US state against a major tech firm over child safety concerns.

Jurors awarded civil penalties totalling $375 million after finding violations of consumer protection law. The case focused on claims that platform design choices exposed young users to harmful and exploitative content.

Evidence presented in court included internal company documents and testimony suggesting awareness of risks to children. Allegations centred on failures to prevent exploitation, as well as features linked to addictive behaviour and exposure to harmful material.

Further proceedings in the US are scheduled, with authorities seeking additional penalties and mandated changes to platform safety measures. Proposed actions include stronger age verification and improved protections for minors online.

OpenAI details Sora 2 safeguards for likeness, audio, and harmful content

OpenAI has published a new overview of the safety measures built into Sora 2 and the Sora app, setting out how the company says it is approaching provenance, likeness protection, teen safeguards, harmful-content filtering, audio controls, and user reporting tools. The Sora team published the note on 23 March 2026.

OpenAI says every video generated with Sora includes visible and invisible provenance signals, and that all videos also embed C2PA metadata. The company adds that many outputs feature visible moving watermarks that include the creator’s name, while internal reverse-image and audio search tools are used to trace videos back to Sora.
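C2PA provenance data travels inside the media file itself, in a JUMBF container labelled "c2pa". As a rough illustration only (this is not OpenAI's implementation, and presence-checking is no substitute for full signature verification with real C2PA tooling), a naive heuristic can test whether a downloaded file even carries a manifest marker:

```python
# Naive sketch: check whether raw media bytes appear to carry a C2PA
# manifest by scanning for the "c2pa" JUMBF label. This only detects
# presence; validating signatures and edit history requires the real
# C2PA toolchain, not this heuristic.

def has_c2pa_marker(data: bytes) -> bool:
    """Return True if the raw bytes contain the 'c2pa' manifest label."""
    return b"c2pa" in data

# Usage with an in-memory stand-in for a downloaded video file:
fake_video = b"\x00\x00\x00\x18ftypmp42" + b"c2pa" + b"\x00" * 8
print(has_c2pa_marker(fake_video))      # True
print(has_c2pa_marker(b"no manifest"))  # False
```

A real verifier would parse the box structure and check the cryptographic claim signatures; the sketch above shows only where the provenance data lives.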

A substantial part of the update focuses on likeness and consent. OpenAI says users can upload images of people to generate videos, but only after attesting that they have consent from the people featured and the right to upload the media. OpenAI also says image-to-video generations involving people are subject to stricter safeguards than Sora Characters, and that images including children and young-looking persons face stricter moderation. Shared videos generated from such images will always carry watermarks, according to the company.

OpenAI also sets out controls linked to its characters feature, which it says is intended to give users stronger control over their likeness, including both appearance and voice. According to the company, users can decide who can use their characters, revoke access at any time, and review, delete, or report videos featuring their characters. OpenAI says it also applies additional restrictions designed to limit major changes to a person’s appearance, avoid embarrassing uses, and maintain broadly consistent identity presentation.

Protections for younger users form another part of the update. OpenAI says teen accounts are subject to stronger limitations on mature output, that age-inappropriate or harmful content is filtered from teen feeds, and that adult users cannot initiate direct messages with teens. Parental controls in ChatGPT can also be used to manage teen messaging permissions and to select a non-personalised feed in the app, while default limits apply to continuous scrolling for teens.

OpenAI says harmful-content controls operate at both creation and distribution stages. Prompt and output checks are used across multiple video frames and audio transcripts to block content including sexual material, terrorist propaganda, and self-harm promotion. OpenAI also says it has tightened policies for video generation compared with image generation because of added realism, motion, and audio, while automated systems and human review are used to monitor feed content against its global usage policies.

Audio generation is treated separately in the note. OpenAI says generated speech transcripts are automatically scanned for possible policy violations, and that prompts intended to imitate living artists or existing works are blocked. The company also says it honours takedown requests from creators who believe an output infringes their work.

User controls and recourse are presented as the final layer. OpenAI says users can choose whether to share videos to the feed, remove published content, and report videos, profiles, direct messages, comments, and characters for abuse. Blocking tools are also available, according to the company, to stop other users from viewing a profile or posts, using a character, or contacting someone through direct message.

OpenAI’s post is framed as a product-safety explanation rather than an independent assessment of the effectiveness of the measures in practice. Much of the note describes controls that the company says it has built into Sora 2, but it does not provide external evaluation data in the published summary.

New AI safety policies target teen protection in apps

OpenAI has released a set of prompt-based safety policies to help developers build safer AI experiences for teenagers. The tools work with the open-weight model gpt-oss-safeguard, turning safety requirements into practical classifiers for real-world use.

The policies address teen risks, including graphic violence, sexual content, harmful body image behaviour, dangerous challenges, roleplay, and age-restricted goods and services. Developers can use them for both real-time filtering and offline content analysis.
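The pattern of turning a written safety policy into a classifier can be sketched as follows. The real system would send the policy text and the content to gpt-oss-safeguard through whatever inference stack the developer runs; since that call depends on the serving setup, `run_model` below is a hypothetical keyword-matching stand-in so the example is self-contained:

```python
# Sketch of a prompt-based safety classifier: a written policy is
# paired with user content and the model returns a label.
# `run_model` is a hypothetical stand-in for a gpt-oss-safeguard
# inference call, not a real API.

POLICY = """\
Label content "violates" if it promotes dangerous challenges or
age-restricted goods to teens; otherwise label it "allowed".
"""

def run_model(policy: str, content: str) -> str:
    # Placeholder: real deployments would prompt the model with
    # both the policy and the content.
    flagged_terms = ("dangerous challenge", "buy vapes")
    text = content.lower()
    return "violates" if any(t in text for t in flagged_terms) else "allowed"

def classify(content: str) -> str:
    """Real-time filter: evaluate content against the written policy."""
    return run_model(POLICY, content)

print(classify("Try this dangerous challenge at home!"))  # violates
print(classify("Homework help for algebra"))              # allowed
```

The design point is that the policy lives in editable text rather than in code, so safety teams can revise rules without retraining or redeploying the classifier.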

The framework was developed with input from organisations such as Common Sense Media and everyone.ai to improve clarity and consistency in teen safety rules. The initiative also responds to long-standing challenges in translating high-level safety goals into precise operational systems.

Open-source availability through the ROOST Model Community allows developers to adapt and expand the policies for different use cases and languages. The framework is a foundational step, not a complete solution, encouraging layered safeguards and ongoing refinement.

Scotland publishes AI guidance for schools

The Scottish government has published national guidance on the use of AI in schools, aiming to support the safe and ethical adoption of AI in classrooms. The document provides advice for teachers and pupils as AI use continues to expand across society.

The guidance outlines potential benefits of AI alongside risks that need to be considered, and includes examples of appropriate classroom use. It was developed with the EIS teaching union, local government and Education Scotland.

Education Secretary Jenny Gilruth said AI should support creativity, critical thinking and personalised learning while protecting pupils’ rights and privacy. She added that technology must not replace teachers or human relationships in education.

Andrea Bradley said AI should remain a tool for teachers and not replace professional judgement. The non-statutory guidance allows schools and local authorities flexibility to develop their own policies as AI continues to evolve.

UK tests social media bans for children in national pilot

The UK government has launched a large-scale pilot programme to test social media restrictions in the homes of 300 teenagers, aiming to improve children’s well-being instead of relying solely on existing digital safety measures.

The initiative, led by the Department for Science, Innovation and Technology and supported by Liz Kendall, will run for six weeks and examine how limits on digital platforms affect young people’s daily lives, including sleep, schoolwork, and family relationships.

Families across the UK will be divided into groups testing different approaches. Some parents will block access to social media entirely, while others will introduce a one-hour daily limit on popular platforms such as Instagram, TikTok, and Snapchat.

Another group will implement overnight curfews, restricting access between 9 pm and 7 am, while a control group will maintain existing usage patterns rather than introducing changes.

Participants will be interviewed before and after the trial to assess behavioural and practical outcomes, including how easily restrictions can be enforced and whether teenagers attempt to bypass controls.

The pilot runs alongside a national consultation on children’s digital well-being, which has already received nearly 30,000 responses. Government officials and academic experts will analyse data gathered from both initiatives to guide future policy decisions.

The programme aims to ensure that any regulatory steps are evidence-based, reflecting real-life experiences rather than theoretical assumptions about digital behaviour.

Alongside the government trials, an independent scientific study funded by the Wellcome Trust will examine the effects of reduced social media use among adolescents.

Led by researchers from the University of Cambridge and the Bradford Institute for Health Research, the study will involve around 4,000 students aged 12 to 15.

Findings are expected to provide deeper insight into how social media influences anxiety, sleep, relationships, and overall well-being, supporting policymakers in shaping future online safety measures instead of relying on limited evidence.
