New Mexico wins major case against Meta

A jury has found Meta Platforms liable for misleading consumers and endangering children in a landmark case brought by the New Mexico Department of Justice. The verdict marks the first successful trial by a US state against a major tech firm over child safety concerns.

Jurors awarded civil penalties totalling 375 million dollars after finding violations of consumer protection law. The case focused on claims that platform design choices exposed young users to harmful and exploitative content.

Evidence presented in court included internal company documents and testimony suggesting awareness of risks to children. Allegations centred on failures to prevent exploitation, as well as features linked to addictive behaviour and exposure to harmful material.

Further proceedings in the US are scheduled, with authorities seeking additional penalties and mandated changes to platform safety measures. Proposed actions include stronger age verification and improved protections for minors online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI details Sora 2 safeguards for likeness, audio, and harmful content

OpenAI has published a new overview of the safety measures built into Sora 2 and the Sora app, setting out how the company says it is approaching provenance, likeness protection, teen safeguards, harmful-content filtering, audio controls, and user reporting tools. The Sora team published the note on 23 March 2026.

OpenAI says every video generated with Sora includes visible and invisible provenance signals, and that all videos also embed C2PA metadata. The company adds that many outputs feature visible moving watermarks that include the creator’s name, while internal reverse-image and audio search tools are used to trace videos back to Sora.

A substantial part of the update focuses on likeness and consent. OpenAI says users can upload images of people to generate videos, but only after attesting that they have consent from the people featured and the right to upload the media. OpenAI also says image-to-video generations involving people are subject to stricter safeguards than Sora Characters, and that images featuring children or young-looking people face stricter moderation. Shared videos generated from such images will always carry watermarks, according to the company.

OpenAI also sets out controls linked to its characters feature, which it says is intended to give users stronger control over their likeness, including both appearance and voice. According to the company, users can decide who can use their characters, revoke access at any time, and review, delete, or report videos featuring their characters. OpenAI says it also applies additional restrictions designed to limit major changes to a person’s appearance, avoid embarrassing uses, and maintain broadly consistent identity presentation.

Protections for younger users form another part of the update. OpenAI says teen accounts are subject to stronger limitations on mature output, that age-inappropriate or harmful content is filtered from teen feeds, and that adult users cannot initiate direct messages with teens. Parental controls in ChatGPT can also be used to manage teen messaging permissions and to select a non-personalised feed in the app, while default limits apply to continuous scrolling for teens.

OpenAI says harmful-content controls operate at both creation and distribution stages. Prompt and output checks are used across multiple video frames and audio transcripts to block content including sexual material, terrorist propaganda, and self-harm promotion. OpenAI also says it has tightened policies for video generation compared with image generation because of added realism, motion, and audio, while automated systems and human review are used to monitor feed content against its global usage policies.

Audio generation is treated separately in the note. OpenAI says generated speech transcripts are automatically scanned for possible policy violations, and that prompts intended to imitate living artists or existing works are blocked. The company also says it honours takedown requests from creators who believe an output infringes their work.

User controls and recourse are presented as the final layer. OpenAI says users can choose whether to share videos to the feed, remove published content, and report videos, profiles, direct messages, comments, and characters for abuse. Blocking tools are also available, according to the company, to stop other users from viewing a profile or posts, using a character, or contacting someone through direct message.

OpenAI’s post is framed as a product-safety explanation rather than an independent assessment of the effectiveness of the measures in practice. Much of the note describes controls that the company says it has built into Sora 2, but it does not provide external evaluation data in the published summary.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New AI safety policies target teen protection in apps

OpenAI has released a set of prompt-based safety policies to help developers build safer AI experiences for teenagers. The tools work with the open-weight model gpt-oss-safeguard, turning safety requirements into practical classifiers for real-world use.

The policies address teen risks, including graphic violence, sexual content, harmful body image behaviour, dangerous challenges, roleplay, and age-restricted goods and services. Developers can use them for both real-time filtering and offline content analysis.
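The policy-as-prompt approach described above can be sketched in a few lines: a written safety policy is supplied at inference time as part of the prompt, and the model's answer is parsed into a label usable for real-time filtering. This is a minimal illustrative sketch, not OpenAI's actual API; the prompt format, label names, and the stub standing in for the safeguard model are all assumptions.

```python
# Hypothetical sketch of prompt-based safety classification. The policy text
# travels with the request, so developers can update rules without retraining.

POLICY = """Content is 'violating' if it depicts graphic violence,
dangerous challenges, or age-restricted goods marketed to teens."""

def build_prompt(policy: str, content: str) -> str:
    """Combine a written safety policy and user content into one prompt."""
    return (
        "You are a content safety classifier.\n"
        f"Policy:\n{policy}\n\n"
        f"Content:\n{content}\n\n"
        "Answer with exactly one label: violating or compliant."
    )

def parse_label(model_output: str) -> bool:
    """Return True if the model flagged the content as violating."""
    return model_output.strip().lower().startswith("violating")

# In practice the prompt would go to a safeguard model (e.g. a locally hosted
# open-weight classifier); here a trivial stub stands in for the model call.
def stub_model(prompt: str) -> str:
    return "violating" if "blackout" in prompt.lower() else "compliant"

prompt = build_prompt(POLICY, "Try this 24-hour blackout challenge!")
print(parse_label(stub_model(prompt)))  # True: content matches the policy
```

Because the classifier reads the policy rather than memorising it, the same loop can serve both real-time filtering and offline batch analysis of stored content.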

The framework was developed with input from organisations such as Common Sense Media and everyone.ai to improve clarity and consistency in teen safety rules. The initiative also responds to long-standing challenges in translating high-level safety goals into precise operational systems.

Open-source availability through the ROOST Model Community allows developers to adapt and expand the policies for different use cases and languages. The framework is a foundational step, not a complete solution, encouraging layered safeguards and ongoing refinement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Scotland publishes AI guidance for schools

The Scottish government has published national guidance on the use of AI in schools, aiming to support the safe and ethical adoption of AI in classrooms. The document provides advice for teachers and pupils as AI use continues to expand across society.

The guidance outlines potential benefits of AI alongside risks that need to be considered, and includes examples of appropriate classroom use. It was developed with the EIS teaching union, local government and Education Scotland.

Education Secretary Jenny Gilruth said AI should support creativity, critical thinking and personalised learning while protecting pupils’ rights and privacy. She added that technology must not replace teachers or human relationships in education.

EIS general secretary Andrea Bradley said AI should remain a tool for teachers and not replace professional judgement. The non-statutory guidance allows schools and local authorities flexibility to develop their own policies as AI continues to evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK tests social media bans for children in national pilot

The UK government has launched a large-scale pilot programme to test social media restrictions in the homes of 300 teenagers, aiming to improve children’s well-being instead of relying solely on existing digital safety measures.

The initiative, led by the Department for Science, Innovation and Technology and supported by Liz Kendall, will run for six weeks and examine how limits on digital platforms affect young people’s daily lives, including sleep, schoolwork, and family relationships.

Families across the UK will be divided into groups testing different approaches. Some parents will block access to social media entirely, while others will introduce a one-hour daily limit on popular platforms such as Instagram, TikTok, and Snapchat.

Another group will implement overnight curfews, restricting access between 9 pm and 7 am, while a control group will maintain existing usage patterns rather than introducing changes.

Participants will be interviewed before and after the trial to assess behavioural and practical outcomes, including how easily restrictions can be enforced and whether teenagers attempt to bypass controls.

The pilot runs alongside a national consultation on children’s digital well-being, which has already received nearly 30,000 responses. Government officials and academic experts will analyse data gathered from both initiatives to guide future policy decisions.

The programme aims to ensure that any regulatory steps are evidence-based, reflecting real-life experiences rather than theoretical assumptions about digital behaviour.

Alongside the government trials, an independent scientific study funded by the Wellcome Trust will examine the effects of reduced social media use among adolescents.

Led by researchers from the University of Cambridge and the Bradford Institute for Health Research, the study will involve around 4,000 students aged 12 to 15.

Findings are expected to provide deeper insight into how social media influences anxiety, sleep, relationships, and overall well-being, supporting policymakers in shaping future online safety measures instead of relying on limited evidence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ICO and Ofcom issue guidance on age assurance and online safety

The Information Commissioner’s Office and Ofcom have issued a joint statement outlining how age assurance measures should align with online safety and data protection requirements.

The guidance focuses on protecting children from harm online rather than treating safety and privacy as separate obligations, reflecting closer coordination between the two regulators.

The statement is directed at digital services likely to be accessed by children and falling within the scope of the Online Safety Act and UK data protection laws.

It provides a practical overview of existing policies, helping organisations understand how to meet both regulatory frameworks while implementing age assurance technologies.

Rather than introducing new rules, the guidance clarifies how current requirements interact in practice. It highlights the importance of designing systems that both verify users’ ages and safeguard personal data, ensuring that safety measures do not undermine privacy protections.

The approach encourages organisations to integrate compliance into service design instead of addressing obligations separately.

By aligning regulatory expectations, the ICO and Ofcom aim to support organisations in delivering safer online environments for children while maintaining strong data protection standards.

The joint effort signals a broader move towards coordinated digital regulation, where safety and privacy are addressed together to reflect the complexities of modern online services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EDPB summarises conference on cross-regulatory cooperation in the EU

The European Data Protection Board has published a summary of its 17 March conference in Brussels on cross-regulatory interplay and cooperation in the EU from a data protection perspective. According to the EDPB, the event brought together representatives of the EU institutions, European Data Protection Authorities, academia, and industry.

Three panels structured the conference discussion. One focused on data protection and competition, another on the Digital Markets Act and the General Data Protection Regulation (GDPR), and a third on the Digital Services Act and the GDPR.

Discussion in the first panel centred on cooperation between regulatory bodies in data protection and competition, including lessons from the aftermath of the Bundeskartellamt ruling. The EDPB said speakers emphasised the need for regulators to align their approaches and recognise synergies between the two fields. Speakers also said data protection should be considered in competition analysis only when relevant and on a case-by-case basis. The EDPB added that it had recently agreed with the European Commission to develop joint guidelines on the interplay between competition law and data protection.

The second panel focused on joint guidelines on the Digital Markets Act and the GDPR, developed by the European Commission and the EDPB and recently opened to public consultation. According to the EDPB, speakers described the guidelines as an example of regulatory cooperation aimed at developing a coherent and compatible interpretation of the two frameworks while respecting regulatory competences. The Board said participants linked the guidelines to stronger consistency, legal clarity, and easier compliance. Some speakers also suggested changes to the final version, including points related to proportionality and the relationship between DMA obligations and the GDPR.

The final panel examined the interaction between the Digital Services Act and the GDPR. The EDPB said panellists referred to the protection of minors as one example, arguing that age verification should be effective while remaining fully in line with data protection legislation. Speakers also highlighted the need for coordination between the two frameworks, including cooperation involving the EU institutions such as the European Board for Digital Services, the European Commission, the EDPB, and national authorities. Emerging technologies such as AI were also mentioned in the discussion.

The event also featured keynote speeches from European Commission Executive Vice President Henna Virkkunen and European Parliament LIBE Committee Chair Javier Zarzalejos. According to the EDPB, Virkkunen said the Commission remained committed to cooperation between different frameworks and highlighted the need to support compliance through stronger coordination among regulators. Zarzalejos said close cross-regulatory cooperation was essential for consistency, effective enforcement, and trust, and pointed to the intersections among data protection law, competition law, the DMA, and the DSA.

EDPB Chair Anu Talus closed the conference by reiterating that the EDPB and European Data Protection Authorities are committed to supporting stakeholders in navigating what the Board described as a new cross-regulatory landscape. The EDPB said future work will include continued cooperation with the Commission on joint guidelines on the interplay between the AI Act and the GDPR, finalisation of the joint guidelines on the interplay between the DMA and the GDPR, and work on the recently announced Joint Guidelines on the interplay between data protection and competition law.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

IWF report reveals a rapid growth of synthetic child abuse material online

A surge in AI-generated child sexual abuse material has raised urgent concerns across Europe, with the Internet Watch Foundation reporting record levels of harmful content online.

The report's findings indicate that AI is accelerating both the scale and severity of abuse, transforming how offenders create and distribute illicit material.

Data from 2025 reveals a sharp increase in AI-generated imagery and video, with over 8,000 cases identified and a dramatic rise in highly severe content.

Synthetic videos have grown at an unprecedented rate, reflecting how emerging tools are being used to produce increasingly realistic and extreme scenarios rather than traditional formats.

Analysis of offender behaviour highlights a disturbing trend toward automation and accessibility.

Discussions on dark web forums suggest that future agentic AI systems may enable the creation of fully produced abusive content with minimal technical skill. The integration of audio and image manipulation further deepens risks, particularly where real children’s likenesses are involved.

Calls for regulatory action are intensifying as policymakers in the EU debate reforms to the Child Sexual Abuse Directive.

Advocacy groups emphasise the need for comprehensive criminalisation, alongside stronger safety-by-design requirements, arguing that technological innovation must not outpace child protection frameworks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europol-backed operation shuts down thousands of dark web fraud sites

A global law enforcement operation supported by Europol has led to the shutdown of more than 373,000 dark web sites linked to fraudulent activity and the advertisement of child sexual abuse material.

The operation, known as ‘Operation Alice’, was launched on 9 March 2026 under the leadership of German authorities, with participation from 23 countries. The investigation, which began in 2021, initially targeted a dark web platform referred to as ‘Alice with Violence CP’.

According to Europol, investigators identified a single operator responsible for managing a network of hundreds of thousands of onion domains. These websites advertised child sexual abuse material and cybercrime-as-a-service offerings, including access to stolen financial data and systems.

Authorities state that the services were fraudulent, designed to extract payments without delivering the advertised material.

The operation has so far resulted in the identification of 440 customers worldwide, with further investigations ongoing against more than 100 individuals. Law enforcement agencies also seized 105 servers and multiple electronic devices during the coordinated action.

Europol provided analytical support, facilitated information exchange, and assisted in tracing cryptocurrency transactions linked to the network.

Authorities also reported that measures were taken throughout the investigation to identify and protect children at risk. An international arrest warrant has been issued for the suspected operator, who is reported to have generated significant profits through the scheme.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sora strengthens AI video safety through consent and traceability controls

OpenAI has outlined a safety framework for Sora that embeds protections into how AI-generated video content is created, shared, and managed.

The system introduces visible and invisible provenance signals, including C2PA metadata and watermarks, designed to ensure that generated media can be identified and traced.

The framework emphasises consent and control. Users can generate video content from images of real individuals only after confirming they have permission, while the ‘characters’ feature enables controlled use of personal likeness, with the ability to revoke access at any time.

Additional safeguards apply to content involving minors or young-looking individuals, with stricter moderation rules and enforced watermarking.

Safety mechanisms operate across the entire lifecycle of content. Generation is subject to layered filtering that assesses prompts and outputs for harmful material, including sexual content, self-harm promotion, and illegal activity.

These automated systems are complemented by human review and continuous testing to address emerging risks linked to increasingly realistic video and audio outputs.

The system also introduces protections specific to audio and user interaction. Generated speech is analysed for policy violations, and attempts to replicate the style of living artists or existing works are restricted.

Users of Sora retain control over their content through reporting tools, sharing settings, and the ability to remove material, reflecting a broader approach that aligns AI-generated media with safety, transparency, and accountability standards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!