OpenAI launches child safety framework to address AI risks

OpenAI has introduced a new framework to address the risks of AI-enabled child abuse and strengthen protection mechanisms across digital systems.

The initiative reflects growing concern over how emerging technologies can both enable and prevent harm.

The blueprint focuses on modernising legal frameworks to address AI-generated harmful content, improving reporting and coordination among service providers, and embedding safety measures directly into AI systems.

These measures aim to enhance early detection and prevent misuse at scale.

Developed in collaboration with organisations such as the National Center for Missing and Exploited Children (NCMEC) and the Attorney General Alliance, the framework promotes shared standards across industry and public authorities.

It emphasises coordinated responses and stronger accountability mechanisms.

The approach combines technical safeguards, human oversight, and legal enforcement, aiming to improve response speed and reduce risks before harm occurs.

Ultimately, the initiative highlights the need for continuous adaptation as AI capabilities evolve and reshape online safety challenges.

Greece moves to restrict youth social media access with new digital age rules

Greece has announced new measures to protect minors online, introducing a national ‘digital age of majority’ that restricts access to social media for users under 15.

The policy forms part of a broader strategy addressing child safety and digital overuse, with implementation scheduled for January 2027.

The initiative places primary responsibility on platforms, requiring robust age-verification systems and periodic re-verification of existing accounts. Authorities will oversee compliance under the EU’s Digital Services Act framework, with penalties including fines and operational restrictions for violations.
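In practice, periodic re-verification reduces to checking whether an account’s last successful age check has gone stale. A minimal sketch follows; the 12-month interval is an assumption for illustration, as the cadence is not specified in the policy as described:

```python
from datetime import datetime, timedelta, timezone

# Assumed re-verification interval: the rules require periodic
# re-verification of existing accounts, but this cadence is a
# placeholder, not a figure from the Greek policy.
REVERIFY_AFTER = timedelta(days=365)

def needs_reverification(last_verified: datetime) -> bool:
    """Flag accounts whose age verification has gone stale."""
    return datetime.now(timezone.utc) - last_verified > REVERIFY_AFTER
```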

The policy builds on earlier tools such as KidsWallet, an age-verification mechanism already deployed nationally.

Authorities in Greece argue that reliance on parental control alone is insufficient, citing increasing evidence linking excessive platform use to mental health risks, including anxiety, reduced sleep, and social isolation.

The proposal aligns with wider European discussions on youth protection, including efforts to establish a unified digital age threshold across member states.

Greece has also called for stronger EU-wide enforcement mechanisms, positioning the measure as part of a coordinated approach to safeguarding minors in digital environments.

UK data reveals alarming growth in online child abuse cases

A sharp increase in online child abuse cases has been reported by the Internet Watch Foundation (IWF) and NSPCC’s Childline, based on data from the Report Remove service.

Nearly 1,900 UK children reported sexual imagery concerns in 2025, a 66 percent rise, with more than 1,100 confirmed cases involving abuse material. Weekly reports show a consistent pattern of coercion, threats, and financial pressure targeting minors.

The scale of the increase reflects structural changes in how abuse operates online. Offenders use fake identities and contact many victims simultaneously, turning exploitation into a repeatable activity.

Financial incentives reinforce the pattern, and teenage boys aged 14 to 17 represent the majority of cases, indicating targeted and adaptive behaviour by perpetrators.

Weaknesses in digital environments further sustain such growth. Platforms prioritise speed and interaction instead of prevention, while anonymity and cross-border activity reduce enforcement effectiveness.

Psychological pressure remains central, with threats designed to isolate victims and limit reporting, meaning recorded cases likely underestimate the real scale.

The IWF’s findings highlight a policy gap between technological expansion and child safety protections in the UK.

While services like Report Remove improve response and mitigation, they do not address underlying risks. Without stronger platform accountability and preventive regulation, online child abuse is likely to continue expanding.

ICO launches online privacy campaign for parents

New research published by the Information Commissioner’s Office (ICO) found that 24% of primary school-aged children have shared their real name or address online, while 21% of parents and carers have never spoken to them about online privacy. It also found that 22% of children have shared personal information, such as health details, with AI tools.

Research published by the ICO also found that 71% of parents worry that information their child shares today could affect their future. Findings also show that 46% do not feel confident protecting their children’s privacy online, 44% say they try but are not sure they are doing enough, and 42% say they probably do not spend enough time checking privacy settings.

Online privacy is one of the least-discussed online safety topics among parents, according to the ICO. Its research found that 38% discuss it less than once a month, while 90% have discussed screen time in the past month.

Emily Keaney, Deputy Commissioner at the ICO, said: ‘The internet offers amazing opportunities for children – but every click can leave a hidden data trail and these digital footprints can last forever.’ She added: ‘We wouldn’t expect our children to share their birthdays or address with a stranger in a shop, because we’d explain stranger danger to them from a very young age, but kids these days are growing up online.’

Keaney said: ‘We know that where children’s details – like their name, interests and pictures – aren’t protected, the potential risks are serious: unwanted contact from strangers, grooming and radicalisation.’ She said children’s online privacy ‘requires a whole society approach’ and added: ‘We have taken and will continue to take action to hold tech companies accountable for their role.’

Keaney also said: ‘There’s a role for parents too but the problem is that many families have never been shown how to talk to their children about online privacy.’ She added: ‘This is where the ICO comes in. We want parents to feel empowered and children to feel digitally confident, because only then will they be able to start to trust in how their data is used and be part of the whole society solution that is needed for online safety.’

The ICO campaign website outlines three steps for parents: talk regularly with children about online privacy, carefully choose what personal information to share, and check privacy settings on new devices and apps.

National Crime Agency to receive CSEA reports under UK Online Safety Act rules

UK regulations under the Online Safety Act 2023 are now in force, requiring certain regulated user-to-user services to register with the National Crime Agency and report detected and unreported child sexual exploitation and abuse content.

Under the Online Safety (CSEA Content Reporting by Regulated User-to-User Service Providers) Regulations 2026, providers subject to the reporting duty, and any third-party providers acting on their behalf, must register with the National Crime Agency through an online portal. They must also appoint an organisation administrator as a point of contact.

Reports submitted to the National Crime Agency must contain specified information, including details about the content, the time it was uploaded, relevant IP addresses, and user account data. The regulations also require providers to classify reports into three priority levels and submit them within the corresponding timeframes.

Record-keeping duties are also set out in the regulations. Providers must retain the report reference number for five years and keep the associated content and user data for one year from the reporting date.
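In code terms, the duties above amount to a structured report record plus two retention clocks. The sketch below is a rough illustration: the field names are hypothetical, and the regulations define three priority levels whose statutory labels are not reproduced here.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum

class Priority(Enum):
    # Three priority levels with corresponding submission timeframes;
    # these labels are placeholders, not the statutory names.
    HIGH = 1
    MEDIUM = 2
    LOW = 3

@dataclass
class CseaReport:
    reference_number: str   # must be retained for five years
    content_details: str    # e.g. a description or hash of the content
    uploaded_at: datetime
    uploader_ip: str
    account_id: str
    priority: Priority

def retention_deadlines(reported_at: datetime) -> dict[str, datetime]:
    """Approximate the two retention windows set out in the regulations."""
    return {
        # Report reference number: five years from the reporting date
        # (365-day years used here as a simplification).
        "reference_number_until": reported_at + timedelta(days=5 * 365),
        # Associated content and user data: one year.
        "content_and_user_data_until": reported_at + timedelta(days=365),
    }
```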

The rules form part of the reporting framework under the Online Safety Act 2023 for child sexual exploitation and abuse content on regulated user-to-user services in the UK. Non-compliance may result in a penalty of up to 10% of qualifying worldwide revenue or £18 million, whichever is greater.
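The penalty ceiling is simply the greater of the two figures, as this toy calculation shows:

```python
def max_penalty(qualifying_worldwide_revenue: float) -> float:
    """Greater of 10% of qualifying worldwide revenue or £18 million."""
    return max(0.10 * qualifying_worldwide_revenue, 18_000_000.0)

print(max_penalty(500_000_000))  # 50000000.0 – the 10% figure dominates
print(max_penalty(100_000_000))  # 18000000.0 – the £18m floor applies
```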

EU lapse in child safety rules raises concerns

The expiry of the EU ePrivacy derogation, which allowed technology to detect child sexual abuse material online, has raised concerns over weaker child safeguards. The lapse is seen as creating legal uncertainty for platforms that rely on established detection tools to prevent ongoing harm.

For years, technology companies have voluntarily used hash-matching to detect and remove CSAM, a widely recognised tool for disrupting abuse and protecting victims.
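At its simplest, hash-matching computes a digest of uploaded media and checks it against a curated list of hashes of known material. The sketch below uses a plain SHA-256 exact match purely to illustrate the principle; production systems rely on perceptual hashes such as PhotoDNA, which also match re-encoded or slightly altered copies, and on hash lists curated by bodies like NCMEC.

```python
import hashlib

# Hypothetical known-hash list: real deployments draw on curated
# databases rather than hard-coded values like this placeholder.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def matches_known_material(file_bytes: bytes) -> bool:
    """Return True if the file's digest appears in the known-hash list."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_HASHES
```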

Google, alongside nearly 250 child rights organisations, is calling on the EU institutions to urgently finalise a regulatory framework, warning that reduced detection capacity could impact child safety globally.

The EU institutions face criticism for failing to maintain an interim agreement, with stakeholders saying the lack of continuity undermines child online safety efforts.

Meta, Microsoft, and Snap have reaffirmed their commitment to continue voluntary detection and reporting measures while respecting user privacy. The companies also urge the EU institutions to urgently finalise a regulatory framework for consistent and effective child protection standards.

The absence of a clear framework has been described as creating instability for responsible platforms operating across Europe. Fragmented rules and legal uncertainty can slow detection and reporting systems, weakening coordinated protection efforts across platforms and borders.

EU interim ePrivacy derogation for voluntary CSAM detection expires

The EU’s interim ePrivacy derogation, which allowed certain communications services to voluntarily detect child sexual abuse online, expired after 3 April 2026, bringing to an end the temporary legal basis that had permitted some providers to scan private communications for child sexual abuse material under limited conditions.

The exemption applied to number-independent interpersonal communications services such as messaging, webmail, and internet telephony platforms, allowing them to use specific technologies to detect, report, and remove child sexual abuse material in private communications.

Under the temporary framework, providers were also required to make information from reports submitted to authorities and the European Commission available in a structured, machine-readable format.
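A ‘structured, machine-readable format’ typically means something like a JSON record. The field names below are assumptions for illustration only; the derogation’s actual reporting schema is not reproduced here.

```python
import json

# Hypothetical transparency record; field names are illustrative
# assumptions, not the schema the derogation actually prescribed.
report_summary = {
    "provider": "example-messaging-service",
    "reporting_period": "2025",
    "reports_submitted": 42,
    "detection_technology": "hash-matching",
}

print(json.dumps(report_summary, indent=2))
```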

On 26 March 2026, the European Parliament said the derogation would not be extended after negotiations with the Council of the European Union failed to produce an agreement. Parliament had supported a further extension on 11 March, backing a shorter prolongation until August 2027 and a narrower scope than the European Commission had proposed, but no final deal was reached before the deadline.

The expiry leaves the EU without an updated interim arrangement, while negotiations on a permanent legal framework for addressing online child sexual abuse continue. In practice, that means the bloc still has no settled long-term answer to one of its most difficult digital policy questions: how to reconcile child protection measures with privacy and confidentiality rules governing private communications.

Why does it matter?

Because the lapse removes the temporary EU legal basis that had allowed some messaging and other communications services to voluntarily use detection technologies for online child sexual abuse under a limited exemption from ePrivacy rules. That creates immediate legal and operational uncertainty for providers that had relied on the framework, while also reopening a wider policy conflict the EU has still not resolved: how to support child safety online without undermining privacy, confidentiality of communications, and data protection safeguards in the absence of a permanent legislative solution.

UK regulator orders revised safety assessments under Online Safety Act

Ofcom has ordered more than 40 online services to submit revised risk assessments under the UK’s Online Safety Act, increasing pressure on platforms to show how they identify and reduce illegal content and other user harms.

The move marks a tougher phase in the UK’s online safety regime, with the regulator signalling that incomplete or delayed submissions could trigger enforcement action.

Ofcom said earlier reviews had identified weaknesses in several assessments, prompting companies to strengthen their approach and improve safeguards.

The requirement is especially significant for services likely to be accessed by children, which must also examine the risk of exposure to harmful content and demonstrate what protective measures they have in place. In that sense, the regulator is pushing platforms to treat safety not as a reactive moderation issue, but as a design and compliance obligation.

Ofcom has also indicated that major platforms will eventually have to publish summaries of their risk assessments, adding a transparency layer to the regime.

The latest demands suggest that the UK is moving beyond setting out online safety expectations and into a more interventionist stage focused on supervision, accountability, and enforcement.

Cyberbullying in education addressed at UNESCO workshop in Addis Ababa

UNESCO has used a two-day workshop in Addis Ababa to push cyberbullying, hate speech, misinformation, and other forms of online violence in schools higher on the education and digital safety agenda. The training, organised by UNESCO’s Liaison Office to the African Union, UNECA and Ethiopia alongside the Addis Ababa City Government Education Bureau, brought together teachers, education experts, government representatives, youth leaders, and academics.

Held on 7 and 8 March, the event was presented as an effort to strengthen local capacity to recognise, prevent, and respond to online harms affecting students, while framing cyberviolence not only as a student well-being issue, but also as a broader challenge for safer and more inclusive learning environments.

According to UNESCO, such harms can affect learners’ mental health, sense of safety, and academic performance, placing cyberbullying and online abuse within a wider discussion about digital well-being and protection in education. That framing matters because it treats online violence in schools as more than an issue of classroom discipline or individual misconduct.

The organisation also linked the workshop to wider evidence of harm in digital spaces, citing data showing that 58% of young women and girls globally have experienced online harassment on social media platforms. The Addis Ababa event can be read as part of a broader attempt to build institutional awareness and response capacity around online harms affecting young people.

Training sessions covered digital safety, cyberbullying prevention, digital rights and responsibilities, digital well-being, and UNESCO guidance on tackling cyberviolence in education. The emphasis was not only on identifying risks, but also on helping educators and youth leaders respond to them more effectively in both online and offline learning settings.

While the workshop did not introduce a new policy framework or regulatory measure, it suggests that cyberbullying is increasingly being treated as part of a wider public-interest conversation about education, student protection, and digital harms.

That gives the event greater relevance than a routine training session, particularly in a context where schools are being pushed to address the social consequences of digital platforms more directly.

Experts warn YouTube AI slop harms children and demand action

Fairplay and more than 200 experts have urged YouTube to address the spread of ‘AI slop’ targeting children. The letter, accompanied by a petition, was sent to Alphabet CEO Sundar Pichai and YouTube CEO Neal Mohan.

The signatories state that AI-generated videos harm children’s development by distorting reality and overwhelming learning processes. They also warn that such content captures attention and is being recommended to young users, including infants and toddlers.

The letter cites findings that 40% of videos following shows like Cocomelon contained AI-generated content. It also states that 21% of Shorts recommendations included similar material, and misleading science videos were shown to older children.

Fairplay and its partners propose measures, including labelling AI content and banning it from YouTube Kids. They also call for restrictions on recommendations to under-18s and for tools that allow parents to turn off such content.
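Taken together, the proposals imply a recommendation filter keyed on an AI-content label, the viewer’s age, and a parental opt-out. The sketch below is a rough illustration under those assumed labels and account fields; YouTube’s actual systems are not public.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    ai_generated: bool  # assumes the AI-content labelling the letter calls for

@dataclass
class Account:
    age: int
    parent_blocked_ai: bool = False  # the parental opt-out tool proposed

def recommend(candidates: list[Video], viewer: Account) -> list[Video]:
    """Drop AI-labelled videos for under-18s or opted-out families."""
    if viewer.age < 18 or viewer.parent_blocked_ai:
        return [v for v in candidates if not v.ai_generated]
    return candidates
```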

The initiative was organised by Fairplay and supported by organisations and experts, including Jonathan Haidt. The group says platforms must ensure content is safe and appropriate for children.
