UK’s Ofcom prioritises child protection and AI moderation under Online Safety Act

Expanded Ofcom enforcement targets AI moderation, online hate, deepfakes, and child protection measures.

The UK’s Ofcom has outlined its main online safety priorities for 2026–27, signalling tougher oversight of digital platforms under the Online Safety Act. The regulator said it will continue focusing heavily on child protection while expanding enforcement against illegal hate speech, terrorism-related material, intimate image abuse, and AI-generated harms.

The regulator confirmed that more than 100,000 online services now fall within the scope of the legislation, creating major compliance and enforcement challenges. Ofcom said it will continue investigating platforms that fail to prevent harmful or illegal content, while also preparing new rules linked to additional UK legislation covering cyberflashing, non-consensual intimate imagery, and generative AI services.

Ofcom stated that major online platforms have already introduced broader age verification measures under regulatory pressure. Services including gaming, dating, social media, and pornography platforms have implemented stronger age checks and child safety protections.

Furthermore, the regulator said it will expand supervision of large technology companies and publish updated safety codes later this year, including guidance on AI-powered moderation systems.

According to Ofcom, future compliance work will increasingly focus on the effectiveness of platform moderation systems rather than relying solely on reactive content removal. The regulator also plans to strengthen protections for women and girls online through new technical standards designed to block the spread of non-consensual intimate images and sexual deepfakes at scale.