UK enforces age checks to block children from harmful online content

New laws are forcing tech platforms to verify users' ages, using methods such as facial recognition or credit card checks, to block children from harmful content.

The United Kingdom has introduced new age verification laws to prevent children from accessing harmful online content, marking a significant shift in digital child protection.

The measures, enforced by media regulator Ofcom, require websites and apps to implement strict age checks such as facial recognition and credit card verification.

Around 6,000 pornography websites have already agreed to comply with the new rules, which stem from the 2023 Online Safety Act. Beyond pornography, the rules also target content related to suicide, self-harm, eating disorders and online violence.

Companies that fail to comply risk fines of up to £18 million or 10% of global revenue, whichever is greater, and senior executives could face criminal charges if they ignore Ofcom’s directives.

Technology Secretary Peter Kyle described the move as a turning point, saying children will now experience a ‘different internet for the first time’.

Ofcom data shows that around 500,000 children aged eight to fourteen encountered online pornography in just one month, highlighting the urgency of the reforms. Campaigners, including the NSPCC, called the new rules a ‘milestone’, though they warned loopholes could remain.

The UK government is also exploring further restrictions, including a potential daily two-hour time limit on social media use for under-16s. Kyle has promised more announcements soon, as Britain moves to hold tech platforms accountable instead of leaving children exposed to harmful content online.
