UK Online Safety Bill is now law: tech firms face new responsibilities as messaging services threaten to exit

To help tech firms navigate the new regulatory landscape, Ofcom is set to release its initial guidelines on November 9.


Following lengthy discussions, the UK’s Online Safety Bill has become law, shifting responsibility onto tech companies for the content on their platforms. While aimed at making the internet safer for children, the law has stirred controversy, with messaging services like WhatsApp considering leaving the country due to concerns over message security.

The law introduces rules to protect children from harmful material, gives enforcement powers to the regulator Ofcom, and creates new offences, such as cyber-flashing and sharing deepfake pornography. It also enables bereaved parents to obtain information about their children from tech firms. Violations could result in fines for tech companies and even prison time for their executives.

Ofcom plans to publish guidelines to help companies comply with the new regulations, with the initial draft set for release on November 9. Campaigners consider the law a vital step, but critics argue it fails to address concerns about misinformation.

Why does it matter?

The provisions allowing authorities to compel messaging services to inspect encrypted messages for child abuse content have sparked significant controversy. To ease tensions, the government has stated that Ofcom will only require tech companies to access messages once ‘feasible technology’ becomes available. With the bill now law, however, the debate over its passage is effectively closed. The focus must shift to evaluating its effectiveness in addressing the growing problem of online child sexual abuse, as recently documented by the National Society for the Prevention of Cruelty to Children (NSPCC) and the WeProtect Global Alliance.