UK moves to curb AI-generated child abuse imagery with pre-release testing
UK ministers will allow authorised testing of AI models for child abuse risks under new law, as reports of AI-generated imagery surge and campaigners call for mandatory safeguards.
The UK government plans to let approved organisations test AI models before release to ensure they cannot generate child sexual abuse material. The amendment to the Crime and Policing Bill aims to build safeguards into AI tools at the design stage rather than after deployment.
The Internet Watch Foundation has recorded 426 reports of AI-generated abuse imagery this year, up from 199 in 2024. Its chief executive, Kerry Smith, said the move could make AI products safer before they are launched. The proposal also covers extreme pornography and non-consensual intimate images.
The NSPCC’s Rani Govender welcomed the reform but said testing should be mandatory to make child safety part of product design. Earlier this year, the Home Office introduced new offences for creating or distributing AI tools used to produce abusive imagery, punishable by up to five years in prison.
Technology Secretary Liz Kendall said the law would ensure that trusted groups can verify the safety of AI systems, while Safeguarding Minister Jess Phillips said it would help prevent predators from exploiting legitimate tools.
