UK government to publish tests to determine new AI laws

The results of these tests could trigger legislative action to ensure the UK can keep pace with the risks of AI.


The UK government is about to publish a series of tests that will determine when and how to legislate on AI. The tests are intended to clarify the criteria under which new laws governing the technology would become necessary.

The tests cover scenarios such as major AI developers failing to adhere to their commitments to develop safe systems, or the UK's newly created AI Safety Institute failing to identify risks in a new application that then proliferates after its release.

Why does it matter?


The UK government’s approach to AI regulation has been described as ‘pro-innovation’, seeking to support safe and responsible AI innovation without putting excessive burdens on the industry.
This approach was outlined in a white paper published in March 2023, titled 'A pro-innovation approach to AI regulation.'

The paper sets out five cross-cutting principles that underpin the UK's AI regulation: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. The UK's approach to AI regulation is also characterized by a commitment to establish AI regulatory sandboxes, which would let innovators test new technologies against regulatory requirements in a controlled environment.


The upcoming tests are expected to be published as part of the consultation process for the government's AI white paper, which began in March 2023. The results of these tests could trigger legislative action to ensure the UK keeps pace with the risks of AI. However, in contrast to the EU's comprehensive approach with the AI Act, the UK government has previously stated it is in no rush to legislate on AI in the short term, preferring to give the industry time to develop and innovate unburdened.