Researchers call for rethink on UK AI regulation

The researchers recommend investing in pilot projects to improve government understanding of AI, clarifying AI liability laws, establishing an AI ombudsman, and expanding the definition of AI safety.


The Ada Lovelace Institute has released a report titled ‘Regulating AI in the UK,’ calling on the UK Government to reevaluate its current AI regulation proposals. The report emphasises the importance of effective regulation in fostering the country’s future AI economy and addresses concerns related to public safety and trust in AI, including data-driven or algorithmic social scoring, biometric identification, and the use of AI in law enforcement, education, and employment.

The report presents 18 recommendations, shedding light on the limited legal protections available to citizens seeking recourse for discriminatory AI decisions. It also advises against excessive focus on speculative AI risks and advocates close collaboration with developers to address potential harms.

The report highlights the importance of clearer guidance and external review in ensuring ethical AI practices. However, the scarcity of AI assurance professionals and the constantly evolving AI landscape make it challenging to anticipate and manage emerging risks.

Furthermore, the report examines the UK Government’s existing plans and offers suggestions for improvement.