Report explores algorithmic bias in UK policing

In a report commissioned by the UK Centre for Data Ethics and Innovation, the Royal United Services Institute examined the use of data analytics and algorithms in policing in England and Wales, and the different types of bias that can arise from these activities. The report is based on consultations with police forces, civil society organisations, academics, and legal experts.

It notes that the use of data analytics and algorithms in policing brings benefits but also significant risks. Using algorithms to make predictions about future crime and offending, for example, raises considerable legal and ethical questions, particularly around the risk of bias and discrimination. Such bias could take the form of discrimination on the grounds of protected characteristics, real or apparent skewing of the decision-making process, or outcomes and processes that are systematically less fair to individuals within a particular group. Independent and methodologically robust evaluation of trials is key to addressing these risks and to demonstrating the accuracy and effectiveness of a tool or method used by the police.

The report also identifies a need and desire within police forces for clearer national guidance and leadership on data analytics, as well as for legality, consistency, scientific validity, and oversight. It therefore recommends establishing a code of practice for algorithmic tools in policing. The code should set out a standard process for the development, trialling, deployment, monitoring, and evaluation of algorithmic tools, and should address the need to comply with legal and ethical requirements throughout their development and use. It should also outline clear roles and responsibilities for scrutiny, regulation, and enforcement, and establish processes for independent ethical review and oversight to ensure transparency and accountability.