San Francisco to use AI to prevent bias in prosecutions

Authorities in San Francisco, USA, plan to use an artificial intelligence (AI) tool to mitigate the risk of bias in decisions on whether to prosecute suspects. The ‘bias mitigation tool’ is intended to address racial bias in the legal system and thus prevent people from being prosecuted on the basis of that bias. Applied to the documentation a prosecutor reviews, the AI tool will mainly redact information in a police report that could identify a suspect’s race: explicit descriptions of race, hair and eye colour, neighbourhoods, and names of people that could indicate an individual’s racial background. It will also hide information identifying specific police officers, to avoid biased decisions by prosecutors who know the officers involved. Expected to launch in July, the system will use computer vision algorithms to recognise words and replace them with generic alternatives, such as ‘location’ and ‘officer 2’.
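
The placeholder-substitution idea described above can be sketched in a few lines. The actual tool’s implementation is not public, so the following is purely illustrative: it uses a fixed list of example patterns, whereas a real system would rely on trained models to recognise identifying terms.

```python
import re

# Hypothetical example patterns only -- NOT the real tool's rules.
# A production system would detect these terms with trained models,
# not a hand-written list.
REDACTIONS = {
    r"\bMission District\b": "[location]",
    r"\bOfficer\s+Smith\b": "[officer 1]",
    r"\bOfficer\s+Jones\b": "[officer 2]",
    r"\b(?:black|white|hispanic|asian)\b": "[race]",
}

def redact(report: str) -> str:
    """Replace identifying terms with generic placeholders."""
    for pattern, placeholder in REDACTIONS.items():
        report = re.sub(pattern, placeholder, report, flags=re.IGNORECASE)
    return report

print(redact("Officer Smith stopped a black male near the Mission District."))
# -> [officer 1] stopped a [race] male near the [location].
```

The key design point is that redaction is a pure text-to-text transformation: the prosecutor sees a readable report in which identifying details are swapped for neutral labels, rather than blacked out entirely.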