AI experts ask governments to introduce algorithmic impact assessments

In a report released by artificial intelligence (AI) experts from the AI Now Institute, governments are urged to conduct algorithmic impact assessments (AIAs) for the automated decision systems they use as part of their activities. Citing examples of AI-driven systems used in criminal justice, predictive policing, and the optimisation of energy use in critical infrastructure, the authors note that many such systems operate as 'black boxes', without proper scrutiny or accountability. To better understand how these systems work and what impacts they could have, public authorities are advised to introduce an AIA framework based on the following main elements:

- conducting a self-assessment of existing and proposed automated decision systems, evaluating potential impacts on fairness, justice, bias, or other concerns across affected communities;
- developing meaningful external researcher review processes to discover, measure, or track impacts over time;
- providing public notice that discloses the authority's definition of 'automated decision system', as well as its existing and proposed systems;
- soliciting public comments to clarify concerns and answer outstanding questions; and
- providing enhanced due process mechanisms allowing affected individuals or communities to challenge inadequate assessments, or unfair, biased, or otherwise harmful system uses that authorities have failed to mitigate or correct.