Socio-technical approach needed to mitigate bias in AI, NIST report argues

In a recently published report titled Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, the US National Institute of Standards and Technology (NIST) argues that machine learning processes and data are not the only sources of bias in artificial intelligence (AI). While computational and statistical sources of AI bias are important, human and systemic biases are relevant as well. ‘Systemic biases result from institutions operating in ways that disadvantage certain social groups, such as discriminating against individuals based on their race. Human biases can relate to how people use data to fill in missing information, such as a person’s neighbourhood of residence influencing how likely authorities would consider the person to be a crime suspect.’ The report argues in favour of a socio-technical approach to mitigating bias in AI and introduces guidance for addressing three key challenges in bias mitigation: datasets, testing and evaluation, and human factors.