Accountability for human rights: Mitigate unfair bias in AI

13 Nov 2018 09:00h - 10:30h

Event report

[Read more session reports and live updates from the 13th Internet Governance Forum]

Many fields of human endeavour are increasingly impacted by artificial intelligence (AI). While this offers benefits in advancing human rights and the sustainable development goals (SDGs), it also poses threats. Accountability remains a crucial factor in the wider use of AI. State and non-state actors should focus on testing models using the UN human rights impact assessment, and on preserving human oversight of the machines. In addition to explaining AI, we should focus on understanding the nature, origin, and use of the input training data, as it often reflects historical errors in decision-making.

The session moderator, Mr Bernard Shen, Assistant General Counsel – Human Rights, Microsoft, opened the session, which addressed the benefits of AI in advancing human rights and the SDGs. Shen also invited the panel to comment on the negative impact of the irresponsible use of AI, especially in the public sector.

Ms Sana Khareghani, Deputy Director and Head of the Office for Artificial Intelligence, UK Government, named increased productivity and better interaction among public service sectors as some of the benefits of AI for the public. Khareghani pointed out that automated decision-making often helps remove the bias that humans inherently introduce.

Mr Scott Campbell, Senior Human Rights Officer at the United Nations Human Rights Office (UNHR), remarked that AI has a wide array of uses for the SDGs, and that the UN struggles with prioritising where to focus its energy first. AI helps civil society access information and reach stakeholders, thus increasing its impact. As access to information empowers people, AI is particularly useful in advancing SDG 5 on gender equality and SDG 10 on reducing inequality.

From the civil society perspective, Ms Wafa Ben-Hassine, Policy and Advocacy Lead for the Middle East and North Africa region at Access Now, echoed Campbell’s point that AI facilitates freedom of expression and the right to information in local environments. She added that for a successful application of AI, state and private actors have to be held accountable. ‘Keeping people in the loop in terms of every decision that AI makes’ is important, Ben-Hassine said.

Mr David Reichel, of the European Union Agency for Fundamental Rights (FRA), said that his agency gathers evidence in the European Union (EU) on aspects of AI and compares data in order to provide expertise on fundamental rights issues. ‘AI can help detect structural problems in human rights violations’, Reichel said. He added that 25% of large companies in the EU use AI, and identified discrimination and unfair treatment as the issues most often picked up by AI.

The first part of the discussion unpacked how AI helps in detecting discrimination in human decision-making. Khareghani mentioned the Amazon case, in which the company discovered that its recruiting algorithm was biased towards hiring men, thus exposing an internal issue. We should not expect machines to be better than humans, she argued, but should use them to detect this type of practice. This is called an AI audit, which, if supplemented by a discussion on the nature of the datasets and algorithms, can point humans towards fixing their own biases. Ben-Hassine stressed that in high-risk sectors such as criminal justice, healthcare, and border control, there should always be human oversight.
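To make the idea of an AI audit concrete (the panel discussed the concept, not code), below is a minimal Python sketch of one common audit step: comparing a model’s selection rates across groups. The data, the group labels, and the four-fifths threshold are illustrative assumptions, not details from the Amazon case.

```python
# Minimal sketch of one AI-audit step: comparing selection rates across
# groups in a model's hiring decisions. All data here is illustrative.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by the highest ('four-fifths' rule)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model outputs, for illustration only
audit_log = [("men", True), ("men", True), ("men", False),
             ("women", True), ("women", False), ("women", False)]

rates = selection_rates(audit_log)
for group, rate in rates.items():
    print(f"{group}: selected {rate:.0%} of the time")
if disparate_impact_ratio(rates) < 0.8:  # common heuristic threshold
    print("Potential bias flagged for human review")
```

An audit like this does not prove discrimination on its own; as the panel noted, it points humans at where to look in the data and the decision process.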

Reichel stressed that when companies gather datasets, a lot of the personal information is likely to fall under the protected attributes listed in Article 21 of the EU Charter of Fundamental Rights. Differences in protected attributes within a dataset can create a very different value base, in which AI will treat groups differently.
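As a rough illustration of Reichel’s point, the sketch below screens a dataset for features that correlate strongly with a protected attribute and could therefore act as proxies for it, so that a model treats groups differently even when the attribute itself is removed. The column name `gender`, the threshold, and the encoding are assumptions for the example, not FRA methodology.

```python
# Sketch: screening for features that may act as proxies for a protected
# attribute. Column names and the threshold are hypothetical.
import pandas as pd

def proxy_candidates(df: pd.DataFrame, protected: str = "gender",
                     threshold: float = 0.5) -> list[str]:
    """Return columns whose absolute correlation with the (integer-encoded)
    protected attribute exceeds the threshold. A crude first-pass screen."""
    encoded = df.copy()
    # Encode categorical columns as integer codes for this rough screen
    for col in encoded.select_dtypes(include="object"):
        encoded[col] = encoded[col].astype("category").cat.codes
    corr = encoded.corr(numeric_only=True)[protected].drop(protected)
    return corr[corr.abs() > threshold].index.tolist()

# Usage with a hypothetical HR dataset:
# flagged = proxy_candidates(hr_df, protected="gender")
# Each flagged column deserves human review before the data trains a model.
```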

The second part of the session discussed practical steps to detect incorrect applications of AI, transparency and accountability, and the roles of private and state actors in this. Ben-Hassine singled out the human rights due diligence framework as a useful tool. It focuses on building explainability and accountability into the use of such systems, and on taking proactive steps when deploying AI. Khareghani emphasised that states have the responsibility to set the right principles and parameters and to allow non-state actors to use them. Widespread use of AI is desirable, and for this, trust in the technology is crucial.

Ms Layla El Asri, Research Manager at the Microsoft Research Lab, Canada, added that trust is built by testing models, reducing the uncertainty of predictions, and building a relationship between the human user and AI.
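One standard way to operationalise this kind of model testing is ensemble disagreement: if several independently trained models disagree on an input, the prediction is routed to a human. The sketch below is a generic illustration, not a description of Microsoft Research’s methods; the models and the review threshold are assumptions.

```python
# Sketch: flagging low-confidence predictions via ensemble disagreement.
# `models` can be any objects exposing a scikit-learn-style predict_proba.
import numpy as np

def ensemble_uncertainty(models, X):
    """Return mean class probabilities and their std across the ensemble.
    A high std means the models disagree and a human should review."""
    probs = np.stack([m.predict_proba(X) for m in models])
    # Both results have shape (n_samples, n_classes)
    return probs.mean(axis=0), probs.std(axis=0)

# Usage (models trained on, e.g., bootstrap resamples of the data):
# mean_p, std_p = ensemble_uncertainty(models, X_test)
# needs_review = std_p.max(axis=1) > 0.2  # hypothetical review threshold
```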

In order to help users assess AI predictions, several recommendations were made: applying multi-task learning, using the UN human rights impact assessment, preserving human oversight, thoroughly discussing the nature of the input data, and increasing literacy so that users can understand the predictions.

In the end, the panellists agreed that since AI is a broad field, an interdisciplinary approach is crucial. Campbell noted that the work of the UN High-Level Panel on Digital Cooperation is a good step in this direction.


By Jana Mišić