Applying human rights and ethics in responsible data governance and artificial intelligence

27 Nov 2019 09:00h - 11:00h

Event report

[Read more session reports and updates from the 14th Internet Governance Forum]

The session addressed the question of implementing ethics and human rights principles in the development of artificial intelligence (AI), through three guiding questions. First, what responsible AI should look like; second, whether existing legislative frameworks and guidelines are enough; and third, how to implement transparency and accountability in the development of AI.

In recent years, many organisations and groups (such as the Organisation for Economic Co-operation and Development, the European Commission’s High-Level Expert Group on AI, and the Institute of Electrical and Electronics Engineers) have developed guidelines and principles to drive the development and use of ethical AI. As explained by Ms Olga Cavalli (Academic Director, South School on Internet Governance), these frameworks have multiple elements in common, which can be clustered into three main categories: human-centric AI, responsible AI, and implementation mechanisms. Human-centric AI focuses on an approach that puts the human at the centre, guaranteeing human dignity, safety, and fairness in the algorithmic development of AI. Responsible AI covers approaches that seek greater accountability, responsibility, and transparency in the development of explainable AI. Finally, the third category identifies mechanisms that tackle current gaps and empower users and the workforce with appropriate skills. This approach stresses the importance of multistakeholderism and co-operation among different initiatives, especially with regard to privacy, safety, and security.

Addressing the question of what trustworthy and responsible AI should look like, different definitions were proposed on philosophical, technical, and anthropological grounds. Responsible AI should be human-centric and in line with the principles of inclusivity, robustness, and responsibility, as argued by Mr Yoichi Iida (Director for International Research and Policy Coordination, Global ICT Strategy Bureau, Ministry of Internal Affairs and Communications, Japan). Recalling the European Commission’s Ethics guidelines for trustworthy AI, Ms Sarah Kiden (Ph.D. student, University of Dundee) complemented this definition by highlighting that robustness should not ignore the social context. Ms Lisa Dyer (Director of Policy, Partnership on AI) and Mr Mina J. Hanna (Chair, IEEE-USA Artificial Intelligence and Autonomous Systems Policy Committee, IEEE) approached the question from a philosophical angle. As Hanna explained, responsible AI can be recognised through a derivative approach: responsible AI is the opposite of irresponsible AI. If responsibility for a decision that could harm someone does not fall under that person’s own control, the decision would be defined as irresponsible; responsible AI means the exact opposite. According to Ms Peggy Hicks (Director at the United Nations Human Rights Office), AI should additionally be transparent, understandable, and accountable. Mr Augusto Zampini Davies (Theologian, Dicastery for Promoting Integral Human Development, Vatican) stressed the dichotomy between the anthropological (intelligence) aspect of AI and the technological (artificial) one: trustworthy AI should bring these two elements together. Complementing this framing, Ms Carolyn Nguyen (Director of Technology Policy, Microsoft) noted that the term ‘artificial’ puts the emphasis on the wrong aspect: AI should be approached as a means for delivering benefits, and as part of the solution rather than the problem. Interestingly, these definitions have little to do with the technology itself, and more with how humans approach it.


Trustworthy AI also means that proper consideration is given to ensuring respect for human rights. The conversations on ethics and on human rights in AI are not separate: for instance, the principles proposed for responsible AI by the IEEE in its Ethically Aligned Design report are based on a human-centric and human rights-based approach, as well as on agency and political self-determination.

Economic and geopolitical aspects also influence the development of responsible AI, as highlighted by Mr Vladimir Radunović (Director of E-diplomacy and Cybersecurity Programmes, DiploFoundation). Companies need to implement human rights impact assessments and due diligence in their AI development; leading by example would also allow them to gain a competitive advantage from this responsible approach to AI development.

Are the existing human rights legal instruments and ethical frameworks effective in ensuring trustworthy and responsible AI? According to Hicks and Nguyen, there is no need for further guiding frameworks or regulatory developments. Nevertheless, it is crucial to effectively harmonise the implementation of existing frameworks in order to identify possible gaps (i.e. aspects that are not sufficiently addressed) and ways forward.

As data is the fuel of AI, data governance issues should be part of the discussion around trustworthy AI. We need to look at how data can be shared, including what biases are being perpetuated. As Davies pointed out, the datasets used by developers ‘come from the past’ and bring their biases and problems with them.

What is the responsibility of the different actors when it comes to responsible AI? According to the session’s participants, companies and the technical community bear the main responsibility. But other actors should be involved in this debate as well, such as journalists, academics, and funders of AI research and development. Building on this framework, Radunović also highlighted the role that users play in creating AI market demand, and the potential to shape such demand in a more responsible way.

The potential and limitations of AI were demonstrated at the end of the session: IQ’whalo, an AI-driven coffee machine, contributed an intervention generated automatically (via neural networks) on the basis of transcripts of sessions that tackled AI at previous IGF meetings. Participants were left to decide whether this AI contribution was valuable or not.

A few questions were left open: Can AI also benefit the less powerful groups? Can we identify more principles that guide us towards an inclusive society that counters inequalities? Who should fund the role of civil society organisations and journalists in training and research processes?

By Stefania Grottola