Human rights & AI wrongs – Who is responsible?

27 Nov 2019 15:00h - 16:30h

Event report


The session focused on the impacts of artificial intelligence (AI) on human rights, the need for regulatory frameworks, the role of AI principles, and the ethical design of trustworthy AI.

Mr Jan Kleijssen (Director, Information Society – Action against Crime, Council of Europe) introduced the background papers for the session: the Council of Europe’s 2019 study A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework, and the 2018 Draft Recommendation of the Committee of Ministers to member States on the human rights impacts of algorithmic systems (Addressing the impacts of algorithms on human rights).

He noted that in May 2019, following the Helsinki Conference on Artificial Intelligence, 47 governments agreed to start working on legal frameworks for the development and application of AI. To that end, an ad hoc Council of Europe Committee on Artificial Intelligence (CAHAI) was created, holding its first meeting in November 2019 in Strasbourg. CAHAI is the first international effort to move beyond ethical frameworks for AI and focus instead on establishing a legal framework to ensure that AI is a force for good.

All session participants agreed on the importance of the Helsinki Conference on AI.

Mr Joe McNamee (Member of the Council of Europe Committee of Experts on Human Rights Dimensions of Automated Data Processing and Different Forms of AI (MSI-AUT)), who participated in the preparation of both the 2019 study and the 2018 Draft Recommendation, stressed that the two documents are the result of a multistakeholder process. He then turned the discussion to the perception of AI, stating that it is generally owned and implemented by the powerful. The preamble of the 2018 Draft Recommendation therefore examines this power imbalance and underscores the need to ensure that existing racial, gender, and other dimensions of social diversity are not deliberately or accidentally eliminated by AI systems.

McNamee also pointed out that the externalities of AI can cause real harm to real people, especially those least able to defend themselves. He called for regulations and guidelines to mitigate such impacts, stating, ‘It must not be profitable to cause harm. It must not be profitable to cause significant risk. And, we must radically accept the notion that some applications of technology are not acceptable in a democratic society’.

Continuing the discussion on the implications of AI for fundamental rights, Mr David Reichel (Social Research – Research & Data Unit, Fundamental Rights Agency (FRA)) noted that it is the state’s responsibility to be aware of possible human rights problems when using AI in public administration. For Reichel, the state must take a leadership role on human rights and develop safeguards and, where necessary, regulation. Citing FRA’s 2019 project on AI and fundamental rights in relation to facial recognition technology, he called for impact assessments of AI on human rights.

Speaking on measures to improve trust in AI, Ms Clara Neppel (Senior Director, European Business Operations, Institute of Electrical and Electronics Engineers (IEEE)) highlighted the principle of transparency. She stressed that it is important to define what such principles mean, determine how to achieve them, and prove that the requirements have been satisfied. The IEEE began putting these principles into practice four years ago and is currently working on technical standards, as well as ethical (impact) standards focused on ethical system design.

Taking up the issue of ethical principles for AI, Ms Cornelia Kutterer (Senior Director, EU Government Affairs, Privacy and Digital Policies, Microsoft) explained that Microsoft started developing its ethical principles three years ago. Its AI engineering group worked out how these principles can be implemented in practice, which resulted in the establishment of the Office of Responsible AI at Microsoft. The office is currently tasked with developing a comprehensive responsible AI life cycle by translating the AI principles into engineering guidance, supporting customers in the responsible deployment of AI tools, and developing policies and engagement mechanisms with stakeholders.

Overall, the participants agreed that facial recognition, especially the facial recognition of children, is the area most in need of regulation. They also discussed whether regulation can stifle innovation; McNamee argued that regulation is beneficial to innovation, as it creates a clear, predictable, and accountable framework within which everyone can operate.

By Pavlina Ittelson