Data Governance by AI: Putting Human Rights at Risk?

Report

Session:
Workshop 282


[Read more session reports and updates from the 14th Internet Governance Forum]

The session considered the human rights implications of deploying artificial intelligence (AI) technologies and addressed the link between data governance, AI, and human rights. Ms Marianne Franklin (Internet Rights Coalition) explained that the session would follow a highly interactive format based on multiple rounds of questions from the floor to the speakers. She asked the speakers to provide a definition of AI and to list the three most pressing issues they see at the intersection of AI research and development and its online deployment, particularly in light of human rights law.

The speakers offered different definitions of AI, ranging from the narrow understanding of machine learning and decision-making algorithms to the broader one of ‘digital intelligence’, which encompasses the technical infrastructure and the use of data. Mr Paul Nemitz (Principal Adviser at the European Commission, Directorate-General for Justice and Consumers) clarified that although the EU’s High-Level Expert Group on Artificial Intelligence has articulated a precise understanding of the technology, there is currently no legally binding definition of AI in the EU.

To launch the discussion, panellists considered the issues they deem most pressing in terms of data governance, AI, and human rights. Some mentioned the importance of addressing the transparency and accountability gap in the programming and functioning of algorithms. Ms Renata Avila (Director, Smart Citizenship Foundation) also warned the audience about AI’s possible distorting effects on equality (‘There is a risk that AI would augment inequality’) and about the automated manipulation of information, since algorithms now curate people’s access to information.

Speakers also stressed the importance of a human-centred AI, in the sense that it is important to assess the impact AI will have on individuals as rights holders. Mr Markus Beeko (Secretary-General, Amnesty International Germany) stressed the importance of ranking the impact that AI may have on human rights and, on that basis, defining different levels of regulation and oversight. Nemitz added that the upcoming challenge will be how to operationalise existing principles, such as data protection and privacy, in AI systems and regulations. Ms Jai Vipra (Research Associate, IT for Change) also raised concerns over AI’s impact on development and the climate.

Should the use of AI be banned for certain purposes or at least limited due to human rights risks?

There was broad consensus among the panellists that AI needs regulation, especially for those applications that threaten fundamental human rights. This is the case for lethal autonomous weapons systems (LAWS), which should be regulated through a legally binding instrument. Avila pointed out that some AI applications are already being discussed in other fora (e.g. in trade negotiations); it will therefore be difficult to develop global regulations limiting AI applications when some of them appear to be accepted in specific contexts. Nemitz also noted that the General Data Protection Regulation (GDPR) does not cover the technology itself; it is therefore possible to legally produce technology and then use it for illegal purposes (e.g. mass surveillance).

Since data is essential to machine learning, how do we measure and mitigate political, gender, and racial bias in data?

Although many regulations prohibit discrimination in, for example, the fintech and labour sectors, panellists recognised that data sets may contain biases depending on the population of reference. Vipra noted: ‘Although we cannot regulate the input that goes into the AI system, it is however possible to set standards on the output’. The speakers joined her in calling for states to preserve the policy space for public participation and discussion on these outputs. The discussion also turned to the limits of AI algorithms with regard to data sets: this technology cannot move away from the (biased) data it is fed.

Who should be held accountable for the decisions taken by the AI?

Speakers agreed that algorithms themselves cannot be held liable for the output they produce, but answers otherwise varied from speaker to speaker. A stronger approach argued for the liability of every person involved in the technology (producers, designers, consumers, investors, etc.). Another stance called for clearer national and international legislation in this regard. All agreed that, with reference to the Court of Justice of the European Union’s decision on the right to be forgotten, the presence of an automated system in the decision-making process does not shield a legal person from responsibility.

Are self-regulatory codes sufficient sources of regulation of AI or are oversight bodies necessary?

There was consensus among the panellists that AI regulation is now a matter of ‘how to regulate’ rather than ‘whether to regulate’. Speakers shared scepticism that self-regulatory mechanisms are sufficient to protect human rights: ‘History has shown that we would not have human rights if they were based on self-regulation’.

The discussion continued on the importance of having oversight bodies managed by public boards, and on whether a global policy and norm-making mechanism tied to the UN is needed to manage the governance of new technologies.

By Marco Lotti
