Artificial Intelligence, Justice and Human Rights

20 Sep 2017
|
Geneva, Switzerland

Event report:
Barbara Rosen Jacobson

This side event of the Human Rights Council’s 36th session, organised by the Permanent Observer of the Holy See Mission to the UN and other international organisations in Geneva and the Permanent Mission of the Principality of Liechtenstein in Geneva, discussed the potential impact of artificial intelligence (AI) on justice systems and human rights.

The panel was opened by Mr Eric Salobir, President of OPTIC, who emphasised that the link between justice and AI is not just found in science fiction, but has already been tested and employed in judicial systems.

In his opening remarks, H.E. Archbishop Ivan Jurkovic, Permanent Observer of the Holy See Mission, spoke about the importance of considering human dignity in discussions on AI, as well as the risk of machines substituting humans in certain key areas, such as education. H.E. Ambassador Peter Matt, Permanent Representative of Liechtenstein, explained that AI encompasses both opportunities and threats, especially in relation to the human rights to privacy and non-discrimination. He added that addressing these challenges effectively requires multistakeholder engagement.

Next, Prof. Pierre Vandergheynst, Professor at the École Polytechnique Fédérale de Lausanne, provided an introduction to AI and the way it could be applied to the judicial system. Although AI is not a new concept, it is mostly understood today as machine learning, powered by algorithms that are based on data. Ultimately, ‘whoever controls data, controls AI’. AI’s predictive power comes from its ability to model the reasoning from the raw data to the final outcome.
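The point that such a system is only as good as the data it learns from can be made concrete with a minimal sketch. The case records, features, and verdicts below are entirely hypothetical; the ‘model’ is nothing more than the patterns extracted from past examples, which is why ‘whoever controls data, controls AI’.

```python
from collections import Counter

# Hypothetical historical records: (offence_type, prior_convictions) -> verdict.
# A real system would learn from far richer data, but the principle is the same.
past_cases = [
    (("theft", 0), "acquitted"),
    (("theft", 2), "convicted"),
    (("fraud", 1), "convicted"),
    (("theft", 0), "acquitted"),
    (("fraud", 0), "convicted"),
]

def predict(features, history):
    """Predict the majority verdict among past cases with the same features."""
    matches = [verdict for f, verdict in history if f == features]
    if not matches:  # unseen case: fall back to the overall majority verdict
        matches = [verdict for _, verdict in history]
    return Counter(matches).most_common(1)[0][0]

# The prediction simply follows whatever the historical data says.
print(predict(("theft", 0), past_cases))  # prints "acquitted"
```

Change the historical records and the same code yields different ‘justice’: the model has no notion of the case itself, only of the data it was given.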

There are several examples of AI being reasonably accurate in predicting verdicts and risk assessments. Yet, decisions based on AI cannot easily be disputed, as the patterns discovered by AI cannot easily be interpreted or explained. If AI decisions are based on biased data, rooted in human judgement (such as previous verdicts), they risk disproportionately and negatively affecting certain population groups.

Prof. Louis Assier Andrieu, Professor at the Sciences Po Law School in Paris, and Research Professor at the National Centre for Scientific Research, provided a more in-depth analysis of the interplay between AI and legal traditions. According to him, both common and civil law are based on fictions that would be internalised by AI. Common law’s fiction is its assumption that legal decisions can be based on previous cases; yet, ‘one never enters the same river twice’. Civil law assumes that laws and codes encompass every imaginable case, and that abstract rules can be applied to a variety of cases. To address these fictions, it could be useful to look at more communal, non-Western forms of justice.

Assier Andrieu highlighted the fact that France is already experimenting with predictive justice using big data, to make institutions more rational and less dependent on human bias. However, judgement ultimately needs trust. With 93% of private practitioners in the USA fearing replacement by robots, ‘where is the trust in the making of algorithms and the predefinitions used?’ Can we trust AI to decide something as important as a legal judgement? Salobir added that we need to consider whether AI makes judgements based on causation or correlation, and whether it judges the individual or the group to which they belong.

Prof. Lorna McGregor, Professor and Director of the Human Rights Centre, University of Essex, concluded the panel discussion by relating AI to human rights. She explained that it is ‘crucial’ to understand our current and future environment in order to make sense of its human rights implications. AI could provide opportunities for making progress towards the Sustainable Development Goals by creating efficiency, cost-effectiveness, and improvements through disaggregated data. It can help allocate resources and predict crime.

AI can also generate risks for human rights, not only by creating privacy threats and facilitating surveillance, but also by creating inequalities and discrimination. While the big data on which AI is based is extensive, it is neither complete nor perfect. This imperfect data feeds algorithms and AI, and can ‘bake discrimination into algorithms’. As a result, human bias is ‘accentuated, rather than resolved’. Echoing Vandergheynst, she repeated that AI decisions cannot easily be challenged, and that judges and lawyers might not be sufficiently equipped to assess the accuracy of these decisions.
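How imperfect data ‘bakes discrimination into algorithms’ can be illustrated with a deliberately simplified sketch. The groups, counts, and decision rule below are hypothetical: if past decisions were skewed against group B, a model that merely learns historical rates reproduces that skew for every future case.

```python
# Hypothetical historical outcomes: (group, bail_granted). Past decisions
# favoured group A (80% granted) over group B (40% granted).
history = (
    [("A", True)] * 80 + [("A", False)] * 20 +
    [("B", True)] * 40 + [("B", False)] * 60
)

def learned_rate(group, records):
    """The grant rate the model 'learns' for a group from past decisions."""
    outcomes = [granted for g, granted in records if g == group]
    return sum(outcomes) / len(outcomes)

def model_decision(group, records, threshold=0.5):
    """Grant bail only when the learned rate clears the threshold."""
    return learned_rate(group, records) >= threshold

print(model_decision("A", history))  # True:  group A keeps its advantage
print(model_decision("B", history))  # False: group B's disadvantage is baked in
```

Nothing in the code mentions the merits of any individual case; the historical disparity alone determines the outcome, which is why such decisions accentuate, rather than resolve, human bias.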

McGregor concluded that international human rights law could provide a framework to address the risks posed by AI. We also need to consider the responsibility of states and business actors, as well as identify red lines where the risks are too great to proceed.

A side event on 'Artificial intelligence, justice and human rights' will be held on 20 September 2017, in the framework of the 36th session of the UN Human Rights Council. The event will be held between 09.30 and 11.00 UTC, in Geneva, Switzerland. It is co-organised by the Permanent Observer of the Holy See Mission to the UN in Geneva and the Permanent Mission of the Principality of Liechtenstein in Geneva.

The aim of the event is to bring the debate about artificial intelligence (AI) and its implications into UN circles, as AI becomes an important issue for the future of justice and human rights. Discussions will focus on bringing an ethical approach to the debate, while trying to provide some reflections on the pros and cons of the use of AI in the field of justice and human rights.

Some of the questions to be addressed include: What will be the impacts of artificial intelligence on the administration of justice and on the recognition and respect of human rights if we cannot ensure that automated reasoning is rational and transparent? Is the use of artificial intelligence in the justice sector likely to result in decisions taken without any human judgement? Could bias related to racial or ethnic background become standardised and be less likely to be questioned as racially motivated than if based on a human decision?
