Council of Europe: AI as a risk to enhance discrimination

18 Jun 2019 14:30h - 16:00h

Event report


Mr Frederik Zuiderveen Borgesius (Professor of Law, Radboud University) welcomed the participants and opened the discussion by asking which sectors carry the greatest risks related to the use of artificial intelligence (AI). He invited the panellists to share what role their respective organisations play in mitigating these risks, and what place victims have in these discussions.

Speaking from an industry perspective, Ms Meeri Haataja (CEO and Co-Founder, Saidot.ai) spoke about building citizen agency through transparency. According to Haataja, the focus should be on transparency, or the lack thereof, as this is the main cause of concern when discussing whether technology discriminates. Access to information is the basis for sparking public debate, and the community should advance the public review of these topics, particularly where public sector cases exist, as they are relatable. Haataja gave a successful example of public review in Finland, in which the public, helped by data scientists and civil society, audited an algorithm developed by a media company. The algorithm predicted party affiliation, and its accuracy improved from 40% to 60% after the public debate, thanks to the company’s willingness to disclose the algorithm it used.

According to Ms Ariane Adam (Legal Adviser – Freedoms and Justice Team, Law and Policy Programme, Amnesty International), there is a risk of discrimination in every area of our lives as the use of AI increases. Adam discussed the case of the ‘Gangs Matrix’ in the United Kingdom, in which the London Metropolitan Police created an AI-based database of potential criminals. ‘There was a lot of inconsistency about the way that data was fed into the matrix’, Adam emphasised. She added that the national data protection agency should be involved in cases concerning breaches of privacy. Adam stressed that rolling out new technology before it has been fully tested, especially on a large scale such as in the public sector, is very damaging. She reminded the room that while the General Data Protection Regulation (GDPR) has had the spotlight lately, Conventions 108 and 108+ are the only legally binding international instruments for data protection. There is a public perception that machine-made decisions are neutral, objective, and precise. However, there is a great need for more awareness of the human bias behind the development of AI.

Ms Kirsi Pimiä (Non-Discrimination Ombudsman, Finland) agreed that there are no sectors where AI is absent. The Non-Discrimination Ombudsman receives complaints on the basis of which actors in both the private and public sectors can be held liable for discrimination, or may be given guidance, among other measures. The office also carries out mediation, working particularly with the private sector and promoting the interests of those affected. Finland has a separate Data Protection Ombudsman, and with the adoption of the GDPR, the new national data protection law has opened possibilities for tackling discrimination caused by AI processes. Pimiä described a case her office took to the National Non-Discrimination and Equality Tribunal in 2017, in which an individual felt discriminated against by a credit company. The Tribunal concluded that the credit company had engaged in discrimination; the case shows that AI affects not only vulnerable or marginalised groups but all of us. Reflecting on the victims of AI discrimination, Pimiä said that there should be a close connection between the public sector and the Ombudsman in communicating and explaining the reasons behind decisions, risks, and solutions.

Mr Menno Ettema (Programme Manager, Inclusion and Anti-Discrimination, Council of Europe (CoE)) noted that one of the objectives of the panel was to discuss the report on discrimination, AI, and algorithmic decision-making. From the CoE perspective, the concern is how to uphold the protection of human rights and the rule of law when AI touches so many lives, both directly and indirectly. The question arises as to what legal tools are available, and what role AI governance and the multistakeholder approach should play. The CoE is developing several tools, and one of the main challenges is the blurred line between public use and private development. How much agency does the public sector have over these new services? Does our existing legislation provide the appropriate tools? Ettema remarked that some areas of AI remain under-researched, such as filter bubbles, which can exclude marginalised groups from online communities even further.
 

By Jana Misic