AI and Discrimination - Whose problem is it?

Session ID: Pre-event 30

[Read more session reports and updates from the 14th Internet Governance Forum]

Algorithmic decision-making has the potential to improve people's lives, but it may also pose a threat to fundamental human rights if implemented without careful analysis. This session focused on mapping what is currently being done to address the ethical challenges of using artificial intelligence (AI) to automate decision processes, and on detecting existing gaps. The main challenge raised was a general lack of expertise, even within the organisations that need to deal with the issue.

Several stakeholders are currently addressing the issue of automated decision-making. Ms Milla Vidina (Policy Officer, European Network of Equality Bodies (EQUINET)) and Mr Robin Allen (Barrister, Queen's Counsel) highlighted the importance of National Equality Bodies (NEBs) for safeguarding equality in AI-driven technologies. NEBs are independent public institutions set up to promote equality across Europe.

EQUINET is currently working on building the capacity of NEBs. For instance, it has conducted research to provide an overview of the implications of AI systems, it is developing a checklist for assessing equality and non-discrimination, and it has compiled a list of issues that NEBs could address in the European context. However, significant gaps remain in tackling discrimination in automated decision processes, even among experts. For example, a survey of NEBs showed that 70% of them were not working on defining best practices for avoiding discrimination in AI.

What are the most urgent new measures needed to prevent further issues? Overall, participants emphasised the general lack of awareness of the scale and depth of AI-driven discrimination. Little research is being done on this topic, and not enough resources are dedicated to tackling the problem. Participants also expressed a desire to promote literacy among all stakeholders, and suggested lines of action for governmental bodies, industry, and civil society. For example, they proposed both the creation of compulsory public registries of algorithms and new transparency requirements for the way automated decision-making algorithms are applied to reach final decisions.

To conclude, Allen read an excerpt from an interview with Sundar Pichai (CEO, Google) in the Financial Times, in which Pichai said that AI required smart regulation that balanced innovation with protecting citizens. Allen then added, 'there are areas where we need to do the research before we know what are the right kinds of approaches we need to take'.

By Paula Szewach
