Human vs algorithmic bias – is unbiased decision-making even a thing?

Session ID: WS7

Resource type: Event reports

Author: Katharina Höne

The session was moderated by Mr Aleksandr Tyulkanov (Special Adviser on Digital Development, Council of Europe). The speakers were Mr Karthikeyan Natesan Ramamurthy (Research Staff Member, IBM Research AI), Ms Ekaterina Muravleva (Senior Research Scientist, Skolkovo Institute of Science and Technology), Mr Zoltán Turbék (Co-Chair of the CAHAI Policy Development Group, Council of Europe), Mr Daniel Leufer (Europe Policy Analyst, Access Now), and Ms Hiromi Arai (Head of AI Safety and Reliability Unit, RIKEN Center for Advanced Intelligence Project).

Overview of machine learning (ML)

The discussion began with a quick overview of machine learning (ML) and how ‘learning’ works in the context of ML, particularly in contrast to human learning. Speakers highlighted the need for training data and the implications of potential bias in this data: unbalanced data leads to unbalanced predictions. The black-box nature of many ML algorithms raises further concerns when ML results are applied to decision-making. In addition, it is important to realise that bias can occur at various steps in the creation of an ML system, from data collection to model design and deployment. The concern is that biases in ML can lead to violations of human rights and other principles highly valued by societies, and policies are needed to mitigate these risks. This, of course, is not to say that human decision-making is, or can ever be, free of bias. On the contrary, humans are evolutionarily wired to take certain perspectives and focus on certain things rather than others.
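The point that unbalanced data leads to unbalanced predictions can be made concrete with a minimal sketch. The example below is illustrative only (the labels, counts, and the trivial majority-class model are invented for this sketch, not taken from the session): a model fitted to data where one outcome dominates can score reasonably on overall accuracy while never predicting the minority outcome at all.

```python
from collections import Counter

# Hypothetical, imbalanced training labels: 95 "approve" vs. 5 "deny".
train_labels = ["approve"] * 95 + ["deny"] * 5

# A deliberately naive model: always predict the most frequent
# label seen in training.
majority_label = Counter(train_labels).most_common(1)[0][0]

def predict(_applicant):
    return majority_label

# Evaluate on a balanced test set of 50 "approve" and 50 "deny" cases.
test_labels = ["approve"] * 50 + ["deny"] * 50
predictions = [predict(None) for _ in test_labels]

accuracy = sum(p == y for p, y in zip(predictions, test_labels)) / len(test_labels)
deny_rate = predictions.count("deny") / len(predictions)

# accuracy is 0.5, but deny_rate is 0.0: the minority class has
# effectively disappeared from the model's output.
```

Real ML systems are far more complex than this majority-vote caricature, but the failure mode is the same in kind: the imbalance in the training data reappears, often silently, in the predictions.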

Decision-making and ML

Decision-making can benefit from ML inputs. It is, however, important to ensure that decision-making, especially on sensitive topics or issues with far-reaching implications, is not based solely on ML results. In that sense, ML results should be only one input into decision-making, and final decisions should remain with humans. Further, constraints, safety mechanisms, and audit mechanisms need to be in place to alert decision-makers and those affected about bias that emerges in the use of AI systems. In addition, it is crucial to work towards greater transparency and explainability of AI systems involved in decision-making. Public databases listing the AI systems and data in use should be considered, as should bans on the use of certain high-risk, high-harm AI systems.

Equality and fairness

Discussions around bias and harm mitigation in ML systems also involve discussions about equality and fairness. These concepts, however, carry strong cultural connotations, and different societies have answered questions about them in somewhat different ways. This means that, while equality and fairness are important principles for addressing bias and harm, it is not easy to reach intercultural agreement on some of their aspects. Overall, this signals the need to discuss what kind of society we want to live in.

Regulation and self-regulation

On the issue of regulation, self-regulation by the private sector is important. However, some speakers argued that it is ultimately not enough. Recommendations on the ethical use of AI (e.g. UNESCO's) and various regulatory efforts (e.g. the EU's) already exist, but it is also important to ensure that these efforts complement each other. In this sense, greater cooperation among the various stakeholders is needed to create synergies.