Workshop 1: AI & non-discrimination in digital spaces: from prevention to redress
13 May 2025 07:30h - 08:30h
Session at a glance
Summary
This workshop focused on AI and non-discrimination in digital spaces, exploring prevention, mitigation, and redress of algorithmic discrimination. Experts discussed how AI systems, increasingly used in various sectors, can infringe on equality rights. They emphasized that AI is not neutral but shaped by existing biases and power structures.
The panelists highlighted the importance of involving impacted communities in AI design and development processes. They stressed that AI alone cannot solve inequality, as it is inherently based on existing societal inequalities. The discussion touched on the challenges of addressing intersectional discrimination, both in law and in AI systems.
Participants identified several barriers to effective AI regulation, including lack of transparency, limited access to data, commercial secrecy, and insufficient funding. They emphasized the need for mandatory impact assessments, stronger powers for equality bodies, and improved accountability mechanisms.
The role of equality bodies was discussed as crucial for access to justice, with calls for adequate funding and expanded powers. Experts stressed the importance of a multi-stakeholder approach involving civil society, private sector, equality bodies, and affected communities to address discrimination effectively.
The discussion concluded that while some steps are being taken to collaborate with regulatory bodies, more needs to be done to prevent, mitigate, and provide redress for AI-driven discrimination. Participants agreed that a global, collaborative effort is vital to fully address discrimination in all its forms, particularly intersectional discrimination.
Key points
Major discussion points:
– The limitations of using AI to solve inequality and discrimination
– The need for meaningful involvement of impacted communities in AI design and regulation
– Challenges in addressing intersectional discrimination through AI and regulation
– The role of equality bodies and other regulators in tackling AI-driven discrimination
– Barriers to effective AI regulation and enforcement, including lack of transparency and funding
The overall purpose of the discussion was to explore how to ensure AI systems do not infringe on or amplify discrimination and inequality, focusing on prevention, mitigation, and meaningful redress.
The tone of the discussion was thoughtful and critical, with speakers highlighting the complexities and challenges involved in addressing discrimination through AI and regulation. There was a sense of urgency about the need to tackle these issues, but also caution about over-relying on technological solutions. The tone became more solution-oriented towards the end as speakers discussed concrete steps for improvement.
Speakers
– Minda Moreira: No specific role mentioned
– Milla Vidina: Leads Equinet’s work on AI and algorithmic discrimination
– Menno Ettema: No specific role mentioned
– Mher Hakobyan: Amnesty International’s Advocacy Advisor on AI Regulation, leads advocacy work at the Algorithmic Accountability Lab of Amnesty Tech
– Ayça Dibekoğlu: Works at the Hate Speech, Hate Crime and Artificial Intelligence Unit of the Council of Europe
– Louise Hooper: Human rights barrister, expert of the Council of Europe’s Committee on Artificial Intelligence, Equality and Non-Discrimination
– Robin Aïsha Pocornie: Computer scientist specializing in algorithmic discrimination and bias
Additional speakers:
– Eino Etema: Head of unit at the Council of Europe (mentioned but did not speak)
– Linda Ardenghi: Trainee at the Council of Europe
– Somaya: Associated with YouThink
Full session report
AI and Non-Discrimination in Digital Spaces: Prevention, Mitigation, and Redress
This workshop, organized by the Council of Europe, explored the complex intersection of artificial intelligence (AI) and non-discrimination in digital spaces, focusing on prevention, mitigation, and meaningful redress of algorithmic discrimination. Experts from various fields, including human rights, equality bodies, and computer science, convened to discuss the challenges and potential solutions in ensuring AI systems do not infringe upon or amplify discrimination and inequality.
Key Themes and Discussions
1. Limitations of AI in Addressing Inequality
A central theme that emerged was the inherent limitations of AI in solving inequality. Robin Aïsha Pocornie, a computer scientist specialising in algorithmic discrimination, argued forcefully that “AI cannot fix inequality because it is inherently made by the inequalities that are already existing. So it cannot fix what it is based on.” This perspective challenged the common assumption that AI can be a neutral solution to societal problems.
Other speakers, including Louise Hooper and Milla Vidina, echoed this sentiment, agreeing that AI alone is insufficient to address inequality and may even perpetuate existing biases. Pocornie emphasized the importance of considering non-use of AI as a potential solution in some cases.
2. Importance of Community Involvement
There was strong consensus on the critical importance of involving impacted communities in AI design, development, and regulation processes. Robin Aïsha Pocornie emphasised that community expertise should be valued equally to technical expertise, arguing for meaningful participation that gives communities decision-making power. She highlighted specific examples of community-based initiatives, such as the Distributed AI Research Institute (DAIR) and the Indigenous Protocol and AI Working Group.
Mher Hakobyan, from Amnesty International, supported this view but highlighted the challenges in engaging diverse communities due to competing priorities. He also stressed the need to demystify conversations around AI to make them more accessible to various communities.
This discussion prompted reflection on traditional notions of expertise in AI, with Hakobyan noting, “We often in this kind of fora, you know, we give priority to people who have technical expertise and a lot of professionalized kind of expertise, so to say, but we don’t think of a person that has the lived experience as an expert.”
3. Challenges in Regulating AI Discrimination
Several barriers to effective AI regulation were identified:
– Lack of transparency and explainability in AI systems (Louise Hooper)
– Difficulty in detecting and proving algorithmic discrimination (Louise Hooper)
– Commercial secrecy and lack of access to data (Menno Ettema)
– Insufficient funding and resources for regulators (Milla Vidina)
Louise Hooper, a human rights barrister, emphasised the need for mandatory impact assessments and stronger powers for equality bodies. She also highlighted the inadequacy of court responses to proven algorithmic discrimination.
4. Role of Equality Bodies and Regulators
The discussion highlighted the crucial role of equality bodies in addressing AI-driven discrimination. Milla Vidina, representing Equinet (the European Network of Equality Bodies), stressed the importance of collaboration between different regulatory bodies, including those focused on data protection, consumer protection, and competition law. She emphasized the need for equality bodies to have adequate resources, expertise, and legal powers to effectively address AI-related discrimination.
Vidina also suggested the creation of public facilities to assist regulators, academics, and civil society in investigating and testing algorithms. This proposal aimed to address the technical challenges faced by equality bodies in assessing AI systems.
Mher Hakobyan noted that support from equality bodies adds legitimacy to civil society advocacy efforts. This multi-stakeholder approach was seen as essential for effectively tackling discrimination in AI systems.
5. Addressing Intersectional Discrimination
The complexity of addressing intersectional discrimination, both in law and in AI systems, was a significant point of discussion. Louise Hooper highlighted that “Intersectional discrimination is incredibly complicated. I think it’s not dealt with properly in law. There’s a lot of resistance in states to introduce legal prohibitions on intersectional discrimination before we even get to any problems in terms of detection and resolving intersectional discrimination in AI.”
Milla Vidina suggested the need to move beyond a grounds-based approach in non-discrimination law, while Mher Hakobyan called for expanding research to include less studied forms of discrimination. Vidina also mentioned recent progress in addressing intersectionality in law, citing examples such as the Pay Transparency Directive and the equality bodies directives.
6. Mentimeter Questions and Responses
Throughout the session, participants engaged with Mentimeter questions, providing valuable insights into the challenges and potential solutions discussed. These interactive elements helped to gauge the audience’s perspectives on key issues related to AI and non-discrimination.
Key Messages
Minda Moreira concluded the session by summarizing key messages, including:
1. The importance of critically assessing whether AI solutions are truly necessary before implementation
2. The need for meaningful involvement of impacted communities in AI design and regulation
3. The crucial role of equality bodies in addressing AI-driven discrimination, and the need for strengthened powers and increased funding
4. The importance of addressing intersectional discrimination in both law and AI systems
5. The need for ongoing dialogue, research, and action from diverse stakeholders to address the complex challenges of AI and non-discrimination in digital spaces
As AI continues to permeate various sectors of society, the urgency of addressing these challenges becomes increasingly apparent, requiring a global, collaborative effort to fully address discrimination in all its forms, particularly intersectional discrimination, in the context of AI systems.
Session transcript
Minda Moreira: Welcome to the workshop AI and non-discrimination in digital spaces from prevention to redress. I’m just going to read the session rules and then give the floor to Aïsha. So please enter with your full name if you’re online. To ask a question, raise your hand using the Zoom function. You will be unmuted when the floor is given to you. When speaking, switch on the video, state your name and affiliation. Do not share links to the Zoom meetings, not even with your colleagues. Thank you very much. Aïsha, the floor is yours.
Ayça Dibekoğlu: Thank you, Minda Moreira. Good morning to everyone in Strasbourg and good morning to those joining online. My name is Ayça Dibekoğlu and I work at the Hate Speech, Hate Crime and Artificial Intelligence Unit of the Council of Europe, which is within the anti-discrimination department of our organization, where among many areas of work, we focus on strengthening human rights and equality in the context of emerging technologies. I’m delighted this morning to be moderating this session together with my head of unit, Eino Etema, in which we bring together expert voices from across academia, civil society, the tech community and public institutions. Today we’re here to explore a critical and urgent question: how can we ensure that AI systems, which are now increasingly used in law enforcement, in welfare and employment and in health, do not infringe or amplify discrimination and the right to equality? AI is often presented as neutral or objective, but in reality, it’s shaped by data, it’s shaped by assumptions and power structures that go deep within its design and its use. This session is called, as Minda Moreira mentioned, From Prevention to Redress, AI and Non-Discrimination in Digital Spaces, and the idea is to push beyond the surface. We’ll explore what prevention, mitigation and meaningful redress should actually look like in practice and who gets to shape those outcomes. You’ll hear from a brilliant panel of experts, and we will be inviting you, the audience, among whom are also experts who are actively working in this area, to actively contribute. Thank you all for your contributions through the interactive polls and questions that we will be presenting to you via Mentimeter. Please remember that this is a conversation, not just a panel. After short interventions from each panelist, we’ll move on to Mentimeter questions, and we hope that they will be as thought-provoking to you as they were to us. And after various answers, we’d like to encourage you to further elaborate on your answers and explore topics that we haven’t discussed, to make it as effective and beneficial as possible. So, let’s dive in. I would like to first give the floor to Robin Aïsha Pocornie, who is a computer scientist who specializes in algorithmic discrimination and bias. And you might also recognize Robin as the first person who established case law around algorithmic discrimination in the Netherlands after challenging a discriminatory facial recognition algorithm. Robin, drawing both from your personal experience and also your broader work that you’re currently doing on fairness and bias, we’d love to hear how you approach these challenges and possibilities of preventing, mitigating, and addressing algorithmic discrimination. The floor is yours.
Robin Aïsha Pocornie: Thank you so much, Aïsha, for the great introduction. I’m happy to be here. Hi, everyone. My name is Robin Aïsha Pocornie, and I work on the intersection of data-driven technologies and the implications that they have on people, especially from a race, gender, and class income perspective. I work from approaches that are anti-techno-solutionist, which means that complex social problems cannot be fixed by technology alone, especially if these problems are made by the technology. An example of this is data-driven facial detection software that cannot recognize people of a certain skin color. In order to fix that, it’s not only a techno-solutionist perspective that can fix this problem. And I work from the approach of radical reform, also known as non-reformist reform, which indicates that community gain is more important than individual gains. And we do this by optimizing community work and going to the communities that are being harmed, instead of looking at it from an outside perspective, looking in, trying to fix problems that we’re not personally or at least as a community related to. The three approaches for prevention of AI harms that I work with within my consultancies, or advise different clients about, are that community always goes above the technology, that the community that’s being harmed is the expert, and the integration of diverse knowledge. This means that technological knowledge, such as computer science or data-driven knowledge at a higher education level, is more accepted than community-based expertise, so real lived experiences. And how we do this — this is already being done — is that small communities all over the world, for example in Singapore, there is an impact-driven community that does work around mitigation of AI harms by going directly from a prevention point of view. What they do is that they actually look at whether technological non-use, so looking at whether technology is needed or not, is actually the first question being asked instead of creating and developing these technologies. This doesn’t mean that AI cannot help or support those in need, it just means that we look further back before deploying and implementing technologies that could potentially cause harm, and not have mitigation as a reactive way, but a responsive way, by looking beforehand before deploying.
Ayça Dibekoğlu: Thank you very much, Robin. So now I’d like to give the floor to Louise Hooper, a human rights barrister with over 20 years of experience focused on human rights, human dignity and equality. Besides her consultancy work, Louise currently serves as an expert of the Council of Europe’s Committee on Artificial Intelligence, Equality and Non-Discrimination, which is abbreviated as GECCI-DADI. Louise, drawing on your legal expertise and your work in international standard setting, how do you see the role of law and policy frameworks in shaping responses to algorithmic discrimination? The floor is yours.
Louise Hooper: Thanks. Good morning, everybody. So the first thing that I’d like to talk about is what AI systems are and do, in terms of being models and systems that capture patterns from data in a model. And data comes from real world processes, which means humans. And humans are inherently flawed beings. We are discriminatory by nature. And data that we get from our historical processes is very often dirty, in terms of we’re not all consistent about the way that we collect or input data into models or systems. At the moment, we, by which I mean generally older white men with political or economic power, are more and more reluctant to accept discrimination is bad or take steps to ensure equality exists in society. And you can see this in things like, for example, aggressive attempts to dismantle equality, diversity and inclusion programs. So all of this feeds into how we regulate, what we’re regulating and the social approach to law. And against this background, algorithmic discrimination itself is difficult to detect. So who’s being discriminated against? Was it a mistake? Was it by design? It’s difficult to prove. We can’t access data. We don’t understand the systems. We have black boxes, with nobody understanding what algorithms are doing; we don’t know how decisions are made, what influences them. And overarching all of this, access to justice is costly, it’s time-consuming and it can be very difficult for individual victims. In the context of AI and ADM, the opacity of systems, the information and power asymmetry between the deployer and subject, the lack of capacity to monitor group effects or to compare yourself to another person, the inability to access this information and an absence of transparent, meaningful information can preclude proper assessment of discrimination or prevent legal action from being started. And it’s there where I think regulation comes in. We have two attempts that I’d like to just touch on. One is the Council of Europe Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. And the other is the EU AI Act. Primarily, the framework convention is directed to and governs state actions, not the actions of companies. It places the responsibility on states to regulate, to govern providers and deployers of AI. It solely focuses on human rights, rule of law and democracy rather than other issues, and it’s not yet in force. Article 10 is the key provision in respect of equality and non-discrimination, which includes not just a negative obligation, but also, interestingly, a positive obligation for AI to be used to overcome inequality, which we may talk about in terms of techno-solutionism later on. It also has a whole series of effective procedural guarantees, safeguards and rights designed to enable people to litigate if necessary, and requires effective oversight mechanisms. The EU AI Act, by contrast, is directed to and governs states, but also providers and deployers of AI. So it’s directly relevant to providers and deployers and can be directly relied on by individuals to protect and enforce their rights. It’s built on product safety principles rather than on human rights law. There are some criticisms of the approach of determining what AI is risky before you determine what AI is risky, which I find quite complicated. And then it sets out a series of requirements for both providers and deployers to comply with.
Compared with some of the new European legislation on equality bodies, there are some really significant new powers for authorities responsible for protecting fundamental rights. These are set out in Article 77 and in the new directives on equality bodies. And for time reasons, I’m going to stop there.
Ayça Dibekoğlu: Thank you, Louise. And lastly, our final panelist is joining us online. Hi, good morning, Mher. I see you online. I would like to now give the floor to you, Mher Hakobyan, Amnesty International’s Advocacy Advisor on AI Regulation. Mher leads the advocacy work at the Algorithmic Accountability Lab of Amnesty Tech and brings experience from the European Disability Forum, Equinet and the Council of Europe. Mher, drawing from your advocacy work in the Algorithmic Accountability Lab and across these institutions, how can we build truly meaningful multi-stakeholder participation in shaping AI policy? And what does it look like in practice when it comes to tackling discrimination? The floor is yours.
Mher Hakobyan: Thank you. Thank you very much. I’m very happy to be here, as I mentioned in the coordination meeting we had last week. I started my professional journey at the Council of Europe in Armenia, so always happy to be back. So thank you for the invitation. I will dive right into the question, just maybe also a bit presenting the team that I’m in currently, which is one of the Amnesty International’s tech and Human Rights Program teams called the Algorithmic Accountability Lab. And the way we do our work, I think, is really fitting to the question that you asked, because we’re a multidisciplinary team that covers under the broad umbrella of the automated state. We do our work basically focusing on the AI use in the public sector, so by public authorities in the areas of like social protection, policing, migration, border surveillance, military use of AI, etc. And where, of course, when there’s public and private partnerships involved, we look at that as well, but mainly our focus is the use by public authorities. The way we do approach our work in terms of both research, advocacy, litigation work, media and communications work is focusing on collaborating with impacted communities, with other civil society organizations, and networks based in Brussels that represent different community organizations, so a lot of membership organizations that have members in different EU states, focusing on migration, focusing on disability rights, focusing on LGBTI rights. So it’s different networks. We try to connect to the membership on the ground. This is when we do advocacy, but in terms of research, when we do research in specific countries, such as in Serbia, in Denmark, also supported investigations in Sweden, in the Netherlands, these processes are always with collaboration with local organizations that represent different communities that are impacted by AI systems. For example, a research we did in Denmark that you might know about, which looked at welfare protection scheme use of a fraud detection algorithm that heavily discriminated and surveilled migrant communities, people with disabilities. We worked with disability rights organizations, with organizations that work on racial justice, on representing migrant communities. So I think this is really important in terms of also making sure that communities are equipped and are the ones leading the work to challenge a lot of the harms that the systems pose. Similarly, in litigation work, for example, in France, we are engaged in an initiative that is led by local or civil society, digital and human rights organizations, challenging the social security agencies, national family allowance fund, use of fraud detection algorithm, which is also allegedly discriminatory. So there is a court case filing that has called for the system to be stopped. So I think this collaborative approach with organizations that represent directly communities is very important. I think maybe when we also talk about redress and remedy measures, we need to also be mindful that we are not talking about it in an isolated manner, but we think of the system, the ecosystem that is around AI, starting from the conceptualization to the development to deployment. And in that sense, I think to ensure effective redress and remedy, we need to also ensure that the transparency and the public accountability measures are set in place. 
And even going back, I think what Robin said really resonates with me as well: sometimes we need to also think if a certain type of AI system is actually necessary, and if it’s the appropriate solution in the context. Then we need to draw clear red lines on things that shouldn’t be deployed. So I think in some sense, a ban could be the prevention in itself and the appropriate mitigation measure. So we don’t need to wait until the harm occurs and then think about what the redress and remedy process is to mitigate that. I think I will stop here, but really looking forward to the continued discussion.
Ayça Dibekoğlu: Thank you, Mher. Sorry, at some point, your screen got a bit smaller on our screens. Thank you to each of you for your interventions and powerful insights. And now it’s time to hear from the audience, from you. I’d like to hand over to Menno, who will guide us through a short series of Mentimeter questions. If you have your phones out, you can scan the QR code or enter the code on Menti.
Menno Ettema: Great. It always takes a moment before the screen comes up. Yes, to open the eyes, we want to launch a little bit. You can type in on your phone or laptop the URL menti.com, and you have the entry code. You’ll see that code on every screen that’s coming up. Or you can take a moment to use your phone to scan the QR code so you can enter. Don’t close the tab after the screen goes, because we will use the Mentimeter for the rest of the session. So then I go forward to the first question, just to go back to a point that was made by a few. The question we want to ask is, can you solve inequality with AI? Yes, it’s technically possible. Yes, it’s a good way to address inequality. No, it’s technically not possible. Or no, we must solve inequality differently. So what are your thoughts on this? I’ll give you a few minutes or a few seconds to complete. I see that 52 people registered, that’s lovely. And we’re seeing the answers coming in. So I’ll give two more seconds. Okay. So what we’re seeing at the moment is that a large part believes that AI cannot solve inequality and it must be solved differently. There are also those who think that it’s a good way to address inequality. Technically not possible is the least of the concern. And some nine people mentioned that technically it’s possible. But the largest part, nearly half, say that we cannot solve inequality through AI. Connected to that, we have our next question. What are the main barriers to effective AI regulation for tackling discrimination and bias? We got a few answers and you can… How do you call it? Rank. Rank, thank you, Ayça. Rank the various answers. You can also skip. So if you think one is the major issue, then you put that as first, and then the second or third reasons, because they could all be relevant. But you can also skip if you don’t think they are a concern to the effectiveness of AI regulation to tackle discrimination and bias. Thank you. It takes a little bit longer because you have to rank and move the things. Feel free to answer, but I will summarize what we see. The responses are quite equal: no one agrees what discrimination is, so that’s a barrier to effective AI regulation. And it’s also already too little and too late, which is an interesting response, and maybe that invites some further discussion. Also, there’s quite a strong assumption that AI is the answer, so the question is if this is indeed the case. There’s some finger-pointing to big tech, and then there are some remarks that it has to be global to be effective. Okay, I’ll just pinch in one more question. Referring a bit to the first speaker, do current approaches to AI take group-based discrimination seriously? So, this is a yes, no, maybe question, so let’s go a bit quicker. Now, you can still vote, but I see a tendency towards no: current approaches to AI do not take group-based discrimination seriously, though there are also some maybes. So I think this is an interesting reflection point as well. There are a few more questions, but maybe we stop here and give the floor back to you, Ayça, for some reflections from the audience.
Ayça Dibekoğlu: Thank you, Menno. So we have more questions, as Menno mentioned, but we’ll take it in a step-by-step approach to have discussions in the middle before finishing all of the questions. And thank you for all your responses. It’s clear that there’s quite a lot of insight and experience in the room. I’m also quite surprised at some of the answers as well, and I’d like to further discuss. And let’s now open the floor for a broader discussion. We’d like to invite you and our panellists to respond, whether that’s building on what we have answered in the questions or picking up on something that hasn’t yet been fully explored. And please feel free to raise a hand to ask a question and also comment in the chat. And let’s use this time to dig in deeper. I’d like to actually first ask whether Equinet has any input on this. Here on the panel, we have Milla Vidina. Milla leads Equinet’s work on AI and algorithmic discrimination. I think the first three questions specifically relate quite heavily to inequality and some thought-provoking questions on this. So I’d be curious to hear your insights on this, Milla. Thank you.
Milla Vidina: Thank you for inviting us. So for those of you who wonder what Equinet stands for, it’s not equestrian and horses. It’s a network of national equality public authorities. And our network seeks to empower and bring the collective voices of 47 such institutions across Europe, from 35 European states. So equality bodies are anchored in EU law, national law as well, but we go broader than that. So all Council of Europe member states, to my knowledge, have equality bodies. So why are we here? Equinet has been working on non-discrimination for six years. And your first question — can AI solve inequality? — is one we are trying to tackle head-on, as in literally working with software engineers and data scientists for the past year and a half in the context of technical standardization. Maybe you heard that under the European Union AI Act, compliance would be based upon conformity with a set of technical requirements. Those technical requirements are embodied in technical standards. Those are the same standards that you have for a light bulb or for a toy. And then you have that CE, safe use marking that you see across Europe. Well, this is how the high-risk AI systems under the AI Act would be certified safe. Now, the problem is that part of the AI Act, especially Annex 3 to the AI Act, includes human rights, I deliberately won’t use this, like with human rights critical AI systems. So how would engineers know how to evaluate risks to human rights? And how do they certify it as being safe? And mind you, they self-certify. This makes it even more problematic. So what we as equality bodies have been trying to do is answer this question — okay, can AI solve inequality? — by engaging with industry at this technical standardization forum, which is extremely resource intensive, unrepresentative of our society, and with a high entry barrier in terms of how much time we had to invest. We got a small pot of public money from the UK, not even the European Union, and we’re dealing with EU legislation; that only lasted for a year and a half, and most of us for now are working pro bono. But in that project, basically, we worked with industry representatives, most of them data scientists, software engineers, and cybersecurity specialists, to see whether in those technical standards — it’s a set of standards, around 10 — we could embed equality safeguards. Can you solve the inequality question, for example? How do you select a fairness measure or a debiasing method that could more effectively tackle potential inequality aspects, if you have to assess risks when you design and develop systems? So most of this is about design and development. Some of this is deployment, but according to the AI Act, even the deployment kind of preparation is done prior to release on the market. So most of the conversation is about what we anticipate, what we foresee as risks to equality. How can you foresee risks to equality? And could an engineer make this assessment? So we’ve been trying to see how to set up the human oversight mechanism. So each technical organization that has to certify a system, they have a designated person to assess those risks to human rights. Do they have the necessary competence? Do they have interdisciplinary teams? How do you include affected communities in that? What is a way — because we talked about participation and it’s all fine, but when you talk to engineers, where do you actually make it mandatory for them to include people?
Well, there is a concept of design engineering with software specialists, and there, there is a consultation, you know, those entry points purely within the technical community. And then when you want to validate a product, as a condition for validation, you include stakeholder input — you know, in the product safety branch, there are such things as testing products and consulting with consumers. Only now we are not talking about consumers, we are talking about affected persons. So long story short, based on that experience, yeah, as expected, AI cannot solve inequality. What it could do — and this is why we want to hopefully find the means of financial support to continue to be engaged with technical standardization — is minimal prevention. So there are some technically feasible measures which you could implement at design and development that could give you documentation, logging, more transparency. You cannot completely resolve transparency and explainability — it’s a contested concept within the engineering community — but more transparency. So basically you’re getting leverage and information to contest and to enforce afterwards. So this is a minimum, we hope, and maybe some prevention and mitigation, insofar as you want the designers — sorry, the developers — when they have documentation, to be able to say, okay, why did I use this fairness metric and not that fairness metric? And how was the fairness metric appropriate to the outcome, right? So you want that reason giving and justification and also tracing decision-making in the way a product is developed. So this is important for accountability, right? At what point, who took the decision to use that over that? Who was overseeing the human — so the human oversight mechanism? Who signed off product validation? Those things. And so in that way, it could give us some leverage, but ultimately, it’s costly and time-consuming policy changes and legal changes that would get us there. And I would end with an example. We were discussing recruitment algorithms. And there is literature, there is some evidence that human bias in recruitment is worse, whatever that means, than computer bias, right? So you could argue that there is more neutrality. Further to that, you can also tinker with the fairness metrics so that you actually use the algorithm to implement a kind of positive action. So we choose preferentially, let’s say, more women. But then the question becomes, who comes to that recruitment place? Who shows up for that particular position? Even if we have women, would it be the single mothers from an immigrant background? So in this question of broader access and participation, the structural and systemic dimensions are something that I think could only be addressed through costly and slow, as I said, policy and legal processes. And I do not believe, just to link to another question, I don’t think that we necessarily have a problem with group-based discrimination, but we have a problem with the systemic and societal dimension. Because from our experience with technical standardization — and you could see this in the AI Act, in the Annex that outlines the technical documentation that all companies that have to prove compliance with the AI Act have to provide — there you have a group. They specifically mention that you have to provide data on anticipated impact on persons and groups. So groups are okay, but that broader systemic effect is really what we are fighting for. And I’ll stop, it was a very long intervention.
Sorry, lots of material accumulated over the years.
Ayça Dibekoğlu: Thank you so much, Milla. Any questions from the audience? Yes, please go ahead.
Audience: Good morning and thank you for the very interesting presentation. I would like the speakers to elaborate a little bit more on intersectional discrimination.
Ayça Dibekoğlu: I’m very sorry, I’m just going to interrupt you because I forgot to mention, could you briefly say who you are as well before you ask your question? Thanks.
Audience: Of course. Hi, I am Linda Ardenghi, I’m a trainee here at the Council. And yes, thank you for the very interesting presentation. I wanted to ask the speakers to elaborate a little more on the concept of intersectional discrimination. In particular, if they think that from a technical and legal perspective this issue can be addressed because while intersectional discrimination by AI is being named in most of the documents concerning discrimination by AI and issues of accountability for algorithmic discrimination, it is not really addressed in that. It is, from what I noticed in my personal research, it is always named but never really focused on. So if you could elaborate more on this, thank you. Other questions?
Ayça DibekoÄlu: Any questions online perhaps? No? Okay. Is there one of you that would like to take up the floor first? I think that finger was pointed at me. So I think intersectional discrimination is incredibly
Louise Hooper: complicated. I think it’s not dealt with properly in law. There’s a lot of resistance in states to introduce legal prohibitions on intersectional discrimination before we even get to any problems in terms of detection and resolving intersectional discrimination in AI. So you start with we don’t understand, most people don’t properly understand what intersectional discrimination is. There’s great reluctance at the Court of Justice of the European Union to deal with it. There is an increasing recognition in the ECHR and Council of Europe documents about intersectional discrimination. It’s looked at in the context of the Istanbul Convention on Violence Against Women, for example. So that’s my starting point. I think the second point on that is that it becomes very difficult to quantify. And one of the big issues with AI and tech-based both solutions and problems is the focus on data and what you can prove and what you can’t prove. And I think starting from the perspective that statisticians know you can prove anything you like, depending on how you look at the data, you then have even bigger problems looking at something like intersectional discrimination. And I just want, within that context, want to touch on the way that I think about group-based and individual discrimination and why it’s difficult, and I understand very difficult, to be able to produce a system that is fair for both individuals and for groups. And that’s because of the way that data is analyzed within a system. You can have something that’s producing the right decision on the evidence that it has, but that ultimately produces a right decision that has very significant consequential impact. Thank you very much for the introduction, I’m Minda Moreira, Menno Ettema, Ayça DibekoÄlu, Robin Aïsha Pocornie, Louise Hooper, Mher Hakobyan, Mher Hakobyan, Ayça DibekoÄlu, Robin Aïsha Pocornie, Louise Hooper, Mher Hakobyan, Mher Hakobyan, Ayça DibekoÄlu, and some of which may not be. So I think that also feeds into issues around intersectional discrimination.
Ayça Dibekoğlu: Thank you, Louise. Would any of the panellists like to respond, or would you like to…
Robin Aïsha Pocornie: I also think that it is important to note that intersectional discrimination as it’s defined right now is not seen as a root — like, the root cause of discrimination outside of the technical sphere is not considered. It’s seen as an add-on and not a bug that immediately needs fixing. So when we look at the development of… I as well develop AI, I develop algorithms. So not only do I have the community-based perspective of people that are impacted by it, but I also make them. And within our pipeline we do not consider any intersectional discrimination or any types of human-based discrimination as an integral part of mitigation within the development pipeline. What is considered, for example, is privacy or data set gaps. That’s stuff that you consider from the get-go. How do you clean your data? How do you use it? Which type of model are you going to use and how are you going to validate it? That’s all based on technical model efficiency rather than the eventual or potential… It’s a potential harm it could have on people, and that’s why intersectional discrimination, in the literature that you’ve read, is oftentimes sort of cited as an add-on or something after the fact. And I think that’s important to recognize and acknowledge, because if we don’t focus on that as an integral part of the development process, and even take a step back and look at the people where this algorithm is going to be deployed — what do they have to say about it and what alternatives are possible — until then it cannot be fixed. So AI cannot, in my opinion, cannot fix inequality because it is inherently made by the inequalities that are already existing. So it cannot fix what it is based on.
Ayça Dibekoğlu: Thank you very much, Robin. I would like to give the floor back to Menno to present our next set of questions.
Menno Ettema: Thank you. We go back online. Yes, there you go. For those who closed their browser, you can still see on the top the hyperlink and the code that you need to use. So we go to the next question, which goes: most AI regulation focuses on harms after they happen. What would real prevention of discrimination look like in practice? This is again a ranking system. So you could put those that you think are most important on top and then order the rest. You can also skip things you think are not relevant. I see two responses came in, and there we go. So mandatory impact assessments is an answer, involving impacted communities in design, assessing whether AI is needed at all, stronger powers for equality bodies, or other measures. And I’m particularly curious — or we are particularly curious — what other measures you think could be taken. 21, 23. Okay, I see still a few people responding, but it’s quite evenly split. So there’s a few answers that are popping up, but quite equally: assessing whether AI is needed at all, which I think the panelists also spoke about a few minutes ago. Stronger powers for equality bodies — I think this is, for me, a good response. Others, which I’m very curious about what they mean. And then the least amount of responses: involving impacted communities in design and mandatory impact assessments. And maybe that’s also worthwhile to reflect further on, what these actually are and what this looks like, among the panelists. Okay, so others is the biggest, so curious what that is, and assessing whether AI is needed at all and equality bodies are the biggest responses. We have one more question, if I remember correctly. No, sorry, two more. What are the main barriers to effectively enforcing regulation? The panel already addressed a few concerns or a few things that are really needed. But what do you think? Lack of transparency, is that a barrier? Explainability, access to data and training sets, commercial secrecy, or money and funding? What are the main barriers to effectively enforcing regulation? Okay. So here we see a trend of quite equal answers. So there are equal concerns when it comes to lack of transparency, access to data and training sets, money and funding — funding for whom, I might want to ask. Commercial secrecy and explainability are also mentioned. So they’re all quite equally raised as barriers to effectively enforcing regulation. We go to the next question. And this is an open question. So when we had other, I was curious what you had in mind. So this is an open question: how would you reduce AI-driven discrimination? What are your thoughts, insights, ideas? I invite you to keep it short. Don’t write an essay, but maybe have some bullet points, thoughts on what could be done that we could further discuss in the remainder of the session. So we get the first responses: proper use cases, reduce social discrimination, human rights, impact assessments, big tech, transparency reporting on detecting biases and improvements, equality bodies and multi-community mandates. It’s a designing process, mechanisms of effective accountability, ensuring that companies have skin in the game, non-discrimination by design, extra control, more equals.
Equal society that is informed, empathy and using of the tool, literacy and using of the tool, integration of all steps in AI elaboration, checking and control, organizational accountability, challenging ideas that solutions lie in AI, feedback, accountability, mandatory impact assessments, consulting during the design, more funding to support consultation processes, proof of data sets, including affected communities, taking their viewpoints into account, impact assessments, collect diversity and equality data, inclusive data sets, community participation in design. So these are some of the responses that are coming in. Human rights based algorithmic design, etc. I leave this a little bit open, but I want to give the floor back to Ayça to continue the process.
Ayça Dibekoğlu: Thank you, Menno. We have received quite a lot of responses to the word cloud. If you’d like to explain your answers further and bring up any points that you mentioned… Okay, yes, please go ahead.
Audience: Thank you very much. My name is Somaya. I’m with YouThink. Should I speak louder? So my name is Somaya, I’m with YouThink, if I can participate in the discussion. I think that we have to solve the problem of discrimination in real life; then we can solve it in AI. As the expert explained, she writes code, and we cannot write good and fair code and algorithms if the discrimination is already there — what is already happening in real life. For example, for recruitment, as you said before, there is a lot of discrimination in the process of recruitment, so how can we write good code that is not discriminating against communities or a group of people or a class? So I think we should work in real life so that we can assess this in the codes and algorithms. Thank you very much.
Ayça Dibekoğlu: Thank you, Minda Moreira. I would also like to ask Mher if he would like to intervene at this point, to further explore and respond to any of the questions.
Mher Hakobyan: Thank you. Actually, I wanted to answer the last question we had on intersectionality, but I failed to find my raise-the-hand function in time. If it’s okay, I can just maybe go back to that question.
Ayça Dibekoğlu: Of course, please.
Mher Hakobyan: Thank you. Yeah, I just wanted to say that in terms of our work, it’s been important to also try to expand the way we look at things and the harms that AI can cause, because I think there is a lot of research and knowledge accumulated when it comes to, for example, racial discrimination and gender-based discrimination, but we often lack knowledge about discrimination that impacts people based on, for example, disability, on socioeconomic grounds, etc. At the lab, we’ve tried to kind of push those boundaries for ourselves in some of the research we’ve done in the past years. We’ve also tried to speak to communities like disability rights communities or other rights communities. So I think sometimes it’s intuitive for researchers, for example, to just go based off the knowledge that exists, and we can often reinforce and advance the knowledge that is there, but we cannot know what we don’t know, right? So I think sometimes, even if we don’t see examples out there in the media or already documented, we need to maybe push some of the boundaries of our thinking towards expanding how we see discrimination. And that can then help also address the gaps that we have in intersectional discrimination. And in the aspect of addressing it through advocacy, I think it’s also been quite challenging for us to engage with a broader range of rights holders, because oftentimes organizations that represent different communities have so many urgent issues already that their communities are facing. So sometimes when we speak of AI, they don’t necessarily see the direct connection that technology could have to the different rights that are being violated. For example, again, with disability rights, people face accessibility barriers, or there is a huge issue with institutionalization of people with disabilities. So engaging the disability rights networks has also been difficult sometimes; they’re not fully engaged in advocacy because they have issues that are, you know, for them much more urgent at this point. But also there is the issue of people feeling intimidated by the conversations around AI. And I think we need to do a lot of work to demystify also how we speak about AI. I think Robin also mentioned the point about expertise, like who is considered to be an expert. We often in this kind of fora, you know, we give priority to people who have technical expertise and a lot of professionalized kind of expertise, so to say, but we don’t think of a person that has the lived experience as an expert. And I think these are limitations that we need to address to be able to actually go broader and address intersectional discrimination more effectively.
Ayça Dibekoğlu: Thank you, Mher. Are there any questions from the audience? If not, I’d like to actually pose two further questions myself to the panel. My first question actually is to Robin, because it relates to some of the points of your intervention. We had a question on whether current approaches to AI take group-based discrimination seriously, and we touched on the idea of involving impacted communities in the design process, but I wanted to ask how we secure the meaningful involvement of these communities. I think it’s easier said than done; how does it truly work in practice?
Robin Aïsha Pocornie: That’s actually a really good question, because it’s already being done. There are communities who have created working groups to educate and inform larger regulatory bodies on who gets to decide what gets to be developed, deployed and implemented. Two good examples of this are the Distributed AI Research Centre — research institute, sorry. They combine community-based work with evidence-based technological expertise, but it’s always from a community-based expertise perspective. What that means is that usually a large regulatory body will go out and collaborate with communities, but the end decision stays with the larger regulatory body. And in this case, they ensure that the community-based entities are actually the end decision makers. And another good example is the Canadian initiative called the Indigenous Protocol and AI Working Group. These are indigenous people who actually create the regulatory, what we would call regulatory information, from an indigenous perspective, as they are currently the group of people being harmed by AI the most. And I think it bears mentioning that this is a very unpopular perspective still, even within a very large regulatory body building as we are in today. It’s a very unrepresented perspective because, like we said before — it has been mentioned many times before today — expertise is seen as a professional thing. You have to have some sort of technical education, higher education, but a lived experience needs to be seen as an expert experience and an expert seat to have at the table.
Ayça Dibekoğlu: Thank you, Robin. Another question of mine is to both Louise and Mher, actually, because you have also touched upon this, about meaningful collaboration and multi-stakeholder cooperation in the process. And I would like to ask, with regards to how you would reduce AI-driven discrimination, because this is also one of the answers that stood out on the Mentimeter, about the role of equality bodies and other regulators. How do we create meaningful cooperation to make a real difference, and how do we engage the equality bodies and other regulators in the structural design?
Louise Hooper: I will address that, but I just want to add to what Robin said, because one of the problems with having got community participation — apart from the fact that there’s no money to do it — is that once you’ve got community participation, even when you prove something is causing discriminatory effect, quite often you don’t get an adequate solution by a court or regulator. What we saw in Canada, there was a similar issue to the sentencing case in the US, where indigenous people in Canada are discriminated against by an algorithm in sentencing decisions. The response of the court was: we accept that the discrimination is there, but it’s alright because the judge knows that and will make a different decision. It’s just not enough. It’s not acceptable. It doesn’t work. Don’t give false promises to communities if you’re asking them to become involved in anti-discrimination work by taking on board their expertise and then ignoring it afterwards, because it’s not okay. That’s the first thing that I would say about community-type participation. In terms of equality bodies, adequate funding is needed. Within the context of the EU, I think there’s a real need to give proper and adequate recognition as regulatory bodies in the context of the AI Act. Sorry, I’m forgetting your question. How much further do you want me to go? I got carried away with being cross about community participation not being taken seriously enough.
Ayça Dibekoğlu: As you wish. Mostly the role of other regulators, specifically the equality bodies, I’m curious about. In fact, I’d also like to hear Milla’s intervention on this as well, because you have also worked with us on our project that we’re now running with three countries, where we specifically work with equality bodies in the EU sphere.
Louise Hooper: Thanks. So what we’re doing with equality bodies, or what the Council of Europe is doing with equality bodies and have asked me to help with, is work on developing e-learning courses and online-offline courses to facilitate knowledge and awareness, and for that to then be rolled out to help equality bodies and other government institutions be aware of AI and equality issues, particularly when commissioning and using AI products. There are also issues to do with the way in which equality bodies can raise awareness of AI and discrimination, the tools that are available to them to investigate on behalf of both individuals and groups, and their ability to bring proceedings. And all of those things collectively, as a package, can really assist in terms of using powers that individuals don’t have to get access to documentation, information and to test systems, to use regulatory sandboxes, and then to also bring proceedings if necessary.
Ayça Dibekoğlu: Thank you very much. Milla, would you also like to intervene briefly? Thank you.
Milla Vidina: I don’t know where to start. I mean, well, maybe let’s start with the obvious points. Equality bodies do not exist for the sake of equality bodies. It’s a very technocratic language, equality bodies. It’s an access to justice mechanism — that’s the way I see it, a public access to justice mechanism. So what would set apart an equality body from an ombud institution or a human rights centre or human rights institute? In most of the cases, what sets them apart is that all equality bodies handle cases directly. And most equality bodies work with both the private and public sector. So they are on the front line handling cases, and in that way immediate; some of our members also decide on cases with legally binding decisions. The majority of our members have litigation powers, and not all of our members, but many, have investigation powers. So for those reasons, because of the specificity of those powers, we think of them as an access to justice mechanism. Now that said, what do we need to facilitate that access to justice, to make it actually more accessible and more effective? Because those are the issues: accessibility barriers and effectiveness. Well, funding was already mentioned — funding also because some of our members, for example, give funds. They leave the decision-making power with whoever brings the complaint, and they fund and give free legal advice, so that the party goes and litigates a case. So in this way, they do not influence, but they empower somebody to bring a case. But beyond funding, two points I wanted to make. One is that equality bodies operate in an ecosystem, and equality law is only one part of the puzzle. We also have data protection law, we also have consumer protection law, we also have competition law, if you’re talking about big tech, and all of those have a stake in algorithmic discrimination. And equality bodies, we cannot do this alone. For our staff members, we’ve been encouraging them and educating them on data protection law, and I know, for example, the French Defender of Rights works with their data protection authority — but not only; also in the Netherlands, at the Chi Institute. So what states could do is setting up, formalizing, institutionalizing a platform where all those regulators sit together, and there is a kind of cross-pollination of the different types of expertise, because they do not talk to each other. And equality law alone can only do that much, because, let’s be honest, we do not have the sanctions data protection law has. We do not have the enforcement powers, or, like for example, even competition law, right? So we need to work together. Then we need reform in the law — this point was already made — so that equality bodies are given more power, but also, outside of equality bodies, the burden of proof. And this is something that I know the scholar Raphaële Xenidis has worked on: basically a presumption of algorithmic discrimination, so that you make it easier to establish prima facie discrimination and shift the burden of proof whenever there is an algorithmic system deployed. Changing the sanctions under non-discrimination law, and perhaps equally importantly, moving away from a grounds-based approach — having to prove you were discriminated against only on gender, only on disability, or combined gender and disability, where you have to prove that for each one of them there was discrimination — and moving beyond that to a truly intersectional approach.
In some legislation, for example in Belgium, intersectionality is already in non-discrimination law. There is also a new directive, the Pay Transparency Directive of the European Union, where intersectionality is in the operative part of the directive, so it is already a legal concept with binding effect. In the equality bodies directives, intersectionality is explicitly mentioned in relation to the prevention and promotion powers, so member states have an obligation, when they enhance, not just give, those powers of equality bodies, to also consider intersectionality. We are starting to have the tools, but we need more of this to empower our members. One last thing: setting up a facility. I am inspired by France, which to my knowledge has the only public facility that investigates and tests algorithms as a service across government. It is called PEREN; I see a colleague in the room, and I don’t know how to translate it, but the point is that if each government could set up a public facility that assists all regulators, equality bodies included, and, if you ask me, academics and civil society as well, with the technical testing and investigation, that would help, because we should not be expected to have the technical expertise ourselves. Our unique added value comes either from lived experience or from human rights, legal and policy knowledge. If states set up such a facility for us, I think it would allow a more coordinated, consistent and larger-scale approach. So that’s it.
Ayça Dibekoğlu: Thank you so much, Milla. Sorry, somebody’s alarm went off in the room, which I think signals that our session is over. I would just like to take one moment to see if Mher would like to answer my final question very briefly, and then we will wrap up by sharing the messages from the session.
Mher Hakobyan: Thank you. Sorry, it takes a few seconds to unmute. I will be very brief. I would just like to say that the support of equality bodies, and not only equality bodies but also the NHRIs, the DPAs and the European Data Protection Supervisor, throughout the AI policy process has been greatly appreciated by civil society organizations, because they add legitimacy to the calls that we often make. Sadly, we live in a world where civil society is sometimes seen as a bit too radical or ambitious, and when public bodies support the strong calls we make, in terms of bans and sufficient human rights safeguards, it really adds to the effectiveness of our work. So this is just an opportunity to thank Equinet and all the other organizations that work with us.
Ayça Dibekoğlu: Thank you. Thank you, Mher. I would just like to share the last Mentimeter question, which will be running while we hear the messages from this session.
Menno Ettema: Because we are curious what you might want to take forward after this session. For us, this is a kind of feedback, to see if we inspired you to take action. We will leave this open, and I give the floor to my colleague for the messages from the session.
Minda Moreira: Okay, good morning. I am just going to try to share my screen. I will stop sharing in a moment, but the Mentimeter stays open, so you can fill in your answers while the screen is shared. This is what I could capture from the session. It was really rich, but we can only have three main takeaways, which I will read out. We expect general consensus; it is more about the message than about the specific wording, which we can still work on afterwards. We have another week or two in which we can give it some tweaks. I decided to divide the messages into three parts: prevention, mitigation and redress. On prevention: the session agreed that more needs to be done to address group-based discrimination and inequality, and that it may not be solved with AI alone. It may be necessary to assess whether AI is really needed or whether non-technical solutions would be more effective. Where AI is needed, transparency and accountability are crucial. Bias detection with mandatory impact assessments must be used, together with involving and consulting impacted communities in AI design and development processes, combined with stronger powers for equality bodies and industry best practices. Does that make sense? Okay, the second message is on mitigation. Algorithmic discrimination is difficult to detect and to prove, and those affected find it difficult to access justice. When it comes to intersectional discrimination it is even more difficult, not only because of the resistance of states and international courts to deal with it, but also because it is hard to work effectively with affected communities. There are major barriers to effective AI regulation for tackling discrimination and bias, and session participants agreed that these include the lack of transparency and accountability, limited access to data and training sets, commercial secrecy and funding. Finally, on redress: access to adequate funding, particularly for equality bodies, is a main barrier to accessing justice. Some steps are being taken by advocacy groups to collaborate with regulatory bodies, but a multi-stakeholder approach at a global level, involving civil society, the private sector, equality bodies and communities, is vital for meaningful cooperation and to fully address discrimination in all its forms, particularly intersectional discrimination. Is everyone okay with that? Would anyone like to add something important that is not mentioned?
Ayça Dibekoğlu: Please object now, or at any point until the 25th of May, when we have to finalize our messages. Okay, I see no objections. Thank you, Minda. Sorry that we are quite a bit over time. Thank you to both our panelists and the audience for being here and joining this discussion. I think we have a break now, so I hope to talk more with you soon, and have a great rest of your conference.
Robin Aïsha Pocornie
Speech speed
125 words per minute
Speech length
926 words
Speech time
441 seconds
AI cannot solve inequality alone
Explanation
AI is inherently based on existing inequalities and cannot fix what it is built upon. Real-world inequalities need to be addressed first before AI can be used to address discrimination.
Evidence
The development pipeline for AI does not consider intersectional discrimination or human-based discrimination as an integral part of mitigation.
Major discussion point
AI and inequality
Agreed with
– Louise Hooper
– Milla Vidina
– Minda Moreira
Agreed on
AI alone cannot solve inequality
Disagreed with
– Milla Vidina
Disagreed on
Role of AI in solving inequality
AI is shaped by existing inequalities and biases
Explanation
AI systems are developed based on data and assumptions that reflect existing societal inequalities. This means AI cannot inherently solve inequality without addressing the root causes in society.
Major discussion point
AI and inequality
Agreed with
– Louise Hooper
– Milla Vidina
– Minda Moreira
Agreed on
AI alone cannot solve inequality
Community expertise should be valued equally to technical expertise
Explanation
Lived experiences of impacted communities should be considered expert knowledge, equal to technical expertise. This perspective is still underrepresented in regulatory bodies and discussions about AI.
Evidence
Examples of the Distributed AI Research Centre and the Indigenous Protocol and AI Working Group, which prioritize community-based expertise in decision-making.
Major discussion point
Involving impacted communities
Agreed with
– Mher Hakobyan
– Ayça Dibekoğlu
Agreed on
Importance of involving impacted communities
Meaningful participation requires giving communities decision-making power
Explanation
For true community involvement, impacted groups should have final decision-making power rather than just being consulted. This approach ensures that community perspectives are not just heard but actually implemented.
Evidence
The Distributed AI Research Centre ensures that community-based entities are the final decision makers, unlike typical approaches where larger regulatory bodies retain the final say.
Major discussion point
Involving impacted communities
Agreed with
– Mher Hakobyan
– Ayça Dibekoğlu
Agreed on
Importance of involving impacted communities
Louise Hooper
Speech speed
154 words per minute
Speech length
1413 words
Speech time
548 seconds
Lack of transparency and explainability in AI systems
Explanation
AI systems often operate as ‘black boxes’, making it difficult to understand how decisions are made. This lack of transparency poses challenges for detecting and addressing discrimination.
Major discussion point
Challenges in regulating AI discrimination
Agreed with
– Milla Vidina
– Minda Moreira
Agreed on
Need for transparency and accountability in AI systems
Difficulty in detecting and proving algorithmic discrimination
Explanation
Algorithmic discrimination is hard to identify and prove due to the complexity of AI systems. This makes it challenging for individuals to seek justice when they face discrimination.
Major discussion point
Challenges in regulating AI discrimination
Equality bodies need stronger powers and adequate funding
Explanation
To effectively address AI discrimination, equality bodies require more robust powers and sufficient funding. This would enable them to investigate, litigate, and raise awareness about AI and equality issues.
Evidence
The Council of Europe is developing e-learning courses to help equality bodies and government institutions understand AI and equality issues, particularly when commissioning and using AI products.
Major discussion point
Role of equality bodies and regulators
Intersectional discrimination is complex and not well addressed in law
Explanation
Legal systems struggle to adequately handle intersectional discrimination cases. This is due to reluctance from states and international courts to deal with such complex issues.
Evidence
Examples of resistance at the Court of Justice of the European Union to addressing intersectional discrimination, while there is increasing recognition in ECHR and Council of Europe documents.
Major discussion point
Addressing intersectional discrimination
Milla Vidina
Speech speed
164 words per minute
Speech length
2057 words
Speech time
750 seconds
Technical standards could provide minimal prevention of inequality
Explanation
While AI cannot solve inequality entirely, technical standards in AI development could offer some prevention measures. These include increased transparency, documentation, and logging of decision-making processes.
Evidence
Equinet’s work with industry representatives to embed equality safeguards in technical standards for AI systems.
Major discussion point
AI and inequality
Agreed with
– Louise Hooper
– Minda Moreira
Agreed on
Need for transparency and accountability in AI systems
Disagreed with
– Robin Aïsha Pocornie
Disagreed on
Role of AI in solving inequality
Collaboration between different regulatory bodies is crucial
Explanation
Effective regulation of AI requires cooperation between various regulatory bodies, including those focused on equality, data protection, consumer protection, and competition. This cross-pollination of expertise is necessary to address the complex issues surrounding AI discrimination.
Evidence
Examples of collaboration between the French Defender of Rights and their data protection authority, and similar efforts in the Netherlands.
Major discussion point
Role of equality bodies and regulators
Public facilities to investigate algorithms could assist regulators
Explanation
Establishing public facilities to investigate and test algorithms could support regulators, academics, and civil society in addressing AI discrimination. This would provide technical expertise to complement the legal and policy knowledge of equality bodies.
Evidence
The example of PEREN in France, a public facility that investigates and tests algorithms in support of regulators.
Major discussion point
Role of equality bodies and regulators
Need to move beyond grounds-based approach in non-discrimination law
Explanation
Current non-discrimination laws often require proving discrimination on specific grounds (e.g., gender, disability). A more intersectional approach is needed to effectively address complex forms of discrimination in AI systems.
Evidence
Examples of intersectionality being included in Belgian non-discrimination law and the EU Pay Transparency Directive.
Major discussion point
Addressing intersectional discrimination
Menno Ettema
Speech speed
133 words per minute
Speech length
1341 words
Speech time
600 seconds
Commercial secrecy and lack of access to data
Explanation
Commercial secrecy and limited access to data and training sets are significant barriers to effective AI regulation. These factors make it difficult for regulators and researchers to assess and address discrimination in AI systems.
Evidence
Results from a Mentimeter poll showing that participants identified these as major barriers to effective AI regulation.
Major discussion point
Challenges in regulating AI discrimination
Mher Hakobyan
Speech speed
143 words per minute
Speech length
1368 words
Speech time
571 seconds
Engaging diverse communities can be challenging due to competing priorities
Explanation
Organizations representing different communities often face urgent issues that take precedence over AI-related concerns. This can make it difficult to engage them in discussions and advocacy around AI discrimination.
Evidence
Example of disability rights organizations focusing on immediate issues like accessibility barriers and institutionalization rather than AI-related concerns.
Major discussion point
Involving impacted communities
Agreed with
– Robin Aïsha Pocornie
– Ayça Dibekoğlu
Agreed on
Importance of involving impacted communities
Support from equality bodies adds legitimacy to civil society advocacy
Explanation
The involvement of equality bodies and other public institutions in advocating for strong AI safeguards adds credibility to civil society efforts. This support helps counter perceptions that civil society demands are too radical or ambitious.
Major discussion point
Role of equality bodies and regulators
Expanding research to include less studied forms of discrimination
Explanation
While there is substantial research on racial and gender-based discrimination in AI, other forms of discrimination, such as those based on disability or socioeconomic status, are less studied. Expanding research to cover these areas is crucial for addressing intersectional discrimination.
Evidence
The speaker’s work at Amnesty International’s Algorithmic Accountability Lab, which aims to push boundaries and engage with a broader range of rights holders.
Major discussion point
Addressing intersectional discrimination
Audience
Speech speed
144 words per minute
Speech length
333 words
Speech time
138 seconds
Real-world discrimination must be addressed first
Explanation
To effectively address AI discrimination, underlying societal discrimination must be tackled first. AI systems reflect and potentially amplify existing biases in society, so focusing solely on AI without addressing real-world inequalities is insufficient.
Evidence
Example of discrimination in recruitment processes, which can be reflected in AI systems if not addressed in real-life practices.
Major discussion point
AI and inequality
Minda Moreira
Speech speed
137 words per minute
Speech length
491 words
Speech time
214 seconds
More needs to be done to address group-based discrimination and inequality
Explanation
The session agreed that current efforts to tackle group-based discrimination and inequality are insufficient. There is a need for increased action and more effective measures to address these issues in AI systems.
Major discussion point
Addressing discrimination in AI
AI alone may not solve inequality
Explanation
The session recognized that AI by itself is not a complete solution to inequality. It may be necessary to consider non-technical solutions or assess whether AI is truly needed in certain situations.
Major discussion point
AI and inequality
Agreed with
– Robin Aïsha Pocornie
– Louise Hooper
– Milla Vidina
Agreed on
AI alone cannot solve inequality
Transparency and accountability are crucial in AI systems
Explanation
The session emphasized the importance of transparency and accountability in AI systems. These elements are essential for addressing discrimination and ensuring fairness in AI applications.
Major discussion point
Challenges in regulating AI discrimination
Agreed with
– Louise Hooper
– Milla Vidina
Agreed on
Need for transparency and accountability in AI systems
Ayça Dibekoğlu
Speech speed
158 words per minute
Speech length
1483 words
Speech time
560 seconds
AI is shaped by data, assumptions, and power structures
Explanation
AI systems are not neutral or objective, but are influenced by the data they are trained on, underlying assumptions, and existing power structures. This shaping can lead to the perpetuation or amplification of biases and discrimination.
Major discussion point
AI and inequality
Meaningful participation of impacted communities is essential
Explanation
The involvement of communities affected by AI systems is crucial in shaping outcomes and addressing discrimination. It is important to go beyond surface-level engagement and ensure that impacted communities have a real voice in the process.
Major discussion point
Involving impacted communities
Agreed with
– Robin Aïsha Pocornie
– Mher Hakobyan
Agreed on
Importance of involving impacted communities
Agreements
Agreement points
AI alone cannot solve inequality
Speakers
– Robin Aïsha Pocornie
– Louise Hooper
– Milla Vidina
– Minda Moreira
Arguments
AI cannot solve inequality alone
AI is shaped by existing inequalities and biases
Technical standards could provide minimal prevention of inequality
AI alone may not solve inequality
Summary
Multiple speakers agreed that AI by itself is not sufficient to address inequality and may even perpetuate existing biases.
Importance of involving impacted communities
Speakers
– Robin Aïsha Pocornie
– Mher Hakobyan
– Ayça Dibekoğlu
Arguments
Community expertise should be valued equally to technical expertise
Meaningful participation requires giving communities decision-making power
Engaging diverse communities can be challenging due to competing priorities
Meaningful participation of impacted communities is essential
Summary
Speakers emphasized the importance of involving and empowering communities affected by AI systems in the decision-making process.
Need for transparency and accountability in AI systems
Speakers
– Louise Hooper
– Milla Vidina
– Minda Moreira
Arguments
Lack of transparency and explainability in AI systems
Technical standards could provide minimal prevention of inequality
Transparency and accountability are crucial in AI systems
Summary
Multiple speakers highlighted the importance of transparency and accountability in AI systems to address discrimination and ensure fairness.
Similar viewpoints
Both speakers emphasized the need for stronger regulatory mechanisms and collaboration between different bodies to effectively address AI discrimination.
Speakers
– Louise Hooper
– Milla Vidina
Arguments
Equality bodies need stronger powers and adequate funding
Collaboration between different regulatory bodies is crucial
Public facilities to investigate algorithms could assist regulators
These speakers agreed on the need to address intersectional discrimination more effectively in both law and research.
Speakers
– Louise Hooper
– Milla Vidina
– Mher Hakobyan
Arguments
Intersectional discrimination is complex and not well addressed in law
Need to move beyond grounds-based approach in non-discrimination law
Expanding research to include less studied forms of discrimination
Unexpected consensus
Questioning the necessity of AI solutions
Speakers
– Robin Aïsha Pocornie
– Minda Moreira
Arguments
AI cannot solve inequality alone
AI alone may not solve inequality
Explanation
Despite coming from different backgrounds, both speakers questioned the assumption that AI is always the best solution, suggesting a more critical approach to AI implementation.
Overall assessment
Summary
The main areas of agreement included the limitations of AI in solving inequality, the importance of involving impacted communities, the need for transparency and accountability in AI systems, and the challenges in addressing intersectional discrimination.
Consensus level
There was a moderate to high level of consensus among the speakers on these key issues. This consensus suggests a growing recognition of the complex challenges surrounding AI and discrimination, and the need for multifaceted approaches involving legal, technical, and community-based solutions. The implications of this consensus point towards a need for more comprehensive and inclusive strategies in developing and regulating AI systems to address discrimination and inequality.
Differences
Different viewpoints
Role of AI in solving inequality
Speakers
– Robin Aïsha Pocornie
– Milla Vidina
Arguments
AI cannot solve inequality alone
Technical standards could provide minimal prevention of inequality
Summary
Robin argues that AI cannot solve inequality as it is inherently based on existing inequalities, while Milla suggests that technical standards in AI development could offer some prevention measures.
Unexpected differences
Overall assessment
Summary
The main areas of disagreement revolve around the role of AI in addressing inequality and the most effective regulatory approaches.
Disagreement level
The level of disagreement among speakers is relatively low. Most speakers agree on the fundamental challenges of AI discrimination and the need for improved regulation and community involvement. The disagreements are primarily about specific approaches or emphases rather than fundamental principles. This suggests a general consensus on the importance of addressing AI discrimination, which could facilitate collaborative efforts in developing solutions.
Partial agreements
Takeaways
Key takeaways
AI alone cannot solve inequality, as it is shaped by existing biases and inequalities
Meaningful involvement of impacted communities in AI development is crucial but challenging
Intersectional discrimination is complex and not well addressed in current laws and AI systems
Collaboration between different regulatory bodies and stronger powers for equality bodies are needed to address AI discrimination
Lack of transparency, explainability, and access to data are major barriers in regulating AI discrimination
Resolutions and action items
Assess whether AI is truly needed before implementation, considering non-technical alternatives
Conduct mandatory impact assessments and bias detection in AI development
Involve and consult impacted communities in AI design and development processes
Strengthen powers and increase funding for equality bodies
Establish public facilities to assist regulators in investigating and testing algorithms
Unresolved issues
How to effectively secure meaningful involvement of impacted communities in AI development
How to address intersectional discrimination in AI systems and law
How to balance technical expertise with lived experiences in AI regulation
How to overcome commercial secrecy barriers in accessing AI training data and algorithms
How to effectively coordinate between different regulatory bodies (equality, data protection, consumer protection, etc.) in AI governance
Suggested compromises
Using AI to implement positive action measures while acknowledging its limitations in solving systemic inequalities
Balancing the need for technical expertise with valuing community-based and lived experience expertise in AI regulation
Establishing multi-stakeholder platforms to facilitate collaboration between different regulatory bodies and civil society organizations
Thought provoking comments
AI cannot fix inequality because it is inherently made by the inequalities that are already existing. So it cannot fix what it is based on.
Speaker
Robin Aïsha Pocornie
Reason
This comment challenges the common assumption that AI can be a neutral solution to societal problems.
Impact
It shifted the discussion towards examining the fundamental limitations of AI in addressing inequality and discrimination.
Intersectional discrimination is incredibly complicated. I think it’s not dealt with properly in law. There’s a lot of resistance in states to introduce legal prohibitions on intersectional discrimination before we even get to any problems in terms of detection and resolving intersectional discrimination in AI.
Speaker
Louise Hooper
Reason
This comment highlights the complexity of addressing intersectional discrimination, both legally and technically.
Impact
It broadened the conversation to include legal and policy challenges alongside technical ones, emphasizing the need for a multifaceted approach.
Equality bodies operate in ecosystem, and equality law is only one part of the puzzle. We also have data protection law, we also have consumer protection law, we also have competition law, if you’re talking about big tech, and all of those have a stake in algorithmic discrimination.
Speaker
Milla Vidina
Reason
This comment introduces the idea of a broader regulatory ecosystem needed to address AI discrimination.
Impact
It expanded the discussion to consider the role of various regulatory bodies and laws, emphasizing the need for collaboration across different domains.
We often in this kind of fora, you know, we give priority to people who have technical expertise and a lot of professionalized kind of expertise, so to say, but we don’t think of a person that has the lived experience as an expert.
Speaker
Mher Hakobyan
Reason
This comment challenges traditional notions of expertise in AI discussions.
Impact
It prompted reflection on the importance of including diverse perspectives, particularly from affected communities, in AI development and regulation.
Overall assessment
These key comments shaped the discussion by challenging assumptions about AI’s capacity to address inequality, highlighting the complexity of intersectional discrimination, emphasizing the need for a multifaceted regulatory approach, and advocating for the inclusion of diverse perspectives, particularly from affected communities. The discussion evolved from technical considerations to a more holistic examination of the social, legal, and policy dimensions of AI discrimination.
Follow-up questions
How can we effectively address intersectional discrimination in AI systems?
Speaker
Linda Ardenghi (audience member)
Explanation
This was raised as an important issue that is often mentioned in documents but not adequately addressed in practice or technical implementations.
How can we meaningfully involve impacted communities in the design and development of AI systems?
Speaker
Ayça Dibekoğlu
Explanation
This was identified as a crucial step for preventing discrimination, but the practical implementation remains challenging.
What would an effective multi-stakeholder platform for addressing AI discrimination look like?
Speaker
Milla Vidina
Explanation
This was suggested as a way to bring together different regulatory bodies and expertise to tackle algorithmic discrimination more comprehensively.
How can we reform equality laws to better address AI-driven discrimination?
Speaker
Milla Vidina
Explanation
Specific suggestions were made, such as changing the burden of proof and moving away from a grounds-based approach, which warrant further exploration.
What would be the impact of setting up public facilities to assist in technical testing and investigation of algorithms?
Speaker
Milla Vidina
Explanation
This was proposed as a potential solution to support regulators, academics, and civil society in addressing algorithmic discrimination.
How can we effectively demystify AI for rights holders and affected communities?
Speaker
Mher Hakobyan
Explanation
This was identified as a barrier to engaging a broader range of stakeholders in AI policy discussions and advocacy.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.