Workshop 1: AI & non-discrimination in digital spaces: from prevention to redress
13 May 2025 07:30h - 08:30h
Workshop 1: AI & non-discrimination in digital spaces: from prevention to redress
Session at a glance
Summary
This workshop focused on AI and non-discrimination in digital spaces, examining how to prevent, mitigate, and address algorithmic discrimination. The session was moderated by Ayça Dibekoğlu from the Council of Europe’s Hate Speech, Hate Crime and Artificial Intelligence Unit, with panelists including computer scientist Robin Aïsha Pocornie, human rights barrister Louise Hooper, and Amnesty International’s Mher Hakobyan.
The discussion revealed significant skepticism about AI’s ability to solve inequality, with participants emphasizing that AI systems are inherently shaped by existing biases and power structures. Robin Pocornie argued for an anti-techno-solutionist approach, advocating that communities affected by AI harm should be recognized as experts and decision-makers rather than mere consultees. She stressed the importance of assessing whether AI is actually needed before deployment, rather than assuming technological solutions are always appropriate.
Louise Hooper highlighted the legal challenges in addressing algorithmic discrimination, noting that it is difficult to detect and prove, with significant barriers to accessing justice. She discussed two key regulatory frameworks: the Council of Europe’s Framework Convention on AI and the EU AI Act, emphasizing their different approaches to governance and enforcement. The complexity of intersectional discrimination emerged as a particular challenge, with limited legal recognition and technical difficulty in quantification.
Mher Hakobyan emphasized the importance of meaningful multi-stakeholder participation, describing Amnesty’s collaborative approach with impacted communities and local organizations. He advocated for transparency, accountability measures, and in some cases, outright bans on harmful AI systems as prevention strategies.
Interactive polling revealed that most participants believed AI cannot solve inequality and that inequality must be addressed through other means. Key barriers to effective regulation included lack of transparency, limited access to data, commercial secrecy, and insufficient funding. The session concluded with calls for stronger equality body powers, mandatory impact assessments, meaningful community involvement, and coordinated multi-stakeholder approaches to address discrimination comprehensively.
Keypoints
## Major Discussion Points:
– **AI’s limitations in solving inequality**: The discussion explored whether AI can address discrimination and inequality, with strong consensus that AI alone cannot solve these problems due to being built on existing biased data and power structures. Participants emphasized the need to assess whether AI is actually necessary before deployment.
– **Prevention vs. reactive approaches**: Panelists and participants discussed moving beyond addressing AI harms after they occur to implementing proactive prevention measures, including mandatory impact assessments, involving affected communities in design processes, and questioning the necessity of AI solutions from the outset.
– **Intersectional discrimination challenges**: The conversation highlighted the complexity of addressing intersectional discrimination in AI systems, noting that it’s often mentioned in literature but rarely effectively addressed in practice, with significant technical and legal barriers to detection and remediation.
– **Multi-stakeholder collaboration and community involvement**: Extensive discussion on the need for meaningful participation of impacted communities in AI development and regulation, moving beyond tokenistic consultation to genuine community-led decision-making and recognizing lived experience as expertise.
– **Regulatory frameworks and enforcement barriers**: Analysis of current legal approaches (EU AI Act, Council of Europe Framework Convention) and identification of key barriers including lack of transparency, limited access to data, commercial secrecy, inadequate funding for equality bodies, and the need for stronger cross-regulatory cooperation.
## Overall Purpose:
The workshop aimed to examine how AI systems can avoid perpetuating or amplifying discrimination, focusing on the full spectrum from prevention through mitigation to redress. The goal was to move beyond surface-level discussions to explore practical implementation of non-discrimination measures in AI, emphasizing community participation and meaningful regulatory responses.
## Overall Tone:
The discussion maintained a serious, academic tone throughout while being notably critical of current approaches to AI regulation and discrimination. The tone was collaborative and solution-oriented, with participants building on each other’s insights. There was a consistent thread of urgency about addressing these issues proactively rather than reactively, and the conversation remained constructively critical rather than dismissive, with speakers offering concrete examples and alternative approaches to current practices.
Speakers
**Speakers from the provided list:**
– **Minda Moreira**: Session moderator, reads session rules and provides closing remarks
– **Ayça Dibekoğlu**: Session moderator, works at the Hate Speech, Hate Crime and Artificial Intelligence Unit of the Council of Europe within the anti-discrimination department
– **Menno Ettema**: Head of unit (mentioned as Ayça’s head of unit), guides Mentimeter questions during the session
– **Robin Aïsha Pocornie**: Computer scientist specializing in algorithmic discrimination and bias, first person to establish case law around algorithmic discrimination in the Netherlands after challenging a discriminatory facial recognition algorithm
– **Louise Hooper**: Human rights barrister with over 20 years of experience focused on human rights, human dignity and equality, expert of the Council of Europe’s Committee on Artificial Intelligence, Equality and Non-Discrimination (GECCI-DADI)
– **Mher Hakobyan**: Amnesty International’s Advocacy Advisor on AI Regulation, leads advocacy work at the Algorithmic Accountability Lab of Amnesty Tech, has experience from the European Disability Forum, Equinet and the Council of Europe
– **Milla Vidina**: Leads Equinet’s work on AI and algorithmic discrimination, Equinet is a network of national equality public authorities
– **Audience**: Various audience members who asked questions and participated in discussions
**Additional speakers:**
– **Linda Ardenghi**: Trainee at the Council of Europe, asked about intersectional discrimination
– **Somaya**: Representative from YouThink, discussed the need to solve discrimination in real life before addressing it in AI
Full session report
# Comprehensive Workshop Report: AI and Non-Discrimination in Digital Spaces
## Executive Summary
This workshop examined the intersection of artificial intelligence and non-discrimination in digital spaces, focusing on prevention, mitigation, and redress of algorithmic discrimination. Moderated by Ayça Dibekoğlu from the Council of Europe’s Hate Speech, Hate Crime and Artificial Intelligence Unit, alongside Minda Moreira, the session brought together diverse expertise including computer scientist Robin Aïsha Pocornie, human rights barrister Louise Hooper, Amnesty International’s Mher Hakobyan, and Equinet’s Milla Vidina.
The discussion revealed significant skepticism about AI’s capacity to solve inequality, with participants emphasizing that AI systems reflect and amplify existing societal biases. Key themes included the need to recognize affected communities as experts, gaps in current regulatory frameworks, challenges in addressing intersectional discrimination, and the importance of multi-stakeholder collaboration. Interactive polling throughout the session engaged participants and revealed broad alignment with speakers’ perspectives on AI’s limitations and regulatory barriers.
## Session Overview and Structure
The workshop followed a structured format beginning with opening remarks from moderators, followed by individual panelist presentations, interactive Mentimeter polling, audience questions, and concluding takeaway messages. This format enabled both expert input and meaningful audience engagement throughout the session.
Ayça Dibekoğlu opened by framing the discussion around three key areas: prevention of algorithmic discrimination, mitigation strategies when discrimination occurs, and redress mechanisms for affected communities. The session aimed to move beyond theoretical discussions toward practical approaches for addressing AI discrimination in digital spaces.
## Key Panelist Contributions
### Robin Aïsha Pocornie: Anti-Techno-Solutionist Perspective
Robin presented a fundamental challenge to techno-solutionist approaches, arguing that “AI cannot solve inequality because it is inherently made by the inequalities that are already existing. So it cannot fix what it is based on.” She emphasized that the first question should be whether technology is needed at all, rather than automatically developing AI solutions.
Drawing from her work with the Distributed AI Research Centre and the Indigenous Protocol and AI Working Group, Robin highlighted the importance of community-led approaches. She argued that “community always goes above the technology. The community that’s being harmed is the expert,” challenging traditional hierarchies of knowledge in AI governance.
Her prevention approach focused on questioning AI necessity entirely, advocating for community leadership in decision-making processes, and recognizing that complex social problems cannot be resolved through technological solutions alone.
### Louise Hooper: Legal Framework Analysis
Louise provided detailed analysis of legal challenges in addressing algorithmic discrimination, noting that such discrimination is “difficult to detect, prove, and access justice for due to opacity and information asymmetries.” She emphasized that even when discrimination is proven, courts often provide inadequate remedies.
She highlighted the need for mandatory impact assessments and meaningful community involvement in AI system design processes. Louise noted that current regulatory approaches often focus on product safety principles rather than comprehensive human rights frameworks, creating gaps in protection against discrimination.
Her recommendations included legal reforms such as burden of proof changes and moving beyond single-ground discrimination approaches to address the complexity of intersectional discrimination more effectively.
### Mher Hakobyan: Civil Society and Advocacy Perspective
Mher emphasized the importance of recognizing lived experience as expertise, noting that “we often give priority to people who have technical expertise and a lot of professionalised kind of expertise, but we don’t think of a person that has the lived experience as an expert.”
He described Amnesty International’s collaborative approach with impacted communities and local organizations, highlighting how public body support for civil society advocacy adds legitimacy and effectiveness to their work. Mher supported prevention strategies that could include bans for certain AI applications rather than waiting for harm to occur.
He also noted gaps in research, observing that while substantial work exists on racial and gender discrimination, “research often lacks knowledge about discrimination based on disability and socioeconomic grounds.”
### Milla Vidina: Technical Standards and Regulatory Cooperation
Milla provided detailed explanation of Equinet’s work on technical standardization, including their involvement in developing European standards for AI systems. She described the CE marking process and how equality considerations can be embedded in technical standards during the design and development phases.
While acknowledging AI’s limitations, Milla offered a more nuanced perspective on the potential for technical measures to provide some safeguards through better documentation, transparency, and standardization processes. She emphasized the need for cross-regulatory cooperation between equality bodies, data protection authorities, and other regulators.
Milla proposed establishing public facilities similar to France’s PEREN to assist regulators with technical testing and investigation of algorithms, addressing the technical capacity gaps that prevent effective regulatory oversight.
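To make the documentation point above concrete, the short sketch below is a purely illustrative example (not material presented at the workshop) of how a development team might record which fairness metric was applied to a model’s outputs and why, so that the choice can later be traced, questioned, and enforced against. The metric, data, group labels, and sign-off role are all hypothetical assumptions.

```python
# Illustrative sketch only: recording a fairness-metric choice and its justification
# so the decision can be traced during later oversight. All names, data and roles
# here are hypothetical, not drawn from the workshop.
from dataclasses import dataclass, field
from datetime import date

import numpy as np


def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Gap between the highest and lowest positive-outcome rate across groups."""
    rates = [float(y_pred[group == g].mean()) for g in np.unique(group)]
    return max(rates) - min(rates)


@dataclass
class FairnessRecord:
    metric_name: str
    metric_value: float
    justification: str   # why this metric fits the use case (reason-giving)
    signed_off_by: str   # who exercised human oversight
    assessed_on: str = field(default_factory=lambda: date.today().isoformat())


# Hypothetical screening-model outputs for two applicant groups.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

record = FairnessRecord(
    metric_name="demographic_parity_difference",
    metric_value=demographic_parity_difference(y_pred, group),
    justification="Recruitment screening: equal selection rates across groups "
                  "is the stated policy goal at this stage.",
    signed_off_by="designated fundamental-rights assessor",
)
print(record)
```

A regulator, or a public technical facility of the kind Milla described, could then re-run the same check and compare the result against the recorded justification.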
## Interactive Polling Results and Analysis
The session incorporated extensive Mentimeter polling that revealed significant audience engagement and alignment with speaker perspectives. Key polling results included:
**On AI’s capacity to solve inequality:** Nearly half of participants agreed that inequality cannot be solved through AI, with responses showing strong skepticism about technological solutions to social problems.
**Barriers to effective regulation:** Participants identified lack of transparency, limited access to data, commercial secrecy, and inadequate funding as primary obstacles to effective AI discrimination regulation.
**Group-based discrimination concerns:** Polling revealed particular concerns about discrimination affecting multiple intersecting identities, supporting speakers’ emphasis on intersectional discrimination challenges.
**Prevention measures:** Audience responses supported proactive approaches including impact assessments, community involvement, and stronger regulatory frameworks.
The polling results demonstrated sophisticated understanding of the issues among participants and broad recognition of AI discrimination challenges across different stakeholder groups.
## Audience Engagement and Questions
The session featured active audience participation with several substantive questions that advanced the discussion. Notable interventions included:
Linda Ardenghi raised questions about intersectional discrimination and how multiple identities interact in AI systems, prompting detailed responses about the complexity of addressing overlapping forms of discrimination.
Somaya contributed an important perspective about the need to address real-world discrimination first, noting that “if we cannot solve discrimination in the real world, how can we solve it in the digital world?”
Audience questions demonstrated deep engagement with the material and helped speakers elaborate on practical implementation challenges, community involvement mechanisms, and regulatory approaches.
## Areas of Consensus and Disagreement
### Points of Agreement
Strong consensus emerged around several key principles:
– AI cannot solve inequality independently, as these systems reflect and amplify existing societal discrimination
– Meaningful community participation requires genuine decision-making power rather than tokenistic consultation
– Intersectional discrimination remains inadequately addressed in both technical development and legal frameworks
– Multi-stakeholder collaboration and cross-regulatory cooperation are essential for effective responses
### Areas of Tension
Despite broad agreement on principles, speakers differed on implementation approaches:
**Prevention strategies:** Robin advocated for questioning AI necessity entirely, Louise focused on improving regulatory frameworks, and Mher supported potential bans for harmful AI systems.
**Role of technical solutions:** Robin maintained a strong anti-techno-solutionist position, while Milla saw value in working within technical standardization processes to embed equality safeguards.
**Regulatory approaches:** Speakers emphasized different aspects of strengthening oversight, from litigation capabilities to cross-regulatory cooperation to technical support facilities.
## Unresolved Issues and Future Challenges
The workshop identified several ongoing challenges requiring continued attention:
**Meaningful participation mechanisms:** While speakers agreed on the importance of community leadership, specific mechanisms for providing adequate funding and genuine decision-making power remain unclear.
**Intersectional discrimination:** Both legal and technical frameworks struggle to address the complexity of multiple, overlapping identities and their interactions in AI systems.
**Transparency versus commercial interests:** Balancing algorithmic accountability requirements with legitimate business interests presents ongoing tensions affecting oversight capabilities.
**Funding and capacity:** Adequate resources for equality bodies and community participation were identified as critical but without specific implementation pathways.
## Final Takeaway Messages
Minda Moreira concluded the session with three key takeaway messages that represent the workshop’s official conclusions:
1. **Community expertise must be recognized and centered:** Affected communities possess essential knowledge that should guide AI governance decisions, moving beyond consultation toward genuine leadership roles.
2. **Prevention approaches are essential:** Rather than waiting to address harms after they occur, focus must shift toward preventing discriminatory AI systems from being deployed in the first place.
3. **Comprehensive collaboration is required:** Effective responses to AI discrimination require coordinated efforts across technical, legal, civil society, and community stakeholders with adequate institutional support.
## Practical Recommendations and Action Items
The discussion generated several concrete recommendations:
**Capacity building:** The Council of Europe committed to developing e-learning courses for equality bodies and government institutions on AI and equality issues.
**Technical support:** Establishing public facilities to assist regulators with technical testing and investigation of algorithms, addressing capacity limitations in regulatory oversight.
**Legal reforms:** Implementing mandatory impact assessments, transparency requirements, and burden of proof changes to make algorithmic discrimination cases more viable.
**Community involvement:** Developing mechanisms for meaningful community participation with adequate funding and genuine decision-making power in AI governance processes.
## Conclusion
This workshop demonstrated sophisticated understanding of AI discrimination challenges while revealing significant gaps in current approaches. The emphasis on community expertise, prevention over reaction, and comprehensive collaboration provides a framework for more effective approaches to algorithmic discrimination.
The session’s contribution lies in articulating alternative approaches that center affected communities, question fundamental assumptions about AI necessity, and demand systemic rather than piecemeal solutions. The interactive elements revealed broad stakeholder alignment on these principles, suggesting growing recognition of the need for fundamental changes in how AI governance addresses discrimination.
Moving forward, translating these insights into practice requires continued work on funding mechanisms, legal reforms, and institutional changes that enable meaningful implementation of community-centered, prevention-focused approaches to AI discrimination. The workshop provides a foundation for continued collaboration toward more equitable AI governance that serves human rights rather than perpetuating existing inequalities.
Session transcript
Minda Moreira: Welcome to the workshop AI and non-discrimination in digital spaces from prevention to redress. I’m just going to read the session rules and then give the floor to Ayça. So please enter with your full name if you’re online. To ask a question, raise your hand using the Zoom function. You will be unmuted when the floor is given to you. When speaking, switch on the video, state your name and affiliation. Do not share links to the Zoom meetings, not even with your colleagues. Thank you very much. Ayça, the floor is yours.
Ayça Dibekoğlu: Thank you, Minda Moreira. Good morning to everyone in Strasbourg and good morning to those joining online. My name is Ayça Dibekoğlu and I work at the Hate Speech, Hate Crime and Artificial Intelligence Unit of the Council of Europe, which is within the anti-discrimination department of our organization, where among many areas of work, we focus on strengthening human rights and equality in the context of emerging technologies. I’m delighted this morning to be moderating this session together with my head of unit, Menno Ettema, in which we bring together expert voices from across academia, civil society, the tech community and public institutions. Today we’re here to explore a critical and urgent question. How can we ensure that AI systems, which are now increasingly used in law enforcement, in welfare and employment and in health, do not infringe or amplify discrimination and the right to equality? AI is often presented as neutral or objective, but in reality, it’s shaped by data, it’s shaped by assumptions and power structures that go deep within its design and its use. This session is called, as Minda Moreira mentioned, From Prevention to Redress, AI and Non-Discrimination in Digital Spaces, and the idea is to push beyond the surface. We’ll explore what prevention, mitigation and meaningful redress should actually look like in practice and who gets to shape those outcomes. You’ll hear from a brilliant panel of experts, and we will be inviting you, the audience, among whom are also experts actively working in this area, to contribute through interactive polls and questions that we will be presenting to you via Mentimeter. Please remember that this is a conversation, not just a panel. After short interventions from each panelist, we’ll move on to Mentimeter questions, and we hope that they will be as thought-provoking to you as they were to us. And after your answers, we’d like to encourage you to elaborate on them further and explore topics that we haven’t yet discussed, to make the session as effective and beneficial as possible. So, let’s dive in. I would like to first give the floor to Robin Aïsha Pocornie, who is a computer scientist who specializes in algorithmic discrimination and bias. You might also recognize Robin as the first person who established case law around algorithmic discrimination in the Netherlands after challenging a discriminatory facial recognition algorithm. Robin, drawing both from your personal experience and also your broader work on fairness and bias, we’d love to hear how you approach the challenges and possibilities of preventing, mitigating, and addressing algorithmic discrimination. The floor is yours.
Robin Aïsha Pocornie: Thank you so much, Ayça, for the great introduction. I’m happy to be here. Hi, everyone. My name is Robin Aïsha Pocornie, and I work on the intersection of data-driven technologies and the implications they have on people, especially from a race, gender and class-income perspective. I work from approaches that are anti-techno-solutionist, which means that complex social problems cannot be fixed by technology alone, especially if these problems are made by the technology. An example of this is data-driven facial detection software that cannot recognize people of a certain skin color; a techno-solutionist perspective alone cannot fix that problem. I also work from the approach of radical reform, also known as non-reformist reform, which holds that community gain is more important than individual gain. We do this by prioritizing community work and going to the communities that are being harmed, instead of looking in from the outside and trying to fix problems we are not personally, or at least as a community, related to. The approaches for prevention of AI harms that I work with in my consultancies, and advise different clients on, are that community always goes above the technology, that the community being harmed is the expert, and the integration of diverse knowledge. By that I mean that technological knowledge, such as computer science or data-driven knowledge at a higher-education level, is currently more accepted than community-based expertise, that is, real lived experience. And how we do this is already being done: in small communities all over the world, for example in Singapore, there is an impact-driven community that works on mitigation of AI harms directly from a prevention point of view. What they do is treat technological non-use, so looking at whether technology is needed at all, as the first question being asked, instead of simply creating and developing these technologies. This doesn’t mean that AI cannot help or support those in need; it just means that we look further back before deploying and implementing technologies that could potentially cause harm, and treat mitigation not as a reactive measure but as a responsive one, by looking at these questions before deploying.
Ayça Dibekoğlu: Thank you very much, Robin. So now I’d like to give the floor to Louise Hooper, a human rights barrister with over 20 years of experience focused on human rights, human dignity and equality. Besides her consultancy work, Louise currently serves as an expert of the Council of Europe’s Committee on Artificial Intelligence, Equality and Non-Discrimination, which is abbreviated as GECCI-DADI. Louise, drawing on your legal expertise and your work in international standard setting, how do you see the role of law and policy frameworks in shaping responses to algorithmic discrimination? The floor is yours.
Louise Hooper: Thanks. Good morning, everybody. So the first thing that I’d like to talk about is what AI systems are and do in terms of being models and systems that capture patterns from data in a model. And data comes from real world processes, which means humans. And humans are inherently flawed beings. We are discriminatory by nature. And data that we get from our historical processes is very often dirty, in terms of we’re not all consistent about the way that we collect or input data into models or systems. At the moment, we, by which I mean generally older white men with political or economic power, are more and more reluctant to accept discrimination is bad or take steps to ensure equality exists in society. And you can see this in things like, for example, aggressive attempts to dismantle equality, diversity and inclusion programs. So all of this feeds into how we regulate, what we’re regulating and the social approach to law. And against this background, algorithmic discrimination itself is difficult to detect. So who’s being discriminated against? Was it a mistake? Was it by design? It’s difficult to prove. We can’t access data. We don’t understand the systems. We have black boxes, with nobody understanding what algorithms are doing; we don’t know how decisions are made, what influences them. And overarching all of this, access to justice is costly, it’s time-consuming and it can be very difficult for individual victims. In the context of AI and ADM, the opacity of systems, information and power asymmetry between the deployer and subject, the lack of capacity to monitor group effects or to compare yourself to another person, the inability to access this information and an absence of transparent, meaningful information can preclude proper assessment of discrimination or prevent legal action from being started. And it’s there where I think regulation comes in. We have two attempts that I’d like to just touch on. One is the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. And the other is the EU AI Act. Primarily under the framework convention, this is directed to and governs state actions, not the actions of companies. It places the responsibility on states to regulate, to govern providers and deployers of AI. It solely focuses on human rights, rule of law and democracy rather than other issues, and it’s not yet in force. Article 10 is the key provision in respect of equality and non-discrimination, which includes not just a negative obligation, but also, interestingly, a positive obligation for AI to be used to overcome inequality, which we may talk about in terms of techno-solutionism later on. It also has a whole series of effective procedural guarantees, safeguards and rights designed to enable people to litigate if necessary and requires effective oversight mechanisms. The EU AI Act, by contrast, is directed to and governs states, but also providers and deployers of AI. So it’s directly relevant to providers and deployers and can be directly relied on by individuals to protect and enforce their rights. It’s built on product safety principles rather than human rights law. There are some criticisms of the approach in determining what AI is risky before you determine what AI is risky, which I find quite complicated. And then it sets out a series of requirements for both providers and deployers to comply with.
Compared with some of the new European legislation on equality bodies, there are some really significant new powers for authorities responsible for protecting fundamental rights, in Article 77 of the AI Act and in the new directives on equality bodies. And for time reasons, I’m going to stop there.
Ayça Dibekoğlu: Thank you, Louise. And lastly, our final panelist is joining us online. Hi, good morning, Mher. I see you online. I would like to now give the floor to you, Mher Hakobyan, Amnesty International’s Advocacy Advisor on AI Regulation. Mher leads the advocacy work at the Algorithmic Accountability Lab of Amnesty Tech and brings experience from the European Disability Forum, Equinet and the Council of Europe. Mher, drawing from your advocacy work in the Algorithmic Accountability Lab and across these institutions, how can we build truly meaningful multi-stakeholder participation in shaping AI policy? And what does it look like in practice when it comes to tackling discrimination? The floor is yours.
Mher Hakobyan: Thank you. Thank you very much. I’m very happy to be here, as I mentioned in the coordination meeting we had last week. I started my professional journey at the Council of Europe in Armenia, so always happy to be back. So thank you for the invitation. I will dive right into the question, just maybe also a bit presenting the team that I’m in currently, which is one of the Amnesty International’s tech and Human Rights Program teams called the Algorithmic Accountability Lab. And the way we do our work, I think, is really fitting to the question that you asked, because we’re a multidisciplinary team that covers under the broad umbrella of the automated state. We do our work basically focusing on the AI use in the public sector, so by public authorities in the areas of like social protection, policing, migration, border surveillance, military use of AI, etc. And where, of course, when there’s public and private partnerships involved, we look at that as well, but mainly our focus is the use by public authorities. The way we do approach our work in terms of both research, advocacy, litigation work, media and communications work is focusing on collaborating with impacted communities, with other civil society organizations, and networks based in Brussels that represent different community organizations, so a lot of membership organizations that have members in different EU states, focusing on migration, focusing on disability rights, focusing on LGBTI rights. So it’s different networks. We try to connect to the membership on the ground. This is when we do advocacy, but in terms of research, when we do research in specific countries, such as in Serbia, in Denmark, also supported investigations in Sweden, in the Netherlands, these processes are always with collaboration with local organizations that represent different communities that are impacted by AI systems. For example, a research we did in Denmark that you might know about, which looked at welfare protection scheme use of a fraud detection algorithm that heavily discriminated and surveilled migrant communities, people with disabilities. We worked with disability rights organizations, with organizations that work on racial justice, on representing migrant communities. So I think this is really important in terms of also making sure that communities are equipped and are the ones leading the work to challenge a lot of the harms that the systems pose. Similarly, in litigation work, for example, in France, we are engaged in an initiative that is led by local or civil society, digital and human rights organizations, challenging the social security agencies, national family allowance fund, use of fraud detection algorithm, which is also allegedly discriminatory. So there is a court case filing that has called for the system to be stopped. So I think this collaborative approach with organizations that represent directly communities is very important. I think maybe when we also talk about redress and remedy measures, we need to also be mindful that we are not talking about it in an isolated manner, but we think of the system, the ecosystem that is around AI, starting from the conceptualization to the development to deployment. And in that sense, I think to ensure effective redress and remedy, we need to also ensure that the transparency and the public accountability measures are set in place. 
And even going back, I think what Robin said really resonates with me as well, is that sometimes we need to also think about whether a certain type of AI system is actually necessary and whether it’s the appropriate solution in the context, and then we need to draw clear red lines on things that shouldn’t be deployed. So I think in some sense, a ban could be the prevention in itself and the appropriate mitigation measure. So we don’t need to wait until the harm occurs and then think about now what is the redress and remedy process to mitigate that. I think I will stop here, but really looking forward to the continued discussion.
Ayça Dibekoğlu: Thank you, Mher. Sorry, at some point your screen got a bit smaller on our screens. Thank you to each of you for your interventions and powerful insights. And now it’s time to hear from the audience, from you. I’d like to hand over to Menno, who will guide us through a short series of Mentimeter questions. If you have your phones out, you can scan the QR code or enter the code on Menti.
Menno Ettema: Great. It always takes a moment before the screen comes up. Yes, to open things up, we want to launch a few questions. You can have a look at the, you can type in on your phone or laptop, the URL menti.com and you have the entry code. You’ll see that code on every screen that’s coming up. Or you can take a moment to use your phone to scan the QR code so you can enter. Don’t close the tab after the screen goes because we will use Mentimeter for the rest of the session. So then I go forward to the first question, just to go back to a point that was made by a few. The question we want to ask is, can you solve inequality with AI? Yes, it’s technically possible. Yes, it’s a good way to address inequality. No, it’s technically not possible. Or no, we must solve inequality differently. So what are your thoughts on this? I’ll give you a few minutes or a few seconds to complete. I see that 52 people registered, that’s lovely. And we’re seeing the answers coming in. So I’ll give two more seconds. Okay. So what we’re seeing at the moment is that a large part believes that AI cannot solve inequality and it must be solved differently. There are also those who think that it’s a good way to address inequality. Technically not possible is the least of the concern. And some nine people mentioned that technically it’s possible. But the largest part, nearly half, say that we cannot solve inequality through AI. Connected to that, we have our next question. What are the main barriers to effective AI regulation for tackling discrimination and bias? We got a few answers and you can… How do you call it? Rank. Rank, thank you, Ayça. Rank the various answers. You can also skip. So if you think one is the major issue, then you put that as first, and then the second or third reasons, because they could all be relevant. But you can also skip if you don’t think they are a concern to the effectiveness of AI regulation to tackle discrimination and bias. Thank you. It takes a little bit longer because you have to rank and move the things. Feel free to answer, but I will summarize what we see. The responses are quite equal: that no one agrees what discrimination is, so that’s a barrier to effective AI regulation. And it’s also already too little and too late, which is an interesting response and maybe that invites some further discussion. Also, there’s quite a strong assumption that AI is the answer. So, the question is if this is indeed the case. There’s some finger-pointing to big tech, and then there are some remarks that it has to be global to be effective. Okay, I’ll just squeeze in one more question. Referring a bit to the first speaker, do current approaches to AI take group-based discrimination seriously? So, this is a yes, no, maybe question. So, let’s go a bit quicker. Now, you can still vote, but I see a tendency towards no. Current approaches to AI do not take group-based discrimination seriously, though there are also quite a few maybes. So I think this is an interesting reflection point as well. There are a few more questions, but maybe we stop here and give the floor back to you, Ayça, for some reflections from the audience.
Ayça Dibekoğlu: Thank you, Menno. So we have more questions, as Menno mentioned, but we’ll take a step-by-step approach and have discussions in the middle before finishing all of the questions. And thank you for all your responses. It’s clear that there’s quite a lot of insight and experience in the room. I’m also quite surprised at some of the answers as well, and I’d like to discuss further. And let’s now open the floor for a broader discussion. We’d like to invite you and our panellists to respond, whether that’s building on what we have answered in the questions or picking up on something that hasn’t yet been fully explored. And please feel free to raise a hand to ask a question and also comment in the chat. And let’s use this time to dig in deeper. I’d like to actually first ask whether Equinet has any input on this. Here on the panel, we have Milla Vidina. Milla leads Equinet’s work on AI and algorithmic discrimination. I think the first three questions specifically relate quite heavily to inequality and some thought-provoking questions on this. So I’d be curious to hear your insights on this, Milla. Thank you.
Milla Vidina: Thank you for inviting us. So for those of you who wonder what Equinet stands for, it’s not equestrian and horses. It’s a network of national equality public authorities. And our network seeks to empower and bring together the collective voices of 47 such institutions from 35 European states. So equality bodies are anchored in EU law, and national law as well, but we go broader than that. So all Council of Europe member states, to my knowledge, have equality bodies. So why are we here? Equinet has been working on non-discrimination for six years. And your first question, can AI solve inequality, is one that we are trying to tackle head-on, as in literally working with software engineers and data scientists for the past year and a half in the context of technical standardization. Maybe you have heard that under the European Union AI Act, compliance would be based upon conformity with a set of technical requirements. Those technical requirements are embodied in technical standards. Those are the same standards that you have for a light bulb or for a toy. And then you have that CE, safe use marking that you see across Europe. Well, this is how high-risk AI systems under the AI Act would be certified safe. Now, the problem is that part of the AI Act, especially Annex 3 to the AI Act, includes, and I deliberately won’t use the official term here, what I would call human rights critical AI systems. So how would engineers know how to evaluate risks to human rights? And how do they certify a system as being safe? And mind you, they self-certify. This makes it even more problematic. So what we as equality bodies have been trying to do is answer this question, okay, can AI solve inequality, by engaging with industry in this technical standardization forum, which is extremely resource intensive, unrepresentative of our society, and has a high entry barrier in terms of how much time we had to invest. We got a small pot of public money from the UK, not even the European Union, and we’re dealing with EU legislation; that funding only lasted for a year and a half, and most of us for now are working pro bono. But in that project, basically, we worked with industry representatives, most of them data scientists, software engineers, and cybersecurity specialists, to see whether in those technical standards, a set of around 10 standards, we could embed equality safeguards. Can you solve the inequality question? For example, how do you select a fairness measure or a debiasing method that could more effectively tackle potential inequality aspects, if you have to assess risks when you design and develop systems? So most of this is about design and development. Some of this is deployment, but according to the AI Act, even the deployment kind of preparation is done prior to release on the market. So most of the conversation is what we anticipate, what we foresee as risks to equality. How can you foresee risks to equality? And could an engineer make this assessment? So we’ve been trying to see how to set up the human oversight mechanism. So each technical organization that has to certify a system, they have a designated person to assess those risks to human rights. Do they have the necessary competence? Do they have interdisciplinary teams? How do you include affected communities in that? Is there a way? Because we talked about participation and it’s all fine, but when you talk to engineers, where do you actually make it mandatory for them to include people?
Well, there is a concept of design engineering with software specialists, and there, there is consultation, you know, those entry points purely within the technical community. And then when you want to validate a product, you can, as a condition for validation, include stakeholder input. You know, in the product safety branch, there are such things as testing products and consulting with consumers. Only now we are not talking about consumers, we are talking about affected persons. So long story short, based on that experience, yeah, as expected, AI cannot solve inequality. What it could do, and this is why we want to hopefully find the financial means of support to continue to be engaged with technical standardization, is minimal prevention. So there are some technically feasible measures which you could implement in design and development that could give you documentation, logging, more transparency. You cannot completely resolve transparency and explainability. It’s a contested concept within the engineering community, but more transparency. So basically you’re getting leverage and information to contest and to enforce afterwards. So this is as a minimum, we hope, and maybe some prevention and mitigation, insofar as you want the designers, sorry, the developers, when they have documentation, to be able to say, okay, why did I use this fairness metric and not that fairness metric? And how was the fairness metric appropriate to the outcome, right? So you want that reason giving and justification and also tracing decision-making in the way a product is developed. So this is important for accountability, right? At what point, who took the decision to use that over that? Who was overseeing the human oversight mechanism? Who signed off product validation? Those things. And so in that way, it could give us some leverage, but ultimately, it’s costly and time-consuming policy changes and legal changes that would get us there. And I would end with an example. We were discussing recruitment algorithms. And there is literature, there is some evidence that human bias in recruitment is worse, whatever that means, than computer bias, right? So you could argue that there is more neutrality. Further to that, you can also tinker with the fairness metrics so that you actually use the algorithm to implement a kind of positive action. So we choose preferentially, let’s say, more women. But then the question becomes, who comes to that recruitment place? Who shows up for that particular position? Even if we have women, would it be single mothers from an immigrant background? So this question of broader access and participation, the structural and systemic dimension, is something that I think could only be addressed through costly and slow, as I said, policy and legal processes. And just to link to another question, I don’t think that we necessarily have a problem with group-based discrimination, but we have a problem with the systemic and societal dimension. Because from our experience with technical standardization, and you could see this in the AI Act, in the Annex that outlines the technical documentation that all companies that have to prove compliance with the AI Act have to produce, there you have groups. They specifically mention that you have to provide data on anticipated impact on persons and groups. So groups are okay, but that broader systemic effect is really what we are fighting for. And I’ll stop, it was a very long intervention.
Sorry, lots of material accumulated over the years.
Ayça Dibekoğlu: Thank you so much, Milla. Any questions from the audience? Yes, please go ahead.
Audience: Good morning and thank you for the very interesting presentation. I would like the speakers to elaborate a little bit more on intersectional discrimination.
Ayça Dibekoğlu: I’m very sorry, I’m just going to interrupt you because I forgot to mention, could you briefly say who you are as well before you ask your question? Thanks.
Audience: Of course. Hi, I am Linda Ardenghi, I’m a trainee here at the Council. And yes, thank you for the very interesting presentation. I wanted to ask the speakers to elaborate a little more on the concept of intersectional discrimination. In particular, if they think that from a technical and legal perspective this issue can be addressed because while intersectional discrimination by AI is being named in most of the documents concerning discrimination by AI and issues of accountability for algorithmic discrimination, it is not really addressed in that. It is, from what I noticed in my personal research, it is always named but never really focused on. So if you could elaborate more on this, thank you. Other questions?
Ayça Dibekoğlu: Any questions online perhaps? No? Okay. Is there one of you that would like to take up the floor first?
Louise Hooper: I think that finger was pointed at me. So I think intersectional discrimination is incredibly complicated. I think it’s not dealt with properly in law. There’s a lot of resistance in states to introduce legal prohibitions on intersectional discrimination before we even get to any problems in terms of detection and resolving intersectional discrimination in AI. So you start with we don’t understand, most people don’t properly understand what intersectional discrimination is. There’s great reluctance at the Court of Justice of the European Union to deal with it. There is an increasing recognition in the ECHR and Council of Europe documents about intersectional discrimination. It’s looked at in the context of the Istanbul Convention on Violence Against Women, for example. So that’s my starting point. I think the second point on that is that it becomes very difficult to quantify. And one of the big issues with AI and tech-based both solutions and problems is the focus on data and what you can prove and what you can’t prove. And I think starting from the perspective that statisticians know you can prove anything you like, depending on how you look at the data, you then have even bigger problems looking at something like intersectional discrimination. And I just want, within that context, to touch on the way that I think about group-based and individual discrimination and why it’s difficult, and I understand very difficult, to be able to produce a system that is fair for both individuals and for groups. And that’s because of the way that data is analyzed within a system. You can have something that’s producing the right decision on the evidence that it has, but that ultimately produces a right decision that has very significant consequential impact. So I think that also feeds into issues around intersectional discrimination.
Ayça Dibekoğlu: Thank you, Louise. Would any of the panellists like to respond, or would you like to…
Robin Aïsha Pocornie: I also think that it is important to note that intersectional discrimination as it’s defined right now is not seen as a root cause; the root cause of discrimination outside of the technical sphere is not considered. It’s seen as an add-on and not a bug that immediately needs fixing. So when we look at the development side… I as well develop AI, I develop algorithms. So not only do I have the community-based perspective of people that are impacted by it, but I also make them. And within our pipeline we do not consider intersectional discrimination or any types of human-based discrimination as an integral part of mitigation within the development pipeline. What is considered, for example, is privacy or data set gaps. That’s stuff that you consider from the get-go. How do you clean your data? How do you use it? Which type of model are you going to use and how are you going to validate it? That’s all based on technical model efficiency rather than the eventual or potential harm it could have on people, and that’s why intersectional discrimination, in the literature that you’ve read, is often times sort of cited as an add-on or something after the fact. And I think that’s important to recognize and acknowledge, because if we don’t focus on that as an integral part of the development process, and even take a step back and look at the people where this algorithm is going to be deployed, what they have to say about it and what alternatives are possible, until then it cannot be fixed. So AI, in my opinion, cannot fix inequality because it is inherently made by the inequalities that are already existing. So it cannot fix what it is based on.
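Robin’s point about the development pipeline can be made concrete with a small, purely hypothetical sketch (not something shown in the session) of what treating discrimination checks as an integral part of model validation could look like: error rates are reported for each intersection of two protected attributes rather than only in aggregate. The data, attribute names, and threshold below are invented for illustration.

```python
# Purely illustrative sketch (hypothetical data): reporting validation error rates
# per intersectional subgroup instead of only the aggregate figure, so disparities
# surface before deployment.
import pandas as pd

# Hypothetical validation results: labels, predictions and two protected attributes.
df = pd.DataFrame({
    "y_true":    [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
    "y_pred":    [1, 0, 0, 1, 0, 1, 1, 0, 0, 0],
    "gender":    ["f", "f", "f", "m", "m", "m", "f", "m", "f", "m"],
    "ethnicity": ["x", "y", "x", "y", "x", "y", "y", "x", "x", "y"],
})
df["error"] = (df["y_true"] != df["y_pred"]).astype(int)

overall = df["error"].mean()
print(f"overall error rate: {overall:.2f}")  # the aggregate number hides disparities

# Error rate and sample count for every gender x ethnicity intersection.
subgroups = df.groupby(["gender", "ethnicity"])["error"].agg(["mean", "count"])
print(subgroups)

# Hypothetical gate: flag subgroups whose error rate exceeds the overall rate
# by more than a chosen margin (here 10 percentage points).
MARGIN = 0.10
flagged = subgroups[subgroups["mean"] > overall + MARGIN]
print("subgroups needing review:")
print(flagged)
```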
Ayça Dibekoğlu: Thank you very much, Robin. I would like to give the floor back to Menno to present our next set of questions.
Menno Ettema: Thank you. We go back online. Yes, there you go. For those who closed their browser, you can still see at the top the hyperlink and the code that you need to use. So we go to the next question, which goes: most AI regulation focuses on harms after they happen. What would real prevention of discrimination look like in practice? This is again a ranking system. So you could put those that you think are most important on top and then order the rest. You can also skip items you think are not relevant. I see two responses came in and there we go. So mandatory impact assessments is one answer, involving impacted communities in design, assessing whether AI is needed at all, stronger powers for equality bodies, or other measures. And I’m particularly curious, or we are particularly curious, what other measures you think could be taken. 21, 23. Okay, I see still a few people responding, but it’s quite evenly split. So there’s a few answers that are popping up, but quite equally: assessing whether AI is needed at all, which I think the panelists also spoke about a few minutes ago. Stronger powers for equality bodies, I think this is for me a good response. Others, which I’m very curious about what they mean. And then the least amount of responses for involving impacted communities in design and mandatory impact assessments. And maybe that’s also worthwhile to reflect further on among the panelists, what these actually are and what this looks like. Okay, so others is the biggest, so curious what that is, and assessing whether AI is needed at all and equality bodies are the biggest responses. We have one more question, if I remember correctly. No, sorry, two more. What are the main barriers to effectively enforcing regulation? The panel already addressed a few concerns or a few things that are really needed. But what do you think? Lack of transparency, is that a barrier? Explainability, access to data and training sets, commercial secrecy, or money and funding? What are the main barriers to effectively enforcing regulation? Okay. So here we see a trend of quite equal responses. So there are equal concerns when it comes to lack of transparency, access to data and training sets, money and funding, money and funding for whom I might want to ask. Commercial secrecy and explainability are also mentioned. So they’re all quite equally raised as barriers to effectively enforcing regulation. We go to the next question. And this is an open question. So when we had other, I was curious what you had in mind. So this is an open question. How would you reduce AI-driven discrimination? What are your thoughts, insights, ideas? I invite you to keep it short. Don’t write an essay, but maybe have some bullet points, thoughts on what could be done that we could further discuss in the remainder of the session. So we get the first responses: proper use cases, reduce social discrimination, human rights, impact assessments, big tech, transparency reporting on detecting biases and improvements, equality bodies and multi-community mandates, it’s a designing process, mechanisms of effective accountability, ensuring that companies have skin in the game, non-discrimination by design, extra control, more equals.
Equal society that is informed, empathy and use of the tool, literacy in using the tool, integration of all steps in AI elaboration, checking and control, organizational accountability, challenging ideas that solutions lie in AI, feedback, accountability, mandatory impact assessments, consulting during the design, more funding to support consultation processes, proof of data sets, including affected communities, taking their viewpoints into account, impact assessments, collect diversity and equality data, inclusive data sets, community participation in design. So these are some of the responses that are coming in. Human rights based algorithmic design, etc. I leave this a little bit open, but I want to give the floor back to Ayça to continue the process.
Ayça Dibekoğlu: Thank you, Menno. I would like to ask: we have received quite a lot of responses to the word cloud. If you’d like to explain your answers further and bring up any points that you mentioned, please do. Okay. Yes, please go ahead.
Audience: Thank you very much. My name is Somaya. I’m with YouThink. Should I speak louder? So my name is Somaya, I’m with YouThink, if I can participate in the discussion. I think that we have to solve the problem of discrimination in real life first; then we can solve it in AI. As the expert explained, she writes code, and we cannot write good, fair code and algorithms if the discrimination is already there in what is happening in real life. For example, for recruitment, as you said before, there is a lot of discrimination in the process of recruitment, so how can we write good code that does not discriminate against communities or a group of people or a class? So I think we should work on real life so that we can then address this in the code and algorithms. Thank you very much.
Ayça Dibekoğlu: Thank you, Minda Moreira. I would also like to ask Mher if he would like to intervene at this point to further explore and respond to any of the questions.
Mher Hakobyan: Thank you. Actually, I wanted to answer the last question we had on intersectionality, but I failed to find the raise-hand function in time. If it's okay, I can maybe just go back to that question.
Ayça Dibekoğlu: Of course, please.
Mher Hakobyan: Thank you. Yeah, I just wanted to say that, in terms of our work, it's been important to try to expand the way we look at things and at the harms that AI can cause, because there is a lot of research and knowledge accumulated when it comes to, for example, racial discrimination and gender-based discrimination, but we often lack knowledge about discrimination based, for example, on disability or on socioeconomic grounds. At the lab, we've tried to push those boundaries for ourselves in some of the research we've done in the past years, and we've also tried to speak to communities such as disability rights communities and other rights communities. Sometimes it's intuitive for researchers to just go off the knowledge that exists, and we can often reinforce and advance the knowledge that is there, but we cannot know what we don't know, right? So sometimes, even if we don't see examples out there in the media or already documented, we need to push the boundaries of our thinking towards expanding how we see discrimination, and that can then also help address the gaps we have in intersectional discrimination. As for addressing it through advocacy, it has also been quite challenging for us to engage with a broader range of rights holders, because organizations that represent different communities often already have so many urgent issues their communities are facing. So when we speak of AI, they don't necessarily see the direct connection that technology could have to the different rights that are being violated. For example, again with disability rights, people face accessibility barriers, or there is a huge issue with the institutionalization of people with disabilities, so disability rights networks are sometimes not fully engaged in AI advocacy because they have issues that are, for them, much more urgent at this point. But there is also the issue of people feeling intimidated by the conversations around AI, and I think we need to do a lot of work to demystify how we speak about AI. Robin also mentioned expertise, who is considered to be an expert. In fora like this we often give priority to people who have technical and professionalized expertise, so to say, but we don't think of a person who has the lived experience as an expert. And I think these are limitations we need to address to be able to go broader and address intersectional discrimination more effectively.
Ayça Dibekoğlu: Thank you, Mher. Are there any questions from the audience? If not, I'd actually like to pose two further questions myself to the panel. My first question is to Robin, because it relates to some of the points of your intervention. We had a question on whether current approaches to AI take group-based discrimination seriously, and we explored the idea of involving impacted communities in the design process, but I wanted to ask: how do we secure the meaningful involvement of these communities? I think it's easier said than done, so how does it truly work in practice?
Robin Aïsha Pocornie: That's actually a really good question, because it's already being done. There are communities who have already created working groups to educate and inform larger regulatory bodies about who gets to decide what gets developed, deployed and implemented. Two good examples of this: the first is the Distributed AI Research Institute. They combine community-based work with evidence-based technological expertise, but always from a community-based expertise perspective. What that means is that usually a large regulatory body will go out and collaborate with communities, but the end decision stays with the larger regulatory body; in this case, they ensure that the community-based entities are actually the final decision-makers. The second is the Canadian initiative called the Indigenous Protocol and AI Working Group. These are indigenous people who create what we would call regulatory information from an indigenous perspective, as they are currently the group of people being harmed the most by AI. And I think it bears mentioning that this is still a very unpopular perspective, even within a large regulatory-body building like the one we are in today. It's a very underrepresented perspective because, as has been mentioned many times today, expertise is seen as a professional thing: you have to have some sort of technical or higher education. But lived experience needs to be seen as expert experience, and as an expert seat to have at the table.
Ayça Dibekoğlu: Thank you, Robin. My other question is to both Louise and Mher, actually, because you have both touched upon meaningful collaboration and multi-stakeholder cooperation in the process. With regard to how you would reduce AI-driven discrimination, since this is also one of the answers that stood out on the Mentimeter, I would like to ask about the role of equality bodies and other regulators. How do we create meaningful cooperation that makes a real difference, and how do we engage equality bodies and other regulators in the structural design?
Louise Hooper: I will address that, but I just want to add to what Robin said, because one of the problems with community participation, apart from the fact that there's no money to do it, is that even once you have community participation and have proven that something is causing a discriminatory effect, quite often you don't get an adequate solution from a court or regulator. What we saw in Canada was a similar issue to the sentencing case in the US: indigenous people in Canada are discriminated against by an algorithm in sentencing decisions. The response of the court was, we accept that the discrimination is there, but it's all right because the judge knows that and will make a different decision. It's just not enough. It's not acceptable. It doesn't work. Don't give false promises to communities: asking them to become involved in anti-discrimination work, taking on board their expertise and then ignoring it afterwards is not okay. That's the first thing I would say about community participation. In terms of equality bodies, adequate funding is needed, and within the context of the EU, I think there's a real need to give them proper and adequate recognition as regulatory bodies in the context of the AI Act. Sorry, I'm forgetting your question. How much further do you want me to go? I got carried away with being cross about community participation not being taken seriously enough.
Ayça Dibekoğlu: As you wish. It is mostly the role of other regulators, specifically the equality bodies, that I'm curious about. In fact, I'd also like to hear Milla's intervention on this as well, because you have also worked with us on our project, which we are now running with three countries and in which we work specifically with equality bodies in the EU sphere.
Louise Hooper: Thanks. So what we're doing with equality bodies, or rather what the Council of Europe is doing with equality bodies and has asked me to help with, is work on developing e-learning and online-offline courses to facilitate knowledge and awareness, which can then be rolled out to help equality bodies and other government institutions be aware of AI and equality issues, particularly when commissioning and using AI products. There are also issues to do with the ways in which equality bodies can raise awareness of AI and discrimination, the tools available to them to investigate on behalf of both individuals and groups, and their ability to bring proceedings. All of those things collectively, as a package, can really assist: using powers that individuals don't have to get access to documentation and information, testing systems, using regulatory sandboxes, and then also bringing proceedings if necessary.
Ayça Dibekoğlu: Thank you very much. Milla, would you also like to intervene briefly? Thank you.
Milla Vidina: I don’t know where to start. I mean, well maybe let’s start with the obvious points. Equality bodies do not exist for the sake of equality bodies. It’s a very technocratic language, equality bodies. It’s an access to justice mechanism. That’s the way I, a public access to justice mechanism. So what would set apart an equality body from an ombud institution or a human rights center or human rights institute? In most of the cases, what sets them apart is all equality bodies handle cases directly. And most equality bodies work with both private and public sector. So being on the front line handling cases, so being in that way an immediate, they also, some of our members decide on cases with legally binding decisions. The majority of our members have litigation powers, not all of our members, but many have investigation powers. So because of, for those reasons, because of the specificity of those powers, we think of them as access to justice mechanism. So now that said, what do we need to facilitate that access to justice, to make it actually more accessible and more effective, because those are right. Accessibility barriers and effectiveness. Well, we, the funding was already mentioned, funding also for the sake of potentially, some of our members, for example, give fund. They do leave the decision-making power with whoever brings the complaint, and they fund and give free legal advice, so that the party goes and litigates a case. So in this way, they do not influence, but they empower somebody to bring a case. But beyond funding, two points I wanted to make. Second is, one is equality bodies operate in ecosystem, and equality law is only one part of the puzzle. We also have data protection law, we also have consumer protection law, we also have competition law, if you’re talking about big tech, and all of those have a stake in algorithmic discrimination. And equality bodies, we cannot, I’ve noticed for our staff members, we’ve been encouraging them to, educated them on data protection law, and I know, for example, the French Defender of Rights works with their data protection authority, but not only, also in the Netherlands at the Chi Institute. So what states could do is setting up, formalizing, institutionalizing a platform where all those regulators sit together, and there is a kind of cross-pollination of the different types of expertise, because they do not talk to each other. And equality law alone can only do that much, because we do not, let’s be honest, we do not have the sanctions the data protection law has. We do not have the enforcement powers, or like for example even competition law, right? So we need to work together. Then we need reform in the law, but this was already made so that equality bodies are given more power, but outside of equality bodies, also burden of proof. And this is something that I know the scholar Rafael Senides has worked on that, basically a presumption of algorithmic, so that you make it easier to establish prima facie discrimination, shift the burden of proof whenever there is an algorithmic system deployed. Changing the sanctions under non-discrimination law, and perhaps equally importantly, moving away from a grounds-based approach, so having to prove you discriminate only on gender, only on disability, or combined gender and disability, but that you have to prove that each one of them, there was a discrimination, and moving beyond that to a truly intersectional approach. 
In some legislation, like in Belgium, intersectionality is in non-discrimination law. There is a new directive, the Pay Transparency Directive of the European Union, where intersectionality is in the operative part of the directive, so it is already a legal concept with binding effect. In the equality bodies directives, intersectionality is explicitly mentioned in relation to the prevention and promotion powers, so member states have an obligation, when they enhance the prevention and promotion powers of equality bodies, to also consider intersectionality. So we're starting to have the tools, but we need more of this to empower our members. And one last thing: setting up a facility. We were exposed to one in France, which inspired me, and to my knowledge it is the only public facility that provides that cross-government service of investigating and testing algorithms. It's called PEREN, I don't know how to translate it, I see a colleague here, but the point is this: if each government could set up a public facility that assists all regulators, equality bodies included, and, if you ask me, not only regulators but academics and civil society as well, to actually do the technical testing and the technical investigation part, that would help, because we should not be expected to have technical expertise ourselves. Our unique added value comes either from lived experience or from human rights, legal and policy knowledge. But if states set up such a facility for us, I think this would allow a more coordinated, consistent and larger-scale approach. So that's it.
Ayça Dibekoğlu: Thank you so much, Milla. Sorry, somebody's alarm went off in the room, which I think signals that our session is over. I would just like to take one moment to see if Mher would like to answer my final question, very briefly, and then we will wrap up by sharing the messages from the session.
Mher Hakobyan: Thank you. Sorry, it takes a few seconds to unmute. Thanks for that. I will be very short. I would just like to say that the support of equality bodies, and not only equality bodies but also the NHRIs, the DPAs and the European Data Protection Supervisor, in the AI process has been greatly appreciated by civil society organizations, because they add to the legitimacy of the calls that we often make. We live in a world where, sadly, civil society sometimes seems a bit too radical or ambitious, and when we have public bodies supporting the very strong calls that we make, in terms of bans and in terms of sufficient human rights safeguards, I think it really adds to the effectiveness of the work that we do. So, yeah, it's just an opportunity to thank Equinet and all the other organizations that work with us.
Ayça Dibekoğlu: Thank you. Thank you, Mher. I would just like to share the last Mentimeter question, which will be running while we hear the messages from this session.
Menno Ettema: Because we are curious what you might want to take forward after this session. So for us, this is a kind of feedback, to see if we inspired you to take action. We'll leave this open, and I give the floor to my colleague for the messages from the session.
Minda Moreira: Okay. Good morning. I'm just going to try to share my screen. Yes, so we'll stop sharing the Mentimeter, but it stays open, so you can fill in your answers while my screen is shared. And this is what I could capture from the session. It was really rich, but we can only have three main takeaways, which I will read, and we expect general consensus; it's more about the message than about the specific content or exact wording, which we can still work out afterwards. We still have one or two weeks in which we can give it some tweaks. I decided to divide it into three parts: prevention, mitigation and redress. On prevention: the session agreed that more needs to be done to address group-based discrimination and inequality, and that it may not be solved with AI alone. It may be necessary to assess whether AI is really needed or whether non-technical solutions may be more effective. Where AI is needed, transparency and accountability are crucial. Bias detection and mandatory impact assessments must be used, as well as involving and consulting impacted communities in AI design and development processes, combined with stronger powers for equality bodies and industry best practices. Does that make sense? Okay, so the second one is mitigation. Algorithmic discrimination is difficult to detect and to prove, and those affected find it difficult to access justice. When it comes to intersectional discrimination, it is even more difficult, not only because of the resistance of states and international courts to deal with it, but also because of the challenge of working effectively with affected communities. There are major barriers to effective AI regulation for tackling discrimination and bias, and session participants agreed that these include the lack of transparency and accountability, limited access to data and training sets, as well as commercial secrecy and funding. And finally, on redress: the lack of adequate funding, particularly for equality bodies, is a main barrier to accessing justice. Some steps are being taken by advocacy groups to collaborate with regulatory bodies, but a multi-stakeholder approach at a global level, involving civil society, the private sector, equality bodies and communities, is vital for meaningful cooperation and to fully address discrimination in all its forms, particularly intersectional discrimination. Is everyone okay with that? Would anyone like to include something important that is not mentioned?
Ayça Dibekoğlu: Please object now, or at any point until the 25th of May, by when we can finalize our messages. Okay, I see no objections. Thank you. Thank you, Minda. Sorry that we're quite over time. Thank you to both our panelists and the audience for being here and joining this discussion. I think we have a break now, so I hope to talk more to you soon, and have a great rest of your conference.
Robin Aïsha Pocornie
Speech speed
125 words per minute
Speech length
926 words
Speech time
441 seconds
AI cannot solve inequality alone as it’s shaped by existing discriminatory data and power structures
Explanation
AI systems are inherently built upon existing inequalities and discriminatory structures, making it impossible for them to fix the very problems they are based on. The technology reflects and amplifies the biases present in the data and assumptions used in its design and deployment.
Evidence
Example of data-driven facial detection software that cannot recognize people of certain skin colors, demonstrating how AI systems perpetuate existing discriminatory patterns
Major discussion point
AI’s Role in Addressing Inequality and Discrimination
Topics
Human rights principles | Digital identities
Agreed with
– Louise Hooper
– Audience
Agreed on
AI cannot solve inequality alone and reflects existing societal discrimination
Disagreed with
– Louise Hooper
– Milla Vidina
Disagreed on
Role of AI in addressing inequality
Complex social problems cannot be fixed by technology alone, especially when these problems are created by the technology
Explanation
This anti-techno-solutionist approach argues that technological solutions cannot address complex social issues, particularly when the technology itself is the source of the problem. It emphasizes that community-based approaches are necessary rather than purely technical fixes.
Evidence
Facial detection software example that fails to recognize certain skin colors, illustrating how technical fixes alone cannot solve discrimination embedded in the technology
Major discussion point
Prevention and Mitigation Strategies
Topics
Human rights principles | Interdisciplinary approaches
Community expertise should be prioritized over technological solutions, with affected communities leading decision-making processes
Explanation
The approach emphasizes that communities being harmed by AI systems are the true experts and should have decision-making power rather than just being consulted. This represents a shift from outside-in problem-solving to community-led solutions.
Evidence
Examples of the Distributed AI Research Institute and the Indigenous Protocol and AI Working Group in Canada, where communities maintain decision-making power rather than just advisory roles
Major discussion point
Community Participation and Expertise
Topics
Human rights principles | Inclusive finance | Capacity development
Agreed with
– Louise Hooper
– Mher Hakobyan
Agreed on
Community expertise and participation must be meaningful with decision-making power
The first question should be whether technology is needed at all, rather than automatically developing AI solutions
Explanation
Before deploying AI systems, organizations should assess whether technological solutions are actually necessary or if non-technical alternatives might be more appropriate. This preventive approach looks at technological non-use as a viable option.
Evidence
Example of impact-driven communities in Singapore that prioritize assessing whether technology is needed before development and deployment
Major discussion point
Prevention and Mitigation Strategies
Topics
Human rights principles | Interdisciplinary approaches
Disagreed with
– Louise Hooper
– Mher Hakobyan
Disagreed on
Approach to preventing AI discrimination
Intersectional discrimination is treated as an add-on rather than an integral part of the development process
Explanation
In AI development pipelines, intersectional discrimination is not considered from the beginning as a core issue that needs addressing. Instead, it’s treated as an afterthought or additional consideration, unlike technical aspects like privacy or data gaps which are integral to the development process.
Evidence
Personal experience as an AI developer showing that development pipelines consider privacy and data set gaps from the start, but human-based discrimination is only considered after the fact
Major discussion point
Intersectional Discrimination Challenges
Topics
Human rights principles | Gender rights online | Rights of persons with disabilities
Agreed with
– Louise Hooper
– Mher Hakobyan
Agreed on
Intersectional discrimination is inadequately addressed in current systems
Lived experience should be recognized as expert knowledge equal to technical expertise
Explanation
There’s a need to shift the understanding of expertise to include community-based knowledge and real-lived experiences alongside traditional technical and academic qualifications. This challenges the current hierarchy that privileges formal education over experiential knowledge.
Evidence
Examples of the Distributed AI Research Institute and Indigenous Protocol and AI Working Group where community-based expertise is given equal weight to technical knowledge
Major discussion point
Community Participation and Expertise
Topics
Human rights principles | Interdisciplinary approaches | Capacity development
Louise Hooper
Speech speed
154 words per minute
Speech length
1413 words
Speech time
548 seconds
AI systems capture patterns from inherently flawed human data, making them discriminatory by nature
Explanation
Since AI systems learn from data generated by human processes, and humans are inherently discriminatory beings, the resulting AI systems inevitably reflect these biases. The data used to train these systems is often ‘dirty’ due to inconsistent collection and input methods.
Evidence
General observation about human nature and data collection processes, noting that older white men with political or economic power are increasingly reluctant to address discrimination
Major discussion point
AI’s Role in Addressing Inequality and Discrimination
Topics
Human rights principles | Privacy and data protection
Agreed with
– Robin Aïsha Pocornie
– Audience
Agreed on
AI cannot solve inequality alone and reflects existing societal discrimination
Disagreed with
– Robin Aïsha Pocornie
– Milla Vidina
Disagreed on
Role of AI in addressing inequality
Current regulation focuses on product safety principles rather than human rights law, creating gaps in protection
Explanation
The EU AI Act is built on product safety principles similar to those used for consumer goods, rather than being grounded in human rights law. This approach may not adequately address the human rights implications of AI systems.
Evidence
Comparison between the EU AI Act (product safety approach) and the Council of Europe Framework Convention (human rights approach)
Major discussion point
Legal and Regulatory Frameworks
Topics
Human rights principles | Consumer protection | Legal and regulatory
Disagreed with
– Robin Aïsha Pocornie
– Mher Hakobyan
Disagreed on
Approach to preventing AI discrimination
The EU AI Act and Council of Europe Framework Convention provide different approaches but both have limitations
Explanation
The two regulatory frameworks take different approaches – the EU AI Act governs both states and private actors with product safety principles, while the Council of Europe Framework focuses on state obligations and human rights. Both have specific strengths and weaknesses in addressing AI discrimination.
Evidence
Detailed comparison of the two frameworks, noting that the Council of Europe Framework has Article 10 on equality and the EU AI Act has new powers for equality bodies
Major discussion point
Legal and Regulatory Frameworks
Topics
Human rights principles | Legal and regulatory | Jurisdiction
Equality bodies need stronger powers and adequate funding to effectively regulate AI discrimination
Explanation
Current equality bodies lack sufficient resources and regulatory powers to effectively address AI discrimination. They need enhanced capabilities to investigate, access information, and bring proceedings against discriminatory AI systems.
Evidence
Reference to new European legislation on equality bodies and to Article 77 of the EU AI Act, which provides new powers for authorities protecting fundamental rights
Major discussion point
Legal and Regulatory Frameworks
Topics
Human rights principles | Legal and regulatory | Alternative dispute resolution
Intersectional discrimination is incredibly complicated and not properly dealt with in law
Explanation
Legal systems struggle to address intersectional discrimination effectively, with most people not properly understanding what it entails. There’s significant resistance from courts and states to recognize and prohibit intersectional discrimination.
Evidence
Examples from the Court of Justice of the European Union’s reluctance to deal with intersectional discrimination, contrasted with some recognition in ECHR and Council of Europe documents like the Istanbul Convention
Major discussion point
Intersectional Discrimination Challenges
Topics
Human rights principles | Gender rights online | Legal and regulatory
Agreed with
– Robin Aïsha Pocornie
– Mher Hakobyan
Agreed on
Intersectional discrimination is inadequately addressed in current systems
There’s great reluctance from states and courts to introduce legal prohibitions on intersectional discrimination
Explanation
Before even addressing AI-related intersectional discrimination, there’s fundamental resistance at the legal and political level to recognizing intersectional discrimination as a distinct legal concept requiring specific protections.
Evidence
Contrast between the Court of Justice of the European Union’s reluctance and some progress in ECHR and Council of Europe documents
Major discussion point
Intersectional Discrimination Challenges
Topics
Human rights principles | Legal and regulatory | Jurisdiction
Algorithmic discrimination is difficult to detect, prove, and access justice for due to opacity and information asymmetries
Explanation
The black box nature of AI systems makes it extremely challenging to identify discrimination, understand how decisions are made, or gather evidence for legal proceedings. There are significant power imbalances between those deploying AI systems and those affected by them.
Evidence
Description of black box algorithms, lack of access to data, inability to understand decision-making processes, and the costly, time-consuming nature of legal action
Major discussion point
Access to Justice and Enforcement Barriers
Topics
Human rights principles | Privacy and data protection | Alternative dispute resolution
Meaningful community participation requires adequate funding and decision-making power remaining with communities
Explanation
True community participation goes beyond consultation to ensuring communities have actual decision-making authority and the resources needed to participate effectively. Without proper funding and real power, community participation becomes tokenistic.
Evidence
Canadian case example where indigenous people faced algorithmic discrimination in sentencing, but the court’s response was inadequate – accepting discrimination exists but allowing it to continue with judicial awareness
Major discussion point
Community Participation and Expertise
Topics
Human rights principles | Capacity development | Inclusive finance
Agreed with
– Robin Aïsha Pocornie
– Mher Hakobyan
Agreed on
Community expertise and participation must be meaningful with decision-making power
Even when discrimination is proven, courts often provide inadequate solutions that don’t address the underlying problems
Explanation
Legal systems may acknowledge that algorithmic discrimination exists but fail to provide meaningful remedies. Courts sometimes accept discriminatory outcomes as acceptable if there’s supposed human oversight, which doesn’t actually solve the problem.
Evidence
Canadian sentencing algorithm case where the court accepted discrimination against indigenous people but deemed it acceptable because judges would supposedly account for this bias
Major discussion point
Access to Justice and Enforcement Barriers
Topics
Human rights principles | Alternative dispute resolution | Legal and regulatory
Milla Vidina
Speech speed
164 words per minute
Speech length
2057 words
Speech time
750 seconds
AI could potentially do minimal prevention through better documentation and transparency, but cannot completely resolve inequality
Explanation
Based on work with technical standardization, AI systems can implement some preventive measures like better documentation, logging, and transparency requirements. However, these measures only provide leverage for accountability and cannot solve underlying structural inequalities.
Evidence
Experience working with engineers and data scientists in technical standardization forums, developing standards for the EU AI Act compliance, including work on fairness metrics and human oversight mechanisms
Major discussion point
AI’s Role in Addressing Inequality and Discrimination
Topics
Human rights principles | Digital standards | Legal and regulatory
Disagreed with
– Robin Aïsha Pocornie
– Louise Hooper
Disagreed on
Role of AI in addressing inequality
Technical standards can embed some equality safeguards during design and development phases
Explanation
Through technical standardization processes, it’s possible to require developers to document their decision-making around fairness metrics, justify their choices, and implement human oversight mechanisms. This creates accountability trails and some preventive measures.
Evidence
Specific work on EU AI Act technical standards, including requirements for documenting fairness metrics selection, human oversight mechanisms, and stakeholder consultation processes similar to consumer product testing
Major discussion point
Prevention and Mitigation Strategies
Topics
Digital standards | Human rights principles | Legal and regulatory
Cross-regulatory cooperation between equality bodies, data protection authorities, and other regulators is essential
Explanation
Equality bodies cannot address algorithmic discrimination alone and need to work with data protection authorities, consumer protection agencies, and competition law enforcers. Each regulatory domain has different powers and sanctions that complement each other.
Evidence
Examples of collaboration between French Defender of Rights and data protection authority, and the Netherlands Chi Institute, showing how different regulatory powers can be combined
Major discussion point
Legal and Regulatory Frameworks
Topics
Human rights principles | Privacy and data protection | Consumer protection
Agreed with
– Mher Hakobyan
– Minda Moreira
Agreed on
Multi-stakeholder collaboration and cross-regulatory cooperation are essential
Legal reforms are needed including burden of proof changes and moving beyond grounds-based discrimination approaches
Explanation
Current legal frameworks require proving discrimination on specific grounds individually, which doesn’t address intersectional discrimination effectively. Legal reforms should include presumptions of algorithmic discrimination and truly intersectional approaches to discrimination law.
Evidence
Examples from Belgium where intersectionality is in non-discrimination law, EU Pay Transparency Directive including intersectionality, and equality bodies directives mentioning intersectionality in prevention powers
Major discussion point
Legal and Regulatory Frameworks
Topics
Human rights principles | Legal and regulatory | Gender rights online
Equality bodies serve as public access to justice mechanisms but need more resources and powers
Explanation
Equality bodies are distinguished from other institutions by their direct case-handling role and work with both private and public sectors. They need adequate funding and enhanced powers to effectively address algorithmic discrimination as an access to justice mechanism.
Evidence
Description of equality bodies’ unique powers including legally binding decisions, litigation powers, investigation powers, and funding for legal advice while leaving decision-making with complainants
Major discussion point
Access to Justice and Enforcement Barriers
Topics
Human rights principles | Alternative dispute resolution | Legal and regulatory
States should establish public facilities to assist all regulators with technical testing and investigation of algorithms
Explanation
Following the French model, states should create public facilities that provide technical expertise for algorithm testing and investigation to support all regulators, academics, and civil society. This would enable more coordinated and effective responses to algorithmic discrimination.
Evidence
Reference to France’s PEREN facility as the only known public facility providing intergovernmental algorithm investigation services
Major discussion point
Multi-stakeholder Collaboration
Topics
Human rights principles | Digital standards | Capacity development
Mher Hakobyan
Speech speed
143 words per minute
Speech length
1368 words
Speech time
571 seconds
Sometimes bans could be the appropriate prevention and mitigation measure rather than waiting for harm to occur
Explanation
Rather than developing complex redress mechanisms after AI systems cause harm, it may be more effective to prohibit certain types of AI systems entirely. This proactive approach prevents harm from occurring in the first place by drawing clear red lines around unacceptable AI applications.
Major discussion point
Prevention and Mitigation Strategies
Topics
Human rights principles | Legal and regulatory
Disagreed with
– Robin Aïsha Pocornie
– Louise Hooper
Disagreed on
Approach to preventing AI discrimination
Collaborative approaches with local organizations representing impacted communities are essential for effective advocacy
Explanation
Effective advocacy requires working directly with local civil society organizations that represent different communities impacted by AI systems. This collaborative approach ensures that communities are equipped to lead challenges against harmful AI systems rather than having solutions imposed from outside.
Evidence
Examples of research collaborations in Denmark, Serbia, Sweden, and Netherlands with disability rights organizations, racial justice groups, and migrant community representatives, plus litigation work in France with local digital rights organizations
Major discussion point
Multi-stakeholder Collaboration
Topics
Human rights principles | Rights of persons with disabilities | Capacity development
Agreed with
– Robin Aïsha Pocornie
– Louise Hooper
Agreed on
Community expertise and participation must be meaningful with decision-making power
Research often lacks knowledge about discrimination based on disability and socioeconomic grounds beyond racial and gender discrimination
Explanation
While there’s substantial research on racial and gender discrimination in AI, there’s insufficient knowledge about how AI systems discriminate against people with disabilities or based on socioeconomic status. This knowledge gap needs to be addressed to understand the full scope of AI discrimination.
Evidence
Experience from the Algorithmic Accountability Lab trying to expand research beyond well-documented areas to include disability rights and socioeconomic discrimination
Major discussion point
Intersectional Discrimination Challenges
Topics
Rights of persons with disabilities | Human rights principles | Inclusive finance
Agreed with
– Robin Aïsha Pocornie
– Louise Hooper
Agreed on
Intersectional discrimination is inadequately addressed in current systems
Communities often face barriers engaging with AI discussions due to more urgent immediate issues and intimidation by technical language
Explanation
Organizations representing affected communities often have more pressing immediate concerns and may not see the direct connection between AI and rights violations. Additionally, the technical nature of AI discussions can be intimidating and exclude community participation.
Evidence
Example of disability rights organizations being more focused on accessibility barriers and institutionalization issues, making AI advocacy seem less urgent or relevant
Major discussion point
Community Participation and Expertise
Topics
Rights of persons with disabilities | Human rights principles | Interdisciplinary approaches
Public bodies supporting civil society calls adds legitimacy and effectiveness to advocacy work
Explanation
When equality bodies, data protection authorities, and other public institutions support the same strong positions as civil society organizations, it enhances the credibility and impact of advocacy efforts. This support helps counter perceptions that civil society demands are too radical or ambitious.
Evidence
Acknowledgment of support from equality bodies, NHRIs, DPAs, and European Data Protection Supervisors in AI advocacy processes
Major discussion point
Multi-stakeholder Collaboration
Topics
Human rights principles | Legal and regulatory | Privacy and data protection
Agreed with
– Milla Vidina
– Minda Moreira
Agreed on
Multi-stakeholder collaboration and cross-regulatory cooperation are essential
Menno Ettema
Speech speed
133 words per minute
Speech length
1341 words
Speech time
600 seconds
Mandatory impact assessments and involving impacted communities in design are crucial prevention measures
Explanation
Survey results showed that participants identified mandatory impact assessments and community involvement in AI system design as key prevention strategies. However, these received fewer responses than other measures, suggesting they may be undervalued despite their importance.
Evidence
Mentimeter poll results showing various prevention measures ranked by participants, with mandatory impact assessments and community involvement receiving relatively fewer responses
Major discussion point
Prevention and Mitigation Strategies
Topics
Human rights principles | Interdisciplinary approaches | Capacity development
Main barriers include lack of transparency, access to data, commercial secrecy, and inadequate funding
Explanation
Survey participants identified multiple barriers to effective AI regulation enforcement, with transparency issues, data access problems, commercial secrecy protections, and funding constraints all receiving roughly equal concern as significant obstacles.
Evidence
Mentimeter poll results showing equal distribution of concerns across transparency, data access, commercial secrecy, and funding issues
Major discussion point
Access to Justice and Enforcement Barriers
Topics
Privacy and data protection | Human rights principles | Legal and regulatory
Audience
Speech speed
144 words per minute
Speech length
333 words
Speech time
138 seconds
Real-world discrimination must be solved first before AI can be fair, as algorithms reflect existing societal biases
Explanation
A participant named Somaya argued that discrimination in AI systems stems from discrimination that already exists in real life, such as in recruitment processes. Therefore, addressing societal discrimination is a prerequisite for creating fair AI systems.
Evidence
Example of recruitment discrimination where existing biased hiring practices would be reflected in any AI system trained on that data
Major discussion point
AI’s Role in Addressing Inequality and Discrimination
Topics
Human rights principles | Future of work | Interdisciplinary approaches
Agreed with
– Robin Aïsha Pocornie
– Louise Hooper
Agreed on
AI cannot solve inequality alone and reflects existing societal discrimination
Minda Moreira
Speech speed
137 words per minute
Speech length
491 words
Speech time
214 seconds
Multi-stakeholder approaches at global level involving all relevant actors are vital for meaningful cooperation
Explanation
The session concluded that effective responses to AI discrimination require collaboration between civil society, private sector, equality bodies, and communities at a global scale. This comprehensive approach is necessary to fully address discrimination in all its forms, particularly intersectional discrimination.
Evidence
Summary of session discussions and participant responses indicating consensus on the need for broad-based collaboration
Major discussion point
Multi-stakeholder Collaboration
Topics
Human rights principles | Interdisciplinary approaches | Capacity development
Agreed with
– Milla Vidina
– Mher Hakobyan
Agreed on
Multi-stakeholder collaboration and cross-regulatory cooperation are essential
Ayça Dibekoğlu
Speech speed
158 words per minute
Speech length
1483 words
Speech time
560 seconds
AI systems are increasingly used in critical areas like law enforcement, welfare, employment, and health, raising urgent concerns about discrimination and equality
Explanation
AI is being deployed across essential public services and sectors that directly impact people’s lives and rights. Despite being presented as neutral or objective, AI systems are actually shaped by discriminatory data, assumptions, and power structures embedded in their design and use.
Evidence
Examples of AI use in law enforcement, welfare systems, employment decisions, and healthcare
Major discussion point
AI’s Role in Addressing Inequality and Discrimination
Topics
Human rights principles | Future of work | Privacy and data protection
Moving beyond surface-level discussions requires exploring what prevention, mitigation, and meaningful redress should look like in practice
Explanation
The session aims to push beyond theoretical discussions to examine practical implementation of anti-discrimination measures in AI systems. This includes determining who gets to shape outcomes and how to make prevention and redress mechanisms effective in real-world applications.
Evidence
Session structure designed around prevention, mitigation, and redress with interactive polls and expert panel discussions
Major discussion point
Prevention and Mitigation Strategies
Topics
Human rights principles | Interdisciplinary approaches | Legal and regulatory
Meaningful dialogue requires active participation from all stakeholders, not just passive listening to expert panels
Explanation
The session is designed as a conversation rather than a traditional panel presentation, emphasizing the importance of audience participation through interactive tools and questions. This approach recognizes that many audience members are also experts actively working in the field of AI and discrimination.
Evidence
Use of Mentimeter for interactive polls and questions, explicit invitation for audience contributions and elaboration on answers
Major discussion point
Multi-stakeholder Collaboration
Topics
Human rights principles | Interdisciplinary approaches | Capacity development
Agreements
Agreement points
AI cannot solve inequality alone and reflects existing societal discrimination
Speakers
– Robin Aïsha Pocornie
– Louise Hooper
– Audience
Arguments
AI cannot solve inequality alone as it’s shaped by existing discriminatory data and power structures
AI systems capture patterns from inherently flawed human data, making them discriminatory by nature
Real-world discrimination must be solved first before AI can be fair, as algorithms reflect existing societal biases
Summary
All speakers agree that AI systems inherently reflect and amplify existing societal biases and discrimination, making it impossible for AI alone to solve inequality. They emphasize that underlying societal discrimination must be addressed first.
Topics
Human rights principles | Interdisciplinary approaches
Community expertise and participation must be meaningful with decision-making power
Speakers
– Robin Aïsha Pocornie
– Louise Hooper
– Mher Hakobyan
Arguments
Community expertise should be prioritized over technological solutions, with affected communities leading decision-making processes
Meaningful community participation requires adequate funding and decision-making power remaining with communities
Collaborative approaches with local organizations representing impacted communities are essential for effective advocacy
Summary
Speakers unanimously agree that community participation must go beyond consultation to include actual decision-making power and adequate resources. They emphasize that affected communities are the true experts and should lead rather than merely advise.
Topics
Human rights principles | Capacity development | Inclusive finance
Intersectional discrimination is inadequately addressed in current systems
Speakers
– Robin Aïsha Pocornie
– Louise Hooper
– Mher Hakobyan
Arguments
Intersectional discrimination is treated as an add-on rather than an integral part of the development process
Intersectional discrimination is incredibly complicated and not properly dealt with in law
Research often lacks knowledge about discrimination based on disability and socioeconomic grounds beyond racial and gender discrimination
Summary
All speakers acknowledge that intersectional discrimination is poorly understood and inadequately addressed in both technical development and legal frameworks, requiring more comprehensive approaches.
Topics
Human rights principles | Rights of persons with disabilities | Gender rights online
Multi-stakeholder collaboration and cross-regulatory cooperation are essential
Speakers
– Milla Vidina
– Mher Hakobyan
– Minda Moreira
Arguments
Cross-regulatory cooperation between equality bodies, data protection authorities, and other regulators is essential
Public bodies supporting civil society calls adds legitimacy and effectiveness to advocacy work
Multi-stakeholder approaches at global level involving all relevant actors are vital for meaningful cooperation
Summary
Speakers agree that effective responses to AI discrimination require collaboration across different regulatory bodies, civil society organizations, and other stakeholders at multiple levels.
Topics
Human rights principles | Legal and regulatory | Privacy and data protection
Similar viewpoints
Both speakers advocate for questioning the necessity of AI deployment and considering prohibition as a valid prevention strategy rather than assuming AI solutions are always needed.
Speakers
– Robin Aïsha Pocornie
– Mher Hakobyan
Arguments
The first question should be whether technology is needed at all, rather than automatically developing AI solutions
Sometimes bans could be the appropriate prevention and mitigation measure rather than waiting for harm to occur
Topics
Human rights principles | Legal and regulatory
Both speakers emphasize that equality bodies require enhanced powers and adequate funding to effectively address algorithmic discrimination as access to justice mechanisms.
Speakers
– Louise Hooper
– Milla Vidina
Arguments
Equality bodies need stronger powers and adequate funding to effectively regulate AI discrimination
Equality bodies serve as public access to justice mechanisms but need more resources and powers
Topics
Human rights principles | Legal and regulatory | Alternative dispute resolution
Both speakers recognize the need to value lived experience as expertise and address barriers that prevent meaningful community participation in AI discussions.
Speakers
– Robin Aïsha Pocornie
– Mher Hakobyan
Arguments
Lived experience should be recognized as expert knowledge equal to technical expertise
Communities often face barriers engaging with AI discussions due to more urgent immediate issues and intimidation by technical language
Topics
Human rights principles | Interdisciplinary approaches | Capacity development
Unexpected consensus
Technical standards can provide some prevention measures despite AI’s limitations
Speakers
– Milla Vidina
– Robin Aïsha Pocornie
Arguments
AI could potentially do minimal prevention through better documentation and transparency, but cannot completely resolve inequality
Complex social problems cannot be fixed by technology alone, especially when these problems are created by the technology
Explanation
Despite strong anti-techno-solutionist positions, there’s unexpected agreement that technical measures can provide some minimal prevention through documentation and transparency, even while maintaining that AI cannot solve underlying inequality.
Topics
Human rights principles | Digital standards | Legal and regulatory
Legal frameworks have significant limitations despite regulatory progress
Speakers
– Louise Hooper
– Milla Vidina
Arguments
Current regulation focuses on product safety principles rather than human rights law, creating gaps in protection
Legal reforms are needed including burden of proof changes and moving beyond grounds-based discrimination approaches
Explanation
Both legal experts acknowledge fundamental limitations in current regulatory approaches, with unexpected consensus on the need for substantial legal reforms rather than incremental improvements.
Topics
Human rights principles | Legal and regulatory
Overall assessment
Summary
Strong consensus exists on core issues: AI cannot solve inequality alone, meaningful community participation requires decision-making power, intersectional discrimination is inadequately addressed, and multi-stakeholder collaboration is essential. Speakers also agree on the need for stronger equality bodies and the importance of questioning whether AI is necessary at all.
Consensus level
High level of consensus among speakers on fundamental principles, with agreement spanning technical, legal, and advocacy perspectives. This consensus suggests a mature understanding of AI discrimination challenges and points toward comprehensive approaches that combine technical, legal, and social interventions rather than relying on any single solution.
Differences
Different viewpoints
Role of AI in addressing inequality
Speakers
– Robin Aïsha Pocornie
– Louise Hooper
– Milla Vidina
Arguments
AI cannot solve inequality alone as it’s shaped by existing discriminatory data and power structures
AI systems capture patterns from inherently flawed human data, making them discriminatory by nature
AI could potentially do minimal prevention through better documentation and transparency, but cannot completely resolve inequality
Summary
Robin takes the strongest position that AI fundamentally cannot solve inequality because it’s built on existing inequalities. Louise agrees AI is inherently discriminatory but focuses more on regulatory solutions. Milla takes a more nuanced view, acknowledging AI’s limitations while seeing potential for minimal prevention through technical standards and documentation.
Topics
Human rights principles | Digital standards | Legal and regulatory
Approach to preventing AI discrimination
Speakers
– Robin Aïsha Pocornie
– Louise Hooper
– Mher Hakobyan
Arguments
The first question should be whether technology is needed at all, rather than automatically developing AI solutions
Current regulation focuses on product safety principles rather than human rights law, creating gaps in protection
Sometimes bans could be the appropriate prevention and mitigation measure rather than waiting for harm to occur
Summary
Robin advocates for questioning whether AI is needed at all before development. Louise focuses on improving regulatory frameworks and legal approaches. Mher supports outright bans for certain AI systems as prevention. They agree on prevention being important but disagree on the primary approach.
Topics
Human rights principles | Legal and regulatory | Prevention and Mitigation Strategies
Unexpected differences
Technical standardization as a solution pathway
Speakers
– Robin Aïsha Pocornie
– Milla Vidina
Arguments
Complex social problems cannot be fixed by technology alone, especially when these problems are created by the technology
Technical standards can embed some equality safeguards during design and development phases
Explanation
This disagreement is unexpected because both speakers work on technical aspects of AI, yet Robin completely rejects technical solutions while Milla sees value in working within technical standardization processes. Milla’s practical experience with EU AI Act standards leads her to a more pragmatic view, while Robin maintains a more principled anti-techno-solutionist stance.
Topics
Digital standards | Human rights principles | Legal and regulatory
Overall assessment
Summary
The main areas of disagreement center on the role of AI in addressing inequality, the primary approach to prevention (regulatory vs. community-led vs. bans), and the value of technical standardization. However, there’s strong consensus on core principles like the importance of community participation, transparency, and the need for systemic rather than purely technical solutions.
Disagreement level
Moderate disagreement with high consensus on principles. The disagreements are primarily about methods and emphasis rather than fundamental goals. This suggests a mature field where practitioners agree on problems and desired outcomes but have different perspectives on the most effective pathways forward. The disagreements are constructive and complementary rather than contradictory, indicating potential for integrated approaches that combine different strategies.
Takeaways
Key takeaways
AI cannot solve inequality alone, as it is inherently shaped by existing discriminatory data, assumptions, and power structures in society
Prevention requires assessing whether AI is actually needed before deployment, rather than automatically developing technological solutions
Community expertise and lived experience should be recognized as equal to technical expertise, with affected communities leading decision-making processes
Current AI regulation focuses primarily on product safety rather than human rights, creating significant gaps in protection against discrimination
Intersectional discrimination is particularly challenging to address both legally and technically, often treated as an add-on rather than integral to AI development
Algorithmic discrimination is difficult to detect, prove, and seek redress for due to system opacity, information asymmetries, and lack of access to data
Multi-stakeholder collaboration involving equality bodies, civil society, affected communities, and regulators is essential but requires adequate funding and institutional support
Meaningful community participation requires not just consultation but actual decision-making power and adequate resources
Cross-regulatory cooperation between equality bodies, data protection authorities, and other regulators is crucial for effective enforcement
Resolutions and action items
Develop e-learning courses for equality bodies and government institutions on AI and equality issues through Council of Europe initiative
Continue technical standardization work to embed equality safeguards in AI development processes
Establish public facilities (like France’s PEREN) to assist regulators with technical testing and investigation of algorithms
Formalize and institutionalize platforms where different regulators can collaborate and share expertise
Reform legal frameworks, including shifting the burden of proof and moving beyond grounds-based discrimination approaches
Provide adequate funding specifically for equality bodies and community participation in AI governance
Implement mandatory impact assessments and transparency requirements for AI systems
Unresolved issues
How to effectively address intersectional discrimination in AI systems both legally and technically
How to ensure meaningful community participation beyond tokenistic consultation
How to balance commercial secrecy with transparency requirements for algorithmic accountability
How to make AI regulation global and coordinated across different jurisdictions
How to overcome resistance from states and courts to recognize and address intersectional discrimination
How to provide adequate funding for community participation and equality body enforcement
How to bridge the gap between technical expertise and community knowledge in AI governance
How to ensure adequate remedies when algorithmic discrimination is proven in court
Suggested compromises
Focus on minimal prevention through better documentation, logging, and transparency rather than complete bias elimination
Use technical standards to embed some equality safeguards during design phase while acknowledging limitations
Implement self-certification processes for AI systems with human rights oversight mechanisms
Combine technical solutions with policy and legal changes rather than relying on technology alone
Allow equality bodies to fund litigation while leaving decision-making power with complainants
Create regulatory sandboxes for testing AI systems with community input before full deployment
Establish cross-sector collaboration between different types of regulators rather than siloed approaches
Thought provoking comments
Community always goes above the technology. The community that’s being harmed is the expert. This also concerns the integration of diverse knowledge: technological knowledge, such as computer science or data-driven knowledge at a higher-education level, is more accepted than community-based expertise, that is, real lived experience.
Speaker
Robin Aïsha Pocornie
Reason
This comment fundamentally challenges the traditional hierarchy of expertise in AI development by positioning lived experience as equal to or more valuable than technical knowledge. It reframes who should be considered an ‘expert’ in AI discrimination discussions.
Impact
This perspective became a recurring theme throughout the session, with multiple speakers (Mher, Louise) referencing the importance of community expertise and the problematic nature of privileging technical knowledge over lived experience. It shifted the conversation from purely technical solutions to community-centered approaches.
AI cannot solve inequality because it is inherently made by the inequalities that already exist. So it cannot fix what it is based on.
Speaker
Robin Aïsha Pocornie
Reason
This comment cuts to the philosophical core of the AI fairness debate by arguing that AI systems, being products of unequal societies, cannot transcend their origins to solve inequality. It challenges techno-solutionist thinking at its foundation.
Impact
This insight directly influenced the Mentimeter poll results where nearly half the participants agreed that inequality cannot be solved through AI. It provided a theoretical framework that other speakers built upon, particularly when discussing the limitations of technical approaches to discrimination.
We have two attempts that I’d like to just touch on. One is the Council of Europe Framework Convention on Artificial Intelligence… The other is the EU AI Act… The EU AI Act, by contrast, is directed to and governs states, but also providers and deployers of AI… It’s built on product safety principles rather than on human rights law.
Speaker
Louise Hooper
Reason
This comparison illuminated a crucial distinction between human rights-based and product safety-based approaches to AI regulation, highlighting how the foundational framework shapes the entire regulatory approach and its effectiveness in addressing discrimination.
Impact
This analysis provided essential context for understanding current regulatory limitations and sparked discussion about which approach is more effective for addressing discrimination. It influenced later conversations about the adequacy of existing legal frameworks.
Can you solve the inequality question, for example? How do you select a fairness measure or a debiasing method that could more effectively tackle potential inequality aspects? If you have to assess risks before you… So most of this is about design and development… How can you foresee risks to equality? And could an engineer make this assessment?
Speaker
Milla Vidina
Reason
This comment exposed the practical impossibility of expecting engineers to make human rights assessments without proper training or interdisciplinary support, revealing a fundamental gap in current AI development processes.
Impact
This insight shifted the discussion toward the need for interdisciplinary teams and mandatory human rights expertise in AI development. It influenced conversations about technical standardization and the limitations of self-certification processes.
We often, in this kind of fora, give priority to people who have technical expertise and a lot of professionalized expertise, so to say, but we don’t think of a person who has lived experience as an expert. And I think these are limitations that we need to address to be able to go broader and address intersectional discrimination more effectively.
Speaker
Mher Hakobyan
Reason
This comment reinforced and expanded on Robin’s earlier point about expertise, specifically connecting it to the challenge of addressing intersectional discrimination. It highlighted how current expert hierarchies perpetuate the very problems they claim to solve.
Impact
This observation deepened the conversation about meaningful participation and provided a direct link between expert hierarchies and the failure to address intersectional discrimination effectively. It influenced the final session messages about the need for multi-stakeholder approaches.
What we saw in Canada, there was a similar issue to the sentencing case in the US, where indigenous people in Canada are discriminated against by an algorithm in sentencing decisions. The response of the court was, we accept that the discrimination is there, but it’s alright because the judge knows that and will make a different decision. It’s just not enough.
Speaker
Louise Hooper
Reason
This concrete example powerfully illustrated how even when discrimination is proven and acknowledged, the remedies offered are often inadequate. It demonstrated the gap between recognition of harm and meaningful redress.
Impact
This example provided a stark illustration of inadequate remedies that influenced the discussion about meaningful redress. It contributed to the session’s emphasis on the need for more robust accountability mechanisms and adequate funding for enforcement.
Overall assessment
These key comments fundamentally shaped the discussion by challenging three core assumptions: that technical expertise should be privileged over lived experience, that AI can solve the inequalities it embeds, and that current legal frameworks provide adequate protection. The comments created a progression from identifying problems (AI embedding existing inequalities) to examining current solutions (inadequate legal frameworks and expert hierarchies) to proposing alternatives (community-centered approaches and interdisciplinary collaboration). The discussion evolved from a technical focus to a more holistic understanding that positioned discrimination as a systemic issue requiring structural rather than purely technological solutions. The recurring theme of expertise and who gets to define it became central to understanding why current approaches fail, particularly in addressing intersectional discrimination.
Follow-up questions
How can we ensure meaningful involvement of impacted communities in AI design processes beyond tokenistic consultation?
Speaker
Robin Aïsha Pocornie
Explanation
This addresses the gap between stating the need for community involvement and actually implementing it effectively, ensuring communities have decision-making power rather than just advisory roles
What specific funding mechanisms and amounts are needed to support equality bodies in AI regulation enforcement?
Speaker
Louise Hooper and Milla Vidina
Explanation
Multiple speakers identified funding as a critical barrier, but specific details about adequate funding levels and mechanisms were not explored
How can technical standards effectively embed human rights safeguards when engineers lack human rights expertise?
Speaker
Milla Vidina
Explanation
This highlights the challenge of implementing human rights protections in technical AI systems when the people designing them may not have the necessary human rights knowledge
What would a public facility for algorithmic testing and investigation look like in practice across different countries?
Speaker
Milla Vidina
Explanation
Vidina mentioned France’s PEREN facility as a model but didn’t elaborate on how this could be replicated or scaled across other jurisdictions
How can we better document and research discrimination based on disability and socioeconomic grounds in AI systems?
Speaker
Mher Hakobyan
Explanation
Hakobyan noted that while there’s substantial research on racial and gender discrimination, there’s less knowledge about other forms of discrimination that AI systems may perpetuate
What are the ‘other measures’ that participants identified as important for preventing AI discrimination?
Speaker
Audience (via Mentimeter)
Explanation
The Mentimeter poll showed ‘other measures’ as a significant response category, but these weren’t specified or explored further
How can we move from ground-based discrimination approaches to truly intersectional legal frameworks?
Speaker
Linda Ardenghi (trainee) and Milla Vidina
Explanation
The question about intersectional discrimination revealed that while it’s often mentioned in documents, it’s not effectively addressed in practice or law
What does ‘skin in the game’ mean practically for companies developing AI systems?
Speaker
Audience (via Mentimeter)
Explanation
This was mentioned as a way to reduce AI discrimination but wasn’t elaborated on in terms of specific accountability mechanisms
How can we better coordinate between different regulatory bodies (equality bodies, data protection authorities, competition authorities) on AI issues?
Speaker
Milla Vidina
Explanation
Vidina noted that these bodies don’t communicate effectively despite all having stakes in algorithmic discrimination, but specific coordination mechanisms weren’t detailed
What are the specific technical requirements and processes for mandatory human rights impact assessments in AI development?
Speaker
Multiple speakers
Explanation
While impact assessments were frequently mentioned as important, the specific methodologies and requirements weren’t explored in detail
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.