Lightning Talk #245 Advancing Equality and Inclusion in AI

23 Jun 2025 14:20h - 14:40h

Session at a glance

Summary

This lightning talk focused on advancing equality and inclusion in artificial intelligence systems, co-organized by the Council of Europe and the European Union Agency for Fundamental Rights at the Internet Governance Forum. Deputy Secretary-General Björn Berge opened by highlighting the systemic nature of AI bias, noting that women comprise only 22% of the AI workforce and that algorithms routinely screen out women from jobs and misinterpret names and accents. He emphasized that Europe is responding through the Council of Europe Framework Convention on Artificial Intelligence, the first international treaty placing AI under the rule of law, alongside the EU AI Act.


David Reichel from the EU Agency for Fundamental Rights identified key risks to equality in AI, explaining that machines are not neutral despite common assumptions. He showed how biased or unrepresentative training data produce discriminatory systems, citing facial recognition trained primarily on white male faces, and demonstrated how large language models can perpetuate discrimination against religious minorities and exhibit intersectional biases. Reichel noted that African American dialect speech patterns trigger higher error rates in AI systems, illustrating how such discrimination can be systemic.


The discussion revealed that companies primarily use AI for efficiency rather than fairness, missing opportunities to create more equitable decision-making processes. Reichel advocated for embracing current regulations as opportunities to identify and address societal biases through fundamental rights impact assessments. The session concluded with emphasis on the importance of AI literacy, proper documentation access for testing systems, and ensuring remedies are available when discrimination occurs, establishing a framework for more inclusive AI development.


Key points

**Major Discussion Points:**


– **Systemic bias and discrimination in AI systems** – The discussion highlighted how AI perpetuates and amplifies existing inequalities, with examples including algorithms that screen out women from jobs, facial recognition systems that work poorly for non-white faces, and biased training data from sources like racially profiling police forces.


– **Current regulatory frameworks and legal responses** – Speakers emphasized the importance of existing and emerging regulations, particularly the EU AI Act, the Council of Europe Framework Convention on AI (the first international AI treaty), and how anti-discrimination laws apply to AI systems.


– **The myth of AI neutrality** – A key point was debunking the misconception that machines are neutral, citing travellers interviewed in 2015 who believed automated border gates would be less discriminatory than human border guards, a belief that has since proven false.


– **Practical implementation challenges and solutions** – Discussion focused on the need for guidance on conducting fundamental rights impact assessments, the importance of AI literacy, and tools being developed to help organizations detect and address AI discrimination in practice.


– **Opportunities for fairer AI systems** – The conversation explored how proper assessment and regulation can be viewed as opportunities rather than burdens, allowing organizations to identify societal biases and create more accountable, fair decision-making processes.


**Overall Purpose:**


The discussion aimed to provide a comprehensive overview of equality and inclusion challenges in AI systems, showcase current regulatory responses from European institutions, and demonstrate practical approaches for addressing bias and discrimination while highlighting opportunities for creating fairer AI systems.


**Overall Tone:**


The tone was professional and educational throughout, maintaining a balance between acknowledging serious concerns about AI bias and discrimination while remaining constructively optimistic about solutions. The speakers presented a realistic but forward-looking perspective, emphasizing that while AI poses significant equality risks, proper regulation, assessment, and implementation can transform these challenges into opportunities for creating more just and equitable systems.


Speakers

– **Sara Haapalainen**: Works for the Hate Speech, Hate Crime and Artificial Intelligence Unit of the Council of Europe


– **David Reichel**: Head of Data and Digital Sector from FRA (European Union Agency for Fundamental Rights)


– **Bjorn Berge**: Deputy Secretary-General of the Council of Europe


**Additional speakers:**


– **Ivana Bartoletti**: Was scheduled to speak but was unable to attend due to unfortunate circumstances. Role/expertise not specified in the transcript.


Full session report

# Advancing Equality and Inclusion in Artificial Intelligence Systems: A Comprehensive Discussion Report


## Introduction and Context


This lightning talk, co-organised by the Council of Europe and the European Union Agency for Fundamental Rights at the Internet Governance Forum, addressed the critical challenges of advancing equality and inclusion in artificial intelligence systems. The session brought together speakers from both institutions to examine how AI systems perpetuate discrimination and to explore regulatory and practical solutions for creating more equitable technology.


The discussion was originally scheduled to include Ivana Bartoletti, who was unfortunately unable to attend due to unforeseen circumstances. The remaining speakers provided comprehensive coverage of the topic from institutional, regulatory, and practical perspectives. The session included interactive elements using Mentimeter to gather audience input on AI risks and opportunities.


## Speaker Contributions and Key Arguments


### Björn Berge: Framing AI as a Justice Issue


Deputy Secretary-General Björn Berge of the Council of Europe opened the discussion by establishing AI as fundamentally a justice issue, stating: “Let us not treat AI as just a technological issue. Let us treat it also as a justice issue. Because if AI is to serve the public good, it must serve everyone equally and fairly. Period.”


Berge highlighted that women comprise only 22% of the AI workforce, a lack of diversity that directly influences how fair or biased AI systems become. He provided concrete examples of discrimination: algorithms screening out women from job opportunities, chatbots misinterpreting names and accents, and AI systems making decisions without providing explanations or recourse.


He emphasised Europe’s regulatory response through the Council of Europe Framework Convention on Artificial Intelligence, which opened for signature last year as “the first international treaty placing AI under the rule of law” and at the service of human rights, democracy and equality.


### David Reichel: Evidence-Based Analysis of AI Bias


David Reichel, Head of Data and Digital Sector from the European Union Agency for Fundamental Rights, provided detailed empirical analysis of AI bias, systematically challenging the misconception that machines are neutral. He illustrated this with automated border control systems: “Back then, the majority of travellers actually said, oh, that’s good because a lot of border guards are discriminating us. When the machine is there, it’s better and it’s neutral. Unfortunately, 10 years later now, we learned a lot of examples that this is unfortunately not true. Machines are not neutral.”


Reichel presented specific research findings from FRA’s work building and testing offensive speech detection algorithms. The research revealed intersectional biases: in Italian, which like German uses gendered nouns, non-offensive sentences containing the feminine form of “Muslim” produced much higher error rates than the masculine form, while the pattern was reversed for “Jew”. He also highlighted that completely neutral speech more likely to be African American dialect was prone to elevated error rates compared with sentences without such dialect markers.
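
The report does not reproduce FRA’s actual test pipeline, but the general technique it describes, template-based bias testing, is easy to sketch. The Python below is a hypothetical illustration: the classifier stub, the templates, and the identity terms are placeholders invented for this example, and a real audit would use a trained model and templates in the target language.

```python
# Minimal, hypothetical sketch of template-based bias testing for an
# offensive-speech classifier, in the spirit of the experiment described
# above. Classifier, templates, and terms are illustrative placeholders.
from collections import defaultdict

def predict_offensive(sentence: str) -> bool:
    """Toy stand-in for the model under test. It over-reacts to certain
    identity terms, mimicking the behaviour reported in the session."""
    overreact_to = {"musulmana", "ebreo"}  # feminine "Muslim", masculine "Jew"
    return any(w.strip(".,").lower() in overreact_to for w in sentence.split())

# Every instantiated sentence is non-offensive by construction, so any
# positive prediction is a false positive. English templates are used here
# only for readability; a real test would use the target language.
TEMPLATES = [
    "I am {term}.",
    "My neighbour is {term} and we get along well.",
]

IDENTITY_TERMS = {
    "muslim_masculine": "musulmano",
    "muslim_feminine": "musulmana",
    "jewish_masculine": "ebreo",
    "jewish_feminine": "ebrea",
}

def false_positive_rates() -> dict[str, float]:
    """False-positive rate per identity term over all templates."""
    flags = defaultdict(list)
    for label, term in IDENTITY_TERMS.items():
        for template in TEMPLATES:
            flags[label].append(predict_offensive(template.format(term=term)))
    return {label: sum(v) / len(v) for label, v in flags.items()}

if __name__ == "__main__":
    for label, rate in false_positive_rates().items():
        print(f"{label}: {rate:.0%}")  # gaps across gendered forms signal bias
```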


He noted that organisations “very rarely” use AI to make better or fairer decisions, focusing almost exclusively on efficiency gains. He argued this represents a missed opportunity, suggesting that bias detection could serve as a diagnostic tool: “If we look into biases, we actually learn a lot about where society is going wrong. If there’s recruitment software that prefers men over women, then this reflects ongoing practices.”
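
One simple way to surface the recruitment pattern Reichel describes is to compare selection rates between groups. The sketch below is an illustration rather than the speakers’ method: the data are invented, and the 0.8 cut-off borrows the US “four-fifths” screening convention, an assumption made here for concreteness rather than an EU legal standard.

```python
# Minimal, invented example: surface a gender gap in automated shortlisting
# via selection rates. Data and the 0.8 threshold are illustrative only.
from typing import Sequence

def selection_rate(decisions: Sequence[bool]) -> float:
    """Share of candidates in a group who were shortlisted."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def disparate_impact_ratio(group_a: Sequence[bool], group_b: Sequence[bool]) -> float:
    """Lower selection rate divided by the higher one (1.0 means parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    high, low = max(rate_a, rate_b), min(rate_a, rate_b)
    return low / high if high else 1.0

# Shortlisting outcomes from a hypothetical screening tool (True = shortlisted).
women = [True, False, False, False, True, False]
men = [True, True, False, True, True, False]

ratio = disparate_impact_ratio(women, men)
print(f"Selection rates: women {selection_rate(women):.0%}, men {selection_rate(men):.0%}")
if ratio < 0.8:  # common screening threshold; actual legal tests vary
    print(f"Potential adverse impact: ratio = {ratio:.2f}")
```

A gap like this does not prove discrimination on its own, but in the diagnostic spirit Reichel describes, it tells an organisation exactly where to start asking questions about its data and its past practices.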


Reichel concluded with a business-focused argument: “It’s of course not innovative if you use AI and discriminate. It’s also not efficient and not sustainable. So there’s the opportunity to actually create a better AI.”


### Sara Haapalainen: Practical Implementation and Legal Integration


Sara Haapalainen from the Council of Europe’s Hate Speech, Hate Crime and Artificial Intelligence Unit focused on practical implementation, emphasising that AI systems are not neutral, that they reproduce and amplify structural inequality, and that equality must therefore be actively promoted in and through the use of AI.


She stressed the continued relevance of existing anti-discrimination legislation alongside new AI-specific regulations, and highlighted collaborative efforts between European institutions. She described a joint project between the Council of Europe and the EU developing practical tools including policy guidelines, online training programmes, and complaint handling protocols specifically for equality bodies.


Haapalainen concluded with three key points: first, that AI systems are not neutral and require active promotion of equality; second, that access to remedies and documentation is crucial when discrimination occurs; and third, that the Council of Europe’s Committee of Experts is currently drafting a soft law instrument on AI equality and non-discrimination.


## Audience Engagement and Key Findings


The session included interactive polling using Mentimeter, which revealed important insights about audience perceptions. When asked about the biggest risks of AI for equality and inclusion, the top responses were:


– Biased algorithms


– Lack of regulation and laws


– Opacity of AI systems


When asked about opportunities, the most popular response was “increasing AI literacy,” highlighting the importance of education and awareness in addressing AI bias.


## Areas of Strong Consensus


The speakers demonstrated remarkable alignment on fundamental issues, creating a coherent European institutional perspective on AI governance and equality.


The speakers agreed that AI systems are not objective or neutral tools but rather reflect and amplify existing societal biases. This consensus emerged through complementary contributions: Berge’s statistics on workforce diversity and discriminatory outcomes, and Reichel’s technical examples of biased training data and algorithmic discrimination.


The speakers also showed strong consensus on the necessity of comprehensive legal frameworks, though with different emphases. Berge highlighted the groundbreaking Framework Convention, while Reichel emphasised how multiple EU laws including the AI Act, Digital Services Act, and data protection regulations can work together. Haapalainen stressed that existing anti-discrimination legislation remains relevant alongside new AI-specific regulations.


Both Reichel and Haapalainen emphasised the gap between regulatory requirements and practical implementation capabilities, agreeing that organisations often lack the knowledge and tools necessary to assess biases effectively. This led to shared emphasis on developing concrete tools and guidance, including fundamental rights impact assessments and capacity-building programmes.


## Practical Outcomes and Future Directions


The discussion identified several concrete initiatives translating insights into practical action. The speakers highlighted ongoing collaboration between the Council of Europe and EU institutions to develop practical tools for addressing AI discrimination, including policy guidelines for equality bodies, online training programmes, and complaint handling protocols.


Reichel indicated that the European Union Agency for Fundamental Rights will publish several relevant reports covering high-risk AI applications, remote biometric identification, and the digitalisation of justice systems. These publications will provide additional empirical evidence and practical guidance for addressing AI bias in specific contexts.


The speakers emphasised AI literacy and awareness as essential components of effective equality promotion, suggesting ongoing need for educational initiatives targeting AI developers, users, and oversight bodies.


## Conclusion


This discussion represents a mature approach to AI governance that positions equality and human rights as central concerns in AI development and deployment. The strong consensus among European institutional representatives suggests a coherent, rights-based approach to AI regulation is emerging.


The speakers successfully demonstrated that addressing AI bias requires legal frameworks, practical tools, institutional capacity, and shifts in how organisations approach AI development. Their emphasis on viewing regulation as opportunity rather than burden, and framing AI assessment as a diagnostic tool for broader social inequalities, provides a constructive foundation for progress.


The collaborative approach between the Council of Europe and EU institutions demonstrates how international cooperation can address challenges that transcend national boundaries. The session’s interactive elements and its focus on practical implementation alongside regulatory development suggest that European institutions are moving toward effective enforcement and support mechanisms.


The discussion contributes to growing international consensus that AI systems must be designed, deployed, and governed to promote rather than undermine equality and human rights, with the European approach providing a valuable model for ensuring AI serves everyone equally and fairly.


Session transcript

Sara Haapalainen: Welcome to the lightning talk on advancing equality and inclusion in AI, co-organized by the Council of Europe and the European Union Agency for Fundamental Rights. My name is Sara Haapalainen and I work for the Hate Speech, Hate Crime and Artificial Intelligence Unit of the Council of Europe. In this session, we will give you a snapshot of key challenges related to bias, anti-discrimination and equality in AI systems and opportunities to address them. To have a more inclusive discussion, we invite the audience to contribute to this session through an online platform I’ll introduce shortly. Discussions can also continue after this short session. But first, I have the great pleasure to give the floor to the Deputy Secretary-General of the Council of Europe, Björn Berge, to say a few opening words. Please, the stage is yours.


Bjorn Berge: Thank you very much, Sara, and very good afternoon to all of you. Let me first start by congratulating Norway, my home country, for hosting this year’s Internet Governance Forum. We are here to address a reality we can no longer ignore. Artificial intelligence is shaping how people access jobs, services, information and justice. But too often, these systems reinforce inequality. Women make up just 22% of the world’s AI workforce. That lack of diversity influences not only how systems are built, but how fair or biased they become. And we clearly see the impact. Algorithms screening out women for jobs. Chatbots that misread names and accents. Decisions made with no explanation or recourse. These are not isolated issues. They are systemic deficiencies, and they demand a systemic approach and response. In Europe, we are responding. The Council of Europe Framework Convention on Artificial Intelligence opened for signature last year. It is the first international treaty placing AI under the rule of law, and at the service of human rights, democracy and equality. Together with the EU AI Act, it offers a legal foundation to ensure AI upholds rights, not undermines them. But as you know, rules alone are not enough. Equality bodies, national human rights institutions and civil society must help shape, monitor and improve these systems. Our joint project with the European Union to support equality bodies is one example of this in practice. Earlier this year, the UN Secretary-General, Antonio Guterres, asked, are we ready for the future? His answer was plain, no. And he was right. Let us not treat AI as just a technological issue. Let us treat it also as a justice issue. Because if AI is to serve the public good, it must serve everyone equally and fairly. Period. I thank you very much, and please enjoy the discussion now between my two colleagues.


Sara Haapalainen: Thank you very much. Thank you very much, Deputy Secretary-General of the Council of Europe, for setting the stage for our discussion. Let’s unpack the linkages between human rights and AI a bit more together with David Reichel, Head of Data and Digital Sector from FRA, the European Union Agency for Fundamental Rights. But before we dive into the discussion, I also want to announce that unfortunately our second speaker, Ivana Bartoletti, is not able to be with us today due to unfortunate circumstances. Nevertheless, the audience can share their thoughts via the Mentimeter by going to menti.com and adding the code shown on the screen. You will have the same question as David. We will first discuss the risks AI poses to equality. What do you think are the key risks for equality in AI? The mic is yours.


David Reichel: Thank you, Sara. It’s a big pleasure to be here. Good afternoon. I’m from the EU Agency for Fundamental Rights, where we’ve been dealing with AI and equality issues for many years now. One of the key risks for equality when using AI is to use it for decision making without properly assessing it for potential biases. It’s a key risk if we use AI blindly. To give an example, in the year 2015, so 10 years ago, we interviewed people at EU border crossing points about the use of automated border gates. Back then, the majority of travelers actually said, oh, that’s good because a lot of border guards are discriminating us. When the machine is there, it’s better and it’s neutral. Unfortunately, 10 years later now, we learned a lot of examples that this is unfortunately not true. Machines are not neutral. Using them is not without risks. To the contrary, AI, as we also just heard, may perpetuate, it may reinforce, or even create inequalities and discrimination. We heard examples also in the opening speech of the Deputy Secretary-General of the Council of Europe. Why can this happen? There are several reasons why the use of AI may lead to more inequality. For example, the training data, the data used to build certain algorithms, are already biased. If you have police forces, which in some areas, unfortunately, engage in racial profiling, you shouldn’t use these data to automate predictions for police. What else could be the case? Training data may not be of high quality, may not be representative. A lot of people heard already about the case of face recognition technology that worked really well at the beginning for white male faces, but not at all for black and female faces. The problem obviously was it was just trained on white male faces. What is more, currently more and more AI tools use general purpose AI models, meaning large language models trained on much data from the internet. We tested offensive speech detection algorithms by building them ourselves and then testing them for certain biases. What we found was that these speech detection models very often overreact to certain words, most notably words like Muslim or Jew, because there’s a lot of hatred targeted at these groups. However, the reason for these biases was not always in our training data. Sometimes the biases were also coming from the large language models. There was also some intersectional bias included. For example, in Italian, we tested the offensive speech algorithms for biases. And in Italian, like also in German, you use more gendered nouns. And the error rates for non-offensive sentences for the feminine version of Muslim in Italian were much higher than for the masculine version. And the other way around for Jews. Lastly, we also did a test and found that speech that is completely neutral, but more likely to be African American dialect, was prone to higher error rates for non-offensive speech compared to other sentences that were not likely to have this kind of dialect. So we have enough evidence that we should be worried. But don’t get me wrong: the use of AI is great, of course, and we can use it in a variety of areas. But as was also hinted at in the opening speech, we need regulation. And we not only need regulation, but we also need guidance on how to apply the regulation in practice. In our research, we speak very often to developers and users of AI.
And a lot of companies, but also public administrations, want to do the right thing, but often don’t know: how do I actually assess biases and how do I mitigate the risks in practice? So as was mentioned, in the EU, we have several laws that can help making AI fairer and better. We have the EU AI Act, recently adopted, which has a lot of requirements that can help providers and deployers of AI to look into biases. We have the Digital Services Act, which tackles the very large online platforms and includes risk mitigation obligations. And lastly, we also have our long-adopted data protection laws in the EU that also help to process data in a fairer way and to avoid discrimination.


Sara Haapalainen: Thank you for outlining the serious risks to individuals and to our societies, obviously affecting those who are already impacted by discrimination and exclusion. You also went a bit into the solutions and opportunities, which we will discuss next, but let’s have a look at the Mentimeter and what the audience voted as key risks for equality in AI. I see biased algorithms and lack of regulation and laws as the highest, and then opacity of the AI systems as a third key risk foreseen in AI. You touched upon the biased algorithms, and also the regulation. Obviously, in Europe at least, we are lucky to have regulation: not only the EU AI Act, but also, as was mentioned by the Deputy Secretary-General of the Council of Europe, the Framework Convention on AI, which is actually a global international treaty. So we have some tools and elements. I also want to highlight that the Council of Europe and the European Union are currently developing together some solutions to respond to some of the risks in addition to regulation. We are, for example, cooperating with the equality bodies across Europe, and together we are developing tools and knowledge to detect and address discrimination resulting from the use of AI systems, including ensuring sufficient remedies. These tools include, inter alia, policy guidelines and online training on AI and anti-discrimination, as well as a protocol to handle victims’ complaints. But let’s dive into more of the solutions and opportunities, and we have also the same question for the audience. So what are the key opportunities for ensuring equality in AI?


David Reichel: Thank you. It’s a pity that Ivana can’t be with us, but I’m also happy not just to speak about risks but also opportunities of AI. In our research with developers and users of AI, we also asked about why are companies or administrations using AI? And we were very surprised to see that almost exclusively people were saying we want to use it for efficiency purposes, we want to make things faster and quicker. That’s fair enough, of course that’s a good reason to use it, but first of all, efficiency as such is not enough reason to interfere with fundamental rights on the one hand. But secondly, I was also surprised to see that very rarely those using AI say we want to use AI to actually make better decisions, to make fairer decisions. And when people see in the Mentimeter also that risks of fundamental rights are the lack of regulation and biased algorithms, I would really suggest that we all embrace the current regulation and use it to make use of all the opportunities of AI. First of all, when regulation requires you to assess AI before you use it in practice, then this is an opportunity. If we look into biases, we actually learn a lot about where society is going wrong. If there’s recruitment software that prefers men over women, then this reflects ongoing practices. So learning about these disadvantages in society is a great opportunity. Once we identify these issues, we can of course think about how we can fix those biases and try to find ways to make fairer and also more accountable decision making. Obviously to embrace this fairer decision making and proper assessments, we need protocols and guidance on how this can be done in practice. A lot of the law includes so-called fundamental rights impact assessments, or at the Council of Europe it’s the Human Rights, Democracy and Rule of Law impact assessment. A lot of people think this is very complicated to do, but in a sense I would say it’s not that difficult to do. You usually start by describing your system, what tasks you want to automate, what purposes you’re using AI for, what technology is used, what training data did you use. And secondly, you look into different fundamental rights. You usually start with privacy and data protection. Do you process information about the private life of people that you’re actually not allowed to? That should be ruled out. Secondly, and that’s the big block, bias and non-discrimination. So here you would start by scanning the information that is processed in your algorithm. Is there any information linked to protected characteristics, ethnic origin, gender, and so forth? Obviously, just taking this information out is sometimes not enough, so you have to look for so-called proxies. Is there any information that is highly correlated with some protected characteristics? And lastly, there’s also the right to an effective remedy under any fundamental rights impact assessment. So by making this assessment and making transparent what you’re doing, you allow the decisions also to be challenged in court, and this way provide an equality of arms. So as was mentioned, at the Council of Europe, but also in the EU, we work a lot on this guidance and help developers and providers of AI to make these kinds of assessments. And this way, really showing that the regulation is an opportunity to embrace a human rights approach to the use of AI. It’s of course not innovative if you use AI and discriminate. It’s also not efficient and not sustainable. So there’s the opportunity to actually create a better AI.
At the Fundamental Rights Agency, we are working on several reports. One on high-risk AI, where we looked into the use of AI in the area of employment, education, law enforcement, and migration. And we will publish a report later this year, which will include some of this guidance. We’ll have another report on the use of remote biometric identification, including face recognition in the context of law enforcement. And the third one on digitalization of justice, which will all come out later this year.


Sara Haapalainen: Thank you very much, David. In addition, let me look at the Mentimeter. So the audience has suggested that the key opportunity for ensuring equality in AI is increasing AI literacy, which is indeed very important as well, and is included in the European regulation as well. We need to raise awareness of AI, its risks and opportunities. I will also use the opportunity from the Council of Europe side to highlight three things, based on our work with the equality bodies and based on the Council of Europe study published on the impact of artificial intelligence systems, their potential to promote equality, including gender equality, and the risks they may cause in relation to non-discrimination. Firstly, as we know already, AI systems are not neutral and reproduce and amplify structural inequality. Equality needs to be promoted in and through the use of AI, informed by the views of those impacted, as outlined in the Council of Europe Framework Convention on AI. In addition to the AI governance, we must also remind the legislators and the AI deployers that the anti-discrimination legislation applies to AI as well. So it’s not only the AI legislation that we should look into. Secondly, if prevention fails and discrimination by an AI system occurs, then we need to ensure access to remedies, exactly as David was saying, to restore rights and provide justice for those being discriminated against. And here, the human rights institutions, equality bodies and CSOs can play an important role to inform the victims of their rights, but also to request testing of AI systems, which obviously requires also having access to documentation of the AI systems. And finally, thirdly, the Council of Europe’s Committee of Experts on Artificial Intelligence, Equality and Discrimination is currently drafting a soft law instrument for states on AI equality and non-discrimination. This will also provide valuable additional guidance on the topic in the future. With these remarks, I want to thank you very much for the discussion, our Deputy Secretary-General Björn Berge and David Reichel, as well as the audience for the insights and participation. If you have questions or would like to discuss with us further, please feel free to connect with us after this talk. We also have some QR codes and links for the audience, assumingly, yes, on the screen. So if you are interested in our work at the Council of Europe and in our publications, you can take the QR codes on the screen. Thank you very much and have a lovely rest of the day here at the IGF. [Announcement: For your information, a session will start on the open stage in five minutes.]



Bjorn Berge

Speech speed

89 words per minute

Speech length

312 words

Speech time

208 seconds

Women represent only 22% of AI workforce, influencing system fairness and bias

Explanation

The Deputy Secretary-General argues that the lack of gender diversity in AI development directly impacts how fair or biased AI systems become. This underrepresentation in the workforce creates a systemic issue that influences the design and implementation of AI technologies.


Evidence

Statistical data showing women make up just 22% of the world’s AI workforce


Major discussion point

AI Systems Perpetuate and Amplify Inequality


Topics

Human rights | Economic


Algorithms screen out women from jobs, chatbots misread names and accents, decisions lack explanation or recourse

Explanation

Berge highlights concrete examples of how AI bias manifests in real-world applications, affecting employment opportunities and creating barriers for people with different linguistic backgrounds. He emphasizes that these discriminatory outcomes often occur without transparency or accountability mechanisms.


Evidence

Specific examples of algorithmic discrimination in hiring processes and chatbot interactions with non-standard names and accents


Major discussion point

Systemic Risks and Real-World Impact of AI Bias


Topics

Human rights | Economic


Council of Europe Framework Convention on AI is the first international treaty placing AI under rule of law for human rights and equality

Explanation

Berge presents this treaty as a groundbreaking legal instrument that establishes international standards for AI governance. He positions it alongside the EU AI Act as providing a comprehensive legal foundation to ensure AI systems respect human rights rather than undermining them.


Evidence

The Framework Convention opened for signature last year and works together with the EU AI Act


Major discussion point

Legal and Regulatory Framework for AI Governance


Topics

Legal and regulatory | Human rights


Agreed with

– David Reichel
– Sara Haapalainen

Agreed on

Comprehensive legal frameworks are essential for addressing AI bias and discrimination



David Reichel

Speech speed

150 words per minute

Speech length

1410 words

Speech time

561 seconds

Biased training data from discriminatory practices leads to perpetuation of inequality in AI systems

Explanation

Reichel explains that when AI systems are trained on data that already contains discriminatory patterns, they will reproduce and amplify these biases in their decision-making. He emphasizes that this is a fundamental problem in AI development that requires careful attention to data quality and representativeness.


Evidence

Example of police forces engaging in racial profiling – using such biased data to automate police predictions would perpetuate discrimination


Major discussion point

AI Systems Perpetuate and Amplify Inequality


Topics

Human rights | Legal and regulatory


Agreed with

– Bjorn Berge
– Sara Haapalainen

Agreed on

AI systems perpetuate and amplify existing societal inequalities rather than being neutral tools


Face recognition technology initially worked well for white male faces but failed for black and female faces due to training data limitations

Explanation

Reichel uses this well-known case to illustrate how non-representative training data creates systematic bias in AI systems. The technology’s poor performance for certain demographic groups demonstrates the real-world consequences of inadequate diversity in training datasets.


Evidence

Historical case of face recognition technology that was primarily trained on white male faces, resulting in poor performance for other demographic groups


Major discussion point

Systemic Risks and Real-World Impact of AI Bias


Topics

Human rights | Sociocultural


Large language models contain intersectional biases, with higher error rates for gendered and dialect-specific language

Explanation

Reichel presents research findings showing that AI systems exhibit complex, intersectional biases that affect multiple identity categories simultaneously. His team’s testing revealed that these biases can be particularly pronounced when gender and religious or ethnic identities intersect, and when different dialects are involved.


Evidence

FRA’s testing of offensive speech detection algorithms showed higher error rates for feminine versions of religious terms in Italian, and higher error rates for African American dialect in neutral speech


Major discussion point

Systemic Risks and Real-World Impact of AI Bias


Topics

Human rights | Sociocultural


EU has multiple applicable laws including AI Act, Digital Services Act, and data protection laws to make AI fairer

Explanation

Reichel outlines the comprehensive legal framework available in the EU to address AI bias and discrimination. He emphasizes that these laws provide various requirements and mechanisms that can help both providers and deployers of AI systems identify and mitigate biases in their applications.


Evidence

Specific mention of EU AI Act with bias assessment requirements, Digital Services Act for large platforms, and existing data protection laws


Major discussion point

Legal and Regulatory Framework for AI Governance


Topics

Legal and regulatory | Human rights


Agreed with

– Bjorn Berge
– Sara Haapalainen

Agreed on

Comprehensive legal frameworks are essential for addressing AI bias and discrimination


Companies want to do right but lack knowledge on assessing biases and mitigating risks in practice

Explanation

Based on his research interactions with AI developers and users, Reichel identifies a gap between good intentions and practical implementation. He argues that while organizations genuinely want to develop fair AI systems, they often lack the technical knowledge and practical guidance needed to effectively identify and address biases.


Evidence

Research findings from speaking with developers and users of AI in companies and public administrations


Major discussion point

Practical Implementation and Assessment Solutions


Topics

Economic | Legal and regulatory


Agreed with

– Sara Haapalainen

Agreed on

Practical implementation support and guidance are crucial for effective AI governance


Fundamental rights impact assessments provide structured approach to evaluate AI systems for bias and discrimination

Explanation

Reichel advocates for systematic assessment procedures that examine AI systems across multiple dimensions of fundamental rights. He outlines a practical methodology that includes system description, privacy assessment, bias analysis, and remedy mechanisms to ensure comprehensive evaluation of AI systems; a schematic example of the proxy-scanning step appears below.


Evidence

Detailed methodology including system description, privacy/data protection review, protected characteristics analysis, proxy identification, and effective remedy provisions


Major discussion point

Practical Implementation and Assessment Solutions


Topics

Legal and regulatory | Human rights


Agreed with

– Sara Haapalainen

Agreed on

Practical implementation support and guidance are crucial for effective AI governance
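
As a hypothetical illustration of the proxy-identification step in this methodology, the sketch below flags input features that correlate strongly with a protected characteristic. The dataset, the feature names, and the 0.5 threshold are invented for the example; real assessments would combine such statistics with domain knowledge.

```python
# Minimal sketch of a "proxy scan": flag features that correlate strongly
# with a protected characteristic and could stand in for it even when the
# characteristic itself is removed from the model's inputs.
import pandas as pd

def find_proxies(df: pd.DataFrame, protected: str, threshold: float = 0.5) -> pd.Series:
    """Features whose absolute correlation with `protected` exceeds threshold."""
    corr = df.corr(numeric_only=True)[protected].drop(protected)
    return corr[corr.abs() > threshold].sort_values(key=abs, ascending=False)

# Invented applicant data: postcode deprivation may act as a proxy for
# ethnic origin even when ethnicity is excluded from the inputs.
df = pd.DataFrame({
    "ethnic_minority": [1, 1, 0, 0, 1, 0, 1, 0],
    "postcode_deprivation": [0.9, 0.8, 0.2, 0.1, 0.7, 0.3, 0.9, 0.2],
    "years_experience": [3, 5, 4, 6, 2, 5, 3, 7],
})
print(find_proxies(df, "ethnic_minority"))
```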


Regulation should be embraced as opportunity to identify societal biases and create fairer decision-making systems

Explanation

Reichel reframes regulatory compliance from a burden to an opportunity for improvement. He argues that when AI systems reveal biases, this actually provides valuable insights into existing societal inequalities, and that addressing these biases can lead to more accountable and fairer decision-making processes overall.


Evidence

Example of recruitment software preferring men over women as reflection of ongoing discriminatory practices in society


Major discussion point

Opportunities for Fairer AI Development


Topics

Legal and regulatory | Human rights



Sara Haapalainen

Speech speed

125 words per minute

Speech length

1019 words

Speech time

486 seconds

AI systems are not neutral and reproduce structural inequalities, requiring active promotion of equality

Explanation

Haapalainen emphasizes that AI systems inherently reflect and amplify existing societal biases rather than being objective tools. She argues that achieving equality requires deliberate, proactive efforts informed by the perspectives of those who are most likely to be negatively affected by AI systems.


Evidence

Reference to Council of Europe study on AI’s impact on equality and the Framework Convention on AI requirement for inclusive approaches


Major discussion point

AI Systems Perpetuate and Amplify Inequality


Topics

Human rights | Legal and regulatory


Agreed with

– Bjorn Berge
– David Reichel

Agreed on

AI systems perpetuate and amplify existing societal inequalities rather than being neutral tools


Anti-discrimination legislation applies to AI systems in addition to specific AI legislation

Explanation

Haapalainen clarifies that existing anti-discrimination laws remain relevant and applicable when AI systems are involved in decision-making processes. She emphasizes that AI governance should not be viewed in isolation but must be considered alongside established legal frameworks for preventing discrimination.


Major discussion point

Legal and Regulatory Framework for AI Governance


Topics

Legal and regulatory | Human rights


Agreed with

– Bjorn Berge
– David Reichel

Agreed on

Comprehensive legal frameworks are essential for addressing AI bias and discrimination


Joint project between Council of Europe and EU develops tools including policy guidelines and complaint protocols for equality bodies

Explanation

Haapalainen describes a collaborative initiative aimed at building practical capacity for addressing AI discrimination. The project focuses on empowering equality bodies across Europe with concrete tools and knowledge to detect, address, and provide remedies for AI-related discrimination.


Evidence

Specific tools mentioned include policy guidelines, online training on AI and anti-discrimination, and protocols for handling victims’ complaints


Major discussion point

Practical Implementation and Assessment Solutions


Topics

Legal and regulatory | Human rights


Agreed with

– David Reichel

Agreed on

Practical implementation support and guidance are crucial for effective AI governance


AI literacy and awareness of risks and opportunities are essential for ensuring equality

Explanation

Based on audience input through Mentimeter, Haapalainen highlights the critical importance of education and awareness-raising about AI systems. She notes that increasing AI literacy is viewed as a key opportunity and is incorporated into European regulatory frameworks.


Evidence

Audience feedback from Mentimeter identifying AI literacy as top opportunity, and inclusion in European regulation


Major discussion point

Opportunities for Fairer AI Development


Topics

Human rights | Sociocultural


Access to remedies and documentation is crucial when discrimination occurs, with equality bodies playing important oversight role

Explanation

Haapalainen emphasizes the importance of accountability mechanisms when AI systems cause discrimination. She argues that victims need both access to justice and the ability to challenge AI decisions, which requires transparency in AI system documentation and active involvement from human rights institutions.


Evidence

Mention of equality bodies’ role in informing victims’ rights, requesting AI system testing, and need for access to AI system documentation


Major discussion point

Opportunities for Fairer AI Development


Topics

Human rights | Legal and regulatory


Agreements

Agreement points

AI systems perpetuate and amplify existing societal inequalities rather than being neutral tools

Speakers

– Bjorn Berge
– David Reichel
– Sara Haapalainen

Arguments

Algorithms screening out women for jobs, chatbots that misread names and accents, decisions made with no explanation or recourse


Biased training data from discriminatory practices leads to perpetuation of inequality in AI systems


AI systems are not neutral and reproduce structural inequalities, requiring active promotion of equality


Summary

All three speakers agree that AI systems are not objective or neutral tools, but rather reflect and amplify existing societal biases and discrimination patterns, creating systemic risks for equality


Topics

Human rights | Legal and regulatory


Comprehensive legal frameworks are essential for addressing AI bias and discrimination

Speakers

– Bjorn Berge
– David Reichel
– Sara Haapalainen

Arguments

Council of Europe Framework Convention on AI is the first international treaty placing AI under rule of law for human rights and equality


EU has multiple applicable laws including AI Act, Digital Services Act, and data protection laws to make AI fairer


Anti-discrimination legislation applies to AI systems in addition to specific AI legislation


Summary

All speakers emphasize the importance of robust legal and regulatory frameworks, highlighting both new AI-specific legislation and the continued relevance of existing anti-discrimination laws


Topics

Legal and regulatory | Human rights


Practical implementation support and guidance are crucial for effective AI governance

Speakers

– David Reichel
– Sara Haapalainen

Arguments

Companies want to do right but lack knowledge on assessing biases and mitigating risks in practice


Fundamental rights impact assessments provide structured approach to evaluate AI systems for bias and discrimination


Joint project between Council of Europe and EU develops tools including policy guidelines and complaint protocols for equality bodies


Summary

Both speakers recognize that while organizations have good intentions, they need concrete tools, guidance, and capacity-building support to effectively implement fair AI practices


Topics

Legal and regulatory | Human rights


Similar viewpoints

Both speakers view regulatory frameworks and awareness-raising as opportunities rather than burdens, emphasizing the positive potential for creating more equitable AI systems through proper governance and education

Speakers

– David Reichel
– Sara Haapalainen

Arguments

Regulation should be embraced as opportunity to identify societal biases and create fairer decision-making systems


AI literacy and awareness of risks and opportunities are essential for ensuring equality


Topics

Human rights | Legal and regulatory | Sociocultural


Both speakers emphasize the importance of systematic assessment procedures and accountability mechanisms, including the need for transparency and access to remedies when AI discrimination occurs

Speakers

– David Reichel
– Sara Haapalainen

Arguments

Fundamental rights impact assessments provide structured approach to evaluate AI systems for bias and discrimination


Access to remedies and documentation is crucial when discrimination occurs, with equality bodies playing important oversight role


Topics

Human rights | Legal and regulatory


Unexpected consensus

Reframing regulation as opportunity rather than burden

Speakers

– David Reichel
– Sara Haapalainen

Arguments

Regulation should be embraced as opportunity to identify societal biases and create fairer decision-making systems


AI literacy and awareness of risks and opportunities are essential for ensuring equality


Explanation

Rather than viewing regulatory requirements as obstacles to innovation, both speakers consistently frame them as valuable opportunities for improvement and learning about societal inequalities, which is somewhat unexpected in technology policy discussions


Topics

Legal and regulatory | Human rights


Overall assessment

Summary

The speakers demonstrate strong consensus on fundamental issues: AI systems perpetuate inequality, comprehensive legal frameworks are necessary, and practical implementation support is crucial. They share a unified vision of AI governance that prioritizes human rights and equality.


Consensus level

Very high level of consensus with no apparent disagreements. This strong alignment suggests a mature understanding of AI governance challenges and indicates that European institutions have developed a coherent, rights-based approach to AI regulation that could serve as a model for global AI governance frameworks.


Differences

Overall assessment

Summary

The discussion shows remarkable consensus among all three speakers on the fundamental issues surrounding AI bias and discrimination. There were no direct disagreements identified in the transcript.


Disagreement level

Very low disagreement level. The speakers demonstrate strong alignment on key issues including the existence of AI bias, the need for regulation, the importance of practical implementation tools, and the necessity of protecting human rights. The few partial agreements identified relate to different emphases in approach rather than fundamental disagreements. This high level of consensus suggests a mature understanding of AI bias issues among European institutions and may facilitate coordinated policy implementation, though it might also indicate a need for broader perspectives in future discussions.




Takeaways

Key takeaways

AI systems are not neutral and systematically perpetuate and amplify existing societal inequalities, particularly affecting underrepresented groups


The lack of diversity in AI development (women represent only 22% of AI workforce) directly influences how fair or biased systems become


Biased training data and inadequate representation in datasets lead to discriminatory outcomes in real-world applications like recruitment, face recognition, and content moderation


Europe has established comprehensive legal frameworks including the Council of Europe Framework Convention on AI and EU AI Act to address these issues


Fundamental rights impact assessments and human rights-based approaches provide practical tools for identifying and mitigating AI bias


AI should be treated as both a technological and justice issue, requiring systemic approaches rather than isolated technical fixes


Existing anti-discrimination legislation applies to AI systems in addition to specific AI regulations


AI literacy and awareness are essential for ensuring equality in AI development and deployment


Resolutions and action items

Council of Europe and EU are developing joint tools for equality bodies including policy guidelines, online training, and complaint handling protocols


FRA will publish reports on high-risk AI, remote biometric identification, and digitalization of justice later in the year


Council of Europe’s Committee of Experts is drafting a soft law instrument on AI equality and non-discrimination for states


Organizations should embrace current regulations as opportunities to conduct proper bias assessments before deploying AI systems


Developers and users need practical guidance on how to assess biases and mitigate risks in AI systems


Unresolved issues

How to effectively address intersectional biases in large language models that are increasingly used in AI applications


Ensuring adequate access to AI system documentation for equality bodies and civil society organizations to conduct proper oversight


Bridging the gap between companies wanting to ‘do the right thing’ and having practical knowledge to assess and mitigate AI bias


Balancing efficiency goals (primary driver for AI adoption) with fairness and fundamental rights considerations


Addressing the challenge that biases can come from multiple sources including training data and underlying large language models


Suggested compromises

Using regulation as an opportunity rather than burden – embracing requirements for bias assessment as a way to improve AI systems


Combining multiple legal frameworks (AI-specific laws, data protection, and anti-discrimination legislation) for comprehensive coverage


Involving equality bodies, national human rights institutions, and civil society in shaping, monitoring, and improving AI systems rather than leaving it solely to developers


Thought provoking comments

Let us not treat AI as just a technological issue. Let us treat it also as a justice issue. Because if AI is to serve the public good, it must serve everyone equally and fairly. Period.

Speaker

Björn Berge


Reason

This comment reframes the entire AI discussion by shifting the paradigm from viewing AI primarily through a technical lens to understanding it as fundamentally a matter of social justice and human rights. It challenges the common tendency to treat AI development as value-neutral and emphasizes that equality must be built into AI systems from the ground up.


Impact

This statement set the philosophical foundation for the entire discussion, establishing that technical solutions alone are insufficient. It influenced the subsequent speakers to focus not just on identifying biases but on systemic approaches to ensuring fairness, and positioned the conversation within a human rights framework rather than purely technical considerations.


Back then, the majority of travelers actually said, oh, that’s good because a lot of border guards are discriminating us. When the machine is there, it’s better and it’s neutral. Unfortunately, 10 years later now, we learned a lot of examples that this is unfortunately not true. Machines are not neutral.

Speaker

David Reichel


Reason

This observation is particularly insightful because it reveals the dangerous myth of technological neutrality through a concrete, relatable example. It shows how public perception of AI fairness can be fundamentally flawed and demonstrates the evolution of understanding about AI bias over the past decade.


Impact

This comment served as a crucial turning point that debunked the neutrality myth and provided empirical grounding for why AI regulation is necessary. It shifted the discussion from theoretical concerns to practical, evidence-based examples of how AI perpetuates discrimination, making the abstract concept of algorithmic bias tangible for the audience.


I was also surprised to see that very rarely those using AI say we want to use AI to actually make better decisions, to make fairer decisions… If we look into biases, we actually learn a lot about where society is going wrong. If there’s recruitment software that prefers men over women, then this reflects ongoing practices.

Speaker

David Reichel


Reason

This comment is profound because it reframes AI bias detection as a diagnostic tool for uncovering existing societal inequalities rather than just a technical problem to solve. It challenges organizations to see bias assessment not as a compliance burden but as an opportunity for social insight and improvement.


Impact

This observation shifted the conversation from viewing bias mitigation as a defensive measure to seeing it as a proactive opportunity for social progress. It influenced the discussion toward embracing regulation and assessment processes as valuable tools for understanding and addressing systemic discrimination, rather than viewing them as obstacles to AI deployment.


It’s of course not innovative if you use AI and discriminate. It’s also not efficient and not sustainable. So there’s the opportunity to actually create a better AI.

Speaker

David Reichel


Reason

This comment is strategically brilliant because it reframes fairness in AI using business language that resonates with decision-makers. By connecting ethical AI practices to innovation, efficiency, and sustainability, it makes a compelling business case for equality rather than positioning it as a constraint on technological progress.


Impact

This statement provided a bridge between ethical imperatives and practical business considerations, potentially influencing how organizations approach AI development. It helped conclude the discussion on a constructive note, showing that ethical AI and successful AI are not competing goals but complementary objectives.


Overall assessment

These key comments fundamentally shaped the discussion by establishing a progression from philosophical foundation to practical implementation. Berge’s justice-focused framing set the moral imperative, Reichel’s neutrality myth-busting provided the empirical foundation, his insight about bias as societal diagnosis offered a constructive reframing, and his business case for ethical AI provided actionable motivation. Together, these comments transformed what could have been a technical discussion about algorithmic problems into a comprehensive examination of AI as a tool for either perpetuating or addressing social inequality. The speakers successfully moved the audience from awareness of problems to understanding of opportunities, creating a narrative arc that was both educational and empowering.


Follow-up questions

How can companies and public administrations practically assess biases and mitigate risks in AI systems?

Speaker

David Reichel


Explanation

David noted that many organizations want to do the right thing but often don’t know how to actually assess biases and mitigate risks in practice, indicating a need for more practical guidance


How can AI be used to make better and fairer decisions rather than just more efficient ones?

Speaker

David Reichel


Explanation

David expressed surprise that companies almost exclusively use AI for efficiency purposes rather than to make fairer decisions, suggesting this represents an underexplored opportunity


How can we ensure access to AI system documentation for testing and remedy purposes?

Speaker

Sara Haapalainen


Explanation

Sara mentioned that equality bodies and CSOs need access to documentation of AI systems to request testing, but this access requirement needs further development


What specific guidance will emerge from the Council of Europe’s soft law instrument on AI equality and non-discrimination?

Speaker

Sara Haapalainen


Explanation

Sara mentioned that the Committee of Experts is currently drafting this instrument, but the specific guidance it will provide is still being developed


How can intersectional biases in AI systems be better detected and addressed?

Speaker

David Reichel


Explanation

David provided examples of intersectional bias (e.g., gendered nouns combined with religious terms) but this complex area requires further research and solutions


How can bias from large language models be mitigated when they are incorporated into other AI tools?

Speaker

David Reichel


Explanation

David noted that biases were coming from large language models used in general purpose AI, not just training data, indicating a need for solutions to address this source of bias


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.