Day 0 Event #174 Human Rights Impacts of AI on Marginalized Populations
Session at a Glance
Summary
This panel discussion focused on the impacts of artificial intelligence (AI) on marginalized populations, exploring both the opportunities and risks presented by AI technologies. Experts from government, industry, and civil society organizations shared insights on how AI can advance equity in areas like healthcare and education, while also potentially exacerbating existing biases and inequalities.
Key concerns raised included the lack of diverse representation in AI development, biases in training data that reinforce discrimination, and the misuse of AI for surveillance and censorship by authoritarian governments. Panelists emphasized the need for human rights impact assessments, increased transparency from companies and governments, and the inclusion of marginalized voices in AI governance discussions.
Specific recommendations included conducting regular bias audits of AI systems, strengthening data protection regulations, creating inclusive digital spaces, and establishing clear accountability mechanisms. The importance of addressing AI use in military contexts was also highlighted as a critical area requiring more attention and safeguards.
The discussion underscored the necessity of a multi-stakeholder approach to AI governance, with governments, companies, and civil society working together to ensure AI benefits all of society. Panelists stressed that protecting human rights must be central to AI development and deployment, particularly for the most vulnerable populations.
The conversation concluded by emphasizing the timeliness of these issues in light of ongoing UN processes and the need for continued international collaboration to shape inclusive and equitable AI systems that respect the rights of all individuals, regardless of their background or identity.
Keypoints
Major discussion points:
– The risks and opportunities of AI for marginalized populations
– The need for diverse representation in AI development and governance
– Challenges of AI use in military/defense contexts and lack of transparency
– The importance of human rights considerations in AI policy and regulation
– The role of multistakeholder collaboration in addressing AI challenges
The overall purpose of the discussion was to examine how artificial intelligence impacts marginalized populations, identify key risks and opportunities, and explore ways that governments, companies, and civil society can work together to ensure AI benefits all of society while mitigating potential harms.
The tone of the discussion was largely serious and concerned, with speakers highlighting significant challenges and risks posed by AI to vulnerable groups. However, there were also notes of cautious optimism about AI’s potential benefits if developed responsibly. The tone became more action-oriented toward the end, with concrete suggestions for next steps and collaborations to address the issues raised.
Speakers
– Alisson Peters: Deputy Assistant Secretary of State for the Bureau of Democracy, Human Rights, and Labor in the U.S. State Department
– Desirée Cormier Smith: Special Representative for Racial Equity and Justice at the U.S. State Department
– Dr. Geeta Rao Gupta: Special Representative for Gender Equity and Equality at the U.S. State Department
– Jessica Stern: U.S. Special Envoy to Advance the Human Rights of LGBTQI+ Persons
– Kelly M. Fay Rodriguez: Special Representative for International Labor Affairs at the U.S. State Department
– Sara Minkara: U.S. Special Advisor on International Disability Rights
– Nicol Turner Lee: Expert on the intersection of race and technology, digital divide, and digital equality
– Nighat Dad: Founder and Executive Director of the Digital Rights Foundation, expert on online harassment and digital security for women
– Rasha Younes: Human rights advocate and researcher at Human Rights Watch, expert on LGBTQI+ rights
– Amy Colando: Head of Microsoft’s Responsible Business Practice, expert on technology and human rights
– Guus Van Zwoll: Representative from the Netherlands government, involved in the Freedom Online Coalition
Additional speakers:
– Dr. Lee: Audience member asking a question
– Usama Kilji: Representative from Bolo Bhi, a digital rights organization in Pakistan
– Khaled Mansour: Member of Meta Oversight Board
Full session report
Expanded Summary of Panel Discussion on AI’s Impact on Marginalised Populations
Introduction
This panel discussion brought together experts from government, industry, and civil society to explore the impacts of artificial intelligence (AI) on marginalised populations. Alisson Peters, Deputy Assistant Secretary of State for the Bureau of Democracy, Human Rights, and Labor, opened the discussion by emphasizing the importance of a multistakeholder approach to understanding AI’s societal impacts and the U.S. government’s commitment to addressing AI governance.
Key Themes and Discussion Points
1. Risks and Opportunities of AI for Marginalised Populations
The panel acknowledged the dual nature of AI’s potential impact. Nicol Turner Lee, an expert on the intersection of race and technology, highlighted both opportunities and challenges. She noted AI’s potential to improve healthcare outcomes and educational access for underserved communities, while also cautioning that AI systems often reinforce historical discrimination against marginalised groups. Turner Lee provided specific examples, such as AI-driven hiring tools potentially discriminating against women and minorities, and facial recognition systems misidentifying people of color at higher rates.
Jessica Stern, U.S. Special Envoy to Advance the Human Rights of LGBTQI+ Persons, offered a nuanced perspective, stating that “Computers might be binary, but people are not.” She suggested that generative AI could help reimagine inclusive futures and allow for safe and authentic self-expression, while also cautioning about the need to address biases in AI training data.
Sara Minkara, U.S. Special Advisor on International Disability Rights, pointed out that AI development often leaves out the disability community entirely, highlighting the need for inclusive design and development processes.
2. Addressing Biases and Harms in AI Systems
A significant portion of the discussion focused on strategies to mitigate biases and potential harms in AI systems. Nicol Turner Lee emphasised the need to interrogate AI models for bias and question whether automation is appropriate in various contexts. Nighat Dad, Founder and Executive Director of the Digital Rights Foundation, called on companies to do more to address harms on their platforms.
Rasha Younes, a human rights advocate and researcher at Human Rights Watch, proposed concrete steps, suggesting that “developers should conduct regular bias audits and build diverse representative data sets.” She also recommended that policymakers require independent testing of AI systems for biases, particularly when deployed in public-facing roles.
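For readers unfamiliar with what such an audit involves in practice, a minimal sketch of one common check is given below: comparing a model's selection (positive-outcome) rates across demographic groups against the "four-fifths" rule of thumb. The group labels, decision log, weights, and thresholds are illustrative assumptions for this sketch, not details drawn from the session.

```python
# Minimal sketch of one step in a bias audit: compare a model's selection
# (positive-outcome) rates across demographic groups and flag any group whose
# rate falls below 80% of the most-favoured group's rate (the common
# "four-fifths" rule of thumb). Group names and records are illustrative only.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group_label, model_decision) with decision in {0, 1}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_report(records, threshold=0.8):
    """Return per-group selection rate, ratio to the most-favoured group, and a flag."""
    rates = selection_rates(records)
    reference = max(rates.values())  # most-favoured group's selection rate
    return {
        group: {"rate": rate, "ratio": rate / reference, "flag": rate / reference < threshold}
        for group, rate in rates.items()
    }

if __name__ == "__main__":
    # Toy audit log: (demographic group, hiring-tool decision)
    log = [("group_a", 1), ("group_a", 1), ("group_a", 0),
           ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0)]
    for group, stats in disparate_impact_report(log).items():
        print(group, stats)
```

A real audit would of course go much further, covering error rates, intersectional subgroups, and the question Turner Lee raises above of whether the decision should be automated at all.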
Amy Colando, Head of Microsoft’s Responsible Business Practice, shared insights on how her company employs various approaches to combat societal biases in AI systems. She discussed Microsoft’s efforts to increase transparency while maintaining customer confidentiality, highlighting the tension between these two objectives. Colando also detailed Microsoft’s responsible AI development practices, including ethical guidelines, diverse team composition, and ongoing research into AI safety and fairness.
3. Ensuring Inclusive AI Governance
The need for more inclusive approaches to AI governance was a recurring theme throughout the discussion. Alisson Peters highlighted U.S. government efforts, including a national security memorandum on AI that emphasizes human rights considerations. She noted that the U.S. government has policies to ensure human rights assessments in AI procurement.
Nighat Dad pointed out that AI governance conversations are heavily concentrated in Global North countries, often excluding perspectives from regions where these technologies are deployed. Rasha Younes stressed the need to strengthen protections against digital targeting of vulnerable groups.
Guus Van Zwoll, representing the Netherlands government and the Freedom Online Coalition (FOC), discussed efforts to keep human rights central in AI governance discussions. He mentioned that the FOC would be updating its 2020 statement on AI and human rights in the coming year, organizing workshops to educate policymakers on AI challenges for human rights, and spotlighting examples of AI that can advance human rights for marginalised groups.
4. Transparency and Accountability in AI Development
Khaled Mansour, a member of Meta’s Oversight Board, highlighted the challenge of transparency in human rights impact assessments. The discussion also touched on the need for more conversation on the embedded militarisation of everyday AI tools, a concern raised by audience member Usama Kilji, who called for more discussions and safeguards around military use of AI.
Unresolved Issues and Future Directions
Despite the comprehensive nature of the discussion, several issues remained unresolved. These included questions about how to effectively include marginalised voices in AI governance discussions, balancing transparency in human rights impact assessments with customer confidentiality, addressing AI use in military settings and its potential humanitarian impacts, and closing the digital divide to ensure equitable access to AI benefits.
The panel suggested several areas for further research and action, including:
1. Revising and updating the 2020 FOC statement on AI and human rights to reflect current AI developments and emphasise the disproportionate impact on marginalised communities.
2. Identifying and highlighting the disproportionate impact of AI on marginalised groups through tools like Stanford’s AI MISUSE tracker.
3. Conducting community-rooted AI research that prioritises diversity and addresses AI impacts on marginalised groups.
4. Exploring how AI can be leveraged to empower marginalised groups while ensuring accountability and ethical development.
Conclusion
The discussion underscored the necessity of a multi-stakeholder approach to AI governance, with governments, companies, and civil society working together to ensure AI benefits all of society. Panellists stressed that protecting human rights must be central to AI development and deployment, particularly for the most vulnerable populations. The conversation highlighted the ongoing challenges and opportunities in shaping inclusive and equitable AI systems that respect the rights of all individuals, regardless of their background or identity.
Session Transcript
Alisson Peters: All right, good afternoon, good evening, everyone, we’re going to get started, forgive us for some technical difficulties. Can everyone hear us? Hopefully everyone online can hear us as well. Well, thank you all for joining us, both in person and online. I’m going to try to talk as loudly as I possibly can, because I know it’s been a bit challenging for folks online today to hear every session. It’s my pleasure to be here on behalf of the United States government, where I serve as Deputy Assistant Secretary of State for our Bureau of Democracy, Human Rights, and Labor in the State Department. Before we get started in today’s session, we have the esteemed honor of welcoming virtually a number of our special envoys and representatives to the United States government representing various different marginalized populations, and they wanted to send their greetings as well.
Desirée Cormier Smith: So AI offers incredible potential to advance equity by increasing access to health care, education, and economic opportunity for those who need them the most. However, too often marginalized populations bear the worst harms of AI. There is vast evidence showing how AI systems can reinforce historical patterns of discrimination that disproportionately impact people of African descent, indigenous peoples, Roma people, and other marginalized racial and ethnic communities. And the risks of harm are the most pronounced for people who experience multiple
Dr. Geeta Rao Gupta: That’s right. AI tools are aiding the creation and dissemination of technology facilitated gender-based violence, or TFGBV, especially against women and children. This especially pernicious form of harassment and abuse is already threatening the ability of women and girls to participate in all spaces, online and offline, and has grave consequences for democracy.
Jessica Stern: Yes, computers might be binary, but people are not. Generative AI can help us reimagine inclusive futures and express ourselves safely and authentically. However, we need to be mindful about biases in the data that AI tools and systems are built on and how they translate into individuals’ lives. Nuanced data about LGBTQI plus people with appropriate privacy protections can help ensure that recommendation algorithms governing, for example, our shopping habits or content we consume on social media don’t entrench harmful social stereotypes or censor the beautiful diversity of humanity. AI is an exciting set of technologies that have the potential across all sectors to help us consider and integrate diverse perspectives.
Kelly M. Fay Rodriguez: Humans are the future of work, and freedom of association and collective bargaining are central to safeguarding workers’ rights and standards amid the rapid expansion of AI technologies, including and in particular for marginalized populations. Unions play an essential role in advocating for practices that can increase the meaningful representation of women and diverse groups and marginalized populations in AI. They advocate for safe work environments, limiting invasive and unsafe workplace monitoring. They ensure fair employment practices, secure equitable compensation, and ensure that benefits are shared.
Sara Minkara: AI is a reality of our present and our future, but also what is a reality is that a lot of time AI is built in a way that leaves us behind. And when I say us, I’m saying the disability community. We need to ensure that AI in development, in design, in the testing, in implementation is accessible for everyone, including the disability community. And not just on the assistive technology side of things, but for all technology.
Alisson Peters: Thank you very much to all of our special representatives and envoys. I think as you heard here at the start of our session, our shared goal in the United States government is really to harness the opportunities of artificial intelligence, whether that be on economic growth, increased access to quality education and advancement in medical care, while mitigating the risks. And we know all too often some of AI’s most egregious harms fall on marginalized populations and those experiencing multiple and intersecting forms of discrimination, including algorithmic biases, increased surveillance, and online harassment. We’re witnessing around the globe an unfortunate trend of governments misusing artificial intelligence in ways that significantly impact marginalized populations, such as through social media monitoring and other forms of surveillance, censorship, harassment, and information manipulation. To counter these abuses over the last four years, the United States government has taken several steps to encourage safeguards at the national level. We’ve introduced a number of executive orders and memos into our government system to safeguard the use and deployment and development of artificial intelligence for human rights. And at the international level, we’re working closely with the Freedom Online Coalition and other key like-minded partners through the UN and other multilateral systems to lay the groundwork for continued international and multi-stakeholder collaboration for years to come. But there’s more work to be done, and that’s where today’s discussion really comes in. The only way that governments can work to ensure that marginalized populations aren’t disproportionately harmed by technological advancements is in partnership with you all around the room, around IGF and those that are online. We’re focused heavily on ensuring that we build safeguards into our systems so that we do not see the dissemination of disinformation and harmful synthetic imagery that can harm marginalized populations, on how AI systems can exacerbate existing digital and real-world divides, and on how they can reinforce stereotypes that further stigmatization, especially when these systems are not accessible for all their users. So we’re quite fortunate today to be joined by an esteemed panel of experts that have really gathered to work to fight back against these worrying threats and trends. We have a number of our panelists online and we’re fortunate to be joined here in the room by two of them as well. First, we’ll hear from Dr. Nicol Turner Lee, who’s a leading voice on the intersection of race and technology and the digital divide and is a recognized expert on issues of digital equality and inclusion. Her work ensures that all communities, particularly marginalized ones, benefit from technological advancements. We’re also joined by Amy Colando, who’s a lawyer with deep expertise on the intersection of technology and human rights. As the head of Microsoft’s Responsible Business Practice, she leads a team dedicated to advancing Microsoft’s commitment to human rights norms and a responsible value chain that respects and advances human rights. Nighat Dad, our friend and partner, is a globally recognized lawyer and advocate for women’s rights and digital privacy. She’s the founder and executive director of the Digital Rights Foundation, which focuses on issues of online harassment,
data protection, and digital security for women and marginalized populations in Pakistan, and she is a member of Meta’s Oversight Board. And Rasha Younes is a prominent human rights advocate and researcher at Human Rights Watch. Her work has highlighted the systemic discrimination and violence faced by LGBTQI plus individuals in the MENA region and beyond, and her efforts have been instrumental in bringing international attention to these issues and pushing for legal reforms. So first I wanted to start out by setting the scene a little bit in terms of both the risks and opportunities that come from AI and the threats to marginalized populations. Let’s kick things off with Nicol, I’ll turn to you first. Some of these issues have benefited from extensive international conversations, from the recognition in the engineering community over the past decade that it is critical to address harmful biases in AI, to efforts to curb the misuse of artificial intelligence and generative AI tools for image-based sexual abuse. Help us set the stage. Where do you think important progress has been made over the past several years, and what current challenges do you think need to be addressed or elevated on the agenda, particularly as we’re all gathered here this week at IGF to address critical internet governance discussions? I think it’s really important that you help us think a little bit thoughtfully about where the current gaps and opportunities exist that we can leverage.
Nicol Turner Lee: Well thank you so much for the kind introduction and also thank you to all of you and the IGF for hosting this conversation. Before we start though, I do also want to say that I am the author of a new book, Digitally Invisible, to ensure that people know that there is content that I’ve written about this disconnect between the opportunities of technology and those who are marginalized or impacted by it. So I want to lean into this conversation on where we have seen some opportunity and where we have challenges. And in particular, in my few short moments just answering this question, I do want to point out that one of the opportunities that has become most prominent is our ability to engage in artificial intelligence, given the distributed compute power that we have. So I think it’s really important to have this conversation, because the opportunities also pose their own threats. But what we are seeing is the ability to distribute networks because we are building compute power that has real capacity. I’ve been doing this for about 30 years in terms of technology and its accessibility by people of color in particular, and we’ve not seen this very distributed network evolve as it has done today with chips and power. The other thing that has been an opportunity of AI has been the way it’s been integrated into a variety of verticals. At the Brookings Institution, we started what’s called an AI Equity Lab, which allows us to workshop journalism and AI, health care and AI, criminal justice and AI. And why we do that, first and foremost, by putting the name of the sector and then AI, is that we’ve seen an incredible influence of technology tools on these verticals that in essence determine quality of life on the social welfare side, as well as the economic opportunity side. And so I think we’ve come a long way, for example, in health care. We’re actually seeing personalized medicine. We’re seeing more efficiency among doctors when it comes to personalized medicine and the management of health. We’re seeing a lot more contemporary reaction and quick reaction. We saw that during the COVID vaccine development, where things that would have taken a very long time in our intellectual discovery are now happening through AI. And I think another area where we’ve seen a lot of promise has been in climate, where we’re able to use drone-enabled surveillance to look at where we have thermal outputs or throughputs that have potential danger for natural disaster or wildfires. We’re also seeing for agriculture, for example, because many of these are very intersectional, the ability to look at climate as it relates to watering times or when we’re able to be most productive in crop development. So I wanna put that out there because I often sound like a pessimist, which I will sound like now when it comes to AI and marginalized communities. So where we see these efficiency growth spurts, one of the areas that we’re seeing a lot of bias, as it’s already been indicated by many of the speakers, is when it comes to flipping these opportunities into challenges or hurdles. So I’ll just close with a couple of thoughts that will frame, hopefully, the rest of the conversation. Obviously, there’s demographic bias. In the United States, that demographic bias is profoundly defined by race and ethnicity, and gender has become more of a human rights concern.
In other countries outside of the United States, class has also found its way into the demographic biases, and in both the United States and outside the United States, geography has become a bias. Where you live, who you are, and what you do matters because it is reflected in what we call at the Brookings Institution the traumatized nature of the data which is training these models. It comes with those historical biases, and those historical biases are often traumatized, meaning if there are systemic inequalities that point to the unequal access to education, for example, they will show up in the training data, and as a result, have a consequential outcome of either greater surveillance or less utility for students that may be in that category or impacted. The other area where we actually have challenges is not just who’s commoditized by AI, those who are impacted by these models, but who’s creating them. The lack of representation of who sits at the table to design the model, absent the people who are actually impacted by them, creates, I think, an over-concentration of power that has consequences that can foreclose on the economic and social opportunities that AI models can create, the ones that I just spoke about. For example, when we think about who is developing models for the health of black women, let’s just take that for example, people may not understand that the lack of participation of black women in clinical trials may mean that they may not show up, particularly when it comes to breast cancer diagnosis, in training models. This was actually recently put out by the Journal of American Medicine, that black women disproportionately experience worse breast cancer outcomes because their data is not represented in major data sets. That actually shows up in AI because AI is not divorced from the market-based data that is actually training these systems. The other thing when it comes to the challenges that we have with AI is the fact that, as it’s been mentioned and as my book suggests, we have a digital divide. We’re creating AI systems, and in many respects, we haven’t closed the accessibility divide. That creates its own set of challenges as to who will be able to benefit. And when you also think about generative AI, and I’ll sort of close here to provide enough time for my colleagues to chime in as well. When we think about the global majority, we do a lot of work at the Brookings Institution on how these systems show up, not only in terms of marginalized populations in the US, but all over the world. In the African Union, for example, we know that there’s a digital language divide, and generative AI is primarily English-based, and it is not necessarily trained on the plethora of dialects that come out of a variety of global majority countries. As a result of that, we see challenges when it comes to representation, not only in training data, but whether or not populations actually see themselves in these tools, particularly generative AI that is meant and designed to be, again, a lever for economic and social mobility in those areas. I mean that, along with the rights of the workers who are taking those jobs to annotate the data. I could go on and on, but there are so many structural, behavioral, as well as output or consequential outcomes that occur when we don’t have the right people at the table, when we continue to commoditize subjects from marginalized populations to fuel the AI models that we’re developing. Third, we don’t interrogate these models.
And I’ll just say this, we don’t interrogate them for bias. We also don’t interrogate whether or not they should be used at all, or a decision should be automated in the first place. So I will stop here and look forward to this conversation. Hopefully I gave you enough to talk about as we go into the next speakers. And thank you so much for having me.
Alisson Peters: Thank you so much, Nicol. I think you did a really phenomenal job, first and foremost, plugging your book, which I encourage everyone to buy, but also both laying out the real tangible opportunities that we see from AI, everything from journalism, healthcare, addressing the impacts of climate change, and then laying out in detail some of the tremendous risks that we see for marginalized populations. So you addressed issues around the accessibility divide, exacerbating existing biases in our societies through the use of big data. You talked about who gets a seat at the table in the design, deployment and use of these technologies and beyond. So I next wanted to turn to Nighat. Your organization has really been on the front lines of documenting, I think, some of the risks that Nicol just laid out, the exact impact to marginalized populations, whether that be to women and girls. And I know you’ve done a lot of work on tech-facilitated gender-based violence or impacts to human rights defenders or religious minorities. And I’m hoping you can sort of build off of what Nicol was talking about in terms of the broader risks that she laid out and give us a tangible example or two of where you’ve seen both the benefits and risks of AI tools to marginalized populations, and then really, because we do have many different stakeholders at the table this week in the IGF conversations, whether that be from governments or the private sector, where do you think there are gaps that require more attention in our international discussions?
Nighat Dad: At Digital Rights Foundation, we have been doing a lot of work around addressing tech-facilitated gender-based violence, and I feel that talking about AI or AI tools is an extension of what we have been talking about for years around digital tools or digital rights, and all the harms that we are now connecting with AI are actually an extension of those harms, made more sophisticated and advanced by the usage of AI. That’s the same case with tech-facilitated gender-based violence, where we are now seeing how deepfake images of women and young girls are actually creating more risks for them, specifically when they are from regions and cultures which are more conservative, where the honor of families or the society is connected to women’s bodies. One challenge that we are witnessing is basically verifying whether these deepfakes are actually real or unreal. That was not the case before AI-generated content when it comes to images and videos. I think another challenge is regulating this space. Tech companies really have to do a lot, and sitting on Meta’s Oversight Board, we actually framed our own experience as a board in terms of what companies like Meta can do to use automation around dealing with the harms on their platforms, and released a paper on this. When it comes to governments, I feel that there is a huge gap in governing AI. These conversations, and I always say this, even while sitting at the UN Secretary General’s AI high-level body, are very much concentrated in some global North countries. And in the past, we have seen how technology that is developed, designed and built elsewhere is mostly, you know, dumped in our regions, and we have no say in how these technologies are designed for the marginalized groups in our regions. Now, that exactly is the case with AI tools as well. I mean, there are some benefits, where it’s also being used in health care and climate monitoring, and AI-powered translation tools are also breaking down language barriers for marginalized groups. But I feel that all these opportunities are still connected to the entire cycle of how AI is being developed, designed, processed and deployed. I think there are lots of things to say, but there is a huge responsibility on AI companies and tech platforms, where all these harms are being increased by the use of AI. But also on governments: how can we bring more accountability and oversight into the regulations that they are framing without including civil society voices and without having a conversation on human rights violations when it comes to AI tools?
Alisson Peters: Thanks so much, Nighat. I think, you know, you raise a really important point that I suspect we will have a lot of additional conversations about this week at IGF, which is that if we don’t protect this multistakeholder model of Internet governance, a multistakeholder model of conversations around the regulation and governance of AI and emerging technologies, then we will be missing an entire part of the conversation, which is how these tools are being deployed and used in ways that are impacting the whole of society, not just the governments and the people representing them. I think that’s a good pivot over to you, Rasha, as you’ve done a lot of work looking at the impacts of AI tools from government misuse of these technologies. And I know you’ve done an incredible amount of work documenting the ways in which autocratic governments have used technology to repress marginalized populations, particularly LGBTQI plus persons. I’m hoping you could share a little bit of insight on how policymakers and AI developers should be thinking about these issues in relation to the governance and regulation of artificial intelligence, particularly sort of reflecting on the years of research that you’ve done.
Rasha Younes: Thank you so much, and thank you for having me today. In 2023, we published a report on the digital targeting of LGBTQI plus people across the Middle East and North Africa region, particularly in Iraq, Lebanon, Egypt, Jordan and Tunisia. What we found is that governments are using monitoring tools, usually manual monitoring, not sophisticated tools, to target and harass LGBTQI plus people. And the significant finding that we had is that these abuses do not end in the instance of online harm, in the sense that they are not transient, but reverberate through individual lives in ways that often ruin their lives entirely. In our report and in our follow-up campaign, which we published in 2024, we urge particularly technology platforms, such as Meta platforms, Grindr, same-sex dating apps, etc., to address some of the structural issues that are related to content moderation, that are related to biases, that facilitate and allow for these abuses to take place, especially when they are in the wrong hands. So especially when they are exploited for malicious purposes, such as government targeting of LGBTQI plus people in contexts where they already face criminalization, whether it be direct criminalization of same-sex relations or other laws, such as cybercrime legislation and morality, indecency and debauchery laws that are used to target LGBTQI plus people simply for expressing themselves online. In developing this work, I also want to acknowledge that we are building off of work that Article 19 has done for many years on this specific issue, as well as the framework that Afsaneh Rigot introduced, which is designing from the margins, specifically in technology and AI systems, being able to design technologies with the interests, impacts, and rights of the most marginalized in mind. In some of the recommendations that we aim for, we really want to strengthen protections against digital targeting, while acknowledging that technology can always be used for malicious purposes. There are many ways that regulations and addressing biases in algorithms, for example, can help mitigate some of these abuses that take place offline as a result of online targeting. For example, AI systems often amplify historical biases, as my other co-panelists have said, embedded in the data that they are trained on, which leads to discriminatory outcomes for LGBTQI plus individuals. So to mitigate these biases, developers should conduct regular bias audits and build diverse, representative data sets, and policymakers should also require independent testing of AI systems for biases, particularly when deployed in public-facing roles. Incentives for inclusive algorithm design that incorporates the input of LGBTQI plus advocates and civil society experts should be central in requiring and enhancing these systems to better protect the most vulnerable users. When it comes to content moderation systems, we saw and investigated that automated systems frequently misidentify LGBTQI plus content as harmful or inappropriate, especially in languages other than the English language, such as the many dialects of the Arabic language as we found in our reporting, which inadvertently silences advocacy around LGBTQI plus rights, especially in contexts where advocates, activists and community organizers resort to technology in order to empower and connect and build community around their rights,
when public discourse and any offline organizing around gender and sexuality is either prohibited or could lead to criminalization and arbitrary harassment of these activists. So particularly in content moderation, there must be training of moderation algorithms on inclusive data sets that recognize the diversity of LGBTQI plus discourse, and incorporating human oversight, particularly for sensitive content, ensuring nuanced understanding of this context. And finally, establishing appeal mechanisms that allow for an effective remedy for users to challenge automated moderation decisions that unfairly remove LGBTQI plus content or otherwise leave content online that could be harmful and lead to the arbitrary arrest, harassment, torture and detention and other abuses of LGBTIQ plus individuals that, as I said before, reverberate throughout their lives. Finally, I definitely think that this should happen with the privacy and data security of individuals in mind, enforcing robust data protection regulations that allow for penalties for misuse of sensitive data, especially when it comes to the outing of individuals who are LGBTIQ plus on public platforms, online harassment, doxing and the resulting discrimination and violence that people face offline in their individual daily lives. As I said earlier, centering LGBTQI voices in the design of AI tools is extremely important. So engaging directly with organizers, activists, experts to understand the unique needs and challenges of LGBTIQ plus individuals, and also for tech platforms to prioritize the creation of these inclusive digital spaces that actively counter discrimination and harassment that could also happen in tandem. Human rights impact assessments are extremely important. We already know that a comprehensive evaluation of risks associated with content moderation, government surveillance and other issues is incredibly important in informing the changes and the upgrading of these tools to be able to safeguard the human rights of those most impacted by these technology-facilitated harms. Establishing accountability mechanisms both for governments and for developers, and establishing clear grievance mechanisms for individuals and groups affected by AI-driven decisions, is central to beginning to address these harms and the offline consequences of these harms across the globe. Thank you.
Alisson Peters: Thank you so much, Rasha. I think you gave us some really tangible recommendations on how to address the harms from automated systems. You talked a bit about doing bias audits, I heard human rights impact assessments, providing access to grievance mechanisms, access to remedy. A number of the recommendations you raised are actually expectations set out in the UN Guiding Principles on Business and Human Rights. Earlier this year, the United States government led, and the full UN General Assembly agreed to, a resolution on safe, secure, and trustworthy artificial intelligence, which encourages and calls for increased implementation of the UN Guiding Principles. Certainly, all governments of the UN have agreed with a number of the recommendations that you laid out in terms of expectations, both for governments and private industry, the private sector. I think that’s a good pivot over to you, Amy, as we’ve heard some really tangible recommendations that Rasha has laid out, building off of some of the risks that both Nicol and Nighat outlined. I’m hoping you can share a little bit of self-reflection from Microsoft’s perspective. What do you think that companies should be doing more of to mitigate the harms that have just been laid out by our speakers? Also, if there are particular steps that you feel like we as governments can and should be taking in terms of industry to help promote these steps into action, I think that would be quite helpful as well. So, over to you, Amy, and thank you for joining us.
Amy Colando: Thank you so much, and thank you so much for having me. Oops, let me see whether the audio will work out. I’m just going to keep on talking, and we’ll hope it works out. So, thank you so much for inviting me, and I’m learning a lot already in terms of our engagement. These multi-stakeholder conversations are incredibly important to shine a light on our practices, to help us think of additional steps we can and should be taking to deliver on the promise of AI. So, let me start a little bit with sharing some examples from Microsoft, with the understanding that these are just simply examples, and the multi-stakeholder process is incredibly important in terms of getting that feedback and scrutiny in terms of areas where we can do better. My team coordinates Microsoft’s corporate-level human rights due diligence, including human rights impact assessments, under our commitment to respecting human rights and providing remedy under the UN Guiding Principles. That process includes, and is very intentional about, interviewing marginalized populations, and allows us to understand the needs of diverse groups of our users, our supply chain, and our employees, so we can enhance our respect for the rights of marginalized populations. Turning to AI, we recognize there are particular areas of promise and potential, as well as particular areas that might exacerbate existing divides and harms. AI, at its foundation, as Nicol said, requires infrastructure and connectivity, and we’ve established our Global Data Center Community Pledge, which commits us to building and operating infrastructure that addresses societal challenges and creates benefits for communities. This forms the basis of how we engage with stakeholders during all steps of the data center process, including after it is up and operationalized, and is tailored to every location so it is respectful of local cultures and contexts and environmental needs. For example, in Australia, this meant weekly meetings over an eight-month period to incorporate traditional indigenous practices into our design process. Through engagement, we introduce the project and gather insights that help inform our data center design, respecting our neighbors and the environmental resources around them. Next, for the development and deployment of AI, Microsoft’s Office of Responsible AI has partnered with the Stimson Center to bring a greater diversity of voices from the global majority to the conversation on responsible AI through our Global Perspectives Responsible AI Fellowship Program. The fellowship program convenes a multidisciplinary group of AI fellows from around the world, including Africa, Latin America, Asia, and Eastern Europe, across a series of facilitated activities. These activities that the fellows take part in are intended to foster a deeper understanding of the impact of AI in the global majority, exchange best practices on the responsible development and use of AI, and inform an approach to responsible AI. To combat the societal biases in AI systems, we employ a variety of approaches and are constantly learning from dialogues exactly like the one we’re having here. In 2018, we identified our six responsible AI principles, including fairness. Our policies are designed to clarify how fairness issues may arise and who may be harmed by them, and we take active steps to implement them into tactical controls and a code of conduct. For generative AI systems, we’ve leveraged the U.S.
National Institute of Standards and Technology Risk Management Framework to develop tools and practices to map, measure, and manage bias issues, which involve the risk of generating stereotyping and demeaning outputs. In alignment with the goal of minimizing representational harms, we’ve made significant investments in red teaming to identify areas of harm across different demographic groups, manual and automated measurements to understand the prevalence of stereotyping and demeaning outputs, and mitigations to flag and block those outputs. We look forward to working with governments, multilateral institutions and multi-stakeholder processes to continue to develop these frameworks, including through OECD due diligence conversations, to help ensure a consistent and aligned approach to improve the offering of AI and the potential to serve marginalized populations. For our own generative services, we’ve established a customer code of conduct which prohibits the use of Microsoft generative services for processing, generating, classifying or filtering content in ways that can inflict harm on individuals or society. We have developed and deployed a framework for customer use of sensitive AI features, including facial recognition and neural voice. Customers must register for these services, a process that includes defining proposed use cases, and may not use the service for other use cases. And we institute technical controls for abuse monitoring and detection. The classifier models that we’ve developed detect harmful text and/or images in user prompts, or inputs, and completions, or outputs. The abuse monitoring system also looks at usage patterns and employs algorithms and heuristics to detect and score indicators of potential abuse. Detected patterns consider, for example, the frequency and severity at which harmful content is detected. These prompts and completions are then flagged through content classification and, where identified as part of a potentially abusive pattern, are subject to additional review processes to help confirm the system’s analysis and inform actioning decisions. That’s conducted through human review and AI review. And then we have a feedback loop with customers, and that in turn includes improvements to our own systems. Finally, I’d like to close on a theme that has been identified by my fellow panelists in terms of the need for more representative data, and to ensure that we are bringing forward marginalized populations to be able to see themselves in the promise of AI. Recently, we identified that our services, our generative AI services, could be improved in terms of representation of people with disabilities, a population of one billion around the world. We then partnered with Be My Eyes, which is a service and app that generates videos to allow vision-impaired individuals to be able to communicate with others in a crowdsourced platform to actually visualize items that they’re looking at. This license to the Be My Eyes content allows us to ensure and advance the representation of people with disabilities in our service. In short, or not in short, because I’m closing now, I appreciate the opportunity to be here and to learn from others on the panel about how we can improve our processes and continue to work with government and civil society to advance AI. Thank you.
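Microsoft's actual abuse-monitoring pipeline is not described here in implementation detail. Purely as a hypothetical sketch of the frequency-and-severity style of scoring Colando outlines, one might imagine something like the following, where every name, weight, and threshold is an illustrative assumption rather than Microsoft's system.

```python
# Hypothetical sketch of frequency-and-severity abuse scoring of the kind
# described above: each flagged prompt or completion carries a severity from a
# content classifier; a user's recent flags are aggregated, and high scores are
# routed for additional (human) review. Weights and thresholds are illustrative
# assumptions only.
from dataclasses import dataclass

@dataclass
class Flag:
    user_id: str
    severity: float  # classifier-assigned severity in [0.0, 1.0]

def abuse_score(flags, frequency_weight=0.5, severity_weight=0.5):
    """Combine how often content was flagged with how severe the flags were."""
    if not flags:
        return 0.0
    frequency = min(len(flags) / 10.0, 1.0)    # saturate after 10 recent flags
    severity = max(f.severity for f in flags)  # the worst recent flag dominates
    return frequency_weight * frequency + severity_weight * severity

def needs_review(flags, threshold=0.6):
    """Route the pattern to additional review when the combined score is high."""
    return abuse_score(flags) >= threshold

if __name__ == "__main__":
    recent = [Flag("u1", 0.3), Flag("u1", 0.9), Flag("u1", 0.4)]
    print(abuse_score(recent), needs_review(recent))
```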
Alisson Peters: Thanks so much, Amy. I know I have a bunch more questions for you all. I mean, I think we just heard from you, Amy, the amount of work that Microsoft is doing to develop effective safeguards, and the work happening amongst industry in this space. And yet what we’ve heard from Nighat, from Nicol and from Rasha is that there are real challenges in terms of developing effective safeguards. We know that, I think Nighat, you talked about the need to also ensure that we’re not concentrating a lot of these discussions in specific regions or specific countries or specific companies. And I think all of you across the board talked a bit about ensuring that we have more representative data, and that we’re recognizing that AI is exacerbating biases and discrimination that occur in our society, or in the case of online harms, things like tech-facilitated gender-based violence, it’s exacerbating gender-based violence that exists in our societies already. And so, I do want to go to the audience for questions, but in sort of reflecting on the questions that we might get, it’s also helpful for us to hear a little bit more about your recommendations on how we overcome some of the challenges that we’re seeing in developing effective safeguards, if we have time. So, let me go over to the audience. I know we have folks online as well, if I could just ask our IT friends to pull up any questions, please put them in the chat, and if there’s any questions in the audience. I see folks are having problems hearing as well, so hopefully you can hear us, but if you have any questions, please put them in the chat, and any questions in the room.
Dr. Lee: Hi, thank you so much, Dr. Lee, I’m a big fan, thank you all for taking the time. Amy just mentioned infrastructure and data centers, and I have a question, as the government, U.S. government, sorry, as the U.S. government is integrating AI more and more into public systems, what is the government doing to ensure that patterns of environmental racism and issues with pollution and things that have affected marginalized communities in the U.S. will not be replicated with more and more AI use?
Nicol Turner Lee: I guess I can jump in. I think that’s a great question. I mean, the type of power generation that’s going to be required for data centers is definitely going to, in many respects, lead us into areas where there is either more land or less respect for the dignity of the land some people have. So I think we have to, and I like the way Amy’s talked about it with Microsoft, come up with criteria and some values on where we decide to put those data centers, because in the United States, the type of gigabit-plus power that is required to not just keep these systems operating, but also to keep them cool, will have a disproportionate effect on communities that are either of color or indigenous, or communities in which, we used to have this term a long time ago in economic development, brownfields, where there’s a possibility to go in and exploit the land for the purposes of the type of potential nuclear reactor projects that are going to be needed to do data centers. And so I urge, I’m not a government employee, but I urge more conversation on this, right? Because it is an area that is becoming increasingly important as nuclear power becomes more distributed, and I hope that we can find the same type of reputational as well as harm reduction that we’ve spoken about today in terms of the models themselves and how we actually deal with this physical infrastructure.
Alisson Peters: Amy, is there anything you wanted to add to that as well?
Amy Colando: No, Nicol, that was such an excellent comment. I think it’s recognizing the kind of continuing trends that we see; in other words, it’s not as if AI is a brand new issue. There are many new aspects about it, but the trends in terms of power and discrimination continue. Again, like many aspects of AI, I’d say there’s advantages and disadvantages. We are using AI, in fact, to develop new types of concrete that are less impactful on the environment. We have our own sustainability pledge. Other companies do as well, of course. We are continuing to uphold the pledge on carbon outputs that we made prior to the advances of AI in the last couple of years, and we’ll continue to uphold that as we move forward and look for carbon-free sources of power.
Alisson Peters: And I will just say, you know, from the U.S. government perspective, we have over the last four years under the Biden administration, rolled out a number of new policies, executive order memos from our White House that are really focused on ensuring that as our own government is purchasing artificial intelligence systems, is using automated systems for decision-making, is deploying AI in different ways, and is also providing AI to other governments, that human rights is a core element of sort of that risk assessment that we’re doing, and that is a component in a lot of the new actions and regulations that we have rolled out. One of the things that I will note is we are working currently in the Council of Europe as government on a new convention on artificial intelligence, AI, human rights, rule of law, and democracy, and this framework convention is the globe’s first ever legally binding treaty on artificial intelligence, and one of the key things that that process is doing is also building out a risk assessment framework that has human rights at its core. So as government, we have a framework that we can actually look to that helps us assess what the risks are, whether that be to environmental rights, to environmental defenders, or other fundamental freedoms, freedom of expression and beyond, that that is core to everything that we’re working on. So this is a key piece of a lot of the work that we’re doing as it relates to safe, secure, and trustworthy AI in the US, and I know I speak for other governments that are here at IGF on that as well. And if we could just pull up the questions online, I just wanna make sure also that we’re not missing those.
Usama Kilji: Thank you very much for a very insightful discussion. I’m Usama Kilji. I’m with Bolo Bhi, which is a digital rights organization in Pakistan. So my question is specifically around AI use in the military and in war. Around the world, we’ve seen increasing use of AI and facial recognition technologies in conflict and in war, but we’re seeing that a lot of these conversations tend to leave out the military use of AI, which has acute human rights impacts. So I’m wondering, what can governments and companies do to have more conversations around military use and what safeguards they can put in place, because currently in conflicts, we’re seeing very bad consequences for civilian populations.
Alisson Peters: One more question.
Khaled Mansour: Thank you. My name is Khaled Mansour. I am with the Meta Oversight Board. It’s a follow-up to Usama’s question, because I bet you we will get the answers either from you that you do all these checks, human rights impact assessments. Our challenge here is transparency. So what is preventing you from publishing at least a portion of these reports, so people who are affected by AI technologies, especially either clients of Microsoft or the U.S., can see what is actually happening and how the U.S.
Alisson Peters: Thanks so much. So maybe I’ll turn it over to the panelists first. But we have two questions. One is, how do we better address AI use in military settings with recognition that quite often as we’re having conversations around safeguards, around automated systems, we’re excluding the defense sector from those discussions. So what more could we be doing there? And then second question in terms of transparency reporting. And I know I saw another question back here. I’ll see if we get time. But maybe I’ll turn over to you online first, colleagues, if anyone wants to jump in on either of those questions.
Amy Colando: Sure, I can jump in a little bit. And this is an area on which I welcome feedback, because one of the cornerstones of how my team operates is commitments to accountability and transparency in terms of how we uphold Microsoft’s responsibility to respect human rights. At the same time, of course, there are confidentiality commitments to our customers, and those commitments are the same regardless of the customer. Let me just put that out there; those are kind of the cornerstones of how we operate. I mentioned briefly during my opening remarks that we designate certain of our AI services as potentially sensitive AI services, including facial recognition and neural voice. For those services, we do require defined use cases regardless of customer. And we review those defined use cases against our own responsible AI commitments, which are grounded in respect for human rights. We are endeavoring to increase transparency. So for example, during this last year, my team worked directly on updating some of our transparency around data center operations and the types of services we offer in data centers. But as you know, I’m sure there’s more we can do, more we can do as an industry, in terms of the kind of industry-accepted level of due diligence. I think that’s going to be enormously helpful. So there’s a floor, and rather than a race to the bottom, it’s a race to the top in terms of how the private sector can work with governments and with civil society to ensure that we’re upholding universal human rights.
Nicol Turner Lee: And I’ll jump in with regards to that question in terms of militarization. So the challenge that we have with AI is that we have a militarization concern when it comes to just human rights and civil rights. But then we’ve also seen, and I like the way that the audience member sort of talked about this, this integration of a variety of technologies sort of embedded for the use of militarization. So what do I mean by that? We’re seeing facial recognition embedded into other AI-enabled technology that is being used for force. We’re seeing less accountability and transparency about that integration in many respects. And I think, you know, for the United States in particular, and other countries who have an ongoing race to AI with China, these create certain vulnerabilities and national security concerns that we have to pay attention to. So that’s the first thing I want to say. The other thing I think is really important, and I love the way we’re talking about, particularly the United States government’s sort of integrated diplomacy with human rights and AI security, is that I once heard someone say, and I’ll share it because it was so profound, that in the absence of data privacy or an international data governance strategy, we actually are also contributing to a national security concern. And so really thinking about ways in which we’re not handling data privacy, like Rasha also spoke about, right, really lends itself to greater militarization, because it allows for governments, particularly authoritarian governments, to obstruct the type of transparency and accountability that we need when it comes to these systems of weaponization. And so, you know, I think that we’re probably going to see a shift to more national security conversations in the United States. The National Security Memo is an example of that. I just served with Secretary Mayorkas on the AI Safety Board about critical infrastructure and AI protections. And I think across the world, I was just in Barcelona at the Smart City Expo, we’re seeing a lot of conversations about embedded militarization of just everyday AI tools and how they can be reversed for that type of application. So I think it’s a conversation we definitely need to have, and the U.N. needs to continue it.
Alisson Peters: Thanks so much, Nicole. On the really important question of how we address the use of automated systems in our military apparatuses, and not just their use but also their development and design, there are two things we’re working on, at least in the U.S. government context. (Test, test, it may work. Apologies all, endless IT issues as always.) First and foremost, I think we agree with you on the importance of these conversations. It’s why we launched a political declaration on responsible military use of AI, to start a global conversation on the use of AI in the military. We would encourage governments that have not joined that declaration to do so, not just because of the importance of the declaration itself, but because of the importance of the policy conversations around it. And we’re happy to talk to any governments that are here at IGF and beyond. The second piece, which I think Nicole mentioned, is our national security memorandum on AI use in our national security systems. We fully recognize that we can’t credibly address the human rights impacts of AI without also addressing how our own government designs and deploys AI in our national security institutions. So we issued a pretty groundbreaking national security memorandum, and to the point on transparency, it’s all public; it details how all elements of our national security system are to deploy these tools. If you have not already had a chance to take a look at that national security memorandum, I’m happy to share it offline with you. I think it is an approach that we’re quite proud of as it relates to government transparency and accountability. Before I close this session, I wanted to invite our friends and colleagues from the government of the Netherlands. We have Guus Van Zwoll, who has been, I’ll say, a partner in crime in all of our efforts to address the human rights impacts of artificial intelligence. The Netherlands is going to chair the Freedom Online Coalition Task Force on Artificial Intelligence and Human Rights next year. For those not familiar with the FOC, it is a coalition of over 40 governments dedicated to ensuring the protection of human rights online, which the Netherlands chairs this year. So I’m hoping to turn it over to you, Guus, to close us out and share some concrete ideas, particularly in reflection of some of the great questions we’ve received, on how the FOC can work with other governments next year to address these challenges under your leadership and in partnership with my government and other governments around the room.
Guus Van Zwoll: Thank you so much, Alison. Can you hear me? Yeah, okay. So it will be the TFAIR next year, as we call it, the Task Force on AI and Human Rights. Human rights are, of course, a very different thing from humanitarian law, but I just want to briefly touch on the issue of military use of AI. In 2023, we started REAIM, the summit on Responsible AI in the Military Domain. We initiated it as the Netherlands, and this year the conference was hosted by South Korea. Together with South Korea, we launched a First Committee resolution in the UN last month on exactly this issue: what is responsible use of AI by the military. That resolution had broad support: 165 countries in favor, only two against, and six abstentions. So I think that is a pretty good start, at least for this conversation. I’m happy to discuss this later, after the session, as well. I made some quick notes on what our plans are for next year. Basically, we want to continue this discussion, this fabulous discussion; thank you so much, Alison and the US, for organizing this. We see that the Freedom Online Coalition must be practically engaged on AI governance now, as critical global norms and standards are being shaped in the upcoming months. It will not take years; it will be literally months. And this is why we as the Netherlands want to co-lead the TFAIR next year. Our responsibility is to ensure that human rights remain central to these frameworks, protecting vulnerable populations and shaping inclusive and equitable AI systems. In 2020, the FOC already published a joint statement on AI and human rights. But 2020 is, in AI terms, a couple of centuries ago, basically ancient history: it was before image generation and before large language models. So we know that AI systems have really changed. I think it is our task next year to revise this successful 2020 FOC statement, emphasizing as well the disproportionate impact of AI on marginalized communities. The updated statement will aim to provide clear guidelines for embedding human rights principles into AI governance globally. How do we do this? For example, by collaborating with Stanford’s AI MISUSE tracker, we will try to identify and highlight the disproportionate impact of AI on marginalized groups, such as through biased surveillance or exclusionary practices. This tool will support transparency and accountability while driving advocacy for equitable AI practices. We will also organize practical workshops and simulations to equip policymakers and diplomats with the tools and knowledge needed to address AI challenges and opportunities for human rights, with a focus on marginalized communities and women. We will try to bring leading voices, very much like the ones we have heard today, to educate us diplomats and policymakers on the challenges they see as most daunting. Another focus will be community-rooted AI research that prioritizes diversity and addresses AI impacts on marginalized groups. These contributions would offer valuable perspectives for fostering inclusive and rights-based AI governance. We will also spotlight examples of AI that can advance human rights, such as tools for bypassing censorship or supporting civic engagement. These case studies will demonstrate how AI can be leveraged to empower marginalized groups while ensuring accountability and ethical development.
A great example of this is the Signpost project by the International Rescue Committee. This initiative leverages AI to provide critical information to displaced populations via mobile apps and social media, delivering content in multiple languages. The choices that we make now, and that we are discussing at the IGF this week, will determine whether AI helps create a fairer world or deepens current inequalities. Through the TFAIR, we aim to keep human rights at the center of the AI governance discussion, supporting marginalized communities and building a future based on fairness and accountability. As the upcoming chair of the TFAIR, but also as the chair of the Freedom Online Coalition this year, we are convinced that the FOC provides a great networking platform to advance this goal. Thank you.
Alisson Peters: Thank you so much, Guus. The Netherlands has been such an incredible leader of the Freedom Online Coalition this year, and I know I speak for my government when I say we’re really eager and excited to work with you all and build on today’s discussion next year through the task force. I know there’s never enough time for these conversations, especially when we have such incredible panelists, but I really do want to thank you all for joining on your weekend, wherever you’re located, and everyone for joining us in the room. In concluding this discussion, I will say that at IGF throughout this week and beyond, as we look ahead to WSIS Plus 20 and other UN processes, we will have continued debates around the future of artificial intelligence. How do we leverage the opportunities from AI, the opportunities that Nicole talked about, and how do we also ensure that we are mitigating the risks? The rewards and the risks are a continuing conversation in AI policy debates. And I think what you have heard from each of our panelists is this question about who is actually setting the table for those debates. Who is at the table? Are they representative of the populations that we as governments are tasked with protecting? Are they representative of the communities that industry is actually working with and has access to? And are they representative of the populations that will be most impacted by how these technologies are designed, deployed, and used in their societies? We have a lot of conversations in the UN around advancing AI for good. But we know that we can only advance AI for good when the basic human rights of all people, no matter where they’re located, no matter their faith, no matter their gender, no matter their sexual orientation, and beyond, are respected. So this is a really timely conversation for IGF. It’s a timely conversation given we just marked Human Rights Day this past week. I thank you all for coming, and I hope that we can continue these discussions throughout this week at IGF and beyond. So on behalf of the United States government and my Bureau of Democracy, Human Rights, and Labor, I want to thank you all, and thank you to everyone online. We look forward to being in touch through the Freedom Online Coalition to continue these important discussions. Thank you.
Desirée Cormier Smith
Speech speed: 123 words per minute; speech length: 82 words; speech time: 39 seconds
AI can advance equity in healthcare, education, and economic opportunity
Explanation
AI has the potential to increase access to healthcare, education, and economic opportunities for those who need them most. This could help reduce inequalities and promote equity in these crucial areas.
Major Discussion Point
Risks and Opportunities of AI for Marginalized Populations
Agreed with
Nicol Turner Lee
Jessica Stern
Agreed on
AI can both advance equity and reinforce discrimination
Nicol Turner Lee
Speech speed: 175 words per minute; speech length: 1906 words; speech time: 653 seconds
AI systems often reinforce historical discrimination against marginalized groups
Explanation
AI systems can exacerbate existing biases and inequalities in society. This is because they are often trained on historical data that reflects past discriminatory practices and societal inequities.
Evidence
Example of breast cancer diagnosis models not accurately representing black women due to lack of participation in clinical trials.
Major Discussion Point
Risks and Opportunities of AI for Marginalized Populations
Agreed with
Desirée Cormier Smith
Jessica Stern
Agreed on
AI can both advance equity and reinforce discrimination
Need to interrogate AI models for bias and whether automation is appropriate
Explanation
It is crucial to examine AI models for potential biases and discriminatory outcomes. Additionally, there should be careful consideration of whether certain decisions should be automated at all.
Major Discussion Point
Addressing Biases and Harms in AI Systems
Agreed with
Sara Minkara
Rasha Younes
Agreed on
Need for diverse representation in AI development
Differed with
Amy Colando
Differed on
Approach to addressing AI biases
Dr. Geeta Rao Gupta
Speech speed: 113 words per minute; speech length: 53 words; speech time: 28 seconds
AI tools are enabling technology-facilitated gender-based violence
Explanation
AI technologies are being used to create and spread technology-facilitated gender-based violence (TFGBV). This form of harassment and abuse particularly targets women and children, threatening their ability to participate in online and offline spaces.
Major Discussion Point
Risks and Opportunities of AI for Marginalized Populations
Jessica Stern
Speech speed: 124 words per minute; speech length: 112 words; speech time: 54 seconds
AI can help reimagine inclusive futures but biases in data must be addressed
Explanation
Generative AI has the potential to create more inclusive futures and allow for safe self-expression. However, it’s crucial to address biases in the data used to train AI systems to prevent reinforcing harmful stereotypes.
Major Discussion Point
Risks and Opportunities of AI for Marginalized Populations
Agreed with
Desirée Cormier Smith
Nicol Turner Lee
Agreed on
AI can both advance equity and reinforce discrimination
Kelly M. Fay Rodriguez
Speech speed: 120 words per minute; speech length: 86 words; speech time: 43 seconds
Unions play a key role in safeguarding workers’ rights amid AI expansion
Explanation
Labor unions are essential in protecting workers’ rights as AI technologies rapidly expand. They advocate for fair employment practices, safe work environments, and equitable compensation in the context of AI implementation.
Major Discussion Point
Risks and Opportunities of AI for Marginalized Populations
Sara Minkara
Speech speed: 123 words per minute; speech length: 79 words; speech time: 38 seconds
AI development often leaves out the disability community
Explanation
AI is frequently developed without considering the needs of the disability community. It’s crucial to ensure that AI is accessible for everyone, including people with disabilities, in all aspects of its development and implementation.
Major Discussion Point
Risks and Opportunities of AI for Marginalized Populations
Agreed with
Nicol Turner Lee
Rasha Younes
Agreed on
Need for diverse representation in AI development
Nighat Dad
Speech speed: 130 words per minute; speech length: 474 words; speech time: 218 seconds
Companies must do more to address harms on their platforms
Explanation
Tech companies need to take more responsibility in addressing the harms caused by AI on their platforms. This includes issues like deep fake images and videos that disproportionately affect women and girls.
Evidence
Experience from Meta’s Oversight Board in framing how companies like Meta can use automation to deal with harms on their platforms.
Major Discussion Point
Addressing Biases and Harms in AI Systems
AI governance conversations are concentrated in Global North countries
Explanation
Discussions about AI governance are primarily taking place in developed countries. This leads to a lack of input from regions where these technologies are often deployed, particularly affecting marginalized groups in those areas.
Major Discussion Point
Ensuring Inclusive AI Governance
Rasha Younes
Speech speed: 114 words per minute; speech length: 878 words; speech time: 458 seconds
Developers should conduct regular bias audits and build diverse datasets
Explanation
To mitigate biases in AI systems, developers need to regularly audit their systems for bias and ensure they are using diverse, representative datasets. This is particularly important for protecting LGBTQI+ individuals from discriminatory outcomes.
Evidence
Findings from a report on digital targeting of LGBTQI+ people across the Middle East and North Africa region.
Major Discussion Point
Addressing Biases and Harms in AI Systems
Agreed with
Nicol Turner Lee
Sara Minkara
Agreed on
Need for diverse representation in AI development
Need to strengthen protections against digital targeting of vulnerable groups
Explanation
There is a pressing need to enhance safeguards against the digital targeting of vulnerable populations, particularly LGBTQI+ individuals. This includes addressing biases in content moderation systems and ensuring privacy protections.
Evidence
Examples of government targeting of LGBTQI+ people using monitoring tools and cybercrime legislation.
Major Discussion Point
Ensuring Inclusive AI Governance
Amy Colando
Speech speed: 153 words per minute; speech length: 1472 words; speech time: 576 seconds
Microsoft employs various approaches to combat societal biases in AI systems
Explanation
Microsoft has implemented multiple strategies to address societal biases in their AI systems. This includes policies on fairness, tools to map and manage bias issues, and investments in identifying areas of harm across different demographic groups.
Evidence
Examples include the Global Data Center Community Pledge, partnership with the Stimson Center for the Global Perspectives Responsible AI Fellowship Program, and development of a customer code of conduct for generative AI services.
Major Discussion Point
Addressing Biases and Harms in AI Systems
Differed with
Nicol Turner Lee
Differed on
Approach to addressing AI biases
Alisson Peters
Speech speed: 157 words per minute; speech length: 3211 words; speech time: 1220 seconds
US government has policies to ensure human rights assessments in AI procurement
Explanation
The US government has implemented policies requiring human rights assessments when purchasing or deploying AI systems. This includes executive orders and memos aimed at safeguarding human rights in the development and use of AI.
Evidence
Mention of executive orders and memos introduced into the US government system over the last four years.
Major Discussion Point
Addressing Biases and Harms in AI Systems
Multistakeholder model needed to understand AI’s societal impacts
Explanation
A multistakeholder approach is crucial for comprehensively understanding how AI tools are impacting society. This model ensures that discussions include perspectives beyond just governments and the people representing them.
Major Discussion Point
Ensuring Inclusive AI Governance
US government working on frameworks for risk assessment with human rights at core
Explanation
The US government is developing risk assessment frameworks that prioritize human rights considerations in AI development and deployment. This includes work on international conventions and national security memoranda.
Evidence
Mention of the Council of Europe convention on AI, human rights, rule of law, and democracy, and the US national security memorandum on AI use in national security systems.
Major Discussion Point
Ensuring Inclusive AI Governance
Khaled Mansour
Speech speed: 137 words per minute; speech length: 82 words; speech time: 35 seconds
Challenge is transparency in human rights impact assessments
Explanation
There is a lack of transparency in the human rights impact assessments conducted by companies and governments. Publishing at least portions of these reports would allow people affected by AI technologies to understand how their rights are being considered.
Major Discussion Point
Transparency and Accountability in AI Development
Guus Van Zwoll
Speech speed: 152 words per minute; speech length: 710 words; speech time: 279 seconds
Freedom Online Coalition working to keep human rights central in AI governance
Explanation
The Freedom Online Coalition, through its Task Force on AI and Human Rights, is working to ensure that human rights remain at the center of AI governance frameworks. This includes updating previous statements to address the evolving AI landscape and its impact on marginalized communities.
Evidence
Plans for collaborating with Stanford’s AI MISUSE tracker, organizing workshops for policymakers, and spotlighting examples of AI that advance human rights.
Major Discussion Point
Ensuring Inclusive AI Governance
Agreements
Agreement Points
AI can both advance equity and reinforce discrimination
speakers
Desirée Cormier Smith
Nicol Turner Lee
Jessica Stern
arguments
AI can advance equity in healthcare, education, and economic opportunity
AI systems often reinforce historical discrimination against marginalized groups
AI can help reimagine inclusive futures but biases in data must be addressed
summary
The speakers agree that while AI has the potential to advance equity and create inclusive futures, it can also reinforce existing biases and discrimination if not properly addressed.
Need for diverse representation in AI development
speakers
Nicol Turner Lee
Sara Minkara
Rasha Younes
arguments
Need to interrogate AI models for bias and whether automation is appropriate
AI development often leaves out the disability community
Developers should conduct regular bias audits and build diverse datasets
summary
The speakers emphasize the importance of including diverse perspectives, particularly from marginalized communities, in the development and auditing of AI systems to mitigate biases and ensure inclusivity.
Similar Viewpoints
These speakers emphasize the need for more inclusive and diverse participation in AI governance discussions, particularly to address the needs and vulnerabilities of marginalized groups.
speakers
Nighat Dad
Rasha Younes
Alisson Peters
arguments
AI governance conversations are concentrated in Global North countries
Need to strengthen protections against digital targeting of vulnerable groups
Multistakeholder model needed to understand AI’s societal impacts
Unexpected Consensus
Importance of unions in AI governance
speakers
Kelly M. Fay Rodriguez
Alisson Peters
arguments
Unions play a key role in safeguarding workers’ rights amid AI expansion
Multistakeholder model needed to understand AI’s societal impacts
explanation
While most discussions focused on government and tech company roles, there was unexpected consensus on the importance of labor unions in shaping AI governance and protecting workers’ rights in the context of AI expansion.
Overall Assessment
Summary
The main areas of agreement include the dual nature of AI in both advancing equity and potentially reinforcing discrimination, the need for diverse representation in AI development and governance, and the importance of addressing biases and harms in AI systems.
Consensus level
There is a moderate to high level of consensus among the speakers on the key challenges and necessary steps for ensuring inclusive and responsible AI development and governance. This consensus suggests a growing recognition of the need for multistakeholder approaches and increased attention to the impacts of AI on marginalized communities, which could potentially influence future policy and industry practices in AI development and deployment.
Differences
Different Viewpoints
Approach to addressing AI biases
speakers
Nicol Turner Lee
Amy Colando
arguments
Need to interrogate AI models for bias and whether automation is appropriate
Microsoft employs various approaches to combat societal biases in AI systems
summary
While both speakers acknowledge the need to address biases in AI systems, they differ in their approaches. Turner Lee emphasizes the need for critical examination of AI models and questioning the appropriateness of automation, while Colando focuses on Microsoft’s implemented strategies and tools to manage bias issues.
Unexpected Differences
Overall Assessment
summary
The main areas of disagreement revolve around the specific approaches to addressing AI biases and ensuring inclusive AI governance.
Difference level
The level of disagreement among the speakers appears to be relatively low. Most speakers agree on the fundamental issues surrounding AI’s impact on marginalized populations and the need for more inclusive governance. The differences mainly lie in the specific strategies and focus areas each speaker emphasizes. This level of disagreement suggests a general consensus on the importance of addressing AI’s risks for marginalized groups, but highlights the need for further discussion and collaboration on the most effective approaches to tackle these issues.
Partial Agreements
Both speakers agree on the need for more inclusive AI governance discussions. However, Dad emphasizes the lack of input from regions where these technologies are deployed, particularly affecting marginalized groups, while Peters focuses on the importance of a multistakeholder approach to comprehensively understand AI’s societal impacts.
speakers
Nighat Dad
Alisson Peters
arguments
AI governance conversations are concentrated in Global North countries
Multistakeholder model needed to understand AI’s societal impacts
Takeaways
Key Takeaways
AI offers opportunities to advance equity but also risks reinforcing discrimination against marginalized groups
Effective safeguards and inclusive governance are needed to mitigate AI harms to vulnerable populations
Multistakeholder collaboration is crucial to ensure AI development considers diverse perspectives
More transparency and accountability are needed in AI development, especially regarding human rights impacts
AI governance must center human rights and protect marginalized communities
Resolutions and Action Items
The Freedom Online Coalition will update its 2020 statement on AI and human rights in the coming year
The FOC will organize workshops to educate policymakers on AI challenges for human rights
The FOC will spotlight examples of AI that can advance human rights for marginalized groups
Unresolved Issues
How to effectively include marginalized voices in AI governance discussions
Balancing transparency in human rights impact assessments with customer confidentiality
Addressing AI use in military settings and its potential humanitarian impacts
Closing the digital divide to ensure equitable access to AI benefits
Suggested Compromises
Developing industry-accepted standards for due diligence and transparency in AI development
Creating inclusive digital spaces that actively counter discrimination while protecting privacy
Thought Provoking Comments
AI offers incredible potential to advance equity by increasing access to health care, education, and economic opportunity for those who need them the most. However, too often marginalized populations bear the worst harms of AI.
speaker
Desirée Cormier Smith
reason
This comment succinctly captures the core tension at the heart of AI’s impact on marginalized groups – its potential for both benefit and harm.
impact
It set the stage for the entire discussion by framing the key issues around AI and marginalized populations that subsequent speakers explored in more depth.
Computers might be binary, but people are not. Generative AI can help us reimagine inclusive futures and express ourselves safely and authentically. However, we need to be mindful about biases in the data that AI tools and systems are built on and how they translate into individuals’ lives.
speaker
Jessica Stern
reason
This comment insightfully highlights how AI systems can reinforce or challenge existing social constructs around identity, particularly for LGBTQI+ individuals.
impact
It broadened the conversation to consider AI’s impact on gender and sexual identity expression, which was further explored in later comments about LGBTQI+ rights.
We’re creating AI systems, and in many respects, we haven’t closed the accessibility divide. That creates its own set of challenges as to who will be able to benefit.
speaker
Nicol Turner Lee
reason
This comment draws attention to how existing digital divides can be exacerbated by AI, potentially widening inequality.
impact
It shifted the discussion to consider not just the design of AI systems, but also who has access to them, leading to further exploration of global inequities in AI development and deployment.
These conversations are very much concentrated in some Global North countries. And in the past, we have seen how technology that is being developed, designed, and built is mostly, you know, dumped in our regions, and we have no say into how these technologies are designed for the marginalized groups in our regions.
speaker
Nighat Dad
reason
This comment highlights the global power imbalances in AI development and governance, raising important questions about representation and self-determination.
impact
It prompted further discussion about the need for more inclusive, global approaches to AI governance and development.
To mitigate these biases, developers should conduct regular bias audits and build diverse, representative data sets, and policymakers should also require independent testing of AI systems for biases, particularly when deployed in public-facing tools.
speaker
Rasha Younes
reason
This comment offers concrete, actionable steps to address AI bias, moving the conversation from problem identification to potential solutions.
impact
It shifted the discussion towards more practical considerations of how to implement safeguards and protections in AI development and deployment.
Overall Assessment
These key comments shaped the discussion by progressively deepening the analysis of AI’s impact on marginalized populations. The conversation moved from identifying broad tensions and challenges to exploring specific impacts on different groups (e.g. LGBTQI+, Global South populations) and finally to proposing concrete actions and governance approaches. This progression allowed for a comprehensive exploration of the complex interplay between AI, human rights, and marginalized communities, while also highlighting the urgent need for more inclusive and equitable approaches to AI development and governance.
Follow-up Questions
How can we ensure patterns of environmental racism and pollution affecting marginalized communities in the U.S. are not replicated with increased AI use?
speaker
Audience member (Dr. Lee)
explanation
This question addresses the potential environmental impacts of AI infrastructure on marginalized communities, which is an important consideration as AI becomes more integrated into public systems.
What can governments and companies do to have more conversations around military use of AI and what safeguards can they put in place?
speaker
Usama Kilji
explanation
This question highlights the need for more discussion and safeguards around AI use in military and conflict situations, which can have severe human rights impacts on civilian populations.
What is preventing companies and governments from publishing at least a portion of their human rights impact assessment reports related to AI technologies?
speaker
Khaled Mansour
explanation
This question addresses the need for greater transparency in how companies and governments assess the human rights impacts of AI technologies, which is crucial for accountability and public trust.
How can we revise and update the 2020 FOC statement on AI and human rights to reflect current AI developments and emphasize the disproportionate impact on marginalized communities?
speaker
Guus Van Zwoll
explanation
This area for further research is important to ensure that human rights principles are embedded in AI governance globally, reflecting the rapid changes in AI technology since 2020.
How can we identify and highlight the disproportionate impact of AI on marginalized groups through tools like Stanford’s AI MISUSE tracker?
speaker
Guus Van Zwoll
explanation
This research area is crucial for ensuring transparency, accountability, and advocacy for equitable AI practices that don’t disproportionately harm marginalized communities.
How can we conduct community-rooted AI research that prioritizes diversity and addresses AI impacts on marginalized groups?
speaker
Guus Van Zwoll
explanation
This research direction is important for fostering inclusive and rights-based AI governance by incorporating diverse perspectives and experiences.
How can AI be leveraged to empower marginalized groups while ensuring accountability and ethical development?
speaker
Guus Van Zwoll
explanation
This area of research focuses on identifying and developing AI applications that can advance human rights and support marginalized communities, balancing the potential benefits with ethical considerations.
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Related event
Internet Governance Forum 2024
15 Dec 2024 06:30h - 19 Dec 2024 13:30h
Riyadh, Saudi Arabia and online