WS #362 Incorporating Human Rights in AI Risk Management

26 Jun 2025 09:00h - 10:00h


Session at a glance

Summary

This panel discussion focused on incorporating human rights considerations into AI risk management practices, bringing together perspectives from industry, civil society, and multilateral organizations. The session was organized by the Global Network Initiative (GNI), which works at the intersection of technology and human rights, particularly when companies face government requests that may impact freedom of expression or privacy rights.


Company representatives from Google and Telenor Group emphasized that successful human rights integration in AI requires top-level management commitment and comprehensive governance structures. Both organizations highlighted that their AI principles build upon existing commitments to UN Guiding Principles on Business and Human Rights and GNI frameworks. They stressed the importance of operationalizing these principles through specific processes, training, and risk assessment procedures that embed human rights considerations throughout the AI development lifecycle.


The UN Office of the High Commissioner for Human Rights, through their B-Tech project, presented findings on the shared responsibility between companies and states in protecting human rights in AI deployment. They noted that the evolution of AI technology has outpaced current regulatory frameworks and emphasized the need for companies to conduct human rights due diligence while states must ensure adequate protection through appropriate regulations.


Civil society representatives highlighted the particular challenges facing the Global South, where different socioeconomic contexts and varying regulatory environments create unique human rights risks. They emphasized the need for culturally sensitive approaches and context-specific risk assessments, noting that AI systems developed in the Global North may have different implications when deployed in the Global South.


The discussion revealed broad consensus on the necessity of mandatory human rights impact assessments for high-risk AI systems, with participants supporting risk-based regulatory approaches similar to the EU AI Act. However, panelists also acknowledged that human rights frameworks alone may not capture all societal-level impacts of AI, suggesting the need for broader comprehensive approaches that address both individual and collective effects of AI deployment.


Key points

## Major Discussion Points:


– **Corporate Human Rights Governance as Foundation**: Multiple panelists emphasized that companies must establish baseline human rights governance structures before integrating AI-specific principles. This includes commitment to UN Guiding Principles, having dedicated human rights programs, and ensuring board-level oversight of human rights risks.


– **Operationalizing Human Rights in AI Risk Assessment**: The discussion focused extensively on practical implementation methods, including red teaming, comprehensive risk assessment frameworks, multi-stakeholder engagement, and the need for culturally sensitive staff who understand local contexts, particularly in Global South deployments.


– **Regulatory Landscape and Mandatory Assessments**: Panelists addressed the evolving regulatory environment, particularly the EU AI Act’s requirements for fundamental rights impact assessments for high-risk AI systems, and debated the merits of legally mandating human rights due diligence versus voluntary corporate initiatives.


– **Global South Perspectives and Context-Specific Challenges**: Significant attention was given to how AI impacts differ between Global North and South, including varying socioeconomic realities, different constitutional rights frameworks, and the need for locally relevant benchmarks and taxonomies rather than one-size-fits-all approaches.


– **Beyond Individual Rights to Societal Impact**: The conversation evolved to consider whether traditional human rights frameworks adequately capture broader societal impacts of AI, such as educational transformation, job displacement, and community-level effects, suggesting the need for more comprehensive assessment approaches.


## Overall Purpose:


The discussion aimed to explore how human rights considerations can be better integrated into AI risk management practices across different stakeholders – companies, multilateral organizations, and civil society – with particular attention to bridging the gap between Global North AI development and Global South deployment contexts.


## Overall Tone:


The discussion maintained a collaborative and constructive tone throughout, characterized by a shared commitment to human rights principles despite different organizational perspectives. The tone was professional and solution-oriented, with panelists building on each other’s points rather than presenting conflicting viewpoints. The conversation also grew notably more nuanced as it progressed, moving from foundational principles to practical implementation challenges and ultimately to broader questions about the adequacy of current frameworks for addressing AI’s societal impacts.


Speakers

**Speakers from the provided list:**


– **Min Thu Aung** – Accountability and Innovation Manager at the Global Network Initiative (GNI)


– **Alexandria Walden** – Representative from Google (specific title not mentioned in transcript)


– **Elina Thorström** – Representative from Telenor Group/DNA (leads DNA’s AI risk assessment working group)


– **Nathalie Stadelmann** – Representative from OHCHR (Office of the High Commissioner for Human Rights), part of the BTEC project


– **Jhalak Mrignayani Kakkar** – Representative from the Centre for Communication Governance (CCG), a founding member of GNI; works on tech law and policy in the Global South; expert in the Global Partnership on AI (GPAI)


– **Caitlin Kraft-Buchman** – Women@TheTable; works on gender, human rights, and AI


– **Audience** – Various audience members asking questions


**Additional speakers:**


– **Byungil Oh** – From Jinbonet, a digital rights organization based in South Korea


– **Richard Ringfield** – From BSR (Business for Social Responsibility)


Full session report

# Human Rights Integration in AI Risk Management: A Multi-Stakeholder Panel Discussion


## Executive Summary


This panel discussion, organized by the Global Network Initiative (GNI) as part of the Internet Governance Forum, brought together diverse stakeholders to examine how human rights considerations can be effectively integrated into artificial intelligence risk management practices. The session featured representatives from major technology companies (Google, Telenor Group), multilateral organizations (UN Office of the High Commissioner for Human Rights), and civil society advocates from both the Global North and South.


The discussion revealed broad agreement on fundamental principles while highlighting significant implementation challenges. Participants agreed that human rights must serve as the foundational framework for AI governance, though the conversation revealed complex questions about how to address both individual and broader societal impacts of AI systems, particularly across different global contexts.


## Session Context and Structure


Min Thu Aung from GNI opened the session by outlining the organization’s role in advancing human rights integration in AI governance. GNI has developed implementation guidelines for their AI and human rights principles and participates in the OECD AI Network of Experts. They are also involved in the B-Tech project’s Generative AI Human Rights Due Diligence initiative, which aims to develop practical guidance for companies conducting human rights impact assessments for AI systems.


The session was structured as a panel discussion followed by Q&A, with some participants joining remotely. Technical difficulties affected portions of the discussion, particularly limiting the contribution from the UN representative.


## Corporate Perspectives on Human Rights Integration


### Google’s Approach to Foundational Frameworks


Alexandria Walden from Google emphasized that human rights must be integrated at the foundational level rather than treated as an add-on. She stated that “human rights has to be at the baseline and then integrated into those processes and frameworks,” arguing that companies need comprehensive baseline human rights governance structures before developing AI-specific principles.


Walden described Google’s operational approach, which includes technical processes such as red teaming, secure AI frameworks, and coalition work with other stakeholders. She emphasized that having principles alone is insufficient without robust operational frameworks that technical teams can implement in their day-to-day work.


### Telenor’s Comprehensive Risk Assessment


Elina Thorström from Telenor Group reinforced the importance of top management commitment for effective implementation. She outlined Telenor’s approach, which builds AI strategy foundations on responsible AI principles with comprehensive governance structures that ensure human rights considerations are embedded throughout the organization.


Thorström emphasized that company values and policies should guide behavior “irrespective of which country we are at,” suggesting that internal ethical frameworks should exceed local regulatory requirements. She described Telenor’s integrated risk assessment approach, which examines human rights, security, privacy, and data governance holistically through cross-organizational collaboration.


## Global South Perspectives and Contextual Challenges


Jhalak Mrignayani Kakkar from the Centre for Communication Governance introduced crucial complexity by highlighting how AI technologies may have dramatically different implications when deployed across different contexts. She noted that “even within the Indian context, within an urban part of India versus a semi-urban part of India versus a rural part of India will differ significantly.”


Kakkar emphasized the need for context-specific approaches to risk assessment that reflect local socioeconomic realities. She argued that meaningful human rights due diligence requires culturally and linguistically sensitive staff who understand local contexts, and stressed the importance of sustained multi-stakeholder engagement between companies, civil society, academia, and governments.


She also noted the heterogeneity in constitutional rights embedding across Global South countries, requiring tailored approaches that consider local legal and cultural frameworks.


## International Human Rights Law Framework


Caitlin Kraft-Buchman from Women@TheTable provided a strong advocacy for rights-based frameworks over company-specific ethical approaches. She argued that “we tend to think of ethical principles, wonderful, and we love them, but they’re very a la carte, whereas human rights frameworks and international human rights law has been agreed by everybody.”


Kraft-Buchman emphasized the need for intentional design-based approaches that include multidisciplinary teams with social scientists, human rights experts, and anthropologists from the development stage. She also identified public procurement as a significant leverage point for deploying rights-respecting AI at scale.


## Regulatory Approaches and Implementation


The discussion revealed general support for regulatory frameworks among participants. Walden expressed Google’s support for risk-based regulatory approaches, particularly for high-risk AI applications, while acknowledging that voluntary approaches alone may be insufficient for widespread industry adoption.


However, Kakkar introduced a nuanced perspective on enforcement, suggesting that the lack of strict enforceability in human rights due diligence might actually encourage more meaningful assessments, as companies may be more willing to identify problems when they know there isn’t an automatic negative consequence.


## Individual Rights versus Societal Impact


A significant discussion point emerged when Richard Ringfield from Business for Social Responsibility questioned whether individual human rights-based approaches could adequately capture AI’s broader societal impacts. He noted that “many of the really big impacts that AI will have will be more at the societal rather than the individual level,” including transformations in education, employment, and social structures.


This intervention prompted acknowledgment from participants that traditional human rights frameworks, while essential, may need to be supplemented with broader societal impact evaluations. Kakkar responded by noting that human rights concepts are being reinterpreted for group and community settings, including developments in group privacy and community data rights.


## Questions and Additional Perspectives


The Q&A session included a question from Byungil Oh about South Korea’s National Human Rights Commission’s AI impact assessment tool and the Korean AI Basic Act, highlighting the global nature of efforts to develop human rights-based AI governance frameworks.


Technical difficulties limited the contribution from Nathalie Stadelmann of the UN Office of the High Commissioner for Human Rights, though she was able to emphasize the importance of shared responsibility between companies and states in protecting human rights in AI development and deployment.


## Key Areas of Consensus


Despite representing diverse stakeholder perspectives, participants showed agreement on several fundamental points:


– Human rights must be foundational to AI governance rather than an afterthought


– Top management commitment is essential for effective implementation


– Operational frameworks and processes are necessary to translate principles into practice


– Multi-stakeholder collaboration is critical for developing effective approaches


– Context-specific considerations are important, particularly for Global South deployment


– Some form of regulatory framework is needed to ensure widespread adoption of rights-respecting practices


## Outstanding Challenges


The discussion highlighted several unresolved challenges:


– Lack of standardized human rights impact assessment methodologies for AI systems


– Uncertainty about appropriate risk thresholds for triggering mitigation measures


– Questions about the adequacy of individual-focused rights frameworks for addressing systemic societal impacts


– Need for context-specific approaches that reflect diverse global realities


– Balancing transparency requirements with practical implementation concerns


## Conclusion


This panel discussion demonstrated both the growing consensus around the importance of human rights integration in AI governance and the significant practical challenges that remain in implementation. While participants agreed on fundamental principles, the conversation revealed the complexity of translating these principles into effective practice across diverse global contexts.


The emphasis on contextual sensitivity, particularly regarding Global South perspectives, represents an important evolution in AI governance conversations. The discussion also highlighted the emerging recognition that comprehensive AI governance may require frameworks that address both individual human rights and broader societal impacts.


The session underscored the ongoing need for multi-stakeholder collaboration, standardized assessment methodologies, and regulatory frameworks that can effectively balance mandatory requirements with practical implementation realities. While significant challenges remain, the broad consensus on fundamental principles provides a foundation for continued progress in integrating human rights considerations into AI risk management practices.


Session transcript

Min thu Aung: Good morning, everyone. Thank you so much for attending our panel on incorporating human rights in AI risk management. My name is Min Aung. I’m the Accountability and Innovation Manager at the Global Network Initiative, or GNI. Our panel today is set in the following context: governments around the world, especially in the EU of course, are starting to require tech companies to manage human rights risks in the way that they design and indeed use AI. Of course, companies do have AI-specific tools and principles, but sometimes these can fall short of aligning with international human rights standards. This panel aims to bring together a diverse range of voices from industry, from civil society, and also from multilateral bodies to explore how we can better integrate human rights into AI risk management practices. Perhaps a quick introduction to GNI first. We are a multi-stakeholder initiative. We bring together four constituencies of academics, civil society, companies, and investors for accountability, for shared learning, for collective advocacy on government and company policies and practices, which are at the intersection of technology and human rights. This is particularly relevant when companies face government requests or demands that have an impact on freedom of expression or privacy rights. We have a set of principles, the GNI principles and implementation guidelines, which aim to guide companies on how to respond when receiving government requests or demands that may have an impact on freedom of expression and privacy rights, as well as on conducting ongoing due diligence and so on. The GNI principles and implementation guidelines do indeed apply to AI insofar as governments produce mandates on AI services in various shapes or forms and indeed the need for companies to conduct human rights due diligence and, where necessary, impact assessments on AI services. In that vein, we as GNI have indeed been quite active in a variety of fora related to AI and also activities for our membership. For example, we are members of the OECD AI Network of Experts and we have been involved in B-Tech’s GenAI Human Rights Due Diligence Project. Within the GNI company assessments, we are exploring the intersection between AI and human rights due diligence. We have also obviously put out many statements of concern on potential rights violations in relation to AI government mandates in Canada and in India, among others. We have also hosted learning discussions among our members on AI as well as government mandates, the last of which was during our annual learning forum, which was held in DC last year. Last but not least, we have an AI working group within the Global Network Initiative where a selection of GNI members with deep AI experience are, amongst other things, developing a policy brief on government interventions in AI and rights-respecting responses to these mandates. Enough about GNI, now moving on to our panel. Firstly, we will hear from two company panelists from different parts of the Internet stack, along the theme of diversity, on how they integrate human rights considerations into the development and deployment of AI-related products and services. Secondly, we will hear from B-Tech themselves on multilateral efforts to promote incorporation of human rights in AI governance. Finally, we will hear the views from civil society panelists on what more can be done by companies and by policymakers, lawmakers, and regulators on incorporating human rights in AI.
We will then have a Q&A, and then I will provide a summary, and then we will wrap up. All right, we look like we are on time. Perhaps I will start with my first request for intervention to Alex Walden from Google. I will let you introduce yourself in a bit. Google is obviously an integrated player in the AI ecosystem, for example, developing consumer-facing AI services like Gemini and its predecessors, of course. We also note, of course, that Google is a founding member of GNI and has a dedicated human rights program and has had responsible AI principles since 2018. And, of course, that Google’s approach to human rights due diligence is informed by the UN Guiding Principles as well as the GNI framework. Just a couple of questions, which you may address in any order that you feel comfortable. I guess firstly, how does Google conduct risk assessments across its footprint, and how do you ensure that human rights are incorporated in these risk assessments? And indeed, looking more externally, how do laws and regulations like the EU AI Act and the AI Basic Act in South Korea influence how human rights are incorporated in your AI risk assessments? And indeed, what advice do you have for other companies to normalize human rights within AI risk assessments within their operations? Over to you.


Alexandria Walden: All right. Thank you. Thanks for that question. Thanks to GNI for putting this session together. I think it’s a really important topic. I will try to – I think I can actually get to all of those across a few things. So, I mean, the first thing to say, and it may seem obvious, but I do think it’s important to point out, is that where companies should start with integrating human rights into their AI work is fundamentally to make sure that the company has human rights governance at its baseline, right? So, at Google, we are committed, as you said, to the GNI principles. We’re committed to the UN Guiding Principles on Business and Human Rights. And what that means for us is that we have a corporate policy that says that these are our values and that we have ways to operationalize these commitments throughout the company. Obviously, it doesn’t mean that we’re doing it perfectly, but what it does mean is that these are values that are set from the highest level of the company, that they are risks that are reviewed by the board, and that we have a human rights program that works to implement and ensure these commitments across our various products and services. And so, having that in place is what allows us to then get to a point where, okay, we also have AI principles on top of that. Our AI principles build on top of our commitments to the UNGPs, our GNI principles. And so, the AI principles are really about a single type of technology that we have that’s integrated across our policies. Our AI principles, again, reinforce the commitment to international law and human rights. And so, that is a reminder to everyone who is doing technical work or who is doing more qualitative trust and safety work or public policy work or legal work across these products that human rights is something we need to be thinking about. And then, at the more operational level, because we have this sort of governance structure and these principles integrated, that means that everyone who is in our technical teams, those who are developing AI, are aware that they should be thinking about what rights-related impacts or risks might arise in their work. And so, we have to set up processes to address that. And so, really, that’s, I think, the biggest piece, is making sure that you have processes and teams in place to operationalize all of those principles. Those teams are the ones that write the policies to ensure we are thinking about how human rights might manifest in AI and content or related to AI and privacy or related to AI and bias and discrimination. All of these things require process and really guidelines. So, you just sort of keep getting at the more granular and granular level of what’s required. But ultimately, human rights has to be at the baseline and then integrated into those processes and frameworks. So, we do things like red teaming, or we have SAIF, the Secure AI Framework, and that’s sort of some coalition work that we do with other companies. All of that embeds the way we think about human rights but really gets more pragmatic at the operational level for testing and for red teaming, et cetera, to make sure that we are identifying where risks may arise.


Min thu Aung: [Introduction of the next speaker partially lost in transcription.] …Telenor Group has been a strong advocate of AI ethics and uses AI indeed within its network management, customer services and indeed customer-facing IoT solutions. So if I could pose similar questions to what I posed to Alex: how does Telenor Group indeed conduct AI risk assessments across its footprint, and how do you ensure that human rights are incorporated in these risk assessments? And how does your exposure to the EU AI Act within Europe, but indeed the comparative lack of such laws and regulations within Telenor’s Asian business units, influence how human rights are incorporated into AI risk assessments? And what advice do you have for companies as well? Thank you.


Elina Thorstrom: Thank you, Min, and hello everyone. It’s great to be here and talk about human rights and AI, an extremely topical subject currently. And if we look at what Telenor does, I think very similarly to what Alex pointed out, it all comes down to having that top management commitment to responsible AI as well as human rights. And that is at the core of what Telenor does. For example, in how we have started building our AI strategy, the foundation of it is really responsible AI. And that’s embedded in everything we do when we develop, deploy AI applications etc. So I think the top management commitment to that is very important in order for us to actually achieve those goals and promote responsible AI throughout the organisation and actually bring that to the structures and procedures, as also Alex pointed out. What we have done at DNA for example is that we have derived our AI risk assessment from Telenor’s responsible AI principles. So actually those principles guide our risk assessment work. However, of course, AI governance is a much broader topic than that. So of course we need a lot more different elements to that besides risk assessment. So for example awareness and training building, we need to have proper tools in use, we need to work together for example with our vendors and with our stakeholders. And we need to have those policies, guidelines as well as principles in place. So it’s a very big, comprehensive topic where we need to take into account these different elements. And then if we look at the risk assessments that we do in DNA, we go through, at a very practical level, our AI applications and think about the risks very holistically. So we have a very comprehensive view on this. And it has been very rewarding, I must say. I lead that working group myself. And we look at human rights perspectives, we look at impacts on security, privacy, we look at data governance. So we take a very holistic view on AI. And we see that as very beneficial also because AI is such a large topic. So this is the way we have built our own program. And if we look at our responsible AI principles, the first one actually is incorporating human rights into our procedures. So that’s why it’s core to the risk assessment procedures that we also do. Coming then to the question of the EU’s AI Act and how to apply legislation, when the AI Act, as we know, is in the EU. But Telenor, as mentioned, we operate also in Asia. However, our values, our policies, our principles, they guide our work, irrespective of which country we are at. So that is also at the core of how we do things. Although a part of AI governance, and good AI governance, is of course to make sure that we are compliant with the regulation. But that’s only a part of it. So the company’s culture, policy, guidelines, those are the ones that actually guide us and the work that we do. Perhaps also to the last question of what would be my suggestions going forward. I would come back to the commitment part. So I think that’s the important and key element. So that we are actually committed to a human rights approach as well as responsible AI. But also the collaboration. So we utilize expertise throughout our organization. And that has helped a lot and also supports our work. And also means that we learn from each other a lot. So this I think would be my… And it’s a journey. So this is not a sprint. So it’s a journey that we are all in this together. And we learn as we go. Thank you.


Min thu Aung: Great, thank you very much. I see a lot of commonalities in the answers. So that’s really great to hear. So perhaps we can move to a different, I guess, different actor within the AI landscape, to Natalie from OHCHR, part of the B-Tech project. So Natalie, thank you so much for joining remotely. Good to see you. I hope you can hear and see us just fine. So we would love to explore the role of the UN more generally and of B-Tech more specifically in this context. So B-Tech obviously has produced various outputs on the intersection of human rights and AI, most notably the taxonomy of human rights risks connected to Gen AI, which was produced in November, which GNI was also involved in. So we would love to hear: how do you see the role of multilateral organizations like the UN, like the OECD and others in ensuring the widespread adoption of human rights in AI risk assessments? What role indeed do you see for WSIS and the GDC in promoting this adoption? And last but not least, how do you see the kind of global geopolitical divides on human rights in AI impacting this drive, and what suggestions do you have for companies that are navigating these divides and changes? Over to you.


Nathalie Stadelmann: Thank you. Thank you, Min. I hope everybody can hear me. And thank you very much for the invitation and for bringing together this panel. And greetings from Geneva. So just a few words about the B-Tech project. It’s a project that was launched by the High Commissioner’s Office for Human Rights already almost six years ago, and really very much with the goal to translate the Guiding Principles for the technology sector. So we have produced a lot of guidance in very much a multi-stakeholder fashion. So we are working with GNI and with the OECD, and as well with some of the companies through our community of practice. And I obviously recognize Alex from Google in that space. And just that we are really using the Guiding Principles as the global standard for business conduct. And in that respect, when it comes to the governance of AI, I couldn’t miss the opportunity, because we have the Human Rights Council now ongoing in Geneva, and we just produced a B-Tech report that was mandated by the Human Rights Council, which just shows the interest in this topic, about the shared responsibility of companies that are developing and deploying AI to respect human rights, as well as the state’s duty to protect those rights through requiring those companies to respect human rights, as well as to provide access to remedy. So maybe I was thinking to just run you through quickly the latest findings that we have in this report. So obviously we know that the key message is that innovation can bring a lot of promise but as well peril for human rights, especially in terms of the complex human rights challenges that they bring, and some of them because of their unforeseen nature. So the report very much acknowledged the speed and scale at which AI technologies are evolving, and that this has outpaced the current regulatory framework. [The remainder of this intervention was lost due to transcription errors.]


Min thu Aung: [Intervention lost due to transcription errors.]


Caitlin Kraft-Buchman: Thank you so much, Min, and thank you very, very much for including us in this conversation. We began this journey, actually, in 2017 with OHCHR and the Women’s Rights Division, where we convened—we thought we were just convening several professors from EPFL and ETH and one or two people from OHCHR, but 21 lawyers showed up in this room because everybody was so fascinated with this idea of gender and AI together: what did they have to do with one another? So that alliance, in a way, has led us to working with EPFL, somebody doing a master’s thesis on what these human rights impacts are, and to our creating a methodology that’s now been taught at Cambridge and the Technical University of Munich. We work with the African Centre for Technology Studies in Kenya, which is a Pan-African university, and with Chile’s National Center for Artificial Intelligence, and the course, which looks at a human rights-based approach to the AI lifecycle, actually sits on the Sorbonne Center for AI website. We’ve also taught it at CERN, because we found that technologists, of course, want to do the right thing, but they don’t really know—human rights is really quite an abstract idea. It’s particularly abstract to people from North America, who tend to think of civil and political rights as being the only—as the primary human rights, and economic and social rights not really being part of the larger conversation. People are like, really? Are you sure? Right to health? Is that really a thing? So we have this design course, which is really, really focused on—a design course for developers, for data science majors, but now we’ve found that policymakers like it as well, because it’s a conversation. It’s a critical analysis. And what we’ve also found is that a common vocabulary needs to sort of be implemented, because policymakers are even afraid of talking to technologists, and technologists don’t really understand what the policymakers want. So we’re also trying to create a space where people can have conversations, because I think good people want to make this all work for this technology that’s all surrounding us. And this starts with a design-based approach, with intentionality, with understanding what an objective is: actually why are you making a product, actually what is the impact that you would like to have, through to who should the team be sitting around the table, and what we mean when we talk about diversity. In this case, we’re not only talking about diversity of gender. We’re talking geographic diversity, but also multidisciplinary diversity. Where are the social scientists? Where are the human rights experts? Where are the anthropologists? If it’s a medical application, where are the doctors, the nurses, the people taking the blood? Really, everybody, all the stakeholders in the lifecycle of a product, because we’re seeing that there’s a lot of siloed-off invention. And then going through there to data discovery, to understanding whether you actually have the data. We know that for health applications, for example, we’ve never really discovered the data on women’s bodies, let alone women of the global south, or people of the global south. So what does that mean, then, when you’re deploying, at scale, a health application that actually only has the data for a very, very small demographic, and what are some of the fixes to that? So it’s a question of creating awareness, and creating a conversation. So that’s really what we’re focused on.
In terms of being intentional, and I think the intentionality is really key, in terms of our work, we would like everyone, of course, to expand beyond compliance. So this notion of principles: we tend to think of ethical principles, wonderful, and we love them, but they’re very a la carte, whereas human rights frameworks and international human rights law has been agreed by everybody, and as a point of departure, it really is a very good place to start, as opposed to one company’s or one academic institution’s idea of what really should be foregrounded or not. So we think that that would actually help everybody work towards really, kind of, systemic rebalance, and maybe even using some of these products to look at the way that these can positively help, instead of just being deployed. One thing I just want to flag as an opportunity, where we’re also working very deeply, and have for some time, is procurement, because we know that public procurement in particular is a very large part of the economy: it’s 13% of the EU GDP, and in developing nations, it can be up to 30 to 40% of the GDP. These products are really being deployed at scale, and we think that using really interesting AI deployment levers, we would maybe be able to take products that connect people to services, as opposed to only detecting fraud. So right now, we’re sort of in a negative mode, and all of us want to save money for our governments and ourselves, but it’s also about how we connect to better quality of life and to services. So we think that that could be a lever, a really interesting deployment, and indeed, we’re working on technical guidelines, in other words, questions that procurers from the public sector can ask of vendors, like: oh, are you doing a fairness metric? Right now, if people are asking that, it’s just like, check, yes, we did. You know that didn’t work with COMPAS, as all of you know. So which fairness metrics? Why did you use that? Did you experiment? Why did you think that was a good idea? And having these conversations really go more deeply before things are deployed. And finally, I’ll say we’re also working on a human rights and AI benchmark, which is going to be sort of the first machine learning benchmark dealing with an international human rights law framework. We’re hoping that once it’s released, it’ll sit on Hugging Face and other open source platforms, and developers and machine learning experts can use it to understand whether what they’ve created really does match human rights criteria, international human rights law.


Min thu Aung: Thanks. That’s quite an innovation. Very impressive. Thank you so much for that, Caitlin. Okay, so we’ll now shortly move on to an intervention from Jhalak, but before that, we’re just one intervention away from the Q&A session, so for those that are online, I would encourage you to pose your questions already in the chat, if you haven’t done so already, and we will take the online questions first in the spirit of, you know, ensuring everybody online also feels a part of this room. So yeah, then we’ll move on. Jhalak, if I can. So yeah, Jhalak, CCG is a founding member of GNI, and within your work on tech law and policy in the Global South, you’ve done extensive research exploring different sort of modalities of AI laws and regulations, including a recent workshop that you did on AI and rule of law with South Asian judiciary members back in November. You’ve also been an active participant as an expert in GPAI, the Global Partnership on AI. And of course the benefits of AI impact the Global South in ways that are maybe sometimes different to the impacts on the Global North, right? So there’s the relative absence of dedicated laws and regulations, maybe there’s varying capacity to enforce laws that may or may not exist, different consumer patterns, and perhaps the potential to impact very large populations that may have different levels of AI literacy or indeed digital and media literacy. So the role of Global South governments in protecting or indeed not protecting user rights will be covered in GNI’s policy brief on government interventions in AI that I alluded to a bit earlier on. So yeah, so in your view, when creating local AI laws and regulations, or indeed adapting existing laws and regulations in the context of AI, what can or what should Global South policymakers, lawmakers, and regulators learn from the human rights impacts of companies’ emerging AI activities? Secondly, where do you see opportunities to influence the inclusion of human rights in companies’ risk management processes and policies in the Global South? And last but not least, taking a very particular Global South angle here, why is this so important for the Global South in general and India in particular? Over to you, Jhalak.


Jhalak Mrignayani Kakkar: Thank you. Thanks, Min. I think there’s a lot of work happening globally on human rights due diligence, risk assessment of AI systems. A lot of it is concentrated currently in what we call the Global North, and I think there’s not enough work that’s currently happening in the Global South. Why it’s important that work happens in the Global South is that there are different socioeconomic realities, different societal contexts, but also, if a lot of these technologies are being developed in the North, we don’t know: they could have very different implications in the South. They’re not being developed and designed keeping in mind those contexts. I think that really underlines the need for human rights due diligence by companies. It underlines the need for human rights risk assessment to be built into governance frameworks that are being designed in the Global South, because that will really allow us to operationally, proactively identify risks in a methodical way instead of reacting post facto to harms. It’ll also help regulators understand what the harms are and really design governance and regulatory risk mechanisms accordingly. I think one of the things that there’s now increasing focus on amongst academics and civil society in the Global South is that, while at the global level there’s been a lot of benchmark development, taxonomy development that has happened, increasingly many of us in our contexts are looking at how we can build out more specific benchmarks and taxonomies that more accurately cover the range of risks and harms that arise in our specific contexts. I think that really is maybe the first step towards enabling effective human rights due diligence exercises by companies. Many of our AI companies are global companies, and very often a lot of their staff is global staff that is not very familiar with local contexts. I think that’s a very key part for academia and civil society in various country contexts: to come in and start playing this role of developing benchmarks and taxonomies. But it also points to the need for a sustained multi-stakeholder engagement between companies and civil society and academia on one hand, so that there can be cross-learning, cross-pollination of ideas, but also with governments as they figure out how to identify harms and conduct human rights risk assessment. I think it’s sometimes hard to articulate what risks should be assessed for, and I think we’ve been talking about how human rights frameworks provide a great underpinning and starting point for identification of risks. But what I do want to point to is that there is, for instance, a lot of heterogeneity within the global south in terms of what rights are embedded in their constitutions, the extent of embedding of human rights. And while it’s important to pay attention to human rights frameworks, I think we also have to be strategic about perhaps the language we are using and how we are encouraging governments in certain contexts to adopt certain ethical principles or frameworks. And we have to think about how we approach some of these conversations and how we frame these conversations so that we can reach the intended outcome and impact that we want. So I think we really have to think about the other sort of question that has repeatedly come up in the Indian context within which I work: how broadly do you define the risk?
If there’s too much breadth and too much variety of risks being identified, it can hinder the development of more specific assessment tools and methods, for instance for algorithmic bias, but if it’s too specific, you lose out on the ability to allow a more broad capture of harms that may be arising as these due diligence or risk assessments are conducted. How do you prioritize certain human rights over others, given their mutually affirming character? And I think the way a particular service interacts, even within the Indian context, within an urban part of India versus a semi-urban part of India versus a rural part of India will differ significantly. So I think even within a particular country context, there will have to be various scenarios that are built out in terms of the contexts that the same technology is being deployed in. So I think there are many, many things to think about as these risk assessments are being designed, and I think it’s important to keep that in mind. And that goes back to a point that I raised earlier, that you need culturally and linguistically sensitive staff, which, I mean, even within the Indian context means that in different parts of the country there are people speaking different languages, so you may need staff involved in this that has a multiplicity of perspectives to engage with the different challenges that emerge in those contexts. I’ll just close up by pointing to two points. Risk assessment, human rights due diligence: one of the criticisms is the lack of enforceability. But perhaps that’s also where the value is, because perhaps companies are more incentivized to conduct it when they know that there isn’t a negative consequence. But I think one of the challenges that we’ve seen is: at what threshold do you ask for risk mitigation to be undertaken? How do you articulate that? That is a challenge at this moment in time, to specify that threshold. And here I want to point back to the fact that this is where multi-stakeholder conversations and openness become important. But also, what is the level of transparency that we expect from companies? What is the level of transparency governments should require in the legislation and regulation they’re designing? Because until there is a level of disclosure, it’s hard to really identify what challenges are emerging and what needs to be articulated more clearly to make this a more meaningful exercise, so that we can really ensure that AI is developing in a way that supports human rights rather than starts to impede it.


Min thu Aung: Thank you very much for the talk. Lots of things to consider to make sure that AI development is context-specific and also considers different rights impacts, not only within the global South in general, but also even going in a more granular level between urban and rural impacts, perhaps even. Thank you so much. All right, then we move to the Q&A part of our panel. I don’t actually see any questions in the chat yet, apart from questions about the links that Natalie kindly re-shared, so thank you so much for that, Natalie. Then perhaps moving to the room, are there any questions from the room to our panelists? Yes, I see one from Ben. Any others? Yes. Okay. Richard? Did you have a question too? Yes. People have to line up. Okay, great. All right. Thank you. Go ahead, please. If you could introduce yourself first, and then a question, please. Thank you. Thank you.


Audience: My name is Byungil Oh. I’m from Jinbonet, a digital rights organization based in South Korea. Last year, Korea’s National Human Rights Commission released a human rights impact assessment tool for AI, and I was involved in its development. We conducted a human rights impact assessment on the Human Rights Commission’s pilot system earlier this year. The process wasn’t just about going through a checklist. It involved an exchange of diverse perspectives, so we found it very useful. However, the tool has not yet been widely used, mainly because there is no legal obligation to conduct such assessments. Of course, some companies may conduct internal risk assessments, but independent assessments that include participation from affected parties are rarely carried out. Also, the Korean parliament passed the basic AI law last year, which includes a provision related to human rights impact assessment. But it only states that efforts should be made to conduct them; it does not mandate them. Korean civil society groups are now calling for mandatory human rights impact assessments for high-risk AI systems. I would like to hear the panelists’ views on legally mandating such assessments.


Min thu Aung: Thank you. That’s a great question. Thank you so much for that. I would propose having one company intervention at least on the question that was posed, mandating human rights due diligence for AI, and I guess in the context of high-risk AI perhaps. And perhaps, yeah, one civil society intervention at least if possible. And Natalie, if you want to jump in, please feel free. Who would like to go first?


Alexandria Walden: I’m happy to just jump in quickly. I think if you have human rights governance inside of a company, then you should be doing ongoing human rights due diligence across all of the activity in your company, and that should also apply to your AI work, one. And then two, with respect to sort of regulations to require human rights due diligence, and then specifically for high-risk application areas, we see that with the EU AI Act, and many companies, including mine, have supported a risk-based approach, which does mandate fundamental rights or human rights impact assessments for high-risk applications. So I think that’s something that you will see a lot of support for from industry.


Caitlin Kraft-Buchman: Wonderful. Thank you so much. Yeah, and I’m really happy for the question, because I think that we need to focus basically more on impact than necessarily even risk or harm, but just really say, from the very get-go, what is the impact on humans? Integrate it, as we’ve just heard, you know, all the way through the objective and the design, all the way through. As we know, with HUDERIA, which is the sort of fundamental rights impact assessment from the Council of Europe, there’s going to now be a more formal – I’m sure Natalie will speak to that – a more formal approach, because there is no standardized human rights impact assessment anywhere from any sort of international body. But HUDERIA really brings – I mean, what we’ve done, we’ve worked with the Turing, who did it, and we’ve brought the stakeholder part of it really way up front, and I think that that’s going to really make a huge difference if you do have this sort of multi-stakeholder consultation, this idea of co-creation, really at the get-go. I just want to say two things. I think that we’re going to also go to a right to know, ultimately, in terms of the legislation, and that right to know will be the transparency of what the training data is writ large, right, once we get all the IP issues settled. And then the second thing will be the explainability, that really, at all levels of society, we can kind of understand what’s happening with the algorithm, and then also potential redress. Yeah, that’s it.


Min thu Aung: Thank you, Caitlin. Would anyone else like to intervene? Natalie should talk about their impact assessment. I mean, Natalie, would you like to intervene as well, perhaps talking about your impact assessment tools?


Nathalie Stadelmann: Sure. Maybe just to draw, because I didn’t really go into the recommendation of the report, but indeed, to the colleague representing civil society in South Korea, I would invite him to look at the recommendation we have when it comes to states, when indeed there are regulatory requirements requiring, basically, a company to conduct human rights due diligence, and there should be, as well, encouragement that they publish the human rights due diligence and impact assessment that they have implemented. And those regulations should, as well, as much as possible, request companies developing and deploying AI that they verify the data input and the resulting output to ensure that there is proper representation in terms of gender, race, cultural diversity, and basically safeguards against any negative impact linked to possible discriminatory AI outputs and their consequences. So this is a recommendation in the report. And in terms of human rights impact assessment, we have produced, together with the great support of GNI, as well, and that’s part of the resources listed in the session panel, guidance specifically on generative AI, and there is detailed guidance on human rights impact assessment of generative AI. So I would, as well, invite colleagues to have a look at this specific guidance that we produced now a bit more than a year ago. And just a comment, as well, to the colleague on the panel who mentioned India, I wanted just to draw attention that there will be an AI summit in India in February. And that’s very interesting, precisely in terms of bringing global majority perspective into the discussion. And it seems from the documents published so far that the focus will be on open, transparent, and rights-respecting AI development during the summit. And I think it’s very welcome that after countries like the UK and South Korea and France having hosted those past AI summits, that this summit next February in India, I believe, will be really a good opportunity to possibly, because there was this question asked to me about the geopolitical context, as well. I think we have seen, as well, Brazil developing AI regulation, and so as counterbalance in brackets to the developers in the global north.


Min thu Aung: Thank you very much, Nathalie. I appreciate it. We have two and a half minutes left, and I think two questions. So if we could have the questions together, if possible. Okay, great. Thanks, Ben. Richard, please go ahead.


Audience: Richard Ringfield from BSR. I think while a human rights-based approach is really a necessary prerequisite, many of the really big impacts that AI will have will be more at the societal rather than the individual level. So we’re seeing already, for example, shifts in the way education needs to be carried out as a result of generative AI and people’s ability to research and learn. It’s likely that AI will lead to job displacement or shifts in different jobs. So I’m just wondering whether the panel thinks that those sort of bigger societal impacts or risks can be captured by a human rights-based approach, or whether we need to go a bit beyond sort of the individual human rights-based approach to make sure we fully acknowledge all of the risks that come with AI.


Min thu Aung: Wonderful question. Thank you, Richard. This is really quite open to anyone, so would anyone like to intervene there?


Elina Thorstrom: I can, for example, tell you about our approach. A very good question, I think. And how we at least see it is that we need to have a very comprehensive approach at Telenor. And it requires, of course, taking into account human rights, but it is correct, as you say, that that is definitely not enough. So we need to look at AI much more broadly and look at the impacts it also has at the company level, and also educate, train, and build awareness among our employees. So all of these are, in my opinion, an essential part of AI governance.


Jhalak Mrignayani Kakkar: Yeah, I agree. Just two points. I think social media platforms have pointed to the need for societal impact assessments. Secondly, existing human rights are being reinterpreted in group and community settings, like privacy, group privacy, community rights over data. So I think there’s also a reinterpretation and broadening of perspective required around human rights in the technology context.


Min thu Aung: Thank you very much. We have 30 seconds remaining. So, unless anyone has any last-minute must-have interventions. No? I would like to close the panel here. Thank you so much to our panelists for taking part and sharing their views. Thank you so much to those participating online and also for the questions that we received. So yeah, as per the IGF requirements, we will be posting a summary related to this session. So please feel free to read it there. And with that, I thank everyone again. Appreciate it. Thank you.



Alexandria Walden

Speech speed

182 words per minute

Speech length

691 words

Speech time

227 seconds

Companies need baseline human rights governance before integrating AI principles, with corporate policies committed to UN Guiding Principles and board-level oversight

Explanation

Walden argues that companies must establish fundamental human rights governance structures as a foundation before layering on AI-specific principles. This includes having corporate policies that commit to international standards like the UN Guiding Principles, with risks reviewed at the board level and human rights programs that implement commitments across products and services.


Evidence

Google’s commitment to GNI principles and UN Guiding Principles, with AI principles building on top of these commitments, corporate policy setting values from highest company level, and board-level risk review


Major discussion point

Corporate Human Rights Governance and AI Risk Management


Topics

Human rights | Legal and regulatory


Agreed with

– Elina Thorstrom

Agreed on

Top management commitment is essential for effective human rights integration in AI


Technical teams need processes and guidelines to operationalize human rights principles through red teaming, secure AI frameworks, and coalition work

Explanation

Walden emphasizes that having principles is insufficient without operational processes that enable technical teams to implement human rights considerations. This requires specific frameworks, testing methodologies, and collaborative approaches to identify and address rights-related risks in AI development.


Evidence

Red teaming processes, secure AI framework (SAIF), coalition work with other companies, and policies ensuring consideration of human rights in AI content, privacy, bias and discrimination


Major discussion point

Operational Implementation of Human Rights in AI Development


Topics

Human rights | Infrastructure


Agreed with

– Elina Thorstrom

Agreed on

Operational processes and frameworks are necessary to implement human rights principles in AI development


Companies should conduct ongoing human rights due diligence across all activities, including AI work, with support for risk-based regulatory approaches

Explanation

Walden advocates for comprehensive human rights due diligence that encompasses all company activities, including AI development and deployment. She supports regulatory frameworks that mandate human rights impact assessments for high-risk AI applications, viewing this as a reasonable and necessary approach.


Evidence

Support for EU AI Act’s risk-based approach that mandates fundamental rights impact assessment for high-risk applications, industry support for such regulatory requirements


Major discussion point

Corporate Human Rights Governance and AI Risk Management


Topics

Human rights | Legal and regulatory


Agreed with

– Audience

Agreed on

Support for risk-based regulatory approaches with mandatory human rights assessments for high-risk AI


Disagreed with

– Audience

Disagreed on

Regulatory enforcement vs voluntary approaches



Elina Thorstrom

Speech speed

132 words per minute

Speech length

745 words

Speech time

338 seconds

Top management commitment to responsible AI and human rights is essential, with AI strategy foundations built on responsible AI principles

Explanation

Thorstrom argues that successful integration of human rights into AI development requires commitment from the highest levels of company leadership. She emphasizes that responsible AI must be the foundation of AI strategy and embedded throughout organizational structures and procedures.


Evidence

Telenor’s AI strategy foundation built on responsible AI, responsible AI principles guiding risk assessment work at DNA, top management commitment driving responsible AI throughout organization


Major discussion point

Corporate Human Rights Governance and AI Risk Management


Topics

Human rights | Legal and regulatory


Agreed with

– Alexandria Walden

Agreed on

Top management commitment is essential for effective human rights integration in AI


Comprehensive risk assessment should examine human rights, security, privacy, and data governance holistically through cross-organizational collaboration

Explanation

Thorstrom advocates for a holistic approach to AI risk assessment that goes beyond human rights to include security, privacy, and data governance considerations. She emphasizes the importance of utilizing expertise throughout the organization and collaborative learning processes.


Evidence

DNA’s comprehensive AI application review process examining human rights, security, privacy, and data governance; cross-organizational collaboration and expertise utilization


Major discussion point

Operational Implementation of Human Rights in AI Development


Topics

Human rights | Cybersecurity | Legal and regulatory


Agreed with

– Alexandria Walden

Agreed on

Operational processes and frameworks are necessary to implement human rights principles in AI development


AI impacts extend beyond individual rights to societal level changes in education, employment, and social structures requiring comprehensive approaches

Explanation

Thorstrom acknowledges that while human rights approaches are necessary, AI’s impacts on society require broader consideration including company-level impacts and employee education. She advocates for comprehensive AI governance that addresses these wider societal implications.


Evidence

Need for employee education, training, and awareness building; comprehensive approach beyond human rights at company level


Major discussion point

Broader Societal Impact Assessment


Topics

Human rights | Economic | Sociocultural


Agreed with

– Jhalak Mrignayani Kakkar
– Audience

Agreed on

AI impacts extend beyond individual rights to broader societal implications


Disagreed with

– Jhalak Mrignayani Kakkar
– Audience

Disagreed on

Scope of impact assessment – individual vs societal level



Caitlin Kraft-Buchman

Speech speed

164 words per minute

Speech length

1316 words

Speech time

480 seconds

Human rights frameworks provide better foundation than a la carte ethical principles since they represent globally agreed standards

Explanation

Kraft-Buchman argues that international human rights law offers a superior foundation for AI governance compared to individual companies’ or institutions’ ethical principles. She emphasizes that human rights frameworks have been agreed upon by all nations and provide a comprehensive starting point rather than selective ethical approaches.


Evidence

International human rights law agreed by everybody as point of departure, contrast with a la carte ethical principles that vary by company or institution


Major discussion point

Corporate Human Rights Governance and AI Risk Management


Topics

Human rights | Legal and regulatory


Design-based approach with intentionality is needed, including multidisciplinary teams with social scientists, human rights experts, and anthropologists

Explanation

Kraft-Buchman advocates for intentional design processes that bring together diverse expertise from the beginning of AI development. She emphasizes the need for multidisciplinary teams that include not just technologists but also social scientists, human rights experts, and other relevant specialists depending on the application area.


Evidence

Design course methodology taught at Cambridge, Technical University Munich, African Center for Technology Studies, Chile’s National Center for AI; emphasis on geographic and multidisciplinary diversity including doctors, nurses for health applications


Major discussion point

Operational Implementation of Human Rights in AI Development


Topics

Human rights | Sociocultural | Development


Public procurement represents significant deployment lever (13% EU GDP, 30-40% developing nations GDP) for connecting people to services rather than just detecting fraud

Explanation

Kraft-Buchman highlights public procurement as a major opportunity for positive AI deployment, noting its substantial economic impact. She advocates for using procurement processes to deploy AI systems that connect people to services and improve quality of life, rather than focusing primarily on cost-saving measures like fraud detection.


Evidence

Public procurement statistics: 13% of EU GDP, 30-40% of GDP in developing nations; current focus on fraud detection versus potential for service connection


Major discussion point

Multilateral Organizations and Global Cooperation


Topics

Economic | Development | Legal and regulatory



Jhalak Mrignayani Kakkar

Speech speed

124 words per minute

Speech length

1058 words

Speech time

511 seconds

Global South faces different socioeconomic realities where technologies developed in the North may have different implications, requiring specific benchmarks and taxonomies

Explanation

Kakkar argues that AI technologies developed in the Global North may have vastly different impacts when deployed in Global South contexts due to different socioeconomic conditions and societal structures. This necessitates the development of context-specific risk assessment tools and frameworks rather than relying solely on Global North-developed standards.


Evidence

Different socioeconomic realities and societal contexts in Global South, technologies not designed keeping those contexts in mind, increasing focus on building specific benchmarks and taxonomies for local contexts


Major discussion point

Global South Perspectives and Context-Specific Challenges


Topics

Human rights | Development | Sociocultural


Multi-stakeholder engagement between companies, civil society, academia, and governments is essential for cross-learning and context-appropriate solutions

Explanation

Kakkar emphasizes the critical need for sustained collaboration between different stakeholders to ensure effective human rights due diligence in AI. She argues that global companies often lack familiarity with local contexts, making engagement with local civil society and academia essential for developing appropriate solutions.


Evidence

Many AI companies are global with staff unfamiliar with local contexts, need for sustained multi-stakeholder engagement for cross-learning and cross-pollination of ideas


Major discussion point

Global South Perspectives and Context-Specific Challenges


Topics

Human rights | Development | Legal and regulatory


Agreed with

– Nathalie Stadelmann
– Min thu Aung

Agreed on

Multi-stakeholder collaboration is essential for effective AI governance


Strategic framing of conversations is needed given heterogeneity in constitutional rights embedding across Global South countries

Explanation

Kakkar points out that Global South countries have varying degrees of human rights embedding in their constitutions and legal frameworks. This requires strategic approaches to how human rights and AI conversations are framed to achieve intended outcomes while respecting different national contexts and legal traditions.


Evidence

Heterogeneity within Global South in constitutional embedding of human rights, need for strategic language and framing of conversations to reach intended impact


Major discussion point

Global South Perspectives and Context-Specific Challenges


Topics

Human rights | Legal and regulatory | Sociocultural


Human rights due diligence allows proactive risk identification rather than post-facto harm reaction, requiring culturally and linguistically sensitive staff

Explanation

Kakkar argues that proper human rights due diligence enables organizations to identify and address potential harms before they occur, rather than reacting after damage is done. She emphasizes that this requires staff who understand local cultural and linguistic contexts, which is particularly important in diverse countries like India.


Evidence

Proactive identification of risks versus post-facto reaction to harms, need for culturally and linguistically sensitive staff, example of different languages spoken in different parts of India requiring multiplicity of perspectives


Major discussion point

Operational Implementation of Human Rights in AI Development


Topics

Human rights | Sociocultural | Development


Legal mandates for human rights impact assessments face implementation challenges without enforcement mechanisms, requiring transparency thresholds and disclosure requirements

Explanation

Kakkar acknowledges the tension between mandatory human rights assessments and their practical implementation. She notes that while lack of enforceability may actually encourage company participation, there remain challenges in defining thresholds for risk mitigation and determining appropriate levels of transparency and disclosure.


Evidence

Criticism of lack of enforceability but potential value in encouraging company participation, challenges in articulating thresholds for risk mitigation, need for transparency and disclosure requirements


Major discussion point

Regulatory Frameworks and Legal Requirements


Topics

Human rights | Legal and regulatory


Human rights concepts are being reinterpreted for group and community settings, including group privacy and community data rights

Explanation

Kakkar argues that traditional individual-focused human rights frameworks are being expanded and reinterpreted to address collective and community impacts of AI technologies. This includes concepts like group privacy and community rights over data that go beyond individual rights protections.


Evidence

Social media platforms pointing to need for societal impact assessment, reinterpretation of privacy as group privacy, community rights over data


Major discussion point

Broader Societal Impact Assessment


Topics

Human rights | Legal and regulatory | Sociocultural


Agreed with

– Elina Thorstrom
– Audience

Agreed on

AI impacts extend beyond individual rights to broader societal implications


Disagreed with

– Elina Thorstrom
– Audience

Disagreed on

Scope of impact assessment – individual vs societal level



Nathalie Stadelmann

Speech speed

138 words per minute

Speech length

839 words

Speech time

363 seconds

AI technology evolution outpaces current regulatory frameworks, requiring shared responsibility between companies and states for human rights protection

Explanation

Stadelmann argues that the rapid pace of AI development has created a gap where existing regulatory frameworks cannot keep up with technological advancement. This necessitates a shared responsibility model where both companies and states have obligations to protect human rights in AI development and deployment.


Evidence

BTEC report mandated by Human Rights Council showing speed and scale of AI evolution outpacing regulatory frameworks, shared responsibility between companies developing/deploying AI and states’ duty to protect rights


Major discussion point

Regulatory Frameworks and Legal Requirements


Topics

Human rights | Legal and regulatory


Regulations should require publication of human rights due diligence and verification of data input/output for proper representation and non-discrimination

Explanation

Stadelmann advocates for regulatory requirements that mandate transparency in human rights due diligence processes and verification of AI systems’ data inputs and outputs. This includes ensuring proper representation across gender, race, and cultural diversity while implementing safeguards against discriminatory outcomes.


Evidence

BTEC report recommendations for regulatory requirements on publishing human rights due diligence, verification of data input and output for gender, race, cultural diversity representation, safeguards against discriminatory AI outputs


Major discussion point

Regulatory Frameworks and Legal Requirements


Topics

Human rights | Legal and regulatory


UN and multilateral bodies play crucial roles in translating guiding principles for technology sector through multi-stakeholder guidance development

Explanation

Stadelmann describes the UN’s role, particularly through BTEC, in translating broad human rights principles into practical guidance for the technology sector. This involves multi-stakeholder collaboration with organizations like GNI and OECD to develop actionable frameworks for companies.


Evidence

BTEC project launched 6 years ago to translate the Guiding Principles for the technology sector, multi-stakeholder work with GNI, OECD, and companies through a community of practice


Major discussion point

Multilateral Organizations and Global Cooperation


Topics

Human rights | Legal and regulatory


Agreed with

– Jhalak Mrignayani Kakkar
– Min thu Aung

Agreed on

Multi-stakeholder collaboration is essential for effective AI governance


Global AI summits, particularly upcoming India summit, provide opportunities to bring global majority perspectives into rights-respecting AI development discussions

Explanation

Stadelmann highlights the importance of global AI summits in fostering international cooperation on AI governance, with particular emphasis on the upcoming India summit as an opportunity to center Global South perspectives. She sees this as a counterbalance to previous summits hosted by Global North countries.


Evidence

AI summit in India in February focusing on open, transparent, and rights-respecting AI development, in contrast with previous summits hosted by the UK, South Korea, and France; Brazil developing AI regulation as a counterbalance to Global North developers


Major discussion point

Multilateral Organizations and Global Cooperation


Topics

Human rights | Development | Legal and regulatory



Min thu Aung

Speech speed

133 words per minute

Speech length

1969 words

Speech time

881 seconds

Multi-stakeholder initiatives like GNI are essential for accountability and collective advocacy at the intersection of technology and human rights

Explanation

Min thu Aung argues that organizations like GNI, which bring together academics, civil society, companies, and investors, play a crucial role in ensuring accountability and shared learning. These initiatives are particularly important when companies face government requests that impact freedom of expression or privacy rights.


Evidence

GNI brings together four constituencies (academics, civil society, companies, investors) for accountability, shared learning, collective advocacy; GNI principles guide companies on government requests impacting freedom of expression and privacy


Major discussion point

Multilateral Organizations and Global Cooperation


Topics

Human rights | Legal and regulatory


Agreed with

– Jhalak Mrignayani Kakkar
– Nathalie Stadelmann

Agreed on

Multi-stakeholder collaboration is essential for effective AI governance


AI governance requires active engagement across multiple international forums and policy development processes

Explanation

Min thu Aung emphasizes that effective AI governance necessitates participation in various international bodies and collaborative projects. This includes membership in expert networks, involvement in due diligence projects, and development of policy briefs on government interventions in AI.


Evidence

GNI membership in OECD AI Network of Experts, involvement in BTEC’s GenAI Human Rights Due Diligence Project, AI working group developing policy brief on government interventions in AI


Major discussion point

Multilateral Organizations and Global Cooperation


Topics

Human rights | Legal and regulatory


Companies need practical guidance on incorporating human rights into AI risk assessments across different regulatory environments

Explanation

Min thu Aung highlights the need for companies to understand how to conduct human rights-informed risk assessments while navigating different regulatory frameworks like the EU AI Act. This requires practical advice on normalizing human rights considerations within AI operations across various jurisdictions.


Evidence

Questions posed to panelists about conducting risk assessments, incorporating human rights, navigating EU AI Act and other regulations, advice for normalizing AI risk assessments


Major discussion point

Corporate Human Rights Governance and AI Risk Management


Topics

Human rights | Legal and regulatory



Audience

Speech speed

145 words per minute

Speech length

324 words

Speech time

133 seconds

Legal mandates for human rights impact assessments are necessary because voluntary approaches lack widespread adoption

Explanation

The audience member from South Korea argues that while human rights impact assessment tools exist and can be valuable, they are not widely used without legal obligations. Even when companies conduct internal risk assessments, independent assessments with affected party participation are rarely carried out.


Evidence

Korea’s National Human Rights Commission human rights impact assessment tool not widely used due to lack of legal obligation, Korean AI Basic Act only states efforts should be made rather than mandating assessments


Major discussion point

Regulatory Frameworks and Legal Requirements


Topics

Human rights | Legal and regulatory


Agreed with

– Alexandria Walden

Agreed on

Support for risk-based regulatory approaches with mandatory human rights assessments for high-risk AI


Disagreed with

– Alexandria Walden

Disagreed on

Regulatory enforcement vs voluntary approaches


AI impacts extend beyond individual human rights to broader societal transformations requiring comprehensive assessment approaches

Explanation

The audience member from BSR argues that while human rights approaches are necessary, many significant AI impacts occur at societal rather than individual levels. These include changes in education systems, job displacement, and shifts in social structures that may not be fully captured by traditional individual-focused human rights frameworks.


Evidence

Examples of societal impacts: shifts in education due to generative AI affecting research and learning capabilities, job displacement and shifts in employment patterns


Major discussion point

Broader Societal Impact Assessment


Topics

Human rights | Economic | Sociocultural


Agreed with

– Elina Thorstrom
– Jhalak Mrignayani Kakkar

Agreed on

AI impacts extend beyond individual rights to broader societal implications


Disagreed with

– Elina Thorstrom
– Jhalak Mrignayani Kakkar

Disagreed on

Scope of impact assessment – individual vs societal level


Agreements

Agreement points

Top management commitment is essential for effective human rights integration in AI

Speakers

– Alexandria Walden
– Elina Thorstrom

Arguments

Companies need baseline human rights governance before integrating AI principles, with corporate policies committed to UN Guiding Principles and board-level oversight


Top management commitment to responsible AI and human rights is essential, with AI strategy foundations built on responsible AI principles


Summary

Both speakers emphasize that successful integration of human rights into AI requires commitment from the highest levels of company leadership, with corporate policies and governance structures established at the board level


Topics

Human rights | Legal and regulatory


Operational processes and frameworks are necessary to implement human rights principles in AI development

Speakers

– Alexandria Walden
– Elina Thorstrom

Arguments

Technical teams need processes and guidelines to operationalize human rights principles through red teaming, secure AI frameworks, and coalition work


Comprehensive risk assessment should examine human rights, security, privacy, and data governance holistically through cross-organizational collaboration


Summary

Both speakers agree that having principles alone is insufficient and that companies need specific operational processes, frameworks, and collaborative approaches to effectively implement human rights considerations in AI development


Topics

Human rights | Infrastructure | Legal and regulatory


Support for risk-based regulatory approaches with mandatory human rights assessments for high-risk AI

Speakers

– Alexandria Walden
– Audience

Arguments

Companies should conduct ongoing human rights due diligence across all activities, including AI work, with support for risk-based regulatory approaches


Legal mandates for human rights impact assessments are necessary because voluntary approaches lack widespread adoption


Summary

There is agreement that regulatory frameworks requiring human rights impact assessments for high-risk AI applications are necessary and supported by industry, as voluntary approaches have proven insufficient


Topics

Human rights | Legal and regulatory


Multi-stakeholder collaboration is essential for effective AI governance

Speakers

– Jhalak Mrignayani Kakkar
– Nathalie Stadelmann
– Min thu Aung

Arguments

Multi-stakeholder engagement between companies, civil society, academia, and governments is essential for cross-learning and context-appropriate solutions


UN and multilateral bodies play crucial roles in translating guiding principles for technology sector through multi-stakeholder guidance development


Multi-stakeholder initiatives like GNI are essential for accountability and collective advocacy at the intersection of technology and human rights


Summary

All three speakers emphasize the critical importance of bringing together diverse stakeholders including companies, civil society, academia, and governments to develop effective AI governance frameworks


Topics

Human rights | Legal and regulatory


AI impacts extend beyond individual rights to broader societal implications

Speakers

– Elina Thorstrom
– Jhalak Mrignayani Kakkar
– Audience

Arguments

AI impacts extend beyond individual rights to societal level changes in education, employment, and social structures requiring comprehensive approaches


Human rights concepts are being reinterpreted for group and community settings, including group privacy and community data rights


AI impacts extend beyond individual human rights to broader societal transformations requiring comprehensive assessment approaches


Summary

There is consensus that AI’s impacts go beyond individual human rights to encompass broader societal changes including education, employment, and community structures, requiring expanded assessment frameworks


Topics

Human rights | Economic | Sociocultural


Similar viewpoints

Both speakers advocate for using established international human rights frameworks as the foundation for AI governance rather than relying on individual companies’ ethical principles, emphasizing the universal agreement and comprehensive nature of human rights law

Speakers

– Caitlin Kraft-Buchman
– Nathalie Stadelmann

Arguments

Human rights frameworks provide better foundation than a la carte ethical principles since they represent globally agreed standards


UN and multilateral bodies play crucial roles in translating guiding principles for technology sector through multi-stakeholder guidance development


Topics

Human rights | Legal and regulatory


Both speakers emphasize the need for context-specific approaches to AI development that consider diverse perspectives and local realities, requiring multidisciplinary expertise and culturally sensitive frameworks

Speakers

– Jhalak Mrignayani Kakkar
– Caitlin Kraft-Buchman

Arguments

Global South faces different socioeconomic realities where technologies developed in the North may have different implications, requiring specific benchmarks and taxonomies


Design-based approach with intentionality is needed, including multidisciplinary teams with social scientists, human rights experts, and anthropologists


Topics

Human rights | Development | Sociocultural


Both speakers support comprehensive human rights due diligence requirements for AI systems, with regulatory frameworks that mandate transparency and verification processes to ensure non-discriminatory outcomes

Speakers

– Alexandria Walden
– Nathalie Stadelmann

Arguments

Companies should conduct ongoing human rights due diligence across all activities, including AI work, with support for risk-based regulatory approaches


Regulations should require publication of human rights due diligence and verification of data input/output for proper representation and non-discrimination


Topics

Human rights | Legal and regulatory


Unexpected consensus

Industry support for mandatory human rights regulations

Speakers

– Alexandria Walden
– Audience

Arguments

Companies should conduct ongoing human rights due diligence across all activities, including AI work, with support for risk-based regulatory approaches


Legal mandates for human rights impact assessments are necessary because voluntary approaches lack widespread adoption


Explanation

It is somewhat unexpected to see strong alignment between a major tech company representative and civil society on the need for mandatory regulatory requirements, as industry typically resists additional regulatory burdens. This suggests a maturing recognition that voluntary approaches are insufficient


Topics

Human rights | Legal and regulatory


Acknowledgment of limitations in current human rights frameworks for AI

Speakers

– Jhalak Mrignayani Kakkar
– Elina Thorstrom
– Audience

Arguments

Human rights concepts are being reinterpreted for group and community settings, including group privacy and community data rights


AI impacts extend beyond individual rights to societal level changes in education, employment, and social structures requiring comprehensive approaches


AI impacts extend beyond individual human rights to broader societal transformations requiring comprehensive assessment approaches


Explanation

There is unexpected consensus across different stakeholder types that traditional individual-focused human rights frameworks may be insufficient for addressing AI’s broader societal impacts, suggesting a need for new approaches beyond established human rights paradigms


Topics

Human rights | Economic | Sociocultural


Overall assessment

Summary

There is strong consensus on the need for top management commitment, operational frameworks for implementation, multi-stakeholder collaboration, and regulatory requirements for human rights in AI. Speakers also agree that AI impacts extend beyond individual rights to broader societal implications requiring expanded assessment approaches.


Consensus level

High level of consensus across diverse stakeholders (industry, civil society, multilateral organizations, Global South perspectives) on fundamental principles and approaches, with surprising alignment on the need for mandatory regulatory frameworks. This suggests the field is maturing toward shared understanding of necessary governance structures, though implementation challenges remain regarding context-specific applications and enforcement mechanisms.


Differences

Different viewpoints

Scope of impact assessment – individual vs societal level

Speakers

– Elina Thorstrom
– Jhalak Mrignayani Kakkar
– Audience

Arguments

AI impacts extend beyond individual rights to societal level changes in education, employment, and social structures requiring comprehensive approaches


Human rights concepts are being reinterpreted for group and community settings, including group privacy and community data rights


AI impacts extend beyond individual human rights to broader societal transformations requiring comprehensive assessment approaches


Summary

While there’s agreement that AI impacts go beyond individual rights, there are different views on how to address this. Thorstrom emphasizes comprehensive company-level approaches, Kakkar focuses on reinterpreting existing human rights frameworks for group settings, and the audience member suggests that traditional human rights frameworks may be insufficient for capturing societal-level changes.


Topics

Human rights | Economic | Sociocultural


Regulatory enforcement vs voluntary approaches

Speakers

– Alexandria Walden
– Audience

Arguments

Companies should conduct ongoing human rights due diligence across all activities, including AI work, with support for risk-based regulatory approaches


Legal mandates for human rights impact assessments are necessary because voluntary approaches lack widespread adoption


Summary

While Walden supports risk-based regulatory approaches for high-risk AI applications, the audience member from South Korea argues more broadly that legal mandates are necessary because voluntary approaches are not widely adopted, suggesting a difference in views on the extent of regulatory requirements needed.


Topics

Human rights | Legal and regulatory


Unexpected differences

Adequacy of human rights frameworks for AI governance

Speakers

– Caitlin Kraft-Buchman
– Audience

Arguments

Human rights frameworks provide better foundation than a la carte ethical principles since they represent globally agreed standards


AI impacts extend beyond individual human rights to broader societal transformations requiring comprehensive assessment approaches


Explanation

This disagreement is unexpected because both parties are advocating for comprehensive approaches to AI governance, yet they differ on whether existing human rights frameworks are sufficient. Kraft-Buchman strongly advocates for human rights frameworks as superior to other approaches, while the audience member suggests these frameworks may be inadequate for capturing broader societal impacts, potentially requiring approaches that go beyond traditional human rights paradigms.


Topics

Human rights | Economic | Sociocultural


Overall assessment

Summary

The discussion shows remarkable consensus on fundamental principles – all speakers agree that human rights should be central to AI governance, that multi-stakeholder collaboration is essential, and that proactive risk assessment is preferable to reactive harm mitigation. The main disagreements center on implementation approaches rather than core values.


Disagreement level

Low to moderate disagreement level with high strategic significance. While speakers largely agree on goals, their different perspectives on implementation methods (voluntary vs mandatory approaches, individual vs societal impact focus, adequacy of existing frameworks) reflect important tensions in the field that could significantly impact how AI governance develops globally. These disagreements are constructive and reflect the complexity of translating human rights principles into practical AI governance mechanisms across different contexts and stakeholder perspectives.



Takeaways

Key takeaways

Companies must establish baseline human rights governance with top management commitment before integrating AI-specific principles, using UN Guiding Principles as foundation rather than ad hoc ethical frameworks


Effective AI risk management requires comprehensive, holistic approaches that integrate human rights considerations into technical processes through red teaming, secure AI frameworks, and cross-organizational collaboration


Global South contexts require specialized attention due to different socioeconomic realities, with need for context-specific benchmarks, taxonomies, and culturally sensitive implementation approaches


Multi-stakeholder engagement between companies, civil society, academia, and governments is essential for effective human rights integration in AI development and deployment


Risk-based regulatory approaches with mandatory human rights impact assessments for high-risk AI systems are gaining industry support, particularly following EU AI Act model


AI impacts extend beyond individual rights to broader societal changes requiring both human rights assessments and societal impact evaluations


Public procurement represents a significant leverage point for deploying rights-respecting AI at scale, particularly in developing nations where it comprises 30-40% of GDP


Resolutions and action items

GNI AI working group to complete policy brief on government interventions in AI and rights-respecting responses


BTEC to continue developing standardized human rights impact assessment tools and guidance, building on existing generative AI guidance


Participants encouraged to review BTEC’s latest Human Rights Council report on shared responsibility for AI governance


Civil society and academia in Global South to develop context-specific benchmarks and taxonomies for AI risk assessment


Companies to implement ongoing human rights due diligence across all AI activities with appropriate transparency and disclosure mechanisms


Unresolved issues

Lack of standardized human rights impact assessment methodology across international bodies


Challenges in defining appropriate risk thresholds for triggering mitigation measures in AI systems


Difficulty balancing breadth versus specificity in risk identification frameworks


Enforcement mechanisms for human rights due diligence requirements remain unclear


Geopolitical divides on human rights approaches to AI governance continue to impact global coordination


Questions about prioritizing different human rights when they conflict in AI deployment contexts


Uncertainty about optimal levels of transparency and disclosure requirements for companies


Gap between individual human rights frameworks and broader societal impact assessment needs


Suggested compromises

Risk-based regulatory approach that mandates human rights assessments only for high-risk AI applications rather than all AI systems


Strategic framing of human rights conversations in different jurisdictions based on local constitutional and cultural contexts rather than universal approach


Voluntary human rights due diligence with incentives rather than punitive enforcement mechanisms to encourage company participation


Combination of individual human rights assessments with broader societal impact evaluations to capture full scope of AI implications


Multi-stakeholder consultation processes that include affected communities in co-creation rather than top-down assessment approaches


Thought provoking comments

Human rights has to be at the baseline and then integrated into those processes and frameworks… having that in place is what allows us to then get to a point where, okay, we also have AI principles on top of that. Our AI principles build on top of our commitments to the UNGPs, our GNI principles.

Speaker

Alexandria Walden (Google)


Reason

This comment established a foundational framework that human rights governance must precede AI-specific principles, challenging the common approach of treating AI ethics as separate from broader human rights commitments. It provided a clear hierarchy and integration model.


Impact

This comment set the tone for the entire discussion by establishing that human rights should be the foundational layer, not an add-on. It influenced subsequent speakers like Elina Thorström to emphasize similar points about top management commitment and comprehensive approaches, creating a consistent thread throughout the panel.


Our values, our policies, our principles, they guide our work. Irrespective of which country we are at… Although a part of AI governance and good AI governance is of course to make sure that we are compliant with the regulation. But that’s only a part of it. So the company’s culture, policy, guidelines, those are the ones that actually guide us.

Speaker

Elina Thorström (Telenor)


Reason

This insight challenged the compliance-focused approach to AI governance, arguing that internal values should drive behavior regardless of local regulatory requirements. It highlighted the tension between regulatory compliance and ethical leadership.


Impact

This comment shifted the discussion from regulatory compliance to values-driven governance, influencing later speakers to discuss the limitations of purely compliance-based approaches and the need for companies to go beyond legal requirements.


If a lot of these technologies are being developed in the North, we don’t know. They could have very different implications in the South. They’re not being developed and designed keeping in mind those contexts… even within the Indian context, within an urban part of India versus a semi-urban part of India versus a rural part of India will differ significantly.

Speaker

Jhalak Mrignayani Kakkar


Reason

This comment introduced crucial complexity about contextual differences in AI impacts, challenging the assumption that risk assessments can be universally applied. It highlighted the need for granular, context-specific approaches even within single countries.


Impact

This intervention significantly deepened the discussion by introducing the concept of contextual variation in AI impacts. It led to more nuanced conversations about the need for culturally and linguistically sensitive staff, different benchmarks for different contexts, and the limitations of one-size-fits-all approaches to human rights due diligence.


We tend to think of ethical principles, wonderful, and we love them, but they’re very a la carte, whereas human rights frameworks and international human rights law has been agreed by everybody, and as a point of departure, it really is a very good place to start, as opposed to one company’s or one academic institution’s idea of what really should be foregrounded or not.

Speaker

Caitlin Kraft-Buchman


Reason

This comment provided a sharp critique of the current approach to AI ethics, distinguishing between voluntary ethical principles and binding human rights frameworks. It challenged the legitimacy and effectiveness of company-specific ethical approaches.


Impact

This comment reframed the discussion from AI ethics to human rights law, emphasizing the importance of universally agreed standards. It reinforced the earlier points about human rights as foundational and influenced the conversation toward more concrete, legally grounded approaches rather than voluntary principles.


Many of the really big impacts that AI will have will be more at the societal rather than the individual level… I’m just wondering whether the panel thinks that those sort of bigger societal impacts or risks can be captured by a human rights-based approach, or whether we need to go a bit beyond sort of the individual human rights-based approach.

Speaker

Richard Ringfield (BSR)


Reason

This question challenged the fundamental premise of the entire panel by questioning whether human rights frameworks, traditionally focused on individual rights, are adequate for addressing systemic societal changes brought by AI. It introduced a critical limitation of the human rights approach.


Impact

This question prompted the panel to acknowledge the limitations of individual-focused human rights approaches and led to discussions about societal impact assessments, group privacy rights, and the need for broader governance frameworks. It added important nuance to the conversation by highlighting what might be missing from a purely human rights-based approach.


Risk assessment, human rights due diligence. One of the criticisms is the lack of enforceability. But perhaps that’s also where the value is, because perhaps companies are more incentivized to conduct it when they know that there isn’t a negative consequence.

Speaker

Jhalak Mrignayani Kakkar


Reason

This paradoxical insight challenged conventional thinking about enforcement, suggesting that the voluntary nature of human rights due diligence might actually be its strength rather than weakness. It introduced a counterintuitive perspective on regulatory design.


Impact

This comment introduced complexity to the discussion about mandatory versus voluntary approaches to human rights assessments. It influenced the conversation about the Korean AI law and the debate over whether to mandate human rights impact assessments, showing that enforcement mechanisms need careful consideration of incentive structures.


Overall assessment

These key comments fundamentally shaped the discussion by establishing a clear hierarchy (human rights as foundation, not add-on), introducing critical complexity about contextual variation and Global South perspectives, challenging the adequacy of voluntary ethical approaches, and questioning whether individual rights frameworks can address systemic societal impacts. The comments created a progression from basic principles to implementation challenges to fundamental limitations, resulting in a nuanced conversation that moved beyond simple advocacy for human rights in AI to grapple with practical and theoretical challenges. The discussion evolved from ‘why’ human rights matter in AI to ‘how’ to implement them effectively across different contexts, and finally to ‘whether’ current frameworks are sufficient for AI’s societal impacts.


Follow-up questions

How to determine the threshold for requiring risk mitigation in human rights due diligence for AI systems

Speaker

Jhalak Mrignayani Kakkar


Explanation

This is a critical operational challenge for implementing effective human rights risk management, as it determines when companies must take action to address identified risks


What level of transparency should be expected from companies and required by governments in AI human rights assessments

Speaker

Jhalak Mrignayani Kakkar


Explanation

Transparency is essential for identifying emerging challenges and making human rights due diligence a meaningful exercise, but the appropriate level needs to be defined


How to balance breadth versus specificity in defining AI risks for assessment purposes

Speaker

Jhalak Mrignayani Kakkar


Explanation

Too broad a definition hinders development of specific assessment tools, while too narrow a definition may miss important harms that could arise


How to prioritize certain human rights over others in AI risk assessments, given their mutually affirming character

Speaker

Jhalak Mrignayani Kakkar


Explanation

This addresses the practical challenge of resource allocation and focus when multiple human rights may be impacted by AI systems


Development of context-specific benchmarks and taxonomies for Global South countries to identify AI risks and harms

Speaker

Jhalak Mrignayani Kakkar


Explanation

Global benchmarks may not capture the specific socioeconomic realities and societal contexts of Global South countries, requiring localized frameworks


How to strategically frame human rights conversations with governments that have varying levels of human rights embedding in their constitutions

Speaker

Jhalak Mrignayani Kakkar


Explanation

Different countries have different constitutional frameworks for human rights, requiring tailored approaches to achieve intended outcomes


Whether societal-level AI impacts can be fully captured by individual human rights-based approaches

Speaker

Richard Ringfield (BSR)


Explanation

AI’s biggest impacts may be societal rather than individual (education shifts, job displacement), raising questions about whether current human rights frameworks are sufficient


Development of the first machine learning benchmark dealing with international human rights law framework

Speaker

Caitlin Kraft-Buchman


Explanation

This would provide developers with tools to test whether their AI systems align with human rights criteria, filling a current gap in available assessment tools


How to ensure effective multi-stakeholder engagement between companies, civil society, academia, and governments for cross-learning on AI human rights

Speaker

Jhalak Mrignayani Kakkar


Explanation

Sustained collaboration is needed for knowledge sharing and developing effective human rights risk assessment frameworks


Development of standardized human rights impact assessment methodology for AI systems

Speaker

Byungil Oh (Jinbonet)


Explanation

While tools exist, there’s no standardized international methodology, and current tools often lack legal mandates for implementation


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.