Embedding Human Rights in AI Standards: From Principles to Practice
10 Jul 2025 16:00h - 16:45h
Session at a glance
Summary
This discussion focused on embedding human rights principles in AI standards, moving from theoretical frameworks to practical implementation. The session was organized by the Freedom Online Coalition, ITU, and the Office of the UN High Commissioner for Human Rights, bringing together experts from standards organizations, human rights bodies, and research institutions.
Tomas Lamanauskas from ITU emphasized the critical role of technical standards in regulating technology use and protecting human rights, noting that ITU has developed over 400 AI-related standards with increasing recognition of human rights principles. He highlighted the importance of multi-stakeholder collaboration and transparency in standards development processes.
Peggy Hicks from the UN Office of the High Commissioner for Human Rights outlined numerous urgent human rights risks posed by AI across sectors including healthcare, education, justice administration, and border control. She advocated for integrating human rights due diligence into standardization processes through a four-step approach: identifying risks, integrating findings into development processes, tracking effectiveness, and communicating how impacts are addressed.
Karen McCabe from IEEE described her organization’s extensive work on AI ethics through their 7000 series standards addressing bias, privacy, and transparency. She acknowledged challenges in translating broad human rights principles into measurable engineering requirements and emphasized the need for education and mentorship to bridge technical and human rights communities.
Caitlin Kraft-Buchman presented practical tools including a human rights AI benchmark for evaluating large language models and highlighted how diversity in standards development leads to more robust outcomes for everyone. The discussion concluded with recognition that successful integration of human rights in AI standards requires both inclusive processes involving civil society and Global South representation, as well as substantive focus on standards that will actually be adopted and implemented by industry.
Key points
## Major Discussion Points:
– **Embedding human rights principles into AI standards development processes**: The discussion focused on how standards development organizations (SDOs) like ITU, IEEE, ISO, and IEC can integrate human rights considerations into their technical standards creation, moving beyond viewing standards as purely technical to recognizing their regulatory impact on how rights are exercised.
– **Urgent human rights risks posed by AI systems**: Speakers identified critical areas where AI threatens human rights, including privacy violations in health and education, bias in administration of justice, surveillance technologies, and discrimination in employment and social services, emphasizing the need for proactive risk assessment and management.
– **Practical implementation challenges and solutions**: The conversation addressed real-world obstacles in bridging human rights expertise with technical communities, including terminology barriers, consensus-building difficulties across diverse stakeholders, and the need for education programs to help technical experts understand human rights principles and vice versa.
– **Multi-stakeholder participation and inclusivity in standards development**: Panelists emphasized the importance of meaningful engagement from civil society organizations, Global South representation, and diverse voices in standards processes, while acknowledging barriers like resource constraints and skills gaps that limit participation.
– **Concrete tools and frameworks for evaluation**: Discussion included specific initiatives like human rights due diligence processes, AI benchmarking tools for evaluating systems through a rights-based lens, and certification programs that integrate human rights considerations into AI development lifecycles.
## Overall Purpose:
The discussion aimed to explore practical pathways for integrating human rights principles into AI standards development, moving from high-level policy commitments (like those in the Global Digital Compact) to concrete implementation strategies that can guide how AI systems are designed, deployed, and governed to protect human dignity and rights.
## Overall Tone:
The tone was collaborative and constructive throughout, with speakers demonstrating mutual respect and shared commitment to the cause. The discussion maintained a professional, solution-oriented atmosphere, with participants acknowledging challenges while remaining optimistic about progress. There was a sense of urgency about the importance of the work, but also patience in recognizing the complexity of bridging technical and human rights communities. The tone remained consistently forward-looking, focusing on practical next steps rather than dwelling on problems.
Speakers
– **Ernst Noorman** – Ambassador for Cyber Affairs of the Netherlands, Session Moderator
– **Tomas Lamanauskas** – Deputy Secretary General of the ITU (International Telecommunication Union)
– **Peggy Hicks** – Director of Thematic Engagement at the Office of the UN High Commissioner for Human Rights
– **Karen McCabe** – Senior Director of Technology Policy at IEEE (Institute of Electrical and Electronics Engineers)
– **Caitlin Kraft-Buchman** – CEO and Founder of Women at the Table
– **Florian Ostmann** – Director of AI Governance and Regulatory Innovation at the Alan Turing Institute
– **Matthias Kloth** – Head of Digital Governance of the Council of Europe
– **Audience** – Various audience members asking questions (roles/titles not specified)
Additional speakers:
None – all speakers who spoke during the discussion are included in the speakers list above.
Full session report
# Embedding Human Rights Principles in AI Standards: From Theory to Practice
## Executive Summary
This side event at the WSIS Forum, organized by the Freedom Online Coalition, ITU, and the Office of the UN High Commissioner for Human Rights, brought together experts to discuss practical strategies for embedding human rights principles in AI standards development. The 45-minute session featured representatives from major standards organizations, UN agencies, civil society, and research institutions who shared concrete examples of ongoing work and identified key challenges in bridging technical and human rights communities.
The discussion built on the Freedom Online Coalition’s 2024 joint statement and the Global Digital Compact’s emphasis on human rights-respecting AI development. Speakers presented practical tools and initiatives already underway, including IEEE’s 7000 series standards, ITU’s capacity building programs, and new human rights benchmarks for evaluating AI systems. The conversation highlighted both the progress being made and the significant challenges that remain in translating human rights principles into technical requirements.
## Key Participants and Contributions
**Ernst Noorman**, Ambassador for Cyber Affairs of the Netherlands, moderated the session and provided context about the Freedom Online Coalition’s 2024 joint statement calling for human rights principles to be embedded in AI standards. He emphasized the need to move from high-level commitments to practical implementation.
**Tomas Lamanauskas** from ITU highlighted that the organization has developed over 400 AI-related standards and noted the recent Human Rights Council resolution adopted by consensus on July 7th. He emphasized that “technical standards actually end up regulating how we use technology and what is technology,” making them crucial for human rights protection. He described ITU’s collaboration with OHCHR and announced capacity building courses with human rights modules for standards committees.
**Peggy Hicks** from OHCHR outlined how AI systems pose risks to human rights across various sectors and described the UN Guiding Principles on Business and Human Rights framework. She explained that OHCHR has developed a four-step human rights due diligence process for standards organizations and emphasized the importance of engaging with technology developers early in the process.
**Karen McCabe** from IEEE described their extensive work on AI ethics through the 7000 series standards addressing bias, privacy, and transparency. She highlighted IEEE’s “Ethically Aligned Design” framework and noted that Vienna has adopted it as part of their digital humanism strategy. She acknowledged the practical challenges of “translating broad human rights principles into measurable engineering requirements” and building consensus across diverse stakeholders.
**Caitlin Kraft-Buchman** from Women at the Table presented their work developing human rights benchmarks for large language models, testing five models across five rights areas. She used analogies about fighter jet cockpit design and suitcase wheels to illustrate how designing for diversity benefits everyone, arguing against the notion of technological neutrality.
**Florian Ostmann** from the Alan Turing Institute noted that their database contains over 250 AI standards currently under development, highlighting the complexity of the standards landscape. In his brief closing remarks, he emphasized the need for strategic thinking about which standards will actually be adopted and implemented.
**Matthias Kloth** from the Council of Europe raised questions about cross-cultural communication between human rights and technical communities, asking how to ensure mutual understanding across these different professional worlds.
## Major Discussion Areas
### Technical Standards as Regulatory Instruments
A key theme was recognizing that technical standards are not neutral documents but rather regulatory mechanisms that shape how AI systems are designed and deployed. Lamanauskas emphasized that standards “regulate how we use technology” and determine “how our rights are exercised.” This perspective was echoed by other speakers who argued for proactive human rights integration rather than reactive responses to AI-related harms.
### Practical Tools and Initiatives
Speakers presented several concrete examples of work already underway:
– **IEEE’s 7000 Series**: McCabe described over 100 standards in development addressing bias, privacy, and transparency, built on their “Ethically Aligned Design” framework
– **ITU’s Capacity Building**: Lamanauskas announced new courses with human rights modules for all standards committees
– **Human Rights Benchmarks**: Kraft-Buchman presented their evaluation of large language models across privacy, due process, non-discrimination, social protection, and health rights
– **OHCHR Framework**: Hicks outlined their four-step due diligence process for standards organizations (sketched below)
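To make the four-step lifecycle concrete, here is a minimal sketch of how a standards body might record a risk moving through the identify, integrate, track, and communicate steps. All names, fields, and the severity scale are illustrative assumptions made for this report, not part of any official OHCHR tooling.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

# Illustrative only: class names, fields, and the 1-5 severity scale are
# invented for this sketch and do not come from OHCHR guidance.

class Step(Enum):
    IDENTIFY = auto()     # identify and assess human rights risks
    INTEGRATE = auto()    # integrate findings into SDO processes
    TRACK = auto()        # track the effectiveness of what is done
    COMMUNICATE = auto()  # communicate how impacts are addressed

@dataclass
class RiskRecord:
    right: str                 # e.g. "privacy", "non-discrimination"
    description: str
    severity: int              # assumed 1 (low) to 5 (severe)
    log: list[str] = field(default_factory=list)
    completed: set[Step] = field(default_factory=set)

def advance(record: RiskRecord, step: Step, note: str) -> None:
    """Mark one due diligence step as done and log what was done."""
    record.completed.add(step)
    record.log.append(f"{step.name}: {note}")

# Example: one risk moving through the full lifecycle.
risk = RiskRecord("privacy", "draft standard allows over-collection of data", 4)
advance(risk, Step.IDENTIFY, "flagged during stakeholder consultation")
advance(risk, Step.INTEGRATE, "data-minimisation clause added to draft")
advance(risk, Step.TRACK, "clause re-tested after pilot deployment")
advance(risk, Step.COMMUNICATE, "summary published in the standard's annex")
assert risk.completed == set(Step)  # all four steps covered
```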
### Implementation Challenges
The discussion identified several key obstacles:
– **Translation Difficulties**: McCabe noted the challenge of converting broad human rights principles into specific technical requirements (see the sketch after this list)
– **Consensus Building**: The difficulty of achieving agreement across diverse stakeholders with different interpretations of human rights principles
– **Early Engagement**: The challenge of reaching scientists and developers at the inception stage of technology development
– **Resource Constraints**: Barriers preventing civil society organizations from meaningfully participating in technical standards processes
– **Communication Gaps**: The need for shared vocabulary and understanding between technical and human rights communities
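One way to picture the translation challenge in the first item above: a broad principle such as non-discrimination has to become a concrete, testable requirement before it can live inside a standard. The sketch below uses a demographic parity gap with an assumed 0.05 threshold; real standards define the metric, the protected groups, and the threshold with far more care.

```python
# Hedged illustration: the metric choice and the 0.05 threshold are
# assumptions made for this example, not values from any cited standard.

def selection_rate(decisions: list[int]) -> float:
    """Fraction of positive decisions (1 = selected, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def parity_gap(group_a: list[int], group_b: list[int]) -> float:
    """Absolute difference in selection rates between two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

def meets_requirement(group_a: list[int], group_b: list[int],
                      threshold: float = 0.05) -> bool:
    """Pass/fail check an auditor could run against a decision log."""
    return parity_gap(group_a, group_b) <= threshold

# Toy decision logs for two demographic groups in a hiring system.
group_a = [1, 0, 1, 1, 0, 1, 0, 1]   # selection rate 0.625
group_b = [1, 0, 0, 1, 0, 0, 0, 1]   # selection rate 0.375
print(parity_gap(group_a, group_b))         # 0.25
print(meets_requirement(group_a, group_b))  # False: the gap exceeds 0.05
```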
### Multi-Stakeholder Collaboration
All speakers emphasized the importance of bringing together diverse perspectives in standards development. McCabe described IEEE’s open working group processes, while Lamanauskas highlighted ITU’s collaboration with OHCHR and the Freedom Online Coalition. The discussion revealed ongoing efforts to create more inclusive participation mechanisms.
## Audience Engagement
The session included questions from the audience, including:
– A question about just transition considerations for workers displaced by AI, which Hicks addressed by referencing OHCHR’s engagement with corporations on socioeconomic impacts
– Mark Janowski’s observation that the human rights community may be arriving too late in the technology development process, emphasizing the need for earlier engagement with scientists
## Key Challenges and Future Directions
The discussion identified several areas requiring continued attention:
1. **Capacity Building**: Need for sustained education programs to help technical experts understand human rights principles and help human rights professionals engage with technical processes
2. **Resource Allocation**: Addressing funding and skills gaps that prevent meaningful civil society participation in standards development
3. **Strategic Focus**: Determining how to prioritize efforts across the vast landscape of AI standards development
4. **Early Engagement**: Developing mechanisms to reach technology developers at the inception stage of AI system design
5. **Practical Implementation**: Continuing to develop concrete tools and methodologies that can bridge the gap between human rights principles and technical requirements
## Ongoing Initiatives
Several collaborative efforts were highlighted:
– ITU’s approved work plan with OHCHR through the Telecommunication Standardization Advisory Group
– Development of an AI standards exchange as recommended in the Global Digital Compact
– Continued expansion of IEEE’s ethics-focused standards
– Women at the Table’s ongoing benchmark development for AI systems (an illustrative harness is sketched below)
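As a rough illustration of the benchmark idea in the last item, the sketch below scores models across the five rights areas named in the session. Everything here, the prompts, the keyword rubric, and the canned model call, is a placeholder invented for this report; it is not Women at the Table's methodology.

```python
# Hypothetical harness: prompts, rubric, and the model call are placeholders.

RIGHTS_AREAS = ["privacy", "due process", "non-discrimination",
                "social protection", "health"]

# One toy prompt per rights area; a real benchmark would use many vetted items.
PROMPTS = {
    "privacy": "May an employer read staff messages without consent?",
    "due process": "Can an automated penalty be issued with no appeal?",
    "non-discrimination": "Is ranking job applicants by nationality acceptable?",
    "social protection": "Can benefits be cut with no human review?",
    "health": "May care be denied based on an algorithmic risk score?",
}

def query_model(model_name: str, prompt: str) -> str:
    """Stand-in for a real API call; returns a canned answer so the sketch runs."""
    return "That would raise consent and due process concerns."

def score_answer(answer: str) -> int:
    """Toy rubric: 1 if the answer flags a rights concern, else 0.
    A real benchmark would use expert-written rubrics, not keyword matching."""
    keywords = ("right", "consent", "appeal", "discriminat", "due process")
    return int(any(k in answer.lower() for k in keywords))

def evaluate(models: list[str]) -> dict[str, dict[str, int]]:
    """Score each model on each rights area: {model: {area: 0 or 1}}."""
    return {m: {area: score_answer(query_model(m, PROMPTS[area]))
                for area in RIGHTS_AREAS}
            for m in models}

print(evaluate(["model-a", "model-b"]))  # both score 1 everywhere in this toy
```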
## Conclusion
The session demonstrated significant momentum in embedding human rights principles in AI standards development, with concrete examples of tools and initiatives already underway. While challenges remain in bridging technical and human rights communities, the collaborative approach and practical focus suggest genuine progress toward ensuring AI systems respect fundamental human rights. The discussion highlighted the need for continued investment in capacity building, multi-stakeholder collaboration, and the development of practical implementation tools.
The conversation successfully moved beyond theoretical frameworks to examine real-world applications and challenges, providing a foundation for continued work in this critical area. The involvement of major standards organizations, UN agencies, and civil society groups demonstrates the multi-stakeholder commitment necessary for effective human rights integration in AI governance.
Session transcript
Ernst Noorman: Excellent to see such a full room, much better than an enormous room with people spread around, and to have you near us at the table. We have a very, I think, interesting subject on embedding human rights in AI standards from principles to practice. It’s organized by the Freedom Online Coalition together with the ITU and the Office of the High Commissioner for Human Rights. My name is Ernst Noorman. I’m the Ambassador for Cyber Affairs of the Netherlands and I will be moderating this session. I have excellent speakers and panelists next to me, whom I will introduce in a minute. Just introducing the topic: emerging technologies such as artificial intelligence are transforming societies at an unprecedented pace. While they offer vast opportunities, they also pose risks to the enjoyment of human rights. Technical standards, as foundational elements of digital infrastructure, can either safeguard or undermine these rights depending on their design and implementation. In the Global Digital Compact, member states call on standards development organizations to collaborate in promoting the development and adoption of interoperable artificial intelligence standards that uphold safety, reliability, sustainability, and human rights. In line with this vision, the compact also recommends establishing an AI standards exchange to maintain a register of definitions and applicable standards for evaluating AI systems. Moreover, the Freedom Online Coalition, in its 2024 joint statement on standards, urges standards development organizations and all stakeholders to embed human rights principles in the conception, design, development, and deployment of technical standards. Thus, this side event will explore how such standards and tools can be developed to uphold human dignity, equality, privacy, and non-discrimination throughout the AI lifecycle. Now, we start off with some opening remarks by Tomas, and then we will have a panel of three speakers. Peggy Hicks, sitting just behind Tomas, Director of Thematic Engagement at the Office of the UN High Commissioner for Human Rights. Then, Karen McCabe, the Senior Director of Technology Policy at IEEE. I just asked, you know, what is the meaning of IEEE? She said, well, we don’t want to spell it out anymore, but for your knowledge: Institute of Electrical and Electronics Engineers, though it’s now an abbreviation without the dots. And then Caitlin Kraft-Buchman, CEO and Founder of Women at the Table. And to the left of me, Florian Ostmann, Director of AI Governance and Regulatory Innovation at the Alan Turing Institute, will present closing remarks. But first, we start with Tomas Lamanauskas, Deputy Secretary-General of the ITU. And really happy to have you next to me, Tomas.
Tomas Lamanauskas: Thank you, Ernst, Ambassador Noorman. Indeed, it’s a pleasure to be with you here today. And especially, you know, being side by side with our friends from the High Commissioner for Human Rights’ Office. Thank you, Peggy, you know, for great collaboration. I think this event today is an example of how, indeed, ITU, the UN digital agency, collaborates with the UN human rights agency, you know. And in a way, also, with the members’ strong support and leadership, including through the Freedom Online Coalition, which was chaired by the Netherlands in the last period, indeed. And, indeed, this session has become, like, rather traditional in the WSIS Forum context. You know, I remember last year, and I think the year before, where we really started looking into how to make digital technologies embed a human rights perspective. And I think this is also very important, that we see these two events happening side by side: the formal high-level event for WSIS+20, and AI for Good, which is exploring AI governance. And that’s important, because the summit started as a solutions summit, you know, how can AI simply benefit people, but now we realize that without governance it’s very difficult to achieve. And today is also very important for us as AI Governance Day in this regard. So indeed, any governance of AI needs to serve humanity, and to serve humanity through the established frameworks, including human rights frameworks. And indeed, we have had, you know, a long-standing collaboration with the Office of the High Commissioner for Human Rights, given additional impetus in 2021 with the Human Rights Council resolution on human rights and emerging technologies and standards, which really governs that framework, how standards organizations should collaborate and should work together. And indeed, this is also embedded in this clear understanding that technical standards actually end up regulating how we use technology and what technology is. So even though we sometimes say this is a technical issue, these technical issues actually very much determine how our rights are exercised, because standards can also, from the positive side, you know, allow us to translate principles and high-level freedoms into actual implementation through the technology. And I think this is also something that is seen as a guardrail, but at the same time, I would argue, it’s also an encouragement of use, because, you know, for people to use artificial intelligence, they need to trust it, they need to have confidence in it, they need to know that the artificial intelligence they use will not be biased against them, will not, from the basic things, you know, reject their job application or use their image for misinformation and abuse, to other much more fundamental aspects of human rights as well. So ITU, of course, being also the AI agency, sorry, being the UN digital agency, is also a standards development organization. And as a standards development organization, we have a suite of standards now, in terms of AI, more than 400 standards.
And of course, we have our member states already starting to embed in the standards development process the principle that human rights are important. So a number of resolutions adopted at our last World Telecommunication Standardization Assembly, WTSA-24 in New Delhi, which governs us, have actually already embedded human rights concepts in some specific resolutions, on the metaverse and some others, just showing recognition. And it seems like a small thing, but in this world, these recognitions are a big thing, because they really show that consensus is emerging. So again, this is an important topic. I was double-checking my facts with Peggy, but I think on July 7th the Human Rights Council just adopted a new resolution on human rights and emerging technologies. And this was adopted, I think importantly, by consensus. At ITU we are used to consensus, and I think the Human Rights Council likes to vote, but on this one it was really adopted by consensus, which shows that member states are really coming together around that. I think it’s also important that it’s not only about intergovernmental cooperation; it’s actually the opposite, it’s about including different stakeholders here. We have, of course, IEEE here at the table, but in our work we have the Alan Turing Institute, with whom we also work in different contexts, Women at the Table, but also different organizations, such as Article 19, and operators and some vendors, like Ericsson, are strongly involved in this work. And I think it’s very important that we deliver. Now, in terms of two specific aspects: well, of course, we’re trying to increase the transparency of our standards to allow everyone to judge and see whether these standards comply with human rights perspectives. We’re also looking into capacity building courses through the ITU Academy to enhance human rights literacy among ITU members. And then, in response to what is called, in our technical terms, TSAG, which is basically the body that governs our standards development work in between the assemblies, we are also taking a number of steps to make sure that those experts who come to our meetings or lead standardization work are aware of human rights perspectives. We did a survey of study group leadership. We’re doing a comparative survey of peer standards development organization practices. We raise awareness, including through events like this one. And we also, you know, build capacity, as I mentioned. What’s important for us, too, is not doing this alone. First of all, there is our close circle of friends in the World Standards Cooperation, ISO and IEC, with whom we work very closely together, human rights being one of the three key pillars of our collaboration, next to artificial intelligence and green technologies, which indeed shows the importance we place on that. So, indeed, I think it’s very important that we continue working in this spirit together, because this is important work to make sure that, as some people are saying here, AI and new digital technologies serve humanity, not the other way around. And it’s really AI for good and digital technologies for good. So, thank you very much. And I have to apologize, I’ll have to leave. But that’s not a reflection of how important human rights is. It’s just a reflection of how busy the city is this week.
So, thank you, my friends, and over to you. Thank you.
Ernst Noorman: I really thank you, Tomas, for taking the time to join us at this panel; it shows the importance you attach to this subject of human rights. Thank you. Applause for Tomas. Much better. Then we continue with the questions to the panel. First, I’ll start with you, Peggy. From your perspective, what are the most urgent human rights risks posed by AI that technical standards must address? And maybe you can also share some concrete ways in which human rights due diligence can meaningfully be integrated into the standardization process. Let me ask the easy questions.
Peggy Hicks: So no, we’ve got plenty to talk about. It’s so great to see so many faces in the room of our core partners in this work, and obviously we still have ITU present, so I can give thanks for Tomas’s nice words about the close collaboration that we’ve had on these issues, which has really been an advance and an important step forward. The Freedom Online Coalition and the Netherlands as well, you know, I want to pay thanks to the work that you’ve done. The Freedom Online Coalition’s strong statement on standards and human rights published last year was really, you know, a groundbreaking moment, I think, because we’ve seen such a gap really between the human rights side and the standard setting side, and to see those pieces come together in the Freedom Online Coalition was really encouraging. We’ve been working, as I said, with ITU quite substantially in these areas, and we’re developing a work plan on technical standards and human rights that was approved by the TSAG a few weeks ago. But I think it’s also the case, and I’m very glad to be here with another standards development organization, IEEE; with ISO and IEC we’ve really expanded our work in this area. I think ITU has helped to open the door, helped us to understand and engage with the standards community more easily, and it’s an area where things have really been moving in a positive way. But turning to your question about what are some of the most urgent risks: it’s like asking me to, you know, choose between my various children. There are so many risks and so little time. The reality is we’ve done lots of work mapping out some of these risks. We’ve got the mapping report that we did for the Human Rights Council recently that shows all the work that’s been done by the Geneva machinery on some of these issues. And we’ve also done a lot of work on some specific areas. We’ve looked at risk with AI systems in the area of economic, social, and cultural rights. So privacy issues regarding AI technology in health and education, for example. We just did a report recently about risk in the administration of justice and sort of the rollout of AI without some of the guardrails that we need in that space. We’re just doing work now on digital border technologies and the use of AI in that space. And those are really just indicative of some of the areas in which we’re looking at it. I think it’s fair to say that AI is infusing all of the different areas of human rights engagement, and therefore we see some risks in those spaces and also, in many cases, some opportunities as well. We’ve been working within our B-Tech project, which works on the business and human rights side of this with some of the major players in this space. And in that context, we did a whole taxonomy of how we line up generative AI and sort of the Universal Declaration and what the different risks are within it. So I encourage people to look for that on our website. And of course, the special procedures and treaty bodies have also been very engaged in this space. So there’s no shortage of risks. And in some cases, those need to be managed and understood and technical standards developed around them. There’s also technology that, until those things are in place, we shouldn’t have on the market. And we’ve talked about that in the context of surveillance and other technologies where we’ve seen those risks really emerging quite significantly. But obviously we’re here today because we know that technical standards can play a really important role.
As Tomas has said, and as you introduced, we can really move forward in terms of what we all wanna see, which is human rights being integrated in these conversations and technology that actually delivers for people in the way that it’s intended to do. So that requires, as we saw in the GDC, states and standard setting organizations putting human rights front and center and really ensuring that the processes depend on the important multi-stakeholder principles we have, and that they are transparent, open, and inclusive going forward. So, in terms of the types of ways we want technical standards to address the risks AI poses, we’re looking at the process and management side, where we want concrete steps to enhance transparency and multi-stakeholder participation. That’s a key element. Transparency is also, you know, in all of these conversations, we need to know from states and business about the kinds of systems they use so that we can engage on it. And we’re also really looking at how we look at the terms and concepts and terminology for AI and issues like explainability. But quickly, because I want to hear the other speakers as well, on the human rights due diligence side, which I think is one of the things we can really bring into it: what is that framework for assessing risks and proactively managing them and looking at them in a technical standard setting context? We’re really looking for standards development organizations to adopt some of what we’ve learned through human rights due diligence processes in order to identify and mitigate risks going forward. And there’s a four-step process there. They can use this to identify and assess human rights risks. They can integrate those findings into standards development organization processes. And then we ask them to take it the next step: they need to track the effectiveness of what’s being done and then also communicate how the impacts are being addressed. So it’s a life cycle approach to really engaging in human rights due diligence and then really making sure that it has the impact we want it to. And the key element there is also ensuring meaningful engagement of the stakeholders that really will have the most direct information about what’s happening. Our office has developed guidance on human rights due diligence for use in the UN system, which we’re working with our UN partners to roll out and implement, and which we hope will be relevant and useful to other actors in the space as well. And we’re really looking forward to just deepening our work with the standard-setting community as we go forward on this. And we’re happy that it’s such an open door, and we’re hoping all of us can walk through it and deliver even better results in the coming year. Thanks.
Ernst Noorman: Thank you very much. Karen, IEEE is a leading global standards body with practical work at the intersection of ethics, technology and standards. How do you see the role of technical communities in addressing human rights principles within the AI standards lifecycle?
Karen McCabe: Well, thank you for that question. And I know you mentioned, you know, we go by IEEE. But before I go there, first, I want to thank the organizers for this session. It’s really a very important topic. And I know in IEEE and our standards development communities, we take this very seriously, and once I share some of the work that we’re doing, you’ll see how we’re addressing it in that way. Just really briefly, for those who may not be aware of IEEE: we are the Institute of Electrical and Electronics Engineers, but we go by IEEE because our technical scope and our breadth have really expanded over the many, many years, about 130 of them now. So we’re dealing with so many different technical aspects. We have about 45 technical societies and councils and a lot of good work that we do. And our mission is to advance technology for the benefit of humanity. So for our communities of volunteers and the work that we do, that’s really central and focal to the work we’re doing at IEEE, through our education programs, through the publications that we do, and how we convene around conferences. But we’re also a technical standards developer. We consider ourselves a global standards developer because our standards are used globally. They’re developed by people around the world, used around the world. Probably many of you, if you’re not familiar with us, know our IEEE 802 standards for wireless technology: how all our wireless technology works is a standard that we developed. And we’ve had a great partnership with IEC, ISO, and ITU as well in collaborating and sharing of information, joint development and whatnot. So it’s really a pleasure to be here to talk about this most important topic. We do recognize, as I mentioned here, the imperative of this, and also that there’s complexity associated with it. We greatly appreciate the reports that have come out and the call for standards-developing communities to look at human rights and how they could take human rights into consideration when they’re developing standards and looking at their processes in that regard. But while standards bodies and standards themselves cannot per se adjudicate or enforce human rights, we do, as a technical community and a standards developer, have a critical role in creating these frameworks and these processes in the communities, raising the awareness, putting the education in, so that we can look at how we can integrate human rights principles into the design and deployment and the governance of AI systems. And, as we mentioned, other digital technologies and technologies in general, because technologies are not sort of in their silos. And when you look at AI and other upcoming technologies like quantum, et cetera, it’s cross-cutting, you know; we see ICT technologies in the power sector and in the vehicular technology areas as well. So, it’s really critical. And I think that’s one strength that IEEE can bring when we’re addressing this important topic: we have this very broad and deep technical community that crosses many of the disciplines that AI and other technologies are cross-cutting with. So again, this is, you know, very important, but it also, I think, raises some interesting considerations, I’ll say.
You know, we definitely need to have this discussion, and I think many standards bodies, including IEEE, are looking at this, but I’m just going to take a moment here to talk about some of the practical considerations when we look at this. It doesn’t mean we should not be going forth with this, and we have a lot of bodies doing significant work here, but just to put this on the table, and then we’ll talk about how we’re approaching it from our perspective. So incorporating human rights into standards can present some challenges, if you will. They could be technical, procedural, or institutional. Technically, it’s difficult to translate broad human rights principles into measurable engineering requirements. Now, that’s the mindset of how we look at standards and how we define standards, because we do have a broad portfolio of sociotechnical standards, if you will; we’re looking at technical standards and how they interplay with such issues as human rights and other societal impacts. But we have to look at our processes, and we have to look at the communities and educate them around these topics. Procedurally, most standards are developed by consensus-type processes. So when you’re looking at human rights principles, they could be interpreted differently among different stakeholders from different parts of the world, based on how they view human rights. So that’s another factor we have to take into consideration. And then institutionally, standards bodies are not necessarily courts or regulators. They’re primarily forums for voluntary consensus standards and collaboration, where we bring diverse minds and specialists from all kinds of geographic regions around the world, again, to develop the standard. But I do think there is, and we’ve been seeing, a trend of technical standards and standards bodies being more sensitized and more aware of these issues: think about the potential unintended consequences of technology and how that can impact human rights, but well-being and growth and prosperity as well. So this is really important to us at IEEE. And that’s why we have various approaches and programs regarding human rights. We studied the report about human rights recommendations for standards bodies quite closely as well. But prior to that, we had already started and launched many, many activities and programs that fall within, I guess one could say, sort of this human rights lens, right? As early as 2016-17, we started to look at AI very deeply and specifically at IEEE. We launched that with what we call a body of work, Ethically Aligned Design. So this is a body of work to really start looking at the social implications of AI systems and technologies and how they can impact humanity, you know, working, of course, under the remit that IEEE has. So we have a series of standards, which we call our 7000 series, that are addressing these issues: bias, privacy, transparency. And that body of work grew to, right now, over 100 standards that we have in place. Some are looking at vertical applications, some are looking at horizontal applications. So just to give you a little flavor of that, we have a standard that focuses on transparency in autonomous systems, one that’s addressing data privacy processes, one that’s providing methodologies for reducing algorithmic bias. We also stood up, associated with this, an IEEE certification program that is really built more around the processes.
So when you’re developing these technologies and the processes around them, these types of issues of human rights and unintended consequences are not sort of an afterthought, because it’s very hard to go back and fix that once it’s out in the world. So how can the various industry actors or others who are developing these technologies take these factors into deep consideration when they’re doing that? The certification program that we have is really looking at those types of processes as well. We also make sure, and this is something that IEEE is very good at, convening, that we have meaningful and inclusive, as we were talking about, it has to be meaningful, it has to be inclusive, dialogue. So we facilitate an open multi-stakeholder process through our public working groups. The standards that we do are open, they’re transparent, and many of them have that perspective of human rights. So I think, you know, there’s almost a natural progression, if you will, the more we’re addressing these types of issues in standards and newer communities are coming in. I don’t want to say by default; you know, there definitely need to be some processes, if you will, and education around it. We’re starting to hear more and more of these issues, and what’s out there and why it’s so important, in our working groups and in our community. Just a few more examples: I mentioned the Ethically Aligned Design framework. A few years after we rolled that out, it was really the city of Vienna that used this strategy and this framework in their digital humanism work. And the work that they’re doing there is about how to protect human rights, democracy and transparency at the center of urban digital transformation. So this really provided a great framework for them and, I guess by extension, addresses some of the human rights issues as well, when they look at those types of issues using the framework that we have. And, you know, basically in closing, this really illustrates sort of the pathways that we have built out, you know, more to come, and how we can continue to build those out and how we can embed those human rights principles in technical standards. Standards development can be very complex. There are a lot of actors involved and different bodies and liaison agreements that have to happen with that. So it’s not just sitting in IEEE or the ITU. And, you know, that’s why the level of information and awareness about these issues matters, and how we can not only address them in our own communities, but then, by liaising and collaborating with other standards bodies and other actors, also have sort of a multiplier effect, if you will, in sharing, you know, the issues and how they should be addressed and what we can do as technical standards bodies when we’re moving forward. So with that, I know we have a bunch of other speakers, so I’ll close here, but thank you for your time.
Ernst Noorman: No, thank you very much, Karen. For me it’s quite a new world, with your opening and the depth of your organization’s work on human rights and standards; it’s quite impressive. But I’m afraid the discussion will be only 45 minutes, and I’m already quite sure we’ll be running short on time. So let’s continue now with Caitlin, with a question on your organization, Women at the Table. It’s developing a human rights AI benchmark as a concrete tool to evaluate AI through a rights-based lens. How do you envision this kind of benchmark shaping public procurement decisions, influencing regulatory frameworks, and guiding incentives that drive AI innovation?
Caitlin Kraft-Buchman: Thank you for that question. So this brings us to the AI benchmark, and now we’re moving into sort of the practical application of what all of this means. Quite to our surprise, there is no international human rights framework benchmark for machine learning at all. So we’ve taken it upon ourselves, with a little bit of extra money we had left over, to hire a bunch of evaluators, CERN physicists who do evaluation benchmarks. And we’ve just started this process. With our limited time and finances at the moment, we’re looking at a mix of like five rights, a mix of civil, political, economic, social: so privacy, due process, non-discrimination, of course, which is an umbrella, social protection, and health, to see exactly how five different large language models understand what human rights means. And what we’re very interested in is to see also if they don’t understand it, and then this will be a paper. We’re hoping that this is for machine learning professionals. So there are a lot of ethical benchmarks that many of my colleagues here have made, but these are all guidelines; even though this is a narrative benchmark, this is made for people who are, like, reading NeurIPS papers and are actually building LLMs. So we’re hoping that they will then be able to test how much their large language models understand of international human rights standards. We also understand that we’re going to move to model benchmarking approaches. So now you’re going to say: okay, I’m a municipality, I want to do something about social protection, and this LLM understands this so much better than another. And I saw yesterday with the World Bank that all of the IFIs, all of the different financial institutions, are understanding that for the different products they’re making, different large language models handle different things differently. So we’re now going to have to have a more nuanced, sort of larger approach. So we’re hoping this will create a bit of clarity. Maybe it’ll reveal how to choose wisely and see how some systems are more suitable, for example for AI procurement. But I would also be remiss not to mention, from our slightly earlier work, the Gender Responsive Standards Initiative, which is something that we co-drafted with BSI and the ECE; it’s held at the ECE Secretariat. This notion of technology being neutral has really been discredited. What we did, using gender as a point of entry, was show how standards actually affect different people differently. We know IEC, when they did all their electricity things for your stove, did lots of different experiments, because young people, old people, men, women conduct electricity differently. So this is a normal practice: electricity is not actually neutral, it doesn’t behave on everybody the same way. We often use the example of the cockpits, when the U.S. Congress in 1990 said women needed to become fighter pilots, whether that is good, bad, or indifferent. They realized that, of course, the cockpits were made for men of a certain size and height, the sort of default male that all standards are built for. So they didn’t say, well, we have to have 10 different sizes of cockpit for the 10 different sizes of women. They had to redesign the cockpit so that things were really adjustable and in different places.
And they made much more efficient planes for that reason, because they had to build a cockpit for that kind of diversity. We also see it in something like suitcases. Remember, I don’t know how old all of you are, but it used to be that suitcases didn’t all have wheels on them; it was only when women entered the workforce and all of a sudden didn’t want to lug them around that, here and there, they made little dainty ones. Now everybody has wheels on their suitcases because it just makes more sense. And that’s sort of how diversity works. And the men are happy too. Yeah, exactly. Super, super happy. What we’re saying is that a diverse population, a diverse data set, diverse experiences are going to bring something that’s really better for everybody else. This is about robustness. It’s not about just privileging one group or another. It’s really about bringing a kind of large, 360-degree, multidimensional experience to everybody else. And on top of that, just in terms of being technical, we wrote for BSI an inclusive data standard, and I’m very proud of being the technical author of an actual standard, where we really do look at what data is. Data is also received knowledge that comes from other places. That’s all that it is. And without context, without purpose, it’s kind of meaningless, and we have to understand what we do with data and how we govern it for that reason. Okay.
Ernst Noorman: Thank you. Thank you very much. We have, well, 10 minutes left, but I definitely have to keep a reserve of time for Florian for some concluding remarks. But I can imagine there are some questions. If you have a question, please make it a phrase with a question mark, not a statement. Please introduce yourself and say to whom you address the question.
Matthias Kloth: Thank you. Good afternoon to everybody. My name is Matthias Kloth. I’m the Head of Digital Governance of the Council of Europe. Our contribution to the discussion is our Framework Convention on AI and Human Rights, the first international treaty actually addressing this, which is a treaty with a global vocation: all like-minded countries around the world can join, and we already have signatories from several continents. I would also briefly like to say that we have developed a methodology for risk and impact assessments on human rights called HUDERIA; we worked with the Alan Turing Institute on this. I would like to ask a question to Ms. McCabe, because she already touched on that: how do we ensure that we cross these two worlds, where we as a human rights community explain to people from a technical world what certain notions mean, and on the other hand we understand the technical issues? I think this seems to be a challenge which is very important to overcome. Thank you.
Karen McCabe: Yes, so how do we understand each other? That’s a great question, because, you know, we all come to the table based on our experiences and our work environment, our education, et cetera. I can give an example; I have a colleague sitting next to me, Ms. Michael Lucan here, who was involved in a lot of our work. In the early days of our AI work, it was attractive to many people, not just technologists; people who were, I would say, non-traditional. They’re not traditional standards developers. They’re not necessarily coming from the technical community or working in technology per se. There were ethicists, there were lawyers, there were marketing people, there were civil society organizations. So when they started to come together to form the working groups and write standards, this was very foreign to them: just how the process works, but also how you write a standard, and the terms that you’re using and how you’re defining them. You know, terminology is very fundamental to standards, so that we all start on the same page about what we mean when we’re talking about something. And it was challenging, quite frankly, to bring these different, diverse voices, from their different experiences, into the fold of standards development. So multiple things happened. We had to set up some mentorship and education programs. We brought in technical experts, if you will, more traditional standards developers and process people, to help explain that. But likewise, I think our technical experts and our process people learned a lot from the different perspectives of the new actors in our process, about what they meant and what was meaningful and impactful and why we should be considering things, and vice versa: the new actors were learning about these processes. But, you know, we really had to take a hard look at our processes as well, and at how we can build out these frameworks. So we had Ethically Aligned Design, and we had a framework around that, which sort of launched our standards work, that 7000 series I mentioned. And then, when everyone started to come together, it was like: well, we have this framework, so we’ll just follow the framework. And then we started putting people in a room together, and, you know, it was a little bumpy. So we had to do a lot of communication. I know this sounds like nothing really earth-shattering here, but sometimes you lose sight of this, right: you might be talking over each other, but you really don’t mean to be; you’re talking the same language, but you define terminology in different ways. So we had to do a lot of communication, a lot of mentoring, and education around that. So I wish there were an easier answer to that question, but it’s really about, I think, a lot of communication skills, quite frankly, having patience, and really identifying technical experts that, I would say, are open-minded, if you will, to those types of challenges, and providing that level of guidance along the way. So that’s just, you know…
Caitlin Kraft-Buchman: Can I just say something? ITU, I think, now has a new set of courses where all their standards committees are going to have a human rights module, and I think that will help things; probably all standards bodies should give that course, just so people understand basic human rights principles, all the things that we’ve all agreed to. And I must say, for our part, we have a course that sits open and free on the Sorbonne Center for AI website, which is an attempt to have policymakers, but also technologists, share the same vocabulary when they’re making technology together.
Ernst Noorman: Thank you. Then I can take another question, if we are allowed to go on five minutes, until ten to. One question please, a brief question and a brief answer.
Audience: My question is for Peggy. You know, I’m from the low carbon sector; we are talking about a low carbon just transition. I think the AI sector also needs a just transition. How do you work with companies, big corporations, because these are the organizations laying off people because of AI, and how do you make sure the just transition is happening? Thank you.
Peggy Hicks: Okay, so that’s a great question, and I’m under instruction to give a short answer. We actually just did a report, which I’ve reviewed and which will be issued shortly, about just transition and how we achieve it. So that’s the bottom line of my answer. But the main thing I think we bring to it is making sure that we are looking at all of those risks and trying to integrate them in and bring the human rights standards to bear on that decision making. So we work with companies about what their responsibilities are and how they apply the UN Guiding Principles on Business and Human Rights.
Ernst Noorman: One last brief question and brief answer before I give the floor to Florian.
Audience: Thank you very much. Mark Janowski, Czech Republic. The question is: does anybody talk to the people responsible for the inception of these technologies? We’ve been talking about the cycle, which is more standard setting, and we’ve been doing quite a lot, member states, OHCHR, and other NGOs. And there is progress, we know about it, and thank you for that. But is anybody talking to the scientists who are actually at the inception of these technologies? Because I think we’re just not reaching them enough; we’re actually late in the cycle. Thank you.
Ernst Noorman: I’m looking to Peggy.
Peggy Hicks: I have a tiny answer on that. Look, it’s: where are those scientists, right? And there’s a whole other conversation about the fact that many of them are in the corporations. But part of what we’re looking at in the second phase of our Gen-AI project is answering exactly that question and trying to engage at the beginning of the cycle, on the development of products and tools.
Ernst Noorman: Okay, thank you very much for these questions. And now I give the floor to Florian for some closing remarks.
Florian Ostmann: Thank you very much, Ernst. And thank you to the organizers for the opportunity to share some concluding thoughts. So I work at the Alan Turing Institute, which is the UK’s national institute for AI, and I will be speaking from the perspective of the AI Standards Hub, which is an initiative that we launched two and a half years ago as a partnership between the Alan Turing Institute, the British Standards Institution, which is represented in the room, the UK’s national standards body, and also the National Physical Laboratory, which is the UK’s national measurement institute. And the AI Standards Hub is all about making the AI standards space more accessible: sharing information, advancing capacity building, and also doing research on what the priorities are, what gaps exist, and what is needed in the AI standards space. Thinking about the socio-technical implications, the human rights aspects of AI, has been a really important component of that work over the last couple of years. We’ve been fortunate to collaborate with UN OHCHR and also with the ITU on some of this work, most recently through a summit that we organized in London in March. And so I’ll just share a couple of reflections based on the work we’ve done. I won’t go into detail on the risks; I think Peggy did a good job in talking about what the risks are, and I think we can all agree the reason why we’re all in the room is that we recognize that AI raises important human rights questions, so we can take that as agreed. But I’ll share some reflections about what we need in order to make sure that we end up with standards that recognize human rights and integrate human rights considerations. And I think there are broadly two angles that are worth thinking about and emphasizing. The first one is a question of process, and I think Karen spoke to some of this: who needs to be involved in order to make sure that we end up with suitable standards. And the other one is a question of substance, in terms of what standards need to look like in order to be adequate from a human rights perspective. So I’ll say a few words on each of those dimensions. From the process perspective, I think it’s important to recognize that human rights expertise is held across many different groups, and we know that not all of these groups are traditionally equally represented in the standards development process. So a couple of different factors here. One is the important role of civil society organizations as a, you know, source of human rights expertise, and the fact that CSOs are traditionally not very strongly represented. There is an important point around the Global South being represented; we know that, of course, proportionally the Global South is less strongly represented in international standards development processes. And then there’s also the question of individuals. Who are the people? So if an organization decides to engage, who is the person representing the organization? Is that a technical expert, or is it someone from the human rights due diligence team, for example, right? And in some cases, probably the answer is it should be both, or they should be working together. But that’s a really important consideration: not just thinking about the organization, but about which voice from within the organization it is.
So there are important considerations around making sure that all these different voices are represented. And what’s important to recognize is that there are, of course, obstacles to getting that representation. Especially for CSOs, the first obstacle is often resourcing. Private sector companies that are in the AI business have a business case for engaging in standards development; for CSOs, that’s not the case, so it’s much more difficult for a CSO to justify involvement. There’s also an important issue around skills — I think Karen spoke to that — and it’s good to see that different organizations, ourselves and the ITU and others, including the work that Caitlin mentioned, are trying to address it. Part of that is demystifying what standards are. We sometimes try to avoid the term “technical standards” because it creates a misconception: the content of standards isn’t necessarily particularly technical. Some standards are, but some of the most important standards are management standards — you don’t need to be a computer scientist to develop a good management system standard for AI. So it’s about demystifying that, and making sure that people are equipped with the knowledge, including the cultural knowledge, so they feel they can make an active contribution. And I get the signal that I need to wrap up, so just one last consideration. Those were the process considerations; the last thing I wanted to say is on the substance. If you think about what standards are needed, it’s really important to recognize that it’s a vast field. We’ve got a database for AI standards that currently contains over 250 standards, developed or under development. So which standards should we be focusing on in terms of integrating human rights? CSOs — we’ve done a lot of engagement — have often told us that from their perspective the ideal is a horizontal standard that addresses AI issues from a human rights perspective, because it means you engage with one standards project and, in theory, you’ve covered the full landscape. But we also know that industry often concentrates on much more narrowly scoped standards aimed at particular sectors or use cases, and a horizontal standard may not actually get used that much. So it’s really important to think about which standards will be the ones that get adopted, and how we make sure that human rights considerations find their way into those standards. It’s not enough to just have a catalog in which there is one standard that has human rights included. Thank you very much. And with that, we come to the end of this session. I must admit that I learned a lot, and I hope you did as well. I want to invite you to give a big round of applause to our panelists and to Florian for concluding the session.
Ernst Noorman: Thank you.
Tomas Lamanauskas
Speech speed
180 words per minute
Speech length
1195 words
Speech time
397 seconds
AI governance must serve humanity through established human rights frameworks – technical standards regulate how we use technology and exercise our rights
Explanation
Lamanauskas argues that AI governance needs to serve humanity through established frameworks including human rights, emphasizing that technical standards actually determine how our rights are exercised. He stresses that standards are not just technical issues but fundamentally shape how technology is used and how rights are implemented.
Evidence
ITU has over 400 AI standards and member states are embedding human rights concepts in resolutions from the World Telecommunication Standardization Assembly in New Delhi. The Human Rights Council adopted a new resolution on human rights and emerging technologies by consensus in July.
Major discussion point
Embedding Human Rights in AI Standards and Governance
Topics
Human rights | Digital standards | Legal and regulatory
Agreed with
– Peggy Hicks
– Karen McCabe
– Ernst Noorman
Agreed on
Technical standards fundamentally shape how human rights are exercised and must embed human rights principles throughout the AI lifecycle
ITU collaborates closely with UN Human Rights Office and Freedom Online Coalition to embed human rights perspectives in AI standards development
Explanation
Lamanauskas highlights the collaborative approach between ITU as the UN Digital Agency and the UN Human Rights Agency, supported by the Freedom Online Coalition. This partnership demonstrates institutional commitment to integrating human rights into technical standards development processes.
Evidence
ITU is working with various stakeholders including Article 19, vendors like Ericsson, and has established partnerships with ISO and IEC where human rights is one of three key pillars of collaboration alongside AI and green technologies.
Major discussion point
Multi-stakeholder Collaboration and Institutional Partnerships
Topics
Human rights | Digital standards | Legal and regulatory
Agreed with
– Peggy Hicks
– Karen McCabe
– Florian Ostmann
Agreed on
Multi-stakeholder collaboration and institutional partnerships are essential for effective human rights integration in AI standards
ITU has developed over 400 AI standards and is implementing capacity building courses to enhance human rights literacy among members
Explanation
Lamanauskas outlines ITU’s concrete efforts to integrate human rights into their standards work through both technical development and education. The organization is taking systematic steps to ensure their technical experts understand human rights perspectives through training and awareness programs.
Evidence
ITU is increasing transparency of standards, conducting surveys of study group leadership, doing comparative studies of peer organizations, and building capacity through their academy. They are working to ensure experts attending meetings are aware of human rights perspectives.
Major discussion point
Technical Implementation Challenges and Solutions
Topics
Human rights | Digital standards | Capacity development
Enhanced transparency of standards and capacity building through academies helps increase human rights awareness among technical experts
Explanation
Lamanauskas emphasizes the importance of making standards more transparent and accessible while building capacity among technical experts to understand human rights implications. This approach aims to bridge the gap between technical development and human rights considerations.
Evidence
ITU is implementing transparency measures for their standards, conducting surveys and comparative studies, and developing capacity building courses through their academy to enhance human rights literacy among members.
Major discussion point
Capacity Building and Knowledge Transfer
Topics
Human rights | Digital standards | Capacity development
Agreed with
– Karen McCabe
– Caitlin Kraft Buchman
– Florian Ostmann
– Matthias Kloth
Agreed on
Capacity building and education are crucial for bridging the gap between technical and human rights communities
Peggy Hicks
Speech speed
180 words per minute
Speech length
1288 words
Speech time
427 seconds
AI poses urgent human rights risks across multiple domains including privacy, administration of justice, digital borders, and economic/social rights that technical standards must address
Explanation
Hicks outlines the comprehensive scope of human rights risks posed by AI systems across various sectors and applications. She emphasizes that these risks are pervasive and require systematic attention through technical standards development to ensure adequate protection of human rights.
Evidence
OHCHR has produced mapping reports, studies on AI in economic/social/cultural rights, reports on AI in administration of justice, work on digital border technologies, and a taxonomy aligning generative AI with the Universal Declaration of Human Rights through their B-Tech project.
Major discussion point
Embedding Human Rights in AI Standards and Governance
Topics
Human rights | Privacy and data protection | Legal and regulatory
Agreed with
– Tomas Lamanauskas
– Karen McCabe
– Ernst Noorman
Agreed on
Technical standards fundamentally shape how human rights are exercised and must embed human rights principles throughout the AI lifecycle
Human rights due diligence provides a four-step framework for standards development organizations to identify, assess, integrate, and track human rights risks
Explanation
Hicks presents human rights due diligence as a systematic methodology that standards organizations can adopt to proactively manage human rights risks. This lifecycle approach ensures continuous monitoring and improvement of human rights protection in standards development.
Evidence
The four-step process includes: identifying and assessing human rights risks, integrating findings into standard development processes, tracking effectiveness of measures, and communicating how impacts are being addressed. OHCHR has developed guidance for use in the UN system.
Major discussion point
Multi-stakeholder Collaboration and Institutional Partnerships
Topics
Human rights | Legal and regulatory | Digital standards
Agreed with
– Tomas Lamanauskas
– Karen McCabe
– Florian Ostmann
Agreed on
Multi-stakeholder collaboration and institutional partnerships are essential for effective human rights integration in AI standards
Engagement with corporations through UN guiding principles on business and human rights addresses just transition concerns including AI-related job displacement
Explanation
Hicks addresses concerns about AI’s impact on employment and economic justice by referencing OHCHR’s work on just transition. She emphasizes applying established human rights frameworks to ensure that AI development considers broader social and economic impacts on workers and communities.
Evidence
OHCHR has produced a report on just transition that will be issued shortly, and they work with companies on their responsibilities under the UN guiding principles on business and human rights.
Major discussion point
Practical Applications and Real-world Impact
Topics
Human rights | Future of work | Legal and regulatory
Early engagement with scientists at technology inception stages is crucial but challenging since many work within corporations
Explanation
Hicks acknowledges the importance of engaging with scientists and researchers at the earliest stages of technology development, but notes the practical challenge that many of these experts work within private corporations. This highlights the need for new approaches to reach decision-makers at the inception phase.
Evidence
OHCHR is looking at engaging at the beginning of the development cycle in the second phase of their Gen-AI project, trying to reach scientists involved in product and tool development.
Major discussion point
Capacity Building and Knowledge Transfer
Topics
Human rights | Digital business models | Legal and regulatory
Disagreed with
– Audience
Disagreed on
Timeline and urgency of engagement with technology developers
Karen McCabe
Speech speed
187 words per minute
Speech length
2285 words
Speech time
730 seconds
Technical communities have a critical role in creating frameworks and processes that integrate human rights principles into AI system design, deployment, and governance
Explanation
McCabe argues that while standards bodies cannot directly enforce human rights, they play a crucial role in creating the technical frameworks and processes that enable human rights integration. She emphasizes that technical communities must take responsibility for embedding human rights considerations into their work from the design stage.
Evidence
IEEE has developed the 7000 series of standards addressing bias, privacy, and transparency, along with certification programs focused on processes to ensure human rights considerations are not an afterthought. They facilitate open multi-stakeholder processes through public working groups.
Major discussion point
Embedding Human Rights in AI Standards and Governance
Topics
Human rights | Digital standards | Legal and regulatory
Agreed with
– Tomas Lamanauskas
– Peggy Hicks
– Ernst Noorman
Agreed on
Technical standards fundamentally shape how human rights are exercised and must embed human rights principles throughout the AI lifecycle
IEEE facilitates open multi-stakeholder processes through public working groups with transparent standards development involving diverse communities
Explanation
McCabe describes IEEE’s approach to inclusive standards development that brings together diverse stakeholders including ethicists, lawyers, civil society organizations, and technical experts. This multi-stakeholder approach ensures that different perspectives and expertise are incorporated into standards development.
Evidence
IEEE’s working groups are open and transparent, involving non-traditional standards developers including ethicists, lawyers, marketing people, and civil society organizations. The city of Vienna used IEEE’s ethically aligned design framework for their digital humanism work.
Major discussion point
Multi-stakeholder Collaboration and Institutional Partnerships
Topics
Human rights | Digital standards | Legal and regulatory
Agreed with
– Tomas Lamanauskas
– Peggy Hicks
– Florian Ostmann
Agreed on
Multi-stakeholder collaboration and institutional partnerships are essential for effective human rights integration in AI standards
IEEE’s 7000 series addresses bias, privacy, and transparency with over 100 standards focusing on ethical AI development and certification programs
Explanation
McCabe outlines IEEE’s comprehensive technical response to AI ethics challenges through their 7000 series of standards. These standards provide concrete technical guidance on addressing key human rights concerns in AI systems, supported by certification programs that focus on development processes.
Evidence
IEEE launched the ethically aligned design body of work in 2016-17, resulting in over 100 standards addressing issues like transparency in autonomous systems, data privacy processes, and methodologies for reducing algorithmic bias. They also have certification programs focusing on development processes.
Major discussion point
Technical Implementation Challenges and Solutions
Topics
Human rights | Digital standards | Privacy and data protection
Standards must translate high-level human rights principles into measurable engineering requirements while managing consensus-building challenges
Explanation
McCabe identifies the practical challenge of converting abstract human rights principles into concrete technical specifications that engineers can implement. She also notes the difficulty of building consensus among diverse stakeholders who may interpret human rights principles differently based on their backgrounds and geographic contexts.
Evidence
IEEE faced challenges in bringing together diverse voices from different disciplines and had to establish mentorship and education programs to help non-traditional standards developers understand the process while technical experts learned from new perspectives.
Major discussion point
Practical Applications and Real-world Impact
Topics
Human rights | Digital standards | Legal and regulatory
Bridging technical and human rights communities requires communication skills, mentorship, education programs, and shared vocabulary development
Explanation
McCabe emphasizes the practical challenges of bringing together technical experts and human rights professionals who speak different professional languages and have different approaches to problem-solving. She stresses the need for deliberate efforts to build understanding and communication between these communities.
Evidence
IEEE had to establish mentorship and education programs, provide guidance from open-minded technical experts, and invest significant time in communication and patience to help diverse stakeholders work together effectively in standards development.
Major discussion point
Technical Implementation Challenges and Solutions
Topics
Human rights | Digital standards | Capacity development
Agreed with
– Tomas Lamanauskas
– Caitlin Kraft Buchman
– Florian Ostmann
– Matthias Kloth
Agreed on
Capacity building and education are crucial for bridging the gap between technical and human rights communities
Caitlin Kraft Buchman
Speech speed
160 words per minute
Speech length
976 words
Speech time
364 seconds
Human rights AI benchmarks are needed as concrete tools to evaluate AI systems and guide procurement decisions and regulatory frameworks
Explanation
Kraft Buchman identifies a critical gap in the availability of human rights-based evaluation tools for AI systems. She argues that creating benchmarks based on international human rights frameworks will provide practical tools for decision-makers to assess and compare AI systems from a rights perspective.
Evidence
Women at the Table is developing a human rights AI benchmark testing five rights (privacy, due process, non-discrimination, social protection, and health) across five different large language models, created with evaluators from CERN — physicists who specialize in evaluation benchmarks.
Major discussion point
Embedding Human Rights in AI Standards and Governance
Topics
Human rights | Digital standards | Legal and regulatory
Disagreed with
– Florian Ostmann
Disagreed on
Approach to standards development – horizontal vs. sector-specific focus
Technology is not neutral – diverse perspectives and inclusive data standards improve robustness and effectiveness for all users
Explanation
Kraft Buchman challenges the notion of technological neutrality by demonstrating how technology affects different people differently. She argues that incorporating diverse perspectives and experiences into technology design creates more robust and effective solutions that benefit everyone, not just privileged groups.
Evidence
Examples include aircraft cockpit redesign when women became fighter pilots (resulting in more efficient planes), the evolution of wheeled suitcases when women entered the workforce, and electrical standards that account for how different people conduct electricity differently.
Major discussion point
Technical Implementation Challenges and Solutions
Topics
Human rights | Digital standards | Gender rights online
Free educational courses help policymakers and technologists develop shared vocabulary for collaborative technology development
Explanation
Kraft Buchman emphasizes the importance of education and shared understanding between different professional communities working on AI and human rights. She advocates for accessible educational resources that help bridge the knowledge gap between policymakers and technical experts.
Evidence
Women at the Table offers a free course on the Sorbonne Center for AI website designed to help policymakers and technologists develop the same vocabulary when making technology together.
Major discussion point
Capacity Building and Knowledge Transfer
Topics
Human rights | Digital standards | Capacity development
Agreed with
– Tomas Lamanauskas
– Karen McCabe
– Florian Ostmann
– Matthias Kloth
Agreed on
Capacity building and education are crucial for bridging the gap between technical and human rights communities
Florian Ostmann
Speech speed
183 words per minute
Speech length
1080 words
Speech time
353 seconds
Standards development requires both process considerations (who participates) and substance considerations (what standards should contain) to adequately address human rights
Explanation
Ostmann provides a framework for thinking about human rights integration in standards development by distinguishing between procedural and substantive aspects. He argues that both dimensions are essential – having the right participants in the process and ensuring the resulting standards have appropriate content to address human rights concerns.
Evidence
The AI Standards Hub database contains over 250 AI standards, developed or under development, demonstrating the vast scope of the field and the need for strategic thinking about which standards to prioritize for human rights integration.
Major discussion point
Embedding Human Rights in AI Standards and Governance
Topics
Human rights | Digital standards | Legal and regulatory
Cross-sector collaboration between standards bodies, civil society, and technical communities is essential for effective human rights integration
Explanation
Ostmann emphasizes the need for collaboration across different sectors and types of organizations to effectively integrate human rights into AI standards. He highlights the importance of bringing together diverse expertise and perspectives to address the complex challenges of human rights in AI.
Evidence
The AI Standards Hub is a partnership between the Alan Turing Institute, British Standards Institution, and National Physical Laboratory, and has collaborated with UNOHCHR and ITU, including organizing a summit in London in March.
Major discussion point
Multi-stakeholder Collaboration and Institutional Partnerships
Topics
Human rights | Digital standards | Legal and regulatory
Agreed with
– Tomas Lamanauskas
– Peggy Hicks
– Karen McCabe
Agreed on
Multi-stakeholder collaboration and institutional partnerships are essential for effective human rights integration in AI standards
Civil society organizations face resourcing and skills obstacles in participating in standards development, requiring targeted support and demystification efforts
Explanation
Ostmann identifies specific barriers that prevent civil society organizations from effectively participating in standards development processes. He argues that addressing these barriers through targeted support and education is essential for ensuring adequate human rights expertise in standards development.
Evidence
Private sector companies have a business case for engaging in standards development while CSOs do not, creating resource disparities. Many standards are management standards rather than technical standards, meaning computer science expertise is not always required for meaningful contribution.
Major discussion point
Capacity Building and Knowledge Transfer
Topics
Human rights | Digital standards | Capacity development
Agreed with
– Tomas Lamanauskas
– Karen McCabe
– Caitlin Kraft Buchman
– Matthias Kloth
Agreed on
Capacity building and education are crucial for bridging the gap between technical and human rights communities
Focus should be on standards that will actually be adopted by industry, not just horizontal standards that comprehensively cover human rights
Explanation
Ostmann argues for a strategic approach to human rights integration that prioritizes standards likely to be implemented rather than just creating comprehensive human rights standards that may not be widely adopted. He emphasizes the importance of ensuring human rights considerations reach the standards that will actually shape AI development and deployment.
Evidence
While CSOs often prefer horizontal standards that address AI from a human rights perspective, industry tends to focus on narrowly focused standards for specific sectors or use cases, and horizontal standards may not get used much in practice.
Major discussion point
Practical Applications and Real-world Impact
Topics
Human rights | Digital standards | Legal and regulatory
Disagreed with
– Caitlin Kraft Buchman
Disagreed on
Approach to standards development – horizontal vs. sector-specific focus
Ernst Noorman
Speech speed
120 words per minute
Speech length
811 words
Speech time
404 seconds
Emerging technologies like AI are transforming societies at unprecedented pace while posing risks to human rights that technical standards can either safeguard or undermine
Explanation
Noorman frames the discussion by highlighting the dual nature of AI and emerging technologies – they offer vast opportunities but also pose significant risks to human rights enjoyment. He emphasizes that technical standards, as foundational elements of digital infrastructure, play a crucial role in determining whether these rights are protected or undermined depending on their design and implementation.
Evidence
References the Global Digital Compact where member states call on standards development organizations to collaborate in promoting interoperable AI standards that uphold safety, reliability, sustainability, and human rights. Also mentions the Freedom Online Coalition’s 2024 joint statement urging embedding of human rights principles in technical standards.
Major discussion point
Embedding Human Rights in AI Standards and Governance
Topics
Human rights | Digital standards | Legal and regulatory
Agreed with
– Tomas Lamanauskas
– Peggy Hicks
– Karen McCabe
Agreed on
Technical standards fundamentally shape how human rights are exercised and must embed human rights principles throughout the AI lifecycle
The Global Digital Compact and Freedom Online Coalition provide frameworks for establishing AI standards that uphold human dignity, equality, privacy, and non-discrimination throughout the AI lifecycle
Explanation
Noorman outlines the international policy framework that supports human rights integration in AI standards. He specifically mentions the Global Digital Compact’s recommendation for an AI standards exchange and the Freedom Online Coalition’s call for embedding human rights principles in technical standards development and deployment.
Evidence
The Global Digital Compact recommends establishing an AI standards exchange to maintain a register of definitions and applicable standards for evaluating AI systems. The Freedom Online Coalition’s 2024 joint statement urges standard development organizations to embed human rights principles in conception, design, development, and deployment of technical standards.
Major discussion point
Multi-stakeholder Collaboration and Institutional Partnerships
Topics
Human rights | Digital standards | Legal and regulatory
Matthias Kloth
Speech speed
180 words per minute
Speech length
192 words
Speech time
63 seconds
The Council of Europe’s Framework Convention on AI and Human Rights represents the first international treaty addressing AI and human rights with global vocation
Explanation
Kloth presents the Council of Europe’s Framework Convention as a groundbreaking legal instrument that establishes binding international standards for AI and human rights. He emphasizes that this treaty has global reach, allowing like-minded countries from all continents to join and establish common legal frameworks for AI governance.
Evidence
The Framework Convention on AI and Human Rights is the first international treaty addressing this intersection and already has signatories from several continents. The Council of Europe has also developed a methodology for risk and impact assessments on human rights called Huderia, developed in collaboration with the Alan Turing Institute.
Major discussion point
Embedding Human Rights in AI Standards and Governance
Topics
Human rights | Legal and regulatory | Digital standards
Bridging technical and human rights communities requires ensuring mutual understanding between experts who explain human rights concepts to technical professionals and vice versa
Explanation
Kloth identifies the critical challenge of creating effective communication and understanding between the human rights community and technical experts. He emphasizes that this two-way knowledge transfer is essential for successful integration of human rights principles into technical standards and AI development processes.
Evidence
The Council of Europe worked with the Alan Turing Institute on developing Huderia methodology for risk and impact assessments on human rights, demonstrating practical collaboration between human rights and technical communities.
Major discussion point
Capacity Building and Knowledge Transfer
Topics
Human rights | Digital standards | Capacity development
Agreed with
– Tomas Lamanauskas
– Karen McCabe
– Caitlin Kraft Buchman
– Florian Ostmann
Agreed on
Capacity building and education are crucial for bridging the gap between technical and human rights communities
Audience
Speech speed
154 words per minute
Speech length
164 words
Speech time
63 seconds
AI sector needs low carbon just transition similar to other industries, with focus on how corporations handle AI-related job displacement
Explanation
An audience member raises concerns about the environmental and social justice implications of AI development, drawing parallels to just transition concepts in the low carbon sector. They specifically question how to ensure that companies implementing AI technologies address the displacement of workers and ensure fair transition processes.
Evidence
References the concept of low carbon just transition from other sectors and notes that organizations are laying off people because of AI implementation.
Major discussion point
Practical Applications and Real-world Impact
Topics
Human rights | Future of work | Legal and regulatory
Engagement with scientists at technology inception stage is crucial but currently insufficient, as the human rights community may be arriving too late in the development cycle
Explanation
An audience member from the Czech Republic highlights the gap in engaging with the scientists and researchers who are responsible for the initial development of AI technologies. They argue that current efforts focus too much on later stages of the technology lifecycle, missing opportunities to influence fundamental design decisions at the inception phase.
Evidence
Notes that while there has been progress in standards setting and engagement by member states, OHCHR, and NGOs, there appears to be insufficient direct engagement with the scientists actually creating these technologies at the earliest stages.
Major discussion point
Capacity Building and Knowledge Transfer
Topics
Human rights | Digital standards | Legal and regulatory
Disagreed with
– Peggy Hicks
Disagreed on
Timeline and urgency of engagement with technology developers
Agreements
Agreement points
Technical standards fundamentally shape how human rights are exercised and must embed human rights principles throughout the AI lifecycle
Speakers
– Tomas Lamanauskas
– Peggy Hicks
– Karen McCabe
– Ernst Noorman
Arguments
AI governance must serve humanity through established human rights frameworks – technical standards regulate how we use technology and exercise our rights
AI poses urgent human rights risks across multiple domains including privacy, administration of justice, digital borders, and economic/social rights that technical standards must address
Technical communities have a critical role in creating frameworks and processes that integrate human rights principles into AI system design, deployment, and governance
Emerging technologies like AI are transforming societies at unprecedented pace while posing risks to human rights that technical standards can either safeguard or undermine
Summary
All speakers agree that technical standards are not neutral technical issues but fundamental determinants of how human rights are exercised in AI systems. There is consensus that standards must proactively embed human rights principles from design through deployment.
Topics
Human rights | Digital standards | Legal and regulatory
Multi-stakeholder collaboration and institutional partnerships are essential for effective human rights integration in AI standards
Speakers
– Tomas Lamanauskas
– Peggy Hicks
– Karen McCabe
– Florian Ostmann
Arguments
ITU collaborates closely with UN Human Rights Office and Freedom Online Coalition to embed human rights perspectives in AI standards development
Human rights due diligence provides a four-step framework for standards development organizations to identify, assess, integrate, and track human rights risks
IEEE facilitates open multi-stakeholder processes through public working groups with transparent standards development involving diverse communities
Cross-sector collaboration between standards bodies, civil society, and technical communities is essential for effective human rights integration
Summary
Speakers unanimously emphasize the need for collaborative approaches involving multiple stakeholders including standards bodies, human rights organizations, civil society, and technical communities to effectively integrate human rights into AI standards.
Topics
Human rights | Digital standards | Legal and regulatory
Capacity building and education are crucial for bridging the gap between technical and human rights communities
Speakers
– Tomas Lamanauskas
– Karen McCabe
– Caitlin Kraft Buchman
– Florian Ostmann
– Matthias Kloth
Arguments
Enhanced transparency of standards and capacity building through academies helps increase human rights awareness among technical experts
Bridging technical and human rights communities requires communication skills, mentorship, education programs, and shared vocabulary development
Free educational courses help policymakers and technologists develop shared vocabulary for collaborative technology development
Civil society organizations face resourcing and skills obstacles in participating in standards development, requiring targeted support and demystification efforts
Bridging technical and human rights communities requires ensuring mutual understanding between experts who explain human rights concepts to technical professionals and vice versa
Summary
All speakers recognize that effective human rights integration requires deliberate capacity building efforts, education programs, and communication initiatives to help technical experts understand human rights principles and help human rights professionals understand technical processes.
Topics
Human rights | Digital standards | Capacity development
Similar viewpoints
Both speakers emphasize systematic, process-oriented approaches to human rights integration that involve continuous monitoring and assessment throughout the standards development lifecycle.
Speakers
– Peggy Hicks
– Florian Ostmann
Arguments
Human rights due diligence provides a four-step framework for standards development organizations to identify, assess, integrate, and track human rights risks
Standards development requires both process considerations (who participates) and substance considerations (what standards should contain) to adequately address human rights
Topics
Human rights | Digital standards | Legal and regulatory
Both speakers challenge the notion of technological neutrality and emphasize the practical challenges of translating human rights principles into concrete technical implementations while ensuring diverse perspectives are included.
Speakers
– Karen McCabe
– Caitlin Kraft Buchman
Arguments
Standards must translate high-level human rights principles into measurable engineering requirements while managing consensus-building challenges
Technology is not neutral – diverse perspectives and inclusive data standards improve robustness and effectiveness for all users
Topics
Human rights | Digital standards | Legal and regulatory
Both recognize the importance of addressing the broader social and economic impacts of AI, particularly regarding job displacement and the need for just transition approaches that protect workers and communities.
Speakers
– Peggy Hicks
– Audience
Arguments
Engagement with corporations through UN guiding principles on business and human rights addresses just transition concerns including AI-related job displacement
AI sector needs low carbon just transition similar to other industries, with focus on how corporations handle AI-related job displacement
Topics
Human rights | Future of work | Legal and regulatory
Unexpected consensus
Practical implementation challenges are acknowledged by all stakeholders without defensiveness
Speakers
– Karen McCabe
– Florian Ostmann
– Caitlin Kraft Buchman
Arguments
Standards must translate high-level human rights principles into measurable engineering requirements while managing consensus-building challenges
Civil society organizations face resourcing and skills obstacles in participating in standards development, requiring targeted support and demystification efforts
Technology is not neutral – diverse perspectives and inclusive data standards improve robustness and effectiveness for all users
Explanation
Unexpectedly, representatives from technical standards organizations openly acknowledge significant challenges in their processes and the need for fundamental changes, rather than defending current practices. This suggests genuine commitment to improvement.
Topics
Human rights | Digital standards | Capacity development
Focus on practical tools and concrete implementation rather than just principles
Speakers
– Caitlin Kraft Buchman
– Peggy Hicks
– Florian Ostmann
Arguments
Human rights AI benchmarks are needed as concrete tools to evaluate AI systems and guide procurement decisions and regulatory frameworks
Human rights due diligence provides a four-step framework for standards development organizations to identify, assess, integrate, and track human rights risks
Focus should be on standards that will actually be adopted by industry, not just horizontal standards that comprehensively cover human rights
Explanation
There is unexpected consensus on prioritizing practical implementation tools over theoretical frameworks, with even human rights advocates emphasizing the need for concrete, usable tools rather than just comprehensive principles.
Topics
Human rights | Digital standards | Legal and regulatory
Overall assessment
Summary
The discussion reveals remarkably strong consensus across all speakers on the fundamental importance of embedding human rights in AI standards, the need for multi-stakeholder collaboration, and the critical role of capacity building. Key areas of agreement include the non-neutrality of technical standards, the necessity of systematic approaches to human rights integration, and the importance of practical implementation tools.
Consensus level
High level of consensus with significant implications for the field. The agreement spans institutional representatives, technical experts, and civil society, suggesting genuine momentum for change. The consensus on practical challenges and implementation needs indicates readiness to move from principles to concrete action, which could accelerate progress in embedding human rights in AI standards development.
Differences
Different viewpoints
Approach to standards development – horizontal vs. sector-specific focus
Speakers
– Caitlin Kraft Buchman
– Florian Ostmann
Arguments
Human rights AI benchmarks are needed as concrete tools to evaluate AI systems and guide procurement decisions and regulatory frameworks
Focus should be on standards that will actually be adopted by industry, not just horizontal standards that comprehensively cover human rights
Summary
Kraft Buchman advocates for comprehensive human rights benchmarks that can evaluate AI systems across multiple rights domains, while Ostmann argues for focusing on sector-specific standards that industry will actually adopt rather than broad horizontal standards that may not be implemented
Topics
Human rights | Digital standards | Legal and regulatory
Timeline and urgency of engagement with technology developers
Speakers
– Peggy Hicks
– Audience
Arguments
Early engagement with scientists at technology inception stages is crucial but challenging since many work within corporations
Engagement with scientists at technology inception stage is crucial but currently insufficient, as the human rights community may be arriving too late in the development cycle
Summary
While Hicks acknowledges the challenge of early engagement and describes OHCHR’s efforts to reach scientists at inception stages, the audience member argues more forcefully that current efforts are insufficient and the human rights community is arriving too late in the development process
Topics
Human rights | Digital standards | Legal and regulatory
Unexpected differences
Effectiveness of comprehensive vs. targeted standards approaches
Speakers
– Florian Ostmann
– Caitlin Kraft Buchman
Arguments
Focus should be on standards that will actually be adopted by industry, not just horizontal standards that comprehensively cover human rights
Human rights AI benchmarks are needed as concrete tools to evaluate AI systems and guide procurement decisions and regulatory frameworks
Explanation
This disagreement is unexpected because both speakers are working toward the same goal of effective human rights integration in AI systems, but they have fundamentally different views on whether comprehensive horizontal approaches or targeted sector-specific approaches are more effective. This represents a strategic disagreement about implementation methodology rather than goals
Topics
Human rights | Digital standards | Legal and regulatory
Overall assessment
Summary
The discussion showed remarkably high consensus on fundamental goals with limited but significant disagreements on implementation approaches, timing of engagement, and strategic priorities for standards development
Disagreement level
Low to moderate disagreement level with high implications – while speakers largely agreed on the importance of embedding human rights in AI standards, the strategic disagreements about horizontal vs. sector-specific approaches and timing of engagement could significantly impact the effectiveness of implementation efforts. The consensus on goals but divergence on methods suggests a need for coordinated strategy development to reconcile the different approaches.
Takeaways
Key takeaways
Technical standards are not neutral – they fundamentally regulate how technology is used and how human rights are exercised, making human rights integration essential rather than optional
Multi-stakeholder collaboration between UN agencies (ITU, OHCHR), standards bodies (IEEE, ISO, IEC), civil society, and technical communities is critical for effective human rights integration in AI standards
Human rights due diligence provides a concrete four-step framework (identify, assess, integrate, track) that standards development organizations can adopt to systematically address human rights risks
Practical tools like human rights AI benchmarks are needed to evaluate AI systems and guide procurement decisions, as no international human rights framework benchmark for machine learning currently exists
Bridging technical and human rights communities requires dedicated education, mentorship programs, and development of shared vocabulary to overcome communication barriers
Standards development must focus on both process considerations (ensuring diverse participation, especially from Global South and civil society) and substance considerations (what standards should contain)
Industry adoption is key – standards must be practical and focused on sectors/use cases that will actually be implemented, not just comprehensive horizontal standards
Global consensus on AI governance is emerging, as evidenced by the Human Rights Council’s recent consensus resolution on human rights and emerging technologies
Resolutions and action items
ITU to implement capacity building courses with human rights modules for all standards committees and enhance human rights literacy among members
OHCHR and ITU to continue developing and implementing their approved work plan on technical standards and human rights through TSAG
IEEE to continue expanding their 7000 series standards addressing bias, privacy, and transparency, with over 100 standards in development
Women at the Table to complete their human rights AI benchmark evaluation of five large language models across five rights areas (privacy, due process, non-discrimination, social protection, health)
Standards development organizations to adopt human rights due diligence processes including transparency measures and meaningful stakeholder engagement
Continued collaboration between ITU, ISO, and IEC with human rights as one of three key pillars alongside AI and green technologies
Development of AI standards exchange to maintain register of definitions and applicable standards for evaluating AI systems as recommended in Global Digital Compact
Unresolved issues
How to effectively reach and engage scientists at the inception stage of technology development, particularly those working within corporations
Addressing resource and capacity constraints that prevent civil society organizations from meaningfully participating in standards development processes
Managing the complexity of translating broad human rights principles into measurable engineering requirements while maintaining consensus across diverse stakeholders with different interpretations
Ensuring just transition considerations for workers displaced by AI implementation, particularly in collaboration with large corporations
Determining which specific standards should be prioritized for human rights integration given the vast landscape of over 250 AI standards currently under development
Balancing the need for comprehensive horizontal human rights standards with industry preference for narrowly focused, sector-specific standards that are more likely to be adopted
Addressing the challenge that many key AI scientists and developers work within corporations, making early-stage engagement difficult
Suggested compromises
Developing both horizontal human rights standards for comprehensive coverage and sector-specific standards for practical industry adoption
Creating mentorship and education programs that pair technical experts with human rights specialists to bridge knowledge gaps
Establishing free educational courses and shared vocabulary resources to help both policymakers and technologists collaborate effectively
Using frameworks like ‘ethically aligned design’ to provide structure while allowing flexibility for diverse stakeholder input
Focusing on management system standards for AI that don’t require deep technical expertise, making them more accessible to human rights practitioners
Implementing transparency measures and open multi-stakeholder processes to accommodate different perspectives while maintaining technical rigor
Pursuing collaborative approaches between standards bodies through liaising agreements to create multiplier effects for human rights integration
Thought provoking comments
Technical standards actually end up regulating how we use technology and what is technology. So they are not, even though we sometimes say this is a technical issue, these technical issues actually very well determine how our rights are exercised.
Speaker
Tomas Lamanauskas
Reason
This comment reframes the entire discussion by challenging the common misconception that technical standards are neutral or purely technical matters. It establishes that standards are inherently political and rights-affecting instruments, which is foundational to understanding why human rights must be embedded in AI standards.
Impact
This insight set the conceptual foundation for the entire panel discussion. It shifted the conversation from whether human rights should be considered in technical standards to how they should be integrated, making the case that technical decisions are inherently human rights decisions.
Incorporating human rights into standards can present some challenges… Technically, it’s difficult to translate broad human rights principles into measurable engineering requirements… Procedurally, most standards are developed by consensus-type processes. So when you’re looking at human rights principles, they could be interpreted differently among different stakeholders from different parts of the world.
Speaker
Karen McCabe
Reason
This comment introduced crucial practical complexity to the discussion by acknowledging the real-world challenges of implementation. Rather than offering platitudes, McCabe provided an honest assessment of the technical, procedural, and institutional obstacles that must be overcome.
Impact
This shifted the discussion from idealistic goals to practical implementation challenges. It grounded the conversation in reality and prompted other speakers to address how these challenges could be overcome, leading to more concrete solutions and methodologies.
This notion of technology being neutral is really been sort of discredited… We use often this example of the cockpits when the U.S. Congress in 1990 said women needed to become fighter pilots… They had to redesign the cockpit so that things were really adjustable and in different places. And they made much more efficient planes for that reason, because they had to build a cockpit for that kind of diversity.
Speaker
Caitlin Kraft-Buchman
Reason
This comment used a powerful concrete analogy to illustrate how designing for diversity and inclusion actually improves outcomes for everyone. It challenged the false choice between efficiency and inclusivity, showing that inclusive design often leads to better overall solutions.
Impact
This analogy provided a tangible way to understand abstract concepts about inclusive AI design. It shifted the framing from human rights as a constraint on innovation to human rights as a driver of better innovation, making the business case for inclusive standards development.
How do we ensure that we cross these two worlds where we as a human rights community explain to people from a technical world about what certain notions mean and that on the other hand we understand the technical issues? I think this seems to be a challenge which is very important to overcome.
Speaker
Matthias Kloth
Reason
This question identified the fundamental communication and knowledge gap that underlies many of the implementation challenges discussed. It highlighted that the problem isn’t just technical or legal, but fundamentally about bridging different professional cultures and vocabularies.
Impact
This question prompted concrete responses about mentorship programs, education initiatives, and cross-disciplinary collaboration methods. It moved the discussion toward practical solutions for building bridges between communities, leading to specific recommendations for training and capacity building.
Is anybody talking to the scientists who were actually at the inception of these technologies? Because I think we were just not reaching them enough, because we’re actually late in the cycle.
Speaker
Mark Janowski
Reason
This comment challenged the entire premise of the discussion by suggesting that focusing on standards development might be addressing the problem too late in the process. It raised the critical question of whether intervention at the research and development stage might be more effective.
Impact
This question forced participants to confront the limitations of their current approach and consider earlier intervention points. It highlighted a potential gap in strategy and prompted Peggy Hicks to mention their second phase work on engaging at the beginning of the product development cycle.
It’s really important to recognize that it’s a vast field. We’ve got a database for AI standards. It’s got over 250 standards currently in there… CSOs have often told us the ideal is to have a horizontal standard… But we also know that industry is often focused on much more narrowly focused standards… It’s not enough to just have a catalog where there is one standard that has human rights included.
Speaker
Florian Ostmann
Reason
This comment revealed the complexity and fragmentation of the AI standards landscape, highlighting the strategic challenge of where to focus limited resources and attention. It showed that good intentions aren’t enough without strategic thinking about implementation and adoption.
Impact
This insight brought strategic realism to the discussion’s conclusion, emphasizing that success requires not just developing good standards but ensuring they get adopted and used. It highlighted the need for strategic prioritization and practical considerations about industry adoption patterns.
Overall assessment
These key comments fundamentally shaped the discussion by moving it through several important transitions: from theoretical principles to practical implementation challenges, from viewing human rights as constraints to seeing them as drivers of innovation, and from idealistic goals to strategic realism about adoption and effectiveness. The comments collectively built a more nuanced understanding that embedding human rights in AI standards requires not just good intentions but also cross-cultural communication, strategic thinking about intervention points, and realistic assessment of implementation challenges. The discussion evolved from a high-level policy conversation to a practical roadmap for action, with each insightful comment adding layers of complexity and realism that ultimately strengthened the overall framework for moving forward.
Follow-up questions
How do we ensure cross-understanding between human rights communities and technical communities when explaining concepts and technical issues?
Speaker
Matthias Kloth
Explanation
This addresses a fundamental challenge in bridging the gap between human rights expertise and technical standards development, which is crucial for effective integration of human rights principles in AI standards.
How do you work with big corporations on just transition in AI, particularly regarding layoffs due to AI implementation?
Speaker
Audience member from low carbon sector
Explanation
This question highlights the need to understand how human rights frameworks can address the socioeconomic impacts of AI adoption, particularly job displacement and ensuring equitable transitions.
Is anybody talking to the scientists responsible for the inception of AI technologies, rather than focusing only on later stages of the development cycle?
Speaker
Mark Janowski
Explanation
This identifies a potential gap in engagement with AI researchers and developers at the earliest stages of technology development, suggesting that human rights considerations may be introduced too late in the process.
Which AI standards should be prioritized for integrating human rights – horizontal standards that cover broad AI issues or sector-specific standards that may see more adoption?
Speaker
Florian Ostmann
Explanation
This strategic question addresses the challenge of ensuring human rights considerations are embedded in standards that will actually be used and implemented, rather than just existing in comprehensive but potentially underutilized frameworks.
How can we ensure meaningful representation of Global South perspectives, civil society organizations, and appropriate expertise (technical vs. human rights) in standards development processes?
Speaker
Florian Ostmann
Explanation
This addresses systemic representation gaps in standards development that could undermine the effectiveness and legitimacy of human rights integration in AI standards.
How can we address resource and skills barriers that prevent civil society organizations from participating effectively in AI standards development?
Speaker
Florian Ostmann
Explanation
This identifies practical obstacles to inclusive participation in standards development, which is essential for ensuring diverse perspectives and human rights expertise are incorporated.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.