Open Forum #44 Building Trust with Technical Standards and Human Rights

19 Dec 2024 10:45h - 11:45h

Session at a Glance

Summary

This discussion focused on incorporating human rights considerations into technical standards for emerging technologies, particularly in the context of artificial intelligence (AI) and digital governance. Participants from various sectors, including government, civil society, and international organizations, emphasized the importance of a multi-stakeholder approach in developing standards that respect human rights.

Key points included the need to break down silos between human rights experts and technical communities, and the challenges of involving diverse stakeholders in the standard-setting process. Speakers highlighted the importance of considering human rights from the inception of new technologies, rather than as an afterthought. The discussion touched on specific issues such as linguistic diversity in AI development, privacy concerns, and the potential for discrimination in AI systems.

Participants stressed the financial and resource challenges faced by civil society organizations and small enterprises in participating in standards development processes. They called for more accessible ways to contribute to these discussions, such as through workshops and targeted input opportunities. The role of the Internet Governance Forum (IGF) in facilitating these conversations was emphasized.

The discussion also addressed the responsibilities of tech companies in respecting human rights, with mention of the UN Guiding Principles on Business and Human Rights. Speakers noted the importance of human rights due diligence in both company operations and standard-setting organizations. The Global Digital Compact was highlighted as a key framework for advancing human rights in technical standards.

Overall, the discussion underscored the critical need for collaboration between technical experts, human rights advocates, governments, and industry to ensure that emerging technologies and their governing standards protect and promote human rights.

Key points

Major discussion points:

– The importance of incorporating human rights considerations into technical standards for emerging technologies

– Challenges in involving diverse stakeholders, especially civil society and marginalized groups, in standards development processes

– The need for a multi-stakeholder approach and breaking down silos between technical and human rights communities

– Balancing innovation with ethical considerations and human rights protections

– The role of governments, companies, and international organizations in promoting human rights-aligned standards

Overall purpose:

The goal of this discussion was to explore how human rights principles can be better integrated into technical standards for digital technologies, and to identify challenges and potential solutions for making standards development processes more inclusive and rights-respecting.

Tone:

The tone was largely collaborative and solution-oriented. Speakers approached the topic from different perspectives but shared a common interest in improving standards processes. The tone became more urgent when discussing the need for concrete actions and greater civil society involvement. Overall, there was a sense of cautious optimism about the potential for positive change if stakeholders work together effectively.

Speakers

– Dhevy Sivaprakasam: Consultant, OHCHR

– Olivier Alais: Program Coordinator, ITU

– Marek Janovský: First Secretary for Cyber Diplomacy at the Permanent Mission of the Czech Republic in Geneva

– Shirani De Clercq: Economist at Expertise France, seconded to the Saudi Ministry of Digital Technology

– Yoo Jin Kim: Representative from OHCHR

– Gbenga Sesan: Executive Director of Paradigm Initiative

– Florian Ostmann: Director of Innovation at the Alan Turing Institute

Additional speakers:

– Mizna Tregi: Representative for the Saudi Green Building Forum

Full session report

Incorporating Human Rights into Technical Standards: A Multi-Stakeholder Approach

This discussion at the Internet Governance Forum in Riyadh focused on the critical need to incorporate human rights considerations into technical standards for emerging technologies, particularly in the context of artificial intelligence (AI) and digital governance. Participants from various sectors, including government, civil society, and international organisations, emphasised the importance of a multi-stakeholder approach in developing standards that respect human rights.

Key Themes and Arguments:

1. Multi-stakeholder Collaboration and Breaking Silos

Participants stressed the necessity of involving diverse stakeholders in the development of technical standards. Dhevy Sivaprakasam, the moderator, highlighted the need for a multi-stakeholder approach to include diverse perspectives. Marek Janovský emphasized the importance of breaking silos between human rights experts and technical bodies, and stressed the government’s role in fostering dialogue and outreach, particularly in including youth and emerging tech companies.

Gbenga Sesan, Executive Director of Paradigm Initiative, emphasised civil society’s role in bringing user experiences to standards discussions. Olivier Alais, representing the ITU, underscored the need for collaboration between public, private, and civil society sectors, and mentioned the Freedom Online Coalition’s joint statement linking technical standards and human rights.

2. Balancing Innovation with Human Rights Considerations

The discussion explored the challenge of integrating human rights considerations into technical standards without impeding innovation. Shirani De Clercq, an economist from Expertise France, highlighted the tension between economic objectives and ethical principles in AI development. She also discussed Saudi Arabia’s context, mentioning the AI ethics principles issued by SDAIA (the Saudi Data and AI Authority) and the AI Adoption Framework.

Gbenga Sesan argued that considering human rights and user experiences can actually promote innovation by improving services and experiences. He also raised the issue of internet shutdowns and their impact on human rights.

3. Challenges in Implementation

Several speakers highlighted practical challenges in implementing human rights-based standards. Financial and resource constraints for inclusive participation were noted by audience members and speakers alike. Florian Ostmann emphasised the need for lowering barriers to participation in standards development, particularly for civil society and small and medium enterprises (SMEs). He also stressed the importance of considering the use of technology, not just the properties of systems.

Yoo Jin Kim, representing OHCHR, stressed the importance of human rights due diligence by companies and standards bodies. Kim also highlighted the critical risks to human rights from emerging technologies, particularly noting the lack of transparency in AI development and use. She mentioned the Global Digital Compact (GDC) and its relevance to human rights in technical standards.

4. Specific Concerns and Examples

Shirani De Clercq raised an important point about linguistic diversity in AI development, noting that while 7% of internet users are native Arabic speakers, only 0.8% of internet content is in Arabic. She also mentioned the Gaia Accelerator project in Saudi Arabia and discussed the PDPL (Saudi Arabia’s equivalent of the GDPR) and its implications for data privacy.

Marek Janovský highlighted the challenges of neuroscience and brain-related technologies. An audience member raised concerns about the practical barriers to participating in blockchain standards-making.

Florian Ostmann provided specific examples of human rights implications of AI, such as bias in recruitment algorithms and the use of AI in law enforcement.

Conclusion and Future Directions:

The discussion underscored the critical need for collaboration between technical experts, human rights advocates, governments, and industry to ensure that emerging technologies and their governing standards protect and promote human rights. While there was general consensus on the importance of incorporating human rights into technical standards, the practical challenges of implementation remain significant.

Key takeaways included the need to lower barriers for participation in standards development processes, especially for civil society and SMEs, the importance of agile methodology in implementing standards, and the need to consider both the properties and use of technology in standards development.

Unresolved issues include creating scalable models for integrating human rights into technical standards without slowing innovation, addressing financial and resource constraints that limit participation, and effectively balancing economic objectives with ethical principles in standards development.

The discussion highlighted the complexity of the challenge but also demonstrated a growing recognition of the interconnectedness between technical standards and human rights. This suggests a potential shift towards more inclusive and rights-based approaches in technology development and governance, though significant work remains to be done to realise this vision.

Session Transcript

Dhevy Sivaprakasam: Okay, so good afternoon in Riyadh and good morning, good night, wherever everyone is. We’re happy to introduce you all to the session today on building trust with technical standards and human rights. We all recognize at this IGF, with its multi-stakeholder approach, that there is a real need for a rights-based approach to technical standards. And here we have today, and we’re very happy to introduce, quite a number of good experts in the room from civil society, government, and corporate perspectives as well. I will now just hand over to Olivier to give an initial introduction.

Olivier Alais: Thanks a lot, and good afternoon and good morning to all colleagues and friends. It’s a great honor to welcome you to this important session, co-organized by the Czech Republic, ITU, and OHCHR. Today we come together to address a critical question: how can we include human rights in technical standards for emerging technologies to build trust in our digital future? At ITU, we understand that technical standards are the invisible foundation of our connected world, and traditionally standards have been focused on two goals: technical accuracy and commercial success. Today we must add a third essential dimension, the human rights perspective. This new approach requires us to ask important questions: how to protect privacy and data, how to ensure freedom of expression and access to information, how do we guarantee non-discrimination and inclusivity. Addressing these questions is vital to build the trust that ensures emerging technologies are widely adopted and serve everyone. So why does this matter now? Technologies like AI, the Internet of Things, and the metaverse offer incredible opportunities, but they also create challenges. And without clear guidance, technology can unintentionally harm the very rights it aims to protect. That’s why collaboration is so important, and this type of panel is very important. For example, the Freedom Online Coalition recently delivered its first joint statement explicitly linking technical standards and human rights. The recent ITU resolution on the metaverse, from two months ago, is a milestone: it is the first to explicitly reference human rights. Our partnership with OHCHR and with human rights specialists is also key, as is the commitment to implement the Global Digital Compact by turning human rights principles into technical guidance. So thanks a lot for your attention. I look forward to this conversation and to working together to ensure technology serves humanity’s best interests. So I’m giving you back the floor. Thank you.

Dhevy Sivaprakasam: Thanks so much, Olivier. And to speak to these issues, we have five different speakers today, and I’m very happy to introduce them. Marek Janovský, First Secretary for Cyber Diplomacy at the Permanent Mission of the Czech Republic in Geneva, joining us online. Shirani De Clercq, Expertise France economist, seconded to the Saudi Ministry of Digital Technology, on site next to me. Yoo Jin Kim, my colleague from OHCHR in Geneva, joining us online as well. Thanks, Yoo Jin, for joining. And Gbenga Sesan, Executive Director of Paradigm Initiative, on my right. And finally, but not least, Florian Ostmann, Director of Innovation at the Alan Turing Institute, who’s also joining us online. So let’s kick off the discussion, starting with Marek. The first question we wanted to throw to you, just to hear your thoughts, is how can international cyber diplomacy efforts promote the inclusion of human rights in the development of technical standards for emerging technologies? And we’ll start with you, Marek.

Marek Janovský: Hello everybody. I hope you can hear me. Greetings from Geneva, from the Czech Permanent Mission. I’m glad to be here with you, with the expert community. I will try to be brief, and I hope not to exceed my time, so if I am too long, please don’t hesitate to stop me. I’ve got a few points and some questions. So to your question: what the diplomatic cyber community can actually do is keep raising awareness of these matters. This matter of linking human rights and the development of standards is not a self-standing issue. It’s actually linked to, let’s say, broader international relations and how new and emerging technologies change these international relations. So we’re not talking about a vacuum. It’s a link. It’s a change of, let’s say, the environment that we live in as humans and as societies. That is one of the reasons why I think diplomatic communities globally are starting to take interest, and raising interest, in these matters. So just by way of introduction, I wanted to point that out. The first thing I would like to mention is the attention we need to pay to the whole life cycle of new and emerging technologies, of their development. What we’re working on here with ITU specifically and the OHCHR colleagues is one of the points in the cycle: standards development, standardization. But there is also inception; there is also, you know, use and development; and also disposal of technologies, the other phases that need to be, let’s say, heeded. And so we’re now talking to you, the expert audience; I think that you have experience with the other ones as well. The third element I’d like to mention is that it’s important for us, for the diplomatic cyber community, or let’s say new-tech people working in diplomacy, to try to break the silos between specifically the experts working on human rights, such as the High Commissioner’s office in Geneva or elsewhere as well, and specific bodies such as ITU or ISO or IEC or IEEE, etc. It’s important to note that the ITU is not the only game in town, and there are others that can join the efforts, and need to join the efforts, in order for the tech and the digital transformation to be a success. Another point I would like to mention is the importance of youth. I’d like to point out that young people, and they’re actually even in Riyadh now, NGOs and others, make up a crucial part of this, so I’ll be quite happy to hear from them what they think about this important link. Maybe one of the other points, and this is the question I wanted to basically raise, is how the IGF could help to actually advance this. Because the diplomatic community is one thing, but we need a wholesale approach to a change of paradigm in how we actually perceive the development and use of new and emerging technologies. I think this is key: that not only diplomatic communities, but others join in this effort, and, you know, each of us will play a role in making this a success. Because once we decided to follow this path of digital transformation, I don’t think we have much choice but to try to make it safe and human rights based. Thank you very much.

Dhevy Sivaprakasam: Thank you very much. That was a really good overview, actually, of the multi-stakeholder approach. And maybe also zooming in now on the Saudi Arabian experience, I’d like to ask Shirani specifically about the work that you’ve been doing in Riyadh concerning non-discrimination. What are the use cases illustrating the impact of technology biases in Saudi Arabia, and what solutions have you seen being implemented here to address those challenges?

Shirani De Clercq: Thanks again for inviting me to this great panel. In the ministry where I work, we have a technology foresight department. For their monthly meeting, someone in their team tests the latest apps on the market, these days mostly AI and Gen AI, and presents their pros and cons during a meeting. One recurring theme that has been observed is the biases within these applications. A simple example: when you ask a Gen AI model to depict a traditional Saudi family, you sometimes end up with a Saudi woman wearing a men’s thobe. We also have other stereotypes, but today let’s just stick to this image issue. Is it really a big deal to be falsely represented? In the digital world, an entire minority could be falsely represented or even erased. Erased from, for example, an AI-assisted recruitment process. It could become a big issue in the long run. So bias in AI systems often stems from how data is collected and how models are trained. But improving the fairness of AI requires more than just diversifying the data sets. While ensuring the data reflects the full range of appearances and cultural practices in Saudi Arabia is crucial, we must also design AI systems that don’t automatically discard what they deem unusual. So equally important is the composition of the development team. Inclusive teams representing various backgrounds and experiences are more likely to recognize blind spots and feed algorithms with data that genuinely reflects their populations. So, what do we do in these situations? In 2023, the Saudi Data and AI Authority, which is SDAIA, issued AI ethics principles aimed at guiding organizations in the responsible use of these technologies. And recently, in September 2024, SDAIA issued the AI Adoption Framework, designed with a use-case-driven methodology, a very flexible approach. There’s another issue on language. Have you ever heard of the principle of linguistic relativity? It says that the way people think of the world is influenced directly by the language that people use to talk about it. While 7% of internet users are native Arabic speakers, only 0.8% of internet content is in Arabic. So a model trained on Modern Standard Arabic fails to understand regional dialects. Research shows that customizing models to local language variations significantly improves accuracy. For example, AraBERT, an Arabic-focused language model, boosts dialect identification accuracy from about 84% to 92% simply by incorporating more dialect-specific data. In doing so, we not only improve technology’s effectiveness but also ensure that the digital world genuinely represents Saudi linguistic and cultural richness. So it’s very important for us.
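The dialect-adaptation approach described above is, in practice, standard fine-tuning of a pretrained Arabic encoder on dialect-labelled text. Below is a minimal sketch of that idea using the Hugging Face Transformers library; the checkpoint name, data files, and label set are illustrative assumptions for the example, not the actual setup behind the figures quoted in the session.

```python
# Minimal sketch (not SDAIA's or AraBERT's actual pipeline): fine-tune a
# pretrained Arabic encoder on dialect-labelled text, the step that lifts
# dialect-identification accuracy as described above.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

CHECKPOINT = "aubmindlab/bert-base-arabertv2"  # an AraBERT checkpoint (assumed)
DIALECTS = ["MSA", "Gulf", "Egyptian", "Levantine", "Maghrebi"]  # example labels

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(
    CHECKPOINT, num_labels=len(DIALECTS))

# Any dialect-labelled corpus with "text" and integer "label" columns works;
# the CSV file names here are placeholders.
data = load_dataset("csv", data_files={"train": "dialect_train.csv",
                                       "test": "dialect_test.csv"})
data = data.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
                batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dialect-id", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=data["train"],
    eval_dataset=data["test"],
    tokenizer=tokenizer,  # enables dynamic padding of variable-length batches
)
trainer.train()
print(trainer.evaluate())  # the gain comes from the dialect-specific data
```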

Dhevy Sivaprakasam: Yeah, now switching gears to Yoo Jin: maybe you could speak a bit more to the critical risks that OHCHR sees emerging from these technologies on a broader level. We heard Shirani speak to the Saudi Arabian perspective; maybe you could speak to the global trends that we’ve been observing from the office. Thank you.

Yoo Jin Kim: Thanks so much, and thank you for this question. It’s great to see some familiar faces online and on site. So to start, I would mention again our report, published last year, on the relationship between human rights and technical standards in relation to digital technologies. To recap: on one hand, the report showed how technical standards are relevant to the enjoyment of human rights, for instance stressing that many standards define processes and actions that directly respond to certain human rights related concerns. To be more concrete, some examples include standards on privacy by design, privacy risk assessment and management, and accessibility standards on the web, which allow people with impaired vision to navigate and access the internet. The ways in which these standards are designed are important to protecting the right to privacy, freedom of expression and association, the right to life; in essence, the whole spectrum of human rights, although I’ve only listed a few rights here. And let’s look at a few more recent examples. The Internet Engineering Task Force has done a great amount of work around this, and right now they’re discussing an internet protocol related to AirTags and gender-based violence. The working group on DULT, which stands for Detecting Unwanted Location Trackers, is developing an accessory protocol to protect people against being unknowingly tracked. It’s addressing a real issue in a way that tackles gender and domestic violence cases: creating a standard that allows the AirTags to communicate with Bluetooth devices, so that the person being tracked can detect and discover the hidden tag. On the other side, I would say our report showed the risks related to standards development. For example, standards that define technical features necessary for digital infrastructure’s functioning have particular relevance for human rights, as we have seen with the Transmission Control Protocol and HTTP. In this case, weak encryption in these protocols, or its absence, can facilitate mass surveillance programs that systematically undermine the right to privacy, and can facilitate targeted surveillance both by states and by non-state actors. There have, of course, been some really important resolutions recently from the UN General Assembly and the Human Rights Council, alongside our reports on human rights and technical standards. But other reports that we have issued, for example on internet shutdowns and the right to privacy in the digital age, stress the risks that relate to new and emerging technologies. And other reports, such as on the use of AI in border management and by law enforcement, get into the use of AI, for example real-time facial recognition technologies and the disproportionate impacts they have. And I think one important thing to note is the lack of transparency, which has really been the undertone in the development, design, and use of AI. This lack of transparency, and thus the lack of accountability, has really led to harm and increased risks to human rights.
And let me just conclude by noting that while technical standards can have an important role in creating conditions that are conducive to exercising human rights, clearly there are risks posed to human rights by the way standards are designed, but also by the way they are deployed. So this is why we really need to put human rights front and center in digital technologies and the standards that underpin them. And we have to make sure that standard setting processes really rest on multi-stakeholder principles and become as transparent, inclusive, and open as possible.
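For readers unfamiliar with DULT: the premise of the draft protocol is that trackers broadcast Bluetooth LE advertisements that any nearby device can scan for, so a person being followed can discover a hidden tag. The sketch below illustrates only the scanning side, using the bleak library; the manufacturer-ID filter is an illustrative assumption standing in for the actual DULT advertisement format, which the working group defines.

```python
# Hypothetical sketch of the detection side of DULT-style protection: scan
# Bluetooth LE advertisements and flag devices that might be location
# trackers. The real DULT draft defines the advertisement format and the
# logic for deciding that a tracker is following you over time.
import asyncio
from bleak import BleakScanner  # cross-platform BLE scanning library

APPLE_COMPANY_ID = 0x004C  # Bluetooth SIG company identifier for Apple

def on_advertisement(device, adv):
    # Trackers reveal themselves via manufacturer-specific data in their
    # advertisements; a real detector would parse the payload and only alert
    # after the same device has travelled with the user for a while.
    if APPLE_COMPANY_ID in adv.manufacturer_data:
        print(f"Possible tracker: {device.address} (RSSI {adv.rssi})")

async def main():
    scanner = BleakScanner(detection_callback=on_advertisement)
    await scanner.start()
    await asyncio.sleep(30)  # listen for 30 seconds
    await scanner.stop()

asyncio.run(main())
```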

Dhevy Sivaprakasam: Thank you. And yeah, so we heard from both Yoo Jin and Shirani, and also Marek, on the importance of having voices from marginalized communities and minorities involved in this process. And Marek mentioned also a change in paradigm, so it’s good to have someone from Paradigm Initiative; I’m sure you’ve heard that many times. So Gbenga, how can civil society organizations influence technical standard setting processes and include voices that are traditionally excluded from the room?

Gbenga Sesan: the room? Thank you. I definitely heard paradigm mentioned and I’ve been hearing that all week, so that’s great. It’s good to know that our brand is spreading. Just very quick things, and I’ve organized my response in under five P’s because then it makes it easier to remember. And the first is prioritization. It’s important for civil society to prioritize participation in conversations around technical standards. And I say that because for many years, even myself and a few colleagues had conversations around, is it worth it? Because it’s expensive to participate and you need to decide. And that is where you begin to ask yourself the question that what is the connection between human rights and these technical standards? And then thanks to the Office of the High Commission, your report and some of the conversations we’ve had have helped in terms of painting that picture that if you don’t have the conversations at the design stage and the standards are set, you will then end up with fighting the fires eventually. The second is participation, because it’s one thing to complain that there are issues, but it’s another thing to participate and to bring knowledge to the table. And I think this is really important because when you bring knowledge to the table, you may just be presenting a side of the conversation that people have not even considered at all. And we see that in our work when, for example, we have conversations with security agencies or with the judiciary, that what the knowledge we bring to the table about human rights, about digital rights, are obviously not what they discuss every day. So it’s helpful for them to see that. The third is partnership. I said earlier today in one of the sessions that we need to see the spirit of multi-stakeholderism in government delegations. It’s not just about governments being in the room and then civil society is in the room, but even the sense of government. So government is a representation of the people. So governments need to start going for technical standards conversations with civil society, with businesses that I believe they already go with, and with, of course, they don’t have a choice with the technical community, so that there’s a partnership that brings in all of these elements, which then brings me to my fourth point, which is people. Because at the end of the day, my suspicion and the reality is that the services that will use the standards are focused on markets. The market is made up of people. Markets are made up of people. So if the people are not at the center of the experience, it’s like the UX, you’re building a user experience. And I think that civil society has a unique opportunity to bring user lived experience to the table of these conversations. We were talking about internet fragmentation and we got all technical until we stepped back and said, wait a second, how does fragmentation affect the internet user? And that was very helpful. And of course, finally, is the process itself. I was glad to hear Eugene mentioned earlier, something about privacy by design. This is where we need to begin to have conversations about processes, such that anything, basically talking about human rights by design. So it’s not human rights consideration. It is a fact that human rights, because it is people focused, is at the center of the entire process. It’s at the center of the conversation that we’re having about technical standards.

Dhevy Sivaprakasam: Those are very important points. And I’m now going to turn to Florian to speak about how to make that actually effective, how to actually incorporate the human rights perspective into technical standards processes, and where you see the challenges in this. The tough question comes to you.

Florian Ostmann: Thank you very much, and thank you for the invitation. It’s great to be part of this discussion. So I’ll be speaking from the perspective of our work at the Alan Turing Institute, which is the UK’s national institute for AI, and an initiative that we set up a couple of years ago called the AI Standards Hub. I’ll say more about that later, if I get a chance. But just to say, our work is focused on AI rather than digital technologies more broadly, but I think a lot of the considerations apply more broadly as well. I think I won’t go into too much detail on why AI raises human rights implications. I think that’s already been eloquently set out by my previous colleagues on the panel. Just to illustrate, as we’ve heard, there are privacy implications. AI can have implications for physical safety, those important questions around non-discrimination, due process when AI is used in legal or administrative processes, and important questions around surveillance, just to name a few important human rights aspects. So it’s very clear that AI raises human rights questions. Now the question is, why is it important for human rights to be considered in the context of standardization for AI? I think one important thing to emphasize here is that standardization, we’ve seen over the last couple of years, is increasingly being looked at as a tool for AI governance, often through important links between regulation and standards. So in the EU context, for example, there’s a very important and direct link between the enforcement and implementation of the AI Act and the standards that are being developed at the European level. So standards aren’t just a standalone tool, but they do have important links to the broader landscape. and are considered increasingly an enabler for AI governance more broadly. And that’s why it’s important, I think, and absolutely critical, that human rights considerations are part of standardization. Otherwise, standards won’t be able to play that overall enabling role. One thing that I also want to briefly sort of emphasize or highlight is the importance to think about the use of technology rather than just properties of systems, because I think that’s an important shift to some extent if you think about traditional domains of standardization, where standards are primarily about properties and specifications for systems. That is very important in the AI context, as it is everywhere. But as we’ve heard already, some of the most critical human rights-related impacts in the AI context may be associated with the use of systems, regardless of whether their properties meet certain specifications. And so, again, that’s something where I think standardization needs to broaden its scope if standards are meant to play this broad role of governance enablers and think about the use rather than just properties of systems. Now, to the second part, what are the main challenges for including human rights expertise in standards development? I think at a high level, sort of two points. The first one is that we are dealing with two different cultures. You know, think about the standardization community and the human rights community. There are different conceptual frameworks at play, different languages and, you know, different cultures of collaborating. The second one is a simple point about simply being different communities. 
So the people involved in standardization are traditionally, you know, separate from the group of people who have a professional focus on human rights and human rights due diligence. Now, the fact that there are these different communities means that stakeholders that have expertise in human rights and human rights due diligence are not, in many cases, particularly familiar with standardization as a field, and that creates obstacles for them to actively engage. It starts with the fact that the space is very complex: there’s a wide range of standards development organizations, they each have their own rules for participating, and it’s difficult to understand, you know, what are the most important developments, what are the most important standards projects, and then how do I get involved in those, given that there are different rules. Secondly, there are obstacles around skills, so skills and knowledge about how the standards development process works, and knowledge about, you know, the different types of standards. There’s a very common misconception, I think, around the notion of technical standards; we tend to avoid using the term technical standards, because it tends to imply that the content of standards developed in organizations like the ITU or ISO is by necessity particularly technical. That is the case for some standards; it’s not the case for all standards. And that often creates misunderstandings. And then as a last point, of course, it has to be mentioned that there are important challenges around resourcing, as has already been mentioned by previous speakers as well. It’s important to recognize, I think, that if you think about the multi-stakeholder approach to standards development, certain stakeholder groups have a business case for being involved in standards development, industry being the obvious example, but important pockets of human rights expertise are to be found outside of industry, especially in the civil society space. And it’s much more difficult for civil society organizations to find resources and make the business case for being involved. That is changing, I mean, the awareness is changing, but the challenge of finding the resources remains the same, so far.

Dhevy Sivaprakasam: Thanks so much, Florian. I think at this time we’d like to open the discussion to the floor and check if anyone has any questions they would like to pose to our speakers here. Sorry, apologies; maybe it’s the time of the day. Maybe we can then move on to the other questions that we had for the speakers, putting Marek back online. What role do you envision the Czech Republic playing, for governments, in ensuring standards align with the multi-stakeholder process and also include the human rights perspective in development?

Marek Janovský: Yeah, many, many thanks again. On this national perspective, I’d like to just refer very quickly to what I have already mentioned at the beginning of the panel: the breaking of silos. If we are to succeed, we need to make sure that people actually communicate, so that the experts on both sides find a way to respect one another and do not disregard one another in some kind of joint work. It sounds banal, but it has proved to be a very difficult thing to do from a diplomatic perspective. So that’s maybe one point: again, breaking the silos. That’s what the Czech Republic, within the EU, does. I mean, not alone, because we would not have been able to do it, but we try to foster even cross-regional, let’s say, dialogue. So as we have heard before, we need to talk; for example, the EU needs to talk with African countries, because we’re actually facing some joint challenges in this ever more digital world. Linguistically, for example, as we have heard before, it’s the same issue. And I have to subscribe to the previous speaker saying that once we’re exposed to a different language, we think in a different language; we are, of course, subconsciously changed in the way we perceive the world, that’s for sure. But there are other things, similar to that, which are actually influencing our brains and thoughts; I’ll probably come back to that. Another point maybe worth mentioning, outside of the outreach, which we want to foster also here, actually, through you: again, the IGF would need to play a bigger role in this. And seriously, the Czech Republic would like, not to task, but to ask the IGF to be of help in this specific area of work. Because we think that we need more opinions, more, let’s say, recommendations from you, because you’re the experts that can actually help, be it the governments, be it the civil society organizations, be it researchers, be it the private companies who are there. That brings me to the point of private companies. I think this is also one of the elements that the Czech Republic is trying to foster. That’s why we’re extremely grateful for OHCHR’s B-Tech project, which is being run at least in Geneva; I think it would benefit from worldwide coverage as well. It is about trying to talk not only to big tech, but to the emerging companies and the SMEs, and again to the young people who are actually driving the companies at the, let’s say, frontier development of these applications. Because basically the world is going to be theirs, and they need to step up and tell us how to do it as well: not only to develop, but to develop in a responsible way. So, I would end there. And maybe just a few remarks, if I may, on the previous conversation. It’s interesting to focus on standardization and on the use; I agree. But maybe a question to the audience or the other colleagues: when we talk about new technologies such as, say, neuroscience and other applications which are going to be, you know, directly focusing on our brain activities and our thoughts, how can we improve users’ capacities and skills? In my view, there’s no way around it: it needs to be already, let’s say, clean and responsible from the inception part; you know, the systemic properties need to be done in a good way, in a responsible way.
I don’t think that a user who is being swarmed by countless applications and, you know, Internet of Things connectivity, etc., is going to be able to rely on responsible usage and skills alone; I think, just from a daily usage perspective, there’s no way. So again, the inception and the first phases of the development cycle are extremely key to getting it right. Thank you.

Dhevy Sivaprakasam: Thanks so much, Marek. We have a lady in the room who would love to ask a question at this point. So we’ll just pass the mic.

Audience: First of all, thank you so much for this insightful discussion, and allow me to introduce myself. My name is Mizna Tregi, and I am the representative for the Saudi Green Building Forum, an organization committed to sustainable development and to fostering innovative principles. Now, the discussion highlighted critical gaps in how human rights are considered in technical standards for emerging innovations. Despite the rapid advancement, overlooking these principles exposes marginalized groups to risks such as discrimination, privacy violations, and lack of transparency. This challenge is further compounded by the limited involvement of diverse stakeholders, which threatens to create an unsafe and non-inclusive technological environment. To address these issues, there is a pressing need to ensure active participation from all sectors in the development of these technical standards. By integrating a human rights based approach, we can design systems that prioritize transparency, fairness, and accountability. Strengthening collaboration between the public and private sectors and civil society is equally crucial to ensure these standards reflect the needs of all communities. Moving forward, actionable steps include creating a human rights guide to align technical standards with principles like equality, equity, and justice. We can also establish a robust multi-stakeholder platform that fosters the exchange of experiences, expertise, and best practices, while regular human rights impact assessments will ensure alignment with sustainable development goals. Now, my question is: how can we create scalable models for integrating human rights into technical standards without slowing the pace of innovation? And also, how can we involve, as His Excellency said, the private sector to implement these standards? Thank you.

Dhevy Sivaprakasam: Thank you so much. That was a really good question. I would just invite anyone in the room to come in on that, because I think you raised multiple points that actually spoke to all of the panelists’ speeches today. So, anyone who wants to respond?

Gbenga Sesan: Thanks, that’s a fantastic question, actually, and one of the reasons it’s fantastic is because it actually takes us to the center of innovation. The question is how do we ensure standards are human rights compliant without slowing down the pace of innovation. And I think it’s important to say this: innovation is about improving experiences, right? Improving experiences, making services better, and doing things in different ways. And at the center of innovation are the people who interact with these experiences. And the whole sense of human rights is basically saying we want to make sure that the rights of those who engage with these platforms, who interact with these experiences, are respected. A very simple example that I love to give is internet shutdowns. If we bake in the fact that shutdowns are not allowed, which is now part of the GDC, all countries have now agreed in paragraph 29(d) that there should be, you know, no shutdowns, that itself doesn’t slow down the improvement of the internet; it accelerates it. It means that people can use it, people can give feedback, people can have an experience of the entire and not a fragmented internet. So thanks for asking that, because it actually takes us to the point of why human rights conversations are also helpful with technical standards: because they actually promote the sense of innovation. It’s not about the person creating the tool, but about the user, like, you know, user experience, and how their rights are respected.

Dhevy Sivaprakasam: Can you hear us? Okay, because I’m just seeing from the chat that the audio from the room apparently is a bit low for participants online. Please do continue to type in the chat and we will try to get this sorted. Thanks. Yeah, I wanted to see if anyone else was interested in responding to the question that was posed. But if not, then I might actually move on, because there are two questions, to Yoo Jin and to Shirani, that are directly linked to what was actually raised. For Shirani: in Saudi Arabia, you spoke about the challenges of the inclusion of minorities and marginalized communities. What are the financial and social implications you think need to be tackled to address this issue?

Shirani De Clercq: This really responds to something you approached in one of your questions: the dilemma between financial revenue and ethical, social, and traditional values. It’s always a dilemma between both of them, so that’s why I just took two or three examples, and there’s no answer, of course, because we don’t know. So, in the race for AI leadership, AI is projected to contribute $5.1 billion to GDP by 2030. There are many projects, and one that I identified is the Gaia Accelerator, pushed and supported by SDAIA and NTDP. It has invested $160 million to fund 120 AI startups in Saudi. So while these startups are pursued for their economic returns, implementing SDAIA’s AI ethics principles, deployed in 2023, and the framework from last year, is going to slow down the whole process. We have an objective with KPIs where the companies have to become future unicorns; that’s our main objective. But on the other side, there are principles that come in, and we say that, by design, you have to include ethical values in it: why do you do it, and how do you do it? Another example is unbiased data sets under financial constraints. So you know about ALLaM, the Arabic language model which has been developed by SDAIA. It addresses gaps often found in global LLMs like GPT-4 that poorly represent Arabic dialects. But while ALLaM bridges this gap, expanding such initiatives costs enormously. So what do you fund: these kinds of language models, or a future unicorn that makes you proud? And something else which is more difficult to discuss: take the Nafath platform, which has 1.8 million daily digital verification users, or Tawakkalna, sorry for the pronunciation, which was developed during the COVID period and has 17.9 million users, which is enormous. So on one side, you need data to reduce biases, and on the other side, there’s the privacy law. Saudi Arabia published, I think in 2023, the PDPL, which is the equivalent of the GDPR, to protect data privacy in Saudi Arabia. So the real question is the same as yours: how do you classify and prioritize what is most important for a country? I suppose it depends on the time period and your economic situation, because not all countries will face the same problems at the same time. And I’m really for the agile method. No standardization can be deployed in one shot; I think you have to try and adjust. And I have the feeling that SDAIA’s AI ethical framework has this very agile methodology: you test with a use-case methodology, and if it works, you improve and improve, and implement it one by one. Thank you.

Dhevy Sivaprakasam: Thanks, Shirani. Actually, just building on that point, and also the point our kind participant raised earlier about the principles and standards on human rights that are already there. Oh, sorry. Okay, so we have another question, so I’ll pause; please feel free to speak. Thank you.

Audience: Yeah, I think I’m sitting on the wrong side of the room. So I have a question about blockchain standards-making and multilateralism in that. I’m from India, and I worked on an Indo-Australian project on blockchain standards. It was a three-year project, and by the end of it, we were looking at what involvement in standardization processes through multilateral forums looked like. So we worked with BIS and then Standards Australia, but we also worked at an international level. And the problem that we found was that most people just didn’t participate in standardization processes, not because they didn’t know it was important, but because it’s not practically feasible. So I think that it’s important to get a civil society approach, and input from small and medium enterprises, but contributing to standards conversations, which is something we’ve also attempted to do, is an extremely resource- and time-intensive process. It’s something that requires significant upskilling: you have to upskill yourself, you have to spend a lot of time working on these things. It’s also something that you don’t actually get paid for, so most of us do it in addition to the work that we do. Even when we did conclude our project, we came up with a roadmap of how standards organizations could potentially get people more invested in the standards-making process. But I think unless there are significant incentives being offered, which some governments are doing but a majority of them can’t really do, that’s not something that’s very likely to change. A large organization could potentially devote some people to participate in standards conversations; small and medium enterprises just can’t afford it, and civil society can’t really afford it. It requires a lot of expertise and a lot of time. Unless the standards-making conversation drastically changes in the next two years, to where it becomes possible to contribute without that much investment, I’m not sure how a multilateral conversation on standards would really work out.

Dhevy Sivaprakasam: That’s a really important point. And I do recognize Florian spoke a bit to this before. So yeah, Florian, please come in to respond.

Florian Ostmann: Yeah, thank you. I’ll just briefly come in, also because I need to drop out in five minutes, I’m afraid. I think it’s a really important point. We don’t work on blockchain, but I think the point applies more generally when it comes to civil society involvement in standards development for emerging technologies. I just thought I’d briefly give some examples of what we’ve done. I think the problem really needs to be addressed at different layers. So one thing we’ve done in the AI space with the AI Standards Hub is to build a database of standards projects that are under development. That’s something that didn’t exist before. There’s a real challenge at a very foundational level for organizations that aren’t familiar with the space in understanding and tracking what is going on across the large number of SDOs that are active in the space, and then being able to decide which standards project, with my limited resources, is the one I should be focused on and engaging with. So that’s an example of how we try to contribute to solving that in the AI space, through that kind of database that tracks those developments. Secondly, to your point on skills and upskilling, we’ve developed a range of e-learning materials and we also have occasional in-person events. Some of those are not AI specific, so it might be useful, if you’re interested, to take a look. We have several e-learning modules on how the standardization process works and the role of different SDOs, for example. Then lastly, and I think that’s probably the most important point, where the most value lies and the biggest difference can be made, is thinking about ways of lowering the barrier to participating and contributing to standards development. Traditionally, of course, the way to contribute is to join a committee, whatever committee that is. In some organizations, if it’s ISO or IEC or CEN-CENELEC, it’s through the mirror committee at the national level; in other cases it’s the study group directly, like in the ITU. But it’s a formal process and quite a time commitment, which can be quite daunting, and many organizations may feel they’re unable to commit to joining a committee. So we’ve been trying to experiment with ways of lowering that barrier in collaboration with SDOs, for example with CEN-CENELEC and working group chairs, to create spaces, through workshops, for example, that on a one-off basis provide an opportunity for interested organizations, especially those that struggle. So we have a dedicated workstream for civil society, where civil society organizations can look at a project that’s under development and give targeted input on certain questions that the committee can then consider in its work. And I think that’s something we’ve had a lot of positive feedback on, and we’ll try to do more. We’ll actually probably have a workshop for civil society organizations to feed into European standards for risk management for AI in February. So anyone who’s interested, please take a look at our website. We’ve been very, you know, pleased and proud of our partnership with Yoo Jin and colleagues from OHCHR, having done a couple of events together, and we’ll also be working together on a summit that the AI Standards Hub will be hosting in March. You’ll find more information on that global summit on AI standardization, where civil society inclusion will be a really important focus.

Dhevy Sivaprakasam: Thanks very much, Florian. I just want to also share with the room a comment in the chat from Gopal: “In my humble opinion, it’s best to have responsibilities before rights; making responsible citizens, now netizens, is crucial to building trust. Using technology standards, we need a trust layer lower than the network layer in the ISO OSI reference model.” So thanks, Gopal. I think it’s also related to the question we asked Yoo Jin, as you speak about responsibilities before rights. Yoo Jin, could you speak a bit more on the B-Tech project, which encourages companies to actually meet the responsibilities they have to protect the rights of individuals as well?

Yoo Jin Kim: Sure, and I’m really glad to hear from my other colleagues on the panel here about the challenges the non-technical community, CSOs, for example, face, which is, I think, really important to tackle. And just to note that we understand standards development takes time; it’s a complex and challenging environment. So really, we need all stakeholders involved, in a continued effort and in a collaborative manner, to make sure that human rights are front and center in tech standards development. It’s not something where we can flick a switch on or off to make things happen. So I think it really highlights the importance of a panel like the one we have here today, with diverse participants. So I will focus a bit more, like you said, on the B-Tech project and the role of companies, and perhaps outline some of our next steps in the coming year. The role of tech companies in the standard setting space is something we would really like to stress and emphasize a bit more next year. We already engage with tech companies through our B-Tech project, which was mentioned, which focuses on the role of tech companies and how they can operationalize the UN Guiding Principles on Business and Human Rights. So within this B-Tech project, our office has been able to discuss how to foster responsible business conduct when it comes to AI and to provide practical guidance to tech companies, operationalizing the UNGPs, for example. And recently we published a foundational paper on generative AI and also a taxonomy of human rights risks connected to generative AI. So these are some examples of our more recent outputs providing guidance to tech companies, and I’m happy to share a link where you can access these resources and the work that we have done so far. But the challenge now is really looking at how to engage meaningfully with companies on technical standards and human rights, because tech companies are a stakeholder group that is really crucial to stay engaged with: they take part in standard setting processes, and sometimes, in various fora, they’re the ones driving some of this tech standards development. And so human rights due diligence is a key element that should be highlighted: human rights due diligence both for companies and for standard setting organizations. And in order for us to raise awareness vis-a-vis tech companies and also standards development organizations, we will be conducting some consultations next year to better understand their views and how to engage with them better, better meaning more effectively. And this really, again, brings us to the importance of participation and multi-stakeholderism. It’s already been noted, but the challenge really is in getting non-technical communities involved. When I say that, it’s civil society organizations, it’s academia, but also noting the disparity in participation, for various reasons, between the global north and the global majority. And also, as I heard from the floor as well as from other panelists, we need to have more SMEs. I think Marek mentioned this: the importance of not just engaging with big tech, but looking at the ecosystem, right? We also need to be engaging with young entrepreneurs, who are really at the frontiers of tech development, and also small and medium-sized enterprises.
So, I hope this is clear: there is enormous benefit to multi-stakeholder participation. Gbenga mentioned this, but, you know, it’s really important to bring in a perspective, a side of the conversation, that maybe no one in the room has heard before. And that’s really the value-add of multi-stakeholder participation in developing technologies and standards that serve everybody. In this sense, I would like to emphasize that the IGF, the Internet Governance Forum, has been an important venue for discussing, in a multi-stakeholder setting, human rights and technical standards, but also a host of other issues in this space. That said, I think we’re now arriving at a critical juncture, with the steps to implementing the Global Digital Compact and the review of the multi-stakeholder model at WSIS+20 next year. So let me finish by saying that in the GDC, if you look at line 58 in the adopted text, it places great importance on human rights in relation to tech standards; it explicitly refers to AI standards that must respect human rights. So we must also emphasize the importance of this forum to continue to discuss how to develop and deploy technical standards that can be human rights respecting, and that will foster more public trust in different technologies. So thank you so much.

Dhevy Sivaprakasam: And that was a really good summary of the whole discussion. At the end of the day, we are here at the IGF, and we hope to continue this conversation at the next IGF in Norway. It’s been great to have all of you online. I would like to call upon Olivier for any final words. I’ve been told that we have two minutes left, so we might get kicked out of the room. But Olivier, please do come in.

Olivier Alais: Thanks a lot. It was a very interesting conversation, and thanks a lot to all of you for being here. Of course, technical standards have to be human rights enablers. We need to be more collaborative and to make this a multi-stakeholder effort. And, as we said, we also need to come up with actionable solutions for the technical community, in line with the Global Digital Compact. So thanks a lot for being here, and I hope we will all keep working together to move standardization and human rights forward.

Dhevy Sivaprakasam: Yes. Marek, Yoo Jin, Florian, Gbenga, and Shirani, thank you very much. Thank you all, and thanks a lot to Jean-Claude for being our reporter. Bye, everyone.

Dhevy Sivaprakasam

Speech speed: 151 words per minute | Speech length: 1202 words | Speech time: 476 seconds

Need for multi-stakeholder approach to include diverse perspectives

Explanation: Sivaprakasam emphasizes the importance of involving various stakeholders in the process of developing technical standards. This approach aims to ensure that different perspectives, especially from marginalized communities, are considered in the standard-setting process.

Major Discussion Point 1: Incorporating Human Rights into Technical Standards

Agreed with: Marek Janovský, Yoo Jin Kim, Gbenga Sesan
Agreed on: Importance of integrating human rights considerations into technical standards

Marek Janovský

Speech speed: 140 words per minute | Speech length: 1260 words | Speech time: 539 seconds

Importance of breaking silos between human rights experts and technical bodies

Explanation: Marek Janovský stresses the need to foster communication and collaboration between human rights experts and technical bodies. This is crucial for ensuring that human rights considerations are effectively integrated into technical standards.

Major Discussion Point 1: Incorporating Human Rights into Technical Standards

Agreed with: Dhevy Sivaprakasam, Yoo Jin Kim, Gbenga Sesan
Agreed on: Importance of integrating human rights considerations into technical standards

Government’s role in fostering dialogue and outreach

Explanation: Marek Janovský highlights the role of governments in promoting dialogue and outreach between different stakeholders. He emphasizes the importance of cross-regional dialogue and engaging with various communities, including young people and emerging tech companies.

Evidence: The speaker mentions the Czech Republic’s efforts in fostering cross-regional dialogue and the importance of talking with African countries about joint challenges in the digital world.

Major Discussion Point 3: Role of Different Stakeholders in Standards Development

Agreed with: Dhevy Sivaprakasam, Gbenga Sesan, Olivier Alais
Agreed on: Need for multi-stakeholder approach in technical standards development

Shirani De Clercq

Speech speed: 121 words per minute | Speech length: 958 words | Speech time: 471 seconds

Challenges of bias and misrepresentation in AI systems

Explanation: De Clercq discusses the issues of bias and misrepresentation in AI systems, particularly in the context of Saudi Arabia. She highlights how AI models can perpetuate stereotypes and potentially erase or misrepresent minority groups in the digital world.

Evidence: The speaker gives an example of generative AI models sometimes depicting a Saudi woman wearing a men’s thobe when asked to represent a traditional Saudi family.

Major Discussion Point 1: Incorporating Human Rights into Technical Standards

Tension between economic objectives and ethical principles

Explanation: De Clercq highlights the dilemma between pursuing financial revenue and adhering to ethical and social values in AI development. She discusses the challenge of balancing economic goals with the implementation of AI ethics principles.

Evidence: The speaker mentions the Gaia Accelerator project, which has invested $160 million to fund 120 AI startups in Saudi Arabia, and the potential conflict with implementing AI ethics principles.

Major Discussion Point 2: Challenges in Implementing Human Rights-Based Standards

Differed with: Audience
Differed on: Balancing innovation and human rights considerations

Yoo Jin Kim

Speech speed: 140 words per minute | Speech length: 1469 words | Speech time: 626 seconds

Critical risks to human rights from emerging technologies

Explanation: Kim outlines various risks to human rights posed by emerging technologies, particularly AI. She emphasizes issues such as privacy violations, the potential for mass surveillance, and the lack of transparency in AI development and deployment.

Evidence: The speaker references UN reports on internet shutdowns, the right to privacy in the digital age, and the use of AI in border management.

Major Discussion Point 1: Incorporating Human Rights into Technical Standards

Agreed with: Dhevy Sivaprakasam, Marek Janovský, Gbenga Sesan
Agreed on: Importance of integrating human rights considerations into technical standards

Importance of human rights due diligence by companies and standards bodies

Explanation: Kim stresses the need for both companies and standards development organizations to conduct human rights due diligence. She highlights the importance of engaging tech companies in discussions about technical standards and human rights.

Evidence: The speaker mentions the B-Tech project, which focuses on helping tech companies operationalize the UN Guiding Principles on Business and Human Rights.

Major Discussion Point 2: Challenges in Implementing Human Rights-Based Standards

Gbenga Sesan

Speech speed: 167 words per minute | Speech length: 861 words | Speech time: 307 seconds

Civil society’s role in bringing user experiences to standards discussions

Explanation: Sesan emphasizes the unique opportunity civil society has to bring users’ lived experiences to technical standards discussions. He argues that this perspective is crucial for ensuring that standards are people-focused and respect human rights.

Evidence: The speaker uses the example of internet fragmentation discussions, highlighting how considering the user perspective changed the conversation.

Major Discussion Point 1: Incorporating Human Rights into Technical Standards

Agreed with: Dhevy Sivaprakasam, Marek Janovský, Yoo Jin Kim
Agreed on: Importance of integrating human rights considerations into technical standards

Civil society’s need to prioritize and participate in standards discussions

Explanation: Sesan argues that civil society organizations need to prioritize participation in technical standards discussions. He emphasizes the importance of bringing knowledge to the table and presenting perspectives that may not have been considered.

Major Discussion Point 3: Role of Different Stakeholders in Standards Development

Agreed with: Dhevy Sivaprakasam, Marek Janovský, Olivier Alais
Agreed on: Need for multi-stakeholder approach in technical standards development

Florian Ostmann

Speech speed: 158 words per minute | Speech length: 1500 words | Speech time: 567 seconds

Difficulties for non-technical stakeholders to engage in standards processes

Explanation: Ostmann highlights the challenges faced by non-technical stakeholders, particularly civil society organizations, in engaging with standards development processes. He points out issues such as the complexity of the space, lack of familiarity with processes, and resource constraints.

Evidence: The speaker mentions the difficulty of understanding the wide range of standards development organizations and their different rules for participation.

Major Discussion Point 1: Incorporating Human Rights into Technical Standards

Need for lowering barriers to participation in standards development

Explanation: Ostmann emphasizes the importance of finding ways to lower the barriers to participation in standards development processes. He suggests alternative methods of contributing that do not require the same level of time commitment as joining a formal committee.

Evidence: The speaker mentions experiments with workshops that give interested organizations the opportunity to provide targeted input on certain questions without joining a committee.

Major Discussion Point 2: Challenges in Implementing Human Rights-Based Standards

Audience

Speech speed: 157 words per minute | Speech length: 638 words | Speech time: 242 seconds

Balancing innovation pace with human rights considerations

Explanation: An audience member raises the question of how to create scalable models for integrating human rights into technical standards without slowing down innovation. This highlights the perceived tension between rapid technological advancement and ensuring human rights protections.

Major Discussion Point 2: Challenges in Implementing Human Rights-Based Standards

Differed with: Shirani De Clercq
Differed on: Balancing innovation and human rights considerations

Financial and resource constraints for inclusive participation

Explanation: An audience member points out the practical challenges of participating in standards-making processes, particularly for small and medium enterprises and civil society organizations. They highlight the significant time and resource investment required, which often goes unpaid.

Evidence: The speaker mentions their experience with a three-year Indo-Australian project on blockchain standards, highlighting the resource-intensive nature of the process.

Major Discussion Point 2: Challenges in Implementing Human Rights-Based Standards

Olivier Alais

Speech speed: 142 words per minute | Speech length: 378 words | Speech time: 159 seconds

Need for collaboration between public, private and civil society sectors

Explanation: Alais emphasizes the importance of collaborative efforts between different sectors in developing human rights-based technical standards. He stresses the need for a multi-stakeholder approach to create actionable solutions for the technical community.

Major Discussion Point 3: Role of Different Stakeholders in Standards Development

Agreed with: Dhevy Sivaprakasam, Marek Janovský, Gbenga Sesan
Agreed on: Need for multi-stakeholder approach in technical standards development

Agreements

Agreement Points

Need for multi-stakeholder approach in technical standards development

Speakers: Dhevy Sivaprakasam, Marek Janovský, Gbenga Sesan, Olivier Alais

Arguments: Need for multi-stakeholder approach to include diverse perspectives; Government’s role in fostering dialogue and outreach; Civil society’s need to prioritize and participate in standards discussions; Need for collaboration between public, private and civil society sectors

Summary: Speakers agreed on the importance of involving various stakeholders, including governments, civil society, and the private sector, in the process of developing technical standards to ensure diverse perspectives are considered.

Importance of integrating human rights considerations into technical standards

Speakers: Dhevy Sivaprakasam, Marek Janovský, Yoo Jin Kim, Gbenga Sesan

Arguments: Need for multi-stakeholder approach to include diverse perspectives; Importance of breaking silos between human rights experts and technical bodies; Critical risks to human rights from emerging technologies; Civil society’s role in bringing user experiences to standards discussions

Summary: Speakers emphasized the need to incorporate human rights considerations into technical standards development, highlighting the importance of collaboration between human rights experts and technical bodies.

Similar Viewpoints

Both speakers highlighted the potential risks and challenges posed by emerging technologies, particularly AI, to human rights and fair representation.

Speakers: Shirani De Clercq, Yoo Jin Kim

Arguments: Challenges of bias and misrepresentation in AI systems; Critical risks to human rights from emerging technologies

Both Ostmann and the audience member emphasized the practical challenges faced by non-technical stakeholders, particularly civil society and small organizations, in participating in standards development processes due to resource constraints and complexity.

Speakers: Florian Ostmann, Audience

Arguments: Difficulties for non-technical stakeholders to engage in standards processes; Need for lowering barriers to participation in standards development; Financial and resource constraints for inclusive participation

Unexpected Consensus

Balancing innovation with human rights considerations

Speakers: Audience, Gbenga Sesan

Arguments: Balancing innovation pace with human rights considerations; Civil society’s role in bringing user experiences to standards discussions

While the audience member raised concerns that integrating human rights considerations might slow innovation, Sesan unexpectedly argued that considering human rights and user experiences can actually promote innovation by improving services and experiences.

Overall Assessment

Summary: The main areas of agreement included the need for a multi-stakeholder approach in technical standards development, the importance of integrating human rights considerations into these standards, and the challenges faced by non-technical stakeholders in participating in the process.

Consensus level: There was a moderate to high level of consensus among the speakers on the importance of incorporating human rights into technical standards and the need for diverse stakeholder participation. This consensus implies a growing recognition of the interconnectedness between technical standards and human rights, suggesting a potential shift towards more inclusive and rights-based approaches in technology development and governance.

Differences

Different Viewpoints

Balancing innovation and human rights considerations

Speakers: Shirani De Clercq, Audience

Arguments: Tension between economic objectives and ethical principles; Balancing innovation pace with human rights considerations

De Clercq highlighted the dilemma between pursuing financial revenue and adhering to ethical principles in AI development, while an audience member questioned how to integrate human rights into technical standards without slowing innovation. This reflects a tension between rapid technological advancement and ensuring human rights protections.

Unexpected Differences

Overall Assessment

Summary: The main areas of disagreement centered around balancing innovation with human rights considerations and the practical challenges of involving diverse stakeholders in technical standards development.

Difference level: The level of disagreement among speakers was relatively low. Most speakers agreed on the importance of incorporating human rights into technical standards and involving diverse stakeholders. The differences were mainly in the approaches to achieving these goals and the challenges faced in implementation. This suggests a general consensus on the overall direction but highlights the complexity of practically integrating human rights considerations into technical standards development.

Partial Agreements

All speakers agreed on the importance of involving diverse stakeholders in technical standards development, but they differed in their approaches: Janovský emphasized breaking silos, Sesan stressed civil society’s need to prioritize participation, and Ostmann highlighted the difficulties non-technical stakeholders face in engaging with these processes.

Speakers: Marek Janovský, Gbenga Sesan, Florian Ostmann

Arguments: Importance of breaking silos between human rights experts and technical bodies; Civil society’s need to prioritize and participate in standards discussions; Difficulties for non-technical stakeholders to engage in standards processes


Takeaways

Key Takeaways

There is a critical need to incorporate human rights considerations into technical standards for emerging technologies

A multi-stakeholder approach involving diverse perspectives is essential for developing inclusive and rights-respecting standards

Significant challenges exist in implementing human rights-based standards, including balancing innovation with rights protection and resource constraints for inclusive participation

Different stakeholders (government, private sector, civil society) have important roles to play in standards development

Breaking silos between human rights experts and technical bodies is necessary for effective collaboration

Resolutions and Action Items

Explore ways to lower barriers for participation in standards development processes, especially for civil society and SMEs

Conduct consultations with tech companies and standards organizations to better understand their views on human rights integration

Continue discussions on human rights and technical standards at future Internet Governance Forum meetings

Unresolved Issues

How to create scalable models for integrating human rights into technical standards without slowing innovation

How to address the financial and resource constraints that limit participation of civil society and SMEs in standards processes

How to effectively balance economic objectives with ethical principles in standards development

Suggested Compromises

Adopt an agile, use-case driven methodology for implementing ethical frameworks in standards, allowing for iterative improvements

Create spaces for targeted, one-off input from civil society organizations on specific standards projects, rather than requiring full committee membership

Thought Provoking Comments

“Technology like AI, Internet of Things, Metaverse, offer incredible opportunities, but they also create challenges. And without clear guidance, technology can unintentionally harm the very rights it aims to protect.”

Speaker: Olivier Alais

Reason: This comment succinctly captures the core tension at the heart of the discussion: the promise and peril of new technologies in relation to human rights.

Impact: It framed the entire conversation that followed, establishing the need to proactively consider human rights in technical standards.

“Have you ever heard of the principle of linguistic relativity? It says that the way people think of the world is influenced directly by the language they use to talk about it. While 7% of internet users are native Arabic speakers, only 0.8% of internet content is in Arabic.”

Speaker: Shirani De Clercq

Reason: This comment introduced a concrete example of how technical standards can have real-world impacts on human rights and cultural representation.

Impact: It shifted the discussion from abstract principles to specific challenges, prompting others to consider more tangible examples of bias and exclusion in technology.

“I think one important thing to note is the lack of transparency, which has really been the undertone in the development, design, and use of AI. This lack of transparency, and thus the lack of accountability, has really led to some harm and increased risks to human rights.”

Speaker: Yoo Jin Kim

Reason: This comment highlighted a critical issue in AI development that directly impacts human rights considerations.

Impact: It deepened the conversation by introducing the concepts of transparency and accountability, which became recurring themes in subsequent comments.

“I think at a high level, sort of two points. The first one is that we are dealing with two different cultures. You know, think about the standardization community and the human rights community. There are different conceptual frameworks at play, different languages and, you know, different cultures of collaborating.”

Speaker: Florian Ostmann

Reason: This insight gets to the heart of why integrating human rights into technical standards is so challenging.

Impact: It reframed the discussion from being solely about technical challenges to also considering cultural and communication barriers between different expert communities.

“Innovation is about improving experiences, right? Improving experiences, making services better, and doing things in different ways. And at the center of innovation are the people who interact with these experiences. And the whole sense of human rights is basically saying we want to make sure that the rights of those who engage with these platforms, who interact with these experiences, are respected.”

Speaker: Gbenga Sesan

Reason: This comment reframes the potential conflict between innovation and human rights as a false dichotomy, arguing that respecting human rights is central to true innovation.

Impact: It challenged the assumption that considering human rights might slow innovation, offering a new perspective on how the two can be mutually reinforcing.

Overall Assessment

These key comments shaped the discussion by moving it from abstract principles to concrete challenges, highlighting the complexity of integrating human rights into technical standards. They introduced important concepts like transparency, accountability, and cultural differences between expert communities. The discussion evolved to consider not just technical solutions, but also cultural, linguistic, and collaborative challenges in ensuring human rights are respected in technological development. Overall, the comments deepened the conversation and broadened its scope, emphasizing the need for a multi-stakeholder, culturally sensitive approach to addressing these challenges.

Follow-up Questions

How can the IGF help to advance the inclusion of human rights in technical standards?

Speaker: Marek Janovský

Explanation: It is important to explore how the multi-stakeholder IGF forum can contribute to addressing this challenge beyond diplomatic efforts alone.

How can we improve users’ capacities and skills to engage responsibly with emerging technologies that directly interface with brain activities and thoughts?

Speaker: Marek Janovský

Explanation: This is crucial to consider as technologies such as neuroscience applications become more prevalent and potentially affect human cognition and decision-making.

How can we create scalable models for integrating human rights into technical standards without slowing the pace of innovation?

Speaker: Audience member (Mizna Tregi)

Explanation: This addresses the challenge of balancing human rights considerations with technological progress and innovation.

How can we involve the private sector in implementing human rights-aligned technical standards?

Speaker: Audience member (Mizna Tregi)

Explanation: This is important to ensure buy-in and practical implementation from the key stakeholders developing and deploying technologies.

How can standards-making conversations be changed to allow for more inclusive participation without requiring extensive time and resource investments?

Speaker: Audience member (unnamed)

Explanation: This addresses the practical challenges faced by civil society and small and medium enterprises in contributing to standards development processes.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.