Main Session on Cybersecurity, Trust & Safety Online | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Moderator 1 – Olga Cavalli

The analysis provides insights into various aspects of cybersecurity and cybercrime, including the relationship between global initiatives and national regulations and activities. It examines the need to enhance the link between global and national policies in this area. The analysis also highlights the role of the Internet Governance Forum (IGF) in facilitating multi-stakeholder discussions on cybersecurity initiatives. It emphasizes the importance of IGF in providing a platform for ministers, industry experts, and civil society representatives to engage in discussions and exchange ideas on cybersecurity.

Another proposed solution discussed in the analysis is the creation of a database of cybersecurity experts for knowledge sharing. Efforts have been made to develop such databases within the Organization of American States and through the United Nations’ open-ended working group on cybersecurity. The aim is to facilitate information exchange and collaboration among different countries and regions.

The analysis also mentions the agreement on the need for a pool of knowledge to aid in dealing with cybercrime. Olga Cavalli, a contributor in the analysis, supports the idea of having a knowledge pool to enhance capabilities in tackling cybercrime.

Furthermore, the importance of dialogue and cooperation in leveraging technology is emphasized. Active participation, interaction, learning, and sharing information are highlighted as key elements in harnessing the potential of technology. The analysis advocates for trusting in human skills and potential in the context of technology.

Overall, the analysis reveals the interconnectedness of global initiatives, national regulations, and activities in the field of cybersecurity and cybercrime. It underscores the significance of multistakeholder dialogues, knowledge sharing, and cooperation in addressing these issues effectively. The positive sentiment expressed throughout the analysis signifies a collective belief in the potential of these approaches in building more secure and resilient digital environments.

Katitza Rodriguez

The discussions on cybercrime and cybersecurity highlight the importance of finding a balance between security and the protection of privacy and human rights. However, the current UN Cybercrime Treaty is viewed as counterproductive and potentially undermines privacy rights. It focuses primarily on enhancing law enforcement powers, with minimal attention given to strengthening systems and networks at a technical level. Additionally, certain provisions within the treaty could criminalise the work of independent security researchers and allow governments to force engineers to compromise security measures, potentially violating privacy.

To address the challenges of cybersecurity, it is suggested that systems and networks need to be strengthened at the technical level. This would require better incentives to encourage the development of more secure software, devices, and networks. Legal protections for security researchers are also necessary to ensure their work in identifying vulnerabilities and improving security is not hindered. Furthermore, there is a need for enhanced education and information sharing about threats, vulnerabilities, and solutions among users and developers.

One of the major concerns with the current UN Cybercrime Treaty is the lack of adequate safeguards and disparities in privacy protections across different countries. The treaty places mandatory powers for criminal investigations, while the safeguards remain optional. This opens the door for potential abuses and compromises privacy rights. The variation in privacy protection across countries also raises concerns about meeting international human rights standards, as international cooperation in dealing with cybercrime may depend on the privacy protection provided by the assisting country’s national law.

Another argument presented is that there should be a minimum baseline for privacy protections in the UN Cybercrime Treaty. It is suggested that treaty-specific safeguards should be put in place to protect and establish a baseline for international cooperation. Currently, there is a significant disparity in the level of privacy protections and human rights from one country to another.

Companies also play a crucial role in upholding human rights. It is argued that companies should have grounds to refuse cooperation if a request is disproportional or violates human rights law. The broad scope of the treaty may lead to potential abuses, and companies need the ability to deny cooperation on grounds of human rights violations.

Overall, the discussions on cybercrime and cybersecurity underscore the need to ensure that human rights and privacy are not compromised in the pursuit of security. The current UN Cybercrime Treaty lacks adequate safeguards and may potentially violate privacy rights. Strengthening systems and networks at a technical level, providing legal protections for security researchers, and enhancing education and information sharing are seen as positive steps. A minimum baseline for privacy protections is necessary, and companies should have the ability to refuse cooperation if it violates human rights. Upholding human rights and promoting international cooperation are essential in effectively addressing cybercrime.

Ernesto Rodriguez Hernandez

There is increasing concern regarding the development of cyber offensive capabilities and the legitimacy of using force in response to cyber attacks, particularly for certain states. These states have included offensive cyber weapons as part of their national security strategies and doctrines, considering the use of force as a legitimate response to cyber attacks. This trend raises questions about the potential implications and consequences of such actions in international relations and security (keyword: cyber offensive capabilities, legitimacy of using force, national security strategies, international relations and security).

The misuse of Information and Communication Technologies (ICTs) and media platforms has emerged as a significant threat to nations. Individuals, organizations, and states are covertly and unlawfully exploiting computer systems to carry out attacks. These activities can potentially trigger international conflicts. Moreover, social networks and electronic broadcasts are being used as tools for interventionism, promoting hate speech, incitement to violence, destabilization, and the dissemination of false information and fake news. This misuse of ICTs and media platforms not only endangers peaceful relations but also undermines principles of justice and strong institutions (keywords: misuse of ICTs, media platforms, international conflicts, hate speech, incitement to violence, destabilization, false information, fake news).

To address these challenges, it is suggested that countries commit to using ICTs for peaceful purposes, fostering cooperation and development. A global commitment should be established, encouraging nations to utilize information and communication technologies for the betterment of society. Additionally, implementing technical assistance mechanisms to exchange good practices can enhance international cooperation in this domain (keywords: peaceful use of ICTs, cooperation, development, technical assistance mechanisms, international cooperation).

In addition to commitments and cooperation, the development of a legally binding international instrument is necessary to bridge the gaps in cybersecurity. This instrument should complement existing international law and effectively address the growing challenges and threats in cyberspace. International cooperation is crucial in tackling these issues and ensuring stability and security in the digital realm (keywords: legally binding international instrument, cybersecurity, international law, international cooperation, stability, security).

It is also acknowledged that existing international laws need adjustments to encompass cyberspace adequately. Given the highly dynamic nature of cyberspace, traditional laws may not cover emerging issues. Thus, a binding regulatory framework is needed to establish clear standards and criteria for cyber activities. Additionally, violations of treaties and conventions should be effectively addressed to maintain the integrity and security of the digital world (keywords: existing international laws, dynamic nature of cyberspace, binding regulatory framework, standards, criteria, violations of treaties and conventions).

The complexities involved in maintaining good cyber practices and the violations of treaties and conventions highlight the need for comprehensive solutions. It is important to address the challenges faced by both developing and developed countries in the digital arena. Developing countries, in particular, face difficulties due to the digital divide, leading to disparities in accessing information and financing for development. Bridging this divide and promoting equal opportunities for all nations is crucial for achieving sustainable development and reducing inequalities (keywords: good cyber practices, violations of treaties and conventions, digital divide, equal opportunities, sustainable development, reducing inequalities).

The importance of discussions and consensus-building regarding cyberspace-related topics is crucial. Continued dialogue among nations and stakeholders is necessary to navigate the complex landscape of cyberspace and develop shared norms and principles. Such discussions can help establish conventions that promote the development of all peoples and safeguard global interests (keywords: discussions, consensus-building, cyberspace-related topics, shared norms and principles, conventions, development of all peoples, global interests).

Lastly, it is important to recognize the time constraints involved in addressing all the issues in the world of cyberspace. The rapidly evolving nature of technology and the increasing threats in cyberspace pose significant challenges. While it is essential to strive for comprehensive solutions, it is also prudent to acknowledge the limitations and prioritize actions that can have the greatest impact in mitigating risks and promoting a secure and peaceful cyberspace (keywords: time constraints, rapidly evolving nature of technology, comprehensive solutions, mitigating risks, secure and peaceful cyberspace).

In conclusion, the analysis highlights the pressing need for regulations and international cooperation to address the growing challenges in cyberspace. This includes concerns about the development of cyber offensive capabilities, the misuse of ICTs and media platforms, and the gaps in current cybersecurity measures. Adopting a binding regulatory framework, adjusting existing international laws, bridging the digital divide, and fostering dialogue and consensus-building are vital steps towards creating a safer and more inclusive digital environment. (keyword: regulations, international cooperation, cyber offensive capabilities, misuse of ICTs, media platforms, cybersecurity measures, digital divide, safer and more inclusive digital environment).

Elizaveta Belyakova

The protection of children from cyber security threats and cyber crime is highlighted as a crucial matter in the collection of statements. The creation of the Russian Alliance for the Protection of Children in the Digital Environment demonstrates the commitment to address this issue. Furthermore, the issue gained recognition at the 2023 BRICS Summit, where participating countries pledged to take action to secure a safe digital environment for children. Major IT companies are urged to commit to initiatives aimed at mobilizing the public to protect children on the Internet. The Russian IT society has already taken voluntary steps in this direction. Additionally, the Digital Ethics of Childhood Charter was established to establish ethical principles regarding children’s online safety. A major concern expressed in the statements is the threat of sexual exploitation and abuse of children on the Internet. Disturbingly, a UN statistic reveals that 80% of children in 25 countries reportedly feel at risk of sexual abuse or exploitation online. The We Protect Global Alliance goes as far as to describe the easy access to child sexual abuse material online as a “tsunami.” The need for data exchange pertaining to the localization of harmful and dangerous material, as well as information on new criminal methods and attacker details, is stressed. In 2022, the removal of 1,126 units of content relied on a hash database with the involvement of major players in the UK IT market. Furthermore, joint efforts and open discussions are considered essential in protecting children from cyber threats. Elisaveta, a prominent figure, emphasizes the significance of collective actions in safeguarding children against such threats. In conclusion, the protection of children from cyber security threats and cyber crime is a pressing issue. The establishment of organizations such as the Russian Alliance for the Protection of Children in the Digital Environment and voluntary commitments by major IT companies demonstrate progress towards securing a safe digital environment for children. However, the prevalence of sexual exploitation and abuse online remains a distressing concern, necessitating further action and cooperation.

Folake Olagunju

The analysis conducted on cybersecurity in the West African region reveals several key findings. One major issue is the lack of national and regional coordination to effectively combat cyber threats. This coordination is crucial for developing and implementing comprehensive strategies that address the complex challenges posed by cybercrime. Due to resource limitations on technical, financial, and human fronts, the West African region is ill-equipped to tackle cybersecurity effectively.

Another important aspect highlighted in the analysis is the insufficient allocation of resources to the cybersecurity sector. Many member states do not have dedicated budget line items for cybersecurity, which hampers their ability to invest in the necessary tools and technologies needed to protect against cyber threats. Additionally, there is a shortage of qualified personnel in the region, further exacerbating the problem. Furthermore, the weak critical infrastructure in West Africa makes it particularly susceptible to cyber attacks, with frequent power outages and telecommunication disruptions already being commonplace.

The COVID-19 pandemic has brought necessary attention to the importance of cybersecurity and digitalization. As member states were forced to adopt digital solutions to meet the daily needs of their citizens, the conversation around digitalization security increased. This shift has highlighted the need for robust cybersecurity measures to safeguard critical systems and protect personal data.

The analysis also emphasizes the need for increased cooperation, information sharing, and involvement of the private sector in cybersecurity efforts. Peer-to-peer cooperation between member states is crucial for effectively combating cyber threats. Elisabetta, a prominent figure, stressed the importance of data exchange for child protection and overall cybersecurity. Furthermore, the analysis suggests that incentives for workers and partnerships with the private sector can significantly improve the cybersecurity landscape in the region.

The approach taken by the Global Forum on Cyber Expertise (GFC) in collaborating with regional economic communities, such as the Economic Community of West African States (ECOWAS), is commendable. The GFC aims to enhance capacity building in the region through such partnerships, which aligns with the goals and needs of the West African countries.

While global best practices in cybersecurity are valuable, the analysis highlights the need for local adaptations. Cybersecurity practices from other regions, such as the US, may not be directly applicable in West Africa without considering the local context and challenges. Therefore, promoting a practical approach to cybersecurity and encouraging local adaptation of global strategies are essential.

To address the issues surrounding cybersecurity in the region, a joint platform for advancing cybersecurity in West Africa was launched under the G7 German presidency. This program aims to establish an Information Sharing and Analysis Center (ISAC). The establishment of such a center is considered an excellent approach to enhancing information sharing and cooperation among stakeholders.

In conclusion, the comprehensive analysis of cybersecurity in West Africa underscores the urgent need for action. Promoting a cybersecurity culture, improving communication, coordination, and cooperation at both national and regional levels, and prioritizing resource allocation are vital steps to effectively combat cyber threats in the region. Additionally, the analysis highlights the significance of local adaptations, capacity building, and the involvement of the private sector in addressing cybersecurity challenges and protecting critical infrastructure.

Christopher Painter

The analysis explores various arguments surrounding cybercrime negotiations, stakeholder involvement, capacity building, cybersecurity, emerging technologies, and the digital economy.

One of the key points highlighted is the need for expertise in cybercrime negotiations. The analysis suggests that negotiations often lack input from experts who understand the real challenges on the ground. It further notes that the current model of negotiations, which is primarily built for countries, tends to involve a lot of geopolitical issues. This argument highlights the importance of including expert knowledge to develop effective policies to combat cybercrime.

Another argument put forth is the crucial involvement of stakeholders outside of government in cybercrime discussions. The analysis emphasizes that there is a wealth of outside expertise and insights available, and bringing in this expertise is essential for creating effective policies. This argument recognizes the need to involve various stakeholders such as industry experts, civil society organizations, and academia to ensure a more comprehensive approach to tackling cybercrime.

The analysis also addresses the growing international issue of cybercrime. It notes that cybercrime, particularly ransomware attacks, has significantly affected people’s daily lives, making it a pocketbook and backyard issue for many individuals. Additionally, cybercrime has become a political priority for countries around the world. These observations underscore the urgency of addressing cybercrime as a pressing global challenge.

In terms of cybersecurity, the analysis highlights the need for sustained attention to this issue. It acknowledges that cybersecurity has matured as a policy concern but identifies the challenge of bridging the gap between the political level and the practitioner level. The analysis suggests that ransomware attacks have helped raise the profile of cybersecurity and the importance of protecting against new and existing threats.

Furthermore, the analysis touches on the issue of policymakers feeling intimidated by technical issues. It points out that policymakers often struggle to understand complex technical concepts, which can hinder effective policymaking. However, the analysis argues that technical concepts can be understood by non-technicians if they are adequately explained. This highlights the importance of effective communication and education to bridge the gap between technical experts and policymakers.

Capacity building is also identified as a key aspect in the analysis. It highlights that countries, especially developing ones, require assistance in dealing with cybersecurity threats, building national strategies, establishing emergency response teams, and applying international norms and laws. The analysis praises the Global Forum on Cyber Expertise for coordinating capacity-building efforts worldwide.

Notably, the analysis observes the importance of not losing sight of the larger cybersecurity issue amidst the focus on emerging technologies. While emerging technologies are part of the problem, it is crucial to maintain a holistic approach to cybersecurity.

In conclusion, the analysis provides insights into various aspects of cybercrime, cybersecurity, stakeholder involvement, and capacity building. It underscores the need for expertise in cybercrime negotiations and involving stakeholders outside of government. The analysis highlights the urgency of addressing cybercrime as a global issue and the necessity of sustained attention to cybersecurity. It also emphasizes the importance of adequately explaining technical concepts, capacity building efforts, and maintaining a balance between addressing emerging technologies and the larger cybersecurity challenges.

Audience

The analysis encompasses a range of important discussions relating to cybersecurity, digitalisation, international law, knowledge-sharing, and solar technology. One notable point emphasised is the need to effectively utilise emerging technologies to enhance cybersecurity measures. This argument underscores the importance of staying ahead of cyber threats by employing advanced technologies. Policymakers are urged to prioritise cybersecurity and treat it seriously.

The significance of cybersecurity is further underscored by research findings which indicate a potential increase in a country’s GDP per capita with the maturity and implementation of robust cybersecurity measures. This evidence demonstrates the economic benefits that can be gained through investing in cybersecurity.

The debate surrounding the need for a legally binding instrument to govern responsible state behaviour in cyberspace is another key aspect addressed. While international law is already deemed applicable in cyberspace, there is a call for a specific instrument to address the complexities and unique challenges of cybersecurity. The involvement of stakeholders such as the technical community, civil society, and international law firms is viewed as crucial in shaping this instrument.

The analysis also highlights the importance of knowledge-sharing in the context of cybersecurity. It suggests the creation of a worldwide knowledge pool accessible to individuals across different regions, including Africa and South West Africa. The inclusion of non-technical aspects, such as social, psychological, and gender considerations, is emphasised as essential in developing comprehensive cybersecurity strategies.

In terms of infrastructure development, the potential of using solar technology to address the electricity gap in West Africa’s telecommunications sector is explored. This could potentially provide a sustainable solution to power critical telecommunications infrastructure.

Another important topic in the analysis is the need for specific measures to ensure the safety and security of children online. It is argued that a holistic approach to cybersecurity may not effectively protect children, and a more targeted and comprehensive approach is required. This includes the building of children’s capacity through internet governance and digital literacy initiatives.

The importance of cross-border collaboration in addressing cybersecurity challenges is also highlighted. Given the transnational nature of cyber threats, regional and international cooperation is deemed crucial to effectively counter cybercrime. The analysis suggests the involvement of organisations like Interpol for effective collaboration.

Lastly, the analysis suggests that instead of establishing a new convention or treaty on cybercrime, it may be more practical and efficient to focus on improving or expanding the role of existing organisations such as Interpol. This approach could lead to more effective outcomes in combating cybercrime.

In conclusion, the analysis provides a comprehensive overview of various topics related to cybersecurity and related fields. It emphasises the need for the effective use of emerging technologies, the importance of policymakers’ attention to cybersecurity, the role of international law in cyberspace, knowledge-sharing, and the potential of solar technology in telecommunications infrastructure. The analysis also highlights the importance of including non-technical aspects in cybersecurity, protecting children online, and promoting cross-border collaboration. Overall, the analysis offers valuable insights and recommendations for addressing the challenges and opportunities in the field of cybersecurity.

Moderator 2 – Tracy Hackshaw

The discussion on cybersecurity covered a range of important topics, including best practices, case studies, and practical issues. The participants expressed a strong desire to focus on real-world experiences and address core issues rather than engaging in theoretical debates. This emphasis on practicality and applicability underscores the need for actionable strategies and solutions in the field of cybersecurity.

In relation to the protection of children in the online environment, there was a call for the adoption of self-regulation mechanisms and social initiatives. The participants highlighted the importance of collaboration between various stakeholders in reducing online risks and threats. Tracy, for example, spoke about the role of Elisabetta from the Alliance for the Protection of Children in the Digital Environment, who could provide expert insights on this matter. It was also noted that the prevalence of child exploitation risk in the digital environment needs to be addressed, as evidenced by statistics shared by Elizaveta Belyakova. The rise of child trafficking in areas where the online world has a significant influence further highlights the urgency of this issue.

The discussions also shed light on the importance of equal communication and participation of all stakeholders in multilateral meetings regarding internet governance. In order to ensure effective participation, Tracy highlighted the role of the Internet Governance Forum (IGF), where all participating stakeholders can freely exchange ideas. It was acknowledged that language barriers may hinder effective communication, and suggestions were made to provide interpretation services to overcome this challenge.

The notion of collective action was deemed essential in addressing cybersecurity challenges, particularly in developing nations. The positive experiences shared by Falake and Tracy Hackshaw demonstrated how countries that excel in cybersecurity can assist those facing challenges in this domain. This highlights the potential benefits of sharing best practices and collaborating on cybersecurity efforts among nations.

Additionally, resource allocation towards cybersecurity in developing nations was acknowledged as a crucial aspect of addressing cybersecurity challenges. Participants, such as Falake, highlighted the issue of limited resources in West Africa. However, the discussions emphasized that collective action and collaboration can help overcome these limitations and contribute to the enhancement of cybersecurity measures.

In conclusion, the discussions on cybersecurity focused on practical aspects, such as best practices and case studies, rather than dwelling on theoretical debates. The protection of children online, equal participation in multilateral meetings, collective action among nations, and resource allocation in developing countries emerged as key areas of concern. Overall, the discussions provided valuable insights into the challenges and potential solutions in the field of cybersecurity.

Alissa Starzak

The analysis explores various perspectives on cybersecurity, including the importance of prevention and secure design in protecting against cybercrimes, as highlighted by Cloudflare. International collaboration and a human-centric approach are also emphasized as crucial in addressing cybercrimes, while the Internet Governance Forum (IGF) is seen as a proactive platform for promoting a human-centric approach. Additionally, the role of emerging technologies like Artificial Intelligence (AI) in cybersecurity is recognized. The analysis stresses the need for global collaboration, data sharing, and industry involvement to improve security. It also discusses legal considerations, treaty definitions, and the importance of empowering individuals through education and preventive measures. Overall, the summary captures the main points and keywords of the analysis accurately, while ensuring UK spelling and grammar are used.

Session transcript

Moderator 1 – Olga Cavalli:
Okay. Thank you, everyone, for being with us this early afternoon here in beautiful Kyoto, Japan, and welcome all the remote participants that are following from all over the world. For my colleagues in Latin America, it must be extremely late at night, but I’m sure that some of us, some friends of us are there. And thank you, distinguished colleagues and dear co-moderators here. My name is Olga Cavalli. I am the National Cyber Security Director of Argentina, and also I chair the South School of Internet Governance. Here with me is my dear friend, Tracy Hochschild. He is now the chef d’Entrepris de Post from the Universal Post Union Caribbean. He will be helping me in chairing this very interesting session about bridging the gap between international negotiations and on-ground experiences in cybercrime response. And I would like to briefly present my distinguished colleagues and friends who are with us this afternoon, Mr. Christopher Painter. He is Director of the GFC, Global Forum on Cyber Expertise, but also he is a very well-known expert in cybersecurity, cybercrime, and he was the first cyberambassador or cyberdiplomat in the world. And we have online Katitza Rodriguez from the Electronic Frontier Foundation. She is the Policy Director for Global Privacy. She is online. Are you there, Katitza? Yes, I’m here. Are you with us? Oh, Katitza, how are you? How are you, Olga? She is in New York, but she is Peruvian. She is from Latin America. Welcome, Katitza. And we have Alisa Staltzak. She is Cloud’s first Vice President from Global Head of Public Policy. Welcome, Alisa. And over to Tracy, who will present the other distinguished panelists.

Moderator 2 – Tracy Hackshaw:
Thank you very much, Olga. Let me first introduce, well, welcome everyone to today’s session. I’d like to introduce Elizaveta Belyakova. She’s the Chairperson of the Alliance for the Protection of Children in Digital Environments. Are you there, Elizaveta? Yes. Hello. She’s online. We also have a very special guest with us, Ernesto Rodriguez Hernandez, Deputy Minister of the Ministry of Communications of the Republic of Cuba. Welcome, Deputy Minister. And remote we have Fulake Olagunyu, Program Officer for Cybersecurity, Internet and e-Applications at the Economic Community of West African States, who is also remote. Fulake, are you there? I am indeed here. A very early good morning to everyone. And back over to Olga now to continue the program.

Moderator 1 – Olga Cavalli:
Thank you, Tracy, and thank you all distinguished panelists who are with us in person or remotely here in Kyoto and remotely. What’s the purpose of this session? There are several international activities and forums and United Nations efforts in relation with cybersecurity, cybercrime, like, for example, the Open-Ended Working Group and United Nations ADOC Committee. There are So, I would like to ask you a question. I would like to ask you a question about the global initiatives, the regional initiatives, but how those initiatives do impact or do relate with national activities and national regulations, activities of the CISRTs, activities of companies trying to build different best practices, and is this a real relationship? And if so, how do we find ways to enhance the global policy with local efforts? Is there a link, there is a link, or could we find ways to enhance a possible link in between these two kind of global and national and regional activities? So, for this, we have prepared some ideas and some questions for our โ€‘โ€‘ no, sorry, I have to give the floor to Tracy, who will do some other comments, general comments about the CISRTs, and then I will turn it over to you, Olga.

Moderator 2 – Tracy Hackshaw:
Thank you. โ‰ซ That’s all right, Olga, thank you. Just to add to what you’re saying, I think today we want to look at some best practices, as you were saying, maybe get some case studies out of the group to really understand or get to the real meat of the matter. What are the onโ€‘theโ€‘ground experiences that you have, whether with your clients, with your clients, with your clients, and how do you deal with that? And, you know, what are the best practices, preventative measures, and maybe some lessons learned for us. So I think that’s what we want to deal with today, and I hope that we can really get to the crux of the matter, get some questions from the audience as well, and let’s deal with the real issues. Let’s not get into theory, let’s get into the real base issues

Moderator 1 – Olga Cavalli:
that are in cybersecurity. So, Christopher, I know you have a lot of experience, and you have a lot of experience in cybersecurity, and you have a lot of experience in cybersecurity, and you have a lot of experience, Christopher, he has a vast experience, and now, if you talk to him, and he explains to you how many places he will travel in the next month, you will be amazed, because he’s going all over the world, trying to do a very important effort in relation with capacity building, related with cybersecurity. So, Christopher, how can international bodies, like the United Nations, openโ€‘ended work? So I think it’s very important that we have a good understanding of what is going on in the world. And I think it’s very important that the working group and the Ad Hoc Committee on Cybercrime, for example, and similar entities effectively incorporate the expertise, insights, and real-world experiences gained from cybersecurity practitioners.

Christopher Painter:
Thank you, Olga, and it’s great to be here, and I should say that you’re wondering what I’m wearing around my neck. I’m wearing a white top and I’m not wearing a black top. And I’m wearing a black top because I work in cybersecurity work. So that shows in a sense that cybersecurity is matured as more of a policy issue. But I wanted to express my gratitude to the government of Japan who I worked closely with when I was in the government in terms of cybersecurity and cybercrime issues. And I have had a long experience as a federal prosecutor doing that, and I’ve had a long experience in the U.S. as well as in the U.S. government, and I’ve also followed these negotiations, both the open-ended working group in New York, I’ve been at several of those meetings, and the ad hoc negotiation on cybercrime. And of course, those are not the only games in town. There are things that are outside the U.N. as well. I think that’s really an important question. How do you incorporate real on-the- ground experience? Because often the people who negotiate, and I think that’s a big challenge, is that they’re not necessarily the people who are the practitioners or understand some of the real challenges on the ground. And then the other thing that happens, particularly at the U.N. level, it’s great because it brings every country in the world together, but it’s not built for other stakeholders. It’s built for countries, and it’s also, you know, just as a natural course of matters, there’s a lot of geopolitical aspects, so a lot of the things that happen in those venues are driven by geopolitical issues more than sometimes practical issues, and bringing that expertise, especially when, say, you’re negotiating a cybercrime treaty, it’s really important to know how things work, how investigations work, what has worked, what hasn’t worked, what are impediments, making sure that you’re respecting human rights. So having those experts in the room, I think, is really important, both from government, but also from outside government, and the same with the OEWG on the cybersecurity issues. There is a lot of work that needs to be done on the cybersecurity issues, and I think, you know, it’s really important to have those experts in the room, and the same with the OEWG on the cybersecurity issues. There is a lot, a wealth of experience and knowledge outside of those rooms normally, and so you need to figure out how to bring them in. So I’d say they’ve done a pretty good job at reflecting a lot of the things going on, but imperfect, and we’ll talk more about this later in terms of multi-stakeholder involvement, but, you know, I have, I do think the cybercrime treaty is building off the Budapest Convention, it’s looking at where that’s going to go. I can’t tell you where it’s going to end up right now. There are, again, geopolitical differences in terms of scope of this treaty, et cetera, and the open-ended working group, you know, it’s good that all these countries are meeting. That wasn’t happening before. Every country is seized with this issue. At the same time, it’s sometimes it’s like listening to paint dry. You’re sitting there and it’s like nothing’s really happening, and they’re not moving very quickly. So I do think that there’s been activity, there’s been movement, there’s been more of a political priority of countries on these issues, which is important, but I also think there’s a gap between that political level and the practitioner level, which we really need to do everything we can to bring together, because you need both. You need that information, you need to translate between policymakers and people on the ground. whether it’s cybercrime or cybersecurity, so that the trade space, you understand what the trade space is, and the policies you’re making reflect reality and also help the people who are trying to protect our networks and also to go after cybercriminals. The one thing I’d say before I wrap up, it’s interesting that ransomware and the scourge of ransomware over the course of the last few years has really converted this more than ever before into a priority for countries around the world. Because it’s become a pocket bush, a poke issue, where people have to wait in line for gas, or they can’t get their health insurance, or they can’t get their hamburger, or other kinds of food. That makes it not just a pocketbook issue for people, but also in a backyard issue. It makes it a political issue. And so that’s raised this more than I’ve seen before. And I hope we can sustain that effort, because we need sustained attention on this. Not just here in this room, which is a great forum, and the whole IGF to talk about it, but beyond that.

Moderator 1 – Olga Cavalli:
Thank you, Grishan. I think you emphasized the importance of the dialogue in between technicians and non-technicians, government officials, and diplomats. I consider myself a technician. I’m an engineer. So we should develop this ability of explaining concepts, not very technical things, but really conceptual things. So other colleagues that are involved in other stakeholders can understand concepts. Once you get the concept, sometimes it’s easier to.

Christopher Painter:
And I think that helps. A lot of policymakers are afraid of this issue, because they think it’s technical. But they deal with complex issues every day. And you don’t have to be a coder to understand the key geopolitical and other issues here. I’m a recovering lawyer, and I can still talk to people.

Moderator 1 – Olga Cavalli:
Excellent. Thank you. Thank you very much. And we will follow up with you with other questions in a moment. I will go now to Katica, that she is in New York. Katica, are you there? Hi, Katica. I’m here. Nice to see you virtually. It’s been a while we haven’t met. I hope to see you somewhere in the future in person. Katica, considering the role of the media, how can effective communication about cyber attacks and cybersecurity risks contribute to bridging the gap between negotiations and practical efforts, fostering public awareness, and facilitating informed decision making? The floor is yours.

Katitza Rodriguez:
Thank you, Olga. It’s nice to hear from you. Hi, everyone. Well, first, I will start saying that my work focuses on global privacy topics, including cross-border data exchanges in the context of criminal investigations. And I have been involved for a long time in discussions on cybercrime policy and legislative issues from a civil society perspective. And right now, that focus of the grand part of my job is in the United Nations Cybercrime Treaty, the negotiation that is currently being held in Vienna or in New York. That’s been a large process at the global level, involving, as it was said by the previous speaker, by different member states around the world. Over six negotiation sessions and a lot of controversy about the scope of the treaty, even at the basic level of defining what cybercrime is and how cross-border law enforcement and evidence guided data assistance should work. But back to your question, we see in the media security problems everywhere, from data breaches to rastonware, infections to botnets. The harms are obvious, and we read it all the time in the media. Cybersecurity must make people more secure, however, against such threats, and not less, and should not undermine our privacy and human rights. And we would like to see the media help people better understand the threat’s landscape, and also the importance of a rights-based approach to cybersecurity. To protect a free and open internet from malicious attacks, we need to address the underlying problem. Far too many programs, devices, and systems are woefully insecure, and the process of the internet is not safe. discovering, disclosing, and patching vulnerabilities is often underfunded and disorganized across the public and private sector at the domestic and international level, at the federal or state level. End users often have little or no awareness of how to protect themselves against such attacks. Media narrative responding to crisis like ransomware need to address this whole landscape of vulnerability. And legislators should not just demand expansions of surveillance and law enforcement powers, including across borders. The media can also shed light on the key roles of security researchers, digital security trainers, journalists, and others, improving and safeguarding our rights. And many of these professionals are also working inside of technology companies that have made security research a priority. By presenting their stories, challenges, and contributions, the media can humanize the often technical world of cybersecurity. This narrative is essential to contrast with the UN Cybercrime Treaty current stance, which potentially jeopardize these professionals’ work, the work of security researchers. To address cybersecurity challenges, we need better incentive to make software, devices, and networks secure, better education for users and developers, and better sharing of information about threats, vulnerabilities, and solutions. And we need legal protections for security researchers. Unfortunately, high-profile efforts like the UN Cybercrime Treaty, which is currently under negotiation, have focused almost exclusively on enhancing law enforcement powers, including across borders, while giving minimal attention to a positive cybersecurity agenda of making systems and networks stronger at the technical level. Sometimes this focus only on law enforcement powers is short-sighted and even counterproductive, I will say, because it can undermine human rights, for instance, by sweeping in online dissent, speech, political activities, as supposed form of cybercrime. And because it can interfere with the work of people who are actually trying to make us safer, to improve security at the technical level. Some provisions in the treaty threatens to criminalize the essential work of independent security researchers in identifying vulnerabilities and getting them fixed. Other provisions threaten to allow governments to compel engineers to undermine and bypass security measures in the name of furthering an investigation. These proposals can be interpreted in a very broad way, which could require, compel an engineer, someone with knowledge on computer systems to secretly turn over keys and password even without their employer’s knowledge. So these go against cybersecurity and these tariffs are complex, but the media has a responsibility to try to explain these issues in an easy way, to understand not only the criminal landscape of ransomware, which is important, but also the whole importance of the whole cybersecurity ecosystem. So this broader perspective is important. For one reason, it would show how the landscape of cybersecurity is just not cops and robbers. So many people, than law enforcement and criminals play a crucial role. Second, because it might emphasize that cyber attacks are not magic and their perpetrators are not wizards. All of the core of cybercrime attack have detail grounded in the operations of ICT systems and protection fixes and countermeasures. So the media can demystify those attacks. Demystifying those attacks can also reduce the public’s sense of powerlessness in the face of them. And finally, to conclude, it could also show opportunities for cooperation on strengthening our infrastructure. For example, between tech companies and independent academic researchers, or for cooperation between government and technologies in proactively strengthening ICT systems and promoting cybersecurity research and testing. And that’s something that the media can also promote in their own narrative. Thank you.

Moderator 1 – Olga Cavalli:
Thank you very much, Yaltitza. You bring very important concepts. For example, the importance of security by designing all the devices. The importance of knowing the vulnerabilities. For example, in the national search of Argentina, we have a policy that once we know about a vulnerability, we do communicate it to other areas of the government that we are in contact with. And also, you mentioned the important role of the media in getting to know and informing the public about what is happening. And we were talking yesterday in an open forum that we organized exactly that. That sometimes we get to know about attacks only by the media. And sometimes those who have been attacked or in vulnerable situation not usually share all the information. And so the role of the media is really relevant. Thank you for your comments. And we will get back to you in a moment. Now I would like to ask Elisa a question. Elisa, how can the Internet Governance Forum, where we are now, play a proactive role in promoting a human-centric approach to cybersecurity and facilitating the exchange of insights and experiences between governments? I’m the director of the global cybersecurity organization, and I’m here to talk about the global cybersecurity initiatives and practical on-the-ground efforts which is mainly the purpose of this session. And welcome, and thank you for being with us.

Alissa Starzak:
Thank you for having me. I’m very excited to be here today. I actually think it’s worth a little bit of background about why I’m here on the stage to begin with, and then I can actually tackle that question. Cloudflare is a global cybersecurity organization, and we are a global cybersecurity organization that’s been around for a long time. We have a lot of customers in many countries. We protect people all around the world with millions of customers. In fact, something like 20% of all of the websites in the world pass through our network on a daily basis. One of the things that’s interesting for me on this stage is that we fit in a couple of different places as a member of industry. We’re both a cybersecurity company that is involved in cybersecurity, but we’re also a cybersecurity company that is involved in cybercrime and in internal networks. So we play sort of on both sides of that game and really have an opportunity to think about what those issues look like. So thinking about that question, you know, one of the things that we do at Cloudflare is we try to make it easy for people to protect themselves, recognizing in this space that prevention is actually better than addressing cybercrime in many ways. If the goal, there’s a goal in cybersecurity to protect in the first instance, to prevent cybercrime, to protect in the second instance, to protect in the third instance, to protect in the fourth instance, to protect in the fifth instance. So those questions are really tied very closely together because it’s not just about enforcement. If you can prevent cybercrime in the first place, that is a much better place to be overall. So from the point of industry, thinking about what those solution sets look like, making sure that you have things that are secure by design, that are easy to implement from a protection standpoint is incredibly important. So, you know, one of the things that Cloudflare does is, you know, it’s a global organization. It’s a global organization, but it’s not in, you know, if cybercrime is global, let’s just be honest, it crosses borders, and practically sometimes that means international collaboration on the law enforcement side. And so some of the discussion points have to be about making sure that those barriers are addressed, and addressed in a sort of human-centric way that protects rights. So, you know, I think, you know, I think it’s really important to think about what those solutions look like, and what those solutions look like. So, you know, IGF in particular, you know, one of the things that Cloudflare does is we offer a bunch of initiatives that actually provide protections to vulnerable groups in particular. We think about secure by design, and we try to think about what those offerings look like, and make sure that people understand what they can do to protect themselves. So we actually think a forum like IGF is a place where you can have those discussions about what initiatives are available currently and what it looks like in the future. are thought through and that people are aware of what is out there, even before you get to the cybercrime side. I think the other thing, the amazing thing about IGF is that it’s multi-stakeholder. So you actually have not only governments in the room, but civil society in the room. You have industry in the room. You have a lot of different players who all bring a different piece to that puzzle and can have a conversation together. And there aren’t that many forums like that. So thinking about that practically, that is, I think, one of the biggest things that IGF can do, really thinking about that piece. How do we make sure that people know what initiatives are out there? And then also, what are the real barriers to enforcement? What are the challenges that come into play?

Moderator 1 – Olga Cavalli:
Thank you, Elisa. I think you really made a good point about the role of the IGF in bridging gaps in between the different stakeholders, which I think it’s fantastic space. And also, this concept of equal footing, that you can find authorities, ministers, experts from, a person from civil society, experts from the technical community, and exchange some ideas, and have a coffee or share a bento and some sushi. And that, bridging that gap is really important. That sometimes in this multilateral meetings, which are also very important, it’s not so easy. And there are some barriers that prevent some of the stakeholders in participating freely and exchanging information with other stakeholders. So, I will give the floor to my dear colleague, Tracy. Now, the floor is yours to continue with questions to other panelists.

Moderator 2 – Tracy Hackshaw:
Thank you very much, Olga. And just a few housekeeping guidance. Just at the end of this round of questions, we’ll be asking you to come to the mics or go on Zoom and ask your questions. So, get ready for that. So, you can probably begin lining up as soon as we begin the question period. And secondly, one of our speakers in this round will be speaking in Spanish. So, if you don’t have an interpretation available, and you don’t speak Spanish, it’s a good time to try and look for those devices. Or, I guess if you speak English, you can follow the transcript. All right, so moving on now to one of our remote speakers, Elisabetta. She’s from the Alliance for the Protection of Children in Digital Environment. So, Elisabetta, how can we promote the adoption of best practices such as self-regulation mechanisms and social initiatives to foster collaboration in combating digital threats and reducing risks with a particular emphasis on safeguarding children in the online environment? Elisabetta.

Elizaveta Belyakova:
Hello. Let’s not forget that one of the most important part of our society is, of course, children. And so, protecting children from cyber security and cyber crime is of utmost importance. And that has been increasing over the last couple of years. And this is the reason why we created the Russian Alliance for the Protection of Children in the Digital Environment. This trend is something that we saw beginning in 2023 in the BRICS Summit, where the ministers of the Digital Development of Prospective Country openly expressed their commitment to the issue of protecting children in the digital environment, calling for the creation of this interstate alliance for the protection of children. And these statements, once again, emphasize the relevance of this topic, not only in Europe, but in countries of the global south. Now, speaking on behalf of the Russian IT society, I can safely argue that voluntary commitments of digital platforms and major players in the IT market are the most effective way of public mobilization aimed at protecting children on the Internet. Let me once again emphasize that this is precisely an independent initiative of the largest IT companies of Russia. The alliance promotes the protection of the youngest generation from cyber crime through not restrictions, not through restrictions, but through education, creation of positive content, such as games, podcasts, et cetera. And as an example, for more precise measures, allow me to mention that we created the Digital Ethics of Childhood Charter, which reflects the recommendations concerning the issue of child safety, parents, it’s from parents, teachers, and others. And this is soft law. It’s based on ethical principles, the respect for the child as an individual, shared responsibility, the protection of privacy and values in the online space, as well as inclusivity. One of the key points expressed in the Charter is self-regulation of digital platforms in terms of proactive content moderation. This means that platforms themselves have to vow to take steps to prevent the spread of negative content that could potentially harm children. In addition, the Alliance also works to improve digital literacy for children and adults. This, for example, has led to one-point years of existence of the Alliance, making more than 20 events. And this is an important step in protecting children and the digital environment. What it demonstrates is that given the right conditions, major digital platforms are actually willing to admit their responsibilities and take actions to shield children from negative content and other online threats, Considering the risk for children in the era of global digitalization means that it should be noted that the greatest concern here is the threat of sexual exploitation and abuse on the Internet. It has never been easier for sex offenders to contact their potential victims, share images, and encourage others to commit crimes. And here is an example. Let me mention the fact that about 80% of children in 25 countries report feeling at risk of sexual abuse or exploitation online. That’s a UN statistic. Indeed, the problem of sexual exploitation and sexual violence and, in general, the abundance of content that exists on the Internet containing images and scenes of sexual nature that can cause mental trauma to a child is the most acute issue that really needs to be solved. Unfortunately, this is something that we have not yet solved. Despite the efforts of the global community and international organizations, we are still very far from solving this problem. The World Health Organization in its 2022 report on preventing online violence focused on child sexual abuse and highlighted that the particular issue is very, very important. The We Protect Global Alliance emphasizes in its information resource that, and I quote here, access to child sexual abuse material online is becoming easier and easier and the volume is growing so much that we could say we’re experiencing a tsunami of child sexual abuse material online, end quote. So we all are well aware that a primary school child can easily, with literally three clicks, access content that is so highly traumatic and can cause serious psychological trauma. And something has to be done about this, of course. I would also say that there are general statistics about the number of incidents and cases with negative consequences for a child’s health, and this clearly does not reflect the real picture because children themselves, and especially their parents in many cases, do not take such incidents out into the public space. They don’t complain about it. So therefore, law enforcement and other agencies aren’t asked for help. So what additional steps can we do here to protect children from sexual exploitation and sexual abuse online? How can we strengthen the fight against the spread of this sexual abuse material? And in general, how can we prevent the spread of sexually explicit content that threatens the mental health of children? The most effective and practical measures, in our opinion, are the following. First of all, exchange of data on the localization of materials that are harmful and dangerous to children. The exchange of data on new methods, for example, and mechanisms that are used by criminals. The creation of black lists of resources with these materials on them in child pornography. As well as the exchange of data about attackers and criminals themselves. And… we call them predators. Also, we need to look at blocking content creators, downloaders, and think about the consciously malicious distribution of harmful content. Moreover, such data exchange should be carried out at the interstate level and between authorized organizations to protect children on the Internet. We need to also have campaigns and look at digital fingerprints and hashes. For example, in 2022, 9,126 units of content were removed in the category of sexual content, and this was using the hash database. Again, hash digital fingerprints here. The largest tech players on the Russian IT market have participated in this. In conclusion, I’d like to invite my distinguished colleagues to join forces with me and work together. We’re always open to cooperation and invite participants in today’s discussion to join our digital ethics of childhood charter together, and we can help to protect future generations. I thank you very much for your kind attention.

Moderator 2 – Tracy Hackshaw:
Thank you very much, Elisabetta. I thought there were some very interesting stats you gave there. Eighty percent of children feel at risk of being exploited. That’s fascinating. I mean, I have children myself, and they’re online, and 80 percent feel at risk. That’s really, really troubling, especially in our part of the world where I’m from, the child trafficking is very big, and it seems to be coming from the online world a lot. I think we do have to take a look carefully at that. I also like the fact that you gave some practical examples of how we can combat this, utilizing data exchange, blacklisting, and, you know, working with the authorities to get things moving as well as self-regulation. So I’m hopeful that we could take this up in the question-and-answer session, find a little bit more about what you did in Russia to get this moving. So moving on to our next question now, I’m going to ask Deputy Minister Ernesto Rodriguez-Hernandez, the threats faced by states in cyberspace are increasingly worrying. It is a topic that is widely debated on international stages. What is your opinion on the matter and what actions do you consider could contribute to mitigating these threats? Deputy Minister?

Ernesto Rodriguez Hernandez:
Thank you very much, Trevi, and Olga, thank you for allowing me to be able to share my thoughts on these topics, which I feel are of paramount importance for all of our humanity. I believe that, first and foremost, we need to characterize cyberspace, where everything is being developed. Undoubtedly so, there is a growing development of cyber offensive capabilities, and as a matter of fact, there are national security strategies of some states, which also include the possibilities of using offensive cyber weapons and also to undertake cyber offensive operations, undoubtedly. There are now preventive cyber attacks with a view to deterring adversaries, and they can turn all of these issues, they can turn cyberspace into a new scenario of conflict. This is a danger that has now been heightened by the doctrines, which consider the use of force as a response, a legitimate response to a cyber attack. Increasingly, there is a more covert and illegal use of other nations’ computer systems by individuals, organizations, and also states to carry out computer attacks against third countries. In addition, this can also be a trigger for international conflict. The misuse of information and communication technologies and media platformsโ€”I’m referring to social networks and radio and electronic broadcastsโ€”as a tool for interventionism by promoting hate speech, incitement to violence, subversion, destabilization, the dissemination of false fake news, all of this with political purpose against other states. This is a pretext for them to use this force. This also constitutes an increasing threat to nations, and where it is more important to abide by these international principles of international law. Where are these actions being undertaken? They are being part of a so-called fourth-generation warfare, which is rooted in manipulating emotions, the use of information that has been stored and processed, in the clear violation of issues that are so important, like protecting personal data rights. Many companies are also involved in this. They turn all of this into a business model. All of this is taking place in an international context. various threats, with armed conflicts, unconventional wars, attempts at regime changes, and frequent violations of the United Nations Charter, as well as violating international law. This is the environment where these actions in cyberspace are being carried out. Now, the question is, what is it that we can do to counteract all of these threats? First and foremost, I think that we must undertake a global commitment so that the use of ICTs are used only for peaceful purposes, for the benefit of cooperation and the development of peoples. We must also do away with the colossal technological gap, which are obstacles and a hindrance for these developing countries to invest in the security of their ICTs, and that is the current state of affairs. It is dire, it is essential, and we have also spoken about this in many states. We need to adopt an international instrument, which is legally binding, which complements the applicable international law, which also should respond to the significant legal gaps in the sphere of cybersecurity and to effectively address the growing challenges and threats through international cooperation. It is of paramount importance to increase cooperation in order to grapple with cyber incidents, and as part and parcel of others’ exchange, we need to exchange information which does not compromise the privacy of states, nor should it violate any national legislation. We must also implement technical assistance mechanisms to exchange good practices, which would enable us to grapple with all of these incidents and also should bolster the operational capabilities vis-ร -vis a cyber attack. As much as possible, we should also standardize all of these cyber attacks. We should use common terminology which fosters exchange, international exchange. And finally, I think that is also of paramount importance, we need to set a multilateral mechanism under the auspices of the United Nations to determine impartially and unequivocally the origin of all of these incidents related to the use of ICTs.

Moderator 2 – Tracy Hackshaw:
Thank you very much, Deputy Minister. Again, some very valuable points raised. You pointed to the violation of international law that is used to involve state actors in cyber space, but I think your call for the use of ICT to be used for development purposes and for peace, I think that is really something that we need to take a closer look at, and as you said, the call for an international instrument to complement existing international law I think is something we could discuss as we get into the question and answer session. Thank you very much for your comments. Again, questions and answers coming up after this round of questions, so I’m going to go to Fulake Olagunyu, who’s from the ECOWAS, and she’s going to give, I think, a very practical take on what’s happening in West Africa. So Fulake, the economic community of West African states comprises 15 states. Which are the main challenges that these countries face in relation to cybercrime and cybersecurity? Over to you.

Folake Olagunju:
Thank you very much, Tracey. So rightly, as Elisa rightly said, the issues we’re talking about today are completely global, so that would actually put into context what I’m about to say. So what we’re seeing within the West African region is obviously just a quick geographical background here. The 15 member states, 11 of them are actually classified as LDCs, so you can already envisage the sort of issues and challenges they already have without even putting cyber into place. So what we’ve seen over the last few years is that we’re lacking a national coordination at the national level, which is then leveraging into an effective cooperation at the regional level. We’ve seen that not only cyber is global, it is also a trust game, and for that to actually happen effectively, countries actually need to speak to one another. And for them to do that, they have to be willing to collaborate to actually combat the cyber threats. This obviously requires adherence to international conventions. We’re talking about the Budapest Conventions. On the African side, we’re talking about the Malabo Convention. But aside from that, we’re talking about the member states actually having the capacity and the capability to actually coordinate and collaborate with one another. Resources is a big thing that is lacking on the side of the world, and resources has three levels. We’ve got the technical, the financial, and again, the human. Globally, we do acknowledge that there is a death of cybersecurity workforce personnel. But on this side of the world, I think it’s a little bit different. It’s a little bit more critical. We do have personnel that are highly qualified that can do the job, but we don’t have enough of those people available. So it’s an issue of how do we then cascade down the little knowledge that we have so that it actually makes an impact across the region. Finances, a lot of our budgets are actually not concentrated on cybersecurity. I think one or two member states have actually put into their national budgets a very thin line on cybersecurity. We’re trying to see how we can encourage member states to do a bit more. Again, on the technical front, there’s a lack of the requisite equipment and facilities actually required to actually do the job. Aside from that, I think we’re not only โ€“ the region hasn’t gotten to the point where you’re seeing cyber as a warfare like other parts of the region. So the mentality is still sort of boots on ground. We’re still looking at physical security, and I think that conversation needs to be leveraged a bit more. I will say that there was a watershed moment. I think everyone talks about pre-COVID and after COVID, and I think COVID was a watershed moment for the West African region because a lot of conversations prior to that had revolved around the socioeconomic development issues that the region faces. But what COVID actually did was it brought digitalization to the forefront because everyone was obviously not prepared, and then you had to now make sure your citizens could actually meet their daily needs and requirements. And I think the good thing โ€“ if I can say that, and I use that very loosely โ€“ the good thing about COVID is that because conversations about digitalization were brought to the forefront, member states now have to start thinking about security. It’s getting the attention it needs, and I’m going to borrow Chris Painter’s word here about sustained attention. I think what we’re trying to do from the Commission’s perspective now is to actually ramp up and promote that our member states actually start doing a bit more when it comes to cyber security. Some are doing very well, others are not. But it’s our mandate to ensure that as the 15 member states come together that they actually do more because the region promotes free movement of goods and people. So you can imagine if we’re promoting free movement of goods and people, we’re also indirectly promoting the free movement of cyber threats across the region. So we need to do a bit more. I think also one of the things that is a really big issue is the weak critical infrastructure that we have. I don’t know if a lot of people know about what goes on in Africa. I’ll speak specifically about West Africa. The grids are down, sometimes telecommunication maps are down. It’s a calm w-2 for the people on this side of the world not having energy or sometimes not having lights, power grid, social infrastructure. So if we do sort of go two, three years from now and a cyber attack occurs on any of our critical infrastructure, a lot of people will be nonetheless wiser because they’re already used to not having this infrastructure. So it’s a key challenge. within the side of the world, we’re trying to see how we can promote the development of these infrastructure, make sure they’re actually built to a capacity that can actually help its citizens, trying to promote a bit more cooperation and coordination amongst our member states. We’re trying to ensure that there’s that peer-to-peer cooperation. It’s not constantly looking towards the global north because the nuances are different, the environments are different. We’re talking to member states that have done fairly well. And actually, I will take that back. Member states that are doing really, really well within the side of the region to actually pick up other countries that have not, show them what they have done so that we can actually collectively address these challenges together. Like Elisabetta rightly said, one of the things that she has seen in her own thematic in terms of child protection is the exchange of data. That is not just about children, which is very important, but I think it’s a cross-section of actually promoting information sharing on cybersecurity, on cybercrime as a whole. And that is still lacking within this side of the world, and we’re trying to see how we can do that. Like Khatisa rightly said, incentives. A lot of the workers within the sector, the public sector, are not incentivized enough to actually do more. So it’s a way of trying, we need to try and figure out a creative way to encourage government, public sector, to actually do more, to incentivize their people to actually stay, train them, retrain them, or make them pivot to other areas where they can actually use the skills they already have. Involve the private sector and see how we can use that PPP collaboration to sort of move the cybersecurity landscape, improve the resilience and strengthen it across the West African region. I’ll stop here for now. Thank you.

Moderator 2 – Tracy Hackshaw:
Thank you very much, Falake. And I think you had some extraordinary points there on how we can work together, collective action. What I pulled that out as a small and developing state citizen myself, I think we do have opportunities to work together, to learn from each other and share best practices. In my particular field of work at the UPU, we are establishing an ISAC, an innovation sharing analysis center. And I think that’s very useful for the postal sector in this case, but maybe sectoral ISACs within West Africa, within other regions can be one way we can look at this. You also mentioned the issue of limited resources, limited resource budgetary and people. Again, and that’s a problem in many least developed countries and SIDS, underserved regions as a whole. So I think we do need to get that support, but again, with collective action, working together, we can probably drive this forward. I really appreciate your thoughts on that. So now we’re moving to questions. Questions and answers. And answers. So Olga, maybe we could see what’s happening on the floor and online.

Moderator 1 – Olga Cavalli:
Do we have colleagues in the room that would like to make questions to our speakers or comments or remarks adding to what we have been hearing? There are mics in the room that you can line up or we may have questions from remote. Do we have questions from remote? It’s a quiet afternoon. In the meantime, I would like to thank Folake for her comments. I think she summarized very well all the major concerns that we all have, especially in developing countries, not only in West Africa, that we share the same challenges in all the developing countries in the world. No comments, questions from audience? I don’t see hands. Some comments, questions from remote? Okay, one minute.

Audience:
We have one question online and it’s from Riyad Hassan, Vice Chair, Bangladesh Youth IGF and a member of the Bangladesh Remote Hub. And the question is, how can we ensure effective use of emerging technologies to ensure cybersecurity?

Moderator 1 – Olga Cavalli:
Is addressed to any speaker? No, no, no. So the question is?

Audience:
How can we ensure effective use of emerging technologies to ensure cybersecurity?

Moderator 1 – Olga Cavalli:
How do we ensure the use of emerging technologies to ensure security? Okay.

Audience:
Effective use of emerging technologies to ensure cybersecurity.

Moderator 1 – Olga Cavalli:
Okay, who would like to address? Yes, please, Elisa, go ahead.

Alissa Starzak:
You know, one of the reasons I wanted to take that one is because I think one of the things that we’re talking about a lot right now is really that idea of secure by design, just secure by default. So that would be one. And the other piece on the emerging tech side, I think it’s been said a lot at IGF, but there is a reality that AI is coming for cybersecurity as well. And actually. I think we see both the negative potential for AI and cybersecurity in the world of things that you could use for exploitation. But the positive of that is that there are a lot of systems where AI can actually help. If you’re talking about big data sets, for example, identifying vulnerabilities quickly, being able to patch quickly, being able to identify them in sort of real time and correct, all of those things are coming. And I think that the reality of being able to make sure that we have systems that enable that is incredibly important. I think that we are going there. I think that’s one of those areas for global collaboration as well. Because when you start talking about data sets and information sharing, a lot of tech, a lot of requires, and AI certainly requires big data sets. So being able to do that for cybersecurity systems, making sure that you have adequate access to data to enable that protection is important.

Moderator 1 – Olga Cavalli:
Fantastic. Chris, do you want to add something?

Christopher Painter:
Yeah, I might add that I agree with everything you said. I’d say that emerging technologies are part of the larger problem, but we can’t lose sight of the larger cybersecurity issue too. And one of the things that we didn’t talk about yet, and something that my colleague from ECOWAS mentioned, is when we’ve had these UN meetings, when I’ve talked to countries from around the world, almost to a person, especially for the developing world, the global south, their number one interest is they need help. They need capacity building. They need the ability to actually deal with these threats, whether they be nation state threats or criminal threats. They need to be able to actually take things that have been decided in the UN, like norms of behavior and international law, and apply them. They need to be able to have certs, emergency response teams. They need to have national strategies. They need all of these things. And of course, when you’re looking at new technologies, you build that in. So the fear a little bit is you see these new technologies, and it becomes a bright shiny object, and you forget about the foundation you have to build for all of these things. And what the GFC does, what the Global Forum on Cyber Expertise does, is it really does that, coordinates that capacity building around the world. It’s multi-stakeholder. We have 200 members and partners, countries, civil society, industry, academia, and the whole idea is to work around the world to make sure that countries and others are up to speed. You know, they have these basic things in place, and it’s a sharing platform where they can work with each other. So I think that’s, you know, as I think about practical aspects of this, as opposed to sort of the political and other aspects, the most practical aspect is just helping countries protect themselves, both from new threats, but also from existing threats, and really build that capability. the world. So, I think it’s important for us to make sure that we have the capability around the world.

Moderator 1 – Olga Cavalli:
Thank you, Chris. We have two colleagues lining up. I didn’t see who was coming first, so.

Audience:
Colleague on the right came first. Okay, thank you. Go ahead, please. Okay, thank you very much. My name is Christopher, and I’m the director of digitalization, and I’m very happy to be here. So, thank you very much for all that you’re doing, Chris, Olga, and everyone in particular. The first comment is to corroborate what Folake has been saying concerning scenario in West Africa that is really awakening, you know, like, with regard to the good part of COVID. So, digitalization is a serious business, and with it, of course, the trend, you know, of sewing pieces into pieces are definitely important, and we are hoping, indeed, because lot of things are moving online. Secondly, the United Nations Commission for Africa sponsored a kind of research to measure the link between cyber security and development. The objective is to present to policy-makers to take cyber security seriously, and we are hoping that by the end of the year, we will be able to see that, in fact, cyber security will have a kind of maturity, kind of yield between 0. 6 and 5.4% increase in GDP per capita, so with these, you can see that, indeed, if you take cyber security seriously, then there will be value-add even to the economic power of the citizens. So just to bring that to the discussion. Thank you.

Moderator 1 – Olga Cavalli:
Thank you very much.

Christopher Painter:
I’m going to turn it back over to you, Mr. Kudlow. Thank you for that, and I was at the IGF in Addis last year where they launched that study, and I would say that that is really critical if you think, and other speakers have said this, a lot of the interest around the world pre-pandemic, but even now, is seizing the digital economy, digitization, you know, often the economic parts of government don’t talk to the security parts of government, or their communities don’t even talk, but they talk to the security parts of government, and I think that is really critical, and I think that’s where capacity-building comes in, but having statistics like that, which really makes it for the sustainable effort that we need over time and not just another boutique effort, but one that’s really important as part of it.

Moderator 1 – Olga Cavalli:
Thank you, Chris. Any other comments from colleagues? Speakers remote, would like to make any remark or comment about the two questions that we have received or comments? No?

Folake Olagunju:
Can I join in for some strange reason? I can’t find my raise hand. Okay. Thank you very much. So just to quickly tap into what Chris said. Yes, the GFC is actually doing amazing work, but I think the approach the GFC is taking is the best approach or it’s a good approach because they’re also trying to leverage on regional economic communities such as the ECOWAS. And the ECOWAS Commission has been working with the GFC over the last few years to see how they can actually drill down that capacity building to make it more meaningful to the member countries that are involved. Because obviously, if you think about it, if you’re trying to do capacity building from a global level, it would not actually tap down into what really matters to me working and living in West Africa, for example. So that was just one point. The second point was to what Jimson was saying. Yes, we are aware of the UNESCO report, but I also wanted to say again, it needs to be drilled down again. We’re looking at trying to promote cybersecurity or encourage more member states to be more cyber aware. To do that, we need the awareness, we need the evidence, and we need the practicality of how it has worked. We cannot, for example, take what has worked in the US and bring it to West Africa and expect it to work without actually having meaningful engagements with the people on the streets. So for that to work, there needs to be a practical angle. And one last point, just to quickly sort of go back to what Tracy was saying about ISACs, that’s a very important point you have mentioned there. It’s one of the approaches the commission has taken. We’ve just started a new program that was launched under the G7 German presidency. It’s the joint platform for advancing cybersecurity in West Africa. And one of the things we’re looking at is actually setting up an ISAC. So it will be good to see how we can also see what you have done and how we can leverage on that. Thank you very much, Olga, for giving me the floor. Thank you.

Moderator 1 – Olga Cavalli:
Yes, please.

Christopher Painter:
I don’t mean to hog the mic, but just. Relating to that comment, a couple of years ago, or maybe more than a couple of years ago, we recognized exactly what was being said, that this has to be a demand-driven approach. It can’t just be layered on. You just can’t say, here are some programs, go run with it. That’s not sustainable. So having it both globally and regionally driven is really important, and we work to create an African experts group. We’ve worked in the different communities. We’re having a big conference in Ghana at the end of November, bringing the development community and the cyber community together. We also have a hub with the OAS in the Latin American, Caribbean, and North American region. We have something in ASEAN. We have just launched a Pacific hub for the Pacific Islands. It’s important to have both of those together, and they can share information across them, because although the lessons might be different, there might be lessons that can be learned from other approaches.

Moderator 1 – Olga Cavalli:
Thank you, Chris. Tracy, you would like to ask, let our colleague to make the other question.

Moderator 2 – Tracy Hackshaw:
Of course. So over to you, colleague at the mic. Yes. State your name and where you’re from.

Audience:
Yes. Thank you very much. My name is Jacopo Pembelliet. I’m from the Ministry of Foreign Affairs of the Netherlands, and I do have a question. First of all, thank you very much, everyone, and I think it’s very great that we have this panel talking about end cybercapacity building and cybercrime and responsible state behavior, because I think all those three topics are equally important when we are discussing how to increase cyber resilience. And I do have a question for the Deputy Minister, but also for the other panelists maybe. The Deputy Minister was talking about the proposed or the perceived need for a legally binding instrument on responsible state behavior in addition to international law. And well, I mean, we’ve had a lot of discussions on international law at the U.N. level, and I was wondering regarding these discussions and also considering that already at the U.N. level it was agreed that international law is applicable in cyberspace, and of course we are all working out, or tech real experts are actually working out, other than me, how international law is exactly applicable to cyberspace and how these specific rules would be applicable. And so I was wondering if you could elaborate on how a new or new negotiations on a legally binding instrument are actually related to this ongoing effort to find out how these existing rules apply to cyberspace and how โ€“ because, of course, I don’t think we want to be premature in starting that. I think it’s very important that all countries do that before we go to see whether there are gaps that we need to fill with a new treaty. And also, since we are here at the IGF, I think it’s important to say that when we are talking about international law and cyberspace, that really it should not only be international lawyers, but it should also be international law firms. And I think it’s very important that all countries do that before we go to see whether there are gaps that we need to fill with a new treaty. And also, as Remember that we do have, some of the, you know, indicators that we need to look at that, but we also really need the technical community and civil society and all stakeholders to think about that, because it’s, really, an issue that is not only for those writing about international law. Thank you very much.

Ernesto Rodriguez Hernandez:
talents First and foremost, there is, indeed, an international debate that is deeply rooted in both of these issues. There is a need to have a binding regulatory framework. Also, the applicability of international law, and very specifically, in terms of international humanitarian law, in terms of security vis-a-vis ICTs. I feel that there is one scientific element that’s important, what we are going to have to grapple with. This is a completely different environment. Cyberspace is highly dynamic. The various topics that trigger conflicts and also safeguard international security are very different from the traditional ones that we knew before, from yesteryear. Therefore, I feel that we cannot think that the current existing standards, the good practices which strive to proliferate their use and the responsible use within cyberspace that are necessary, indeed, for us to be able to have a safe cyberspace where we work together geared toward development. As a matter of fact, I feel that we have addressed this topic here. We’ve talked about disruptive technologies, emerging technologies as well. This means that there are new standards and that they need to be binding. It cannot be voluntary, and the state should not decide whether they will be abiding by these laws. I think that we need to all be on an equal footing, all countries. We must abide by them. They need to safeguard, they need to favor, foster a peaceful cyberspace geared toward cooperation and the development of peoples.

Christopher Painter:
So, first of all, thank you for that question, and thank you to the Netherlands because they launched the GFC, so thank you for that. But taking my GFC cap off for a moment and based on my prior experience, look, I think that the norms of behavior that have been agreed to by every country in the UN, every one of them, they are voluntary, but they’re political commitments by those countries, and so if they agree to do that, they should be held accountable for that. Other countries should hold them accountable, and we’ve seen violations, and the accountability hasn’t been there. So it doesn’t matter if you had a new treaty or not. We’ve seen treaties violated all the time, and if there’s not accountability, they end up just being words on paper. I do think doing a treaty at this point is premature. I think the norms are a good step. I think there’s more development that needs to be done. A lot of countries, even though they agree to the norms, don’t know what they mean, so you have to build, I think, on this. And also, you know, there’s a challenge because some countries, when they think of a treaty, think of a treaty about information. They worry about what they think is harmful information, and so that gets very much into human rights and free speech, and I worry about how the that plays out as well. So I think a treaty is far down the pike right now. I think there’s a lot we have to do, including capacity building, which is a more urgent thing to do right now.

Alissa Starzak:
I just wanna add, actually, I agree with that, Chris. I think from the industry standpoint, one of the things that becomes challenging for us is that when you try to define things in a treaty, one of the fears from industry is that you might end up actually less secure. So one of the challenges from a practical standpoint that has come up in some of the treaty negotiations, for example, is the question of access for researchers trying to do security research, what that then means. If you try to define that prematurely, you actually risk a world where vulnerabilities don’t get patched because there are too many legal, challenging legal questions around it. And I think that’s really concerning from a practical standpoint for industry. So I will say I think industry sort of watches these questions really carefully. We’ve been sort of strong proponents of norms in the space, really trying to think about, we look at it very much from a protection across the board from an industry standpoint. So we’ve been trying to look at what actually promotes that primarily, and really thinking about what that looks like just for everyone on the ground.

Moderator 2 – Tracy Hackshaw:
Yes, thank you very much. And I know time is short, but we do have one more question from the floor for this round. State your name and proceed.

Audience:
Hello, my name is Dr. Mona Hauck, and I’m from Germany. And I have been working in EU projects, AI, medical area for the last few years. And I’m not an AI technique, I’m working with experts. And I have been responsible of supporting the doctor networks within the EU. So my experience, I just got triggered by what was Falak saying, is that I wonder, I have seen so many experts. I have been working with more than 16 nations, and it’s amazing, but the knowledge and all the lessons learned, and I work in business, industries, big, major companies, the knowledge gets lost, and I wonder if I listen to Falaka if there’s not a way of somehow creating a knowledge pool with all the technology that is available to not only think about, like, cybersecurity, but the people who create cybersecurity, the people who you want to teach kind of like a certain mind-set, so in order to avoid bias and to include gender aspects on all the things, so we’ve been creating kind of like a certain best practice case for the EU, and it was the first time that we did it by applying, by kind of like, sorry, coaches who supported the tech experts, people who were trained, like, from a social psychologist and social science, and I was wondering, and all these things, because so much gets lost if people are focusing on tech and not including the other aspects, if there’s anything, so my vision is that there is a worldwide knowledge pool where people from Africa and South West Africa can just kind of like access to it. It’s a dream.

Moderator 2 – Tracy Hackshaw:
There are questions on the corners. I’m wondering if you could just take the other questions and then pull them together. What do you think?

Moderator 1 – Olga Cavalli:
Yeah, that’s a good idea, and I was wondering if remote experts would like to make any comment or respond to any of the comments that we have received. All right, so thanks for your question.

Moderator 2 – Tracy Hackshaw:
Let’s go to the colleague on the right and then colleague on my left and my right, and then we will pull them together and get them all answered. So go ahead, sir.

Audience:
Thank you, Tracey. I’m from Benin. I’m from the Ministry of Economy and Trade. I’m missing from Benin. I’m from Ministry of Economy and Finance of Benin. I thank Folake for her intervention. I have just one question for her. I want to know if it’s possible to use solar technology to solve the gap of lack of electricity for critical telecommunication infrastructure in West Africa. Thank you.

Moderator 2 – Tracy Hackshaw:
Thank you very much. And colleague on my right. Go ahead.

Audience:
Thank you very much. Let me introduce myself. This is Ganesh. I work for the government of Nepal as a secretary in the Ministry of Energy. in the prime minister’s office. We discussed a lot about the cyber security as well as the cyber crime. And we discussed a little about the cyber security and the children. UN report shows that one in five girls and one in 13 boys have been sexually exploited or abused before reaching the age of 18. Don’t you think this is the serious thing to our future challenge? So concerning this, I would like to raise some of the things that need to be considered. First of all, having a kind of holistic approach for all cyber security may not be functioning. So we need a separate compact to address the challenges of the cyber security related to the children because of their specific necessity and the specific characteristics of the children. The second thing I would like to emphasize that one of the speakers already said about building the capacity of the children themselves through the internet governance as well as some of the internet technology by giving digital literacy about making aware about their right. So this might be one of the most important things to save from the online safety measures. The third thing that is most important due to the nature of the cross-border issue of the cyber security, we need cross-border, regional, international collaboration across member state and the private sector and the ICT companies. That is very important in addressing technology facilitates child sexual exploitation and abuse. This is the third thing I would like to emphasize. Thank you very much.

Moderator 2 – Tracy Hackshaw:
Thank you very much. And I think we have an online intervention.

Audience:
Thank you, Tracey. We have some questions online. From Rwanda, from Sri Lanka, what is the impact of cyber crimes conventions to cyber crime investigations? And another question from Amir, what should be done when some cross-border digital platform refuse to cooperate with national competent authorities regarding cyber crime cases and refuse to establish official representative in the country? Thank you.

Moderator 1 – Olga Cavalli:
I would like to address the question made by a colleague from Germany. I agree with you. This is a relevant desire from many of us of having this pool of knowledge and especially to be shared among different countries and regions. Just to let you know that there are some efforts in preparing a database of experts, at least in the Organization of American States, or correct me if I’m wrong, Chris, the open-ended working group on cyber security in the United Nations, they were working on a database of all the experts of different countries, which is not the whole picture, but it’s a start, and it should be an ongoing process. What is difficult from these efforts is to keep them updated, and sometimes it’s expensive and complicated, but I think it’s an idea that we all have in mind.

Christopher Painter:
I think the UN effort is more points of contact for CBM, so it’s more limited. And I’d say that it’s a great idea, and part of what you’re saying is being able to translate between the policy and technical people, too, which I think is important. I’d mention the Sybil portal again, which is what my organization has, which now has over 800 best practices, guides, tools that can be used, and it’s open to everyone, not just members of the GFC at sybilportal.org, but there are other things out there, and I think we’re building toward that.

Moderator 1 – Olga Cavalli:
And who would like to …

Katitza Rodriguez:
Olga? Do you hear me? Sorry? Olga?

Moderator 2 – Tracy Hackshaw:
Yes, Folake, I think Folake is trying to speak. No, Katica. Olga?

Moderator 1 – Olga Cavalli:
Ah, pardon. Sorry, Katica. I didn’t see you. Thank you. I’m so sorry. Please go ahead. I’m so sorry. And we have Qsai in the queue. We will go to you in a moment, Qsai. Nice to see you there. Katica, the floor is yours.

Katitza Rodriguez:
Okay. The challenge of remote participation, but I’m very grateful to be able to participate online, so thank you, Olga. So there were many questions that were very interesting through the discussions. One of them is related to the application of international human rights law, or international human rights, or international law. I will respond to this when it comes to the negotiation of the UN Cybercrime Treaty. I don’t want to confuse my answer with the other negotiation of the WEEG, which there is no treaty. But when it comes to the UN Cybercrime Treaty, we have seen that many of the powers for criminal investigations, including across borders, are obligatory, while some of the safeguards are optional. Moreover, the chapter on international cooperation should be subject to international human rights law. They should be in the treaty-specific safeguards that protect or are the baseline for international cooperation. But instead, the treaty deferred to national law what the level of protection of privacy. This means that when one country wants to assist another on cooperating in investigating a crime, the law that will apply is the law of the country that is providing the assistance in collecting the evidence. But as we see in a joint negotiation with 1,900 countries and more, the level of privacy protections and the level of human rights deferred country by country. So if we don’t have a minimum baseline on the chapter on international cooperations that all states should comply with, we are allowing very strong cross-border surveillance powers with almost minimum of just leaving to national law to decide the level of protection what it is. And that could be very bad in many countries. And we are talking about surveillance powers that are very invasive, like real-time intersection communication, real-time location of data, or even data that will identify a subscriber. The people who speak to Truffaut, who are just criticizing their governments, end up being criminalized just for posting a tweet. And sometimes people get criminalized in many countries just, for instance, for being LGBTQ, for being themselves, because that sometimes trumps the country. So the issues that provide international cooperation and letting countries decide what the crimes that will be allowed for that cooperation, without restricting it to the core cybercrimes that have to be in the treaty, open the door of providing a legal basis of international cooperation for crimes that, for some countries, could be crimes, and I call it ding crimes, but it could be also an act of expression in many other countries. So without clear respect of international human rights law, clear say that’s in the treaty that are the basis for the negotiation, and a narrow scope limited to core cybercrimes and crimes that are specifically defined in the treaty, the treaty could be too broad that we could lead to abuse. So that’s one of the things that are concerning. Two other questions about the grounds of refusal in criminal investigations. So it depends, you know, sometimes companies refuse to cooperate on human rights grounds. As I said, sometimes some countries criminalize someone for being LGBTQ themselves, if sometimes or sometimes a journalist for writing an article, or sometimes an activist for writing a post. And so sometimes there are grounds for refusal because the request is disproportionate, because the request is in violation of human rights law, and in those terms, it’s valid for the company to be able to refuse cooperation, to deny or challenge requests that are disproportionate. Obviously, we are not talking about the requests that are necessary for the investigation for crimes that respect human rights law that are necessary to fight cybercrimes. But the problem that we’re having in the treaty right now is that this international cooperation will be based for any crime as defined by national law by any country, which will be a lot of crimes, and it will be very difficult for the authority to be able to even know what the crime is about and in which grounds they can refuse it, which will create a lot of, it will be even very difficult for democratic countries who will be willing to refuse a request and will end up failing to, failing to, failing to be able to actually notice that the request is a proportionate, that the request is not compliant with the legality principle, that doesn’t follow all the criminal procedure safeguards, and it will be able not to deny the request. We’ll end up handing over the information, and when we hand over information on a, of of an activist in places where speaking get you in jail, you can lead this type of investigation, can lead to torture, disappear, and very serious human rights abuses. It’s opened the door for what we call transnational repression. So I just want to flag that, that grounds of refusals are important in the safeguards. Are important not to undermine criminal investigations, but to be able to deny requests when these are disproportionate in violation of international human rights law. Thank you.

Moderator 1 – Olga Cavalli:
Thank you, Catista. I think that you have summarized very well the complexity of transnational concept of all these activities. I suggest that we take the question from QSY, and then we let our experts to do final comments. QSY, welcome.

Moderator 2 – Tracy Hackshaw:
And perhaps we just close. And shall we close the queue after that?

Moderator 1 – Olga Cavalli:
We have to close the queue after QSY, and then we will get some final comments and wrap up. QSY, nice to see you, and welcome.

Audience:
Thank you, Olga. QSY Alshati from Kuwait. There was a talk about the need of an international convention or a treaty on cybercrime, and I agree with the panelists when they said this is a long way to go. Actually, we have the early in the century, we have the Budapest Declaration, which was about cybercrime. And there are countries that were about 40 or more that signed that declaration. But yet, the cooperation or international cross-border cooperation in cybercrime did not become. I wonder if we need a treaty rather than, let’s say, improve or expand the role of current organizations like the Interpol, for example. Interpol currently is working from country to country when it comes to cybercrimes, whether in exchanging information, criminal investigation, and facilitating the relation between a country to a country when it comes to such an issue. Is it an option or a possibility to improve that role of an organization like, for example, the Interpol, rather than going for the track of a cybercrime treaty or a new convention on that? Thank you.

Moderator 1 – Olga Cavalli:
Thank you. I suggest that we give the floor to each of our panelist experts and then we wrap up. You can answer the questions and do some final comments. Chris?

Christopher Painter:
Yeah. Thank you, Olga. Just in answer to that question, while there are now negotiating, we have the Budapest Convention where many countries, I think it’s north of Haiti now, have either signed or acceded. There’s negotiations now happening in the UN for new cybercrime as opposed to Cybersecurity Treaty, which we were talking about, and I think that’s very many years down the line. But even there, there’s major differences between countries, between thoughts. Some want a very expansive view where it covers everything, including things I think would be protected human rights. Others want a view that’s more focused on cybercrime. We’ll see what happens. But I do agree that we need to continue to improve operational coordination. Interpol is a great vehicle. We have, when I was in the government, I used to chair something called the G8 and G7 High Tech Crime Group. We created the 24-7 point of contact network that now has over 80 countries, I believe, to share information. So that operational coordination is key. And just in my wrap-up comments, look, this is a space that continues to evolve and become more and more important for everything we do. And if you’re not following these negotiations in the UN, if you’re not following these things that are happening on a regional basis, you should. Because it really impacts all the stuff we talk about, the IGF, but even much broader than that. And I think, again, one of the keys is capacity building. The keys are working with countries who need help. And that’s something that, you know, I invite you to join us in that. If you’re a country and you’re not a member of the GFC, we welcome you to become one. If you’re a stakeholder, we want to engage with you too. This is, I think โ€“ and one of the reasons I’m doing this role after leaving government is I think that foundational thing about capacity building, which both helps us defeat all the bad things, but also enable all the good things, is critically important, and it’s a practical thing we can achieve.

Moderator 1 – Olga Cavalli:
Thank you. Alisa?

Alissa Starzak:
So I just want to pick up on that theme of capacity building. I think industry has a role in that as well. I think that one of the things that we see and are looking for are really to think about how we improve security for everyone. That is not necessarily being done on the government-to-government side. It’s really thinking about what resources are available โ€“ again, capacity building straight up โ€“ and then thinking about that collaboration that happens. Sometimes that collaboration is on information sharing related to threats. I think there’s a lot of work that can be done there. But again, first and foremost, I think there are a lot of people that just need tools, understand how to protect themselves, how to make sure those attacks don’t happen in the first place.

Moderator 1 – Olga Cavalli:
Thank you.

Ernesto Rodriguez Hernandez:
I feel that we are debating a highly complex issue. We do not have much time to make headway with the necessary resolve. The criteria that we use are divided. The standards, the abiding standards as well. the treaties, the conventions, how they are also violated, how they are not abided by with good practices, with good coexistence, with no binding criteria. This is something which we need to continue addressing. We need to continue speaking about it and to reach a consensus. For developing countries, it is a very difficult situation, and capacity building, which we mentioned here, they are not comparable between the developing world and the developed world. What are the capabilities that a developing country has to ascertain where the cyber attacks are coming from, where there’s such a gap in terms of the digital divide, in terms of the information that we have access to, financing as well for development, where there isn’t parity. In a few words, I think that these are topics which we need to continue discussing. With the understanding and the responsibility that we cannot, we do not have the time to put everything in order and to fix everything in the world of cyberspace. As I said, we need to foster all of these conventions for the development of all of our peoples. Thank you.

Moderator 2 – Tracy Hackshaw:
Thank you, Deputy Minister. And let’s go to our online panellists. Elisaveta, please, final words and responses. We literally have 30 seconds left.

Elizaveta Belyakova:
Well, in conclusion, I just wanted to thank my distinguished colleagues for this debate and call upon them to join our efforts and so that we work together because cooperation and participating in today’s discussion has allowed us to further our work to protect childhood, as I’ve mentioned, and you can find a link to our website in the information. I hope that we will be able to protect children from these cyber threats, and I hope that we’ll have a more open world. Thank you very much.

Moderator 2 – Tracy Hackshaw:
Folake?

Folake Olagunju:
Thank you very much, Tracy. So I think more needs to be done to promote the development of a cybersecurity culture, which I think hopefully will lead to better communication and coordination. at the national level, which then leverages to effective cooperation at the region that is required. Thank you very much.

Moderator 2 – Tracy Hackshaw:
And maybe five seconds from Katitza. Five seconds, Katitza.

Katitza Rodriguez:
Thank you. I just will conclude that for me, human rights are universal, and we don’t have to view them as fundamentally conflicting with sovereignty. In principle, these are rules that a state have already accepted and already agreed upon to govern their own behavior. So in the treaty discussions on cybercrime, we have been saying that we want to see minimum safeguards that are strong for both the domestic and international surveillance powers as a minimum basis for international cooperation. But this has not made it into the current test, and this is really concerning. Instead, international cooperation may be provided by the privacy protection of the national law of the country providing the assistance, which can vary enormously, and we may not meet international human rights standards. So to conclude, to the question if sovereignty and human rights inherently conflict, I will say that not necessarily. The perceived tensions extend from the state’s geopolitical views and tensions, not for the principles itself. The treaty requires a minimum anchor in international human rights law and standards, ensuring that the state’s collective actions respect not just individual sovereignties, but the shared universal human rights. This is not about sidelining domestic standards. On the contrary, it’s about elevating global standards to ensure that not human rights are compromised in the name of international cooperation, in the name of mutual assistance. Our goal should be clear, a convention that not only acknowledges but actively champions strong human rights. Instead of using sovereignty as an argument to undermine human rights, a state should craft a convention that authorizes criminal investigations only for core cybercrimes, as defined in the convention, while providing concrete and detailed human rights safeguards for both domestic and international cooperation. We would like to see human rights not only referred in the preamble, but throughout all the chapters. And we would like to have well-documented, and we would like to have a very narrow scope that is limited to such core cybercrimes. That’s all for now. Thank you so much for inviting me.

Moderator 2 – Tracy Hackshaw:
Thank you very much, Katitza, thank you very much.

Moderator 1 – Olga Cavalli:
Tracy, your final comments?

Moderator 2 – Tracy Hackshaw:
No, I’m gonna cede my comments to you, Olga. Thank you all for coming. We have three minutes over. So, Olga, final words to you.

Moderator 1 – Olga Cavalli:
Okay, so thank you for the constructive dialogue. Share information, cooperate, be active. I am always positive towards technology. I trust human ingenuous and human capacity. So, the journey is the destination with this issue. So, let’s keep on talking, keep on interacting, keep on learning, and I wish you enjoy the session. And thank you all very much. Thank you, our panelists. Thank you, the audience. And thank you, the remote participants. And thank you, the remote experts. Thank you all.

Alissa Starzak

Speech speed

239 words per minute

Speech length

1391 words

Speech time

349 secs

Audience

Speech speed

159 words per minute

Speech length

1761 words

Speech time

664 secs

Christopher Painter

Speech speed

231 words per minute

Speech length

2517 words

Speech time

654 secs

Elizaveta Belyakova

Speech speed

190 words per minute

Speech length

1318 words

Speech time

416 secs

Ernesto Rodriguez Hernandez

Speech speed

132 words per minute

Speech length

1249 words

Speech time

570 secs

Folake Olagunju

Speech speed

193 words per minute

Speech length

1738 words

Speech time

539 secs

Katitza Rodriguez

Speech speed

153 words per minute

Speech length

2319 words

Speech time

908 secs

Moderator 1 – Olga Cavalli

Speech speed

177 words per minute

Speech length

2085 words

Speech time

707 secs

Moderator 2 – Tracy Hackshaw

Speech speed

174 words per minute

Speech length

1386 words

Speech time

479 secs

Lights, Camera, Deception? Sides of Generative AI | IGF 2023 WS #57

Table of contents

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Knowledge Graph of Debate

Session report

Deepali Liberhan

META, a leading technology company, has effectively employed AI technology to proactively remove potentially harmful and non-compliant content even before it is reported. Their advanced AI infrastructure has resulted in a significant reduction of hate speech prevalence by almost 60% over the past two years. This demonstrates their commitment to maintaining a safe and responsible online environment.

In line with their dedication to responsible AI development, META has made a pledge to follow several essential principles. These principles include prioritising security, privacy, accountability, transparency, diversity, and robustness in constructing their AI technology. META’s adherence to these principles is instrumental in ensuring the ethical and sustainable use of AI.

Furthermore, META has demonstrated its commitment to transparency and openness by publishing content moderation actions and enforcing community standards. They have also taken additional steps to foster transparency by open-sourcing their large language model, Lama 2. By doing so, META allows for external scrutiny and encourages collaboration with the broader AI community.

The company prioritises fairness in its AI technology by using diverse datasets for training and ensuring that the technology does not perpetuate biases or discriminate against any particular group. This dedication to fairness underscores the company’s commitment to inclusivity and combating inequalities.

META’s approach to AI development extends beyond internal measures. They have incorporated principles from reputable organisations such as the Organisation for Economic Co-operation and Development (OECD) and the European Union into their work on AI. By aligning with international standards, META demonstrates its commitment to upholding ethical practices and participating in global efforts to responsibly regulate AI.

Recognising the importance of language inclusivity, META emphasises the need to make AI resources and education available in multiple languages. This is particularly crucial in a diverse country like India, where there are 22 official languages. META aims to ensure that individuals from various linguistic backgrounds have equal access to AI resources, ultimately contributing to reduced inequalities in digital literacy and technology adoption.

META values local partnerships in the development of inclusive and diverse AI resources. They acknowledge the importance of collaborating with local stakeholders for a deeper understanding of cultural nuances and community needs. Engaging these local partners not only enriches the AI development process but also fosters a sense of ownership and inclusion within the communities.

In terms of content moderation, META’s community standards apply to both organic content and generative AI content. The company does not differentiate between the two when determining what can be shared on their platforms. This policy ensures consistency in maintaining the integrity of the online space and avoiding the spread of harmful or misleading content.

Prior to launching new services, META conducts extensive stress tests in collaboration with internal and external partners through red teaming exercises. This rigorous testing process helps identify and address potential vulnerabilities, ensuring the delivery of robust and trustworthy AI services to users.

User feedback is highly valued by META. They incorporate a feedback loop into their product development process, allowing users to provide input and suggestions. This user-centric approach enables continuous improvement and ensures that the technology meets the evolving needs and expectations of the users.

To combat the spread of false information and manipulated media, META has established specific policies and guidelines. They have a manipulated media policy that addresses the sharing of false content, with an exemption for parody and satire. This policy aims to promote accurate and trustworthy information dissemination while allowing for creative expression.

In terms of community safety and education, META actively consults with experts, civil rights organizations, and government stakeholders. By seeking external perspectives and working collaboratively with these entities, META ensures that their policies and practices align with an inclusive and safe online environment.

Deepali Liberhan, an advocate in the field of AI, supports the idea of international cooperation and having clear directives for those involved in generative AI. She emphasises the importance of provenance, watermarking, transparency, and providing education as essential aspects of responsible AI development. Her support further highlights the significance of establishing international partnerships and frameworks to address the ethical and regulatory aspects of AI technology.

In the development of tools and services for young people, META recognises the importance of engaging them as significant stakeholders. They actively consult young people, parents, and experts from over 10 countries when creating tools such as parental supervision features for Facebook Messenger and Instagram. This inclusive approach ensures that the tools meet the needs and expectations of the target audience.

To enhance the accuracy of generative AI in mitigating misinformation, META incorporates safety practices such as stress testing and fine-tuning into their product development process. These practices contribute to the overall reliability and effectiveness of generative AI in combating misinformation and ensuring the delivery of accurate and trustworthy information.

Collaboration with fact-checking organisations is pivotal in debunking misinformation and disinformation. META recognises the importance of partnering with these organisations to employ their expertise and tools in combating the spread of false information. Such partnerships have proven to be effective in maintaining the integrity and trustworthiness of the online space.

Public education plays a vital role in ensuring adherence to community standards. META acknowledges the importance of raising awareness about reporting inappropriate content that violates these standards. By empowering users with the knowledge and tools necessary to identify and report violations, META contributes to a more responsible and accountable online community.

In conclusion, META’s use of AI technology in content moderation, responsible AI development, language inclusivity, local partnerships, and online safety highlights their commitment to creating a safe, inclusive, and transparent digital environment. By incorporating user feedback, adhering to international principles, and actively engaging with stakeholders, META demonstrates a dedication to ongoing improvement and collaboration in the field of AI.

Hiroki Habuka

Generative AI has diverse applications, but it also presents numerous risk scenarios. These include concerns about fairness, privacy, transparency, and accountability, which are similar to those encountered with traditional AI. However, the foundational and general-purpose nature of generative AI amplifies these risks. The potential risk scenarios associated with generative AI are almost limitless.

To effectively manage the challenges of generative AI, society must embrace and share the risks due to the uncertainty surrounding these emerging technologies. Developers and service providers cannot predict or expect all possible risk scenarios. It is crucial for citizens to consider how to coexist with this cutting-edge technology.

Technological solutions and international, multi-stakeholder conversations are of paramount importance to successfully address the challenges of generative AI. These discussions involve implementing solutions such as digital watermarks and improving traceability on a global scale. The increasing emphasis on multi-stakeholder conversations, which go beyond intergovernmental discussions, reflects the recognition of the importance of involving various stakeholders in managing generative AI risks.

A balanced approach is necessary to address the privacy and security risks associated with the advancement of generative AI technology. While efforts to enhance traceability and transparency are underway, they also give rise to additional privacy and security concerns. Therefore, striking a balance between various concerns is crucial.

Regulating generative AI requires collaboration among multiple stakeholders as the government’s understanding and accessibility to the technology are limited. Given the ethical implications, policy-making should also involve more than just governmental authorities. Multi-stakeholder collaboration is necessary to effectively understand and regulate the technology.

The ethical questions brought about by evolving technologies like generative AI require democratic decision-making processes. Privacy risks and achieving a balance between public risks can be managed through democratic practices. Therefore, democratic processes are essential when addressing the ethical complexities of newer technologies.

The proposal of Agile Governance suggests that regulations for generative AI should focus more on principles and outcomes rather than rigid rules. The iterative process of governance is essential for technology evolution, allowing for adaptability and refining regulations as the technology progresses.

Private initiatives play a significant role in providing practical guidelines and principles to ensure the ethical operation of generative AI. Given the limitations of government and industry in understanding and implementing such technologies, private companies and NGOs can contribute valuable insights and expertise.

Young people should be included in decision-making processes concerning generative AI. Their creativity and adeptness in using technology make them valuable contributors. Prohibiting the use of generative AI for study or education is not the answer. Instead, measures should focus on checks for misconduct or negative impacts. Regular assessments in the study field can help ensure responsible and ethical use of generative AI.

In conclusion, generative AI holds immense potential for diverse applications, but it also comes with countless risk scenarios. Society must embrace and share these risks, and technological solutions, as well as multi-stakeholder conversations, play a crucial role in managing the challenges associated with generative AI. A balanced approach to privacy and security concerns, multi-stakeholder collaboration, democratic decision-making processes, agile governance, and private initiatives are essential in regulating and harnessing the benefits of generative AI.

Audience

During the discussion, the speakers highlighted the critical need to address the negative impact of AI-based misinformation and disinformation, particularly in the context of elections and society at large. They noted that such misinformation campaigns have the potential to cause significant harm, and in some cases, even result in loss of life. This emphasised the urgency for action to combat this growing threat.

To effectively address these challenges, the speakers stressed the importance of developing comprehensive strategies that involve various stakeholders. These strategies should engage nations, civil society, industry, and academia in collaborative efforts to counter the spread of misinformation and disinformation. By working together, these sectors can pool their resources and expertise to develop innovative solutions and implement targeted interventions.

The panel also underscored the significance of raising awareness about these challenges on a large scale. By spreading awareness through education, individuals can be equipped with the necessary tools to identify and critically evaluate misinformation. Education plays a vital role in empowering people to navigate the digital landscape and make informed decisions, contributing to a more resilient and informed society.

In conclusion, the speakers argued that addressing AI-based misinformation and disinformation requires a multifaceted approach. Strategies involving nations, civil society, industry, and academia are necessary to counter these emerging challenges effectively. Additionally, spreading awareness about the threat of misinformation through education is crucial in empowering individuals and safeguarding societal integrity. By taking these steps, we can strive towards a more informed and resilient society.

Moderator

Generative AI has the potential to bring great benefits in various fields including enhancing productivity in agriculture and modelling climate change scenarios. However, there are concerns related to disinformation, misuse, and privacy. International collaboration is crucial in promoting ethical guidelines and harnessing the potential of generative AI for positive applications. Discussions on AI principles and regulations have already begun, and technological solutions can help mitigate risks. Finding a balance between privacy and transparency, addressing accessibility, education, and language diversity, and supporting AI innovators through regulations and funding are important steps towards responsible and equitable deployment of generative AI technologies.

Vallarie Wendy Yiega

An analysis of the provided information reveals several key points regarding generative AI and its impact. Firstly, it highlights the importance of young advocates understanding and analyzing AI. The youth often lack a comprehensive understanding of AI and its subsystems, indicating the need for increased awareness and education in this field.

Another significant finding is the need for generative AI to be accessible to diverse languages and communities. The localization of AI tools in different languages is crucial to ensure that marginalized communities, who do not identify with English as their main language, can fully benefit from these technologies. A concrete example is the BIRD AI tool available in Swahili, which has different responses than the English version, demonstrating the importance of localization and testing.

The analysis also emphasizes the necessity of youth involvement in policy development for AI. It acknowledges that AI is advancing at a rapid pace, outpacing the ability of existing regulations to keep up. Thus, it is crucial for young people to play an active role in shaping policies that govern AI technology.

Furthermore, the analysis uncovers the prevalence of copyright and intellectual property issues in generative AI. It highlights the importance of safeguards to protect authors’ intellectual property rights and prevent misuse of AI-generated content. Examples such as the use of digital watermarks to indicate AI-generated content versus human-generated content and the need for client consent in data handling discussions illustrate the issues at hand.

Another crucial finding is the need for multi-stakeholder engagement in the development of regulatory frameworks for AI. This approach involves collaborating with academia, the private sector, the technical community, government, civil society, and developers to strike a balance between promoting innovation and implementing necessary regulations to ensure safeguards.

Collaboration between countries is identified as a critical factor in the responsible use and enforcement of AI. Given that AI is often used in cross-border contexts, international cooperation is essential to establish unified regulations and enforcement mechanisms.

The analysis also addresses the importance of promoting innovation while adhering to safety principles concerning privacy and data protection. It argues that regulation should not stifle development and proposes a multi-stakeholder model for the development of efficient guidelines and regulations.

Moreover, it stresses the role of governments and the private sector in educating young people about generative AI. It argues that the government cannot work in isolation and should be actively involved in incorporating AI and generative AI into school curriculums. Additionally, the private sector, civil society, and online developers are encouraged to participate in educating young people about generative AI, going beyond the responsibility solely resting on the government.

The analysis provides noteworthy insights into the challenges faced by countries in regulating and enforcing AI technology. It highlights that despite being forward-leaning in technology, Kenya, for example, lacks specific laws on artificial intelligence. The formation of a task force in Kenya to review legislative frameworks regarding ICT reflects the country’s effort to respond to AI and future technologies.

In conclusion, the analysis underscores the urgent need for young advocates to understand and analyze AI. It emphasizes the importance of diverse language accessibility, youth engagement in policy development, safeguards for copyright and intellectual property, multi-stakeholder engagement in regulatory frameworks, international collaboration, and the promotion of innovation with privacy and data protection safeguards. It also emphasizes the roles of governments, the private sector, civil society, and online developers in educating young people about generative AI. These insights provide valuable guidance for policymakers, industry leaders, and education institutions in navigating the challenges and opportunities presented by generative AI.

Olga Kyryliuk

Generative AI technology offers numerous opportunities but also poses inherent risks that require careful handling and education. It is argued that generative AI cannot exist separately from human beings and thus requires a strong literacy component. This is especially important as generative AI is already widely used. The sentiment regarding this argument is neutral.

To effectively use generative AI, education and awareness are deemed necessary. Analytical thinking and critical approaches are crucial when analyzing the information generated by AI. It is suggested that initiatives should be implemented to teach these skills in schools and universities. This approach is seen as positive and contributes to SDG 4: Quality Education.

On the other hand, generative AI has the potential to replace a significant number of jobs. Statistics suggest that around 80% of jobs could be substituted by generative AI in the future. This perspective is viewed negatively, as it threatens SDG 8: Decent work and Economic Growth.

To address the potential job displacement, there is a call for reskilling and upskilling programs to prepare the workforce for the usage of generative AI tools. Microsoft and data.org are running such programs, promoting a positive sentiment and supporting SDG 8.

Efforts should also be united to raise awareness, promote literacy, and provide education around generative AI. The technology is accompanied by risks that require a collective approach involving proper understanding and education. This argument is viewed positively, aligning with SDG 4 and SDG 17: Partnerships for the goals.

Internet governance practices offer a collaborative approach towards finding solutions related to generative AI. Communication between different stakeholders, including the inclusion of civil society, is vital in these discussions. This perspective is regarded positively and supports SDG 17.

Stakeholder involvement, including creators, governments, and users, is seen as essential in shaping meaningful and functional policies concerning generative AI. Governments across the world are already attempting to regulate this technology, and users have the opportunity to report harmful content. The sentiment is positive towards involving a diverse range of stakeholders and aligns with SDG 9: Industry, Innovation, and Infrastructure and SDG 16: Peace, Justice, and Strong Institutions.

A balanced legal framework is deemed necessary to avoid hindering innovation while effectively regulating harmful content. It is acknowledged that legal norms regulating harmful content already exist, and overregulating the technology could impede innovation. This viewpoint maintains a neutral sentiment and supports SDG 9.

Furthermore, advocates for user education and awareness on concepts like deepfakes emphasize the importance of a better understanding and prevention. Users need to know how to differentiate real-time videos from deepfakes, and educational programs should be in place. This perspective is positively viewed, aligning with SDG 4.

In conclusion, generative AI offers great potential but also carries inherent risks. Education, awareness, stakeholder involvement, and a balanced legal framework are crucial in handling this technology effectively. Additionally, reskilling and upskilling are necessary to prepare the workforce for the adoption of generative AI tools.

Bernard J Mugendi

The analysis focuses on several key points related to the use of generative AI. One of the main arguments put forth is the importance of promoting remote access and affordability, especially in rural areas. It is highlighted that rural areas often struggle with internet connectivity, and the affordability of hardware and software platforms is a pressing challenge. This is particularly relevant in communities in East Africa and certain regions in Asia.

The speakers also emphasize the need for generative AI solutions to be human-centred and designed with the end user in mind. They give the example of an agricultural chatbot that failed due to a language barrier encountered by the farmers. Therefore, understanding the local context and considering the needs and preferences of the end users is crucial for the success of generative AI solutions.

Additionally, the analysis underscores the value of data sharing among stakeholders in driving value creation from generative AI. It is mentioned that the Digital Transformation Centre has been working on developing data use cases for different sectors. Data sharing is seen as fostering trust and encouraging solutions that can effectively address development challenges. An example of this is the Agricultural Sector Data Gateway in Kenya, which allows private sector access to various datasets.

The speakers also emphasize the importance of public-private partnerships in the development of generative AI solutions. They argue that both private and public sector partners possess their own sets of data, and mistrust can be an issue. Therefore, creating an environment that fosters trust among partners is crucial for data sharing and AI development.

Collaboration is deemed essential for generative AI to have a positive impact. The analysis highlights a multidisciplinary approach, where stakeholders from the public and private sectors come together. An example is given in the transport industry, where the public sector takes care of infrastructure, while the private sector focuses on product development.

Furthermore, there is a call for more localized research to understand the regional-specific cultural nuances. It is acknowledged that there is a gap in funding and a lack of engineers and data scientists in certain regions, making localized research vital for understanding specific needs and challenges.

The speakers also emphasize the importance of transparency in the use of generative AI. They mention an example called “Best Take Photography,” where AI generates multiple pictures that potentially misrepresent reality. To ensure ethical use and avoid misrepresentations, transparency is presented as crucial.

The need for more engineers and data scientists, as well as funding, in Sub-Saharan Africa is also highlighted. Efforts should be made to develop the capacity for these professionals, as they are essential for the advancement of generative AI in the region.

In addition to these points, public awareness sessions are deemed necessary to discuss the potential negative implications of generative AI. The example of “Best Take Photography” is used again, showing the risks of generative AI in creating false realities.

The analysis makes a compelling argument for government-led initiatives and funding for AI innovation, particularly in the startup space. The Startup Act in Tunisia is presented as an example of a government initiative that encourages innovation and supports young innovators in AI. It is argued that young people have the ideas, potential, and opportunity to solve societal challenges using AI, but they require resources and funding.

Lastly, the speakers highlight the potential risks of “black box” AI, where algorithms cannot adequately explain their decision-making processes. This opacity can lead to the spread of misinformation or disinformation, underscoring the need for transparency in how models make decisions.

Overall, the analysis provides valuable insights into the various aspects that need to be considered in the use of generative AI. It highlights the importance of addressing challenges such as remote access, affordability, human-centred design, data sharing, public-private partnerships, collaboration, localized research, transparency, capacity development, public awareness, government initiatives, and the risks of black box AI. The emphasis on these points serves as a call for action in leveraging generative AI for positive impact while addressing potential pitfalls.

Speakers

Speech speed

0 words per minute

Speech length

words

Speech time

0 secs

Click for more

&

’Bernard

Speech speed

0 words per minute

Speech length

words

Speech time

0 secs

Click for more

&

’Deepali

Speech speed

0 words per minute

Speech length

words

Speech time

0 secs

Click for more

&

’Hiroki

Speech speed

0 words per minute

Speech length

words

Speech time

0 secs

Click for more

Speech speed

0 words per minute

Speech length

words

Speech time

0 secs

Click for more

&

’Olga

Speech speed

0 words per minute

Speech length

words

Speech time

0 secs

Click for more

&

’Vallarie

Speech speed

0 words per minute

Speech length

words

Speech time

0 secs

Click for more

IRPC Human Rights Law and the Global Digital Compact | IGF 2023 #24

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Helani Galpaya

The Global Digital Compact (GDC) is receiving positive feedback for its consistent reference to human rights. However, there is a lack of clarity on how the guiding principles on business and human rights will be implemented within the GDC framework. The non-binding and voluntary nature of these principles has resulted in little visibility on the implementation process.

Furthermore, stakeholders are not effectively brought together in the GDC discussions, hindering progress. Different stakeholder groups, such as businesses and civil societies, have been talking separately rather than engaging in meaningful dialogue. The current draft of the synthesized document does not facilitate interaction between these stakeholder groups, exacerbating the issue.

The GDC and the multilateral system also fall short in holding nations accountable for human rights violations. Private companies and state mechanisms are identified as major violators of human rights, and rogue nations are not adequately held accountable for hindering tax negotiations and censoring the internet. This lack of accountability undermines the effectiveness of the GDC and the wider multilateral system in ensuring peace, justice, and strong institutions.

On a positive note, the GDC recognises the interrelation between human rights and socio-economic rights. It highlights that the type of human rights one can engage in depends on socio-economic power. This understanding helps address the issue of inequality and promotes a more holistic approach to human rights.

However, there is a stark divergence between the vision of the GDC and the reality on the ground. Proposed laws and regulations are being introduced that fundamentally hinder rights, despite ongoing GDC discussions. There is a rush to enact these laws, resulting in violations of rights. This inconsistency undermines the credibility and impact of the GDC.

The responsibility for internet usage lies with all actors in the value chain, including consumers. The GDC emphasises the need for educational measures to teach civic responsibility online and to change behaviours. It draws on the example of the environmental protection movement, which has successfully ingrained responsibilities into society. This reinforces the importance of individual responsibility in ensuring a safe and inclusive digital space.

Furthermore, it is vital to teach internet users, especially young ones, about their responsibilities and appropriate civic behaviours online. The permeation and unintentional reach of ideas on the internet highlight the need for individuals to be cautious and mindful of their actions and words online.

Notably, the consultation process for the GDC is considered inclusive compared to other global processes. However, it is acknowledged that this inclusivity is inherently imperfect. Consultation processes often tend to be dominated by privileged individuals, limiting diverse perspectives and hindering the effectiveness of the GDC.

Ground-level action by local agents is seen as essential after the development of a suitable Global Digital Compact. It is important to ensure that nations develop policies that align with the GDC post-formation. This implementation is crucial for translating the principles and objectives of the GDC into tangible actions and outcomes.

In conclusion, while the Global Digital Compact shows promise in its commitment to human rights and the recognition of interrelated socio-economic rights, there are significant challenges that need to be addressed. The lack of clarity on implementation, insufficient stakeholder engagement, and the failure to hold nations accountable for violations all weaken the effectiveness of the GDC. Nonetheless, the responsibility for internet usage lies with all actors, and teaching individuals about their responsibilities online is vital. The consultation process for the GDC is relatively inclusive but can still be improved. Ground-level action is crucial for translating the GDC into meaningful policies and outcomes.

Moderator – Raashi Saxena

The Global Digital Compact (GDC) is a collaborative effort between the United Nations, governments, and civil society to address the issue of technology avoidance in relation to the Sustainable Development Goals (SDGs). This multi-stakeholder approach aims to bring together various actors to find innovative digital solutions.

The International Rights and Policy Coalition (IRPC) plays a significant role in the GDC, particularly in addressing the gaps of digital inclusion and connectivity for marginalized groups. The IRPC focuses on fostering public and political participation for women, migrants, and refugees. By prioritising digital inclusion, the IRPC aims to reduce inequalities and promote SDG 10 โ€“ Reduced Inequalities.

Equating offline and online human rights is crucial, especially concerning freedom of expression, information, and net neutrality. The recognition of these fundamental rights in the digital realm contributes to SDG 16 โ€“ Peace, Justice, and Strong Institutions. The goal is to ensure that individuals have the same rights and protections both offline and online.

Artificial Intelligence (AI) has a significant impact on various sectors, including financial services and public health. The use of AI in recruitment processes raises concerns about privacy and security. Additionally, the development of synthetic media and deepfakes poses challenges in terms of trust and authenticity. The implications of AI on SDG 3 โ€“ Good Health and Wellbeing and SDG 8 โ€“ Decent Work and Economic Growth need to be carefully considered.

The European Union (EU) Coalition on Internet Governance actively encourages partnerships and youth involvement. They have a partnership with the Youth Coalition on Internet Governance and encourage young people to participate and find opportunities through their email list. This commitment to collaboration and youth engagement aligns with SDG 17 โ€“ Partnerships for the Goals and SDG 9 โ€“ Industry, Innovation, and Infrastructure.

In the context of Raashi Saxena’s event, active audience participation is encouraged through group activities. The audience will be divided into groups of four or five, promoting interaction and engagement. This approach creates an inclusive environment that allows participants to exchange ideas and perspectives.

Overall, the GDC, IRPC, the recognition of offline and online human rights, the implications of AI, the EU Coalition on Internet Governance, and Raashi Saxena’s audience activities all contribute to advancing the SDGs and fostering a more inclusive and innovative digital future.

Moderator – Santosh Babu Sigdel

The Dynamic Coalition on Internet Rights and Principles (DCIRP) is actively working towards ensuring that human rights are effectively upheld in the online space. They have developed a charter on Human Rights and Principles, which has been translated into multiple languages to engage stakeholders on both regional and national levels. Their efforts have resulted in the availability of the charter in 28 languages, allowing for broader accessibility and understanding of human rights in the context of the internet.

The IRPC (Internet Rights and Principles Coalition) advocates for a rights-based approach to internet frameworks. They have been actively involved in various international forums, such as EuroDIG in Finland and the UNESCO Conference in France, as well as the Internet Governance Forum (IGF). Through their engagement with organizations and volunteers, they have raised awareness about digital rights and have collaborated with various organizations to promote the rights-based approach.

The importance of collaborating with local individuals in the translation process is emphasized by both the DCIRP and IRPC. They highlight the need for human awareness and discretion in using accurate and contextually relevant translations. Local stakeholders discussing and reviewing drafts among themselves helps ensure accuracy and maintain the integrity of the translated content.

Furthermore, the translation of the charter goes beyond language alone; it also aims to build local capacity. Participants are engaged with both the language and the concepts presented in the charter, contributing to a deeper understanding and broader adoption of the Human Rights and Principles it encompasses. This approach promotes inclusivity and empowers local communities to actively participate in the enforcement of these principles.

However, the regulation of misinformation and disinformation online can pose a significant challenge to freedom of speech. Governments may use the idea of responsibility as a pretext to control and restrict legitimate expressions, discussions, and the spread of information. Striking a balance between protecting the public from harmful misinformation and disinformation while safeguarding the fundamental right to freedom of speech and expression is important.

In South Asian countries, including Nepal, there have been attempts by governments to impose internet regulations under the guise of responsibility. Such actions raise concerns regarding the potential erosion of freedom of speech and expression, highlighting the need to be vigilant and ensure the preservation of these fundamental rights.

The Global Digital Compact (GDC) is an initiative started by the United Nations in 2021 to address digital challenges and promote peace, justice, and strong institutions. However, there is a lack of awareness about the GDC in South Asia and other developing countries. This lack of awareness poses a challenge to the enforceability and effectiveness of the GDC. Broad stakeholder participation is crucial in the design process of the GDC, but stakeholders in South Asia and other developing countries are often not adequately informed about the initiative. An all-inclusive approach that incorporates the perspectives and insights of stakeholders from diverse regions and backgrounds is essential for the GDC to have a meaningful impact.

In conclusion, the work of the DCIRP and IRPC in promoting and upholding human rights in the online space is commendable. Their efforts encompass translation, awareness-raising, capacity-building, and engaging stakeholders to ensure a rights-based approach in internet frameworks. Challenges remain in striking a balance between regulating misinformation and preserving freedom of speech, as well as raising awareness about initiatives such as the Global Digital Compact. Building collaborations and inclusivity in these efforts are important for addressing these challenges and achieving a more equitable and rights-centric digital landscape.

Audience

During the discussion on internet governance and digital rights, various topics were explored. Piu, a member of the EU Coalition on Internet Governance, expressed interest in getting involved in the dynamic coalition and highlighted the importance of young people’s participation in updating chapters and providing translation. This emphasises the need to include diverse perspectives in shaping internet governance policies.

The debate about freedom of expression, both online and offline, raised important considerations about finding a balance between regulation and protecting free expression. The principles of legitimacy, necessity, and proportionality were identified as essential factors in ensuring that governments do not overstep their boundaries when regulating online content. This argument acknowledges the importance of safeguarding both freedom of expression and preventing harm.

Another crucial aspect discussed was the need for a clear delineation of responsibilities between states, businesses, and stakeholders in the online environment. Each stakeholder was recognised as having a role in balancing freedom of expression with the prevention of the spread of harmful content. This underscores the significance of collaboration and cooperation among different actors to maintain a safe and open digital space.

Recognizing the potential of technology to be inclusive for individuals with disabilities, the speakers appreciated the advancements that can create more accessible spaces and opportunities. However, they also highlighted challenges such as regional variations in sign language and the continued struggle for internet accessibility for individuals with hearing impairments. This highlights the ongoing work required to bridge these gaps and ensure true inclusivity in the digital realm.

Elevating community involvement and empowerment emerged as a key theme in the discussion. The voices of people with disabilities often face barriers in being heard, and existing systems can hinder their active participation and expression of needs and opinions. The speakers highlighted the importance of fostering a supportive environment that uplifts and amplifies the voices of individuals with disabilities, advocating for inclusive design, technological advancement, and increased public participation.

The participants also examined the rights of children in the context of the internet, acknowledging both the positive and negative aspects of their exposure online. The need for careful monitoring and regulation to protect children’s cognitive development was emphasized, as unrestricted internet access can have detrimental effects. Additionally, there was a call for stronger mechanisms to prevent exploitation and abuse of children online, as they are often more vulnerable in the digital environment than in the physical world.

The Global Digital Certificate (GDC) solution and its connection to human rights were questioned during the discussion. The participants observed that none of the group members had considered the GDC solution from a human rights perspective. This raises concerns about the potential implications of implementing a global digital certificate system and highlights the importance of assessing its compatibility with fundamental rights and freedoms.

The speakers also criticized government-imposed internet shutdowns as a breach of digital rights. They argued that strong legislation at the national level is necessary to effectively enforce digital rights. The absence of nation-specific legislation was identified as a significant obstacle to protecting and upholding these rights.

To safeguard digital rights, the participants proposed strategic litigation as an effective approach. By utilizing existing rights such as the right to privacy and the right to access information, individuals and organizations can challenge government actions that infringe upon these rights. This highlights the importance of legal strategies and creative approaches in upholding and defending digital rights.

In conclusion, the discussion on internet governance and digital rights delved into various aspects, including the involvement of different stakeholders, the balance between regulation and free expression, the inclusivity of technology for individuals with disabilities, the protection of children’s rights online, and the legal considerations surrounding digital rights. The exploration of these topics provides valuable insights into the complexities and challenges of governing the digital realm while upholding fundamental rights and ensuring inclusivity for all.

Vint Cerf

Vint Cerf, a prominent figure in the field of technology, highlights the importance of considering human responsibilities alongside human rights in the online environment. He argues that online users and providers should be well-informed about their responsibilities and actively fulfil them. Cerf’s stance suggests a shift towards a more holistic approach in creating a safe and responsible online space.

The concept of the social contract, as proposed by philosopher Jean-Jacques Rousseau, is also brought into the discussion. According to Rousseau, individuals agree to relinquish certain freedoms in exchange for the protection of their remaining rights. This notion emphasizes the understanding that, as we enjoy certain rights, we also hold responsibilities for the well-being and security of others.

The correlation between rights and responsibilities is another crucial aspect highlighted in the analysis. It is important to acknowledge that while we demand our rights to be respected, it is also important to recognize and fulfill our responsibilities towards others. This recognition helps establish a balanced and harmonious relationship between individuals and society.

Additionally, the mention of norms as behavior patterns taught by society without the need for enforcement offers an interesting perspective. Norms play a significant role in shaping our understanding of responsibilities and guiding our actions within a social setting. They provide a framework for acceptable behavior and assist in fostering cooperation and cohesion within communities.

In conclusion, the analysis reveals an increasing recognition of the importance of human responsibilities alongside human rights, particularly in the online environment. Vint Cerf’s standpoint, along with the concepts of the social contract and the correlation between rights and responsibilities, encourages individuals to be more aware of their obligations and to contribute to the creation of a responsible and ethical digital space. By understanding and fulfilling our responsibilities, we can build a more inclusive and harmonious society, both online and offline.

Wolfgang Benedek

Wolfgang Benedek raises concerns regarding the Global Digital Compact (GDC) and its effectiveness in advancing human rights. Benedek questions the value added by the GDC in terms of progress towards the rights enshrined in the charter and its implementation.

One of Benedek’s main points is the lack of enforcement within the GDC. Despite the commitments made within the compact, Benedek highlights a lack of genuine and effective measures to ensure compliance. This lack of enforcement undermines the potential impact of the GDC in promoting and protecting human rights in the digital sphere.

Furthermore, Benedek emphasizes the challenge of reaching agreement within the GDC. He suggests that the difficulties in achieving consensus on crucial issues hinder substantial progress towards the goals outlined in the charter. This lack of agreement may result from differing perspectives and priorities among the stakeholders involved.

Through these criticisms, Benedek highlights the need for improvements in the implementation and effectiveness of the GDC. His concerns underscore the importance of meaningful enforcement mechanisms and a more inclusive and collaborative decision-making approach. By addressing these issues, the GDC can enhance its ability to promote and uphold digital human rights.

In conclusion, Wolfgang Benedek questions the value of the Global Digital Compact’s progress towards human rights, focusing on two key areas: the lack of enforcement and the challenges of reaching agreement. These criticisms highlight the need for improvements in the implementation and effectiveness of the GDC. By addressing these concerns, the GDC can play a stronger role in advancing human rights in the digital age.

Dennis Redeker

During the conference on internet governance and the Global Digital Compact (GDC), several topics were discussed by the speakers. One important development was the translation of the 10 Principles document of the charter into Japanese by the Dynamic Coalition for the IGF conference in Japan. The aim of this translation was to generate more interest among Japanese stakeholders and promote wider adoption of the Charter. As a next step, a task force is being set up to translate the entire Charter into Japanese, and they are inviting individuals with knowledge in internet governance or international law, or ideally both, to join this initiative.

Additionally, a survey revealed that the majority of internet users believe that technical experts should have the most influence when shaping the GDC. However, this perception doesn’t align with reality. According to the survey, businesses are perceived to have more influence than necessary, while national governments and academics are perceived to have less influence. This highlights the need for more public consultation and input in shaping the GDC, with less reliance on traditional powerholders. It is essential to seek the opinions of citizens, NGOs, and academics to ensure a more inclusive and representative digital governance framework.

The Internet Rights and Principles Coalition (IRPC) plays a significant role in the translation of their charter into different languages. They collaborate with various partners, including universities and student groups, to facilitate the translation process. This not only creates a valuable resource for the community but also helps students gain a better understanding of the implications and context of the charter in their own language. This collaborative effort in translating the charter into different languages provides an instructive experience for students and translators.

Dennis Redeker, an organizer at the conference, is facilitating a group discussion activity to promote engagement and a deeper understanding of the charter. The activity involves grouping audience members and allowing them to discuss specific articles of the charter. Participants are encouraged to choose an article of interest and consider potential challenges that may arise in the next 10 years regarding the associated rights. The aim is to find ways in which the Global Digital Compact can effectively address these challenges.

Dennis Redeker emphasizes the importance of continuing discussions on the relevance of the charter articles not only in the present but also in the future. He encourages participants to share copies of the principles with friends who may be interested, thus spreading awareness about the charter and its importance.

In conclusion, the conference on internet governance and the Global Digital Compact addressed various topics, including the translation of the charter into Japanese, the perception versus reality of stakeholder influence, the role of collaboration in translation efforts, and the need for public consultation. The group discussion activity led by Dennis Redeker aimed to foster engagement and explore challenges and solutions regarding the charter. Overall, it highlighted the significance of inclusive and representative digital governance for a more equitable and sustainable future.

Session transcript

Moderator – Santosh Babu Sigdel:
for being here. I’m moderating the first session of this workshop. English is not my first language, so bear with me. I’m Santosh Sigdel from Nepal. I’m co-chair of this Dynamic Coalition on Internet Rights and Principle and executive director of Digital Rights Nepal. So this is a dynamic coalition session, workshop on human rights law and the global digital compact. And in this session, the agenda of the session is we’ll be discussing briefly about the work of the IRPC, setting about the Dynamic Coalition. After that, we’ll be discussing briefly about the digital global compact and the human rights. And after that, there will be short speeches from three of the experts. Three of the experts. And after that, there will be a group session where we’ll be navigating the current human rights challenges or the challenges we might face in next 10 years and how global digital compact can be used as a tool to navigate those challenges in the online spaces. And there will be some recommendation for the stakeholders from our side to address those challenges that we foresee through our group exercise. So that is the outline of this workshop. And regarding this Dynamic Coalition, we started it in 2008. And it is an open network of individual and organization who are making human rights effective in the online space. And this is an open coalition. Anybody can join the coalition by subscribing to the email. And our website for this coalition is internetrightsandprinciple.org. And since 2014. The IRPC coalition is also the steering committee member of the Media and Information Society at the Council of Europe. The charter of the coalition, one of the major outcomes of the dynamic coalition is the Human Rights Charter and Principles, the 10 Internet Rights Principles and the Charter of the Rights and Principles. And it was published in 2018. And one of the major our work is translation, making it available in different languages. And so far, the charter has been translated into 12 languages. And it is available in 28 languages. The charter, the principles are available in 28 languages. Through the charter, we engage with the stakeholder in regional level, at the same time at the national level, country level. And we also collaborate with other dynamic coalition. And we engage with the stakeholder, for example, National Human Rights Commission in different countries. At the same time, we also raise the awareness about the need of implementing the right-based approach while developing and implementing internet framework in country level. So we have been, the translation work is kind of voluntary approach. So there is no dedicated support or funding for that. So we collaborate with the other organization and volunteers for different countries. And Dennis would be saying about the Japanese charter that we recently, last year, we had translated the charter into Nepali. And we had launched it in the IGF Ethiopia in Addis. This year, we have translated the charter into Japanese language. And we have got a copy of the charter in Japanese as well. So Dennis will be talking about the Japanese translation in a while. But before that. I’ll be setting about the work that the DC Dynamic Coalition carried out this year, in 2023. So the Dynamic Coalition officially presented at the EuroDIG in Finland. Similarly, in Africa, we participated in the Ghana SIG and presented the charter. There was a UNESCO Conference on Platform Accountability, sorry, Guideline for the Social Media Governance in France, where we participated and presented the charter. Similarly, there was a similar event in Nepal, where we participated. I represented IRPC at that time. We also engaged this year with the National Human Rights Commission of Nepal and advocated for the implementation of the charter in the local context and emphasizing their role, role of the National Human Rights Commission. Similarly, we audited our website for the accessibility approach, and we improved the accessibility of the IRPC website this year. Similarly, in IGF earlier, we co-organized the Global Digital Compact Southern Perspective Workshop. And similarly, the work of this year, the Japanese translation of the charter, which Dennis will be further explaining about it.

Dennis Redeker:
Absolutely. Thank you so much, Santosh. Indeed, today, and on occasion of the IGF in Japan, we’re launching the Japanese translation of the 10 Principles document. This is the short version, if you will, of the charter. The charter is a document of 25-plus pages. That obviously is a much bigger effort. This 10 Principles document, we got help translating for this conference. And this conference also is the place where we hope that we can garner some more interest in the charter among Japanese stakeholders in order to get more engagement of or coalition with the people in Japan. And so if you know someone, if you’re yourself a Japanese speaker. Feel free to join a task force we’re setting up to translate the Charter in its entirety. As we said, we now have 12 different translations of the Charter. It would be great to have a 13th soon, and that’d be a Japanese version. This is teamwork. It’s not that much time commitment. We look for people who have a grasp of Internet governance or of international law, ideally both. And you can contact me, you know, if you’re here in the room. Contact me, write me a personal message if you’re online. We’re happy to engage, and we hope to launch this then at a Japan IGF or an Asia-Pacific regional IGF or the IGF next year. And that’s already it with the updates from the coalition. Thank you for listening. We’ll distribute those after the session or in between, so you can take a flyer home if you want. There’s also the website on there.

Moderator – Santosh Babu Sigdel:
Thank you, Dennis, for the update. So this was the initial opening of the workshop where we are describing about the DC session and our work. So we are now entering into the main part of the session, talking about the human rights law and global digital compact. So in this main session, there will be three speakers talking about the different different dimension of the global digital compact and human rights law. So to open the session, I request co-chair of the Dynamic Coalition, Rasi, please.

Moderator – Raashi Saxena:
Hi, everyone. I hope you’re doing well. I’m also moderating the session, so I’m going to talk a little bit lightly about the GDC and just give everyone an overview of the events that have happened, how we’ve participated, and then, of course, give it to Helani and Dennis to move forward with our program. So the GDC was agreed upon in the summit of the future, looking at the roadmap. recommendations of digital corporations with United Nations governments, civil society, and looking at a more multi-stakeholder digital technology track. We’ve seen this also in a lot of conversations where the SDGs are very technology avoidant, so maybe this perhaps is a good gap to, at least we look at it as a way to fill it up. And then there was a there was also a decision held at the UN General Assembly that was held on the 22nd and 23rd of September, which would look at opening for comments on, you know, collecting inputs from interested stakeholders that were considered with GDC. We also have Sweden and Rwanda that are looking at appointing as co-facilitators into this process, looking at roadmaps, and from an IRPC point of view, some of the topics that interest us is of course the topic of digital inclusion and connectivity, and of course we have, you know, a few topics in our charter where it directly goes into addressing digital inclusion gaps that are faced by women and migrants and refugees, more looking at marginalized groups and how digital inclusion can also foster and include public participation, political participation, and how involving these groups in digital policymaking processes is important. We also look at the point of human rights online, where how can we equate that the same rights that people have offline are also seen online, looking at cross-cutting issues on freedom of expression, information, net neutrality, and of course also looking at promoting multilingualism and promotion of cultural diversity, and also looking strongly at children’s rights and how they should have agency and looking at ensuring that there is protection of privacy. There’s also one on digital trust and safety looking at artificial intelligence being an integral component on how the pervasive influence on how it could include and influence financial services and how one should have agency on knowing whether AI is used in the process of for example in recruiting, in the public health sector and looking at the ramifications of AI on driving economic growth but looking at it for more privacy concern, safety concern, security concern with the invent of synthetic media, deepfakes but yeah AI also looking at an advanced technology that could look at examples that are more triggering for regulation, having more policy discussions but of course just giving you a little overview and then can have Helani join in.

Helani Galpaya:
Okay right as a sort of a last-minute addition to this panel I’m gonna be look I think the GDC in its sort of current policy draft whatever which is really the only document we have to go on along with everything that was said in the various different consultations. I think it does a good job of referring repeatedly to human rights. It also refers to ongoing instruments sort of you know like guiding principles on business and you know the universal declaration of human rights so it doesn’t try to create a new set of rights I think you know that’s as it should. should be. There’s a lot about misinformation and curbing of that, putting responsibility off platforms, and then also about artificial intelligence. So there’s a huge focus on Article 19, specifically on rights. I think what it doesn’t do, and there’s time to do that, is really think about how those mechanisms are actually going to be implemented, like even the guiding principles on business and human rights. These are not binding. These are voluntary sign-up things. So I’m not quite sure how that process is, and there has been very little visibility on that. So I think that’s kind of important. One of the critiques I made yesterday in a session that Ambassador Gill was there as well was that there has so far been very little, really, to bring the different communities together. Because we’ve been, civil society consultations have been in one side. The businesses have been talking separately amongst themselves, et cetera. And somehow, this synthesized document in its current draft is out. But really, it’s the interaction of these different various stakeholder groups that I think has to be facilitated. But I find a fundamental structural issue in that private companies are not the only violators of human rights. And in fact, state and state mechanisms are a huge violator. And in setting this, as much as it gives way to multi-stakeholder consultation, it’s still set within a multilateral system where nation states are the primary voting base when it comes to key decisions. And that system is completely unable to hold rogue nations to account. People who hinder tax negotiations because they are the biggest earners in terms of tax when it comes to the platform economy, people who filter the internet for billions of people and don’t let their citizens browse what they like or say what they like. People who shut down the internet under the flimsiest of excuses. There is no mechanism to hold nations to account. I mean, this is just a feature of the multilateral system, not of the GDC, by the way, right? But I think that’s a real fundamental shortfall and we do really need to think about this. There’s no point talking about human rights when wars are going on. And there’s no mechanism to really intervene. So what are we talking about the internet? I think the other issue is that human rights are viewed as sort of this, almost this exotic set of rights without really embedding them in the other important set of rights, with our socioeconomic rights. And they’re incredibly intertwined. The GDC talks about the need for connectivity almost as if it’s like a one, zero, you have connectivity, but it’s going to be a moving sort of a continuum. And the kind of human rights you can engage in are really dependent on the kind of socioeconomic power you have. And the internet is vital to that, right? So they’re really interconnected. So I don’t think it really helps today to talk about these two things, the socioeconomic rights versus human rights, as if one is more important than the other. They’re so intertwined and quite important. There’s also, I think my last point is that the GDC is going on in this process, and in a way quite removed from what’s really happening on the ground. Because while this conversation is going on and everyone is waiting for the summit of the future, what’s really going on, for example, in South Asia is that everyone is coming up with laws and regulations that really fundamentally hinder rights. A recent example would be the proposed online safety bill. It’s called the online safety bill. It’s a speech moderation bill in Sri Lanka, right? We can cite many, many examples. Now we have no idea how. these two parallel systems are going to, you know, the GDC envisions this, you know, rights-based human-centric world and yet the underlying layer of laws, which are rushing, by the way, because, you know, everyone is pushing for these, they should be done, but they are completely, you know, removed from any of that GDC discussions and the vision we have for the future. So we’re already going to have a whole set of laws that violate, we already have, and the new ones are also going to violate, you know, speech and many other rights and that connection sort of is not really made and I find that problematic. So, thank you.

Moderator – Raashi Saxena:
Someone’s online or do you want me to comment? Is someone online or do I give them the floor for that? Who is online? Okay, so who? Yeah, we’ll have Wolfgang next. You can go first. If Wolfgang is not ready, I can actually see now. Sorry about that. Wolfgang, are you online? Okay, you can go ahead. Okay, Dennis, go ahead.

Dennis Redeker:
And I think it actually fits quite well here. Now, it’s the academic University of Bremen, Dennis Redeker, University of Bremen, speaking now, but really trying, I’m really trying to focus and to focus my talk and things I brought on this question of who are the potential human rights violators anyways, and you mentioned, Ilana, you mentioned states as one of the main sources potentially, so it’s not just companies obviously, and what I do when I I do research, I ask people about their opinions across the world, largely in the global South and Eastern Europe. So I’m presenting some results here that I think are quite interesting for those who care about the global digital compact consultations and the process. And then they relate, I think, to also what maybe people in these countries think about their governments with relation to making good rules on a global level and their government’s influence on processes at the UN level in this field. Now, it’s a very big challenge for someone who does public opinion research to actually ask the general population about a global digital compact. Because what have average internet users that I ask in terms of knowledge on the consultations and on the process? It’s very difficult. It’s a technical process. Most people will not be very attuned to the topics of global internet and digital governance. So I asked easier questions. I asked people, in your opinion, who should provide the input into writing of the global digital compact? And this is now explained to people who responded to the survey, what the GDC in general is. And I asked them, OK, so if that makes new rules on a global level for digital technologies and so on, who should have an influence on the global digital compact? And then I also asked them, in reality, who do you think actually has an influence in the process? And then lastly, I also asked them, so which are the principles? If you have to, in a very abstract way, have to think about the principles that you care most about, which of those do you want to be seeing as affirmed in the global digital compact? The survey I ran with a couple of colleagues. Among them here are actually Faye Courthouse, I want to point you out, one of the research assistance collaborators here in the room. This is a web-based survey asking internet users about their opinions, recruitment among social media users, the ads that were put out for this and were seen by 31.4 million users. There was a quota sampling conducted. In total, 41 countries, six languages, last year, early this year, and 17,500 people responded. And I asked them, OK, so who do you think the UN should listen to when it listens to different stakeholders? You can’t necessarily read it when you’re in the room here, so I’m going to read it out. The very left bar, about 60%, is technical experts. So internet users think that technical experts should have the most influence on shaping the GDC. Secondly, academics, with about 50%. 45% of the citizens think that citizens should be heard by the UN in the process. Obviously, in the consultations, there were opportunities like that. Then civil society, at about 40 plus percent of the respondents. And national governments, interestingly, although they seem to have a great deal of impact on the GDC, obviously, only 35% of the respondents thought they should be listened to and give input. Businesses, even less so. This is even more interesting. About 18% or so of respondents think that they should be heard on the Global Digital Compact. And I also asked them, OK, so this is what you want, but what do you think actually happens? Well, they think technical experts, they’re being listened to. No question about that. Academics, they’re not listened to enough, and I obviously agree. Citizens are not listened to. That’s what they think, in comparison to what they think, how it should be. NGOs, a little bit less than there should be. National governments are not being listened to. This is what the people out there think. I mean, this is 17,500 people. So they might be wrong. They’re certainly not wrong. Right, I think. But this is the perception. This is the idea what people have when you ask them. These are the general population. And people think that businesses, although they shouldn’t be listened to, they’re actually more listened to than they should be. So this is, I think, quite telling about this question. And it may actually tell you something about what, and this is connecting to the previous talk, what people think about who might be risks for human right-based digital governance. It might be the companies, it might be the states, but not the others. And so please let the others have an influence on this. And then I also asked the question, so this is a very, the topics of the GDC are very complex. You can’t really ask, how should the text be in the end? Please write it down. So I asked people about core principles. I selected some, and I asked them about all these principles, and asked them whether they agreed that is a principle that should definitely be included. Security of children online tops the list. Most people, more than 70% of the respondents think that’s very important, should be included. Security of privacy online, more than 60%. Fighting hate speech online is quite up there on the list. But also protection of intellectual property. But also greater cultural and linguistic diversity online. Network neutrality still makes the list kind of halfway. Right to encryption of data, more innovation, less regulation of digital technology is rather in the, I think, last third of those. Open source software doesn’t really feature up high. Neither does open data. And also those who say there should be no censorship online are only about 20% to 30%. This is probably also a result of conversation. I mean, what does no censorship mean, right? This also means no content moderation, essentially, if you will, of any kind. I have only just one or two slides more. And this is a slide that shows in which countries people tend to say there should be no censorship. online. And these are primarily countries in Eastern Europe, Central and Eastern Europe, Latin America, and not so much in Sub-Saharan Africa, Northern Africa, and Southeast Asia. And those asked about cultural and language rights online, it’s predominantly very strong around the African continent and somehow some places in Latin America. And yeah, maybe that’s just the data. But I think it’s insightful for us to think about the bottom-up view we often talk about. So how are people affected? But I wanted to think about how can we actually talk to people about the GDC and what they think about this in very abstract forms sometimes with these principles. That’s not the same discussion necessarily. But also very concretely, who should be listened to? Because that really is something, if we know how things are going in reality and what people think, that should give us at least some thought. And with that, I’m ending. And hopefully, we have Wolfgang online. Wolfgang.

Wolfgang Benedek:
If you talk about me, I’m Wolfgang Benedek. I’m sitting in Austria. And my regards to everybody. It’s a pleasure to listen to you, to see you here. I was involved in the past in drafting of the charter. So that’s quite some time ago. And I’m very pleased to see what progress has been made in the meantime with regard to the translations of the principles and of the charter itself. But I have to say, I’m not an expert of the GDC. I’m working on artificial intelligence and human rights at the moment. And we have a network of academics, a global network on the internet, and human rights online. And in that respect, I rather wanted to put the question, and this is to what extent in this global regulation, which I think is very much needed. However, on a soft law basis, we can expect some progress with regard to the rights enshrined in our charter, and also with regard to the question of implementation. But as I have listened to the presenters, this actually seems to be the gap, that there is little on enforcement, that is not untypical for such situations, when you need to get agreement. But still, the question is, what is the value added of the GTC in that respect?

Moderator – Raashi Saxena:
Thanks so much, Wolfgang. And now I will hand it over to Santosh and Dennis for the activity, or do we open it up for questions or comments from our audience? Yeah, we could do that. So I wanted to check if anyone had any questions or comments. And then everyone could raise their hand so that I could pass on the mic.

Vint Cerf:
Thank you very much. It’s Vince Cerf. When I listen to human rights, which I think we all care a great deal about, I begin to wonder about human responsibilities. And I don’t know whether we’ve ever had a charter that speaks to that. But it seems to me in the environment that we’re in, in the online world, that we really do have some shared responsibility in addition to asking for and expecting rights. And I don’t know whether we’ve tried to articulate that, but do you suppose it would be worth our time to try to offer what responsibilities we have as consumers, users, and providers of human rights? and occupiers of this online space.

Moderator – Raashi Saxena:
Is there anyone else that has any comments or questions? You could just raise your hand. Anyone? Yeah, I’ll give it a moment. OK. And then, oh, there’s a mic there. Go ahead.

Audience:
Hello, everyone. This is Piu from the EU Coalition on the Internet Government, a steering committee member. I have been to the Internet Coalition, and I have been present about this chapter during the webinar that was happening in the ASEAN, South Asian network. That is quite interesting. The thing is that I’m curious about how we can engage at this dynamic coalition and how we can get involved. And how do you have a plan to bring the young people to get involved in updating chapter or maybe translation part or something like that? That is what I would like to know.

Moderator – Raashi Saxena:
Yeah. Maybe I can. Yeah, I can respond to that. We’re actually very open with partnerships. We do have a partnership even with, say, the Youth Coalition on Internet Governance. A lot of those members also get transformed. I’m an example of that. I was a part of YCIG, and then I transitioned into being a steering committee member and then co-chair. We’re very open to having those conversations. The IRPC emailing list is also good for opportunities on how you can participate. And the IGF is a good place. There are different ways. One is, of course, the Internet Societies Program. But there are a lot of ways in which the country hosts have their own programs on how youth can participate in different sessions and contribute remotely, leading up to the event. and even after, whether it’s a discussion paper or it’s a policy proposal. So I would say the world is your oyster, but you can connect with us after the session and we can see how we can partner together on the charter translation or anything else. I hope that answers your question. Okay.

Dennis Redeker:
Maybe to add, we have some, I mean, you also have worked on the translation project. I’ll give you the mic in a second. Just one other example. It’s often also university students that work on the charter. We had two universities in Padua in Northeastern Italy and Salerno in Southern Italy, two lecturers getting their students involved into the translation of the charter. This was headed by Edoardo Schlester, Dublin City University, really trying to see whether this is worthwhile that obviously as a translation for Italians to read, but also at the same time for the students to think about what it means to translate an English language text into your own language. What does it mean in your context? Because often it’s not just a translation that you can do with automated translation services. Increasingly so maybe, but it really means trade offs as you translate it in another language. It means that you have to understand what it does actually mean. It’s a different term for hate speech maybe or something else. You got to think about these kinds of things, very instructive for everyone I think who engages with translations. But Santosh, you have another example, right?

Moderator – Santosh Babu Sigdel:
Yeah, just to continue with what Dennis said. We collaborated with the local NGO in the case of Nepal, Digital Rights Nepal. So it is necessary that the local people having the grasp of local language are engaged rather than the machines in the local level because the human agency is also important because in the draft translation, they can discuss it with their stakeholders. For example, I’ll give you one example. There is no real translation of the term governance in Nepali. When we talk about internet governance. the real term, real translation is not there. So we had to use internet governance in a romanized way. Because after a long discussion, we argued that any other term would not sufficiently grasp the whole idea of governance. So we’ll continue with the internet governance. So it is necessary that we work with the local community, local stakeholder. And that is not only the translation. Because it is also the part of building capacity of the local stakeholder. Because in the process of translating, you are working with the language of the charter. We are working with the concept of the charter. So that also builds the capacity of the local stakeholder that I felt personally.

Helani Galpaya:
I’m going to try and address this issue of, I think, responsibility. And I’m going to give a very retail level answer. I think responsibility lies with, obviously, all the different actors in the internet value chain. And I think consumers also do have it. I think back at the environmental protection movement. My mother’s generation, it was very normal to go on trips, eat out of plastic wrappers, and just leave that stuff there. We no longer do that. We would never cross the world. Now we are much more conscious about it. That’s a responsibility act that has been grilled into us through primary education, almost, maybe, and the global conversation. It’s also been matched with incentives for good behavior. So I think that’s a big part about it. And maybe sometimes punishment, the other side of incentives. So I do think there is a responsibility. The danger of only having the responsibility conversation is that it puts the responsibility only on individuals. And the language is women should safeguard. yourself on the Internet. That’s not the responsibility conversation, right? We have a civic responsibility because what we say and do on the Internet, unless it’s on a one-on-one message, it usually has externalities. There’s a broadcast function. Our ideas permeate to people who unintentionally come across it. So there is absolutely an externality and a responsibility and I think we need to start very much for retail individuals with sort of young people and how we teach them on what civic behavior online is. Can I just add something to that? So I

Moderator – Santosh Babu Sigdel:
just wanted to relate with the ongoing discourse around the regulation of misinformation, disinformation around the countries and we have seen some cases in the context of South Asia including in Nepal where the government is trying to impose this concept of responsibility towards the citizen that you have to filter the conversation or your expression on the Internet to make sure that there is no propagation of the fake news, so-called fake news or the misinformation or disinformation. So now based on that responsibility they want to frame a law or the legal regime where they can also kind of, they can also control the legitimate expression. So sometimes this responsibility argument is from the point of controlling the Internet that it is coming from that point of view also. I think we should also consider that.

Moderator – Raashi Saxena:
Maybe we could move to our activity. It’s Vince Nerfigan. Just thinking a

Vint Cerf:
little bit about an important document by Rousseau called the social contract and in some sense we agree to give up certain rights in exchange for safety and security for example. In different countries we will choose to give up more or less depending on what we’re comfortable with. There are other things that we call norms and Those are not exactly responsibilities. They are behavior patterns that the society generally teaches us we should adopt, but there’s not an enforcement element. But I do think it’s important for us to recognize that as we demand rights that we also recognize, we have responsibilities that go along with them. Sometimes we don’t often remember that part. Thank you. Sure.

Moderator – Raashi Saxena:
So my colleagues Dennis and Santosh are now going to take you through an activity. We are looking at banding up the audience into groups of four or five. So maybe we could have a random selection. I could just, I mean, go ahead. So one, two, three, four, five, you could come this way. Five people per group should be fine, right? I think there are enough people in the audience.

Dennis Redeker:
So it takes 20 minutes. Just some people are leaving, because they have to leave. But it takes about 20 minutes, and it’s going to be a discussion within the group. And you get to know new people, hopefully.

Moderator – Raashi Saxena:
Sorry? You have another obligation. You can go ahead. No, it’s fine. Thank you for coming. Sure, sure, sure, sure.

Dennis Redeker:
But for those who are remaining, it’s going to be only 20 minutes in that group. And you get to know your group mates, and that’s a great thing. And we’ll come back together and everything.

Moderator – Raashi Saxena:
OK, then we’ll pass you on. One, two, three, four, five. One, two, three, four, five. One, two, three, four, five, six. This group is larger. So three groups. You all could maybe come in closer, if that’s? Yes. So one, two, three, four, five. One, two, three, four, five. So you move actually that way with them, and you come in closer. Thank you for staying. Yes. Yes. Thank you.

Dennis Redeker:
So we’re now going to start the group phase. It’s about 20 minutes. We’ll have one group online. After we gave instructions, please go to the breakout room. I think, if I’m correct, we’re going to have a breakout room for you created. Correct? We’ll take lead here. Yeah. Yeah. And for everyone else, the idea for today is to have a look at the charter. This is the charter that we have brought. Unfortunately, only in a very bad copy. It’s the English version, obviously, with other translations. But the idea is to pick one of the articles which is relating to one or several rights and principles. One of the articles of the charter as a group. And to think about the challenges. that this article and the associated right and principle will face in the next 10 years. So think about the most horrible technological developments, political developments that can occur, and think about this in the first step. And then think in a second step about what the Global Digital Compact could do in order to help lower the risk, in order to help address the challenge that you have defined or the challenges that you have defined for the article that you’re focusing on. So there are three steps, essentially. I mean, first, get to know each other. Introduce yourselves to each other. Then pick an article. Think about challenges in the next 10 years. And then think about what you think the international community, through the Global Digital Compact, could do. What should be entailed? What should come out at the Summit of the Future in New York 2024 that would help against those challenges? Are there any questions, either here in the room or online? And afterward, we’ll come together and we’ll quickly discuss what you have come up with and have an overall discussion. Any questions here in the room? No? Then you have 20 minutes. Go.

Audience:
Thank you. Thank you. Thank you. The question is, if the teacher is trying to teach you as well, and there’s a violence between you two, can you establish an old document, where you can then reaffirm that there’s a violence between you two? The question is, if the teacher is trying to teach you as well, can you establish an old document, where you can then reaffirm that there’s a violence between you two? The question is, if the teacher is trying to teach you as well, can you establish an old document, where you can then reaffirm that there’s a violence between you two? The question is, if the teacher is trying to teach you as well, can you establish an old document, where you can then reaffirm that there’s a violence between you two? The question is, if the teacher is trying to teach you as well, can you establish an old document, where you can then reaffirm that there’s a violence between you two? The question is, if the teacher is trying to teach you as well, can you establish an old document, where you can then reaffirm that there’s a violence between you two? The question is, if the teacher is trying to teach you as well, can you establish an old document, where you can then reaffirm that there’s a violence between you two? The question is, if the teacher is trying to teach you as well, can you establish an old document, where you can then reaffirm that there’s a violence between you two? for all of your outputs. And towards, as we close, maybe we could have one or two representatives to talk about the outputs of synthesis learnings and looking at the role of stakeholders. So five more minutes. And yeah, enjoy. Thanks. Thank you. Thank you. Because it’s all well-handled, I can pick up about a half a dozen new regulations coming up at the present time. So I want to speak to the question that it seems like that people want to follow. And the point is that some governments want to use the expertise that’s made a little bit different than what we’ve got online. But when you talk, speak, or anything like that, it’s different, right? Why is it happening in other countries? So it’s also true that because of the frequent extension, the interpreters in the national data are too many ways, right? Even in the US. I mean, of course, because the United States is already a society where the government is helping the private sector. And now, it doesn’t seem to be so. I mean, it’s ridiculous. Now that we see these different kinds of interpreters, which is quite an astounding thing. And that was one reason for having a sharper interpreter, right? To come up with an internet-based interpretation of these rights. And hope that those old interpreters, maybe they still cooperate. Right. But that’s not important. It happens sometimes. We don’t recognize that proportion of the rights that we have to defend what we want to do and work for it. So that’s a possibility. So you really have to figure out the communication and how you can change it. It’s a very nice question. That’s a good question. Yeah. Such a good question. I think it’s a question that is known after you’ve been in some interpretation where people say it’s important. So that’s why. I agree with that. I agree with this whole point. As you are saying that there’s different interpretation of a given right in different institutions. For example, what they can make free speech means in Japan is very different from the US. So freedom of expression is also, you know, no right is an absolute right. In Indian constitution, that will be a freedom of expression. But there are certain limitations to that. Now, looking at the new environment where they’re dealing with misinformation and disinformation, we have a certain sort of regulation in terms of protecting the harm that can be caused by the spread of misinformation. At the same point of time, we have to keep in mind that the governments are not going overboard. They are following the set of principles of legality, necessity, and proportionality in this matter. The reason I’m saying this is that in global South Asia, and especially from the region where I come from, there has been people who have lost their lives in countries like India. like Bangladesh, India, because somebody wrote a hate post about the political community and just spread it like anything. So it’s about speed and scale when it comes to the open and online environment. What the right is there, but we, in India, we know that, OK, if I say something to you, it will take time to spread. So if I spread something on Facebook, that can actually hurt the minorities if I am just putting on some hate-based kind of post. So I’m not going to do that. Sometimes, politics becomes politics. The difference is so much faster. Yeah. It’s so much more. It becomes politics. Yeah, yeah, yeah. I mean, that’s why we need regulation, because we need to ensure that the regulation is not going overboard, and we have to do it proportionally with all the central authorities. The right now, what we see is that when Bangladesh is in the global south, and even in my region, they come up with certain regulations, they say that the purpose is this, that we want to curb the spread of misinformation and disinformation, but then they give such wide exceptions to national security and then say, and you know, some broad conflict, national security, which can be interpreted in a broad way to curb the rights of the fundamental rights of the people. And of course, national security is a government-informed policy. Yeah. That’s because the government will be doing it. It depends. It’s a question, then, about state and scale. Yeah. It depends. It’s a question about state and scale. It’s a chance that we need to address that, and it’s a regulation of self-regulation. We need to ensure that, like, when we are talking about freedom of expression and freedom of speech, we want to know, like, the context. We also know that there is a limitation. We can’t allow this to be. But here, the charter also needs to understand the global nature of the context. Misinformation gets spread very fast because of the conflict on Thailand. So when we are talking about freedom of speech, how to deal and whose responsibility it is to deal with misinformation, how to manage it. So should we mention that, OK, these are the responsibilities of the government, these are the responsibilities of the state, and this is what is expected from us as, you know, right-leaners as well. Because if we leave this challenge, then freedom of expression becomes, you know, redundant in that sense. If we are not controlling this mismanagement of what we speak and what kind of information we receive every night, right? So it’s a tool that’s out there. I think we’re wrapping up now. Do you have someone to inform you a little bit in the… Is that all right? Yeah. He’s watching. Are you fine with that? Yeah. OK. They have also chosen the volunteer. I would like to, I guess, say thank you. Maybe I’ll open up and then they’ll… Yeah. All right. So I’m wondering, I mean, I totally understand the problem. But on the other hand, we’re looking at a lot of other issues as well. It’s quite a big issue that we have to deal with. And we’re thinking about this for a long time. And I’m not sure how to respond to that. But I think that’s what we’re really trying to discuss in the party. And I would like to know how to address those issues and I think… I’m not sure if I completely understand, but…

Moderator – Raashi Saxena:
Hi, everyone. I, unfortunately, hi. So your five minutes are up. Maybe give you another two minutes. And I’m going to time it this time. Yeah? Can we have group one starting, please? I do believe, who’s group one, Santosh? You are group one. Yes. Yes, so we look forward to getting your inputs. And please introduce yourself. And if you could also talk a little bit about your group, what you discussed.

Audience:
OK, I’ll start by noting that we could not agree on who we report, so we picked two of us. We spoke about the freedom of expression, because that was the most obviously under threat all the time. Noting that a long time there’s been a debate about whether or how it applies online. There have been some people who agree, think that it should not be somehow different online. And of course, there are differences in the scale and speed and so on. But the principle should be the same. Exactly why it has been argued to be different and how it should be interpreted online, that part of why the charter actually was started. How does this apply online from human rights? OK, maybe I’ll hand over to you from this point. So the things that we were discussing in our group is that, of course, that freedom of expression is one of the most essential rights in terms of both in offline world and in the online world. But at the same point of time, we know every rights come up with certain restrictions. No right is an absolute right. Now, in the offline world, there were better ways. is to stop hate speech. So if somebody says which is hate speech or makes some comments which might provoke violence against a particular community, it could be curbed in a better way. Now with the speed and scale in the online world, we see that the spread of misinformation and disinformation. And coming from South Asia and coming from global south, we have seen in Myanmar, in Bangladesh, even in my own country, that OK, somebody writes a hateful post against a minority group, against some indigenous group. It spreads. And even the real physical loss of life takes place. So when we talk about freedom of expression in online world, we also have to be cautious about how to protect and stop the dissemination of misinformation as well. This also means that there should be regulations in that term. But we should also be mindful as civil society that the governments are not going overboard with those regulations in terms of the principles of legitimacy, necessity, and proportionality are followed. And the government should not use this particular thing to curb the actual application of the right. So the things like national security and curbing the spread of misinformation should not be used to curtail the rightful expression. So not all opinions can be labeled as misinformation. Of course, the facts are there. We can differ. And so we can’t say that all the opinions are misinformation. But we need to also find and devise ways so that, OK, these are the responsibilities of states, and these are the responsibilities of businesses. And as responsible stakeholders in the online environment, these are our responsibilities. So I’m not saying that the freedom of expression differs in online and offline world. It’s just that the speed and scale makes it much more difficult to control the spread of disinformation and misinformation. And that can actually. result in loss of life and serious harm to certain communities. Thank you. It’s the final comment. We did not solve all the problems, even though we talked about them.

Moderator – Raashi Saxena:
Okay, can we have group two now? Group two. Yes. You can please introduce yourself.

Audience:
Yeah, I’m Isuru from La Nacia. So this is my group, and so I’m just voicing out what we discussed. So we picked Article 13, which is about rights of persons with disabilities and how this can be a challenge, or how it will be like in the future. That’s what’s our discussion. And so our idea was that technology is getting better and better inclusive, so some of issues it might solve in the future, definitely. But we saw some gaps in areas like, if you take persons with hearing impaired, where you need like a lot of technological involvement, and also if you take like sign language is not kind of a universal language that you find. So there are like regional variations and language variations and all that. So how do you make internet accessible for them is going to be a challenge. Though you have all those technological developments, say I helping you to read your lips and turning it into a sign languages and all that. So the other point we discuss is that you need to have inclusion by design, so we can make all our design. inclusive, and we can provide comments on whatever the things, like the developments out there, and make those, and compel them to make those inclusive. And also, getting community engagement and empowerment is also going to be really important, because if you take persons with disabilities, they have always problems of raising their voices. Because of the system, the majority that follows, sometimes you have to, for example, like for a certain comments, you might have to submit written comments. Or you might have to go to a place, go to a building, and to give your comments. So those things might restrict some of persons with disabilities, like raising their voice. So then you need to avoid those things, and find a way to get their comments on those things as well. And yes, did I miss anything? OK then, thank you. Right.

Moderator – Raashi Saxena:
Thank you so much for your comments. Now we have last, but not the least, group three.

Audience:
Thank you. So hello, everyone. I am from Nepal, and my name is Dakal Ashok. And we are the last group here. And we have talked about rights to children and the internet. And I’m going to present our conclusion here. So first of all, I’d like to say that the children are the future of our nation. And whatever rights we give them today, it brings our nation. And like what brings our nation into 10 years is dependent upon the rights given today to our children. So there might be lots of rights. The children should be exercised, but we should filter them, like there should be a strong filtering system while giving them rights and exercising their rights. Like the rights from the internet, we have given them a proper right to use the internet in their studies. And we have given them the rights, including their freedom of expression and freedom association. But while giving such kind of right today, it might benefit them for those short-term basis only. But in the future, it can go darker and darker if it is not regulated in the right way. For example, we can see that the generative AI, like the ZGBT that children are using right now, it is becoming strong and strong. And we can see that it is becoming dangerous for the children. So if we do not give this kind of right to children, then they say the students are not getting their right. And if we give this kind of right to them, and then their cognitive ability might get destroyed in maybe five or 10 years. And they might fully depend on the kind of the AI. So there are both the positive and negative aspects of giving rights to the children. But what we should, what the government should is what they are giving to their children and what the future of the, what the consequences will it bring. So as it is, we have talk about freedom from exploitation and child abuse. Like in our society, in our school, in our college, in our country, we can see that there is a strong rule which protects children from exploitation and abuse imagery. But in the online area or in the online internet, they are not protected safely. They are exploited more on the internet than in the physical environment. So while giving the freedom from exploitation and child abuse imagery, it is effective for the physical concept only. But in the virtual world, it is not so strong. They might not get the time to exercise its right. So while the government body or any organization try to exercise the freedom from exploitation and child abuse imagery, they should be more specific on the internet aspect, like banning only the website or some kind of activities will not be effective. There should be a strong mechanism. So thank you so much.

Moderator – Raashi Saxena:
Thank you so much for your comments. And now Dennis will ask the online participants.

Dennis Redeker:
I’m already asked the question in chat. Just now, once again, last opportunity. If you want to weigh in from online, please raise your hand, I think. If that’s not the case, I think we can hand back to you, Rashi, for last round of comments. We start with you, you want to start with me? I thought with comments from everyone. I mean, there’s no, many things have been said. We haven’t really talked, I think, about the Global Digital Compact in the solutions phase so much, but if someone wants to bring it back to that, that’s the original thought of the discussion, but I mean, take it from here, sorry.

Moderator – Raashi Saxena:
That is true. Is there anyone who wants to, or has a counter argument? Do you want to go on, or do you want the mic? You can, yeah, go ahead.

Audience:
My name is Tomoe, I am based in Tallinn, Estonia. I’m from the Tallinn University, and I think it’s interesting that none of the group came up with the solution with the GDC, and that is maybe telling someone something about the impact of the GDC on the international human rights law perspective. I think the lady spoke about the human rights mentioning in the text, but how concretely we can talk about the rights and the legal, legality-connected arguments through the GDC, I think that’s maybe a bit questionable. Thank you.

Moderator – Raashi Saxena:
Thanks so much. Is there anyone else who has any rebuttals, solutions, comments? Maybe comments about what another group said, agreeing, disagreeing. There’s someone from online.

Audience:
Yeah, hi, can I go ahead? Yes, please go ahead. Okay, hi. Hello, everyone. My name is Veronika Panaye. I’m currently a student at the Tel Aviv University. It’s really interesting. I think in my group, we really didn’t have a lot of time to have a conversation, but we focused on the… right, the access to the internet. I know that we’re asked to identify some of the challenges and I spoke from a home perspective, so I’m from Nigeria. And one of the issues that, although we have that right, for me, I’m saying that what we’ve experienced in our country is where we’ve seen instances where government infringes on those rights. So an instance I gave was when the Nigerian government shut down access to Twitter in the country at that time, which is currently X. And although we have the rights to access the internet, we could see that the government could impeach on those rights. And other African countries have experienced, other countries even have experienced internet shutdowns and that is critical. So even if the article states that we have the rights to access the internet and everybody knows that, from my own perspective, I’m seeing that when we don’t have this passed into legislation in the country that we’re in, I’ll speak for my country, Nigeria, when we don’t have this legislation, it’s very, very difficult for enforcement purposes. So I mean, one of the solutions that I speak of is having the laws passed in the country that reflects these principles. Because we have these rights stated in our constitution and with the new interactions with technology, it’s difficult to like enforce. So it’s always good to pass, as in have rights or legislation in the country that passed that. Another way that the Nigerian nonprofit sector and people who are digital rights advocates have been able to like bypass this have been to use strategic litigation to change like the government in instances of breach of certain digital rights. So for us, that has been like a creative way of using present laws. for instance, like right to privacy, right to have access to information, and using those rights to go to the courts to challenge those rights. I hope I’ve been able to make a contribution. Thank you very much.

Moderator – Raashi Saxena:
Thank you so much for your inputs, you were great. Is there anyone else online that would like to give their inputs? And if not, anyone here? If not, I will give it back to the speakers, yeah.

Moderator – Santosh Babu Sigdel:
Before the speakers, just to wait to continue the discussion from where she has started about why we only discussed about the challenges and why there was no discussion about the solutions and what would be the role of the GD, Global Digital Compact in designing the solution. So just wanted to know, in your research, Dennis, you asked about who should be responsible, which stakeholders should be responsible for framing the GDC, or how effective they are in what role they have in framing the GDC, but maybe one question in the Global South or all over, we have to ask, how involved the stakeholders in different levels are in this Global Digital Compact designing process? It started in September 2021, and now we are in October 2023, two years has gone, and they plan to, UN plans to adopt it by the future of the future summit in September 2024. So we have almost 11 months or one year in our hand. But looking at the kind of discourse in South Asia or the developing countries, there is no discussion about the Global Digital Compact. Even the government, they don’t know about it. Stakeholders, civil society, media, they do not know about it. UN organization, they are not talking about the Global Digital Compact. So, what would be the, it would be another charter or another compact without, we are talking about the multilateralism, multistakeholderism, and participation of all the stakeholder, but if the compact is coming like this, without proper consultation or even stakeholder not knowing about you, not knowing about it in the larger framework, so what kind of impact it will have, and the second question is about enforceability that earlier Pahalani mentioned of. We have, for example, this business and human rights guidelines by the UN, and that is normative framework. So, what would be the kind of status of this global digital compact, and how it would be enforced, and what would be the role of different stakeholder in this whole process? If we look at the compact from that perspective, maybe we can have some roles into it, different stakeholder can have some roles into it, and we can design it in a manner that it would be helpful to address the challenges on different human rights on online space that we are discussing now.

Helani Galpaya:
Quick comment, I mean, I think as much as we find fault with it, it has made a lot of effort for consultation. So, I kind of agree, and I’m kind of disagreeing with you. Consultation processes, even at national level, are completely imperfect processes, right? The privileged participate, the knowledgeable people participate, we’ve done water on infrastructure, that’s the nature of consultations. Large amounts of money have been spent. This was a global consultation, and I thought it was much more consultative than many other global processes, actually, right? They did make the effort. it’s never going to be perfect. The only way to make it perfect is to run national level elections on whether we agree on some of the articles in there. So I think it is, by definition, going to be imperfect. And I think it’s still ongoing, meaning the official consultations are over, but there’s still a lot of opportunity to influence. And to me, the sign that there are at least three sessions here at the IGF itself. And for example, I told you the one organized by somebody else, but I happen to be a speaker. All the ambassadors from the host, from the countries that are hosting this, Germany, Kenya, and I forget which other one, and Amandeep Kale came. So there was somebody in the room all the time. So at least there’s the theater of listening, right? I don’t know what actual influence. So I think, I’m just trying to say something very positive about this, because in an imperfect situation, and I think the global mechanisms for actually enforcement are quite tough, because we don’t all live in the EU, because there’s some common understanding and structure where, yes, there’s some room for countries to wiggle around once the common framework is set. We don’t live anywhere in that world, right? So it’s really the next step is, even if we end up with a perfect GDC document, is, I think, at ground level for people like us to make sure that then our nations develop policies that are at least reflective and aligned with that, I think. Right.

Moderator – Raashi Saxena:
Definitely. So now we’re running out of time, so Dennis is gonna close the session for us.

Dennis Redeker:
Thank you so much. Thank you all for coming. I think this was very good to communicate not only your activities, but to be starting a discussion on how articles of the chart are still relevant, also today and in the next 10 years. We do have, obviously, English and Japanese versions of the 10 principles. Take one, take two, take three. Bring them to your friends who might be interested in this. Otherwise, thank you so much for joining. for joining us today, and also online, thank you.

Audience

Speech speed

146 words per minute

Speech length

3764 words

Speech time

1545 secs

Dennis Redeker

Speech speed

178 words per minute

Speech length

2613 words

Speech time

878 secs

Helani Galpaya

Speech speed

173 words per minute

Speech length

1663 words

Speech time

576 secs

Moderator – Raashi Saxena

Speech speed

148 words per minute

Speech length

1383 words

Speech time

560 secs

Moderator – Santosh Babu Sigdel

Speech speed

163 words per minute

Speech length

1724 words

Speech time

633 secs

Vint Cerf

Speech speed

164 words per minute

Speech length

259 words

Speech time

95 secs

Wolfgang Benedek

Speech speed

144 words per minute

Speech length

241 words

Speech time

100 secs

Internet fragmentation and the UN Global Digital Compact | IGF 2023 Town Hall #74

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Annaliese Williams

The analysis explores the importance of the technical community’s involvement in policy discussions and decision-making processes. Annaliese Williams, a government representative with extensive experience, actively participates in technical discussions and observes a common tendency among technical stakeholders to separate technical and policy issues. However, she believes that these issues are closely linked.

Williams argues that the technical community should have a more active role in policy discussions. She highlights that the Global Digital Compact, a comprehensive collaborative framework for achieving the Sustainable Development Goals (SDGs), does not necessarily prioritize technical stakeholders. This lack of representation poses a risk of marginalizing their unique perspective in policy-making processes.

Additionally, Williams emphasizes the significant expertise within the technical community. This expertise is crucial in facilitating conversations and decision-making processes related to technology, especially as the landscape rapidly evolves. The increasing reliance on the internet has also transformed the identity of the technical community, making their involvement even more valuable.

Peter, along with Williams, stresses the need for discussions on the role of the technical community. Both agree that engaging with governments and understanding the problems they seek to solve is crucial for effectively implementing technology. Williams emphasizes that establishing dialogues and building relationships with governments can provide technologists with a deeper understanding of the challenges they aim to address.

The analysis also highlights the importance of collaboration among technical stakeholders to improve coordination and governance. It underscores the need for greater collaboration among existing internet institutions to ensure effective coordination.

Governments play a significant role in the analysis. It emphasizes that governments are not adversaries but are responsible for protecting citizens. Their involvement in policy discussions and decision-making processes is vital for ensuring public security and maintaining peace and justice.

Furthermore, the analysis suggests that technical stakeholders should consider and coordinate their contributions to public policy processes. Even if they choose not to engage, policy conversations will still occur, and it is crucial for them to participate in order to make informed decisions.

Lastly, the analysis mentions that OUTA, an organization focused on internet governance, has recently published an internet governance roadmap. This roadmap serves as evidence of the growing need for collaboration among technical stakeholders to effectively address the complexities of internet governance.

In conclusion, the analysis underscores the significance of active engagement from the technical community in policy discussions and decision-making processes. It highlights the close link between technical and policy issues and the potential risk of marginalizing the voice of technical stakeholders. The expertise within the technical community, the evolving identity due to increased reliance on the internet, and cooperation with governments play crucial roles in achieving effective technology implementation. The involvement of technical stakeholders in public policy processes is essential for informed decision-making and improved governance.

Michael Kende

The Global Digital Compact, proposed by the UN, is an initiative aimed at addressing key issues in the digital space, including connecting the unconnected, data governance, human rights online, artificial intelligence, and preventing fragmentation of the Internet. This compact promotes a collaborative and inclusive approach to digital cooperation.

However, there is ongoing discussion regarding the role of the technical community within this compact and internet governance as a whole. The technical community, including stakeholders such as ICANN, IETF, and the IGF, plays a crucial role in ensuring an unfragmented and interoperable Internet. Questions have been raised about how to ensure the technical community’s involvement in negotiations and the future of internet governance. It is argued that the technical community must ensure its active participation in these processes to safeguard its interests and expertise.

One concern raised is that the original definition of multi-stakeholder governance does not explicitly mention the technical community. This exclusion has prompted calls for a more inclusive approach that recognizes the importance of the technical community in shaping internet governance frameworks. It is suggested that historical oversights or laziness in considering the role of the technical community should not lead to its subsuming within civil society.

Michael Kende, a prominent figure in the discussion, emphasizes the need for the technical community to take a proactive approach in addressing potential risks related to the internet. He argues that rather than being reactive, the technical community should anticipate and discuss potential risks in a timely manner. Kende proposes the concept of “forensics,” which involves examining what is said and by whom. He highlights the importance of addressing potential threats before they materialize.

Furthermore, Kende advocates for a comprehensive and proactive approach to internet governance. He suggests that the technical community should engage on a broader range of issues, such as protecting citizens and human rights, in addition to fulfilling its own role. By adopting this approach, Kende believes that the technical community can contribute to the development of an interoperable internet and help prevent fragmentation.

In conclusion, the Global Digital Compact proposed by the UN aims to address various topics related to the digital space. The role of the technical community within this compact and internet governance as a whole is under discussion. There are calls for the technical community to ensure its active participation in negotiations and the future of internet governance. Additionally, concerns have been raised about the exclusion of the technical community in the original definition of multi-stakeholder governance. Michael Kende highlights the importance of a proactive approach in addressing potential risks and suggests a comprehensive engagement on a broader range of issues. By doing so, he believes that the technical community can contribute to avoiding internet fragmentation and promoting an interoperable internet.

Audience

Jean-Franรงois expresses significant concern about the proposed merger of the technical community with another community. He questions the reasons behind specifically targeting the technical community for this change. His argument highlights the negative sentiment towards this proposed change, stating that the technical community should not be treated in this manner. Additionally, Jean-Franรงois enquires about the experiences of other communities who have undergone similar changes, suggesting that their perspectives could provide valuable insights.

Peter Koch emphasizes the critical role played by the technical community in internet governance. He asserts that there should be better understanding and recognition of their contributions. Koch suggests that instead of investing time and energy into the forensics of events, it would be more beneficial to focus on explaining the importance and contribution of the technical community. This positive sentiment stresses the need for greater appreciation of the technical community’s involvement in internet governance.

The analysis also reveals that the line between different stakeholder groups in internet governance is blurry, as noted by Peter Koch. The technical community’s ability to identify and explain potential side effects of regulations is crucial. This highlights the valuable insights that the technical community can provide in shaping effective and balanced internet regulations.

The analysis further shows that the demographics of negotiators for foreign ministries have changed significantly since 2005. There is a deficit of direct interaction between the technical community and their counterparts in the foreign ministry, indicating a need for closer collaboration and communication between these groups.

Overall, it is clear that there is a strong argument for increased collaboration between the technical community and policymakers. The analysis supports the notion that policymakers and the technical community should reach out to each other for more effective collaboration, as this will lead to better understanding and mutual benefit in achieving objectives in digital and technical sectors.

The analysis also highlights the importance of including technologists in policy discussions about technology. Technologists are the ones who ultimately implement the policies, making their inclusion critical for effective policy design and implementation. Examples from healthcare and architecture demonstrate the successful integration of professionals into relevant policy discussions, further reinforcing the argument for involving technologists in technology-related policy discussions.

Moreover, the technical community’s presence and involvement in every conversation involving internet regulation is strongly advocated. This includes the need for a different approach in conveying their message, focusing on equipping governments with a clear narrative that enables them to defend the internet.

In conclusion, the analysis underscores the importance of recognising the essential role played by the technical community in areas such as internet governance and technology policy-making. Collaboration, communication, and a deeper understanding between the technical community and other stakeholders are crucial for achieving effective policy outcomes and better internet governance.

Danko Jevtovic

The internet is a network of networks, defined by IETF-developed protocols such as IPv4 and IPv6. It is not fragmented, as the core technical layer remains intact and functional. The internet is defined by IP addresses assigned by regional Internet registries and the BGP routing. Trust in the root server system is essential to avoid internet fragmentation. The DNS system, managed by IANA, defines the internet for end users and must be trusted to maintain its continuity and interoperability. Protecting the mid-layer of the internet is crucial for content and ensuring the smooth flow of information. The mid-layer is critical for maintaining the interoperability and accessibility of the internet, and this should be acknowledged in discussions about a global digital compact. The technical community plays a critical role in preserving the freedom of open protocols and ensuring interoperability. Their concerns should not be overlooked in policy discussions. Attempting to regulate content through the mid-layer could lead to fragmentation and more issues. ICANN actively engages with governments to advise and influence public policy related to the internet. Collaboration between ICANN and governments is vital for well-informed policies. ICANN, along with other technical communities, is preparing for the VISIS plus 20 review to update and synchronize with the evolving world. ICANN takes measures to tackle DNS abuse and maintain communication with governments. Fragmentation of the internet could have significant consequences, especially for developing countries. The internet is crucial for their participation in the global world. It is essential to celebrate and protect the successes of the internet for all citizens of the world. Having one internet for one world allows countries to participate actively in global affairs, share culture and knowledge, and achieve common goals through partnerships.

Bruna Martins Dos Santos

The analysis provides a comprehensive overview of the discussions around internet governance and digital cooperation, offering insights into the viewpoints and arguments of different stakeholders.

One key point is the potential complementarity of fragmentation and diversity in internet governance discussions. Different perspectives and approaches resulting from fragmentation and diversity can contribute to a deeper understanding of challenges and opportunities in the field.

However, concerns arise about the tendency to bundle all stakeholders together without considering their individual contributions. This approach may disregard valuable discussions and problems from individual communities. The recent suggestion that civil society should engage with member states as part of delegations raises concerns about multi-stakeholderism.

Apprehensions surround the proposed Digital Cooperation Forum due to its potential exclusivity and costliness. The forum could amplify existing disparities and restrict participation for those with limited access or knowledge. Ensuring accessibility and inclusivity is crucial for any digital cooperation initiatives.

There are also concerns about excluding the technical community from decision-making processes. The shift towards an intergovernmental process in the Global Digital Cooperation (GDC) sidelines their expertise and input, which is vital for effective governance and coordination.

Including corporations in tech regulation discussions is seen as necessary to address issues concerning information integrity and content moderation. The creation of a Code of Conduct for Information Integrity and involving social media companies and other content-related corporations highlight their importance in such discussions.

The abandonment of the multi-stakeholder model in tech regulation disappoints civil society and technical communities. The move away from a model that promised improvements in participation spaces and the Internet Governance Forum (IGF) is considered a setback, leading to frustration among stakeholders.

Transparency is a significant concern in the GDC process. Unanswered questions, limited stakeholder dialogue, and unequal speaking opportunities highlight the need for a more transparent and inclusive approach.

In conclusion, the analysis stresses the importance of inclusive and transparent discussions among the technical community, civil society, and other stakeholders in internet governance. Recognizing the value of fragmentation and diversity while ensuring the active participation of relevant parties will lead to more effective and inclusive digital cooperation.

Moderator

The analysis explored various aspects of internet governance, with a particular focus on the involvement of the technical community. One of the key challenges discussed was network fragmentation, which has been an issue since the inception of the internet. The primary aim of the internet was to enable separate and fragmented networks to collaborate effectively. Resilience and scalability were identified as the main objectives in the early stages of the internet’s development.

To address the problem of fragmentation, it was stressed that unified protocols, shared management of technical resources, and collaborative governance are essential for the proper functioning of the internet. Efforts have been made to further unify network protocols, such as through the ITU’s Network 2030 initiative. Furthermore, the speakers underscored the significance of the technical community in achieving policy objectives, highlighting the importance of effective cooperation between the technical community and governments.

The analysis also explored the role of the technical community in shaping the internet. The internet is defined by the IP addresses assigned by the Regional Internet Registry, as well as the trust placed in the root server system by end-users. While different countries may have varied user experiences, it was noted that fragmentation and diversity can coexist as long as the middle technical layer functions effectively.

In addition, it was emphasized that the technical community should actively engage with policy stakeholders and governments instead of remaining passive observers. Their expertise and perspectives should be heard and considered in policy discussions to prevent the potential fragmentation of the internet. The analysis also highlighted the importance of the technical community’s involvement in discussing the possible consequences of policy decisions relating to internet regulation.

The analysis further touched upon the changing demographics of negotiators since 2005, with a call for increased engagement and collaboration between the technical community and foreign ministries. It also emphasized the need for timely preparation for upcoming negotiations, the impact of language barriers and different perspectives in interactions between the technical community and policy makers, and the importance of coordinated stakeholder responses to public policy processes.

Overall, the analysis underlined the critical role of the technical community in internet governance. It highlighted the necessity for their active engagement with policy stakeholders and governments, as well as their contribution to discussions on potential policy consequences. The pursuit of unified protocols, shared governance, and collective action from diverse stakeholders were identified as crucial for the preservation and functionality of the internet.

Session transcript

Moderator:
and the technical community and in the context of the GDC. We have four speakers today. So my name is David Abikassis. I’ll be moderating this session. I work with a company called Analysis Mason and we do quite a bit of work in this space. To my left, Dan Koyakowicz, the vice chair of the ICANN board. To my right, Ann-Lise Williams from .au and Bruna Martin-Santos from Digital Action. And online we have Michael Kendi, who’s my colleague from Analysis Mason and also works independently on a lot of these topics. The idea in this meeting, I know there’s quite a lot of scheduling conflicts, so it’s a few of us, but hopefully we can make it interactive and informative. Please don’t hesitate to come and sit around the table if you’d like to speak during the session. What we will do is have each one of the speakers spend about four or five minutes setting the scenes in their own areas and then we’ll open the floor to questions and discussions. And hopefully it can be interactive and we can agree or disagree in a constructive manner. So perhaps just to set the scene very briefly, we’ve been talking about fragmentation over the last couple of days. There were a number of meetings and town halls and workshops yesterday. I think Jean was in one yesterday. And one of the questions that came up is the types of fragmentation. And today we want to talk about the role of the technical community, but we can’t abstract this from the various types of fragmentation we’ve been talking about. The internet was built effectively from the start as a way to ensure that separate and fragmented networks could work together, could communicate. And the main objective in the origins of the internet was resilience and scalability, versatility ended up very much as sort of side effects of that resilience. But that’s what makes the value of the internet today. So the fragmentation of networks was an issue from the start and it was necessary to define unified protocols, shared technical resource management, and to some extent a degree of shared governance to ensure that the internet worked. There have been examples where there’s been attempts to unify networks, network protocols further. And for example the Network 2030 initiative of the ITU, there were some proposals for a new IP framework that would really fundamentally change the way the network protocols work. And that did not work, I think it’s fair to say, and we can explore why it fits of interest to people in this panel. So it’s important not to talk about fragmentation in the abstract, but in the context of specific aspects of the internet, including this middle layer of technical resources and technical standards, and in the context of specific policy objectives. The link with a global digital compact I think is really there in terms of the link with the policy objectives that we want to pursue. And the questions that I want to ask today are, what is the role of the technical community as sort of unified in some ways and fragmented in other ways that we can see today in pursuing those roles and in responding to the policy objectives set out in the GDC and more broadly by policymakers and governments, and how governments and the technical community can talk together in a more effective way is also a theme that we want to address. So I’ll stop there and I’ll hand over to Michael Kendi for a retrospective of how we got here. Great, well good morning from Geneva,

Michael Kende:
glad to be here at least virtually. Can you hear me? Yes we can. So as David said, the purpose of this town hall is to identify how the technical community can best represent itself within the UN global digital compact process on the topic of avoiding technical fragmentation, and that’s broadened out a bit as I’ll explain in a minute to more broadly the role of the technical community within the internet governance in general. So I’ll just give a brief background on the GDC, the global digital compact process and the role of the technical community. After a few years of discussions and reports on digital cooperation, the idea for the compact was proposed by the UN Secretary General in 2021 in a report called Our Common Agenda as a way to outline shared principles for an open, free, and secure digital future for all. And according to the report, the compact could cover a number of topics including connecting the unconnected, data governance, human rights online, artificial intelligence, and specifically avoiding fragmentation of the internet. And sometimes it’s kind of hard to get a handle on what is actually meant by what is the global digital compact going to be, and I think that’s still to be resolved, but a good quote that I found kind of explaining it comes from the Geneva Internet Platform saying, quote, that the GDC is the latest step in a lengthy policy journey to have at the very least a shared understanding of key digital principles globally and at most common rules that will guide the development of our digital future along the lines of the topics that I just mentioned. And the goal is to have this global digital compact agreed just under a year from now at the Summit of the Future in September 2024 in New York at the General Assembly meetings. As a way of gathering information, there were a number of consultations. Many people submitted their views on the various topics into the consultation. And then the two countries that are co-facilitating the development of the compact are Sweden and Rwanda, and they organized a number of thematic deep dives earlier this year in the spring on eight of the topics, including connectivity, data protection, etc., and slightly shifted from avoiding fragmentation of the Internet to more broadly Internet governance. But among the questions that were asked, it was definitely still in relation to fragmentation. The questions to be raised during the deep dive included how to ensure an unfragmented Internet, how to make sure it’s interoperable, and specifically the role of ICANN, IETF, and the IGF in supporting Internet governance. So even if Internet fragmentation may not be explicit anymore, the role of the technical community still is going to be addressed and should be addressed. And this definition of multi-stakeholder governance was developed and adopted actually starting here in Geneva in 2003 and then in the Tunis Agenda of 2005 at the World Summit on the Information Society, which itself was an intergovernmental meeting that allowed input from stakeholders, including the academic community, technical community, civil society, and the private sector. And this technical community was specifically highlighted as a stakeholder contributing to the work of government, private sector, and civil society. But recently a blog came out by the heads of ICANN, APNIC, and ARIN, that kind of started to say maybe that there’s a new tripartite view of digital cooperation, where the three players are the private sector, government, and civil society, and that the technical community is subsumed within civil society, which was not the case before and arguably is not relevant or should not be the case now. And I’ll drop the link to that into the chat for those of you who have access to it. But the question that I think we wanted to raise in this session was, do governments really understand the role of the technical community? How can one best get it across? How should the technical community ensure that there’s a role in the negotiations that are coming up later this year and in the spring? How to ensure a role in the summit of the future and to ensure a role in the global digital compact itself? So I’ll leave it there with those questions. I look forward to the discussion

Moderator:
that will follow. Thank you. Thank you, Michael. I’ll pass it on to Danko for a few remarks on, I guess, both the role of ICANN in that space. And Michael raised the question of whether the technical community should be a more unified stakeholder in the multi-stakeholder model, so if you have any thoughts on that. Thank you. Okay, we can hear that.

Danko Jevtovic:
So, well, what has been said in your introductions, and I think I agree with that, but let’s just try to take one step back when we discuss about the role of the technical community and sitting here at the Internet Governance Forum, maybe we should sometimes ask ourselves, what is the Internet? And from the technical point of view, Internet is a network of networks, obviously, but something that is defined by the IETF developed protocols, so we have IPv4 and IPv6 overlaid over each other, and the Internet is defined by the IP addresses that are assigned by the regional Internet registry and by the BGP routing that is connecting those IP addresses. But also for the end users’ point of view, Internet is defined by the DNS system. It’s defined by the root zone that is managed by IANA, and the key to that, I think, is that all of that actually depends on the trust of end users into the root server system in those 11 IP addresses that define the location of the root servers. So when we look at the fragmentation, I think we should ask ourselves, what are we discussing about? So we are actually discussing the system of trust that is rooted in the root server system, and that is all that is defined. So it is important, if you want to avoid the fragmentation, to continue to have the trust in that system, and that system is the mid-layer that is overlaid on the telecommunication networks and below the applications and the content of users, but this mid-layer is critical for the content. So I think I would agree that in fragmentation, we always had a different user experience in different countries, and, for example, in order to get to the content of Facebook, you have to log on. If you want to go to the website of a car manufacturer, usually you go to something.com and that moves you to your local country code domain name where you see content in your own language. So you have different experiences in different countries, but in my mind, this is not the fragmentation because the mid-technical layer is still there and still functions. So I would say that today, internet is not fragmented in that sense, and I would say that we are fine, but going into discussions about global digital compact, we have to think how we will protect this middle layer in this interoperability that is built on that trust into the whole system and the root servers, and this brings us back to this discussion, what is the role of the technical community and how, especially in the UN environment, it is basically multilateral and the global digital compact will be a contract of countries, how best to have the role of the technical community, and I think it is very important that it is well-defined and that it is a keeper of the future freedom of these open protocols and interoperability and a basis for the trust in the system.

Moderator:
Thank you, Nanko. You made a very important point here around the fact that fragmentation and diversity don’t need to be in opposition to one another and actually can complement each other. So if we avoid fragmentation at the right level, we will keep diversity where it matters. And perhaps, Bruna, if I can hand over to you to talk a little bit about the work that you’re doing with the Policy Network on Internet Fragmentation and also the perspective of the civil society, perhaps you can take a view on whether or not the technical community should be part of civil society or not. Deep and hard question, right? But starting with the PNIF,

Bruna Martins Dos Santos:
Policy Network on Internet Fragmentation is one of the intersectional work for the IGF currently, we are just in our second year, and we started with this perception that it was very often that some of the discussions about internet fragmentation went to a rather technical discussion or even excluding at times because it would be debates surrounding basically the technical layers or the technical aspects, and then a lot of people’s perceptions, questions about what happened to the user experience were kind of left in the middle of the way. So if you spoke with the folks at ICANN, ICANN community, they would maybe mention and bring some of the things that Duncan just brought up about like DNS or whether managing IP numbers, that would be related to this, if they failed, it would be related to fragmentation. But when you speak to civil society or even activists, some people very often brought up cases like internet shutdowns as examples. So what we did in the PNIF in the end of the day was to divide this debate into three baskets, the first one being the technical fragmentation of the technical layer, so classical discussions on this space. Second one would be the fragmentation of the user experience, so the interventions that might occur to the net or to the user experience in the general way, and that would affect their own perception or their own experience, like shutdowns or even court orders in asking for content to be geo-blocked or even blocking of applications on the internet. These are some of the examples we bring up. And the last one is the one that relates to the GDC, that is the fragmentation of internet governance and coordination, and it goes, it starts with an analysis that a lot of these forums that we have been engaging and discussing, they stopped to communicate with each other at some point. And the GDC process was the most concerning one, because it is placed from a member state’s perspective, but it felt like something that would go along the lines of the IGF, because at the very beginning of this process, it did discuss the IGF plus and how we would improve this space, right, but at some point these discussions were dropped from it. And then recently we had the suggestion of the digital cooperation forum, so what the PNIF does within this discussion is to say we should avoid duplication, we should make sure people have the time and the space and the knowledge to engage in these spaces, and last but not least, all of the digital cooperation discussions and governance, they should be leveraging from the IGF’s collective intelligence. So that is a little bit of some of the things we have been discussing, and on the discussion paper the PNIF just put out, they do highlight some concerns about the digital cooperation forum that was suggested within one of the policy briefs, mostly because it will be yet another expensive and excluding process. It’s not everybody that gets to go to New York, it’s not everybody that knows how to navigate UNGA or something like that, and to know that just now the tech envoy mentioned that civil society or any other stakeholder should engage with member states, should be part of the delegations, is something that hints, right, that multi-stakeholderism might not be a tool in that way, and it’s indeed concerning if the GDC continues to touch upon a lot of the topics and discussions we have here. So I think just to say that, and about the technical community, civil society, I do think it’s a misunderstanding to be honest, or it might be just from a very kind of policymaker perspective that doesn’t really dive into the multi-stakeholderism debate and divisions that we did in the WSIS and the Tunis process, right, so they might look at all of us as civil society in the broader way, but when we put everybody in the same box, we miss a lot of the relevant discussions, right, like a lot of the activism, a lot of the problems that each of our spaces can discuss. So I do think that bundling everybody up might be a little

Moderator:
bit problematic, but I’ll stop there. Thank you, Bruna. So there’s a couple of things that you’ve just said that I think resonate and that hopefully analysts can comment on as well. I think one is the fact that some of the discussions can be overly technical and excluding, so obviously I guess your perspective is from the perspective of civil society. but can it be excluding also for government stakeholders that may not have the expertise, and that resonates in your last comment around the misunderstanding that you highlight. Now, I think perhaps one thing that we can discuss around this table is the extent to which the technical community can make its own fate from that perspective by maybe putting together a more unified or a more coordinated front to be more visible to policymakers. Annelies, do you want to say a few words from your perspective as a member of the technical community, but also as a former government official? Thanks, David. My name is Annelies

Annaliese Williams:
Williams. I’m with the .au domain administration, Australia’s CCTLD, and as David has just referred to, I am relatively new to the technical community. I’ve been with OUDA for about two and a half years, but before that, for about 14 years, I was with the Australian government, including as Australia’s representative in ICANN’s Governmental Advisory Committee and Australia’s representative to the ITU. So I have been involved in these discussions for quite some time from a government perspective, and it is wonderful to be here today to be at the Internet Governance Forum. I did want to just really make the point that as wonderful as it is to come to these multi-stakeholder meetings, they’re very interesting, the point of the multi-stakeholder governance system itself is because preserving the open, free, secure and globally interoperable Internet is best done when we have all of the relevant stakeholders as participating in the discussions and the decision-making processes. So the Global Digital Compact is an intergovernmental process. It doesn’t necessarily have a seat at the table for the technical stakeholders, so my challenge, I guess, to the technical community is to involve yourselves in these conversations. There is a tendency I have observed over the years among many technical stakeholders to say, well, consider the technical and the policy issues to be separate, but they’re not separate. They are very closely linked and there does need to be more engagement between the policy stakeholders and the technical stakeholders. As David has just said, engaging with governments and helping them to understand the technical aspects of policy issues or the technical implications of potential policy issues is a really important role that the technical community can play. I think it is important that we do engage as a technical community. We need to lean into the conversation, engage with governments, listen to their concerns, instead of just saying that’s not my problem. It is our problem. It’s everybody’s problem to get together and have these conversations collectively. It’s everybody’s business to get engaged. The technical community is uniquely placed to be involved in these conversations. There is significant expertise in the technical community. My view is that the technical community needs to step up and lean into these conversations, or there is a risk that as a distinct stakeholder group, that voice will be lost. I think Bruna alluded to it before, but the Secretary-General yesterday in his speech referred to the business community, the civil society and governments in relation to the global digital compact, but there was no mention of the technical community. I think it is important to get engaged. It’s not enough to just sit on the sidelines and let somebody else do the discussions. My request and my invitation to the technical stakeholders is to lean in and engage with governments in these discussions. Thanks, David.

Moderator:
Thank you, Anneliese. We’re about the halfway mark, so what I would suggest we do is to open the floor to questions, discussions, recommendations. I would ask all of you to not hesitate to put forward strong views.

Audience:
Hi, thank you. Jean-Franรงois from the AI Foundation for the Record. I got a few questions here that I would like to look into. I don’t know exactly who will be willing to answer, and I don’t know if we have the answers in the room. So why are they asking this change specifically to the technical community? That’s something that is kind of surprising me. And how are they justifying that particular change? Which other communities have been merged in this same fashion? Has any other community been requested to be merged with civil society or with any other thing? Who has been consulted for this decision? And what is the technical community going to do to avoid being removed from the equation?

Bruna Martins Dos Santos:
Just to step in on some of the aspects, I think the exclusion comes from the perspective, right? If the GDC is going to be moving forward as a solely intergovernmental process, then everything that’s not governmental is going to be bundled up, right? And that’s why the tech envoy has just said right now that we should be asking for inclusion within delegations. So I think they’re mostly basically looking at everybody in the same way. I don’t understand whether… I know that there was one statement from him that was rather critical on the engagement of technical community and so on, and that’s also why on the civil society gathering we had this week, we had more or less of a general consensus in the room that none of this conversation should be moving forward without the technical community as well, because otherwise it would lose some of the aspects. But I just wanted to maybe weigh in on that. I think it’s mostly from the perspective of the intergovernmental and anything that’s not there should be left.

Audience:
Correct me if I’m wrong. From my memory, the statement was mentioned about a new tripartite model which will have governments cooperate in civil society. And so all of a sudden, corporations are not the government. So it’s not only governmental, but the only ones that seem to be merged somewhere is the technical community. So I’m very curious about why that particular move and under which circumstances and who has been consulted prior to that, if anyone.

Michael Kende:
I can answer a bit of that. David? Yes, go ahead, Michael. Sorry. So I think some of it might just be historic. I mean, if you… Hi, Konstantinos. Some of it may be historical, that the original definition of multi-stakeholder governance, the one that’s always quoted, talks about the development and application by governments, the private sector and civil society and their respective roles of shared principles, etc. So the technical community was never mentioned specifically there. It’s the same three that are being talked about now. But then if you dig into the Tunis agenda and everything, the technical community is discussed specifically. The other one I think that’s not mentioned now is the academic community, which I don’t think even is mentioned at all. So some of it may just be historical, just taking the same three. But I think that just highlights all the more the need for the technical community, for everyone to be specific and ensure a specific role so that it just doesn’t get subsumed out of historical, I don’t want to say laziness, but just looking back at the overview definition and not what went behind it. And I think that was… I put that blog in the chat if you have access to it. But I think that’s really important that people delve into the history a little bit and make sure that everyone knows that the technical community

Moderator:
was represented from the beginning. Thanks, Michael. And did you want to add something to this? No, same point. Thank you very much. Does somebody else want to weigh in, ask a question? Don’t be shy. Go ahead.

Audience:
Better? Yes. Good morning. My name is Peter Koch. I work for DINI, the German Top-Level Domain Registry, undoubtedly technical community. And I would respectfully suggest that we are allowing to be forced into a wrong discussion here. I did hear the SecGen mention the technical community explicitly, obviously in response to some concerns that were raised, in, again, response to a statement made by the ambassador at the time. And I think that’s a good way to start. In, again, response to a statement made by the ambassador at the EuroDIG meeting, already later corrected or amended at his appearance at the Caribbean IGF, if I recall correctly. So I would not suggest that this is all not necessary or not important. But I do think instead of investing energy in the forensics of these events, we might do ourselves and everybody else, because we are important, we might do ourselves a favor into looking into better explaining what the importance and the contribution of the technical community is. Which also might include the question, who is the technical community actually, given that the internet governance is evolving into digital, so that might include others. And of course, all the quote-unquote boundaries between these different stakeholder groups, none of these are very sharp or very thin, right? It’s always floating. So instead having the forensics going on, what is the contribution? And I do think that the contribution today is probably more important than 20 years ago, when it was all about names and numbers and so on and so forth, which still is important. But we see more and more regulation coming up and more and more demands for regulation that completely lose out of sight these unintended side effects that the technical community is probably well prepared to identify and explain. The further away we go from that technical layer, the more tempting the regulation appears to be these days, but the more unintended the side effects might be. And that’s something that I suggest we focus on.

Moderator:
Thank you. Thank you, Peter. Does someone want, maybe Danko, or does someone want to weigh in on I guess these two questions, who is the technical community and how should it engage? Which are the points that Peter raised. Okay, I have a bit of comment on that. So first on the

Danko Jevtovic:
forensics, UN Secretary General, in his opening speech, he mentioned technical community, but part of the IGF and WSIS process. And later on, when he speak about global digital compact, it was a different stakeholder group. So in a way, for me, that was a kind of message. And I think this is important why we want to emphasize the importance of the technical community. Obviously, because we should not talk about the model, we should talk about the success of internet that was brought by this open standard and everything. But I think the key, as you said, are the consequences. So we in the technical community often, we don’t make legislation, we have to observe the regulations of any country, but we are there to discuss with the countries and to help them understand the consequences of possible policy discussions. So for example, in the ICANN, there is obviously government advisory committee, and also we have a global stakeholder engagement and government engagement in trying to do all those things. But looking back on those discussions and how the things will turn out, I think it’s not very logical that, so there is a risk that things might turn out in a way that some of the concerns of the technical community in future will not be fully taken in account. And on your question, where it is coming from, I don’t think it’s kind of a decision that was made by someone. I think it’s a trend that is also resulting of what is happening on the internet now. So I was sitting as a MAG member for three years, a couple of years ago, and most of the discussions are not anymore about the names and numbers that seems to be kind of solved there, but most of the discussions are about the content, the abuses on the internet, negative consequences, crime, hate speech, and all that. And the importance of the technical community, names and numbers going back, is that that mid-layer is very convenient for some of the possible regulation to find a magical key that will solve the content problem. And we understand that that magical key actually does not work and can create all sorts of different problems. So I think this is the reason why we are trying to use this IGF to bring back this discussion about possible fragmentation and the roles that must be there to avoid, I think trying to quote Vint, to avoid that internet might end up where we deserve

Moderator:
it to be. Thank you, Danko. Alice, did you want to comment on this? No? Just to echo that, I don’t, and what Peter said, I don’t think it was

Annaliese Williams:
a deliberate decision. And I think Peter is absolutely right, we need to be having some conversations about what it is that the technical community can contribute to these conversations. And also, who is the technical community these days? I think the internet and the use of the internet and our reliance on the internet has changed significantly since the Tunis Agenda was written and the roles and definitions of stakeholders

Moderator:
was enshrined in those early documents. Thank you. I mean, it seems to me that one of the outcomes that’s possible from that, to kind of maybe square what the two of you said and what Danko just said, is that the technical community becomes just subsumed under the corporate side of things, also because corporations are the entities that are most easily regulated. And so, you know, is that something that’s desirable or not? I don’t know, perhaps that’s a question, too. You’re shaking your head. Does somebody else want to, Bruna, do you want to comment?

Bruna Martins Dos Santos:
Just about the inclusion of corporations in the whole thing. There is one part that will come at some point, that is member states, the UN, needing to ensure the buy-in from companies. Summit of the Future, the Summit of the Future is a broader process, right? It’s not just the GDC. It will also entail some discussions on the Code of Conduct for Information Integrity. And that touches upon a lot of these corporations, right? So that will talk about, like, social media companies, it will talk about some other, like, content-related corporations as well. So I think that by addressing them from the very beginning might be an initial attempt to get some level of a buy-in from these stakeholders in general. But at the same time, like, as civil society, like, we have been very critical of the whole process in general. And I think a lot of this conversation, both the technical community and civil society one, is kind of this shared consensus about some level of frustration for abandoning the multi-stakeholder model along the way in a process that started with this promise of improving the IGF, re-discussing the spaces, and re-discussing participation. So just to add this to the conversation as well.

Moderator:
Thank you, Bruna. Perhaps just behind you. Yeah, there was a question there. If you can say who you are, just briefly.

Audience:
Is that better? Okay. Good morning. We are morning still, yes. David Fairchild from the Canadian Mission in Geneva. I jumped in a bit late, but I think there’s something that we wanted to convey, which is we, the foreign ministries, which is the demographics of who’s doing the negotiating has changed significantly since 2005. And I think the challenge is that the people that you want to interact with are not the same people anymore. And it’s a challenge that I think is accelerating, but who in the technical community has actually met their foreign ministry counterparts in order to engage and educate? And we are running out of time. The GDC negotiations probably will begin shortly in the new year. We have two and a half months to prepare ourselves. There is no zero draft to work from as far as we understand. So it’s an empty canvas. And I think the challenge is different member states are at different levels of cooperation and collaboration internally. But I think for many of the technical community, it’s a new demographic that they’re not used to talking to. And so I just put that out as a challenge that’s, I think, on both sides of the floor. It’s a language we don’t speak, so there is a lost in translation aspect to this, but it behooves us as policymakers to seek out the technical community. But it also works the other way. I think that’s just an important point I think I

Moderator:
wanted to flag. Thank you, David. I think that’s very similar to what Anne-Lise said, so this talking the same language, engaging productively. I mean, I guess one of the questions that we’re grappling with is also who’s doing that engaging, and how are they speaking from one voice, are they speaking from many different voices, and how does that come across on the other side to those counterparts that you’re describing?

Audience:
Yeah, I just wanted to make a reflection. I would like a bit of an opinion about this cognitive dissonance that I seem to observe quite often when it comes to technology, as if technologies would be something nebulous. Imagine that we’re having this conversation in terms of health care, and we will have policies about how to manage health care, but we don’t recognize that doctors are the ones who are going to be applying all of that, or that you have the same conversations about codes for buildings, case scrapers, and you don’t talk with the architects, and you just put them into, oh, civil society will talk for them, or they will bundle them with corporates because they are the ones who are building the buildings. We are in this room because someone came up with the idea of building the internet, and that was a technologist. And no matter the resolutions that we have in these rooms, in the end, when it comes to technical application and technical implementations, it’s going to come from a technologist as well. So if we keep talking to them in language that they don’t understand, we don’t talk to them in ways that they’re going to be able to implement, I don’t see how we can remove them from the equation without considering having them in the room as an actual participant. It doesn’t compute to me, so maybe I’m missing something, if anyone can give me an argument about that.

Annaliese Williams:
I don’t think anybody here is suggesting that these conversations should be happening without the technical community. This is kind of the purpose of this session, is how can we engage in these processes? What have we got to say? What have we got to contribute? So yeah, I don’t think anybody here would argue that the technical community, the technical stakeholders shouldn’t be part of the conversation. But the fact is that these are intergovernmental processes, and the best way of engaging is to engage with governments and foster relationships and dialogues and help them to understand. And for the technical community to try and understand where they’re coming from and what problems the governments are trying to solve. I’m seeing Konstantinos

Audience:
wants to speak. Thanks. Yes, very quickly on this. You said exactly what I was about to say about the need to involve the technical community. I think the difference is how the technical community involves and what sort of message it conveys. And I think that this is what is different in many ways. 20 and 30 years ago, governments were not really invested or knew a lot of things about the internet. We were all celebrating the internet because it hadn’t been used yet as a weapon of any sort, whether it was misinformation or for cyber attacks or cyber whatever it was. So everyone was really behind it. There was little governmental interest, but right now we are at a place where governments, for better or worse, they’re interested. And they are stakeholders. That’s the whole part of the multi-stakeholder model, right? No one leads necessarily in the multi-stakeholder model in the sense that what I’m saying needs to go because I happen to know better. That’s not the way it works. So the technical community needs to be involved, but also the technical community, and I sort of leave aside what we mean by technical community for the time being, needs to understand that things have changed. And also, and it is important to provide a narrative that equips governments to defend the internet rather than give them very abstract notions of openness and global reach and interoperability. No, you need to tell governments how they can achieve this. We have the infrastructure that supports these things. That infrastructure will always, always exist. The question is, how can we make, how can we use this infrastructure to its fullest potential?

Moderator:
Thank you, Konstantinos.

Annaliese Williams:
Sorry, just to come back. I’m sorry, I’m not sure of this gentleman’s name, but I did want to just flag that OUTA has recently published an internet governance roadmap, which you can find on our website. And one of the actions that we’re calling for there is for greater collaboration among the technical stakeholders, the technical community, you know, to ensure better coordination amongst the existing internet institutions and, you know, strengthen collaboration between the policy and the technical stakeholders. So I just wanted to draw your attention to that.

Moderator:
Thank you, Anlis. Danko, you wanted to add something? Maybe Danko first, and then Michael

Danko Jevtovic:
can go ahead. I just wanted to thank to both of you on these comments. I think it is very true. And speaking of ICANN, I think we got the message. So first of all, traditionally, of course, ICANN has the Government Advisory Committee, and we are working with the country representatives, they advise ICANN board on the public policies, but we also have a lot of rotation there. So it’s continuous work. And we have a government engagement team that is also very much present here. So one of the engagements we are doing here. But also, as we have three large ICANN meetings throughout the world, in June next year, we will have a big meeting in Africa and Rwanda, that is not only for African region, but it’s global. And in connection to that, we will have a high-level ministerial meeting. And one of the things that ICANN is doing is trying to engage the governments to send high-level representatives to that meeting. So I think we get the message, and we are doing that. But also, very importantly, I think it’s not only about ICANN. ICANN has the resources to do those things, so that’s why we are doing it. But it’s also all the technical community coming together, and not only through the ICANN, but through different things. And of course, with ISOC, with country code registries, with the regional government, with ISOC, with country code registries, with the regional IP registries. And all together, we have to prepare ourselves for the VISIS plus 20 review that will probably move the things to be more in sync with the importance of the current world. And as you said, the governments see the importance of the Internet now. They see also negative consequences. They are very important with them. We have, for example, bilateral with the UK minister who said the most important increasing crime is online. So for the UK government, it’s important topic. And of course, ICANN is doing something. We are very active in the DNS abuse area. Our limit is quite narrow, but we take the importance of all these messages and communicate with governments very strongly. And I think this is something that shows the results.

Moderator:
Thank you, Danko. Michael, did you want to come in?

Michael Kende:
Yeah, I just wanted to go back actually to something that you had said in your introduction. You mentioned the new IP proposal. And there, there was kind of a, I guess, a clear and present danger that was presented. And along with a number of other people, the technical community responded. I think it was a paper from Olaf at ISOC. There was one from ICANN. I’m sure ITF was pushing back because it was tangible and it was clearly a technical issue where the technical community could respond and discuss how this would impact the Internet and fragment it in some ways completely. But as David Fairchild mentioned, there may not be a zero draft that could be responded to. So we end up, as Peter said, talking about forensics. What was said by the tech envoy? How was it said? Where did the technical community come up? And then the risk is that we end up kind of reacting too late when there’s finally something on paper and there’s a risk to be discussed. So I think one thing that would be interesting is how do we come up with examples or, you know, Peter mentioned some, you know, where there’s demands for regulation, where the technical community clearly can discuss the side effects. So one way I think to do it is to kind of look forward and talk about, you know, the kinds of risks that could come up and how the technical community has been addressing them all along. As with many of the things the new IP was aimed at addressing, they were already being addressed at ITF and elsewhere. So I think that’s something that we should think about so we don’t end up in February, March, April seeing the first draft and then in some ways it may be too late to react when we can finally see some tangible threats. Thank you, Michael. We can take one last

Moderator:
question. Yeah, go ahead. And then we’ll spend maybe just five minutes going through concluding

Audience:
remarks from anyone who wants. Thank you. First of all, really great to have this session at the IGF and really appreciate you putting this together. It’s a shame that not more people are there but I see that it’s being recorded so hopefully more people can benefit from it. I’m Eva Gnatyshenko. I work in the UK Government Department for Science, Innovation and Tech and listening to this conversation, I’m sort of disappointed that we’re making such a strong contrast between the technical community and a government as if those are two separate communities that somehow are antagonistic to each other. I’m a firm believer working on digital technical standards. I have a team of technical experts that is very much part of the technical community and is engaging actively in ICANN, in the ITF and other technical bodies. And for me, the really big question, and I’m wondering whether I can address that in the closing statements, is around the right mechanisms. I definitely echo David’s point on how many of you from the technical community are engaging with foreign ministries and UK foreign offices actually trying to get a lot more technical expertise into their missions and they are bringing experts in and I think this is a really great move. Same for us, we are trying to think about how we have a sustainable flow of talent into government that will never pay as well as the corporations do. But when I looked at the GDC consultation, and there was an open process, I agree with the concerns. We are very concerned about multistakeholder participation from here, but there was a chance to feed in and the participation and the contributions from private sector and from the technical community were very limited. We’re very disappointed to see that because there was a chance to feed in and I’m wondering whether that’s because it was so high level that it was hard to see those technical problems that might come further down the line. But there needs to be a mechanism to allow for that engagement and I do think we have a lot of good practice in the UK, how that’s working, but how do we do that on a global level beyond sort of fora like this and how do we speak the same language? That’s what I found. We spend a lot of time teaching our policy experts to speak technical and our technical experts to speak policy, but it’s not just a one-way street. We’re working with the IETF in particular to build that trust that governments are a stakeholder, as Konstantinos pointed out, and not see ourselves as enemies, but actually trying to achieve the same objectives and how can we kind of understand the means and where we might go wrong. Thank you.

Moderator:
Thank you, Eva. I think we can close here, just perhaps on my side, just to rebound on what you just said and perhaps a point to Jean. I think in your remarks it sounded to me as if you were saying that government needs to come to you and I fear that this is not, that wasn’t your point. Okay, you can tell me afterwards. All right, do you want to say a few words to conclude? Maybe Danko first?

Danko Jevtovic:
Thank you. So first, thank you for these comments. I think true, governments have technical experts and of course we have to work together and I think this is a very good message. But for a conclusion, thinking back about the fragmentation, I wanted also to, I’m coming from Serbia, it’s a small developing country in Europe, not part of European Union, and in such developing countries the internet is the key to be part of the global world. And I think the risks of the fragmentation of internet are real, but the possible consequences of those risks are especially significant for developing countries, because this is the, having one internet for one world is the way for a country to be part of that world, for people to export their services, to be part of the global workforce, to learn, to work, and you know, share culture. So I think this internet thing is actually kind of a world peace project and we often discuss now possible negative consequences, but we have to celebrate the successes of internet and we have to protect it for the citizens of the world, but also very much also for the developing part of the world. Thank you.

Moderator:
Thank you, Danko. Annelise, do you want to go next?

Annaliese Williams:
Thanks, David. I’d just like to thank you for your intervention. I think that’s a really important point that you’ve made and I do want to reiterate that we shouldn’t be seeing governments as the opposition, as the enemy. Governments aren’t just trying to stop everybody having a good time, they’re trying to protect their citizens and that’s the job of the government. So, you know, I would like to just, you know, encourage the technical stakeholders in the room to, you know, I think we do need to have a coordinated response into some of these public policy processes that are happening and these conversations will be happening whether we engage with them or not. So I would like the technical stakeholders to sort of, you know, give some real consideration to what it is that we can contribute and to some coordination amongst ourselves to provide that input to these processes. Thanks, David.

Moderator:
Thank you, Annelise. Bruna?

Bruna Martins Dos Santos:
Thanks. Just about the comments on the process of the GDC. I do think it was rather open and it was welcome to have consultations and so on, but there were some problems along the way. And speaking as part of like a group of CSOs that engaged on it from the very beginning, the first question we asked was what would be the modalities? This is still an unanswered question, like two years after. Yeah. And the whole like debate about the deep dives as well, right? Like when the deep dives were cut up in half right at the end of it, not allowing any other stakeholder to speak for more than three minutes or only allowing the ones present in New York to speak on those consultations, it kind of highlights that it’s not a governmental problem, but it highlights the excluding side of the conversation, right? So that’s when we criticize all of those spaces and so on. And maybe my last remark about that is that we need to do more things together, right? So maybe now it’s the moment for the technical community to advocate for more participation together with civil society and other stakeholders on this process that still lacks transparency, that still lacks defining a little better its scope and what will be the next step. So maybe it’s definitely time for collective action on that. So thanks a lot.

Moderator:
Thank you, Bruna. Michael, do you want to say a few words on your side?

Michael Kende:
Yeah. No, I just want to kind of also second the idea of the two-way engagement that’s been mentioned a few times and just maybe point out that there’s a long list of issues, not just avoiding fragmentation of the internet, but there’s a lot of other legitimate concerns of governments and one of the incentives for or one of the reasons that there’s discussions of fragmentation and the kind that Bruna is talking about, not just technical but other kinds, is because of concerns about protecting citizens, you know, the human rights issues and others. So maybe the way to engage is to engage on the other topics as well and show that the technical community is not just protecting its own role, which is of course important and needs to be understood for developing an interoperable internet, but for helping to address some of the other issues on the list and having a proactive approach rather than maybe a more defensive one. Thank you.

Moderator:
Thank you, Michael, and thank you very much for getting up at two o’clock in the morning to be with us. It’s much appreciated. Thank you. I suggest we close it here. Thank you all so much for contributing and for the discussion, and don’t hesitate to, I guess, interconnect after this session. Thank you. Thank you.

Annaliese Williams

Speech speed

154 words per minute

Speech length

978 words

Speech time

382 secs

Audience

Speech speed

188 words per minute

Speech length

1915 words

Speech time

611 secs

Bruna Martins Dos Santos

Speech speed

174 words per minute

Speech length

1415 words

Speech time

487 secs

Danko Jevtovic

Speech speed

151 words per minute

Speech length

1607 words

Speech time

638 secs

Michael Kende

Speech speed

161 words per minute

Speech length

1552 words

Speech time

580 secs

Moderator

Speech speed

168 words per minute

Speech length

1612 words

Speech time

575 secs

Internet’s Environmental Footprint: towards sustainability | IGF 2023 WS #21

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Gabriel Karsan

The analysis features several speakers discussing a range of topics related to climate change and sustainable development. One key point raised is the importance of recognising that we inhabit a common world with shared resources. The speakers stress the interconnected nature of humanity, emphasising that we all breathe the same air and live under the same sun.

Another topic covered in the analysis is the need for eco-friendly solutions within the internet industry. It mentions proposals such as using satellites and high altitude connectivity devices to make the internet more sustainable. The integration of technology with climate action strategies in India is cited as an example of potential solutions.

The analysis also addresses the issue of poor quality data leading to misinformation on the climate agenda. It suggests that the dissemination of inaccurate information can hinder effective progress in tackling climate change.

One of the speakers advocates for a multistakeholder approach to bridge the gap between grassroots movements and government. This inclusive approach is seen as crucial in ensuring effective and inclusive decision-making processes.

The analysis also explores the issue of addiction to hydrocarbons, acknowledging their extensive use in heating, transportation and electricity production. The shared challenge of dependence on hydrocarbons is highlighted as an issue that needs to be addressed.

In terms of creating change, one speaker believes in the power of group discussions. The importance of small groups in making a significant impact is emphasised, indicating the significance of collective efforts in addressing climate change.

Furthermore, the analysis underlines the interconnectedness of the world through the internet. It emphasises the need for boldness, vocal advocacy and accountability in promoting sustainable development. The need for eco-friendly hardware design and sustainable practices on the internet is also advocated for.

In conclusion, the comprehensive analysis underscores the importance of recognising our shared world and limited resources. It stresses the necessity of eco-friendly solutions and accurate data in addressing climate change. The analysis supports a multistakeholder approach, highlights the challenge of hydrocarbon addiction, and advocates for the power of group discussions in effecting change. It also underscores the role of the internet in fostering interconnectedness and calls for boldness, accountability and eco-friendly practices in driving sustainable development.

Ihita Gangavarapu

Ihita Gangavarapu, a coordinator of Youth Asia in India and a board member of ITU Generation Connect, possesses extensive experience in the field of technology, with a specific focus on the Internet of Things (IoT) and smart cities. Her expertise in this area makes her well-equipped to address the issue of environmental sustainability.

Gangavarapu strongly emphasizes the importance of IoT in monitoring various environmental parameters. In particular, she advocates for the use of IoT sensors in homes to monitor carbon footprint. This technology has the potential to make individuals more aware of their environmental impact and take steps towards reducing it. Furthermore, Gangavarapu highlights the use of IoT in agriculture and forest health monitoring. By employing IoT-enabled devices, it becomes possible to gather real-time data on these crucial aspects of our environment. Additionally, the IoT has the potential to predict forest fires and aid in urban planning, further contributing to environmental conservation efforts.

A key argument put forth by Gangavarapu is that consultations, cost incentives, and standardization are necessary to bridge the gap when government initiatives fall short in promoting environmental sustainability. She suggests that involved parties need to engage in meaningful discussions to ideate potential solutions. Furthermore, Gangavarapu believes that government incentives can motivate the private sector to develop environmentally conscious services and technologies. Additionally, she highlights the importance of standardization, particularly in developing IT standards for the environment. These measures are crucial in ensuring that environmental sustainability is prioritized and achieved in the absence of sufficient government initiatives.

Gangavarapu also stresses the role of individuals in reducing their negative environmental impact. By taking responsibility for their actions and making conscious choices, individuals can contribute significantly to environmental conservation. This aligns with the Sustainable Development Goal (SDG) 12: Responsible Consumption and Production, which emphasizes individual responsibility in achieving sustainability.

Moreover, Gangavarapu recognises the significance of environmentally-conscious design in artificial intelligence (AI). She highlights the integration of AI into everyday life and the need to incorporate an environmentally conscious dimension into its development. This aligns with SDG Goal 12: Responsible Consumption and Production and SDG Goal 13: Climate Action.

Another point raised by Gangavarapu is the importance of crafting and implementing policies right from the inception of a technology. She argues that a well-crafted policy can create awareness among all stakeholders in the technology supply chain regarding potential environmental repercussions. By starting policy development early in the tech development process, the carbon footprint consciousness of the entire chain can be influenced positively. This supports SDG Goal 9: Industry, Innovation, and Infrastructure.

Gangavarapu also highlights the importance of leveraging the internet to discuss environmental concerns and solutions. Utilizing the internet as a platform for communication allows for a wider reach and the opportunity to engage a larger audience in these discussions. This aligns with SDG Goal 9: Industry, Innovation, and Infrastructure.

Lastly, Gangavarapu advocates for supporting organizations and initiatives that focus on creating environmentally conscious products and services. By endorsing and investing in these initiatives, individuals and communities can contribute towards promoting sustainability.

In conclusion, Ihita Gangavarapu, with her expertise in IoT and smart cities, emphasises the importance of monitoring environmental parameters using IoT technology. She further advocates for consultations, cost incentives, and standardization to fill gaps in government initiatives towards environmental sustainability. Gangavarapu highlights individual responsibility, environmentally-conscious design in AI, crafting policies from the inception of technology, leveraging the internet for discussion, and supporting initiatives for environmentally conscious products and services. Through her various arguments and points, Gangavarapu underscores the need for collective efforts and conscious choices in achieving a more sustainable future.

Lily Edinam Botsyoe

The session primarily focuses on the issue of the internet’s carbon footprint and the need to explore sustainable alternatives. It is highlighted that internet carbon footprints currently account for 3.7% of global emissions, and this figure is expected to rise in the future.

Lily actively advocates for internet usage that is inherently sustainable, emphasizing the importance of considering the environmental impact of our online activities.

In terms of recycling, there is a discussion on the need for education and awareness in recycling processes. It is mentioned that e-waste recycling in Abu Ghoshi, Ghana, has resulted in harmful environmental impacts. However, the government of Ghana, in collaboration with GIZ, is taking steps to create awareness about sustainable recycling methods.

Furthermore, there is support for alternative methods of connectivity through equipment refurbishing. Refurbished e-waste can be redistributed to underserved communities, providing them with access to the internet. This approach has already been implemented in several cities in the United States through inclusion strategies focusing on refurbishing, reuse, and redistribution of e-waste.

The importance of resources and skills for implementing the three Rs (refurbishing, redistribution, and reuse) is also emphasized. It requires the gathering of in-use equipment, housing them, and preparing them for redistribution.

The session also acknowledges that technology, while contributing to environmental issues, can also play a crucial role in solving them. However, discussions on the economic benefits of technology often overlook its environmental impact. It is argued that a more balanced approach that considers sustainability beyond profit is needed.

Bottom-up approaches are highlighted as essential for sustainability solutions. Government-led solutions often fail to account for the importance of grassroots movements and the involvement of local communities.

When discussing artificial intelligence (AI), it is observed that the lack of policies guiding its implementation can lead to disadvantages to society. The fast-paced implementation of AI technologies contrasts with the lengthy and bureaucratic process of building policies. However, creating global policies adaptable to local contexts could ensure the right usage of AI for climate change initiatives, thus guiding its beneficial deployment.

There is agreement that awareness creation plays a fundamental role in addressing AI issues. Social media platforms are identified as a powerful tool for continuous and consistent awareness creation, facilitating the understanding of the issues at hand and generating demand for action.

Knowledge sharing is also emphasized as a significant way for individuals to actively address AI issues. By creating awareness and encouraging people to take action, individuals can contribute to spreading information about AI and its impact.

Technology is recognized as a double-edged sword in addressing climate change concerns. While it can contribute to environmental issues, it can also offer solutions to mitigate and adapt to the impacts of climate change.

Moreover, the act of connecting the unconnected is seen as inherently sustainable. By extending access to the internet and digital resources to underserved communities, it contributes to bridging the digital divide and promoting sustainable development.

Lastly, it is concluded that the conversation surrounding these topics must involve the participation of government, authorities, and people. Increasing awareness of the challenges and potential solutions is seen as a crucial step towards achieving sustainability goals.

Monojit Das

During the analysis, the speakers delved into several key aspects of internet governance and its impact on various topics. They explored its relevance to the 4th industrial revolution, sustainability, energy consumption, and environmental impact. One of the main discussions focused on the need for convergence and cooperation between different stakeholders.

Monojit Das, for instance, is researching internet-related issues, specifically examining the convergence between the multi-stakeholder and multilateral approaches. He believes that finding a convergence point is crucial for resolving debates in internet governance. Das emphasized the importance of participation from both online and offline participants in finding effective solutions.

The speakers acknowledged the complexity of the debate between the multi-stakeholder and multilateral approaches in environmental issues. They noted the inherent differences in positions held by multiple stakeholders and nations in environmental policy. Despite this complexity, they agreed that small collective actions can have significant long-term outcomes.

The analysis also focused on the significant energy consumption associated with internet use. Simple online activities, such as sending text messages, consume data and energy. Data centres, essential for internet infrastructure, consume substantial amounts of power. The speakers highlighted the need for energy efficiency and the promotion of renewable energy sources to power technology.

Furthermore, the speakers recognized that the internet plays a critical role in the 4th industrial revolution. They stressed the interconnections between energy, transportation, and communication technologies. By harnessing the potential of the internet, advancements in these areas can be achieved.

The impact of internet infrastructure on the environment was another concern highlighted during the analysis. The laying of submarine cables needs to consider and avoid disrupting underwater habitats, such as coral reefs and marine life.

Regarding individual action, the analysis suggested that efforts to reduce environmental impact can begin with individuals. The panel included individuals from various professional backgrounds dedicating time to environmental issues, and personal compliance with carbon-neutral policies can raise awareness.

The discussion on artificial intelligence (AI) research revealed differing perspectives. While the importance of understanding the full potential and threats of AI was recognized, there were debates as to whether research should continue. Some argued for pushing AI to its limits to test its capabilities, while others raised concerns about its potential dangers.

The analysis noted the success and positive reception of the session, indicating promise for future discussions on digital leadership and internet ecology. One notable observation was the optimism expressed by the speakers about the growth and expansion of the platform for discussion in future sessions.

In conclusion, the analysis highlighted the multifaceted nature of internet governance and its impact on various aspects of society. From the convergence of different approaches to the significance of collective actions and energy consumption, the speakers presented a comprehensive overview of the challenges and opportunities within the realm of internet governance. The importance of research on AI, individual actions, and sustainable practices were also emphasized. Ultimately, the analysis revealed a mixture of optimism and realism regarding the potential for positive change and future growth in discussions on internet ecology.

Audience

The need for eco-friendly internet infrastructure and measuring its carbon footprint is of utmost importance. DotAsia and APNIC Foundation have been exploring this since 2020, with the aim of gauging the eco-friendliness of internet infrastructure across countries through the EcoInternet Index. This index aims to provide a measurement tool to assess the environmental impact of internet infrastructure. The argument put forth is that narrative and measuring methods for the eco-friendliness of internet infrastructure are crucial.

The internet not only has its own carbon footprint but can also contribute positively towards addressing climate change. The EcoInternet Index takes into account the balance between the digital economy and traditional carbon-intensive industries, highlighting the potential for the internet to play a significant role in the fight against climate change.

Improving internet network efficiency is seen as a positive step towards sustainability. However, no specific supporting facts are provided for this argument.

A grassroots movement can enhance awareness and foster a move towards a more carbon-conscious internet use. This includes a call for data centres to use renewable energy sources, which is seen as a positive step towards responsible consumption and production.

Digital inclusion in remote areas can be more sustainable by utilizing renewable energy sources such as solar energy. By incorporating sustainable energy solutions, digital access can be expanded while reducing the environmental impact.

Standardization is identified as playing a crucial role in shaping an inclusive policy for a sustainable and eco-friendly internet. It helps establish consistent frameworks for measuring the carbon footprint and provides a means to quantify reports on methodologies and data collections. This standardization is seen as vital in mitigating the risks associated with the internet and promoting a sustainable approach.

Collaboration between stakeholders, including the private sector, civil society, technical community, and government, is recognized as key in shaping a sustainable internet. Each stakeholder has a unique role to play, contributing insights, creating awareness, providing technology, and formulating policies.

Sustainable cyberspace efforts and the work of Dr. Monagir and Ahita are commended by the audience. However, no specific details or supporting facts are provided to further elaborate on this point.

The narrative and measurement of environmental issues are considered important, but no specific supporting facts or arguments are provided.

Sustainable housing and reducing the carbon footprint in infrastructure are seen as viable solutions. AMPD Energy in Hong Kong is mentioned as an example of using special materials and technology to reduce the carbon footprint in housing and warehouses.

Reducing paper waste and promoting recycling is highlighted as a viable environmental strategy. The Wong Pao Foundation in Hong Kong is noted for producing results within a year.

One of the challenges in implementing sustainability initiatives is the lack of financial support. It is mentioned that financial support is often difficult to obtain when trying to implement sustainability measures, which can hinder progress.

Hong Kong’s progress towards achieving the SDGs is deemed slow. Currently, the main recycling effort in Hong Kong focuses on recycling bottles, cans, and paper. No specific details or arguments are provided for this observation.

Further research is needed to understand the impact of AI on climate change. Although no specific supporting facts or arguments are given, the stance is that thorough research is necessary before further developing AI technology.

Digital literacy is considered important for global digital inclusion efforts. Despite good internet access in Hong Kong, digital literacy is not very high. It is mentioned that considerations should be made on how to contribute to other regions of the world facing different digital situations.

In conclusion, there is a growing awareness of the need for eco-friendly internet infrastructure and measuring its carbon footprint. Collaboration between stakeholders, digital inclusion using renewable energy, standardization, and efforts to reduce paper waste are advocated for sustainability. However, challenges in implementation due to a lack of financial support and the slow progress of Hong Kong towards the SDGs are noted. The impact of AI on climate change and the importance of digital literacy for global digital inclusion are areas that require further research and consideration.

Annett Onchana

The Africa Climate Summit, held in Kenya, placed significant emphasis on the importance of accessing and transferring environmentally sound technologies to support Africa’s green industrialisation and transition. This highlights the crucial need for the continent to adopt sustainable practices and technologies in order to reduce carbon emissions and mitigate the impacts of climate change.

The summit also highlighted the necessity for cooperation among different stakeholders in addressing the digital footprint. As our reliance on digital technologies continues to grow, the environmental impact of the internet cannot be overlooked. Thus, it is crucial for governments, businesses, and individuals to work collaboratively and find ways to reduce the carbon footprint associated with digital activities.

Furthermore, the summit discussed the influence of consumer habits on the environmental impact of the internet. It raised the question of whether people’s purchasing decisions are driven by trends or functionality. By examining consumer behaviour, efforts can be made to promote sustainable consumption and production. This means individuals can make choices that have a lower environmental impact, such as opting for more energy-efficient products or those with longer lifespans.

It is important to note that consumer behaviour can play a significant role in mitigating the environmental impact of the internet. If people shift their purchasing habits towards more sustainable options, it can contribute to the reduction of carbon emissions and waste associated with the production and disposal of electronic devices.

Overall, the Africa Climate Summit underscored the importance of addressing the environmental impact of the internet, promoting sustainable technologies in Africa’s industrialisation efforts, and encouraging individuals to make more conscious choices in their consumption habits. By working together and adopting sustainable practices, positive change can be driven, and the adverse effects of climate change can be mitigated.

Session transcript

Lily Edinam Botsyoe:
and thank you for being such a patient audience. We know you’re online and I hear that we have quite an audience. If you know anybody who would benefit from this session, kindly nudge them, ping them, just say the session has started so they can join in. My name is Liliye Denamboche, and in a short while, I’ll be introducing everybody here and there’s some people who also give interventions online. We are so thankful for the gift of the internet that allows us to do all of these, and so we want you to stay interactive. If you have any questions, please put them in the chat. We’ll come back to them as soon as possible. This session is the IGF Workshop 21, and it’s titled The Internet Environmental Footprint Towards Sustainability. Our panel is a blend of experts or young leaders who essentially have backgrounds in different areas related to green tech, and also just around e-waste and other areas that will be discussed in a short while. So I’ll do a bit of what we look to do for the aim of this, and then have our panel members do introduction. So we’re looking to use a hybrid format, that is what we have right now, you online, we on-site, to get you our conversation. What you’re looking to do is to see how the internet’s carbon footprints, or do we know that the internet’s carbon footprints account for 3.7% of the global emissions, comparable to that, and this figure is expected to rise. So some of the things we’re discussing here is, what are alternatives? You’re looking in the broad sense of climate, connecting and unconnected in a way that is inherently sustainable. So that is the discussion we’re having. So, let’s start a conversation. I’m gonna start with our panel members on-site, and then I’ll allow you online to also introduce yourself. So to my left, can you please give us a short introduction?

Monojit Das:
Hi, thank you, Lily, for giving us the brief outline. My name is Manojit, I am from India. I worked on internet. And my PhD topic was finding out the convergence in this call that probably can be other ways to find out solution to the debate between the multi-stakeholder and the multilateralism. So probably this can be one of the ways. And we would like to get more input from the participants online and offline because it’s mostly about the participation, not just the one-sided way of telling this. Thank you.

Lily Edinam Botsyoe:
Thank you so much. Like you said, we want the interaction across board. And then to my right, our first panel member.

Ihita Gangavarapu:
Hi, everyone. Good evening. I’m Ida, I’m from India. I’m coordinator of Youth Asia of India and also a board member of ITU Generation Connect. It’s an absolute pleasure to be here. Thank you, Lily. And I have a background in tech. I have experience working with Internet of Things across multiple applications and smart cities. And in this regard, I’ve come across this particular persistent to this issue of environment footprint. And I’m really glad to have the opportunity to share some insights shortly.

Lily Edinam Botsyoe:
Thank you so much, and then to Carson. Hello, my name is Carson Gabriel. I’m from Tanzania, computer scientist, and I have a background in internet governance for almost six years now. I am part of the African Youth IGF, and particularly we are here to explore the role of the multi-stakeholder approach, something that we have been all raised on and how we use this element to create sustainability, especially towards a more progressive humanity that uses energy not only in terms of consumption, but also creating more diverse bridges for the next generation to enjoy. And we also have our online participants. What I forgot to add before was that we have Carson, who will co-moderate, and also with me. For our online panel members, if you are on, that is Innocent and also Jowdy, let’s start with you, Innocent, and Annette too, yes. So Innocent, the floor is yours, a short introduction, we go to Jowdy, and then to Annette. Yes, so Annette, if you can hear us, please.

Annett Onchana:
Thank you for the platform, good evening, good morning, actually a few minutes to afternoon from Kenya, and I am an African youth, passionate and interested in internet governance, and I am happy to contribute to these discussions, and I’ll also be your rapporteur for the session. You’re welcome, and thank you for having us.

Lily Edinam Botsyoe:
Thank you so much, Annette, so excited to have you. So in a short while, the conversation is going to shift to Carson, who is going to walk us through some guiding questions, but we’re going to take turns to have climate sustainability relates to us and the work we do, and because of we’re having to leave unfortunately, I’ll give a bit of context and then hand over to Carson to have the conversation kick-started. So initially, I mentioned my name is Lily, I live and I was born in Ghana, and currently doing a PhD in IT at the University of Cincinnati. Now I’ll give you the context of how sustainability pretty much aligns with the work I do and my experience with it. First as a Ghanaian, I live in Ghana, and one of the places that has been tagged as the world’s largest dump site, a place that is found in Accra, and one of the things that we saw growing up was how people would carry scraps or break apart e-waste to be able to find valuable metals, go out there and sell to people in the hopes of making money. people saw as trash, for people in Abu Ghoshi, it was treasure and the way that these materials were broken apart and the way that they were recycled to be able to get this precious materials in there was not done in a way that was environmentally friendly. These are people who only knew that they had to break apart some of these because they had to make a livelihood and they had to sell. So the issue of … in Ghana and some interventions came from the government of Ghana in collaboration with GIZ, that’s a German corporation of the German government. So what happened was a time of pretty much creating awareness of how some of the activities that probably if it was not done, recycling was not done in a good way, would harm the environment. It took, or it still takes a lot of, I mean, education and awareness creation for people who … recycling as a livelihood to see reason as to why the change in the world is pretty much sustainable. But that is just something that is currently ongoing and essentially it will take maybe some time to get people to understand the why. And when we are talking about connecting people, we want to see even how we can explore refurbishing some of these e-waste for communities that are underserved. And that is also pretty much an issue that we see in Accra and across Ghana. There are places that don’t have access, one of the biggest hindrance is access to devices. So if … a home, whether it could be refurbished and redistributed, who knows, they wouldn’t end up in dump sites to be recycled in such a way that it’s in the long run affect or negatively impacts the environment. So some of the things that I’m seeing from research and because I’m doing the research, I did my master’s thesis in US, was essentially how across the US, there are cities that have implemented inclusion strategies. some things related to EU management and I will also switch from the context of Ghana to the US. So, the best practice in some of the cities could implement, it’s actually city level, is that the inclusion strategies that look at refurbishing, redistribution and reuse. All these three R’s are in such a way that it’s contributing to making sure that all of the things that are probably dark that’s in use do not end up in the dump sites and probably would find a home with people who probably would use it also for their work or just to learn for students who need access to devices. In essence, these R’s can only come to fruition if there are resources for gathering these in use, housing them, if there were skilled aspects to prepare them and if there was a way, a program or something to be able to redistribute them. So, in essence, my perspective is from the US angle where I look at alternative to connectivity through US refurbishing and it’s because of the background I have with the story I just shared about the case of Abu Belushi and what it says that I found out the cities are implemented by US EU strategies and there are inclusion strategies in the US and some of these cities, they are tagged the trailblazer cities and they are usually, they have reports that sends out what it is that they’re doing that people can also implement for their own context and it’s something that others can replicate and customize for their own situation. So, I know probably there are questions or comments that can come from this submission. I’ll drop it here and then hand over the floor to Carson who will give us some more nudges and guiding questions to drive the conversation towards the end of this session. Thank you.

Gabriel Karsan:
Thank you very much, Lily and that was quite interesting what you shared, the local context and concept. Canada does. Well, let me start with what is the solution to a problem if the solution is the problem? Because when I think climate change now and the internet and everything, that is where I lie. That was the context when we were discussing about this session. It is a high quality question that we really need to discuss and I believe here, all of us have a lot to discuss. There is the climate change agenda and there is the climate reality. Now, when we live in the fourth industrial revolution, when we have an intersection of energy, transportation and communication technologies, highly embedded with the internet, I believe that the internet could do more rather than contributing to the 3.6% of emissions that are happening. So I would like to start with you, Dr. Monojit. If you’d like to share your context to our obligation today, what do you think the people want to hear and what is the agenda behind in trying to create an internet that’s sustainable and serves its purpose to the people? Okay, thank you.

Monojit Das:
Thank you, Gabriel. My understanding, I’d like to share about my understanding because I may not be the whole and soul of the internet or what exactly, but I’d like to share what my understanding is. There may be different stakeholders out here, whether it’s the government, academia, you know, the civil society. But if you try to dig into a little bit, you know, if you see government, government is actually dependent on the civil society for votes. And the civil society is dependent upon the technical companies, the big giants for funding because they don’t have their own. And if you go to academia, you still need a subsidized funding from the government. So everyone is, you know, some of the other interrelated with each other, we cannot deny that. So whenever we are going to civil society events or so that talks about free speech or anything, it is somewhat the other, the company has their own agenda behind it and whether you accept or not. So, and if you see. if the countries also have the trying to develop their own internet in terms of you can say firewalls or anything as such this is just trying to frame a security layer of their own so that you know internet the whole concept I believe was introduced when it was as a preventive measure under the DARPA or the that was a military thing all together it all about so so everyone has their own interest and I believe that it will continue it’s an unending but probably there can be some sort of convergence on the topics that can be worked upon together without without any differences and that is although what we all come together that we have all so and because here you don’t have any differences every stakeholder will agree because in the morning when you say good morning it’s not just the good morning text that is taking on kb of your data but at the same time it is coming going through a lot of subsolutely because we are not having dedicated satellite for us so we are using those data centers to say you know power you or save your images it takes a lot of energy so this energy is not coming from you know most of the cases although Microsoft and other are trying to you know put it under water but it’s not all 100 you know so what we have tried to find out is that there can be probably some areas where we all can collaborate in this between this debate of multi-stakeholder and multilateralism is to come up and promote energy efficiency and that when we talk about it can be largely on the devices as my co-panelist also mentioned about the e-waste you know utilizing e-waste has been because she has experienced in a very better way and the transition to powering the technology requires a lot of energy So instead if you can handle the solar power, using solar panels or any such that all the scientist experts are there. And in terms of when I say the regulatory measures and there can are, there are regulatory measures that have a moderation in terms of using of the contains what it could be. But in terms of using of the resources, there can be uniformity, everybody can agree on this point. And particularly when submarine cable is attacked, data, we have to be dependent on someone or the other because we as an individual, there’s always debate between security and privacy that initially I was a student and still I’m a student of security studies, so I understand the perspective of security much better. Either there is a debate we have to do about security or for privacy, because we cannot ask for privacy when we don’t have our own utility. For using the social media we’re still dependent on the social media to convey our message. So here what we can do is that, that’s this submarine cables when they’re laid, it should be laid. Now more and more cables are being laid on, at least we can focus that this do not disrupt the coral reefs or particularly doesn’t pass through the sensitive areas where the rich sources of marine animals are there. So this can be a little bit of areas where we can all collaborate. And the rest of the debate can continue and I’m sure many of us, we all know that everything within the debate between the multistakeholder has to, it’s an unending process. And with this, I’d like to say thank you to the moderator and please continue.

Gabriel Karsan:
Thank you very much, Dr. Monojit. And I believe we live in one world, one sun, the same air we breathe together. So this is something that it is for us. It is people centered. It is a people problem. And there’s so many solutions we’ve heard, like using satellites and high altitude connectivity devices to help mitigate the risks of climate change and climate agenda as brought by internet connectivity. So if we pivot to solutions, let me go here, come in from India, a place where you actually feel climate change, but it’s also quite technologically progressive. That might contribute in creating an eco-friendly and green sustainable internet.

Ihita Gangavarapu:
Thank you so much. And I think great points by all of my panelists here today. Excuse my wife’s sore throat. So, when we are looking at technology and connecting it to environment, you tend to think about how it’s affecting environment. But one major component, or maybe I would say one different lens would be how do we leverage technology to let’s say monitor various environmental parameters, monitor and track them, and also add intelligence to it so that when we conduct research, come up with standards and regulations, all data back. So in that, my personal experience working with Internet of Things, IoT technology, back in India, working in smart cities. We were looking at, you know, IoT enabled smart cities in India, where we were looking at multiple verticals for multiple use cases around air pollution monitoring, energy monitoring, water quality monitoring. And it is not just from a city perspective, even at a small room, how can we monitor the CO2 levels? And even in the kitchen of the households, we have placed IoT sensors and we were able to find out the kind of food that was the carbon footprint of that particular, let’s say type of bread that is being prepared in the kitchen. So I feel like the issue gets worse of, you know, the… issue of carbon footprint, you know, it’s becoming worse. If we don’t have accurate measures of determining the emissions derived from certain processes, products, services, and technologies. I think that is where I want to bring in IoT. So when you talk about Internet of Things, it’s all of these technologies. It’s Internet of all dumb things. That’s how I would explain. You have your smart refrigerators, you have baby monitors. All of these items are connected to the internet. And they have sensors that monitor various parameters. You have a lot of data from all of these IoT devices. One really interesting use case would be having IoT in an agricultural domain to monitor, let’s say, the soil moisture. And you can have IoT for monitoring the health of a forest also. You can also predict when a forest fire is likely to happen, right? So all of these are really important applications of IoT that help you have data-backed results and actions. When you look at smart cities, IoT enables smart cities particularly. They also help you with global planning, smart waste management, and traffic management, right? So these are not necessarily seen relevant in a local context, but in a global context have a lot of impact on the carbon footprint. And another important aspect I want to highlight upon is the importance of environmental compliance, right? Every country, every region might have their own set of issues or factors contributing to the footprint. But compliance or having global standards is really important. Back in India, I’ve seen people talk about, or let’s say, NGOs or even big organizations talk about is a factor, and that is why they don’t, let’s say, follow environment-friendly services or products. So, having global standards around this will help you streamlining certain processes so the cost itself does not, no longer stands as a barrier. I think that’s the thought process that I wanted to share with all of you, and I’ll hand it over to Karsan now.

Gabriel Karsan:
Thank you very much Ihita, and you mentioned a very key element of having evidence fact based mechanisms, and I got something that you said. We have to develop a graphic. How can we measure traffic of the internet because that can be a source that can help us guide the study and the research that’s needed, because now there’s so much centralization when it comes to technology and climate change, and I believe the only solution is having grassroots voices, because in the end, people processes and platforms are the things when they’re connected, help in mitigating these issues and I’m really happy to see you all the multi stakeholder approach with all your voices here, the climate debate can be sorted when we all speak so I would like to welcome you, our dear audiences to share your remarks, share your questions, and most importantly just share your voice to the next generation to be accountable, so that it can be an internet, 100 years from now, one earth, one internet that’s protected so please the audience. You’re open to your remarks. Maybe Mr. might have something to add right let me give you the. Thank you for call.

Audience:
Hello. Thank you. I think it’s a great conversation and conversation that, especially as a speaker mentioned, it is a conversation that can make. a difference from younger participants and from the youth. One of the things that I guess I will just add to this, I think how we, what the narrative is and how we think about measuring the eco-friendliness of the internet infrastructure is something that is very important. And actually for DotAsia, we have been working on this since 2020 with APNIC Foundation. And actually the ecointernet.asia report, or you can check it online, ecointernet.asia. And what we did is exactly as was discussed is the concept that it’s the internet, not only, of course the internet has its own carbon footprint and has its own impact on climate change. But what about the positive impact? How do we think about that? And that’s sort of what the EcoInternet Index was trying to do. And this is the first year we are releasing kind of a ranking between different countries, how eco-friendly the internet infrastructure is. We look at three axes, one is on economy and we look at the balance, or I guess the replacement of digital economy versus the old more carbon heavy economy. And then the energy access, really the energy source that powers the internet. And then the third area, which you touched on just a little bit as well, is the network efficiency. The network itself can also be more efficient. So beyond the ranking, one of the things that are more relevant to this conversation we are having is also what are the things that can be done. Users as young people, maybe the policy intervention are going to be important. but I think a grassroots movement to actually make this awareness that we can be more carbon conscious users of the internet and call for data centers to be using more renewable energy. Also, one key kind of finding in the report that we found really interesting is that digital inclusion data at IGF actually comes hand in hand with sustainability. Because the most remote places, it’s better to use solar energy, it’s better to use off-grid energy and build that capacity rather than try to build the more carbon heavy way into the remote areas. So, in fact, these are things that can, I guess, jumpstart people’s awareness and consciousness that how the network itself and the infrastructure actually can also help. And the balance between them is kind of what we want to achieve and being conscious about it is very important.

Lily Edinam Botsyoe:
Thank you very much. Thanks. So, the submission was spot on and got me to remember something that came from the 2019 session on environment. And it was how somebody had mentioned that there was, like you shared, that outlook or people’s interest to see how we can use technology, in essence, to solve climate issues. And the longer we have the issue of technology contributing to what it is that we’re trying to solve, right? And that’s always a dilemma. How do we balance all of that? But that conversation had steered from the angle that there has been for the longest time, a focus on the economic angle of technology, like how technology can foster development, how you can bring profit. And there was that part of there was little to no contribution for sustainability and how we can essentially, there’s another angle that was mentioned, there were three of them, but it got us to see that there is a need for people, or I mean everybody involved to reconsider what it says that we focus on for like technology beyond to see how it can also impact, I mean the world we live in, right? So beyond profit, what else? So that’s continuous focus on economic angle for the longest of time didn’t help, and so the impact part is something that’s, the impact both positive and negative is a part where people would want to consider looking at again, and also consider seeing how you can use or leverage technology for building sustainability. While I say this, there’s been a question that has come in the chat, and I’ll read it, because I’m co-moderating with Karsan to be able to start a conversation towards this line. So somebody that’s from Ghana has said that some of these solutions articulated in the past that have been spearheaded by government are quite historic, and doesn’t take into an account a bottom-up approach. How do we bridge the gap then? There’s a question for us. So.

Gabriel Karsan:
Thank you, thank you Kaiga. First of all, just to add on that, I believe we are caught in the misinformation of what the climate agenda is, and this is due to poor quality data that most of our governments have, not fact-checked. And I’d like to thank everyone who just provided a lot of solutions which are grassroots. So with that grassroots and governmental conversation, I think which can be brought by the multistakeholder approach it could be a mechanism to build on that. But I would also like to call on Dr. Munojit, because you have some experience working with government and academia. What are your points on this?

Monojit Das:
Thank you. Particularly in this point, like, you know. it has to start from somewhere. So probably that somewhere can be us, and that is also the reason you see that all the individuals in this panel, you see, we are also associated with many things like when it comes to security, and she has spoken on artificial intelligence, she’s from security privacy, but we feel that this is something the environment, there should be, you know, as a contribution to the society and the government that where we are living in, we can try to bring out a little bit from our time to take in the efforts to not just the networking we are doing outside this, but also actually on ground to find out that where can be the synergies to collaborate and find out why the dilemma that exists between the actual implementation on ground and on paper. So here, you know, with that, why the debate between the multi-stakeholder and the multilateral, it exists, if you relate that it will continue to exist still long, I don’t know if you like it or not, but it’s actually here to stay because there are differences, you know, and which is not going to be very easy to solve, but on this particular lines, as we discussed that we’re finding out how much internet was in a communication was sent between that we can be started from basically friend A to friend B, whether my home is net, you know, compliance is having the compliance of environmental friendly or not. So in this way, you know, where it is sent in a way that his or her house or the gender they follow, so that having the same, you know, carbon neutral policy or not, this can be some sort of a step really, some ideas and it can be then implemented because internet also, the world we are connecting today did not have many telecom companies, all started in a small lab, probably little bigger than this, with different computers. So now it has become distinct in the sometime other places we are connecting. So this is the start and we can do the change.

Gabriel Karsan:
Thank you very much. I’d like to go to Annette. Much has been spoken about policy and the role of policy mitigating our crisis here. Annette, if you’d like to share because Kenya is a country that is quite a pioneer in terms of secular economy and this is built on the traditions there. Just a fun fact, it comes from the Maasai culture. The Maasai culture is a form of people and traditions who are quite incredible in their ingenuity and the use of indigenous processes to cultivate secular economy based on their mechanisms. I believe there is where we can get the solution. Annette, you have five minutes to share your points if you can hear me.

Annett Onchana:
Thank you. I would really like to maybe shift the discussion a little bit and also give an example of Kenya. First and foremost, last month, September, we had the Africa Climate Summit happening in Kenya. One of the call to action items was, and I quote, a call to access to and transfer of environmentally sound technologies, including technologies that consist of processes and innovation methods to support Africa’s green industrialization and transition. That was just a step in the right direction. Tackling the internet’s environmental footprint is key in driving the key agenda. This is therefore a call to collaboration and cooperation of all different stakeholders in tackling the digital footprint. Also, it’s often said that we are creatures of habit. We also should question what we can do at individual levels to tackle our own internet’s environmental footprint. For example, buying new gadgets. Are we creatures of trend? or do we focus on functionality? And those are just some of the mitigation efforts that we can use to reduce the internal environmental impact. Thank you.

Gabriel Karsan:
Thank you very much. You talked about inclusivity and that is key. Well, since I like putting people on the spot, Ernest, Ernest, you’re quite potent in the field of standardization. So what can standardization aid in terms of mitigating the risk of the internet that’s sustainable? Please, Ernest.

Audience:
Thank you very much, Karsan. You got me off guard on there. All right. So I like what Ihita said about the carbon footprint measuring the carbon footprint. I think that’s what I’m going to focus. So I believe this is one of the very important aspects that standards can play in measuring the providing a consistent framework to establishing and quantifying reports on methodologies and data collections on how we can measure the carbon footprint and other issues. So I believe that by following these standards and manage their carbon emission areas of improvements and have informed decision. For example, we have various standards that talk about the environment ability. I’m sure most of you know about ITU. You can just go to ITUT standard on environment ability. We have standards that talk about issues to deal with climate change and also on how we are mitigating the issues of data and how we’re spending. the data on the internet, and also just our internet footprint and how we are using the internet and everything. So I believe standards plays a role in shaping an inclusive policy on how best we can shape our environment to be more sustainable and eco-friendly. And the other thing that I would also like to suggest is that collaboration in everything is key. We have a private sector, we have a civil society, we have the technical community, and we have a government, so it’s best that we collaborate together to shape one ideology and one agenda to ensure that the internet becomes more sustainable, because we can talk about environmental-friendly, but we have the experts from the technical community who can provide us insights on the best ways on how we can do it. We’ve got civil society who can provide the awareness aspect and also reaching out to many numbers, masses, and also we’ve got the private sector who have good data and also the technology at the end of the day, and we’ve got government who can provide policies and also, like they have the final say, they have the plumbers. Some of us, we are pipes, but we have the plumbers. Thank you very much.

Lily Edinam Botsyoe:
Thank you so much. So we’re gonna come back online for anybody who has a question or a contribution, but because we want to hear your recommendations also, we’ll pass a microphone around in a bit. Before we do that, we’re going to ask Ihita to also respond to the question around how to bridge the gap, especially when government initiatives aren’t doing what they have to do and how do we bridge the gap, as was asked by Keja online.

Ihita Gangavarapu:
Actually, that’s a really important question for us to think about. I’m not sure if we have a definite answer that will just go into action right away, but it’s something that we should all think about and discuss. The first thing that comes to my mind, looking at all of you in the room, this is called consultations, right? Looking at different stakeholders coming together. and discussing on work. Every small or big effort will definitely have a good or a bad impact. So we’re looking at a good impact in terms of reducing the footprint, making sure that environment is sustainable, it’s healthy. But what is a realistic start for us? I think that’s something we have to all ideate on in consultations that, let’s say, a government facilitates in their country, making sure the private sector comes together, civil society come together. Those developing technologies from a design point of view itself are incorporating in their supply chain, in their products, in various processes. So I think consultations is one. The second point, I think, is in terms of cost. Like I mentioned in a previous message, cost is a factor or is considered as a barrier for people to develop environmentally conscious services and technologies. So incentive from the governments to the private sector or those who are making an effort to reduce the footprint, I think that’s another point I’d like to highlight. And we also have a role to play. We have to be conscious ourselves as to what our daily activities, how are they contributing to the global environment, what is the type of effect that it’s creating? Is it a bad effect? How are we propagating this? So, and I really like that Ernest mentioned standards. He took an example of IT standards for environment. And I think standardization is a third aspect because a lot of standards, not just at a multilateral level, also multistakeholder organizations standards. So I think that is where an echo of how environmentally sound decisions need to be made in the digital space come together. So yeah, those are my inputs.

Lily Edinam Botsyoe:
Thank you so much for sharing, Ahita, and we’re gonna start hearing from you. Is there anyone who has heard, read, experimented, or probably just learned of a best practice with regards to sustainability in the case of technology that would want to share? If you’d want to, by all means, put up your hand and I’ll reach you with a microphone. So we want to, as much as possible, also learn from you. And if there was anything that has, please let us know, and we can add it to the recommendations that we have. So any idea whatsoever, you can put up your hand and I’ll give you the microphone. Right, do you have a question? You have a question for the panel, maybe? No, right. Anybody wants to contribute to this conversation so far? So, oh, the professor, right. You’d want to add to it?

Audience:
Yeah, sure. Though I’m speaking a little late, but I really appreciate having interacting with you, and I really appreciate the perception and the efforts which you’re putting for the sustainable cyberspace. So I think it’s up to you and really Dr. Monagir and Ahita, all are working really hard for this thing. So I think asking a question to all of you will be a challenging part for me also, because everything is so fine and filtered. Right, okay.

Lily Edinam Botsyoe:
Thank you. Thanks for coming to support. We appreciate you and the friendship. Anybody want to ask a question about sustainability in terms of technology, green technology, clean technology, clean energy? You can put up your hand and ask, or if you have a submission, a recommendation, you can also put up your hand and I can give you the microphone to share.

Gabriel Karsan:
We have? All right, okay. Hello. We are all addicted to hydrocarbons sadly, and this is something that powers us. In this situation, there are things which work, and things which do not work. So, our session is mostly about listening to all of you. So, I will pass the mic. Tell us, what is that one idea do you think we can work on? Because this is a small group, and this is where change happens. So, please respectfully, just one thing share with us, beginning here.

Audience:
Now, I think we, not a lot of idea to add to it, because we are working on it as well, and we want to listen. But I think the way to, the narrative and the measurement is really important. And if we can come up with a way to really talk about this, I think that would be important. But I would encourage you to, tomorrow morning, there’s going to be a main session on environment, which we will be talking about this. And so, I do encourage you to bring some of this discussion there, and ask those experts there. Actually, I’ll be moderating that session as well, but please bring today’s ideas there. Now, I will pass, help you pass. Okay, everyone speak. So, hi, I’m Jasmine. I’m with Admin, actually, from .Asia Hong Kong. I don’t know what should I say, maybe something I, well, like, there’s some solution that I want to share. It’s actually for more of a sustainable housing solution. So, there’s a one side of Hong Kong called AMPD Energy. So, what they’ve been… They’re using some kind of special materials and also technology to reduce the carbon footprint and also emission when it comes to that kind of transactional housing things and also warehouse boxes. So I forget, because I’m not an expert, so I do not know how to, I forget the term how to name the materials, but I’m just calling it AMPD Energy because I have been before. So this is one of the case that I would love to share, especially housing, you know, as you guys know, it’s quite a serious problem in Hong Kong. So definitely we want to reduce more carbon footprint and also impacts when it comes to housing materials and infrastructure. So that’s just a little thing that I could contribute. I just think about it, so I’ll just pass my mic to younger generation, also from Hong Kong. Yeah, I’m from Hong Kong. All right. So my name is Ethan. So I’m from the Wong Pao Foundation. So this foundation actually aims to reduce paper waste and give folks second lives. So actually we have already some results in one year since it’s actually a pretty new idea, or you can say our company. And I think, and we will have a lightning talk on Wednesday. So if you guys are free to join, we are very welcome you to join. Yep. That’s all. Oh, okay. Hello, thank you for your invitation. I’m Danny, also from Wong Pao, from Hong Kong. And actually we have same group that we’re using scenario, and I think we’ll have a discussion about how to… to build a sustainability environment. I have some point to share what I have encountered in Hong Kong. Because everybody wants to talk about sustainability, talk about protective environment, but when you want to implicate it, you will find that you always need to have a business model. And you always have to raise funds. You always have to have people who will give you money. But if you really want to just raise funds, or the money is coming from the government, it’s always very difficult. So I find that many, many people have the awareness to protect the environment, have some initiative, but to the final round, no money. No business to support. When you go into the business, they always want to make some impact, but the impact is not social impact. It’s about the business impact. So I think it is a gap between the commercial world and the sustainability initiative. And I believe, so that’s why we are here. I believe we have some synergy. We have to develop many, many kind of synergy. The ESG environment is possible. If just one initiative, I think it’s always hard to thrive because no financial support always. So that’s what I kind of want to share with you and hope we can have some synergy or other ESG solutions. Thank you. Oh, hello. We come from the same group. So because Danny has already mentioned about the things that happen in our company, but I want to address some situations that the world… Sometimes people will say maybe Hong Kong is a well-developed cities, but in terms of the SDGs, we have a very slow pace. anything of growth because right now we are just doing things like, okay, recycle the bottles and recycle the can and recycle the paper and that’s it. So what we are coming over is how we make use of the technology and then how to make the sustainable, the path move forward. So that’s why we come here to absolve the technology that or the works have been done over the world so we can bring it back and then give some, provide some solutions to maybe the government or maybe even how can we promote it within Hong Kong to chase it, the steps of how the world is working on. So these are my kind of thoughts that we want to share with you. Thanks. Okay, thank you for giving the mic. I just do research at University of Namibia. I’m a researcher at the University of Namibia. And yeah, I was just sharing a bit earlier at the main session on artificial intelligence. I was hearing said center saying, you must use the most powerful issues, including climate change issues. In my opinion, it’s always interesting to point out that of course, there are some possibilities with AI to address issues like climate change, but we always need to stress that it has an impact. on climate change. And then a question could be shouldn’t we wait and research first on the impact of AI before trying to develop these technologies to address the issues? So that would be a question for you, actually. Question. That’s a good sticker. I’m going to ask. Thank you. Thank you. Thank you. My name is Terry. And I’m also from Hong Kong, so I skipped a part. From the speech, from talk, I’m always thinking about a question is what can I do? What can me as a youth, as a youth from other side, can do, can contribute to the youth, to the people in the global South, in Africa? Because as we know that we are facing different situation of digital technology, for example, digital inclusion. In Hong Kong, we have the most advanced access and also availability of internet. But our digital literacy actually is not very high. But in Africa, in the global South, it’s another situation. Maybe you are still working on how to get the digital devices. So this is a lesson, or maybe it is a question for me to think about what can I do for youth, or what can I do for the other side of the whole world? Thank you.

Lily Edinam Botsyoe:
Thank you so much. So we have two questions right now. We have first one on, shouldn’t we be researching more on AI to see how the impact of AI wouldn’t negatively impact the environment before we develop it? it’s more, that’s how do we make sure that AI is serving us and serving us well and not contributing to the issue. One of the things I think Enes hinted was around policies. Usually when we say policies, it feels so far-fetched, but it’s some of the recommendations that we give to government for them to be able to see why they may be at disadvantage to something that they deploy as technology. And I see that because technology is fast evolving, one of the things that we usually would see is it’s just really fast-paced policy. People adapt technologies, probably you don’t even have the basic infrastructure in place, how to make sure that it is deployed within the confines that protects humans and within the confines that can make it beneficial to society. So it’s what you’re seeing as a problem. Policies that ensure that there is the right deployment, right usage, sometimes are missing. And the process of building policy is sometimes bureaucratic, so lengthy, it’s not swift to implement. And so for a long time, the disadvantage will be felt before there is a curative approach to addressing it. It’s usually not done before things happen. Nobody thinks of it till it’s really an issue, and that’s what we are seeing. So if there were a policy that probably were even global, that people are learning from, countries are adapting and customizing for their context to be able to deploy AI for climate change initiatives, who knows, it will probably be helpful. So my best thoughts was policy, because it helps us to be able to create a certain kind of guiding principles for how some of these technologies are deployed. You can add to it, right? So Ihita also wants to add to the point. Ihita?

Ihita Gangavarapu:
Yeah, thanks Lily. So she’s given a policy dimension to it, and I think it’s really important that she starts guiding principles. you’re looking at AI, I think it’s definitely here to stay. So, and we’re already, I think, past, I don’t know, we still use the term emerging technology, I think it’s quite emerged now, and quite integral part of basically everyday life, especially those who have, let’s say, use the services regularly. But in cyber security, we use the term security by design. So, can we have something similar when it comes to environment, right? Can we have something environmentally conscious by design? Like, I think it’s a great question, because at the very starting of a supply chain, when you’re designing development, then maybe comes, you are deploying it. Can we have something there? And I think that is where if you have policy starting right at the beginning of coming up with the technology itself, then the whole chain itself is very conscious, and to what could be the repercussions of not following a particular policy on the carbon footprint. And that’s the kind of thought process that I have. Be happy to add more to it. But given the time, I think I will hand it over to the moderators.

Monojit Das:
In addition, there’s one thing that whether we should first focus on how much, you know, potential or how much threat can AI pose to us, then we can think on the extension part. It is an unending, because you see, in a few months back, the very top CEOs decided, you know, we should stop researching on AI for six months. Probably that wasn’t the news going around. But did they stop working? Did the company stop? No, they didn’t stop. So, it’s like, they’re working and they want others to stop, you know. So, at least they want to be on the advantageous side. That’s what I feel. Because unless and until we research to what extent we can go, we cannot find out what can be the drawbacks. It should be parallel, you know. We have to build a house, then we can find whether it’s earth-resistant or not. Unless you can shake it, before that you can’t make a two-story and then think whether the six-story will fall or not. We have to build a six-story and then probably we can shake it to see whether it falls or not.

Lily Edinam Botsyoe:
I like the analogy at the end, with a two-story and a six-story, like you’ve got to test it to see if, that’s interesting. There was a second question actually, it said, what can I do, right? I think we as individuals too, I think we’ve been demanded from different stakeholder groups the question is, what can we do? I would start from the awareness creation. So you’ve had something from here, you want to go and probably just, you can use a social media. That’s sometimes like very little, but if it were consistent and continuous and creates awareness in such a way that people are seeing, then people get to know that there’s an issue and there is a demand for some hearing, for people to take some action. So I would take that angle, I know some others would want to add to it. So for me, I feel like one of the things that people would need to be able to swing into action is a knowledge and knowledge sharer, to be the one to create awareness and help people to swing into action. I would have, who wants to add something? What can individuals do?

Gabriel Karsan:
Thank you very much for those contributions. All I want to say is what we should do is an important question. Be bold, be vocal, be accountable. It’s one internet, it’s one world. The solutions are here. We need to provide solution digital leadership. And I’m so happy for our colleagues from Hong Kong and all the room who shared this perspective. It shows how much we are connected, the internet, interconnected networks, but behind those nodes are people. And the solutions are always people centered. The first, second, even third industries, but they were mitigation based on challenges. We see how eco-friendly design has gone in hardware. We can do it as well on the internet. and make it sustainable. So, I will welcome my panelists to finish to give some parting words. Just mine, I remind you, be bold, be vocal, and let’s do it, let’s cooperate. And thank you all.

Monojit Das:
Thank you. Your turn. Well, thank you. I feel if you see this way, like the topic that we have come up with might have attended other sessions as well. So this topic itself, getting selected and getting an opportunity to present in front of you and you opening up to yourself. It still speaks that we have this idea is a success. And I’m sure the success has already started to come out from yourself. And with this, as my co-panelists have highlighted, it can be started with small steps, taking initiative from making groups. And that will lead to us probably in the next session and next year we’ll be having a bigger platform to discuss upon. Thank you.

Ihita Gangavarapu:
Yeah. So I love the motivation and the points by Carson. And I also agree with the points made by… That is something we’d be able to do with the internet, right? And at the end of the notes, the people. So we’ll be able to talk about what solutions we require, what are our concerns, and we should leverage the internet for that. In addition to that, what I’d like to say is that there are organizations and initiatives that are working and putting in their efforts to create products and services that are environmentally conscious, right? And we should be identifying them and supporting them. So I think that is something we should be working towards and helping them. Thank you.

Lily Edinam Botsyoe:
So I get a last say and this is a thank you to all of you for being an engaged audience for you online. Thank you. for making the time to be a part of the conversation. We’ve come to see that technology essentially can be helpful for mitigating what we see when we talk about climate change agendas, we talk about the negative impact of technology, and also can maybe be contributing to what the problem is. And I’ll make the point clear that it behooves us to start the conversation, get government, get authority, get people to be aware. And at the end of this, let us know that connecting the less billion, connecting the unconnected is inherently a matter of sustainability. Thank all of you for coming. The notes will be shared with all of you and let’s create some change. Thank you. We can all come here and join us for a group picture, please. So please, let’s get a group photo and thank you very much, arigato. Thank you. Oh, sorry. Excuse me. Thank you. This is everybody? Everybody wants to speak on this session? Yeah, it’s just all of us. Like a fun one, right? Let’s do a fun one. Hey. Let’s say yo-yo. Yo-yo. Great. Thank you so much. Oh, my god. I made it straight to my house. All right. I’m just going to do that one more time. Let’s see. Oh. Yeah. Everybody’s so nervous. And I love that no one’s talking to you right now. Because I love that. Yeah. Okay. Maybe? I will throw it. You see, it’s supposed to be moved up. Here is the key. Here is the key. I am going to hit it.

Annett Onchana

Speech speed

118 words per minute

Speech length

252 words

Speech time

128 secs

Audience

Speech speed

149 words per minute

Speech length

2275 words

Speech time

916 secs

Gabriel Karsan

Speech speed

189 words per minute

Speech length

1153 words

Speech time

366 secs

Ihita Gangavarapu

Speech speed

179 words per minute

Speech length

1510 words

Speech time

506 secs

Lily Edinam Botsyoe

Speech speed

191 words per minute

Speech length

3182 words

Speech time

1000 secs

Monojit Das

Speech speed

190 words per minute

Speech length

1729 words

Speech time

545 secs

Increasing routing security globally through cooperation | IGF 2023 WS #339

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Annemiek Toersen

The Netherlands Standardization Forum plays a significant role in promoting interoperability and provides advice to the Dutch government regarding the use of mandatory open standards. The forum consists of approximately 25 members from various sectors, including government, businesses, and science. One of their key efforts is the compilation of a list of mandatory open standards, primarily focused on public sector organizations. This ensures effective communication and information sharing between different governmental entities.

Open standards are essential for secure and trustworthy data exchange, enabling seamless communication and compatibility between different systems and technologies. They also contribute to accessibility for all individuals, regardless of their technical capabilities, and promote vendor neutrality by reducing dependence on specific vendors.

The Netherlands Standardization Forum utilizes the internet.nl tool to monitor and measure the growth of internet security standards and other open standards. This tool helps conduct annual reviews of procurement tenders, assessing the government’s performance in implementing open standards. The forum reports these results to the cabinet, ensuring transparency and accountability in open standards adoption.

Annemiek Toersen, a supporter of the forum, advocates for the use of Resource Public Key Infrastructure (RPKI) to prevent Internet hijack. To support its adoption, Toersen proposes sponsoring courses on RPKI to educate and train personnel within Dutch government institutions.

Education and workshops play a crucial role in promoting the adoption of open standards. By providing information and training, governments can make informed decisions and effectively implement these standards. The European Union (EU) also monitors the adoption rate of internet standards, including RPKI, to ensure that European countries stay up to date with the latest advancements.

Internet.nl, an open and accessible tool, is available worldwide for implementation. It has already inspired countries like Australia, Brazil, and Denmark to adopt it. The availability of an English version facilitates global cooperation, and the team behind Internet.nl offers assistance and support to ensure successful implementation.

For a procedure to be accepted, substantial deployment and support are necessary. The involvement of multiple organizations helps validate its efficacy and practicality for wide-scale implementation. Public discussions and workshops are necessary to improve routing security and advance technologies like RPKI.

In conclusion, the Netherlands Standardization Forum plays a vital role in promoting interoperability and advising the government on the use of mandatory open standards. Open standards facilitate secure data exchange, accessibility, and vendor neutrality. The forum uses the internet.nl tool for monitoring and measurement, and Annemiek Toersen supports the use of RPKI. Education and workshops are crucial for the widespread adoption of open standards, and the EU monitors the adoption rate of internet standards. Internet.nl is available worldwide, and the acceptance of a procedure requires substantial deployment and support. Continued efforts are needed to progress security measures and advocate for improved strategies in the digital realm.

Olaf Kolkman

Routing security is a critical concern when it comes to safeguarding the core of internet infrastructure. The argument is that protecting the routing space is vital, as it serves as the backbone of the internet. To address this issue, a prioritization of routing security is necessary.

The Mutually Agreed Norms on Routing Security (MANRS) have been established to tackle routing security challenges. MANRS offers a set of measures that participants in the routing system agree to adopt. Different programs are available for Internet Service Providers (ISPs), Content Delivery Networks (CDNs), Internet exchange points, and vendors. The MANRS Observatory helps track incidents and community adoption, ensuring transparency and accountability.

Another proposed measure is the implementation of certification schemes to enhance routing security. Participants can obtain certification through an audit scheme, potentially increasing their market value. The argument suggests that a certification scheme could create higher value in the market, thereby incentivizing participants to prioritize routing security.

Collaboration among routing system participants is emphasized as a crucial aspect in addressing common action problems. The lack of visibility among participants is seen as a challenge, but by making each participant’s commitment to routing security visible, this issue can be overcome. Increased visibility could incentivize the adoption of routing security measures and promote a more secure routing system.

Olaf Kolkman, although not directly involved in the process, raises a question about the specific Request for Comments (RFCs) used in the initiative. He suggests forwarding the question to individuals such as Bart or Rรผdiger, who may have the answer. This demonstrates a willingness to seek expertise and knowledge from relevant sources.

In conclusion, securing routing is of utmost importance for protecting the core of internet infrastructure. Initiatives such as MANRS and certification schemes aim to enhance routing security. Collaboration, visibility, and certification can incentivize participants to prioritize and adopt routing security measures. Seeking input from relevant experts highlights the commitment to obtaining accurate information. An integrated approach is necessary to address challenges and ensure the secure functioning of the routing system.

Verena Weber

Routing vulnerabilities persist in the world of internet security due to various challenges. These challenges include the collective action problem, where the actions of one actor depend on others in the system. The cost of implementing routing security practices is also a challenge. Furthermore, available security techniques require a layered approach, which can increase the risk of mistakes.

To improve routing security, there is a need to enhance the measurement and collection of time series data on routing incidents. Governments can support this effort by funding and ensuring continuous measurement. Several countries, such as the United States, Netherlands, Brazil, and Switzerland, have shown a proactive approach towards routing security and can lead by example.

Governments can play a significant role in bolstering routing security by implementing best practices, facilitating information sharing, and defining common frameworks with the industry. Information sharing and wider adoption of implemented practices can also contribute to improving the situation.

At a broader level, awareness-raising and training at the EU level are important to equip individuals with the necessary knowledge and skills to tackle routing security challenges effectively.

In summary, routing vulnerabilities persist due to various challenges, but governments have an increased interest and can play a crucial role in improving routing security. By actively engaging in efforts to enhance data collection, implement best practices, and facilitate information sharing, governments can strengthen routing security. Additionally, awareness-raising and training at the EU level are essential for addressing routing security issues effectively.

Moderator

During the discussion, operators expressed concerns about the deployment of Resource Public Key Infrastructure (RPKI). Some operators were hesitant to pay for routing securities, raising doubts about the effectiveness and value of such investments. These concerns indicate a negative sentiment towards RPKI deployment. It was also noted that further steps, including ASPATH validation, are needed to enhance routing security measures. This suggests a neutral stance towards the need for additional measures to improve the security of routing.

Operators’ skepticism about investing in routing securities reflects their reluctance to allocate resources without clear benefits or guarantees. This negative sentiment emphasizes the need for persuasion and reassurance to encourage operators to adopt and invest in routing security measures.

Furthermore, there was a request for clarification regarding the tracking of governments on internet.nl. The concern raised implies uncertainty or confusion about the extent to which governments can monitor or track activities on the internet.nl platform.

On a positive note, it was highlighted that Annemiek Toersen’s team provides assistance and inspiration to other countries through the English version of internet.nl. This knowledge exchange among countries, such as Australia, Brazil, and Denmark, illustrates the positive impact Annemiek Toersen’s team has in promoting the use of internet.nl and its code.

Lastly, the moderator sought clarification from Annemiek on RPK standards during the discussion, indicating a need for further understanding or insight into the implementation and impact of RPK standards.

In conclusion, the discussion highlighted concerns and skepticism among operators regarding RPKI deployment and investing in routing securities. The need for additional measures, such as ASPATH validation, was emphasized to enhance routing security. There was also a request for clarification regarding government tracking on internet.nl. However, the positive contribution of Annemiek Toersen’s team in supporting and inspiring other countries with the English version of internet.nl was acknowledged. Further clarification on RPK standards was sought from Annemiek, indicating a desire to gain more insights into this topic.

Katsuyasu Toyama

The deployment of Resource Public Key Infrastructure (RPKI), specifically the use of Route Origin Authorizations (ROAs), varies across regions. Europe and the Middle East have greater adoption of ROA, with approximately 70% usage, while Africa and North America lag behind with less than 30%. This difference was observed in data from APNIC Labs.

One of the contributing factors to the slower adoption of ROA is the lack of knowledge and skills among internet service provider (ISP) operators. In Singapore and Thailand, it has been reported that some operators lack the necessary expertise to effectively implement ROA. This skills gap impedes the deployment of ROA and highlights the need for more practical understanding in this area.

Another challenge arises from the operation of ROA cache servers, which are currently available as open-source software. Efforts in Japan are being made to provide ROA cache servers at Internet Exchange Points (IXPs), but concerns have been raised regarding the security of the communication channel between routers and the ROA cache. The absence of encryption raises security concerns and emphasizes the need for improved measures in this domain.

To encourage broader adoption of RPKI and ROA, it is recommended that organizations or governments issue recommendations for their deployment. In Singapore, for instance, governmental regulations have helped to some extent in promoting ROA implementation. Such industry or country-level recommendations can lead to wider adoption and improved routing security.

The occurrence of route leaks underscores the importance of striving for improved global routing security. Route leaks have negative impacts on internet stability and security. The need for enhanced security measures, such as Autonomous System Path (ASPATH) validation, is evident. However, ASPATH validation is acknowledged as an imperfect solution that requires further development to address existing limitations.

The enforcement of RPKI is currently driven by penalties imposed on non-compliant entities. Although this serves as a motivation for deployment, operators remain skeptical about investing in routing securities. Their skepticism may stem from concerns about practicality, effectiveness, and potential costs associated with implementing such measures.

In conclusion, the deployment of RPKI, particularly the use of ROA, varies across regions, with Europe and the Middle East leading in adoption. The skills gap among operators, challenges related to ROA cache server operation, and operator skepticism towards investing in routing securities present obstacles to wider adoption. However, recommendations from organizations or governments, improved global routing security measures, and ongoing efforts in ASPATH validation can contribute to broader deployment of RPKI and advancement in routing security.

Audience

The analysis examines the discussions surrounding the implementation and adoption of Resource Public Key Infrastructure (RPKI) and routing security. Various speakers shared valuable insights and perspectives on the subject.

One speaker highlighted the commitment of a non-profit organisation to provide free online training in technologies such as BGP security and RPKI. This initiative aims to assist individuals facing budget constraints that prevent them from travelling or attending physical training sessions. The organisation’s focus on social impact rather than profit-making reinforces their dedication to promoting knowledge accessibility.

Another speaker emphasised the flexible training programs offered by the organisation. They expressed a willingness to negotiate tailor-made programs to suit the community’s needs. Additionally, they were open to discussions about offering discounts for training sessions, considering factors such as the number of participants and potential impact.

The analysis also discussed the automation of RPKI, with contrasting viewpoints presented by two speakers. One speaker suggested that automation has facilitated the expansion of Public Key Infrastructure (PKI) with web servers, citing the example of Let’s Encrypt, which provided free certificates based on Acme. This automation was seen as a catalyst for PKI expansion. However, another speaker disagreed, emphasising the importance of resource holders personally signing statements within the portal. They argued that the process of signing statements is not so complex that it should be automated, underscoring the significance of individual responsibility in this regard.

A digital platform called internet.nl was mentioned, which currently checks only Route Origin Authorizations (ROAs) and not Route Origin Validations (ROVs). This limitation in checking ROVs was acknowledged, as it necessitates separate ISP space that has an invalid route to perform the check. This insight provides context to the capabilities and limitations of the internet.nl platform.

The European Union (EU) was mentioned as monitoring the adoption rate of modern internet standards, such as RPKI and “manners.” This observation indicates the EU’s interest in promoting the usage of these standards and highlights their commitment to enhancing internet security and infrastructure.

The analysis revealed the existence of several Request for Comments (RFCs) that have established RPKI-related standards. These standards pertain not only to the establishment of ROAs and origin validation but also introduce new objects in RPKI, such as the upcoming “ASPA.” The inclusion of these standards demonstrates ongoing efforts to develop and enhance RPKI.

The incomplete implementation of BGP-SEC, a standard specifically designed for RPKI, was a concern discussed by one of the speakers. They expressed their worries about the lack of comprehensive BGP-SEC implementation, which requires significant resources. This issue was described as often overlooked in discussions surrounding RPKI and routing security. This observation highlights a potential blind spot within the ongoing discourse and emphasises the need to address this gap to ensure the effective implementation of RPKI.

The audience also raised important points regarding the need for discussions and improvements in the implementation and deployment of BGP-SEC and routing security. It was suggested that the current focus seems to be on the immediately available options, potentially neglecting the necessity for further advancements and enhancements in the field.

Furthermore, resource allocation was deemed crucial for the future development and deployment of RPKI and routing security. The audience stressed the importance of securing necessary resources, including personnel and adequate security measures, to effectively drive advancements in these areas.

In conclusion, this analysis provides a comprehensive overview of the discussions surrounding RPKI implementation and routing security. The insights shared by various speakers shed light on the commitment of organisations to offer free online training and tailor-made programs, the potential of automation in RPKI, limitations of existing platforms, the EU’s monitoring efforts, the establishment of RPKI-related standards, concerns related to incomplete BGP-SEC implementation, and the need for discussions and resource allocation. These discussions contribute to a holistic understanding of the challenges, opportunities, and directions for improvement in the realm of RPKI and routing security.

Bastiaan Goslings

The analysis of the provided information reveals several important points regarding routing security and the adoption of open standards in the internet infrastructure. One key aspect is the Resource Public Key Infrastructure (RPKI), which offers a more secure method of routing security by using cryptography to verify the originating network of routing information. This prevents impersonation and unauthorised usage. Efforts to promote the use of RPKI and improve routing security are seen as crucial and should be intensified.

The MANRS initiative also plays a significant role in protecting the core of internet infrastructure by promoting routing security. Bastiaan Goslings, a proponent of the initiative, is positive about its next level, MANRS+. There is also an encouragement for participants to spread awareness and convince other networks to join MANRS. This highlights the collective effort required to enhance routing security.

RIPE NCC plays a vital role in providing training courses on RPKI and BGP security, which are essential for the adoption of open standards. They offer free online courses, conduct webinars and host meetings to educate individuals on RPKI and other routing security measures. Additionally, RIPE NCC is open to providing tailor-made trainings and considering discounts based on the potential impact and volume.

While RIPE NCC has not implemented an incentive programme like SIDN for adopting open standards, the idea is open for consideration. The decision to adopt such a programme would require the agreement of the members. This emphasises the importance of collective decision-making within member-based organisations.

The automation of creating RPKI space is not a straightforward process and may be perceived as technically complex or costly. However, it is worth noting that automation, as exemplified by the creation of “Let’s Encrypt,” has proved successful in facilitating the adoption of open standards in the Web PKI realm. This suggests that further advancements in automation could address the perceived complexity associated with implementing RPKI.

Regarding certificate validation, Internet.nl primarily checks Regional Internet Registry (RIR) and Autonomous System (AS) Operator certificates, rather than Route Origin Authorisation (ROA) certificates. This underlines the specific focus of certificate checking on the platform.

The analysis also emphasises the need for further improvement beyond the creation of ROAs and validation in internet regulation. Discussions have taken place regarding organising workshops for Dutch government policymakers and cooperation with RIPE to achieve these improvements. This signifies an acknowledgement of the necessity to go beyond the existing tools and approaches to enhance internet regulation.

In conclusion, the analysis reveals the importance of routing security and the adoption of open standards in the internet infrastructure. Efforts to promote the use of RPKI and improve routing security are crucial. The MANRS initiative plays a significant role in this regard, with supporters like Bastiaan Goslings actively encouraging participation and spreading awareness. RIPE NCC provides essential training courses and is open to considering incentives. Automation of the RPKI space and further improvements in internet regulation are also areas of interest. Overall, the analysis highlights the ongoing efforts and challenges in enhancing routing security and promoting the adoption of open standards in the internet infrastructure.

Session transcript

Bastiaan Goslings:
Good day, everyone. For those who are not seated yet and do intend to attend the session, please be seated. We’d like to start. We’re already a couple of minutes over time, we have a busy schedule. My name is Bastiaan Gosselinks. I work for the RIPE NCC, and I have the honor to be coordinating and coordinating a session called Increasing Routing Security Globally Through Cooperation. I think we are all very much aware, you know, we’re a couple of days into the IGF and it’s been mentioned multiple times what the impact of the internet is and the essential role it plays in many of our societies. So whether it comes to work, leisure, education, doing business, even public service, more and more everything is being delivered online and we’re so accustomed, you know, to using apps and devices to communicate and to consume content. It’s a given, like electricity coming out of a plug or water coming from a tap. The internet just works, which is a great thing. So seeing, you know, what happened during the COVID crisis and a lot of traffic, you know, more being generated because people are working from home and learning from home. But because of all of this, the dependency of underlying functionalities that support our uses of online services and apps, we need to take a closer look. And in this case, we’re going to do so at the routing that underpins the internet. It’s actually one of the building blocks that everything else depends on, the actual exchange of internet traffic. So what we do is get some experts from different stakeholders on a panel and, you know, see what their different perspectives are, either regional or from a stakeholder view and see, you know, what answers we can provide potentially and hopefully also have a discussion with people in the room and people online. So I have the honor to present, firstly, Verena Weber, Policy Analyst for the OECD. And then here to my left, Katsuyasu Toyama. He works for the Japanese Internet Exchange Point, JPNAP, and is also chair for the Asian Pacific Internet Exchange Point Association, APIX. Then on my right, I have Annemie Coutours, and she works for the Dutch Forum for Standardization. And they’re doing some interesting stuff with regard to certain routing security tools. So she’s more than happy to share that more with you. I will be providing a perspective from the RIPE NCC, what we do, both technically and in terms of engagement and community and, you know, and spreading the message. But I especially also want to thank people who were involved in preparing this, Lauren Crean from the OECD and Benjamin Boersma from the Dutch Internet Standards Platform. This is a sequence of the speakers, and we aim to have at least a half an hour of interactive dialogue with the audience. So we look very much forward to hear what you think on this. But let me start off. Routing security, RPKI specifically as the tool, which I’ll go into a bit more detail later, and the role that the RIPE NCC provides as a regional internet registry. So what actually could you consider to be the internet? Well, in this case, we’ll use a definition that the internet is actually a collection of individually managed networks. In technical terms, those are called autonomous systems. There are more than 70,000 of those in the routing system. And for people to actually experience one internet, these networks need to seamlessly or at least, you know, not visibly for others outside of this ecosystem. They need to interconnect with each other in order to create an end-to-end connectivity from every single endpoint to every single other endpoint. And in order to do so, these networks need to speak to each other. They need a common language. And that’s what we refer to in internet terms as standards and underlying protocols. There’s no central coordination. It’s actually an organic thing, the way that networks interconnect. It’s mostly based on commercial business relationships and need for reachability. But there’s no like central management or authority, you know, that runs all of this. So the protocol, the language that these networks speak is called the border gateway protocol. This is actually quite an old protocol. It’s from the 90s in the previous century. And in theory, using this protocol as a network, you’re the one that sounds maybe very obvious, but you’re the one that should be announcing your network identifier and the IP addresses behind that, right? That’s what the end users and the end devices use. You’re the one that should be announcing your network. But with this protocol, it’s actually technically possible for anyone to announce anything. This protocol actually assumes that everyone is telling the truth. There is no real hard built insecurity in this protocol. So again, in any autonomous system, any network can announce any prefix subset of IP addresses. And even, you know, like all these 70,000 networks are not directly interconnected to all of each other. So most of the time, traffic, you know, goes through a series of networks before it reaches its destination. This sequence of networks is called an AS path. And that’s not even, that’s also like a given, if you receive such an announcement, you will in essence accept it. There’s no way to actually verify whether it’s correct or not. When this information is not correct, and people just share this amongst each other, people propagate to the entire internet. Again, as I mentioned, this is an old protocol. And implicitly, it actually assumes that everyone, you know, that uses it and interconnects with each other is trustworthy. And when this was developed, people knew each other. So it’s like this peer review, and you know, this, what we call in Dutch social control, that was part of it. The main goal was just to make it work, no overhead. And there were no like ex ante security concerns here, because there was no need to. And again, no single authoritative source, no central control. Which makes this susceptible to incidents. If you think of abusive behavior, an attacker can use this, right, to impersonate itself as another network, to intercept traffic from others, to prevent another network, you know, from being reachable at all, basically disappearing from the internet. And if you are able to redirect traffic, you can use it for other purposes, maybe stealing credentials, stealing cryptocurrency, sending spam. But that’s when malicious purposes, there’s a real intent to do something bad, that actually most of the time, it’s accidents, you know, people configuring routing sessions, configuring their routers, and just making typos, and this wrong information then being propagated on the internet. So in order to make this routing more secure, and again, that might sound quite obvious as well, you need to be able to verify the routing information received from another network. And it has IP addresses, an announced prefix that you receive, has it actually been originated by the network that is entitled to do so? Has this sequence of networks, right, that actually point to the originating network, is that correct? Has that been tampered with? You want to prevent the propagation of incorrect routing information. So where does the RIPE NCC in this case come in? We are a regional internet registry, a term I already used. There are five of those globally. And we cover the region, Europe, Middle East, and central parts of Asia. And that is where we, for our members, which is mostly like networks, organizations running networks, traditionally ISPs, that need IP addresses and AS numbers to run their networks, they come to us in order to receive those resources. And these are the resources that are needed to actually route internet traffic. So what we do, we distribute those resources, we register them in a public database, everybody can check who is responsible for what. So we can guarantee the unique holdership, maybe imagine IP address, you know, can only distribute it once, it can only be used by one, for one endpoint, and not multiple times. And combined with that, we can distribute certificates to our members, who can then cryptographically sign their IP addresses, and the relationship with their autonomous system number, so their network identifier, in order then to take the next step for others to check who is entitled to use which IP addresses and what network number. And that’s where the term resource key public infrastructure steps in as a tool, RPKI, so to speak. So how does RPKI improve routing security? Well, as I mentioned, it makes cryptographically with a certificate and a statement with regard to an AS number, a network identifier, and the IP addresses that are associated with it. And these cryptographic statements can then be used by other networks, you can download them, use specific software tools for that, routing validators are called, to actually verify whether statements they receive, when you connect your end router to the rest, to other networks, and receive routing announcements from them, route announcements from them, to actually verify whether those are correct or not. And that, you know, refers to the originator of an announcement, that does not really say anything about the path, the sequence, and, you know, the other networks that are mentioned in there. But that’s actually something that RPKI can also play a role in in the future. And that’s then called path validation. So the five RERs, we are one of them for our region, and then globally, there are five of them, they act as trust anchors here. And then the whole signing of resources happens in a hierarchical fashion. So the RERs distribute certificates to their resource holders, to their members, and they can then use those certificates to sign their resources, and create statements, and those statements are called route origin authorization statements, ROAS. I just want to comment on, to make it more specifically, you know, what RPKI can contribute here, there was already, there still is an older system in place called the Internet Routing Registry. It’s from the 1990s. RPKI was developed shortly after 2010. I think, you know, the IRC was published in 2012. And the IRR system is basically databases as distributed, I think there are 12 of them, when networks, you know, can register their routing objects, you know, and their routing policies. So their AS numbers associated with the prefixes, the IP addresses that they’re responsible for. The thing there is, if you use those database, you need to maintain them, which can be automated to some extent, but it is the responsibility, you have to actually see to it that the information in there is accurate. And here too, the thing is, okay, the information is there, and it’s very useful if it’s accurate, but there’s no hard way of actually verifying that it’s correct, what is in there. And that’s where RPKI steps in. It’s not only because the RERs are responsible for this system, they actually have control, they distribute these resources, they have insight into who can use what, so they have control over the accuracy of the data, so which network can use which IP addresses. And because cryptography is involved, it brings a hard form of trust. So as a mechanism, it’s quite powerful. It can prevent hijacks and route leaks, and I mentioned the stepping stone towards path validation. But the thing is, it’s opt-in. On the one hand, it’s good, right? You’re not going to enforce this, at least not where we’re at now, for people to use this. But there needs to be incentives for people, actually, to start doing this. So in terms of adoption, on our own side, you see it can differ quite substantially per region and per country. On average, on an aggregate level, close to 45% of allocated IP address space, IPv4 in this case, is covered by these statements. So on the one hand, that’s good, and we see a growing line. That’s not going fast enough. So what are the potential factors limiting adoption of routing security, and in this case specifically, the adoption of RPKI? And I think my colleagues here in the panel will go into more detail with regard to their experience in this. But you hear that implementing it is technically supposedly not trivial, especially if you have a quite complex network and customers and suppliers you’re dealing with. The thing is that while many, many incidents happen, they don’t really seem to have a visible impact, so an impact that scares people and that gives them a reason to act upon. And there’s a collective action problem, so to speak. So if you implement this, you basically help the rest, but there’s no immediate, at least that’s what people perceive it, that’s not there. There’s no immediate benefit, so you make your cost, you make the effort, and what does it then bring you? Yeah, it makes it for others easier, but while on the other hand, you think if all your services are provided online and it’s about continuity of service and also reputation, right, damage that you could do to reputation if things go bad, that there definitely is a reason to get your act together. And I think the OECD will also maybe go into that in a bit more detail, but there seems to be, it’s a bit of a challenge to get really robust data on this and insightful data and also that others and policy makers can use. So briefly before I end, so what’s the IRIPE NCC doing here? Well, it’s hard in our strategy, strategic goals to operate a resilient, externally auditable and secure resource certification trust anchor, and combined with that to promote the use in this case of RPKI. We take our role of trust anchor very, very seriously. And yeah, to promote the use, not only are we here at the IGF, right, to talk about this, but we do a lot of training. We provide free online courses, so anyone can go to academy.iripe.net and create an account and take the courses for free online. For those that prefer to have a physical trainer, and we do that initially, we did that especially for our members. We travel around the service region to give in-house trainings to people, also with regard to routing security and best practices. And we host webinars, so then to make it less of an impediment for people that travel, you can do it online. And we do so also on request. So like if there’s a need to, for instance, you know, we did such a thing with the Dutch government, we can organize tailor-made trainings. And then obviously the outreach in a community building. In part, we host many meetings where we talk about this and update the community with regard to what we’re doing and where we’re at. We host ROAS signing parties, so just get people in one room, right? Because it might seem, yeah, quite a large threshold for people to actually do this, but if you then take them by the hand and show them how easy it can be done through the portal and you can do it on the fly, and before you know it, you have your ROAS and everything is in green. So that’s really, that’s quite successful. Technically speaking, we are preparing ourselves for the introduction of ASPA, Autonomous System Provider Authorization, and that’s meant to take it a step further when it comes to a path validation. Once the standards have been finalized and published, we will be ready to support this. And with regard to the infrastructure itself, the hardening of it, of security, we’re working on auditing it and having it formally certified. Also in terms of, you know, we’re not regulated, but we actually try to act as if we are, and for the benefit of us all. Slide I put in there in terms of internet messaging services, just a shameful plug here. There’s a lot of data, you know, that we collect, probes that we have installed all over the place, you know, that give, and via RIPEstat, you know, give people a nice interface to get more insight. And also in terms of routing, there’s a routing information service where we have like three, 23 globally distributed collectors at internet exchange points that collect the routing data and give people insights, you know, and the data has been collected since 1999. So there’s a lot of stuff, especially, you know, for researchers and academics that might be useful there. So that was my introduction, a bit longer than I hoped for, but I hope this made sense. I tried to make it not too technical, and then I would like to hand over to Verena from the OECD. Thank you.

Verena Weber:
Thank you, Bastian. And we’re just trying to sort out the technical issues. So Bastian, could you log in on Zoom to share the presentation, so it seems, so that our remote participants can see the slides as well. Meanwhile, I’ll start. So good morning, good afternoon, good evening, everyone. My name is Verena Weber. I’m working for the OECD where I’m heading the communication infrastructure and services policy unit. So for those of you who don’t know the OECD, so we are an international organization that is composed of currently 38 member countries. We have a further six countries that are currently in the accession process, and our membership spans from the Americas, Asia, and Europe. So the idea of the OECD is really to write a forum for member countries to exchange best practices and advise on public policies. And we do like, as the organization, the entire organization covers a huge range of issues from trade to education and digital policies, which is where my team sits. So, and like, we have one working party that is dealing with telecommunication issues, which has the same name. So we’re a working party on communication, infrastructure, and services policies. So basically we have a program of working budget where our member countries tell us those are the key issues we would like to work on with you guys in the next two years. And you’ll see that security was one of those priorities. We do the broadband statistics for the OECD. So if you go to the OECD broadband portal, you’ll find all our statistics on broadband for our member countries. And as I mentioned, like we had quite an important work stream with our assistant working party on security in a digital economy, where we looked at how we can secure communication networks. So this was a series of three reports. We had one more general report looking at the main trends, how communication networks will evolve and what does that mean in terms of security implications. We had one more specific report on the DNS and we had a third one, which is the one that I want to present today, which is on routing security. And I would like to acknowledge my colleague, Lauren Crean, who you can see now on the screen. Hi, Lauren. So she was instrumental to the report. So let’s dive right in. So you could think, okay, it’s quite strange that actually the OECD is looking into routing security. So why is that? And so our members wanted to know more about the issues that Bastian already presented around routing security. So basically, what’s the problem? What are the scope and scales of routing incidents that we’re facing today? So that was one important point that we tried to address. Then obviously, if we all agree, okay, there are incidents when it comes to routing security. The next question is, okay, how can we mitigate that? So what security techniques have been proposed are available. Bastian mentioned some of those and how effective are they? And then of course, one important point, and this is the one I’ll focus on during this presentation is what is the role of policymakers, right? So what should be their role in this multi-stakeholder community in securing the routing system? And I think like one conclusion from Bastian’s presentation is that, well, routing vulnerabilities have been understood for many years now, but they persist, right? He already went into the fact why that’s the case. Bastian, could we move to the next slide, please? We’re still figuring out tag issues. Perfect. So what are the challenges we see? And there is a great overlap with Bastian, which is good news because otherwise, I think we should start to get word on this panel. So first of all, I mean, the internet is a network of networks. So that you mentioned, collective action is needed. So that means that basically that one actor’s actions depend on the actions of the other actor in the system, but this is also why we’re all here to have a multi-stakeholder approach to discuss these issues. The next issue that I think Bastian has mentioned so far as well, actually that costs money, right? The implementation costs a bit of money, but if you are implementing routing security techniques, you’re not directly benefiting from that, right? And you still have a problem if there are other actors in the ecosystem that don’t do so. So that’s the second issue. And then obviously, there are now like a set of different solutions out there to make routing more secure. But I mean, basically companies need quite a layered approach to secure their routing efforts. So which can also increase the risk of mistakes and misconfigurations. And so there’s not one thing at the moment that actors can do to fix the problem once and for all. So this is the background we’re facing. Next slide, please. So we looked a bit at what countries are doing in the OECD. And what we do see is that our countries are becoming more interested in routing security. I mean, this is not surprising given that more of our lives are digitally being transformed. I mean, all of our economies are going digital. So the internet is increasingly seen as a critical infrastructure that we need to protect. So on the slide, you see just a couple of examples. So for example, the FCC launched an inquiry in February of 2022 about internet routing vulnerabilities and followed up on this notice of inquiry together with CISA. They hold a workshop in August of this year, published a blog post outlining recent actions. And one of them includes basically the federal government’s BGP security practices, basically meaning cleaning up a bit their routing techniques, including RPKI. Then we have Sweden. So Sweden and the regulator of Sweden, PTS, they undertook quite an extensive monitoring of BGP vulnerabilities. So they looked at, you know, how well are their companies doing? And basically what they found in this exercise, which took them a few years, is that like broadly speaking, it’s fine, but they had some recommendations for certain actors to improve. And the third action I would like to mention is the one by ENISA, which is the European Network and Information Security Agency, which published a report on seven steps to shore up the border gateway protocol. If we go to the next slide, please. So now, you know, if we take a step back and say, okay, you know, what should and could our governments do? So we identified four key pillars in the report. And one important point I would like to make here that this is really not about measures that one place undue regulatory burden on operators. So this is certainly not what we intend to do, nor to centralize the control of the routing system, right? So our four recommendations for policy actions that we identified is like one, we need to get better in the measurements of routing incidents and the collection of time series data. So just during the period where we basically were actively working on the report, we found that some data collection has been discontinued. We found that data collection is heavily dependent on really interested individuals in the community, right? This is individuals that have been doing this for their entire lives for a very long time that are really passionate about this. But for example, we found one person who changed jobs and then suddenly, you know, we have a problem, right? So this is something that’s not ideal. What we also see, and so we are showing different measurement efforts in the report on routing incidents is that, you know, they vary quite a bit in terms of results. So basically we had to explain policymaker, okay, this is the available, but yes, it might not always be consistent. And yes, there are different measurement approaches. So really like one big action for policymakers would be to really fund and ensure, you know, continuous measurement of routing incidents really, and, you know, to build up a time series that we can work with. Now, the second important area is that obviously governments could lead by example by implementing routing good practices and promoting the deployment of available techniques, especially obviously when it comes to government-owned IP addresses and autonomous systems. So, and even, you know, what I mentioned, all these techniques are currently like a bit incomplete, but because none of the techniques fully addresses the issue. I mean, they offer a lot of protection against routing incidents. Now, my third point here is that governments obviously have an important role in information sharing between different stakeholders through, for example, formalized feedback groups. So we could also think about, you know, using established systems, such as the certs that we have across many OECD members to basically, so use them to enhance information sharing. And finally, governments could also define a common framework with industry on how to improve routing security. And, you know, there is a big, there are a lot of different options on how to do this. So they range from formalized partnerships to regulatory monitoring of implemented techniques to voluntary guidelines, or finally, you know, and that’s like the strongest step to more defined secondary legislation. So on the next slide, I have a couple of examples. That I would like to share with you. So the United States has been doing great in promoting the measurement and collection of time series data. So this is through the NIST RPKI monitor that tracks the global implementation of RPKI. And then of course, you know, we know that the technical community, like including RIPE NCC and APNIC provide very useful data, but we can see that, you know, in some cases it makes sense that a government complements and supports that data collection effort. Now, when it comes to leading by example, so we have one very successful case in the Netherlands, and we will hear more from Annemieke in a minute. So this is why I won’t go into further details. I did mention like the US, we have the National Cybersecurity Strategy that commits the government to implement good routing practices and security in its own IP space, which is basically one of the OECD recommendations we have. Australia is getting more active. So through the Australian Cybersecurity Center, so they have guidelines for gateways that provide information and recommended action to improve security. And they also provide information on BGP route security and namely RPKI implementation. Then in our host country, Japan, we have the Ministry of Internal Affairs and Communications, the MIC, that sets standards for safety and reliability of information and telecommunication networks that propose further information sharing among operators, especially during security incidents to one, determine the cause of the incidents and two, consider appropriate countermeasures. And finally, when it comes to defining a common framework with industry, we have a couple of countries such as Brazil and the United States that have quite a good multi-stakeholder collaboration with industry and other stakeholders. So the Japanese guidelines that I just mentioned are an example of voluntary guidelines, but then we also have more legal frameworks. So for example, Switzerland has broad general guidelines for communication services providers that aim to establish a minimum level of security of communication infrastructure and services. And Finland, zooming into BGP, has basically legislation that stipulates to uphold basic security of the BGP. So you can see like they range from like a pure consultation cooperation with different stakeholder groups to legal requirements. So that’s the range of measures that we’re seeing at the moment. And if we move to the next slide, please. So the main takeaways of this presentation. So we all know that routing vulnerabilities are happening. Not all have severe effects, but some can have them. And they can affect the availability, integrity and confidentiality of communication services. And this is something we don’t want to happen. Only what gets measured gets improved. So at the OECD, we’re quite evidence-based driven. So we really need better data on routing incidents. We do see several ongoing efforts to improve routing security, but no single technique at the moment meets all of the challenges. And then finally, governments have an increased interest in routing security. And so we propose several actions in the report to really improve overall routing security. Thank you very much.

Bastiaan Goslings:
Yeah, thank you very much, Evelina. Very insightful. And I’m very glad that you, from that perspective, could share this with us.

Katsuyasu Toyama:
Next is Katsuyasu Toyama from JPNAP and APIX. Probably more technical perspective. Yeah, thank you very much. My name is Katsuyasu Toyama. Yeah, I’m from the operation community. So operating the JPNAP Internet Exchange in Japan and also a chairperson of an APIX Association of Internet Exchanges in Asia-Pacific region. So today, from this standpoint, I’d like to show you about an Asia or world situation. Okay, please, next. So I well remembered approximately six years ago. So we had a big failure that has caused the big tech. Oh, this is like Google, maybe you remember. Yeah, they leaked the peer traffic to upstream provider. Please, next. Yeah, so they leaked the prefixes and then the traffic is rerouted to the worst one. So at the time, the connection, sorry, the communication with the content and the eyeball is in the loss or delay. So the degree of the quality of the communication. Okay, please. Yeah, so these kind of a misoperation, but also the hijacking is often frequently. Yes, please. So, but as Bastien mentioned, the routing insecurity is another long time, very important things. So network operators have been trying to secure our Internet for a long time. So at first, the route filling with an IRR, yeah, that is the routing information, which is not authorized or certified, but we use the data for a long time. But. that was sometimes not up-to-date, obsolete. Yeah, so sometimes to use that data. So the 2010s, yeah, so RPKI started. So now we are moving to the RPKI. So please go next. So how widely RPKI is deployed? Please state. Oh, ROA and ROV already mentioned, so please. So this is the data from APNIC Labs, which are published on the web. And the summarized, according to the regions, the basically the RIR region. So the Africa, North America, Asia and Oceania, Latin America, Europe and Middle East. And each regions, they have some kind of ratio of deployed the ROA. So the green part, you can see, that is a range or ratio of ROA enabled. So as you could see, the Europe and the Middle East, the many space, approximately 70% are already covered in the ROA. But in Africa or area, that is the North America region, there’s still less than 30%. Yeah, so according to the regions, the deployment or penetration of the RPKI, especially the ROA part is not different. Okay, so please go to the next slide. Yeah, so this is also the same, the comparing from the route object. Okay, so you could see some ROA invalid. I think this is not on the hijacking, but I think the misconfiguration of an misregistration of an ROA and such kind of things. Okay, but still, yeah, Europe is very widely covered by the ROA. Okay, go please. So why the networks have registered or deployed ROA? I think, I believe that some global tier one providers and also the big techs are recommended to register ROA. And sometimes they’re saying to the eyeball networks, if you do not register ROA, in a future, we will reject your routes. Yeah, so like in the Japanese, for example, the Japanese operators, they are all fear to lose the connection. Yeah, and cannot access to the such and the famous and the popular services. So they gradually started to deploy and register the ROAs. I think that is a risk. Yeah, so if they do not register the ROA, maybe they will lose the customers. And that means that they lose the money. Okay, so next please. Okay, so as I mentioned, I am conducting the APIX and there I asked and did a survey about ROA and RPKI kind of things. Okay, so why ROA is used or not used in your country or economy? And we got not many, but a few replies. So in Bangladesh, as you could see, in Bangladesh, ROA becomes approximately 90%. Okay, so they said that did no big challenge and networks, they’re doing that by themselves and it becomes a normal. It’s a great thing, I think. In a Singapore case, the government recommended, government regularly recommended a few years ago, but that is not regulated, only the recommendation. And that becomes an approximate 60%. Okay, and the Thailand case, oh, they also the 43%, yeah. And, oh, the obstacles or, yeah, what is in the, yeah, prohibited or not allowing to do the ROA is sometimes they are saying, if you look at Singapore’s answer, oh, there, some ISPs operators do not have a necessary knowledge or skillset. Okay, and also in Thailand, they are saying the same kind of things. So operational level, they should learn more and make convinced, yeah, doing the RPKI things. Yeah, oh, I think that the management level of the Thailand, oh, they are allowed to do that, but the engineers have not much knowledge or skill about it. Okay, so these are the operators’ reactions. So please go to the next. So then, oh, how about ROV? So as far as I know, not so many networks deployed on ROV. Yeah, oh, this is a feedback from such operators in HPEC. And, oh, I asked two IXP friends in the HPEC region and Bangladesh guys replied, oh, they are not deployed ROV in their internet exchange. But the person says they are in the deploying phase and maybe deployed by the end of this year. And Singapore already, I guess, this is in the SCIS and Thailand case, the mechanics, they are deploying the ROV. So, oh, yeah, some of the internet exchange are doing the ROV on their route servers. Yeah, but then also they are saying that sometimes the knowledge is not so enough to do that. And especially as, yeah, some kind of a peer to lose some kind of that valid route. So please go to the next. So this is a feedback from Japanese operators. Yeah, so why they do not deploy ROV? Yeah, because sometimes they appear about the invalid route or mistakenly judged, that is very dangerous. Yeah, so that is one of the reasons. And the other reasons are still the software engineer need it because RPKI softwares are basically open source and not appliance provided. So need more software engineers. And also the not many network engineer itself is not so many. For example, then small ISP or cable TP operators, they do not have enough engineers. Only one engineer operator, that is not a rare case. So in that case, they are very busy. So not too time to learn over the RPKI. Okay, so please go to the next. So what can ISP do for this case? So of course in the ROV at an internet exchange, this is a typical case. We are doing the ROV for a long time. And also the invalid routes are not announced to the peer. So we have discarded that. Okay, so this is, of course, and as I mentioned, several internet exchanges in APAC region are doing this kind of the ROV. This will reduce the burden of our networks. Okay, so this is another good thing. And, okay, so please go on to the next. And not only that, we are doing the experimental project to facilitate ROV. Yeah, as I mentioned, some networks, some operators says they do not have enough software engineers to deploy it and several kind of software. So some of them say that, oh, internet exchange people, please operate. Please do the service about an ROA cache servers. The ROA cache servers is left as open software. And it is sometimes very difficult to operate. So in Japan, we are now trying to challenge to provide ROA cache servers at the IXPs, and which can be used by IXP users. Okay, yeah, but there is some kind of difficulties because the ROA cache should be operated in one. It’s, yeah, so the communication channel between the routers and the ROA cache not encrypted in general. And of course, there are some options to encrypt and on top of it to exchange some kind of information. But still, the part, we think that no good standard, not good implementation, it’s not, we don’t have that. Okay, so that is a concern. Okay, please go next. So as a conclusion of my talk, I would like to suggest that for to deploy the ROA, so some organization in a country should recommend that. Now, I like the approach to industry by doing them by ourselves, but sometimes the cost issue or the engineers are not so many, so need some justification. And higher level or easy or persuaded, if there is some kind of a standard or recommendation of a country level, that is easy to do that. So NIR or regulator, government, maybe you can do that kind of a recommendation. That is one of the good things, I think. And of course, the RPKI solicitation and implementation should be updated. Yeah, as I mentioned, there are some lack part or that less part, so that should be implemented. And of course, the global routing security, that is a long and winding road, as you know. So the first case I told about the root leak, that should be needed as an ASPAS validation. Yeah, so that is not the next or next step. Yeah, but then we have to do a lot of things, but we should go for that goal. Okay, thank you very much.

Bastiaan Goslings:
Thank you, thank you very much. I think very insightful because like practical experiences, what operators are doing and what you can see at your internet exchange. So I think it’s very much adds, it gives the perspective on what we’re talking about here, the evidence-based approach, so to speak. I suggest any questions, comments, let’s do them after the last presentation from Annemiek, who will be speaking on behalf of the Dutch Forum for Standardization of Thinking. Floor is yours. I thought I’d put it on.

Katsuyasu Toyama:
Thank you very much, both of you, all of you.

Annemiek Toersen:
Thank you for the compliments, Verena, for the whole thing and the backgrounds, Katsuyasu. My name is Annemiek Toersen from, yeah, you could call it the Dutch, but we have to say Netherlands Standardization Forum, but it doesn’t make sense. So if you put the next screen on, sheet on, then you can follow everything. What is the Netherlands Standardization Forum? It’s a think tank with about 25 members and focused on interoperability and advises the Dutch government as a whole. And those members are on personal title involved in this forum, and they have a background in the government, but also in businesses and science. And the focus, the main focus of the forum is a list with mandatory open standards. This is our core business is focused on the mandatory of the open standards we were talking about earlier. And the scope of this list is only for public sector organizations. Of course, private, it’s not, can also use it. Can also use it. There will be a nice, next slide, please. But what are, why are we using those open standards? Well, as you all might know, because you’re joining this workshop, the open standards are for interoperability, exchanging data safely and trustworthy. Security, in order to be trustworthy to the society. It should be accessible for everyone. 25% of our population in the Netherlands are not able to watch internet or they have no access to it. So we should realize that. And of course, vendor neutrality. So we shouldn’t be dependent on vendors. For open standards is very important in your services. Next slide, please. An adoption of strategy internet security standards, we have three levels or three points, three items, which I here have a slide of. First of all, we focus on the obligations. I already told you of the mandatory. We do that with a comply or explain lists. So, well, I’ll come back to that later on. And that is, of course, comply or explain for new investments. Furthermore, we have public commitments with implementation deadlines. So later on, I will show you also what that means, especially RPKI is one of those. And we have obligations by mandatory by law. So lately, July the 1st, we had in the Dutch government approved a law for HTTPS, for instance, and ASTS. An open security standard. Furthermore, we have second one is monitoring and third cooperating. I will go first deeper in the obligations. I already told you about the comply or explain list, but the list is for about 40 standards and all those open standards, 15 of them are security standards. And what we do on the list is we have experts gathered to collect it in order to evaluate those standards and the criteria are mentioned here. You have to go to the next slide, please. Okay. Okay, let’s see if you go back to the former sheet, please. Yeah. And then we have the security standards. If the adoption strategy, number two and the monitoring, I go, yeah, sorry that I mixed up, but I had different slides deck, but that’s not a problem. I go from the sheets. The mandatory, we had by law. The second was cooperation. So that means we cooperate a lot with public and private companies. We have contact with vendors. An example is that we have letters written to Microsoft in order to implement Dane. Not only we, yeah, due to our fact we wrote letters, other countries followed like European countries. And therefore the coming spring, they announced that they will implement Dane, well, next year, 2024. And we exchange a lot of knowledge. And that’s nice because, yeah, then we promote adoption in that way. Monitoring, the last one is that we used the tooling of internet.nl. We monitor, apart from that, we review vendors. So if we procure ICT service in the government, you should ask for open standards. If you don’t do that, then you have a reason to explain in your annual report in order to explain, for instance, it’s too much expenses. Could be a severe reason. If not, then you have to use them. The measurements will be published twice a year and offered to the cabinet. So if I can have next slide, please. So, oh yes, you need, if you have only one company or one organization using open standards, then it doesn’t work effectively. You better have, you can only have advantage of it if there are more organizations using open standards. Therefore we call it a critical mass needed. And another thing is that end users don’t know any, and can’t verify. So you need more transparency and awareness is needed. So the information asymmetry is necessary. If we can have the next one. This one I recognize, sorry for that. I apologize, but this is okay. I was talking about criteria. The most important is the openness, added value, market support, and proportionality. And apart from that, open standards do also have different kinds of categories. For instance, internet and security standards. RPKI is one of them, but we also have document and web in the e-invoicing and administration. Accessibility, for instance, the WCAG is also a famous one. And when governments invest or buy such, they must choose for the relevant standards on the list. Otherwise they should have a severe reason to explain in the annual report, as I just mentioned. If you’ll go to the next one, please. I already talked about the internet security standards. Here you see a couple of them. In total, there are about 15. The most, we mostly recognize HTTPS. I already mentioned that there is a mandatory for it. Now, well, of course, here we are for RPKI, but there are more. Next sheet, please. The second, we cooperate. We cooperate, let me see, because my slides are different. Cooperation, including contacts with vendors. So I already mentioned that we, for instance, have contacts with large suppliers. Here you have vendors and hosters, like Cisco, Microsoft, OpenExchange, and Google. Akamai is also, well, you can read yourself here. But we also do international contacts. Like last week, we were represented in the Michieux, workshops on the modern email security standards with European governments. And we reused the internet.nl code. And other countries take that notice, just like Australia and Brazil and Denmark. So they use internet.nl for their measurements. That’s very nice that we can inspire other countries So if you are interested, actually, in also using the internet.nl, please send us a mail and then we can help you in the future. Next sheet, please. We also mentioned monitoring, so measuring. There are two things on the procurement. If you, once a year, we take, go through all the tenders which are done in the Netherlands and we review them. And during the review, we see what’s happening and we see how the growth of using internet security standards are growing and other open standards as well. And we offer this report to the cabinet. So governments will be spoken of. Yeah, they will call, well, they are announced. They will see how they do well or not well. So therefore, the next is that we, the second part is that we measure by internet.nl. We do that twice a year, but that is specific on the internet security standards. Can we go to the next sheet, please? Okay, here you see internet.nl, how it works. It’s actually very easy. You put your URL in it or your email and you find out in one sheet what you’re doing. We also have a Hall of Fame. So if people have 100% score, they can have a special t-shirt from us. And it’s a collector’s item actually, but what we do is more naming than shaming and that works out very well, quite well. Next sheet, please. Yeah, that’s so slow. And if you don’t ask it, you don’t get it. Yeah, well, anyway. Next sheet, please. Okay, if there are any questions in that way, I would also mention that the reason we have RPKI on our list of open standards is that the Ministry of Foreign Affairs was hijacked. We are one of the examples of, unfortunately, Katsuyasu mentioned in his story. We were hijacked and that was a big problem because in 2014, November, this journalists found out and we were in the newspapers in the Netherlands. Later on in 2015, it resulted in parliamentary questions, unfortunately. So it could be worse actually, but due to the RPKI in future using, we can prevent this disaster. It was accidentally found out actually because the Netherlands, the NCST in Holland, submitted RPKI to be continued in the complier explain list. Due to the hijack, they submitted it to us in 2019. Unfortunately, we could implement it in 2022 in the internet.nl measure tooling. So therefore we now check also all governments using RPKI. And well, that means that we have a good site of RPKI in future among the governments and that will be nice. But yeah, if there are any further questions about it, I would love to answer them. Thank you very much. And excuse us for the wrong presentation.

Bastiaan Goslings:
Yeah, thank you. Thank you very much, Annemieke. And yeah, I also feel like somewhat uncomfortable and apologies. I think you did really well despite the fact that the latest version of the presentation somehow did not end up in the slide pack, but I think the message came across very, very clearly. You did really well. So I wanna use the remaining time we have according to plan, 25 minutes to open up the floor for anyone who would like to contribute here, ask questions, ideas, comments. Let me first check if there’s online anyone who would… No? Okay, thank you. In that case, in the room, is there anyone who would like to… Gentleman here, Olaf, please go ahead.

Olaf Kolkman:
Yeah, Olaf Kolkman, Internet Society. I would be amiss if I wouldn’t be talking about what I’m going to talk about. I strongly align, very strongly align with everything that the panel said. Routing security is a top priority if we want to protect the core of the internet infrastructure. The routing space… I’m big on that screen. The routing space needs protecting. And Lauren said it, and you all actually all said it, it’s a common action problem. And that common action problem comes with a lack of visibility. It’s very difficult to see whether a participant in the routing system deploys routing security measures and make that visible, and thereby create a little bit more value in the market. And when thinking about this, and this has been a discussion within the technical community for already a couple of years, I think five or six now, the community came up with a set of norms called the Mutually Agreed Norms on Routing Security. And basically these are a number of measures that participants in the routing system agree to take. They’re different, we have different programs, we have programs for ISPs, we have programs for CDNs, for Content Distribution Networks, we have programs for internet exchange points and for vendors, and there are some different requirements there. And with this program, we try to get visibility in sort of general terms for people to understand whether people are good players in the routing space. We also want to see whether that has impact. So we have an observatory called the MANRS Observatory in which we track incidents, but also how does the community adopt and adapt to certain technologies. And yes, the incidents come from data sets that may or not be all trustworthy. And by the way, not all incidents are actually caused by malice. What more do I want to say? Yes, in taking that other step about creating value, the community is now looking at what we call MANRS+, it’s the working title, whereby we are trying to identify stronger controls than the ones that are now in the MANRS program that can actually be audited. And so with an audit scheme, you can also imply a certification scheme. And with a certification scheme, you might create a higher value. If you are certified, you have probably a higher value in the market. And we hope that by making the consciousness of routing security more visible, we hope that that also creates the value for the participants when they sell their goods and their connectivity services. So that’s what I wanted to add, because I think the MANRS community, which we host as Internet Society, the MANRS community is actually trying to forward the incentives. And I know JPNexus, GeneXus is a member. Thank you.

Bastiaan Goslings:
Thank you for that, Olaf. Interesting. I was very much aware of MANRS, so you had the opportunity to plug that. I think it’s a very, very important initiative in taking it to the next level, right? MANRS+, I think it’s good. It’s been around for quite a while. I’d assume, right? And it’s a good thing that those entities, organizations, companies that join MANRS, right? And I think you have like different programs, right? ISPs, Internet Exchange Points, CDNs. You might even have more by now. Those are the ones. And again, that’s a good thing that actually want to commit to this, right? And want to live up to the spirit of it and also the practicalities of what you then need to comply with. But what about the rest? You know, that is not there. So I do hope that the members, I don’t know if they’re members, but like the participants also go out to take out the message, you know, towards their respective communities. And I had the slide here, you know, with some of the factors, limiting adoption of routing security and whether, especially when it comes to those that operate networks and the technicalities of using these type of tools, as well as potentially, you know, costs involved to implement it. The projects can be quite significant, especially if you have a large countries, you know, spending network with a lot of equipment, et cetera. So I do hope, you know, that the MANRS participants also help to spread the gospel and to work with other networks who not yet are on board to convince them, hey, you know, it’s not that complex or it’s not that expensive or I can help you do this or that.

Olaf Kolkman:
Yeah, I think it is a community of ambassadors, so to speak. We actually have MANRS ambassadors that try to do that. And mind you, that doesn’t exclude the things that internet.nl does. And for instance, the procurement approach that is being taken. Those are all kinds of additional things that help boost routing security.

Bastiaan Goslings:
And that’s why we’re in the game. I fully agree. No, thanks for that. Sorry, is there someone online who wants to? Yeah. You have to do that first and then Professor Muller.

Moderator:
It’s more of a comment from Benjamin Bruceman. And the question actually, the domain name registry of the Netherlands sidn.nl has an incentive program to give discounts if the domain owner uses some open standards, for example, DNSSEC, Dane, et cetera, to improve adoption. And the question is, did RIPE NCC look into giving discounts to IP space owners? Thank you, Benjamin. I saw that one there coming. I know it’s a very fair question.

Bastiaan Goslings:
And I think, you know, sidn, the Dutch CCCLD operator for .nl, yeah, did really great work there and it had an impact. I think it’s quite a low margin business being a registrar and most of those organizations do hosting and other stuff as well. But there’s not a lot of money there to be made. So any discount they can get, you know, and if it’s relatively trivial to implement the NSSEC, then they’ll go for it. And I think the situation for the RIPE NCC is somewhat different. SIDN is not a member-based organization like us. So the management can more easily decide, you know, let’s do this. Combined with the fact that .nl actually receives part of the fee that registrants pay registrars. So per .nl domain name, part of it goes to SIDN. So they have like room to give a discount. For the RIPE NCC, we’re a member-based organization and we don’t charge based on the resources that our members receive or use or whatever they do with them. Everyone pays a straight membership fee. So whether you are a small host or a very big international, one of the big tech companies, everyone pays the same. But this might be an idea and I’m thinking out loud now that that would have to be decided by the members themselves, right? They’d set the membership fee, whether that would be an interesting thing to consider in order to help move this forward. But it’s not up to the RIPE NCC and in this case itself to decide upon that, but it’s a fair question. So thanks for that, Benjamin.

Annemiek Toersen:
Thank you very much. I can also thank RIPE for, yeah, government, Dutch government institutions, because RIPE sponsored courses for RPKI in order to adopt this open standard. So therefore you can sponsor also in a way, not give a discount, but also sponsor by giving courses about RPKI. That might be a suggestion for other environments and other countries, other governments. Thank you.

Bastiaan Goslings:
I’m very happy to take that on board and maybe even more happy to say that that’s actually what we’re already doing. I mentioned it briefly, we do give these, I don’t think it’s scalable to give them away for free, like the face-to-face ones, right? Like you have an actual trainer that travels to go somewhere and spends a number of days with people and has a backup, et cetera. Like we’re a nonprofit organization, so this is not a moneymaker for us anyway. And it’s more about, it’s important to spread the message and to help these people to learn of these technologies and to actually use them. We have also for BGP security and RPKI, free online trainings. So they’re available to anyone. So if you, for whatever reason, I can understand that, right? Like in terms of budget, it’s a challenge to actually come to travel or to go to an actual training. Then the online courses are free of charge available. So that, I think at least as a first step would be good for people to be aware of. So if you guys are not aware of them, I think we need to do a bit more marketing in terms of spreading the message that this is available. And on the other hand, and I think it’s a good example of what we did with the Dutch public officials, right? People from the Dutch government and agencies, et cetera, that we are more than happy to have like a dialogue and to see like what in a tailor-made form we can provide in terms of training. And then maybe also including a discount, no problem. Depending also on the amount of people and the impact that we potentially can have. So, and anyone that has questions about that, feel free to contact me, to come to me and to see what we can do here. Thanks for the suggestion. Sorry, go ahead.

Audience:
So, we’ve been studying the web PKI and one of the big actions that facilitated the expansion of PKI with web servers is the automation and the creation of let’s encrypt which offered free certificates based on this Acme, which later became the Acme standard of automating is, is it possible for some kind of automation to happen in the RPKI space or is it so different that you can’t use that model. I’m not aware of the technicalities of the example like or the analogy that you make with regard to PKI. From what I’ve seen, like, this is something you know we’re not going to automate in terms of, we are not going to create a row as you know to the sign statements for the resource holders, that’s something they need to do themselves but like seeing the way that this actually works within the portal it’s so trivial is maybe too big of a word you need to know what you’re doing but for the people you know that you’re speaking very fast, could you slow down and speak a little louder so people actually understand what you’re saying.

Bastiaan Goslings:
Yeah. Okay, well I’m sorry for that. The way you know that our members, create these statements that were within the portal, it is so easy. Like so that should not be an impediment for them to actually do it this is not something that we can automate and do for them automatically this, it is something that needs to be triggered by the resource holder him or herself to actually create create these statements. I don’t know if that answers your question or not because that will be my initial response. Well I guess, what is the impediment then. Well that’s, I think what we were what we’re discussing here right well what’s the reason for someone not to do it, either they perceive it to be too technically complex or maybe on the validation parts, using the tools, it’s too expensive to you know to configure routers or other equipment they need to get for this accordingly. That’s what they automated in the web API they thought it was too complicated to manage certificates so they created the acne protocol, I just don’t know whether the model is applicable at all but you need to make the choice and then then the tools of course support the choice that you make in such a way. And that part is the tools are sufficiently mature and the way that we do it via the portal, but also the validating software, etc. I don’t, I don’t think. And I’m not a network engineer but imagine and also talking to network engineers that should not be the challenge in itself right if you can run a network you can do this in itself that’s not a technical challenge, but they need to make the choice themselves and do it, probably that’s more of a challenge management approve them to implement this. I have some comments and questions from zoom.

Moderator:
First comment from Benjamin Bruce mark for information manners, the mutually agreed norms for routes and security is currently in procedure to be decided to be put on the Dutch comply or explain list. I have quite a few questions. I’ll ask them one by one, please keep me on the list. First question from Bart Knuben, could we get to a point that RPKI is the default. For example, that networks do not accept routes that are not covered by RPKI roles. Shall I read everything.

Bastiaan Goslings:
I don’t know if I may ask my neighbor here because he thought that was an interesting example you refer to tier one operators and others. I think large content providers, demanding from customers you know that they have their resources assigned so maybe you can answer to that question.

Katsuyasu Toyama:
Yep. So, oh, from the operator sides. Oh. enforcement is in a good way, but usually. So such kind of some kind of penalty is now driving that deployment of the RPKI, but it’s still the people are operators are anxious about So at this point of time, the ROA and ROV is only for certificate and validator origin. So yeah, ASPATH validation is the two or more steps forward. So, not in the perfect solution. So, kind of the enforcement is necessary but then there’s still that is on the way. So, oh, some operators are skeptical to pay the money on that routing securities. Yeah, still. Yeah. So, that is I think the problem. Thank you.

Moderator:
I have another question from Lauren Crayon addressed to forum standardization. Could you provide further details regarding the tracking of governments you mentioned on internet.nl. Will this be tracking ROA creation and or ROV? Thanks.

Bastiaan Goslings:
Hello. Yeah. The last part I couldn’t understand from you. Sorry. Can you repeat that for me. You were mentioning tracking of governments on internet.nl. Would you track ROAs? ROAs? What is it? No, we don’t check ROAs. Sorry, can I interrupt? This is pretty technical about the ROA or the ROV indeed. On the internet.nl, we only check the ROA, so the certificates at the moment.

Audience:
To actually check the ROV is more complicated. It could be done, but we would need separate ISP space to that actually has an invalid route to do the check. Currently, we don’t do that and we use the ethnic data to report this data back to the government or yearly reporting.

Bastiaan Goslings:
Okay, thank you very much.

Audience:
I have another question from Mark Knubben. On its EU Internet Standards Deployment Monitoring website, the EU is also monitoring the adoption rate of modern internet standards like RPKI and manners. What could or should the EU do more? What could or should we do more about measuring or monitoring? Yeah, monitoring the adoption rate of modern internet standards. Well, what we do more is connect governments in order to do so. We inform them, we give workshops, but I’m not sure what he wants to know.

Annemiek Toersen:
We just measure, we offer it, we inform. I don’t know exactly what he really wants to know because, sorry, I can’t answer the question.

Bastiaan Goslings:
All right. Can you repeat it again?

Moderator:
Yeah, sure. So, the EU is monitoring. So, there is a paper EU Internet Standards Deployment Monitoring website, where the EU is monitoring the adoption rates of modern internet standards like RPKI and manners. The question is, what could or should the EU do more?

Annemiek Toersen:
EU, so this question for everyone, I guess. Or not.

Bastiaan Goslings:
Sorry. Yeah, I mean, it’s hard for me to speak on behalf of the EU and I would get in trouble for doing that.

Verena Weber:
So, basically, I mean, I think you mentioned one point, right, like information sharing is good. I mean, I think if you could present what you guys are doing, you know, and have this more widely adopted. I mean, that is probably like another issue, but I don’t know the site well enough to basically say, you know, okay, how well is it working? How many governments are implementing it? So, from our report, I know that, you know, some governments are really quite active, but there are quite a few that are not, right? So, I think, you know, like training, raising awareness and stuff might be also an issue on the EU level, but again, you know, I don’t want to speak for the EU.

Annemiek Toersen:
We inspire two other countries. You already mentioned that, it was Australia and Brazil, and Denmark also uses the English version of internet.nl, so therefore we exchange our knowledge about that and we inspire them. Also, if there are any other countries also here available, if they want to help for that, we can answer, we can help you in assisting using the code. So, the English version of internet.nl is available. So, if anyone needs that here, we would like to know and help you.

Moderator:
Thank you. Rรผdiger Vogt has his hand up. He also wrote in the chat the question, but maybe I can ask the technical desk if they can unmute him, then he can ask it himself. If that’s possible. Okay. Okay, I can just read it then. My question is to Annemieke. Which RPK standards are you listing in your advice? Are you including advice on standards that still wait for implementation, or would you need to still fill gaps? I don’t understand.

Annemiek Toersen:
That is positive, then it comes to the list, comply or explain. And if not, yeah, then it can be to another list which we call recommended lists. But Olaf likes to have an answer for that.

Olaf Kolkman:
While I’m not involved with the process, I do understand your question. I think your question is which specific RFCs were input to this process. I’m not quite sure if you noticed, Annemieke, but I’m sure that Bart or somebody else in the office would be able to answer that. And I’d be happy to forward that to Rรผdiger, but Bastian knows him too. I think Rรผdiger in the meantime is unmuted.

Bastiaan Goslings:
I don’t see the red microphone. Rรผdiger, can you speak? Yeah, if you can hear me.

Audience:
Yes, well, okay, kind of Annemieke, as Olaf was telling, there are quite a number of RFCs that define RPKI-based standards. And so far, I only have been hearing about use of establishment of ROAS and the use of origin validation. I think Bastian was mentioning the upcoming ASPA, which will be another object in the RPKI. And in fact, the RPKI design right from the beginning was targeting something that has been defined for quite a number of years as full standards BGP-SEC, which is not yet implemented. And which actually needs significant action and resources for getting implementation and deployment. And people are usually not talking about it, which is a problem.

Bastiaan Goslings:
So I’m done.

Annemiek Toersen:
Okay, thank you very much for your question or remark. Most probably my colleague Benjamin could talk more about that.

Audience:
Yes, so I put the reference documentation in the chat. Currently, our RPKI, which went into procedure has like one RFC attached to it, which is 6,380, but also lists three other RFCs with recommendations. Regarding the BGP-SEC, we don’t do that yet because it needs to be put in procedure by somebody.

Bastiaan Goslings:
Is that clear for you?

Olaf Kolkman:
Can I ask a qualifying question with this? For the procedure, in order to be accepted, there is the expectation that there’s a reasonable amount of deployment of a particular standard or specification, is it not?

Annemiek Toersen:
That’s correct, Arlof. You need not only one organization using it, but you have to find a companion, fellow organizations in order to have a severe standard and accept it in practice. It should be in practice and it should be supported.

Olaf Kolkman:
I think that answers Rรผdiger’s question because BGP-SEC is not very much deployed at the moment.

Audience:
Okay. Hi. Seems I’m still there. Yes, I’m very much aware that BGP-SEC, in fact, is essentially not implemented. I learned over the past few years that public discussion of making use of RPKI and improving routing security tends to essentially stress the stuff that is essentially really available. In many cases, the advocates of that argue in a way that, yes, what’s available now is kind of solving old world problems and ignoring to work on getting the improvements that are still necessary. Actually, one of the really bad problems is that for the future deployment standards, development work needs serious resources. Serious resources, in particular, if we want to progress security. That’s not happening. The question is, how could we actually work on getting those resources available and in place with the proper people? Thanks.

Annemiek Toersen:
That’s a good question. In the Netherlands, we organized a workshop in cooperation with RIPE. We opened a course and only policymakers of the Dutch government could join these courses. That’s what we did together. Perhaps you have an addition, another possibility.

Bastiaan Goslings:
Sorry, Rรผdiger. I think it’s interesting points you make. Just to confirm, I personally definitely did not want to make the point that we can focus on the tools that are there and that will solve all of our problems. Definitely not. I think when it comes to creating the ROAs and doing the validation, on the other hand, we still have quite a long way to go in terms of adoption, but you’re absolutely making a good point that there’s a lot more that needs to be achieved and that we need to build on. I want to thank everyone here. I’m sorry that’s quite abrupt stopping this now, but the next workshop is going to start soon. People need to prepare for that. I really want to thank everyone for joining. I hope this was interesting. Again, with regard to the topic, if you want to follow up, I’m assuming I speak for all the panelists, right? Come to us and approach us and see how we can together move this forward. I want to thank all of my panelists here. Great for all you guys being here and contributing. I’m really happy. Again, thank you again. Also, the audience for being here and participating also online. Thank you. Thank you.

Annemiek Toersen

Speech speed

144 words per minute

Speech length

2127 words

Speech time

887 secs

Audience

Speech speed

104 words per minute

Speech length

659 words

Speech time

380 secs

Bastiaan Goslings

Speech speed

189 words per minute

Speech length

5211 words

Speech time

1655 secs

Katsuyasu Toyama

Speech speed

145 words per minute

Speech length

1955 words

Speech time

809 secs

Moderator

Speech speed

112 words per minute

Speech length

351 words

Speech time

189 secs

Olaf Kolkman

Speech speed

145 words per minute

Speech length

738 words

Speech time

306 secs

Verena Weber

Speech speed

178 words per minute

Speech length

2536 words

Speech time

854 secs

How to enhance participation and cooperation of CSOs in/with multistakeholder IG forums | IGF 2023 Open Forum #96

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Pavlina Ittelson

The European Commission has launched a new initiative called Civil Society Alliances for Digital Empowerment (CADE), led by the Diplo Foundation and funded by the European Commission. This project aims to enhance civil society participation in international Internet governance (IG) processes, with a particular focus on the Global South. The goal is to address the challenges faced in IG, including the fragmentation of forums, lack of capacity and understanding of human rights impacts, and the requirements of technological development. The initiative seeks to promote a more inclusive approach to Internet governance by involving civil society in a multi-stakeholder manner, allowing for the inclusion of diverse perspectives. By doing so, it can bring attention to issues such as women’s rights, language and culture aspects, and the rights of indigenous groups, which are currently underrepresented in Internet governance forums. The lack of diversity and inclusion within specialist standardization bodies is also highlighted as a concern. Efforts should be made to address these disparities and ensure that a wider range of perspectives are considered in decision-making processes. Capacity building, grassroots participation, and engagement guidance are identified as key areas requiring attention for civil society organizations to effectively contribute to IG processes and advocate for their interests. Partnerships between civil society organizations from the Global North and Global South are encouraged to facilitate knowledge sharing and collaboration for a more equitable and effective approach to Internet governance. Trivial technical solutions are seen as potential remedies for scalability issues within IG, such as replacing challenging anti-bot measures to improve accessibility and user experience. Opportunities for public engagement are found in the environmental sector and youth rights, where public involvement can contribute to progress. While collective input from civil society organizations is valuable, it is important to strike a balance between collective action and the preservation of diverse opinions and perspectives. Ensuring that diverse voices are included is essential for effective decision-making processes. Collaboration between organizations and network building can greatly benefit civil society by amplifying their impact and creating a stronger collective voice. Navigating participation procedures in various international bodies remains a challenge for civil society, but strategic engagement in specific forums can help achieve their goals. Long-term involvement and understanding trends are considered crucial for success. In conclusion, the CADE project aims to empower civil society and promote their active participation in international Internet governance. Addressing the challenges of fragmentation, capacity building, diversity, and inclusion is crucial to achieving a more inclusive and effective approach to Internet governance. Through partnerships, collaboration, and strategic engagement, civil society can play a significant role in shaping the Internet’s future.

Viktor Kapiyo

Civil society organizations from the Global South face various challenges in their work. One major challenge is the difficulty in accessing global processes due to financial barriers. These organizations often lack the necessary resources to participate in international meetings and forums, limiting their ability to have their voices heard on important issues. Additionally, the limited internet reach in the Global South further exacerbates this problem, hindering their ability to engage in online discussions and access relevant information. Furthermore, there are only a few organizations in the region that focus on internet governance, isolating the voices of civil society in these discussions.

However, there is a positive sentiment towards the need for awareness and capacity building among more people in the Global South. The Kenya School of Internet Governance has played a critical role in this regard, having trained nearly 5,000 individuals on internet governance in just six years. The idea is to make internet governance conversations accessible to everyone, acknowledging that anyone with an email is a stakeholder. This approach recognises the unique contexts of local organizations and aims to amplify their voices in global discussions.

Collaborative approaches and coalition-building are also considered crucial in the field of digital rights. By forming partnerships and working together, organizations can address the lack of linkages that previously existed across various digital rights organizations. This collaboration allows for collective problem-solving, knowledge exchange, capacity building, and resource leveraging. By combining the competencies of different organizations, particularly in terms of physical presence and understanding local dynamics, their collective impact is strengthened.

Additionally, partnering with organizations from the Global North can benefit those in the Global South. Global North organizations often have established relationships with policymakers and a better understanding of local dynamics, which facilitates the presentation of views by Global South organizations. Such partnerships also lead to capacity building and knowledge exchange. Global North organizations possess technical resources, such as ICT skills, which Global South organizations can leverage to enhance their work.

Funders also play a crucial role in strengthening civil society organizations. However, disjointed and fragmented funding can create problems for these organizations. Global South organizations often find themselves competing for the same funds to address similar problems, hindering collaboration. Moreover, funders’ goals do not always align with the specific needs of organizations in the Global South. Therefore, it is essential for funders to coordinate their goals and understand the dynamics of Global South organizations to provide effective support.

Building good relationships with legislators and demonstrating expertise is crucial for civil society organizations when trying to influence legislation. It is important to establish these relationships before submitting views and demonstrate the potential value that the organization can bring to the legislative process.

Working collaboratively with other civil society organizations and presenting a united front can add weight to arguments. This approach demonstrates strength in numbers and increases the impact of advocacy efforts.

Finally, being prepared for potential counterarguments and understanding the local context are crucial for civil society organizations. By being well-prepared, organizations can effectively respond to opposing arguments and address the specific concerns and needs of their local communities.

In conclusion, civil society organizations from the Global South face various challenges, including limited access to global processes, internet reach, and organizational capacity. However, there is a positive sentiment towards the need for awareness, capacity building, and collaboration. Partnering with organizations from the Global North, coordinating funders’ goals, building relationships with legislators, working collaboratively, and being prepared are essential strategies for strengthening civil society organizations and making a meaningful impact.

Marlena Wisniak

Stakeholder engagement is considered a vital component of policymaking at all levels. It emphasises the need for collaboration, iteration, and inclusivity, ensuring that stakeholders’ voices are heard and that they have influence over the decision-making process. However, there is a clear power imbalance between stakeholders, hindering inclusivity in policymaking. This asymmetry of power is evident in the unequal playing field between civil society, the private sector, and states, as well as regional disparities and safety issues that impede activist participation.

To address these challenges, transparency in stakeholder engagement is essential, ensuring public visibility of participation mechanisms and discussion outcomes. Proper resourcing, including financial contributions and trainings, is crucial for effective multi-stakeholder participation, particularly for marginalized groups and non-digital rights organizations.

Participation in standardization processes, especially in the field of artificial intelligence (AI), is complex due to its technical nature. Civil society’s limited representation in standardization bodies, with a disproportionate focus on digital rights organizations and AI expertise, hinders diversity and inclusivity in these processes.

Global North organizations can learn from global majority-based organizations to incorporate diverse perspectives. However, stakeholder engagement faces resistance in many countries outside the United States and Europe, requiring innovative advocacy strategies.

International governance mechanisms may have limited influence in the European Union (EU) and the United States (US) but greatly impact national regulations within the global majority. UNESCO guidelines and recommendations from entities like the United Nations (UN) shape national regulations, though enforcement can sometimes become problematic.

The diversity of civil society and the global majority, including different languages and cultural norms, should be considered in policymaking and stakeholder engagement processes.

In the context of internet governance, there is a need for more inclusive perspectives. Incorporating learnings from initiatives like the Digital Services Act (DSA) Human Rights Alliance is important in shaping international internet governance.

While inclusive informal networks exist, coordination among these networks proves challenging, impacting effective stakeholder engagement and collaboration.

Privileged and well-networked organizations have the advantage of exposure and influence, creating an unequal platform for stakeholder engagement. This inequality must be addressed to achieve inclusivity.

Organizations need to take responsibility for bringing in new voices and perspectives, such as offering panel spots to others or having someone accompany them to ensure representation.

Understanding the UN advocacy process is challenging, even with dedicated UN advocacy officers, hindering effective stakeholder engagement in international policymaking.

In conclusion, stakeholder engagement is crucial for effective policymaking, necessitating the addressing of power imbalances, promoting transparency and accountability, providing resources and training, and embracing inclusivity. Considerations include international governance mechanisms, language and cultural diversity, and coordination within informal networks. Call for organizations to bring in new voices persists, while understanding the UN advocacy process is crucial to effective stakeholder engagement.

Jovan Kurbalija

The analysis of the speakers’ statements reveals several noteworthy points regarding the impact of triviality on people’s participation in large systems. One speaker highlights the influence of navigation experience in UN corridors on individuals’ sense of belonging and their feeling of being part of the process. The speaker suggests that this experience can shape people’s participation in the system, implying that a positive and inclusive navigation experience can enhance engagement.

Another significant concern raised by the speakers is the accessibility of documents from various international bodies such as the UN, EU, and others. It is argued that the lack of interactivity in PDF formats can hinder the accessibility of these documents. The speakers suggest that the static nature of PDF formats may limit the ability of individuals to engage with the content effectively, potentially excluding certain groups from participating fully.

Furthermore, the design of UN Secretary General policy briefs is highlighted as not being online-friendly. It is suggested that the current design may pose challenges for users trying to access and engage with the content online. This aspect negatively affects the user experience and may impede people’s ability to participate in policy development processes.

The sentiment among the speakers towards the current state of information accessibility in policy development processes is largely negative. The mention of the European AI Act exemplifies this sentiment, as its complex display hinders consultation, potentially limiting effective engagement. However, there is a positive aspect as well. The analysis reveals an acknowledgement of the importance of encouraging alternative thinking and creativity in the current world context. This suggests that fostering diverse perspectives and innovative approaches can contribute to more inclusive and effective policy development.

In conclusion, the analysis highlights the importance of considering the impact of triviality on people’s participation in large systems. It emphasizes the need for a positive and inclusive navigation experience in institutional settings such as the UN. Additionally, it underscores the significance of improving the accessibility of documents from international bodies, including the design of policy briefs. The sentiment of connectivity and engagement in policy development processes is largely negative, but there is a recognition of the value of alternative thinking and creativity. These insights provide valuable considerations for policymakers and institutions aiming to enhance public participation and effective governance.

Peter Marien

Summary: The need for strong participation of civil society in global digital governance is emphasised as a positive argument in the context of the EU. The EU strongly advocates for civil society involvement as it believes that without it, societies tend to drift off in directions that are not aligned with a human-centric model. Similarly, advocating for multi-stakeholder level discussions is seen as positive and necessary. It is argued that certain discussions should not be limited to intergovernmental talks alone.

However, there is a negative aspect to consider as well – the lack of knowledge and capacity within civil society organisations. It is observed that civil society faces challenges internally within the European Commission, and there is a general lack of know-how when it comes to global digital governance. This lack of expertise and capacity hinder the effective participation of civil society in shaping digital governance policies.

The importance of inclusion of the Global South in digital dialogue is seen as a positive argument. It is noted that there is currently a gap in participation from the Global South in global digital governance. An initiative led by the Diplo Foundation known as the Civil Society Alliances for Digital Empowerment (CADE) project is mentioned as being instrumental in addressing this gap.

Furthermore, it is highlighted that more capacity and resources are needed for civil society to participate meaningfully in internet governance discussions. The fast-paced nature of the internet scene and the increased global attention to these issues, particularly due to the COVID-19 pandemic, create a demand for civil society to possess not just the know-how, but also the necessary resources for participation.

Peter Marien, a supporter of civil society’s meaningful participation in internet governance discussions, emphasises the importance of investment in capacity building and resource allocation. He argues that adopting new technologies, such as artificial intelligence (AI), requires resources for meaningful participation, especially since AI has become a fundamental topic in internet governance.

Initiating consultation processes with the general public is seen as beneficial and positive. Notably, a Nobel laureate journalist actively interviewed a random selection of individuals, which had a significant impact. Extensive consultation processes, which sometimes receive thousands to tens of thousands of inputs from society and are sometimes even analysed by AI, occur in EU legislation processes.

Additionally, it is argued that consultation processes should also involve non-experts, as citizens, despite lacking expertise, can have a notable impact. This underlines the value of diverse perspectives and the democratization of public engagement.

In terms of diplomacy and communication, it is acknowledged by Peter Marien that sensitivity should be maintained when dealing with such matters. This implies that diplomatic interactions require a tactful approach to foster constructive dialogue.

Peter recommends seeking dialogue and creating a trusted relationship with the involved government when it comes to government relations and policy-making. He refers to experiences in other countries, like Kenya, where open dialogue has been beneficial.

Finally, it is suggested that reaching out through other organizations that may have better access to open dialogue can be fruitful. By collaborating with strategic alliances and other organizations, civil society can effectively enter the conversation and contribute to the discourse on internet governance.

In conclusion, the expanded summary highlights the need for strong civil society participation, the importance of multi-stakeholder discussions, the lack of knowledge and capacity within civil society, the inclusion of the Global South, the necessity for increased capacity and resources, Peter Marien’s support for investment in capacity building, the benefits of initiating consultation processes with the public including non-experts, the importance of maintaining sensitivity in diplomacy and communication, the significance of building a trusted relationship with the government, and the suggestion of reaching out through other organizations for open dialogue. These various aspects contribute to shaping effective global digital governance and promoting a human-centric model. The summary aims to be an accurate reflection of the main analysis text.

Tereza Horejsova

The arguments and stances presented emphasize the importance of civil society in policy processes. Civil society organizations are viewed as crucial actors in policy development, as they often prioritize the interests of individuals. They provide multiple perspectives and efficient coordination mechanisms, enabling policy processes to benefit from a wide range of viewpoints.

The Internet Governance Forum (IGF) is highlighted as a significant platform for civil society to engage with others and influence relevant issues. Traditionally, the IGF has been dominated by civil society participation, and it offers a safe space for civil society to have a say in the governance of the internet.

Moreover, the contribution of all stakeholders, including the private sector and civil society, is considered vital for the development of digital policy. It is argued that it would be absurd to discuss digital policy without consulting these stakeholders, as their involvement ensures a more inclusive and comprehensive approach.

The Global Forum on Cyber Expertise (GFC) is recognized for acknowledging the importance of multi-stakeholder cooperation in capacity building related to cyber security. The GFC serves as a platform for actors involved in cyber security to come together and work collaboratively towards building expertise in this field.

Despite these positive aspects, there are concerns about the effectiveness of consulting civil society organizations in a superficial or “tick-the-box” approach. Civil society organizations have varied agendas and objectives, making it challenging to consult them effectively. Some policy fora are criticized for conducting pro-forma consultations that do not necessarily lead to meaningful outcomes. This lack of sufficient coordination of priorities among donors is seen as a barrier to the effective involvement of civil society organizations in policy forums.

On a different note, Tereza Horejsova’s perspective is highlighted, as she believes in the importance of introducing new and inexperienced voices in panels. She encourages experimenting with panel compositions to achieve fresh perspectives and downplays the risks associated with having first-time panelists. This approach fosters inclusivity and contributes to reducing gender inequalities and promoting diversity in panel discussions.

In summary, the arguments and stances presented emphasize the crucial role that civil society organizations play in policy processes. They bring valuable inputs, diverse perspectives, and efficient coordination mechanisms. The Internet Governance Forum and the Global Forum on Cyber Expertise are identified as important platforms for civil society to engage in relevant discussions. However, there are concerns about the superficiality of consultations and the lack of sufficient coordination among donors. Additionally, Tereza Horejsova’s perspective highlights the need for inclusivity and fresh perspectives in panel compositions. These observations underscore the significance of multi-stakeholder cooperation and the active involvement of civil society in policy development processes.

Audience

Technical standards bodies can be complex and overwhelming, making it challenging for new participants to navigate and contribute effectively. These bodies consist of numerous working and study groups within each organization, leading to a fragmented landscape. It can be difficult for newcomers to determine which meetings to attend and how to make meaningful contributions. The dominance of the United States and Europe in these bodies further complicates the situation, potentially marginalising participants from other regions around the world.

However, there are strategies and structures that can support and facilitate smoother participation. One such approach is the provision of engagement strategies and support, such as financial assistance and assistance with visa processes. For instance, Article 19’s Global Digital Program offers support structures that take care of finances and visa processes for participants. They also provide one-on-one mentorship to help participants understand complex concepts and bounce ideas off after meetings. This support helps participants overcome logistical barriers and align the priorities of civil society organisations with the needs and objectives of technical standards bodies.

Cooperation and collaboration between the Global North and Global South in technical standards bodies should embrace an inclusive approach and avoid a white-saviorist mentality. There is much to learn from the Global South, and creative advocacy strategies can flourish outside of the US and Europe. By embracing a collaborative approach that respects the knowledge and expertise of all regions, technical standards bodies can become more equitable and representative.

International governance mechanisms have a significant impact on national regulation, particularly in the global majority. Entities like UNESCO recommendations can disproportionately influence national regulatory frameworks, potentially shaping policies that may not be in the best interests of countries in the global majority. Therefore, it is important for these governance mechanisms to involve a diverse range of voices and perspectives to ensure fair and inclusive decision-making processes.

It is crucial to acknowledge that civil society and the global majority are not monoliths. There is significant diversity within regions, and even within a single country like India, there are multiple languages and perspectives. Recognising this diversity strengthens the ability to address inequalities and promote inclusivity within technical standards bodies.

Capacity building is a process that takes time and cannot be achieved in a day. This is particularly evident in areas like climate change, where developing the necessary expertise and infrastructure to meet the goals outlined in global agreements like the Paris Agreement is a long-term endeavour. Recognising the gradual nature of capacity building is crucial to avoid unrealistic expectations and foster sustainable progress.

Citizen participation at the local level plays a crucial role in addressing global issues. As demonstrated during the Paris Agreement process, citizen assemblies can provide valuable input and insights. Encouraging citizen participation in different parts of the world can foster capacity building and contribute to global efforts to address pressing challenges.

Institutional capacity building is vital for civil societies. By strengthening their institutional structures, civil society organisations can better engage with governments and stakeholders to influence policy making. For example, the pending implementation of India’s Personal Data Protection Act and Digital India Act highlights the need for a strong front when dealing with governments. These regulations will impact the digital activities of 1.4 billion people, emphasising the importance of civil society organisations advocating for their interests.

When engaging with governments and policymakers, alternative methods of engagement beyond traditional consultation processes should be explored. Consultation processes prior to the introduction of a bill in India, for example, have proven to be more fruitful in generating meaningful engagement. Finding ways to engage directly with parliamentarians and government officials can lead to more effective and impactful involvement.

While common input by civil society organisations can be valuable, it is important to strike a balance between shared perspectives and maintaining a variety of opinions and perspectives. Overlooking the diversity of opinions within civil society organisations can limit the range of perspectives presented and potentially hinder inclusive decision-making processes.

Collaboration between donors is crucial for promoting synergies and avoiding duplication of efforts. Donors such as the European Union and the State Department are often working on similar projects but may not be collaborating effectively. Encouraging collaboration among donors can lead to more efficient and coordinated support for initiatives and maximise impact.

Creating a wider network of civil society organisations can foster sharing and collaboration. This approach allows organisations to build upon each other’s work, share resources, and learn from one another’s experiences. By creating a supportive network, civil society organisations can collectively address challenges and contribute to social progress.

Rules for interaction in vast spaces, such as international forums and technical standards bodies, need to be shared and clarified to facilitate effective engagement. Currently, the lack of definitive rules and different interaction styles across spaces can hinder meaningful communication and collaboration. Workshops to brief participants on interaction techniques and establish common ground for engagement are proposed as a possible solution.

In conclusion, navigating and contributing to technical standards bodies can be challenging due to their complex nature. However, supporting engagement strategies, fostering collaboration, and promoting inclusivity are essential for facilitating participation and ensuring the effective functioning of these bodies. Empowering civil society organisations, embracing diverse perspectives, and building strong institutional capacity are key components of this process. By working together, stakeholders can foster meaningful dialogue, create impactful policies, and drive positive change towards achieving the Sustainable Development Goals.

Session transcript

Pavlina Ittelson:
and the inclusiveness of the spaces here in the right room. I see that this is a time where everybody would rather take a nap with a jet lag than discuss serious topics, but we do have a wonderful panel and a good topic today, so I hope you will all be engaged. And we see this as an open forum, as a dialogue, as a learning experience, and we’d like to hear as much from you. And I see a lot of expertise in the room as from our panel. And let me kick us off then with introducing myself. So my name is Pavlina Ittelson I’m the Executive Director of Diplo-US, and I’ll be moderating this session and also speaking on behalf of Diplo Foundation. We have Peter Merian, Team Leader of Digital Governance, Unit 5 in Science Technology Innovation at DG INTPA. We have Teresa, IGF MAG member, GFC Outreach Manager, and esteemed board member of Diplo-US. Then we have Victor Kapiyo, member of the Board of Trustees of Kenya ICT Action Network. And Marlena Vyshnyak, Senior Advisor of Digital Rights of European Center for Nonprofit Law, ECNL. We also have online participation, and my colleague Sita Lakshmi is as a moderator who will come in with the questions from online. So what are we going to be talking about? We will discuss in this session a new initiative by the European Commission, where Diplo Foundation is a part of, and a new project, Civil Society Alliances for Digital Empowerment, CADE, led by Diplo Foundation, working with nine partners globally that aims to increase participation of the civil society into international IG processes, just funded by European Commission. We have partners at this table and some in the audience as well. There are Forus International, ECNL, CIPESA, KictaNET, Sarvodaya Fusion, Vision for Change, SMEX, Fundacion Carisma, and PICISOC So quite a big group and quite a diversity of views on our end. So our aim is to discuss how to improve and enhance engagement of civil society organizations in multi-stakeholder forums. What challenges do civil society face in meaningful engagement? And also, how can we bring in the perspective of Global South, civil society into the international multi-stakeholder forums? Specifically, we will talk also about standardization forums at ITU, IETF, and ICANN. So with this, I would ask Peter to start us off with a short introduction, please.

Peter Marien:
Thank you very much. Good afternoon, everybody, or maybe people online, good morning or good evening. Thanks a lot for giving me the mic. I think I’m probably the least knowledgeable in the room on the topic. But anyway, I’m glad to kick it off. So maybe a bit of context, because as was mentioned, we are moving into a new project on this topic, and I’d just like to shape a little bit how we got to that point. So about three years ago, the topic of digital became a priority for the European Commission. And in my DG International Partnerships, we were looking at how to best approach this topic to work at this. Specifically, we’re looking at this topic at the global level, national level, regional level, and of course, at different thematic levels. And when we looked at this specific topic, we always look at this through our lens of a human-centric digital development. And that means that, of course, the human is centric, not the state, not the company. And also, as you’ve probably heard many times before, we are aiming at tackling the digital divide. So very soon we came on this topic of global digital governance. What does this mean? This was quite new to us. This is also why I have to stress we are still in learning mode. And another aspect of our approach is that we wanted to look at this topic from a multilateral point of view, and also from a multi-stakeholder point of view. And this is key. You know, EU is a very strong proponent of the multi-stakeholder approach, IGF processes and others. But we also looked at this through the multilateral approach and when we started looking at this multilateral level, we noticed that even though everybody claims to be proponents of the multi-stakeholder model, maybe not all the actors in the multi-stakeholder prism are there. And so specifically, we thought that maybe on the topic of civil society, that that could be a topic that we would like to see worked on. So I was talking about digital, but of course, on the other hand, EU is a strong proponent of civil society in general. So I won’t go too deep into that, but for us, it’s clear that in the absence of a strong participation of civil society, we tend to see, if you look at history, or even today, we tend to see societies drifting off in directions that are not aligned with our human-centric model, let’s say, or with our free democratic societies, okay. So this is a bit where we come from. So then the question was, okay, on global digital government, who needs to be around the table? So we were looking at this EU and agencies, and we also noticed that there are actors out there which are pushing some of these discussions into the intergovernmental sphere. Also at this IGF, I think this is a topic that’s coming up quite a lot. And so we just want to emphasize again that we really would like to have certain discussions, global discussions at the multi-stakeholder level. We’re the first proponent worldwide for the multilateral system, don’t get us wrong, but certain discussions should not just be intergovernmental. And so this is where we are. Now, when we looked at, okay, how shall we approach then the topic of civil society, I’m sure we will get back in more detail into that later, but just a few things. On the one hand, we noticed possibly a lack of know-how on the topic and a lack of capacity. Now, I have to say, we face the same issues internally. So this is not something that is only for civil society organizations. Even in our own DG, in our own unit, there are very few people that actually know this topic and we actually barely have resources to cover this. So it’s not unique. That’s the first thing. Second was that even though civil society was present, then maybe not at the volume, so at the amount that we wanted. So I’ll not go too much into that now. Okay, just to emphasize also that for us, in our perspective, when we talk about digital, we link this to the topic of rights, fundamental rights. So this is fundamental for any of our discussions that whatever we talk about, in the end, it has to be aligned with our views on the rights-based approach, basically also aligned with the UN Charter of Human Rights. And that underpins many of the discussions that we can have afterwards. And then another thing is that we wanted to make sure that the Global South is involved because when we looked at the capacity, there are actually actors also in civil society that are very knowledgeable, that have a track record, but that was not, I mean, we saw then maybe gaps in the global south. So we wanted to work on that. Last thing, I’m almost finished. This program for us has to be, we positioned this in an overall program where we work on digital and multilateral, so digital and multilateral, and so in that context, just for information, we’re also working with ITU and UNDP, so I mean, we have, you know, we’re funding them for actions on digital and multilateral. ITU, UNDP, also OHCHR on rights, UNESCO, and we’re also working with the tech envoy. Of course, we’re working with EU member states, and then, as was mentioned, and this is quite new, so this is the first time for us on this specific topic, we now have two actions that will start soon, and one is indeed with, under the leadership, well, you know, chaired by Diplo, as was explained. So thank you so much, and I’ll pass the word back.

Pavlina Ittelson:
Thank you, Peter, and we certainly appreciate your insight on how European Commission views the participation of civil society. I think it resonates a lot with what we see in the field as well, and we certainly agree, working with the small and developing states, that the capacity problem is not only on the side of civil society, but with fragmentations of different forums and shifting things, it is an overall problem which needs addressing. With that, let’s go to Teresa, with her IGF mag hat to tell us more about how the international forum sees this problem.

Tereza Horejsova:
Thank you very much, Pavlina. Thanks also, Peter. Well, first of all, congratulations. Not only to the grantee, and a good one, and with excellent consortium, but also for, you know, you as the donor recognizing the issue and the problem, and deciding to make it a priority, because it is important. You know, I will start with a few reflections on why I feel, in general, inputs of civil society are essential in the various policy processes that we are dealing with. You know, of course, you know, many of the deliberations that are happening here for, like Pavlina has mentioned, you know, actually, you know, impact the individual. And it’s often the civil society organizations that have the interests of the individuals really close to their heart. But beyond this kind of existential reasoning, I feel that more and more we are moving in some kind of a general culture of multi-stakeholderism and, you know, that leading, hopefully, to kind of more efficient coordination mechanisms. So, you know, basically, with a few exceptions of some very hard policy issues, it’s very difficult to think of a policy process that wouldn’t benefit from multiple perspectives, from multiple stakeholder groups, obviously, including civil society, which can ultimately lead to better and more informed policymaking. So, you know, even if you, like, we are talking here about civil society, but, you know, think also about other stakeholder groups, like, for instance, how absurd would it be to discuss some digital policy developments without being in touch with the private sector, you know. So, I feel that the same absurdity would stand for not consulting the civil society. So, I’m wearing a couple of hats today. As Pavlina mentioned, you know, one hat is ex-DIPLO, current board member of DIPLO-US. The second one, Pavlina said, IGF-MAG member, but actually, as of this morning, that’s not the case because I have served my three years, yes, but I hope that it still allows me to provide some perspectives on the current forum. So IGF is very traditionally dominated by civil society participation. It’s not the stakeholder group that the IGF is struggling with, there are actually other stakeholder groups where the struggle is more of an issue. So in this sense, really, I feel it is a safe space and also the magic space in a way for civil society to allow to engage with others without the pressure of necessarily kind of having a negotiation or a very concrete outcome in this regard. So that’s something that definitely should be protected, you know, and I’m really curious once this IGF is over, you know, how the chart of the various stakeholder representation will look like. But as usual, I will expect very, very heavy domination of civil society. That’s also why civil society, and maybe rightly so, is very defensive about any kind of, yeah, how to put it, you know, maybe some concerns about the future of the IGF. So you will really hear a lot of voices you’re hearing already and will hear in the coming months, even more, because at this moment, there is no equivalent to a space like the Internet Governance Forum where civil society could have so much opportunities to express and in a way to also influence the discourse on the issues that are here. I’m happy then to go more in detail about how it actually works, what’s the role of the MAG in this sense, but that’s maybe if we have time. And the last hat I’m wearing, and allow me just a very, very short mention, I currently work with the Global Forum on Cyber Expertise, the GFC. For those of you not familiar, we are actually also like a platform or a member organizations for various actors that are involved in especially capacity building issues and particularly related to cyber security. And I think from the whole vision, how the GFC wants to bring these actors together, it’s also one of the organizations that has got how crucial it is to have various actors from all across the stakeholder spectrum to get together and exchange on issues related to cyber in particular. So I’ll probably stop here and look forward to the discussion.

Pavlina Ittelson:
All right, now I’ll turn to Marlena, because we did have a very extensive position from Teresa on the engagement of civil society. So from the position of ECNL and advocacy position, could you bring your perspective on the topic, please?

Marlena Wisniak:
Sure, thanks so much, Pavlina. Hi, everyone. It’s great to be here today. I’m Marlena Wisniak. I lead the emerging tech and AI work at the European Center for Nonprofit Law, a civic space and human rights organization based in Europe. And I live in San Francisco, so a lot of interaction with the tech companies, which I assume you mentioned as a stakeholder is often missing. So just a couple opening remarks, and we’ll dive deeper into the conversation. But at ECNL, we really see stakeholder engagement as a cross-cutting and necessary component of any kind of policymaking at the national, regional, and global levels. And we really see it as a collaborative process, so it’s not just a one-time mechanism where we hear someone speak, but it’s an iterative process where folks have different ways to intervene depending on where they are, what their capacity is, what their resources are, and fundamentally that they can meaningfully influence the process. And that’s something that’s hard to quantify for now. It’s one thing to listen. It’s another thing to actually have our voices heard and implemented. And of course, in terms of beyond IGF, just policymaking and regulatory mechanisms in particular lie within this state, so decision-making is… member state or governments, but I think there’s more evidence that should be, or evidence-based research that should be done to really see how much of these consultations are impactful. There’s something also like stakeholder fatigue, where we have lots of consultations. And to be clear, ECNL always pushes for multi-stakeholder participation, and we are deeply concerned also about the future of IGF in particular, including where IGF 2024 may be, for those who have heard. But all this to say that it’s not enough to just have multi-stakeholder, it has to be properly resourced, including not only financial participation, but also trainings, especially for organizations that aren’t digital rights organizations, so that they can meaningfully participate. And I’m thinking especially here, marginalized groups like feminist groups, queer, racial justice, immigration, refugee groups, so that their voices can also be heard in a way that is meaningful. And fundamentally, there is an asymmetry of power between stakeholders, beyond the resource and financial access. I don’t think it’s a secret that there’s no level playing field between civil society, private sector, states. And within these sectors, these sectors are not a monolith either. So there is no such thing as one singular civil society or one private sector. There’s obviously a regional disparity. I’m very privileged to work for a European-based organization living in San Francisco. So I can pay my way to come to Japan. I don’t even need a visa. I have a U.S. and EU citizenship. So pretty much open to the entire world in terms of travel. That’s not the same for most of my colleagues. I’m also generally much safer. That’s not the case for a lot of activists and human rights defenders around the world. So having in place mechanisms that enable safe participation is just as important as enabling participation in of itself. And I will just end here. I know the rest of the session will continue on these topics that stakeholder engagement comes hand in hand with transparency. And that means that while closed door meetings are important and often necessary, there also needs to be public, transparent information about where to participate, how, what has been discussed, what are the outcomes of it to enable true accountability. Thank you.

Pavlina Ittelson:
Thank you, Marlena. We certainly hear you on the running marathons on sprint muscles. Yes, we do face the same issues where the engagement of civil society in international forum is a long-term engagement, long-term work, often decades. So the proper mechanism, not only on the sides of international organization, need to be in place, but systematically within the civil society organizations and within the funding scheme as well. Now, you mentioned also being from the Global North organization and having certain privileges. I would like to turn to Victor from the Global South part and to tell us more about what challenges are faced in the Global South and the civil society organizations participation.

Viktor Kapiyo:
Yes, thank you very much. From Kiktenet, which is a think tank based in Nairobi, we seek to promote the multi-stakeholder approach in the work that we do and to ensure that outcomes are actually meaningful for communities at the local level. We believe that the multi-stakeholder model is important, not just in for us but it’s not there in all the countries that we are working in environments where the relationship between civil society and government is not always good, which can affect the feedback or the responses to civil society proposals, because civil society has sometimes been labeled as noisemakers, and therefore when you present views with just those noisemakers, so in as much as we have the challenges at the local level, I think it is more difficult in global processes where you have the burden of getting the air ticket and the visa and all those many kilometers that you have to travel to make your point, which sometimes is not the case for global north organizations. So, for example, in Africa, for example, where we come from, the challenges of financing and the cost is a barrier to access. Issues around the technical capacity. We are aware that many organizations, at least the internet reach hasn’t been much, and mainstream organizations have not focused a lot on digital rights or internet governance issues. And as a result, you have very few organizations that are working in internet governance space. And they cannot solve all the problems in as much as the problems are well known. So you have few organizations, which they’re not always adequately resourced to with the capacity, whether it is human or financial or technical to respond to all the challenges or emerging challenges in the region. And so only you’re able to perhaps take, I don’t know, deal with, what is it called? The ice, the tip of the iceberg, right? So maybe that’s what they’re able to deal with. But the bigger problem sometimes remain unaddressed, yet we have an increasing population that is getting online across the continents. And that means that we need to be able to get more people on board to speak up for all these new communities that are joining. A new challenge is that previously, it was easy to have multi-stakeholder for internet because there weren’t so many users. So now everybody uses. So who is the stakeholder? Who should be in and who should be out? And how can we bring these conversations to everybody? Because everybody with a email account is a stakeholder, right? So getting people to actually recognize that they do have a voice and they should be able to speak up and engage. I think that realization has not come for many people because then of these barriers. And I think the other aspect is that for local organizations, you have very, very unique context which you’re working in and different realities from those of the global south. And this perspective sometimes are not always, it’s not always possible to have them articulated in the spaces where the decisions are being made. For example, I’ve participated in OEWG sessions. and other sessions where you are in the room but you don’t get to speak. Or you’re in the room but you are allocated only three minutes to say what you need to say and that’s not always enough. We are grateful for hybrid participation because it has really opened up the space for participation but not everybody is aware of the situation and I think sometimes organizations in our countries are dealing with other problems like internet connectivity so most of the time they’re looking down, trying to connect to rural communities and trying to deal with the digital rights challenges at the local level that they forget the big picture that actually there are global and regional processes that they need to pay attention to. So you end up dealing with home solutions or home problems and when you hear that decisions are being made, you’re like, but how am I supposed to get there and get my voice heard? So that’s challenge of the disconnect between the local work and the regional and global processes and even just being able to deploy resources to keep up with the number of initiatives that are ongoing at the same time. Even for some people you speak to them in the corridors here and the confusion, which session, you’re one representative and there are how many meetings at IGF and you are the one person who’s come and you want to make an impact so you may not have the capacity to attend or figure out where to make the most impact so and of course that’s a resourcing or something challenge. Of course now people can participate virtually but there’s that mountain that global processes, regional processes seem like a big mountain to overcome but I think not to paint a all gloom picture., this region has changed from 10 years ago. We now have more people, we have more voices and we have quite a number of local initiatives and organisations that are actually working on the internet government spaces. Just to give you just one example., KictaNet, we have been running the Kenya School of Internet Governance (KeSIG) for the past six years, and we have trained almost 5,000 people on internet governance, and it is just in Kenya, and we hope with more people knowing what is happening, then they can be able to make at least a chip on that iceberg to make a difference. Thank you.

Pavlina Ittelson:
Thank you, Victor. There is a lot of points I could reflect on, but from the position of Diplo Foundation, that is something we see and the main, three main issues we see is the fragmentation of forums where the internet governance is discussed. It’s more and more. Also forums where the internet governance is discussed are going into more detailed discussions, requiring more resources both on capacity, both on civil society organizations and governments, because when we work with small and developing states, they just say, I don’t have the capacity to go in and speak every other week somewhere on this. So we definitely hear that. Another point is the lack of capacity, of understanding of human rights impacts of technological development, of standardization, and of course civil society is also the technical community. On that side, it is from the other side. It is from the side of the human rights implications where the understanding of technology is there, but the implications of what that technology, when it is launched, can effect is sometimes not there. So we do have this gap we’re trying to bridge, and that is with capacity building, that is with outreach and advocacy. basically creating networks of civil society organizations in the global south and connecting them to global north organizations which could support them and help each other basically, because global north organizations also do not possess all the knowledge in the world. And bring those issues which are not currently at the Internet governance forums but are on a regional or local level related, for example, to indigenous groups, to women’s rights, to certain cultural or language aspects of Internet governance to the global level. So with that, what can we do? We all agree it’s beneficial to have civil society at the table. We all agree there are challenges to that. And what can we do? So Peter, if you could maybe expand on that and explain to us where the European Commission stands.

Peter Marien:
Thank you. Thanks very much. Again, I think I’m not sure that I’m the best to respond to that, but I’ll give some thoughts. So I think actually quite a few things have already been said, so I might be repeating a few things. Okay, how can we make sure that there’s more participation of civil society if indeed it is needed? I think the first thing is that, well, not the first, but one of the main things is that the capacity has to be there. I mean, to participate in the discussion, you have to have the know-how to participate. Or I’ll say it differently, if you want to participate meaningfully, because indeed we can all participate, but to participate meaningfully, you might need to have a little bit more knowledge on certain topics. It doesn’t mean we have to become ICT specialists. Far from that. Actually, not at all. but we need to know the broader implications, where does it fit in society, in the processes, what are the political implications, who do we need to contact to have an impact and all these things. So I think it’s about capacity. Efforts are being done at all levels, but it’s simply the scene is moving so fast that we’re probably running a little bit behind, especially maybe with the last few years, I’m just guessing, but I think maybe also with the COVID situation, to shift from society to move online has been quite dramatic. And this has maybe increased the world’s attention to these issues. And so to deal with this more adequately, this capacity has to follow. So I think it’s the first thing. I hope I’m responding part of the question. Second is then, of course, apart from the know-how, even if you have the know-how and it was mentioned here, then there’s still the question of resources. Now you can follow hybrid, you can participate, we can do so many things thanks to digital technology. Nevertheless, even if you don’t travel, you need resources, you need people dedicated to the work. But even, and then if you want to participate, of course, to events, you need other resources. Maybe to come back to one of the elements in this project that will start soon, is that the idea is to participate that CSOs have a meaningful participation at IGF, but also at other fora, such as for our organizations, if I can say, such as IETF, ICANN, ITU. Well, to meaningfully participate in ITU working groups, it takes time. And so you need resources for that. Simple. At the moment, it’s companies and states, backed, I think, mainly, operations, that are able to do that and therefore influence. Same for standard setting and so on. And if you don’t have the resources financially also, then this is difficult. So we try to, with this project, but there’s many other ways maybe to partially respond to that. And then maybe to also underline how come that maybe the voices were insufficiently heard. Just maybe reiterate that maybe there has been acceleration of events in the last couple of years because of COVID, but also simply because the adoption of technology. We’ve spoken also about connectivity access, but then of course there’s the new technologies. AI is now the hot topic. Maybe another day it will be something else. I mean, it’s a hot topic, but it’s fundamental. I mean, I’m not diminishing it, but to be able to participate also in the discussions of AI, okay, and the big principles, I think everybody can do that, but to really be on top of it, again, you need to be able to invest in those topics. Thank you.

Pavlina Ittelson:
No, I heard, and part of the project is basically the involvement of civil society in different standardization for, as you mentioned. So ITU, ICANN, IETF, and as we all know in this room, not all of them are equal. Some of them are more open to civil society participation and transparent and have human rights principles set in place for standard setting. ITU and ITUT is another one where the civil society, when the door is closed, they go through the window basically and become part of the government delegations and find different ways to get into one example is Consumers International who do it through the consumer’s rights. So there are ways to get engaged. It’s not an easy one, I would say, but there are ways to get engaged in the ways that once you have the capacity to understand. where the connections are, how to advocate for certain human rights and human-centric values. But let me turn to Marlena to explain to us more on this from the advocacy perspective again. Thank you.

Marlena Wisniak:
Sure, thanks. So at ECNL we participate in some of the standardization processes, mostly on AI, which is what my team focuses on. And like Peter said, it can be highly technical. So talking about things at the high-level principle levels, such as transparency, participation, that is easier. But then, you know, what does transparency mean in practice? What do we want to be transparent on? And we talk about the standards. It becomes much more technical, and that’s something that we’ve seen as a struggle, especially to get more orgs involved. So my team has expertise on these issues, but it really is a small group of people. And by small, for example, at the EU level, we’re talking about like 10, comparing to hundreds, if not thousands of representatives from the private sector, for example. So even you can, you know, you can see there quantitatively the difference in numbers. And AI, for example, as you mentioned, Peter, is a hot topic now. We started working on it in 2020, and I’ve been focusing on it since 2017. For a long time, it was incredibly niche, so even more close to civil society. And it was a handful, and I literally mean a handful of people working on it. And this year, probably of ChatGPT, that’s my non-empirically tested theory, it has become a big topic on the policy level, right? So I don’t know if anybody was here at UNGA in New York, AI was the topic, folks. So everything, you know, UNGA is not even digital focused. So how do we ride these waves of hotness, to take your piggyback on your hard word, Peter, while at the same time having meaningful participation is a struggle. So specifically at ECNL we participate in the ISO standard 42005 on impact assessment of AI systems. So you can hear already that, hear how nerdy that sounds. And when we have expertise in human rights impact assessments, which I think is a more broadly shared expertise amongst the whole society, but still highly technical. Those working on the UNGPs, for example, should participate more. We’re also part of the IEEE, which I don’t even know what it stands for, International Electronic something. I can Google it. An AI risk management subgroup on organizational governance. So all of this is, you know, very technical as well. At the EU, there’s the CEN-CENELEC, which is the standardization bodies. And actually we managed to we managed to get the European Commission to let me get this right, mandate that the CEN-CENELEC includes civil society and actually they have allocated resources. So CEN-CENELEC, the standardization bodies, have allocated resources to include external stakeholders, and yet they still don’t do it. So even when they are required to do so by the Commission and when they get funds, they’re still very reticent. And when it actually does happen, it’s really, really hard to participate. Right now, to give you an example, it’s mostly ECNL, Article 19, and Center for Democracy and Technology that participates in a handful of academics. So it’s really it’s really a closed space and when it when it’s like honestly pretends to be open, it’s not, even though they if people have the best intentions. And I’d say one positive case study that I’ve seen is in the US NIST,National Institute of Standards and Technology I think. They have been very inclusive in engaging some stakeholders in the risk assessment framework, and also have made it a little bit more welcoming. But again, you still see a disproportionate participation of not only digital rights orgs, but those with expertise on AI specifically. So there’s always this push and pull between inclusiveness versus needed expertise. And at ECNL, we really try to train other CSOs, both digital rights and non-digital rights, on these issues. If anyone here in this room is interested, check out our learning center. So a shameless plug for ECNL Learning Center on Google, where we have a couple basically courses specifically for CSOs on AI with some specific things like surveillance technology or platforms. So that you can participate a little bit more. And this is just the technical expertise in addition to, obviously, the challenges of visa and funding and everything I mentioned before.

Pavlina Ittelson:
Thank you, Marlena. I’m happy there are also some positive examples, because it did for a while sound like doom and gloom here. We do not have any questions online. But if there are any questions in the room, please feel free. We’ll also have a Q&A session at the end of this block of questions. I see a lot of familiar faces. So please don’t be shy to come to the microphone. I know it’s a little scary to go in the middle of the room sometimes and ask a question. But feel free to also share your experiences, if you have any, with these processes and how they relate. Any takers so far? OK.

Audience:
Oh, it’s a bit tall for me, sorry. Hi, my name’s Don. I’m with Article 19’s Global Digital Program. We’ve worked to support civil society organizations and individuals primarily from the Global South to participate in technical standards setting bodies. So like everything you’ve said really resonates. Just also what we’ve found has been useful when we’re working with civil society organizations is being able to identify their priorities and then aligning it with what would be useful within technical standards bodies because we recognize, as you said, it’s like a whole fragmented landscape. And even within standards-developing organizations, there are just dozens and dozens of working groups and study groups within each one. And it can be quite overwhelming for organizations to jump into, say, like the IETF and work out which of the 36 meetings they should be attending, like you said. So we’ve actually been working to develop engagement strategies, so being able to support them. And we take on a lot of the, we institutionalize a lot of the support structures. So like in terms of the financial capacity, in terms of like working on the visa processes, like we kind of take care of that. So that organizations and individuals from the Global South don’t need to necessarily have to like put in time and effort to focus on that, but rather scope out the work, be able to understand the concepts that are being brought up. And then we’ve also done a lot of one-on-one mentorship and engagement because we also recognize that these standards bodies have a monoculture. It’s a very like technical space, but it’s also very like Europe and US dominant. And so being able to have someone to go with you to these meetings really helps because I think after these meetings, a lot of the times people are often processing what they’ve learned, what they’ve heard. And so being able to have like someone to be able to bounce ideas and thoughts with post this. So it was just sharing some of our experience. Thank you.

Pavlina Ittelson:
That resonates very well with the project we’re about to start. where there was a big study done by one of the partners for us which found that specialist standardization bodies are male dominated, white dominated, English speaking, inaccessible to any type of variety of opinions by design. So that’s why we’re talking about the running the marathon that it needs to be slowly chipped off and introduced the different opinions presented by the civil society organizations. You also mentioned and I know Victor spoke about it, the participation from the Global South organizations and how we can help them. Part of what we will be doing is there’s a three-pronged approach which is part is capacity building, Diplo will be responsible for. One part is advocacy and bringing in grassroots opinions and networking between the civil society organizations. But also helping those who are ready to do so in engaging with these forums, in engaging through guidance on how to write a briefing for example. How do you go in and present it? What is the best strategy often being involved in these processes to achieve the goals of your organization? With that, I would like to turn back to Victor maybe and ask you about if you could elaborate a little bit of the benefits of building partnerships between the organizations, both Global North and Global South and Global South, Global South civil society organizations.

Viktor Kapiyo:
Thank you. I think there are various advantages to this collaboration. I think first is it addresses the problem of the lack of linkagesthat existed across the various digital rights organisations. We realised that collation building is very important, and collaborative approaches are even more important because we are working to solve the same or similar problem, so having that alignment that you know we are able to share our concerns collectively and figure out what are some of the key emerging themes we want to address. I think that is an important win. The second is that it takes advantage of the competencies of both organisations. If, for example, there is an event in Geneva where one of the global north organisations is based, it is easier for them to cross the road and present the views or ask for a meeting than it is for me to come from Nairobi to get a visa and struggle through trams to make the point. That scaling becomes easy because of physical presence and understanding also the local dynamics. The global north organisations which work closely, whether it is at the UN in New York or EU in Brussels they have built relationships with the various policy makers in those various offices.And therefore, when we come there, it is easy to, you know this is a person to talk to, do not go around, or the office is on the fifth floor, room number five, simple things such as that make a big difference because when you arrive at the UN, it is a big space and having someone who has that local understanding really helps. Also, another thing it helps in terms of capacity building and knowledge exchange between two organisations. Global north organizations may not understand perfectly 100% the context in global south countries, and so this discussion helped in terms of sharing knowledge and exchanging ideas, like what works for us and what works for you, and how then can we build on this. We’re able to present, you know, sometimes access to our government officials is usually difficult at the national level. Because, you know, you can’t access a minister easily. But sometimes, if you’re able to participate in a global forum, then you’re able to meet the delegation there and still be able to articulate the issues. So there is the benefit of learning from the organizations that have done it before, in terms of even knowing what to say and how to say it, and maximizing that two minutes that perhaps you will have with that person before they dash into the next meeting and say these are the three things that we need you to do. And also leveraging on the other partnerships that we know within the global circles, the influential governments and so on, and all these alignments, and the power mapping that perhaps that skill, the global north organizations have already done and have understood who the power brokers are. I can have my three issues and know who to tell them to, as opposed to going there and wondering where to start from 200 member states, you know. So that beneficial partnership is very useful. It’s an advantage. And of course, more importantly, the resources. They’re able to leverage the technical resources in terms of skill. For example, some of this, the ITF, the IE, they’re very technical. And global south organizations, we might not have a technical person like an ICT person, because now it is becoming. and an important component that human rights organizations is not just lawyers, you must have the techies there and you must have the engineers and so on because some of the issues, I remember once one government official told me, we are discussing spectrum and I’m going there, I’m saying, yes, we want to hear about human rights concerns with this spectrum and I’m like, okay, so who do I bring to say these things and to break down what spectrum actually means for the ordinary citizen on the street. So leveraging on a partnership, we were able to get engineers who’ve done it before and have best practice that then they could be able to review the submission that we were doing and give some perspective. So there are some unique benefits to those alliances and if we’re able to build strong coalitions between Global South organizations but also with Global North organizations because I think there is a certain power that we can have when we work collaboratively. And I think lastly is for funders because we have, there’s a significant problem when we have different funders who are funding different things and it’s all disjointed and fragmented and they’re supporting the same organizations who are competing for the same basket of funds to do the same problem, to attack the same problem. And so when they’re not coordinated, it is also creating problems for civil society organizations in terms of coordination because we are competing for the same EU grant. So do I partner with you or do I partner or do I go solo? And does collaborating affect the opportunities and are the donors goals aligned with the specific needs of people to help organizations collaborate? And I think it is important that funders appreciate the dynamics of Global South organizations and the impact of the funding and how they model those funds. in terms of the ease of access and how they can help build and strengthen civil society in the global south to actually make a strong impact, yeah.

Pavlina Ittelson:
Thank you, Victor. And I’m having a stereo here, one side Teresa and the other side Marlena, who want to both chip in on what you said. So Teresa, you start and then I’ll give word to Marlena.

Tereza Horejsova:
Okay, thank you very much. No, I think what you raised is very, very important, Victor, because there is a problem, and I will comment a little bit on the donor experience, yes, because you’re very right, you know, that first of all, like for civil society organizations to be able to be involved in some of the policy fora, it has to be deliverable in a project, you know, because otherwise, yeah, no way how to make it work. But at the same time, among donors, there might not be kind of sufficient coordination of the priorities, you know, in this sense. So if I can also encourage donors, you know, who are interested in more meaningful participation of civil society in various policy fora, like you have probably identified already, it’s really important also to talk to other donors and to try to not overwhelm also civil society organizations with each donors coming with a specific narrow vision, you know, project perspective. It could be ultimately more impactful if this is done while I’m aware that it’s not an easy and intuitive task. Another point that I would like to elaborate on super quickly is what you, Marlena, actually raised, and that sometimes this bad habit of like tick-the-box approach of like consulting civil society. You also mentioned it actually in your intervention. It’s actually tough to consult civil society because civil society is not like, okay, I have consulted civil society. No, it’s so many various organizations with various agendas, various objectives. So it’s certainly not easy. But, on the other hand, we are sometimes sliding or experiencing this tick-the-box, like, you know, there is some pro-forma consultation that not necessarily leads to anything, but you can say that you’ve done it, yes? And yeah, let’s be honest, some policy fora are more prone to this than others. So yeah, I don’t have a solution, but just raising the voice of the civil society more and having donors that have realized and identified this issue is a strong start. Thanks.

Marlena Wisniak:
And following up on Victor’s intervention, I wanted to bring another perspective also from a Global North organization, that the cooperation and collaboration between Global North and Global South, or majority-based orgs, isn’t only, and definitely should not take this, like, white-saviorist approach where we uplift global majority-based orgs, but also there’s a lot to learn for Global North orgs. There’s so much resistance in many countries outside the U.S. and Europe with really creative advocacy strategies, and I and my team learn constantly, and I think the global coalitions are inherently better when there are diverse perspectives as well, and it can be pretty easy to become complacent or even lazy to some extent when you live in the U.S. or Western Europe. You forget many of the fundamental issues of organizing and influencing policy makers, so that’s something to remember. And another aspect is that a lot of the international governance mechanisms, even like UNESCO recommendations for example, rarely, I will not offend UN people here, but they don’t have as much influence in the EU and U.S., however they do have a disproportionate impact on national regulation in the global majority, so for example UNESCO guidelines for digital platforms are often portrayed to be a… Digital Services Act, or DSA-esque version of the EU. There’s also the recommendations on ethical AI. The EU has its AI Act, so there’s binding regulation in the EU. The US is, bipartisan politics aside, also has its own regulation. However, a lot of the recommendations from these entities, including then like UN, UNGA, or OHCHR, really can influence, and often are weaponized to enforce problematic regulation at the national level around the world. So that’s something to consider when we have these coalitions. And then fundamentally, I mentioned before that civil society is not a monolith. The global majority is definitely not one either. It’s multiple regions. The regions themselves are not homogenous, and even in terms of languages. One individual country, India, has, I don’t even know how many languages. 60? How many?

Audience:
27 official languages.

Marlena Wisniak:
27? OK, I thought it was more than that. Official, yes. So plus the dialects, right? So differences of languages, social norms, economies, between and within countries. So that’s something to consider. And I’ll give an example, which is something that we’ve been working on with Access Now, Electronic Frontier Foundation, and a lot of other organizations, including I think Article 19, on DSA Human Rights Alliance, which is involving global majority-based orgs and the implementation of the DSA, which is the leading EU regulation on digital platforms. And what we’re trying to do with Diplo and the orgs represented here is really to bring in learnings from that experience and others into international governance of the internet.

Pavlina Ittelson:
Thank you, Marlena. We have questions in the room, so. Jovan, please start, and then the lady in the blue dress. After you, you’re going, after you.

Jovan Kurbalija:
Okay, oh, you’re next, okay. Just one, building on what Victor said, what we can call power of triviality. We very often think, discuss big system, but sometimes to navigate the UN corridors in Geneva and New York, or knock on the office, or I remember still when we had the program for the colleagues, students from the developing countries, even where you leave the jacket during the winter, or what do you do? I mean, it sounds completely trivial, but it impacts the feeling of being part of the process. And I will bring you a few examples that we have been recently focusing on. PDF is dominant format of the UN, European Union, and other actors. With PDF, you cannot do a lot if you want to interact, if you want to display it nicely. We took the European AI Act, and we were in Brussels discussing with different negotiators. If you try to consult European EU Act draft, which is now negotiated between commission, council, and parliament, you simply, you’re lost. On the very trivial level, not on knowledge of AI, but on displaying three columns, four columns, one PDF, 300 pages. What we did, we basically organized it in the simple way that you at least can read it, and then have, obviously, expertise to analyze and to have a context. The similar applies for the UN Secretary General policy briefs. If you read these policy briefs, and you can consult on Digital Watch, they’re done by designer who wanted to have nice printed publication. And this is the mindset, you can see the mindset. And said, who is going to, maybe a meeting like IGF, they will distribute, have a nice publication, as we are doing, all of us, you know. But in reality, people will consult it online. or their mobiles. Therefore, this power of triviality, which ultimately shapes people participation, is a big thing. And we plan during this project to focus on these things. And one of the elements is reporting from the IGF. You have here the paper. This session will be reported by a mix of AI and experts. You have yesterday’s sessions with main points. And why is this important for the civil society? It is important because you simply have limited resources to navigate such a flow of information and sessions in Kyoto, but also last 18 or 19 years. And frankly, some issues, digital divide, were basically rehearsed every year. And you have the more or less similar narratives. Therefore, this is, again, one small thing. If small NGO, we had discussion two days ago from Brazil, with two or three persons, wants to know exactly what is about child protection discussed during the IGF, not necessarily AI, not necessarily other issues, they should have the access. And it’s not as easy as it looks. Therefore, we are trying with this reporting, you can consult it, to use a mix of AI, Diplo AI system, and our experts, mainly to bring to the help with the small developing countries, where our main, sort of, to bring them substantively in discussion, but also small civil society and marginalized groups. Those are a few points which I invite all of us here to reflect on this power of triviality. And there are tools. And also to create a space, I call it, AI hallucination of human hallucination, to think, I won’t go too far with the way how to hallucinate, but to create a space for a bit of alternative thinking. My criticism of. all actors, and my fault, is that we sometimes become domesticated in global fora. We basically start integrating thinking of the IGF governments, which is very human. You interact and you basically develop the thinking. And the time which we are facing, you open Al Jazeera, CNN, News, you see the world is not in the best shape. And we need alternative thinkings. And we need the creative inputs. And I think this is the role for civil society and academia, where they are not contributing. I’m sorry to say, but we are not contributing to this. Therefore, those are just a few points which influence Diplo’s approach. And we hope that, together with the partners in this project, we’ll try to do this power of triviality, making things accessible to people, and also trying to create a space where we can think a bit out of the box for the benefit of the governments, public, and the global public good.Well, should I ask some question?

Pavlina Ittelson:
No. It’s OK. We’ll have a lady in a blue dress, and then the gentleman in a blue shirt, and my wish, and four questions coming up.

Audience:
Thank you for interesting thoughts from different perspective and different experiences. And I’d like to ask you about a question regarding capacity building. Since I am Emi from Japan, working at a private company giving OSINT-related service, giving more reliable information to mainly researchers, I think capacity building cannot be made one day. For example, since I’m familiar with climate change issues process. They were around the Paris Agreement, and there were a citizen assembly, citizen congress, with randomly sampled citizens without any expertise discussing the very important issue about climate change. And that didn’t really solve all the problems, but it moved forward somehow. So how would you think about such process in regarding this issue, and also the possibility or threats or limitation of such, what do you say, citizen participation in very local level? And my personal hope is to have such congress in different areas, different parts of the world, and then they can have capacity building and also participate in global level. So I’d like to know the opinions, thank you.

Pavlina Ittelson:
Any takers? It should be me on capacity building, but Peter, go first.

Peter Marien:
I’ll try, but I will pass to you afterwards for sure. Well, first of all, thanks a lot. Shall I take, I’ll respond to that and maybe give some feedback on other points. So first of all, I don’t know if this is planned or has been done, I do think it’s interesting. You make me think of, I think one of the speakers in one of the other panels recently said, I forgot her name, the Nobel laureate journalist, who said that she hadn’t done this, but she had interviewed, if I understand correctly, quite randomly, a few hundred or a few thousand people. And she said, if I correctly remember the topic, randomly, like you say, just citizens, which are not experts, right? And also have their impact. I mean, I think that’s interesting, okay, as part of the consultation process. I’m not an expert in all these consultation processes, but I think there can be space for that. I’m sure academics will have more to say about that. I do know that in the EU processes, when it comes to legislation, there are consultation processes, quite large ones, but maybe they’re not so intimate. Maybe they’re like, look, this is online and you can share your opinions. I know that on some of these consultations, we do have thousands, if not tens of thousands of inputs from society. Some of them are even indeed analyzed partially by AI because it’s not possible to read every one of them. So I think there’s some interest for that. Now, I think specifically it’s more to our partners in the CSO organizations who might give a bit more, yeah.

Pavlina Ittelson:
Yes, so it makes me think of two things. One is the capacity building. What we’re talking about is institutional capacity building. So when we talk about increasing the engagement, it is in a way we’re training people, but we are increasing the institutional capacity of civil society organization. Person may come, may go, but you need to have within the organization, the knowledge, the expertise, whether technical or policy to engage. And we have ways to do that. So I’m not gonna go into that because we’re running out of time in general. But going back on consulting regular people in the process and also Jovan’s point on basic triviality of things. I remember one example on accessibility where a blind person was able to access government’s website going through accessible government website, going to get their ID. They went through everything and they got to the end and we’re supposed to tick the boxes with the bicycles to Prove that they’re not a robot and the person is like it’s a trivial thing which can be replaced by a small technical solution But because that trivial thing is not in a forum and it’s not addressed it causes issues of accessibility on a wider scale to certain groups of people so it is two way, two way thing. Now the forum was speaking about the internet governance for a standardization for a way less open to direct engagement with the public or with individuals in the environmental sector and also in youth and Rights of youth to to their future that it that is way more open to do this and also there was a court case in Germany which established the right of youth to their future in relation to the environmental rights, so the movement there is a Little bit different. I think the burning planet might have something to do with this a little more urgency But we do have three more questions in the room and I don’t want to forget about them. So gentlemen in blue shirt Okay, and then over there

Audience:
Hi, Arjun Adrian D’Souza, Legal Counsel at Software Freedom Law Center. I’m here with my colleague Just wondering So we have spoken about capacity building as well, and we are a civil society based out of New Delhi, India So India presently is at an inflection point a very opportune time also We’ve had a personal data protection act which has been enacted. It is yet to come into force And a digital India act which will regulate platforms. So the stakes are high we are Going to be dealing with 1.4 billion people who will be regulated through these provisions Just as a civil society. We’ve seen the pushback that is there in terms of engagement. So my question is is two-fold. Firstly, in terms of putting forth a consolidated front on behalf of civil societies and think tanks. Any strategies, any advice on that? And secondly, to the gentleman’s question also, any alternative points of engaging with parliamentarians and government? For the simple fact that this may be a jurisdiction-specific issue to India, but the consultation process prior to the introduction of a bill is much more fruitful than the one that usually, which comes for after that. So just wanted your views on that.

Pavlina Ittelson:
Anyone? Victor go and then I’ll go.

Viktor Kapiyo:
Yes, that’s a very interesting. We face similar situations in Kenya with consultation. I think some of the strategies that have worked for us, one is to build a relationship with the legislators. I think it’s important to have that relationship with them before you submit the view so that they know that there is this body and this is what you do. I think the second is usually to demonstrate the expertise that is being brought to the table. I know it’s usually a higher standard for civil society organizations being placed, but I think demonstrating that you actually have value to add and you bring that value will add some weight to the comments that you present. And I think the third is to work collaboratively with other civil society organizations. I think sometimes it helps when you have 20 sign-ons as opposed to one, and so identifying the common issues across the various groups is essential in as much as there could be.Variances in terms of positions at least, there should be some key things that everybody wants that or feel it is important to articulate and coless around that. Lastly to say be ready for the marathon, so you need to go to the gym and workout, so you to have your arguments and contra arguments prime, you need to be ready for the views of other stakeholders, Not everybody will agree with your Not everybody will agree with your positions just because your civil society doesn’t mean you’re always right or that your position will be taken. So it’s important to build the watches in terms of understanding the arguments or potential arguments and other scenarios that the other stakeholders could bring forth. And the other push and pull factors and drivers, what is driving this and who is driving this and understanding that local context which you probably do. And then it helps that when you get to the floor and the question is asked, you understand and you’re able to anticipate and then have a very good response to any scenario because at least you have prepared for it as opposed to just walking in and thinking that it’s gonna be a smooth ride. Most of the time, it’s not or at least from our experience. Thank you.

Pavlina Ittelson:
Peter, did you wanna reflect on that?

Peter Marien:
Yes. Well, I want to be, I should say, a bit sensitive in what I say because of course your question is to CSOs and I’m speaking in the name of commission but maybe just to say that from experience in some other countries, we have had governments in other countries including Kenya, for example, knocking on our door to discuss exactly this kind of topic and in an open spirit where we have then engaged some of our experts. for example, our Director General Justice, to go into dialogue, and they even came on a visit. And so I think if there’s a way of creating this trusted relationship and this willingness for open dialogue, then this is of course difficult for you to create this, but I mean, if that can somehow be found, the right people or the right entry, then maybe that’s helpful. One second. But here again, I want to be really careful that I don’t give any wrong message, but you might also indeed knock on the door of other organizations that you think might have the entry. And it can be a national organization or an international organization that’s locally present with an office and that might have better channels of access. And then through that way, maybe open the dialogue.

Pavlina Ittelson:
Yes, we have the last question. And while you’re going to the microphone, I’m just going to quickly reflect on the calls for common inputs by CSOs in different processes. And I would like to maybe caution and say that there needs to be a balance between one input by several organizations and basically leaving the variety of opinions and variety of perspectives behind, because you are eliminating certain aspects of that in the process. So that is something as a CSO you need to be aware of. What is it that you are not presenting in a way when you are trying to make a certain different point stronger? So that is a balance exercise, at least in our opinion. Please, go ahead.

Audience:
Hi, so this is less of a question and more of a comment that speaks to kind of everything we’ve discussed here. But my name is Ariel Maguid. I work with Internews. Congratulations on your proposal. We actually also just won a very similar proposal from the Department of State working in South Asia. So I’ll be implementing that as well and would love to kind of collaborate with you guys working on bringing our civil society organizations together but also to speak to your topic. One of our parts of our activities are creating an online space where all of our civil society and human rights groups can kind of come together and work together around what they’re learning. And at the final, we have bringing them to sessions such as IGF and really being able to advocate. I had one other thought as well. But yeah, so speaking to how donors are not collaborating with each other. And so you have the EU, we have State Department, very similar projects. And obviously the work needs to be done everywhere but would be great to kind of bring together in our news into the reflection as well.

Pavlina Ittelson:
Absolutely, thank you. And I would love to talk to you in the corridor afterwards. It is our intention within this project to actually harness what is already in place and what is going on to create a wider network of CSOs and kind of build up on each other and cross pollinate whatever is in the space currently and whatever is going to be in the space in the future because we do believe beyond the project itself, that that is something we will be dedicating more and more time in the future as well. So, and with this, you can find us all individually if you wanna talk more, but I believe we can close this session if that’s okay with everybody or is there more questions?

Audience:
Okay, sure, go ahead. I just wanted to share something. Just to build on what Victor and Marlena said, I come from SFLC India and like you mentioned that we have a lot of languages. I think the one problem that we face is with my legal team and my technology team, they come up with these brilliant blog posts or write-ups to share information or create awareness, but sometimes the language is so complex that the people they’re trying to make aware find it difficult. So one of the benefits of local partnerships is that when I meet other CSOs for these kind of events, I realize that all of us are facing the same issue, especially from the comms and PR team. So yeah, I just wanted to say that thank you for bringing that up, because these kind of local partnerships help bring issues to light that sometimes I think go unnoticed. So just thank you.

Pavlina Ittelson:
No, thank you. Thank you for that point, and it’s another one. OK. Yes, yes, we do. We do have, I believe, still eight minutes to go, so we can go ahead.

Audience:
Perfect. I’m Camila. I’m from EDAC in Brazil. And Pavlina, you were mentioning that beyond the substantial issues on how to participate on this space, we have formal rules to interact on this space. And this is so hard. We don’t have a book on that. If we are in the UN, you have a way to interact. If you are in other spaces, you have other ways. So how can we share more information about that? You were mentioning, for example, that we can make a workshop on how to make a briefing on all these spaces. But how can we share this kind of knowledge? Thank you.

Pavlina Ittelson:
No, and you’re absolutely right about each of the processes, each of the open-ended working groups having a different way of engaging civil society. We have seen it, for example, on open-ended working group on cyber. From the first session to the second one, the temperature in the room changed, and all of a sudden civil society is not automatically. be included in the conversation. So it is a challenge from the procedural side to see where are the ways to get engaged, and that is part of what we will be doing in different fora. For example, within the standardization processes, and, Marlena, I know you do a lot of work on there, so if you feel later on chiming in on that, there are, as I mentioned, certain standardization bodies which are more open to civil society participation beyond the technical community. They do have principles in place, human rights principles and human-centric standard making processes in place. There are other ones which are not open, and there are certain ones which are just multilateral, and that is the world we live in. It would be very ideal to have one way within, for example, UN or any other body to have this is how you go ahead and do it and put your input. Another part to that is once you overcome that challenge of how you are going to participate, how are you going to put across your points, is whether those points are, as Teresa mentioned, are a tick on the box or they are taken seriously, and that is a question of building partnerships. At least in my opinion, it is a question of working in that space for a long time, knowing who does what, knowing the trends, knowing what’s coming up, where is the place to be, and which forum you need to be strategically engaged to achieve your civil society goals. Not an easy one, definitely not. Marlena, you did some work on that, so maybe some lessons learned?

Marlena Wisniak:
I was going to bring up your last point that unfortunately there are a lot of informal networks that we see. You know many of I mean many folks in this room. I know right so that’s both a good thing and a bad thing same people on panels Generally bad that I’d say one good thing is that we do have tight networks between civil society so able to You know text each other And and have monthly meetups like they’re different. I think there’s also a lot of Coalitions, so it’s hard to know Who does what and like how to be coordinated and aligned? That’s always? Was a struggle I mean Colleague from internews mentioned that there’s a similar Initiative and we work very closely with internews or even part of the global internet freedom. I think And we didn’t even know about this right so So I think there’s definitely It’s incredibly difficult And also unfortunate because it can become a cool kids club of if you’re in it Then you have and generally to be to have these connections You already are privileged and well networked, and then it gives you an even bigger boost to actually be to have the platform right so so Yeah, I think it’s our responsibility of the orgs that are in this room to actually bring in new voices Something that I’ve been experimenting with is if I’m invited to a panel I Either decline and give my spot to someone else or I say yes under the condition that the other person comes I Didn’t do that at this panel apologies Like apologies are you’re welcome But yeah, so different. I think yeah bringing in more people But unfortunately it is informal and then like you mentioned I forgot who mentioned some of the orgs are better at inclusion than others The UN is is really real difficult. I do a lot of UN advocacy and I Don’t really understand it We have a UN advocacy officer at our organization and we’re very lucky to have So he knows a lot on the procedural side. Then he doesn’t have the substantive expertise as much. And my team is the opposite. We know AI and human rights, but don’t really know where to intervene or what, unless it’s the very specific, like everybody knows IGF, right? But the smaller ones, it’s hard to know. So, sorry, it’s not a satisfactory answer. Basically, make friends, be social, and share your contacts and your privilege.

Tereza Horejsova:
Maybe if I can quickly just a story from earlier this morning, yes, on the kind of encouraging everybody to be more experimenting in panel compositions. I was on a panel and probably I had some calming effect on two other ladies speaking in the same panel. And they were like, this is my first time doing a panel. This is going to be a disaster. And I told them, no, first of all, it’s not going to be a disaster, but you belong on this panel much more than I do, for instance. And by the way, they did great, yes. So, don’t be afraid to experiment when you put panels together, because, yeah, it might seem easier to go with people you know or you have worked before, but actually it might get much fresher look if you get new people.

Pavlina Ittelson:
Thank you. And with this, there were three people on this panel I didn’t know so far, so only Teresa. So with this, I will invite you to get to know each other and we’ll close up. Thank you very much, everybody. Thank you.

Audience

Speech speed

177 words per minute

Speech length

1349 words

Speech time

458 secs

Jovan Kurbalija

Speech speed

175 words per minute

Speech length

861 words

Speech time

296 secs

Marlena Wisniak

Speech speed

166 words per minute

Speech length

2422 words

Speech time

877 secs

Pavlina Ittelson

Speech speed

160 words per minute

Speech length

2986 words

Speech time

1118 secs

Peter Marien

Speech speed

182 words per minute

Speech length

2373 words

Speech time

784 secs

Tereza Horejsova

Speech speed

156 words per minute

Speech length

1318 words

Speech time

508 secs

Viktor Kapiyo

Speech speed

187 words per minute

Speech length

2540 words

Speech time

813 secs

Impact of the Rise of Generative AI on Developing Countries | IGF 2023 Town Hall #29

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

The discussion revolves around the impact of technology, particularly AI, on freedom and democracy. There is a neutral sentiment overall, with a focus on whether AI will lead to more freedom and democracy or have the opposite effect. One argument is that authoritarian rulers might use technology to establish more control rather than promote freedom. This raises concerns about the potential misuse of AI in authoritarian regimes.

Moving on to the potential application of generative AI in developing countries, the sentiment remains neutral. It is recognised that generative AI has the potential to benefit these nations, although specific evidence or supporting facts are not provided. Nonetheless, there is an interest in exploring the application of generative AI in developing countries, highlighting a desire to leverage technology for their development.

Another aspect discussed is the need to redefine the term ‘developing countries.’ This argument emphasises the existence of highly functional digital societies in Estonia, Finland, Norway, and the Netherlands. These societies serve as examples of how advancements in technology can lead to progress and development. The recommendation is to learn from these societies and implement their successes in other parts of the world. Young people are seen as crucial in this process, as they can observe and learn from these digital societies, then bring back knowledge to design and restructure their own societies. The high youth population in regions like India, the continent, and the Middle East and North Africa (MENA) further amplifies the importance of involving the young generation in shaping the future.

The impact of AI on human creativity and its contribution to human resources is considered in a neutral sentiment, without specific arguments or evidence provided. The broader question of how AI will affect human creativity and its implications for the workforce remains unanswered.

Concerns are raised about the use of native AI in developing countries with limited resources and infrastructure. The sentiment is concerned, with a focus on the potential widening of the technology gap in these nations. The argument questions whether native AI will exacerbate inequalities and further marginalise resource-poor countries.

In conclusion, this analysis highlights various viewpoints on the impact of technology, specifically AI, on freedom, democracy, development, and creativity. While concerns are raised regarding the potential misuse of AI and widening technology gaps, there is still potential for positive outcomes through the application of AI in developing countries. The role of young people and learning from successful digital societies are also emphasised in shaping a better future for societies worldwide.

Atsushi Yamanaka

In a recent analysis, different viewpoints on the topic of Artificial Intelligence (AI) were discussed. Atsushi Yamanaka, a senior advisor on digital transformations at JICA, shared his belief that AI has both significant potential and notable threats. With 28 years of experience in the field, Yamanaka advises JICA on incorporating technology elements into various projects and supporting digital transformation initiatives.

One area where Yamanaka sees promising potential is the use of generative AI models for local African languages. He highlighted how this application could play a crucial role in promoting digital inclusion in developing economies. By developing AI models for African languages, barriers in digital literacy could be overcome, enabling more people to benefit from technology. Yamanaka’s colleague, who studied AI in Japan and now works at Princeton, is actively working on generative AI models for African languages.

However, the analysis also acknowledged the potential risks associated with the rise of generative AI. It highlighted the concern that this technology could lead to an increase in misinformation, making it increasingly difficult to distinguish between real and fake information. As a result, trust in digital technology may be undermined, presenting challenges for individuals and societies. These issues underscore the importance of responsible development and deployment of AI technologies.

Another argument made in the analysis was the need to establish a consensual framework for AI regulations. The participation of emerging countries was emphasized, as developing nations should play an active role in global discussions on AI regulations. The aim is to avoid creating multiple fragmented models or regulations and instead work towards a unified approach that addresses the concerns and interests of all stakeholders.

Concerns were raised about the potential impact of AI technology on labor. The recent work by the International Labor Organization indicates potential job losses resulting from the introduction of AI. In the United States, for example, the Screenwriters Guild has expressed concerns about AI replacing their jobs, sparking fears of a potential backlash reminiscent of the Luddite Movement of the 19th century. These concerns emphasize the need to consider the potential negative consequences on employment and to ensure that appropriate measures are taken to mitigate any adverse impacts.

Privacy invasion was another aspect discussed in the analysis. The Chinese AI-based scoring system was highlighted as an example of technology that invades privacy. The system reportedly monitors and scores every aspect of citizens’ lives. This raises concerns among privacy advocates and highlights the ethical considerations that need to be taken into account as AI technologies continue to evolve and become more integrated into daily life.

The analysis also touched upon the digital gap between developed and emerging economies. The argument was made that AI technology, particularly new technologies, could actually help reduce this gap. Unlike traditional barriers to communication, there are no interaction barriers in digital technologies, making their adoption in developing countries more feasible. Furthermore, emerging economies might even contribute more to the growth and development of these technologies.

Interestingly, the analysis noted that developing countries have the potential to be at the forefront of innovation in AI technologies. It emphasized that a significant amount of innovation is already emerging from these regions and suggested that they might contribute more to innovation in digital technologies than Western countries. This insight challenges the notion that developing countries will necessarily lag behind in the adoption and advancement of AI technologies.

In conclusion, the analysis delved into various aspects of AI and provided different perspectives on its potential, risks, and implications. It emphasized the need for responsible development, consensual regulatory frameworks, and the active participation of emerging countries in shaping AI technologies. While acknowledging the threats and challenges associated with AI, the analysis also highlighted the opportunities for promoting digital inclusion and reducing the digital gap. Ultimately, it asserted that each society should have the agency to manage its own governance in line with its specific needs and circumstances.

Robert Ford Nkusi

Robert Ford Nkusi is a prominent figure in the field of software testing qualifications in Rwanda. He is currently leading a software testing qualifications team and has made significant contributions to the Rwanda Software Testing Qualifications Board. This demonstrates his expertise and leadership in the industry.

Furthermore, Robert has been involved in the design of the implementation plan for the Child Online Protection Policy in Rwanda. Working under the United Nations, he played a crucial role in developing a comprehensive plan to safeguard children online. This highlights his commitment to promoting child safety and creating a secure digital environment for young users.

In addition to his work in software testing and child protection, Robert has also contributed to the regional framework for the one network area in the East African region. By circumventing data roaming costs, this initiative has greatly benefited individuals and businesses in the area. Robert was actively involved in setting up the one network area and played a vital role in the successful proof of concept testing.

Notably, Robert has led efforts in managing cross-border mobile financial services, which have become increasingly popular in the East African community. By facilitating convenient and secure transactions across borders, these services have contributed to economic growth and poverty reduction. Robert’s involvement in this area demonstrates his expertise in the intersection of finance and technology.

Currently, Robert is engaged with JICA (Japan International Cooperation Agency) in implementing an ICT industry promotion project in Uganda. The four-year project aims to build capacity in the ICT industry and foster innovation and infrastructure development. By leveraging international partnerships, Robert is actively working towards advancing the ICT sector in Uganda.

The potential of generative AI in predicting and mitigating harmful online content was discussed. Robert highlighted how AI can aid in keeping children safe online through the issue of Child Online Protection. However, caution is required with the implementation of generative AI, as its accurate responses can make users less questioning, potentially leading to unforeseen negatives. It is crucial to strike a balance between the benefits and risks of this technology.

Moreover, African countries have shown exemplary progress in implementing and regulating AI technologies, challenging the traditional divide between developed and developing economies. By successfully adopting and regulating AI, these countries have demonstrated their capability in technological advancements.

The debate on long-term leadership and its relationship with authoritarianism was also explored. The definition of democracy and authoritarianism can differ based on the context, and it was argued that long-term leadership is not necessarily synonymous with authoritarianism. This raises important questions about the nature of political leadership and the impact it has on governance.

Furthermore, the potential of generative AI to transform politics was highlighted. The use of AI in predicting political outcomes and shaping political discourse has the ability to revolutionize the political landscape. However, it is important to critically analyze the impact of AI on democratic processes and ensure that it is used responsibly and ethically.

An interesting observation arising from the analysis is the need for collective efforts and shared learning in policy-making for AI technologies. Developing economies, like those in Africa, have successfully implemented technological solutions such as mobile money. It is suggested that developed countries, such as those in the G7, can learn from these successes and collaborate in policy-making to ensure the responsible use of AI technologies.

In conclusion, Robert Ford Nkusi is a leader in software testing qualifications, with notable contributions in the fields of child protection, regional frameworks, cross-border financial services, and ICT industry promotion. The potential and challenges of generative AI were explored, along with the successful implementation of AI technologies in African countries. The complex relationship between long-term leadership and authoritarianism was discussed, and the transformative potential of AI in politics was examined. Overall, these insights shed light on the intersection of technology, governance, and societal progress.

Sarayu Natarajan

The discussion examines the implications of generative AI across various domains, acknowledging its advantages and disadvantages, especially in relation to technology and society. It highlights the study of algorithmic and platform-mediated work, digitization, and digital infrastructure in the context of generative AI, with a focus on effective government-citizen communication.

Data governance is identified as a critical area of focus within generative AI, necessitating exploration of sustainability financing, governance, and digital system replication worldwide. The discussion also raises concerns about the impact of generative AI on the labor market, particularly in developing regions, where workers involved in data annotation and labeling are often overlooked in broader AI conversations.

Furthermore, limitations and biases in data structures restrain the full potential of generative AI, particularly in addressing gender and race representation in the developing world. The potential for generative AI to propagate disinformation and misinformation is also highlighted as a significant concern.

To address these issues and ensure meaningful digital lives and futures, the discussion emphasizes the need for governance and regulation to be considered during AI deployment. Inclusive frameworks of governance and regulation involving global participants are deemed essential to manage the impact of AI across all regions and promote equitable outcomes.

Additionally, the role of generative AI in the creative domain is explored, with the recognition that it can assist in certain types of literature creation. However, it is underlined that the education system and society should continue fostering creativity to avoid over-reliance on AI.

Overall, the analysis delves into the multifaceted implications of generative AI, highlighting the importance of governance, fairness, and ethics. The discussion emphasizes the need for thoughtful and inclusive approaches to harness the potential benefits of generative AI while mitigating its challenges.

Tomoyuki Naito

During the IGF 2023 sessions, Prime Minister Kishida emphasized the importance of generative AI and knowledge sharing for all participants. This highlights the recognition that generative AI has the potential to greatly impact various sectors, and therefore, it should be accessible to everyone.

The discussions at the IGF have brought international experts together, who have acknowledged the threats posed by generative AI. This recognition has sparked thoughts on how to counter these potential threats. The fact that these concerns are widely recognized at an international level shows the serious consideration being given to generative AI.

Tomoyuki Naito, the moderator of the session, specifically emphasized the need to explore the opportunities and threats of generative AI in the context of global south economies. This highlights the importance of understanding how generative AI can impact the economic growth and development of these regions. By recognizing the specific challenges faced by global south economies, tailored strategies can be developed to leverage generative AI for their benefit.

The panel discussion aimed to gather expert opinions on the threats and opportunities presented by generative AI. The first half of the discussion was dedicated to capturing the perspectives and insights of the panelists to ensure a comprehensive understanding of the various viewpoints. The second half of the discussion, on the other hand, was planned to encourage public opinions and comments, fostering a more inclusive and democratic approach to addressing these issues.

Naito’s belief that experts are actively working on addressing the potential threats to privacy and security posed by generative AI is significant. It indicates that these concerns are not being overlooked, and there is a collective effort to develop strategies to mitigate these risks. The fact that many sessions at the IGF have already discussed the potential threats of generative AI further strengthens the notion that this is a widely recognized issue. Moreover, the concerns shared by international experts highlight the seriousness with which these potential threats are being taken.

One noteworthy observation from the discussions is the recognition that countries can proactively utilize new technologies, such as generative AI, for their own economic and social development. This signifies a shift in mindset, where new technologies are seen as opportunities rather than threats. By leveraging these technologies effectively, countries can drive economic growth and social progress.

In conclusion, the IGF 2023 sessions shed light on the importance of generative AI and knowledge sharing for everyone, specifically in the context of global south economies. The discussions recognized the potential threats posed by generative AI and emphasized the need for expert opinions and public engagement. However, there is a collective effort to address these threats and proactively utilize new technologies for economic and social development. Overall, the sessions provided valuable insights and highlighted the significance of inclusive and informed decision-making in the field of generative AI.

Safa Khalid Salih Ali

Generative AI has emerged as a powerful tool with the potential to revolutionize various sectors. The analysis reveals several key benefits and applications of generative AI. Firstly, it can significantly reduce the time spent on data analysis by automating routine tasks. This allows businesses and organizations to derive insights more quickly and efficiently. By eliminating the need for manual data analysis, generative AI enables professionals to focus on improving the quality of their work, ultimately enhancing productivity.

Furthermore, generative AI can play a crucial role in predicting and managing economic crises. By utilising generative AI, experts can develop prediction models that help identify potential crises and take preventive measures accordingly. This is particularly relevant to sectors such as finance and banking, where generative AI can aid in risk assessment and fraud detection. By analysing historical data, generative AI can predict consumer behaviour and help financial institutions make informed decisions to mitigate risks and enhance security.

In the realm of fintech, generative AI has the potential to enhance customer experiences. By providing immediate solutions in emergency situations, generative AI can improve customer satisfaction levels. Additionally, generative AI can democratise financial services by allowing all participants to easily access the services they need, such as virtualized access through chatbots. This fosters financial inclusion and reduces inequalities by ensuring that all citizens have equal opportunities in accessing financial services.

Another significant application of generative AI lies in policy simulation. By simulating the effects of different policies, generative AI can assist policymakers in addressing weaknesses and making informed decisions. Through simulation, potential issues can be identified and resolved before they negatively impact society. For example, the analysis highlights a situation in Sudan where a war could have been preempted if generative AI had been used to simulate the consequences of certain policies.

While the benefits of generative AI are clear, it is crucial to address certain challenges. Developing countries face significant data challenges and lack the necessary knowledge and infrastructure to fully harness the potential of AI. Therefore, it is essential to establish systems that support AI development in these countries. By doing so, they can benefit from the transformative power of generative AI and drive economic growth.

In conclusion, generative AI has immense potential to revolutionize various sectors and bring about significant benefits. Its ability to streamline data analysis, aid in predicting and managing economic crises, enhance customer experiences, simulate policies, and foster financial inclusion makes it a valuable tool for the future. However, ensuring that developing countries have the necessary capacity and resources to tap into this potential is crucial. Generative AI can truly transform industries and bring about positive socio-economic changes when effectively implemented.

Session transcript

Tomoyuki Naito:
Ladies and gentlemen, good evening. I know this is today’s last session, that’s why not over 100 people coming, but actually I hope you can enjoy, ladies and gentlemen. Welcome to the session, the town hall session, the title, the impact of the rise of generative AI on developing countries, developing economies, opportunities and threats. I’m the organizer and today’s on-site moderator. My name is Tomoyuki Naito, Vice President and Professor at the Graduate School of Information Technology, Kobe Institute of Computing. Very nice to meet you, everyone. Let me just quickly introduce all of the other panels from myself. On my just left-hand side, Ms. Safa Khalid Sariyari, she’s the Senior Business Intelligence Engineer and the Software Engineer, Central Bank of Sudan, the Republic of Sudan, Safa, welcome. And next to Ms. Safa, Mr. Robert Fornon-Cussey, he’s the founding partner and CEO of Orasoft Ltd., the Republic of Rwanda, Mr. Fornon-Cussey, thank you for coming. Actually today we expected to have Ms. Kay McCormack, the Senior Director of Policy, the Digital Impact Alliance, but due to her immediate engagement, she couldn’t make it. So today we have the privilege to have Dr. Sarayu Narajan, she’s the founder of the Apti Institute India, doctor, thanks for coming. And last but not least, on my left-hand side, Mr. Atsushi Yamanaka, Senior Advisor for Transformation Japan International Cooperation Agency Japan, Mr. Yamanaka, thank you for joining us. I’m also looking at Zoom online, we have quite a number of participants online, over 20 people are joining, thank you for coming, thank you for joining today. So as scheduled, and as title, we’d like to begin the title, The Impact of the Rise of Generative AI on Developing Economies, Opportunities and Threats. Let me just begin with a very brief introduction, a very brief explanation of the background of this session. Actually many of you, attendance here and attendance online, as well as the panelists, you have already heard many discussions in the past two days, including today’s discussion throughout the IGF 2023 in Kyoto. Actually the internet for all, or internet we want, internet for everyone, internet for good. This is the kind of the common keyword for today’s, this year’s IGF. On top of that, even yesterday’s official opening ceremony, Prime Minister Kishida of Japanese government, he mentioned about the importance of generative AI, and the importance of the guideline, importance of the knowledge sharing, information sharing, and the collective effort to make AI for good, for everyone. Standing on that kind of background, actually many AI-related sessions have been done, and this discussion, or that kind of the session, is still ongoing until the end of IGF 2023 in Kyoto. Then this session is specifically focusing on generative AI’s impact, specifically for the how it will be impact on the global south. Actually I don’t want to divide north or south, but actually this is very, very important, because there are two aspects, opportunity and threats. On the threat side, many sessions have been discussing about the potential threat on privacy, human rights, or information security. Of course, all those things are very scary. The generative AI correct the information from everywhere, then synthesize all the data into one, as if we don’t know how it is synthesized. But many experts, international experts, as you already heard throughout the session, many international experts already aware about those kind of threats, and they have already shared their knowledge, their worries to everyone, including us, so that the process is already ongoing. I personally have felt that that aspect, threat aspect, is not necessarily we have to fear, we have to worry so much, because many experts or many people have already aware about the other worries, so that we can protect somehow by using the collective wisdom, collective knowledge, collective effort. On the other hand, opportunities. Opportunities-wise, I personally don’t see many discussions are happening in this IGF, but we also know that many opportunities we have seen. So I will invite these knowledge panelists, and I would like to invite all of your opinions, both from on-site and online. Then I would like to allocate first 20 minutes or 25 minutes to hear the opinion from the panelists here, and another half of this session, the last 20 minutes or 25 minutes, I really would like to have your guys’ opinion or comments about opportunities. And of course, threat-wise, you can share your opinion. That would be really, really welcome. So my first question to all the panelists, question number one, is actually go to the other, very fundamental one. Actually, in order for all of you to know about the other experts’ background, knowledge, wisdom, I won’t invite all panelists to answer to us. Question number one. What has been your background in terms of the other related works in ICT sector, in the light of using power of technology to make the better world? So let me just invite one by one, starting from the other, Ms. Fakari.

Safa Khalid Salih Ali:
Thank you, Sensei. I am truly honored to be here today. Thank you, all of us, to attend this session. I am Safa Khalid from Sudan. I start my journey in ICT since 2010, while I’m working in Central Bank of Sudan. My specialization is business intelligence and software engineering. While I’m working in Central Bank, we pass through data analysis for multiple systems that we can see now, generative AI can help us to reduce a lot of time of working, and it help us to make many prediction model for crisis, sometimes happen in any country. And also, when you’re using generative AI, it is a benefit for all Central Bank, which can impact economy of the country, like maintaining financial stability of country, or promoting economic development of the country, in Sudan or in another country. Also now, when you’re thinking about FinTech technology, generative AI can help us in customer experience and can build us many customer experience system, which can address an emergent situation for customer, instead of waiting for next day to go to the bank and find solution for your actions, and also enhance any part of analysis tool and prediction for any module. This really happen, for example, in the last year, before the world start in Sudan, when we have auction, you can use generative AI to build simulation for the policy, instead of just embracing and using the policy directly in the economy. This simulation, if you implement it, it can help us to find the weaknesses of the policy, instead of using directly to your economy. Thank you.

Tomoyuki Naito:
All right, thanks very much. Okay, Mr. Robert Ford, please go ahead.

Robert Ford Nkusi:
Thank you. My name is Robert Ford. I’m currently leading a software testing qualifications team in Rwanda, under the Rwanda Software Testing Qualifications Board, one of the members of the International Software Testing Qualifications Board. But before that, I was supporting the government of Rwanda under the United Nations in designing the implementation plan for the Child Online Protection Policy, which Rwanda designed a couple of years ago. And it’s now a framework that is helping the country in setting proper guidelines for safety for child online engagement. Before that, I participated a lot in the regional framework for the one network area for the East African region. For some of you who may not know the EAC framework, the East African community is composed of six countries, Uganda, Kenya, Rwanda, Tanzania, South Sudan, and Burundi. And at some point in the framework, they wanted to create one network area, having inbound, inbound traffic, data traffic, as one network traffic, so that they can circumvent the costs of data roaming for people in the region. So in a couple of years, we were able to set the one network area, the first actually proof of concept ever tested around the world. And in that, I was specifically more on the component of cross-border mobile financial services, helping citizens in the countries in that region to transfer and transact financially between nations using what is popularly known as mobile money. I have participated at AFRINIC, the Internet Numbers Registry for Africa. But after that, I’m now currently engaged with JICA, the Japan government, in implementing a project in Uganda. It’s an ICT industry promotion project. It’s a four-year project with four major outputs that are supposed to help the country build capacity, strong capacity in the ICT industry.

Tomoyuki Naito:
All right, thanks very much, Mr. Ford. Okay, Dr. Salayu Natarajan, could you go ahead, please?

Sarayu Natarajan:
Thank you very much. Thank you for enabling me to be a part of this conversation. It’s a very important one, both in understanding the disadvantages and advantages and some of the risks of generative AI. Thanks also to the audience. I know it’s 6 p.m. here, and it must be a range of different times across the world, so thank you for joining online as well. My name is Salayu Natarajan. I am the co-founder of Apti Institute, which is an institution that works on questions at the intersection of technology and society. We have three big areas of work. We focus on algorithmic and platform-mediated work. We look quite extensively at digitization and digital infrastructure and the ways in which governments and states can reach their citizens, particularly on questions of sustainability financing, governance, replication of digital systems across the world, and also extensively on questions of data and data governance. And AI, and particularly generative AI as a theme, is one that cuts across all of these areas, and we’ve been exploring it quite significantly, hopefully more over the course of this conversation. Back to you, Dr. Naito.

Tomoyuki Naito:
Thanks very much, Dr. Harari, and Mr. Atsushi Yamanaka, please go ahead.

Atsushi Yamanaka:
Thank you so much. Well, it’s very hard to actually be so very concise introductions to my predecessor now. Thank you so much, actually, for joining this session. You guys actually are very brave, because you actually could not actually resist the urge of actually having a GIZ reception downstairs. So thank you so much, actually, for your effort of coming to this session. My name is Atsushi Yamanaka. I’m actually a senior advisor on digital transformations at JICA. At JICA, we’re actually trying to promote the X for the improved well-being of all right now, incorporating a lot of actually these technology elements into different projects and then support initiatives. But prior to that, I’ve been actually doing this field for quite a long time, actually. This is my 28th year. In fact, it’s like I feel so old about that, of pursuing ICT for development. Initially, like I was in UNDP, and then I actually was involved quite intimately in the WSIS process. So it’s really personal for me to be here, and it’s really happy to see IGF actually came to Kyoto, and also discussing about how this process is going to actually help, finally, hopefully, change the world for a better place using these technologies. And AI certainly is one area where it’s going to have a lot of potentials, and also a lot of threats as well. So this conversation is very timely, and I’m really happy to be part of this. Thank you.

Tomoyuki Naito:
Okay. Thanks very much. So for the sake of time, let me just quickly go to the other core of today’s session. This is the core question to all the panelists. My second question to all the panelists. Do you think that the rise of generative AI represented by chat GPT is a good thing to the economic and social development aspect of developing economies? Please answer by yes or no, and with your very succinct reasons, please. Maybe let me just start with Yamanaka-san.

Atsushi Yamanaka:
Thank you, Naito-sensei. Can I say maybe? Or depend? Well, I understand that you want a very precise answer with yes or no, but I don’t think I’m qualified to say yes or no. So would it be okay? Yeah, it’s okay. It’s up to you. It’s okay. Thank you so much. Yes. Well, in a way, yes, because we actually had a colleague, actually. We sent him to Japan. He studied AI. He actually did a PhD here in Japan, and he was actually doing research on generative AI models for African language. Now, he was actually in RIKEN, one of the top research institutes in Japan, but now he, unfortunately for Japan, he actually moved to Princeton to continue his work. But having this kind of local language model actually incorporated into the generative AI could really open up the opportunities for those people who are actually unconnected or not digitally involved in it. Because of high barriers in terms of digital literacy, this inclusion has been a major challenge. And then this has been the issue for the last 20 years. And the last 2.6 billion people is going to be the most and hardest people to reach. So for that, I think AI, and specifically having the local language generative AI models, could have opportunity to open up the opportunity for them. So that’s yes. The no part, yes. Of course, there’s a threat. We talked a lot about, in terms of misinformation, malinformation, disinformation. There is even industry in, I think it’s northern Macedonia, where this village actually churned out all these malinformations. It’s a business. So people who want to actually have this malinformation, they actually hire these people in this particular village and it became an industry. And it’s getting very, very difficult for us to distinguish if any information is actually real or not. So that, I think, is going to be a huge threat in terms of information accuracies and in trust that we actually have in the internet or to the digital technology as a whole. So that’s actually yes and no. And I’m sure that my fellow participants also have a similar yes or no moment.

Sarayu Natarajan:
Thank you. I will take, again, a response which is yes if we attend to questions of governance and regulation. I do think, I mean, yes and no, yes if we take a, I mean, I feel like to think about generative AI without thinking about the consequences and the harms, which occur at all different levels, both in, or injustices of generative AI occur at several levels. One, there is the level of labor. Generative AI does not exist without the labor of several workers in very many parts of the developing world. So to talk about it abstracted from the way in which generative AI is itself created, which is the labor of data annotators and labelers, that might be a bit limiting. And so that needs to be a part of the conversation. Second is, I think, the way in which data itself is structured, which is that it often limits the presence of data pertaining to gender, race, et cetera, may limit the capability it is generative AI, so the applicability and use in context in the developing world may be limited. I think the third, of course, is what you mentioned, Mr. Yamanaka, which is around the consequences such as disinformation and misinformation, which generative AI makes very easy. So with this framework of the kinds of injustice that generative AI may bring about, we can start to think about both what are the use cases and how may we govern them to ensure that all of us have meaningful digital lives and digital futures. I’ll pause here and look forward to the rest of the discussion.

Tomoyuki Naito:
Yeah, thanks very much. That’s a very good point and a very important point. Mr. Yamanaka emphasized about the local language issues and Dr. Sara, you mentioned about start with the use case, then we can deepen the more discussion. That’s very important and significant case. Then, Mr. Robert Ford, your opinion is always very important. Thank you.

Robert Ford Nkusi:
How can I possibly, can I think I have a good voice? Thank you. Thank you, VP. Just a day before I flew here, my niece asked me and said, so when I showed her the topic I was going to discuss, I was going to be a panelist at this conference, she asked me a very intriguing question that I kept asking myself over and over when I was flying. She asked me, said, uncle, don’t you think the digital technology that we are consuming today has come earlier than it should have come? That is the world capable and the people capable to consume and use the technologies that we have today profitably for their own benefit. And I kept juggling my brain back and forth to see if that was right or wrong. So, now talking about generative AI, it’s better to tackle this topic when we have deeply digested and understood what this animal is. When I took my class many years ago on computer science, we always thought, when is it going to be a time when technology will take away from us the responsibility to write lines of code so that something else does it? And AI now is with us today. So, to answer your question, is it good or bad? Are we looking at dangers or are we looking at a comfortable world? In the time ahead. Like Sensei, yes and no. One, I’ll just speak specifically one example. The nature of predictability of the power of AI is going to help us, especially the global south, in being able to determine the kind of content that goes online before we can even know the danger that content is going to produce to us. And I speak this from the line of authority that comes from the Child Online Protection, for example. Today, when we look at how we struggle and fight to keep our children safe online, before we put mitigation measures, the content is online and children are consuming this content. Generative AI now equips us with the capability to mitigate that. That we can be able to use those tools to mitigate that. That’s the positive part. But the negative part, some of the tools, like ChatGPT, gives you so good response that many times we don’t even need to question it. That each time you ask, the response is so accurate to your understanding that you don’t even want to question that. But behind that text, there is a lot that can be questionable, which now puts us at the crossroads of what is good for us, what’s not good for us. Because the way generative AI works is that it’s just using machine learning to train some software and data to be able to give you what it gives you. That’s how it works. So if what we get seems as too good to be nice, then we don’t question. And then we stand at the edge or at the rift to fall off into oblivion by technology. That’s just the tool. Thank you.

Tomoyuki Naito:
All right, thanks very much, Mr. Ford. Hey, Safa, please go ahead. Your opinion.

Safa Khalid Salih Ali:
Okay, let me answer by yes. Because I am ICT for all this long, I will directly say yes. It can help you. When you’re thinking for economic and you’re thinking for developing country, how long will it take to analyze just the data set? It take a lot to get insight from data set. But by using generative AI, we can go to insight directly and get impact directly to economy instead of wasting a lot of time in routine work of just analyze this data. When you’re thinking about that, thinking about how can we save in the cost, how many employee you needed to just analyze this data. By generative AI, we can found your insight directly and it’s helpful. Also, when we thinking about economy, you need to think about financial inclusion and FinTech. To deserve real financial inclusion by this way of traditional way, you can deserve it for all the citizen of any country. But if you have like generative AI, we can have opportunity to all participants to find the service you need like using virtualization access, like using chatbot, any type of AI tools. And when we thinking about that GBT and which can allow you to analyze the data and thinking about how many time you need to just make risk assessment for any bank, for any customer you have it in the bank to give him just loan. By using generative AI, we can make fraud detection and you can make risk assessment and build model which can predict for you what’s the habit of this customer based on the historical information. For all these reason, I can say for your question, yes, directly. Yes, we have drawbacks, but it can cover it by any guidelines. Thank you, Cesar.

Tomoyuki Naito:
Yeah, thanks very much, Ms. Alfa. Actually, to the audience, actually as you fully understand that we have a variety of the nature of the countries as well as the nature of the jobs. Every partners have different occupations. Then it is quite interesting and quite significant and it’s quite important. Looking at the power of AI or looking at the threats of AI from different angles, it is quite important discussion, I personally believe. Then as a result, four partners here mentioned somehow yes side. No one say obviously no. So if I understand like that, four partners over here say yes somehow. Let me just ask one more question. While G7 countries, G7 member countries including Japan are currently preparing the basic guideline of appropriate AI use in societies, do you think that developing and emerging economies should also prepare original guidelines for the AI use? How do you think about that? Let me just ask this question to all the panelists. Let me invite Mr. Robert Ford first. Robert?

Robert Ford Nkusi:
Okay, I went to the computer science class but not to the audio class. So thank you. Each time we have a discussion that draws a line between developing and developed economies, I struggle. I struggle to maintain that classification. And I will quickly give an example that the one part of the world is struggling to understand or to draw lessons or to draw benchmarks from some of the success stories from another class of people we are talking about. So when we talk about the developed world, and I’ll give just a simple example, the developing world, what they call developing economies. For very many years, many years, for as long as I can remember, the world’s financial systems in the developed West were based on data that runs on plastic cards, right? That each one of us is carrying a wallet with so many plastic cards. For Africa, in the, not more than a decade ago, we leapfrogged from that. And for, they are able to transact using mobile money. And they are circumventing the whole trouble of the environmental impact of keeping plastics in people’s wallets. If we were to collectively remove plastics from people’s wallets, and we pile them together, we probably could fill up a country. And this is a very bad effect to the environment that we live in. But the other part of the world is failing or is dragging its feet in quickly benchmarking on that and say, why don’t we have a plastic-free world, at least from our simple cards? Then we can be able to transact with mobile money. So just giving examples. So when the G7 is discussing about legal and regulatory framework, about generative AI, they probably need to go down there and see what is it that there is down there that they can pick from. I know countries in Africa that have moved far away in designing and crafting regulatory frameworks for AI and for these other technologies. And they are very fast in moving towards that. So at some point, I don’t think it requires to be a member of the G7 to determine the kind of policies that are going to guide and mainstream thought process towards how we consume technology for a safer world tomorrow.

Tomoyuki Naito:
All right, thanks very much, Mr. Ford. Can I invite Dr. Sayoni? Can I invite Dr. Sarayu? Yes. Please.

Sarayu Natarajan:
Thank you very much and thank you for that. I largely agree. I think that not just because of the several advances made in several parts of the world in terms of thinking AI, I do not think this is a conversation that is entirely unique or the problems are entirely unique to parts of the developing world. I think one of the strange things about technology is that it’s a great equalizer, both in positive senses and some of the more harmful ones. And I think for us to think about these as unique problems from a frame of exceptionalism may be limiting. And so building out frameworks of governance, frameworks of regulation from an inclusive standpoint, which includes several global participants is very necessary. My yes if, to go back to that was deeply qualified, but I will speak to one specific theme that I mentioned in terms of the injustices of generative AI which is the question of labor. And I think there are two sides to the question of labor in the context of generative AI. The first is that labor and human labor is very critical to building generative AI. So if somebody doesn’t label the data that is used to build a large language model, a large language model does not exist. And this is true of image models as well. And much of this labor is in very many parts of what is called as the developing world. Now, those who build AI in that sense that are responsible for the labeling, annotation, marking, categorization of data are never really a part of the conversation. So that’s one significant injustice. So while there might be frameworks for governing regulating generative AI use, the way in which generative AI is made itself has a significant geopolitical component. The second dimension of generative AI is a much more downstream effect which is the question of job loss. And to the point Mustafa made, job loss is a complicated question that has different meanings in different countries. In mine, which is India, we have a population of 1.4 billion where challenges around thinking about employment, job losses are very significant. India is also an IT services company, so a large part of the economy relies on IT service provision. And again, the question of job losses through the use of services like charge GPT and generative AI more broadly speaking is a very significant one. All of this is to say that conversations about generative AI without starting from the how are we going to govern some of the harms just as much as how are we going to deploy it might be limiting. So we have to think very carefully about use cases, very carefully about first order, second order, third order consequences of the use of generative AI. That is not to say that there is no use case at all. There are several applications for it. But governance is a part of deployment is where I’m coming from. I’ll pause here and.

Tomoyuki Naito:
All right, thanks so much, Dr. Actually, thanks very much for you touch upon the other aspect of job itself. Actually, just to share the information to all the other participants here on site and online. Recently, ILO, the International Labor Organization just released almost one month plus ago, sometime late August. They just used their ILO model to analyze the impact of the generative AI adaptation to the 59 countries as a model. Then, according to their result, the impact of job loss aspect is much heavier to the advanced economy than much less in the developing economies. That means a lot of meanings are contained in that analysis. But I just strongly recommend if you are interested in their job analysis or the job impact of the generative AI, ILO have released a very good working report back in August. But anyhow, if I come back to the question, let me just invite Yamanaka-san for your opinion.

Atsushi Yamanaka:
Thank you so much. Actually, there’s a lot to think about, yes. I agree with the doctors and I agree with Robert about, yes, this is actually, we don’t necessarily want another fragmentation, right? We had a discussion yesterday about data flow or data governance, and we had the sessions, and this question also came up. Do we actually need to create another model or another regulations specifically for emerging economy? Answer probably no. Why should it be? Instead, I think the so-called emerging nations should be part of the global discussions on the framework or like case makings instead of actually trying to fragment. It’s already a fragmented world. Internet has been fragmented. There is also like agenda is fragmented. Why do we actually need to further fragment this field? So AI regulations, whether we can succeed or not, regulations, I’m not sure if we can do this, because there’s so many different opinions, but I think it is really important to have this multi-stakeholder approach in terms of actually having the opportunities for the emerging countries to actually have their voices and inputs into the formulations of a regime or mechanisms to regulate or to use AI for better word. I think that’s, I think, is one of the things we need to do. Going back to, I think, it was labor side. I think that’s gonna be a very, very important component. That’s something that I should have said in terms of potential negative aspect of generative AI. Already, like Robert was mentioning about software engineers, or even like the illustrators, or even like storytellers, because I think there was a huge actually strike actually doing the Guild of Screenwriters Guild in the United States, because they were saying, okay, generative AI can do their jobs. Now, I think we’re gonna have the Luddite movement. Do you know what the Luddite movement? That actually happened in the beginning of 19th century when the industrialization actually came into UK. All the workers, they start actually destroying the machines. I think we’re gonna see that if we do not actually come up with a model which is conducive. Another thing is if we do not now give them the opportunity for them to change their business models or their skills, re-skilling them, if you don’t do that, I think we’re gonna see another Luddite movement for the AI.

Tomoyuki Naito:
All right, thanks very much. Re-skilling, another word which the government of Japan is really emphasizing domestically. Okay, Safa, your opinion about my question.

Safa Khalid Salih Ali:
So we need to think about what we need to do to make sure that we have a system that can be used for the development of AI. According to guideline, when we’re thinking about guideline for developing country, we need to think about what we already said about data challenge. Really, we face very critical issue in data in developing country. We can’t find system with fully clear data. So we need to think about what we need to do to make sure that we have a system that can be used for the development of AI in developing country. Maybe like this issue we didn’t found in G7 country. For that, you need to put it as a challenges for these guidelines. Also, we need to think about the socioeconomics between this country and G7 when you compare. It is not about the fragmentation of the guidelines, but guidelines, you need to put in mind also the capacity of building. You need to think about how can you solve the issue of infrastructure. You can’t put just generative AI solution for someone who can’t use it. Also, we need to use to think about the capacity building. A lot of people in developing country didn’t know about anything about generative AI. You can’t use it. Maybe it’s a big company, but they didn’t know about it. So, we need to think about how can we use generative AI in developing country and how can we use it for the development of the country. The third issue is data privacy and labor, as you said. Do you think wasting time to make routine jobs, which you already can do it in minutes, instead of thinking about the productivity and quality of the work? If you have stuff and can focus more on the quality of the work, it’s better to use generative AI in the routine work. So, we need to think about how can we use generative AI in developing country and how can we use it for the development of the country. For example, if you are thinking about central bank, you think about if crisis happen anywhere, you need to calculate what’s the impact of this crisis to your own economy. With generative AI, you can use this solution to be prediction, to make any prediction for what the next action will be. So, we need to think about what’s the next action, what’s the next action, because it can be emergency action, you need to take it instead to go in the drawbacks directly.

Tomoyuki Naito:
Thank you. โ‰ซ Thanks very much, a very important aspect. You beautifully mentioned about the opportunity side. Now, I would like to open the floor for the question, and our panelists to ask their questions. So, please, gentlemen, come to the microphone. I already have the question online, but let me just prioritize on-site question first. Please, kindly say your name and please the question.

Audience:
โ‰ซ So, my name is from Deutsche Welle Academy, Germany. I have so many questions, I don’t know which to ask first, because I recently heard a professor from Ghana. He said, you know, we have a kind of democratic system. We have a kind of democratic system, but we have a kind of neutral political system. So, I wonder why you are not mentioning this topic. But my question is, so, we are discussing, like, as we live in a neutral political system, like, we have a kind of democratic system all over the world. So, I’m wondering, like, what do you think about this kind of democratic system, like, in Europe, also, and non-democratic states worldwide, especially also Sudan, South Sudan, and your region. So, my question is, what do you think, will these technology, especially in the age of AI, will lead us to more freedom and democracy, or will it be the opposite? Because if the authoritarian rulers want to be free, they will be free, but if the authoritarian rulers want to be free, they will lead to more free societies. So, this is my maybe too big question, but… โ‰ซ Yeah, thanks very much.

Tomoyuki Naito:
Big question, but very important question. Any panelists want to answer to this question?

Robert Ford Nkusi:
โ‰ซ Thank you. I think it depends on how you define those values of democracy, and authoritarianism, and whatever you want to call them. In my country, if, say, a country, for example, if you have a country, let’s say, in the United States, if you have a country, let’s say, in the United States, if you have a democracy in India, or in the United States, what does a democracy mean? For example, if, say, a President stays in power four, five, six times and is doing the right thing, to me, that’s okay. So, it’s Germany, for example. Germany does not have time limits, right? Germany can have a head of the country, and, in fact, if you look at the United States, if you look at the United States, if you look at the League, if you look at my country, it’s different from how my country is going to look at Germany. So, in Uganda, or in Rwanda, or in Kenya, if the British system says you can stay in power for as long as your party is keeping you in power, that’s okay. It’s not authoritarianism. It’s not authoritarianism. It’s not authoritarianism. So, and there should be a way in which we validate political leadership. At what point do we define this as being authoritarian, or not working in the interest of the masses, and I’m not saying it’s not fair. I agree with you. We have countries that are not working in the interest of the masses, and we have countries that are working in the interest of the masses, and, in fact, we have countries that are not working in the interest of the masses, meaning, the power of predictability that generative AI gives us is so immense that with it, we could build great potential that can change the way we run politics today and in the future. But, for authoritarianism, and dictatorship, and God knows what, I have good friends who have been dictatorial for 30 years and I never encountered any example in more dictatorian country because they will keep someone powerful as long as God knows what. In Britain. I know countries in Africa who have very good elections, of course.

Atsushi Yamanaka:
Not good, thank you. I think it’s a good thing. I think it’s a good thing. It’s been like with IGF, someone was calling, this IGF is so banner, there’s nothing controversial. No one is making anything controversial. Thank you so much for asking those kind of questions. It’s interesting questions. For me, I also came from this party where actually we have so-called free election system, but I was told that it’s not true. It’s not true, because I actually asked questions about China, because Chinese actually have the systems of scoring, right, Dr. Song, yes? Now, I was, you know, for me, it’s a bit difficult to accept that kind of system where I will be watched and scored every single moment. I was told that, you know, the Chinese actually have the system of scoring, but they don’t have the money to bribe the Chinese, you know, because they do not bribe people, they don’t have money to bribe or they don’t have the connections. For them, they said, well, now, if I actually are very good citizens, if I actually, you know, do something good for the society, my score goes up, and I actually get different opportunity which I never actually got. So, you know, I don’t have the answers to it. But at the same time, I feel like if, you know, we can keep the privacy, you know, if we can have, like, basic human rights, okay, last ten minutes, so we have to be quickly. I think, you know, how the polity would actually, you know, manage, you know, not control, but manage the society, I think it should be depending on the society itself, you know, we are here, you know, because so-called developed countries, we are not here to judge them, that I feel very strongly, you know, working in developing countries for so long. I don’t know. I may get hammered by this, but โ€‘โ€‘

Tomoyuki Naito:
. โ‰ซ Thanks very much. Yeah, thanks very much for the very good interaction, which going beyond my expectation, but anyhow, thanks very much for the very good question and answers. Then I have a question. I have a question about, you know, how can we apply generative AI in developing countries? I have recognized several, actually, people raise hand online, on Zoom. Then before that, let me just check chat box, and I got the one question from the Mr. Mohamed Hanif Garanai, he is questioning about could we apply generative AI in developing countries, like I do in the audience. I think perhaps we can fill the opinions online. Some of the online participants kindly answered already, so thanks very much for the online interaction spontaneously. So let me just actually ask a question. I’m not sure if you can hear me. Can you hear me? Yes, I can hear you. Please go ahead. Thank you.

Audience:
Now they’ll let my video come, so I’ll join all of you in person. Hello, I’m Debra Allen-Rogers, I’m in the Hague. I’m originally from New York City. I have a nonprofit here called Find Out Why. It’s a digital fluency lab that promotes digital fluency. I was asked to ask a question and even the sort of spicy comments that followed about Germany being dictatorial, which I don’t think is obviously true in the context of this. But I did want to ask the question about if we could rename, I know this is going to sound naรฏve, but just look at my age and know my background in design and so on. I’m going to ask this question. Developing countries, we’re in a transition now, and we can do that in highly functional societies, because the highly functional digital societies, for example, let’s say in Estonia, in Finland, in Norway, I’m living in the Netherlands, highly functional digital societies are great places to come learn for young people on travel expeditions. In Japan, back at post-war, there was that, I was looking for the name of the program, but sent students and young people around the world to develop best practices, to observe and to learn and then bring it back to Japan and design the society how they wanted it to be after the war. So aren’t we at another juncture like that right now, that India has a lot to teach, the continent has a lot to teach, MENA has a lot to teach, and those societies have young people, 70% are under the age of 30 going forward. So we have to redefine, we ourselves working in this right now, have to redefine some of these terms so that we don’t fall back on developing world countries. It’s a new day, and we don’t have to spend so much time talking, I’m kind of breaking my own rule by talking about it so much right now, but I’m saying that we should redefine it when we give our speeches and when we talk, and also part of the work I do is to help young people travel the world to see best practices, bring it back home, and then decide how you want to design your own society, but we do have highly functional digital societies, Estonia, Taiwan, the ones I said in the Scandinavian, the Nordic countries, that are right there for us to look at and watch. Thanks very much for taking my question.

Tomoyuki Naito:
โ‰ซ Yeah, thanks very much, Ms. Rogers, great comment. And one more person or organization, Ghana IGF remote hub, you are raising hand. Please go ahead, if you can hear my voice, please go ahead.

Audience:
โ‰ซ Okay. I’m Dennis. I’m from Ghana. My question is, with the use of native AI to create, sorry, does the native AI do good or affect human creativity? And if yes, how and what contribution is it making to the human resource? And the second question is, with the first group of native AI, help or not, how do you think about the use of native AI in developing countries with fewer resources and technology? Could it boost their development or might it make the technology gap between countries even bigger? Thank you.

Tomoyuki Naito:
โ‰ซ Very sharp question. Any panelists who want to answer? โ‰ซ Hello. โ‰ซ Doctor? โ‰ซ My name is Joseph. I’m from Ghana. I’m from Ghana. I’m here to talk about the use of native AI in developing countries with fewer resources and technology. Can I just invite the doctor to answer the first question?

Sarayu Natarajan:
โ‰ซ I’ll try answering very briefly to the first question on creativity. I think the second one around development gaps has been tackled a little bit, but with respect to human creativity, it’s hard to talk about the effects on creativity, but I think it’s important to think about creativity as something that’s self-built on the back of creative work by artists, by sort of, you know, producers and generators of music, makers of music, and to think about creativity as abstracted from some of how generative AI is made may be limiting. But it could help in the sense that it could be a new way of thinking about creating, you know, certain types of writing literature based on which further human creativity and ingenuity is applied, but it could also have some deleterious effects in the absence of efforts to for the It should not supplant in my mind the efforts of the education system of, you know, human society at the time, and skills such as those in young minds. So we should not see this to my mind as a zero-sum game. And particularly the question of creativity must look at how it was born. So thank you.

Tomoyuki Naito:
โ‰ซ Thanks very much, Dr. Sarayu. You want to answer to the second question?

Atsushi Yamanaka:
โ‰ซ Summarize the questions on the second one. Basically trying to actually, if the AI technology is going to be useful for the society, or the gap, gap, gap, between those. That’s a good question. I think earlier Dr. Sarayu was mentioning about technology being, you know, especially the new technologies actually reducing the gap. Because we could actually, you know, there is no sort of hindrance or the interaction sort of barrier is much lower in the digital technologies than many other technologies. So I think, you know, I don’t think that’s going to be a problem, you know, because we don’t have a lot of technology, you know, so I think it does not necessarily be that, you know, developing countries are going to lag behind. And rather, I think it’s going to be more interesting. A lot of innovation is actually coming from so-called emerging economies right now. And we may actually see much more innovation coming from the emerging economies rather than these things coming from the so-called western countries. So in that respect, I don’t necessarily think the developing country is going to be lagging behind, but rather I think with new digital technologies, I think there’s more contributions. I don’t necessarily like the word reverse innovations because that’s very pretentious, but I think the innovations coming from the emerging economies, developing countries, I think they are going to be the trend that we see in the future.

Tomoyuki Naito:
โ‰ซ All right, thanks very much. Actually, we have only one minute left up to the end of the session. So apology for the other participants online for giving me the other question. Let me just summarize today’s session very briefly. As Mr. Yamanaka’s last comment mentioned, actually, we don’t have to look at only the threat side. Actually, the opportunity, opportunity which every country can utilize the new technology as a sort of innovation to leverage more the economic development as well as the social development is the key to discuss more continuously, and is the key to emphasize not only G7 countries, the leading, the guidance, and so on. But also, the other countries, the leading countries, the leading countries, the leading countries, for instance, so other, you know, more than 190 countries can do proactively to utilize it, to do it for your own business, your own countries in the future. So that could be the main message of today’s session, and I’m sorry about my poor time today, but I think it’s time to wrap up the session, so thank you very much. Thank you very much, everyone, for being here. Let me just conclude this session, since time is already up. So please join me to give the round of applause to all the panelists here. Thank you very much. And thank you, all the other participants, on-site and online. Thank you so much.

Atsushi Yamanaka

Speech speed

186 words per minute

Speech length

1778 words

Speech time

575 secs

Audience

Speech speed

281 words per minute

Speech length

798 words

Speech time

170 secs

Robert Ford Nkusi

Speech speed

165 words per minute

Speech length

1773 words

Speech time

645 secs

Safa Khalid Salih Ali

Speech speed

193 words per minute

Speech length

1110 words

Speech time

344 secs

Sarayu Natarajan

Speech speed

190 words per minute

Speech length

1317 words

Speech time

417 secs

Tomoyuki Naito

Speech speed

143 words per minute

Speech length

2230 words

Speech time

938 secs

Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Owen Later

Microsoft has made significant efforts to prioritise the responsible use of AI technology. They have dedicated six years to building their responsible AI programme, which involves a team of over 350 experts in various fields including engineering, research, legal, and policy. Their responsible AI standard is based on the principles outlined by the Organisation for Economic Co-operation and Development (OECD), emphasising the importance of ethical AI practices.

In addition to their internal initiatives, Microsoft recognises the need for active participation from private companies and the industry in AI governance discussions. To foster collaboration and best practices, they have founded the Frontier Model Forum, which brings together leading AI labs. This forum focuses on developing technical guidelines and standards for frontier models. Microsoft also supports global efforts, such as those taking place at the United Nations (UN) and the OECD, to ensure that AI technology is governed responsibly.

Another crucial aspect highlighted by Microsoft is the importance of regulations to effectively manage the use and development of AI. They actively share their insights and experiences to help shape regulations that address the unique challenges posed by AI technology. Furthermore, Microsoft aims to build capacity for governments and industry regulators, enabling them to navigate the complex landscape of AI and ensure the adoption of responsible practices.

Microsoft emphasises the need for safeguards at both the model and application levels of AI development. The responsible development of AI models includes considering ethical considerations and ensuring that the model meets the necessary requirements. However, Microsoft acknowledges that even if the model is developed responsibly, there is still a risk if the application level lacks proper safeguards. Therefore, they stress the importance of incorporating safeguards throughout the entire AI development process.

Microsoft also supports global governance for AI and advocates for a representative process in developing standards. They believe that a global governance regime should aim for a framework that includes standard setting, consensus on risk assessment and mitigation, and infrastructure building. Microsoft cites the International Civil Aviation Organization and the Intergovernmental Panel on Climate Change as potential models for governance, highlighting the importance of collaborative and inclusive approaches to effectively govern AI.

In conclusion, Microsoft takes the responsible use of AI technology seriously, as evident through their comprehensive responsible AI programme, active participation in AI governance discussions, support for global initiatives, and commitment to shaping regulations. They emphasise the need for safeguards at both the model and application levels of AI development and advocate for global governance that is representative, consensus-driven, and infrastructure-focused. Through their efforts, Microsoft aims to ensure that AI technology is harnessed responsibly and ethically, promoting positive societal impact.

Clara Neppel

During the discussion, the speakers highlighted the ethical challenges associated with technology development. They emphasised the need for responsibility and the embedding of values and business models into technology. IEEE, with its constituency of 400,000 members, was recognised as playing a significant role in addressing these challenges.

Transparency, value-based design, and bias in AI were identified as crucial areas of concern. IEEE has been actively engaging with regulatory bodies to develop socio-technical standards in these areas. They have already established a standard that defines transparency and another standard focused on value-based design. These efforts aim to ensure that AI systems are accountable, fair, and free from bias.

The importance of standards in complementing regulation and bringing interoperability in regulatory requirements was emphasised. IEEE has been involved in discussions with experts to address the technical challenges related to AI. An example was provided in the form of the UK’s Children’s Act, which was complemented by an IEEE standard on age-appropriate design. This highlights how standards can play a crucial role in ensuring compliance and interoperability within regulatory frameworks.

Capacity building for AI certification was also discussed as an essential component. IEEE has trained over 100 individuals for AI certification and is also working on training certification bodies to carry out assessments. This capacity building process ensures that individuals and organisations possess the necessary skills and knowledge to navigate the complex landscape of AI and contribute to its responsible development and deployment.

The panel also explored the role of the private sector in protecting democracy and the rule of law. One speaker argued that it is not the responsibility of the private sector to safeguard these fundamental principles. However, another speaker highlighted the need for legal certainty, which can only be provided through regulations, in upholding the rule of law. This debate prompts further reflection on the appropriate roles and responsibilities of different societal actors in maintaining democratic values and institutions.

The negative impact of uncertainty on the private sector was acknowledged. The uncertain environment poses challenges for businesses and impedes economic growth and job creation. This concern underscores the need for stability and predictability to support a thriving and sustainable private sector.

Lastly, the importance of feedback loops and common standards in AI was emphasised. This includes ensuring that feedback is taken into account to improve and retrain AI systems. Drawing from lessons learned in the aviation industry, the development of benchmarking and common standards was seen as vital for enabling efficient and effective collaboration across different AI systems and applications.

In conclusion, the speakers underscored the importance of addressing ethical challenges in technology development, specifically in the context of AI. IEEE’s involvement in shaping socio-technical standards, capacity building for AI certification, and the need for transparency, value-based design, and addressing bias were highlighted. The role of standards in complementing regulation and promoting interoperability was emphasised, along with the necessity of legal certainty to uphold democracy and the rule of law. The challenges posed by uncertainty in the private sector and the significance of feedback loops and common standards in AI were also acknowledged. These insights contribute to the ongoing discourse surrounding the responsible development and deployment of technology.

Maria Paz Canales

The discussion on artificial intelligence (AI) governance highlighted several key points. Firstly, there is a need for a clearer understanding and clarification of AI governance. This involves the participation of various actors, including civil society organizations and communities that are directly affected by AI. Civil society organizations require a better understanding of AI risk areas, as well as effective strategies for AI implementation.

Another important point is the evaluation of AI’s impact on rights, with a specific focus on inclusiveness. It is essential to assess the effect of AI on civil, political, economic, social, and cultural rights. Unfortunately, the perspectives of communities impacted by AI are often excluded in the assessment of AI risks. Therefore, it is necessary to include the viewpoints and experiences of these communities to comprehensively evaluate the impact of AI. This inclusive approach to AI impact assessment can lead to the development of responsible and trustworthy technology.

The discussion also emphasized the importance of education and capacity building in understanding AI’s impact. The public cannot fully comprehend the consequences of AI without being adequately educated about the subject in a concrete and understandable way. Therefore, it is crucial to provide not only technical language but also information on how AI impacts daily life and basic rights. By enhancing education and capacity building, individuals can better grasp the implications and intricacies of AI technology.

Furthermore, it was highlighted that there should be some level of complementarity between voluntary standards and legal frameworks in AI governance. This includes ensuring responsibility at different levels, from the design stage to the implementation and functioning of AI systems. The relationship between voluntary standards and legal frameworks must be carefully balanced to create an effective governance structure for AI.

In addition, the discussion underscored the importance of accounting for shared responsibility within the legal framework. It is crucial to establish effective communication between the different operators involved in the production and use of AI. This communication should adhere to competition rules and intellectual property regulations, avoiding any violations. By accounting for shared responsibility, the legal framework can ensure ethical and responsible AI governance.

Lastly, the discussion emphasized the need for a bottom-up approach in AI governance. This approach involves the active participation of societies and different stakeholders at both the local and global levels. Geopolitically, there is a need to hear more about experiences and perspectives from various stakeholders in global-level governance discussions. By adopting a bottom-up approach, AI governance can become more democratic, inclusive, and representative of the diverse needs and interests of stakeholders.

In conclusion, the discussion on AI governance highlighted the importance of a clearer understanding and clarification of AI governance, inclusiveness in impact assessment, education and capacity building, complementarity between voluntary standards and legal frameworks, the consideration of shared responsibility, and a bottom-up approach in AI governance. By addressing these aspects, it is possible to develop responsible and trustworthy AI technology to benefit society as a whole.

Thomas Schneider

The topic of AI regulation is currently being discussed in the context of its application. The argument put forth is that instead of regulating AI as a tool, it should be regulated in consideration of how it is used. This means that regulations should be tailored to address the specific applications and potential risks of AI, rather than imposing blanket regulations on all AI technologies. It is believed that this approach will allow for a more nuanced and effective regulation of AI. Voluntary commitment in AI regulations is seen as an effective approach, provided that the right incentives are in place. This means that instead of enforcing compulsory regulations, which can often be complicated and unworkable, voluntary agreements can be more successful. By providing incentives for AI developers and users to adhere to certain standards and guidelines, it is believed that a more cooperative and collaborative approach can be fostered, ensuring responsible and ethical use of AI technology. The Council of Europe is currently developing the first binding convention on AI and human rights, which is seen as a significant step forward. This intergovernmental agreement aims to commit states to uphold AI principles based on the norms of human rights, democracy, and the rule of law. The convention is not only intended to ensure the protection of fundamental rights in the context of AI, but also to create interoperable legal systems within countries. This represents a significant development in the global governance of AI and the protection of human rights. The need for agreement on fundamental values is also highlighted in the discussions on AI regulation. It is essential to have a consensus on how to respect human dignity and ensure that technological advancements are made while upholding and respecting human rights. This ensures that AI development and deployment align with society’s values and principles. Addressing legal uncertainties and tackling new challenges is another important aspect of AI regulation. As AI technologies continue to evolve, it is necessary to identify legal uncertainties and clarify them to ensure a clear and coherent regulatory framework. Breaking down new elements and challenges associated with AI is crucial to ensuring that regulations are effective and comprehensive. In solving the problems related to AI regulation, it is emphasized that using the best tools and methods is essential. Different problems may require different approaches, with some methods being faster but less sustainable, while others may be more sustainable but take longer to implement. By utilizing a mix of tools and methods, it is possible to effectively address identified issues. Stakeholder cooperation is also considered to be of utmost importance in the realm of AI regulation. All stakeholders, including governments, businesses, researchers, and civil society, need to continue to engage and cooperate with each other, leveraging their respective roles. This collaboration ensures that diverse perspectives and expertise are taken into account when formulating regulations, thereby increasing the chances of effective and balanced AI regulation. However, there is also opposition to the creation of burdensome bureaucracy in the process of regulating AI. Efforts should be made to clarify issues and address challenges without adding unnecessary administrative complexity. It is crucial to strike a balance between ensuring responsible and ethical use of AI technology and avoiding excessive burdens on AI developers and users. In conclusion, the discussions on AI regulation are centered around the need to regulate AI in consideration of its application, rather than treating it as a tool. Voluntary commitments, such as the binding convention being developed by the Council of Europe, are seen as effective approaches, provided the right incentives are in place. Agreement on fundamental values, addressing legal uncertainties, and stakeholder cooperation are crucial aspects of AI regulation. It is important to strike a balance between effective regulation and avoiding burdensome bureaucracy.

Suzanne Akkabaoui

Egypt has developed a comprehensive national AI strategy aimed at fostering the growth of an AI industry. The strategy is based on four pillars: AI for government, AI for development, capacity building, and international relations. It focuses on leveraging AI technologies to drive innovation, improve governance, and address societal challenges.

Under the AI for government pillar, Egypt aims to enhance the effectiveness of public administration by adopting AI technologies. This includes streamlining administrative processes, improving decision-making, and delivering efficient government services.

The AI for development pillar highlights Egypt’s commitment to utilizing AI as a catalyst for economic growth and social development. The strategy focuses on promoting AI-driven innovation, entrepreneurship, and tackling critical issues such as poverty, hunger, and inequality.

Capacity building is prioritized in Egypt’s AI strategy to develop a skilled workforce. The country invests in AI education, training programs, research, and collaboration between academia, industry, and government.

International cooperation is emphasized to exchange knowledge, share best practices, and establish standardized approaches to AI governance. Egypt actively participates in global discussions on AI policies and practices as a member of the OECD AI network.

To ensure responsible AI deployment, Egypt has issued a charter that provides guidelines for promoting citizen well-being and aligning with national goals. These guidelines address aspects such as robustness, security, safety, and social impact assessments.

Egypt also recognizes the importance of understanding cultural differences and bridging gaps for technological advancements. The country aims to address cultural gaps and promote inclusivity, ensuring that the benefits of AI are accessible to all segments of society.

Overall, Egypt’s AI strategy demonstrates a commitment to creating an AI industry and leveraging AI for governance, development, capacity building, and international cooperation. The strategy aims to foster responsible and inclusive progress through AI technologies.

Moderator

In the analysis, several speakers discuss various aspects of AI and its impact on different sectors and societies. One key point raised is the need for more machines, especially AI, in Japan to sustain its ageing economy. With Japan facing social problems related to an ageing population, the introduction of more people and machines is deemed necessary. However, it is interesting to note that while Japan recognises the importance of AI, they believe that its opportunities and possibilities should be prioritised over legislation. They view AI as a solution rather than a problem and want to see more of what AI can do for their society before introducing any regulations.

Furthermore, the G7 delegates are focused on creating a report that specifically examines the risks, challenges, and opportunities of new technology, particularly generative AI. They have sought support from the OECD in summarising this report. This highlights the importance of international cooperation in addressing the impact of AI.

Egypt also plays a significant role in the AI discussion. The country has a national AI strategy that seeks to create an AI industry while emphasising that AI should enhance human labour rather than replace it. Egypt has published an AI charter for responsible AI and is a member of the OECD AI network. This showcases the importance of aligning national strategies with global initiatives and fostering regional and international collaborations.

Microsoft is another notable player in the field of AI. The company is committed to developing AI technology responsibly. It has implemented a responsible AI standard based on OECD principles, which is shared externally for critique and improvement. Microsoft actively engages in global governance conversations, particularly through the Frontier Model Forum, where they accelerate work on technical best practices. Their contributions highlight the importance of private sector involvement in governance discussions.

UNESCO has made significant contributions to the AI discourse by developing a recommendation on the ethics of AI. The recommendation was developed through a two-year multi-stakeholder process and adopted by 193 countries. It provides a clear indication of ethical values and principles that should guide AI development and usage. Furthermore, UNESCO is actively working on capacity building to equip governments and organizations with the necessary skills to implement AI systems ethically.

In terms of addressing concerns and ensuring inclusivity, it is highlighted that AI bias can be addressed even before AI regulations are in place. Existing human rights frameworks and data protection laws can start to address challenges related to bias, discrimination, and privacy. For example, UNESCO has been providing knowledge on these issues to judicial operators to equip them with the necessary understanding of AI and the rule of law. Additionally, discussions around AI governance emphasise the need for clarity in frameworks and the inclusion of individuals who are directly impacted by AI technologies.

The analysis also suggests that responsible development, governance, regulation, and capacity building should be multi-stakeholder and cooperative processes. All sectors need to be involved in the conversation and implementation of AI initiatives to ensure effective and inclusive outcomes.

An interesting observation is the need for a balance between voluntary standards and legal frameworks. Complementarity is needed between these two approaches, especially in the design, implementation, and use of AI systems. Furthermore, the analysis highlights the importance of a bottom-up approach in global governance, taking into account diverse stakeholders and global geopolitical contexts. By incorporating global experiences from different stakeholders, risks can be identified, and relevant elements can be considered in local contexts.

Overall, the analysis provides insights into various perspectives on AI and the importance of responsible development, global collaboration, and inclusive policies in shaping the future of AI.

Set Center

AI regulation needs to keep up with the rapid pace of technological advancements, as there is a perceived inadequacy of government response in this area. The tension between speed and regulation is particularly evident in the case of AI. The argument put forth is that regulations should be able to adapt and respond quickly to the ever-changing landscape of AI technology.

On the other hand, it is important to have risk frameworks in place to address the potential risks associated with AI. The United States has taken steps in this direction by introducing the AI Bill of Rights and Risk Management Framework. These foundational documents have been formulated with the contributions of 240 organizations and cover the entire lifecycle of AI. The multi-stakeholder approach ensures that various perspectives are considered in managing the risks posed by AI.

Technological advancements, such as the development of foundation models, have ushered in a new era of AI. Leading companies in this field are primarily located in the United States. This highlights the influence and impact of US-based companies on the development and deployment of AI technologies worldwide. The emergence of foundation models has disrupted the technology landscape, showcasing the potential and capabilities of AI systems.

To address the challenges associated with rapid AI evolution, the implementation of voluntary commitments has been proposed. The White House has devised a framework called ‘Voluntary Commitments’ to enhance responsible management of AI systems. This framework includes elements such as red teaming, information sharing, basic cybersecurity measures, public transparency, and disclosure. Its objective is to build trust and security amidst the fast-paced evolution of AI technologies.

In conclusion, it is crucial for AI regulation to keep pace with the rapid advancements in technology. The perceived inadequacy of government response highlights the need for agile and adaptive regulations. Additionally, risk frameworks, such as the AI Bill of Rights and Risk Management Framework, are important in managing the potential risks associated with AI. The emergence of technologies like foundation models has brought about a new era of AI, with leading companies based in the US driving innovation in this field. The implementation of voluntary commitments, as proposed by the White House, aims to foster trust and security in the ever-evolving landscape of AI technologies.

Auidence

The discussions at the event highlighted the challenges that arise in capacity building due to time and financial commitments. Ansgar Kuhn from EY raised the problem of time commitments related to the capacity building process, which may bring additional financial burdens for various parties. Small to Medium Enterprises (SMEs) and civil society organizations might not be able to afford the cost of someone who isn’t directly contributing to their main products or services. Academics may also struggle to get academic credit for engaging in this kind of process.

To address these financial and time commitment issues in capacity building, participants stressed the importance of finding solutions. Ansgar Kuhn specifically asked for suggestions to tackle this problem, underscoring the need to explore feasible strategies to alleviate the burden that time and financial commitments place on different stakeholders.

There were also concerns raised about the implementation of responsible AI, particularly regarding the choice between system level guardrails and model level guardrails. The discussion highlighted worries about tech vendors providing unsafe models if responsibility for responsible AI is pushed to the system level. This sparked a debate about the best approach to implement responsible AI and the potential trade-offs associated with system level versus model level guardrails.

Moreover, the event touched upon the Hiroshima process and the expectation of a principle-based approach to AI. The previous G20 process, which focused on creating data free flow with trust, served as a reference point for the discussion. There was a question about the need for a principle approach for AI, suggesting the desire to establish ethical guidelines and principles to guide the development and deployment of AI technologies.

In conclusion, the discussions shed light on the challenges posed by time and financial commitments in capacity building and the need for solutions to mitigate these issues. Concerns about system level versus model level guardrails in responsible AI implementation emphasized the importance of balancing safety and innovation in AI. The desire for a principle-based approach to AI and the establishment of ethical guidelines were also highlighted.

Prateek Sibal

UNESCO has developed a comprehensive recommendation on the ethics of AI, achieved through a rigorous multi-stakeholder process. Over two years, 24 global experts collaborated on the initial draft, which then underwent around 200 hours of intergovernmental negotiations. In 2021, all 193 member countries adopted the recommendation, emphasizing the global consensus on addressing ethical concerns surrounding AI.

The recommendation includes values of human rights, inclusivity, and sustainability, serving as a guide for developers and users. It emphasizes transparency and explainability, ensuring AI systems are clear and understandable.

UNESCO is implementing the recommendation through various tools, forums, and initiatives. This includes a readiness assessment methodology, a Global Forum on the Ethics of AI, and an ethical impact assessment tool for governments and companies procuring AI systems.

The role of AI in society is acknowledged, with an example of a robot assisting teachers potentially influencing learning norms. Ethical viewpoints are crucial to align AI with societal expectations.

Prateek Sibal advocates for inclusive multi-stakeholder conversations around AI, emphasizing the importance of awareness, accessibility, and sensitivity towards different sectors. He suggests financial compensation to facilitate civil society engagement.

In conclusion, UNESCO’s recommendation on AI ethics provides valuable guidelines for responsible AI development. Their commitment to implementation and inclusive dialogue strengthens the global effort to navigate ethical challenges presented by AI.

Galia

The speakers in the discussion highlight the critical role of global governance, stakeholder engagement, and value alignment in working towards SDG 16, which focuses on Peace, Justice, and Strong Institutions. They address the challenges faced in implementing these efforts and stress the importance of establishing credible value alignment.

Galia, one of the speakers, emphasizes the mapping exercises conducted with the OECD regarding risk assessment. This suggests that the speakers are actively involved in assessing potential risks and vulnerabilities in the context of global governance. The mention of these mapping exercises indicates that concrete steps are being taken to identify and address potential obstacles.

Both speakers agree that stakeholder engagement is essential in complementing global governance. Galia specifically highlights the significance of ensuring alignment with values at a global level. This implies that involving various stakeholders and aligning their interests and values is crucial to achieving successful global governance. This collaborative approach allows for a wider range of perspectives and enables a more inclusive decision-making process.

The sentiment expressed by the speakers is positive, indicating their optimism and belief in overcoming the challenges associated with implementation. Their focus on credible value alignment suggests that they recognize the importance of ensuring that the principles and values underpinning global governance are widely accepted and respected. By emphasizing stakeholder engagement and value alignment, the speakers underscore the need for a holistic approach that goes beyond mere top-down control and incorporates diverse perspectives.

In summary, the discussion emphasizes the vital aspects of global governance, stakeholder engagement, and value alignment in achieving SDG 16. The speakers’ identification of challenges and their emphasis on credible value alignment demonstrate a proactive and thoughtful approach. Their mention of the OECD mapping exercises also indicates a commitment to assessing risks and vulnerabilities. Overall, the analysis underscores the significance of collaboration and the pursuit of shared values in global governance.

Nobuhisa Nishigata

Japan has emerged as a frontrunner in the discussions surrounding artificial intelligence (AI) during the G7 meeting. Japan proposed the inclusion of AI as a topic for discussion at the G7 meeting, and this proposal was met with enthusiasm by the other member countries. Consequently, the Japanese government has requested the OECD to continue the work on AI further. This indicates the recognition and value placed by the G7 nations on AI and its potential impact on various aspects of society.

While Japan is proactive in advocating for AI discussions, it adopts a cautious approach towards the introduction of legislation for AI. Japan believes that it is premature to implement legislation specifically tailored for AI at this stage. Nevertheless, Japan acknowledges and respects the efforts made by the European Union in this regard. This perspective highlights Japan’s pragmatic approach towards ensuring that any legislation around AI is well-informed and takes into account the potential benefits and challenges presented by this emerging technology.

Underlining its commitment to fostering cooperation and setting standards in AI, Japan has established the ‘Hiroshima AI process’. This initiative aims to develop a code of conduct and encourage project-based collaboration in the field of AI. The process, which began in 2016, has seen a shift from voluntary commitment to government-initiated inclusive dialogue among the G7 nations. Japan is pleased with the progress made in the Hiroshima process and the inclusive dialogue it has facilitated. It is worth noting that despite unexpected events in 2016, the process has continued to move forward successfully.

Japan recognises the immense potential of AI technology to serve as a catalyst for economic growth and improve everyday life. It believes that AI has the ability to support various aspects of the economy and enhance daily activities. This positive outlook reinforces Japan’s commitment to harnessing the benefits of AI and ensuring its responsible and sustainable integration into society.

In conclusion, Japan has taken a leading role in driving discussions on AI within the G7, with its proposal being well-received by other member countries. While cautious about introducing legislation for AI, Japan appreciates the efforts made by the EU in this regard. The establishment of the ‘Hiroshima AI process’ showcases Japan’s commitment to setting standards and fostering cooperation in AI. Overall, Japan is optimistic about the potential of AI to generate positive outcomes for the economy and society as a whole.

Session transcript

Moderator:
th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Galia :
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Moderator:
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Nobuhisa Nishigata:
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . et cetera. And then now we had the Hiroshima AI process and the second round of our chair in the G7 this year. So then we had many things happening, like, for example, 2019, the G7, the French chair, they introduced the G-PAY, the Global Partnership on AI. And the same year, Japan hosted the G20 meeting. And G20 agreed on the G20. It’s on an AI principles, but it is just the same, that the OECD’s principle is kind of copy in the same text. Then we have some development in the afterwards, and then it comes to this year, to 2023. So next slide, please. Next one, yes. So just it’s more kind of history now. It’s seven years ago, the photo from Takamatsu. And then we had some discussion. Next slide, please. It’s going to show up some detail, yes. So it is the first time that at that time, the minister, Mr. Takaichi, the proposals and the discussions among the G7 at the time to talk about AI, like what the risk, what the opportunity, and then what next. And then Japan wanted to have some common understanding, common principles to cope with this new technologies at that time. So the bottom line, maybe touching upon the relationship between innovation and regulation, those kind of things, just think about what Japan is right now. We are facing several big social problems that are aging. And we need more people, I mean, to sustain our economy. And then we need more machines. So Japan is, I think, one of the kind of aged position to more like we need more machines to help us to sustain the economy, the business, and et cetera, or even the daily life. So we are very much friendly to the AI, but of course, we recognize some unsightly in this technology. So then we started a discussion at the G7, kind of trying to how they feel about the AI at the time. Then fortunately, our proposal on the AI discussion was very well received by the member of the G7. So the Japanese government decided to ask the OECD to continue the work further. Then they came out to the OECD principle in 2019. So that the kind of the beginning, the whole history of what we have now. And then the next slide, please. So that’s just the introduction. Since Gaia didn’t have the deck, I can do it for her. That’s a lot of OECD principles. It’s very simple, 10 principles. The first five is more like the value-based principles. And then the other five is more like a recommendation to the policy makers of the government. And they just, the 10 things. So the next one, please. And this year, as the chair of the G7, and we had the first, the hosted digital and tech ministers meeting in Takasaki in Japan. And it’s not only the AI ministers meeting. So we have like eight themes. And then the one is of course about AI. The third theme is responsible AI and global AI governance. And actually the ministers discuss more about the interoperability of the AI governance. Looking at some, you know, the European countries are working hard to pass the AI Act on it. Of course, we know it. But on the other hand, still Japan is not the one to introduce the legislation over the AI technologies yet. We would like more to see to, you know, the more opportunity or possibility of this technology. So in Japan’s perspective, it’s too early to introduce the legislation yet. In Japan’s perspective, it’s too early to introduce the legislation over the AI. But on the other hand, we respect what the EU is doing. So then like we try to start the discussion about the international kind of interoperability in the governance level. So that, you know, we don’t want to put more burden on the business side. I mean, of course, like a multinational people should work everywhere in this globe. So the thinking about the proposal. Then the next slide, please. Yes, thank you. Then like, so before the ministerial meeting, we are thinking more about the interoperability. But now sometimes these kind of things happen in the G7 things, like escalation to the leaders. So then like what happened is like the leaders agree to create or establish the what we say Hiroshima AI process. And the discussion is more focused on the generative AI and the foundation model, the new technology. And then now again, we are asking OECD to some support to summarize the report for the stock taking and the risk and the challenge and the opportunities on the new technology, particularly for the generative AI. Then, of course, the goal is some development of the code of conduct in organizations, or like a project-based cooperation to support the development of the new responsible AI tools and the best practices. Then you can see the link here. Then this is kind of ministerial declaration in September. And then the G7 delegates are working hard to compile the report, which is mandated to report to the leaders by the end of this year. And do we have more? Oh, no. So that’s about it. So in the end, coming back to the point from the moderator, so Japan is more, want to more to see the new technology can do, particularly for our society. I mean, not only Japan, but also the whole world. So thank you very much. Thank you. That was a really good overview of what influences, perhaps the space that a national government or an organization has to deal with when we’re thinking about how we set international principles and guidelines. How do we make them, how do we bring them home? And then how do we bring our own issues that we have in our own societies and economies back into these spaces to shape some of those responses?

Moderator:
So that was a very nice full circle there. To continue on this track of how national governments deal with the international policy space, and how do they bring their own opinions into it, we’re going to move from Japan to Egypt from the room online. And we’re going to hear from Ms. Suzanne Akobawi. Suzanne, I hope you are well connected and you can hear us. We can see you and we can hear you. The floor is yours. Perfect.

Suzanne Akkabaoui:
Thank you so much. Thank you for the opportunity to take part in this very interesting discussion. And with such an esteemed panel of guests. I’m Suzanne Akobawi. I am an advisor to the minister of ICT on data governance. So this stays a bit about how we are moving towards creating an infrastructure, institutional, legal, technical infrastructure to be in line with the technological advancement that are happening. And clearly in relation to AI as well. So just to give you a bit of background, Egypt has a national AI strategy that aims to create an AI industry in Egypt. It also wishes to exploit AI technology to serve Egypt’s development goals. The AI strategy was built on four pillars, AI for government, AI for development, capacity building and international relations. It also set and has four enablers, governance, data, ecosystem and infrastructure. The strategy was drafted according to a model that promotes effective partnership between the government and the private sector in a way to create a dynamic work environment and to support building digital Egypt and achieve digital transformation in a way that is led by AI application. One of the main principles of the strategy is that AI should enhance human labor and not replace it. Unlike our friends in Japan, we are very young, so we don’t have a lot of experience in Japan, we are a very young society and the majority of the population is between 16 to 45 years of age. So we face challenges with respect to the acceptance of AI and in showing that it has positive aspects other than taking away jobs from this young population. So one of our main principles for the strategy is that AI should enhance human labor and not replace it. And this requires that we conduct a thorough impact assessment for each AI product focusing on whether or not the AI is the best solution to the problem and what are the expected social and economic impacts of each new AI system. The strategy also emphasizes the importance of fostering regional and international cooperations. As mentioned earlier, we are working as mentioned earlier, we are members of the OECD AI network. The 70 countries that were mentioned earlier were one of them. Recently we have enacted, so we’ve published an AI charter for responsible, a charter for responsible AI. The charter is divided into two parts, general guidelines and implementation guidelines. The general guidelines give a layer of details about how to implement the principles that were in the strategy. And so in the general guidelines, we have a primary goal of using AI in government. And the purpose behind it is to promote the well-being of citizens and combating poverty, hunger, inequality, et cetera, which is in line with the human centeredness principles. With respect to any, we have the general guidelines also provide that any end user using an AI has the fundamental rights to know that they are using and interacting with an AI system. Again, a reaffirmation that no individual should be harmed by the introduction of an AI system, especially with respect to job creation. We also have, I mean, there is a list of general guidelines that are present in the document and all in line with the principles that were of the OECD principles. For the implementation guidelines, which is the second part of the charter, it provided that the AI should be robust, secure, and safe throughout the entire life cycle, that any AI project should be preceded by a pilot and a proof of concept, and that additional measures should be in place in case of sensitive and mission critical AI applications. So in short, this is where we stand. We are in line with the existing principles, guidelines, and frameworks that were decided on the international arena. And we have a clear understanding of our cultural differences, trying to find ways to bridge the gaps, the cultural and sociological gaps that come with the AI. That come with the technological advance.

Moderator:
Thank you so much, Suzanne, for sharing all of that and really emphasizing how AI approaches, policies, and frameworks need to be responsive to national context, cultures, local expectations while aligning with global values and also making sure that policies are interoperable with some of the other countries and regional and global initiatives so that we can manage towards truly global governance goals that we have. So as we’ve heard from our speakers from the national government side, I’m going to turn to some of our non-governmental stakeholders, and I’m going to go full circle and go back to international initiatives at the end, giving a bit of a breathing time to our newest speaker who joined us. Welcome, Thomas. So I’m going to turn now to Mr. Owen Larter from Microsoft and ask you, now that you’ve heard from the international space a little bit and how national governments cope with this challenge, how does that happen in a private company? How do you implement this? How do you come up with some of your own? And how do you dialogue with these initiatives?

Owen Later:
Fantastic. Hello, is this on? Can people hear me? Excellent, thank you. Good morning, everyone. My name is Owen Larter. I work on responsible AI public policy issues at Microsoft. It’s a pleasure to be here and a pleasure to be able to join such an esteemed panel. So we are enthusiastic at Microsoft about the potential of AI. We’re excited to see the way in which customers are already using our Microsoft AI co-pilots to be more productive in their day-to-day life. And I think more broadly, we see a huge amount of promise in this technology to help us better understand and manage complex systems, and in doing so, respond to major challenges, whether it’s in relation to healthcare or the climate or improving education for everyone. But there are clearly risks. We feel as a private sector company developing this technology that we have a real responsibility to lean in and make sure that the technology is developed in a way that is safe and trustworthy. So I sort of wanted to talk about three buckets of responsibilities that I view Microsoft as having to contribute to. The first one is to make sure that we’re developing this technology in a way that is responsible. And so we’ve been building out our responsible AI program for six years now. We’ve got over 350 people working right across the company on these issues from a real diversity of backgrounds, which we feel is very important. So we have people who are deep in the engineering space, research, legal, policy, people with sociological backgrounds, all coming together to work out how we identify AI risks and then put together a program internally that can help address them. So we’ve got sort of the core of our program, our responsible AI standard. This is an internal standard. It is based on the OECD principles. It’s based around making sure that people are upholding our AI principles at Microsoft. We’ve got tens of requirements across our 14 goals, and it really is a rule book. So anyone at Microsoft that is developing or deploying an AI system has to abide by this responsible AI standard. We’ve also shared this externally now, so anyone could go out and find it online if you type in Microsoft’s responsible AI standard. We think this is really important, A, to show that we’re doing the legwork here, and it’s not just nice words, but B, so that others can critique it, and build on it, and improve it as well. And then we’re building out the rest of our responsible AI infrastructure at Microsoft as well. So we have a sensitive uses team that anytime we’re engaging in developing a higher risk system, we bring greater scrutiny, we apply additional safeguards. We have an AI red team that is a centralized team within the company and goes product to product before we release it, making sure that we’re evaluating it thoroughly and that we’re able to identify, measure, and mitigate any potential risks. That’s that first bucket around responsible development. I also think we as a company, and we as an industry, quite frankly, have a real responsibility to lean into governance discussions like this. So we have recently founded the Frontier Model Forum with a number of other leading AI labs. We are trying to accelerate work around technical best practice on frontier models in particular. So these are the really highly capable models that offer a lot of promise but also compose some very significant risks as well. We want to develop that best practice, we want to implement that ourselves as companies, but we also want to share that externally to inform conversations on governance. And we’re really pleased to be able to engage internationally in global governance conversations, very supportive of the work that is going on at the UN, you know, the UN doing a very good job I think of catalyzing a globally representative conversation, UNESCO’s recommendations on the AI framework, very supportive of it, and of course all the technical work that is being done by the OECD in the background, very supportive of that as well. And I think the final responsibility we have is to lean in and help shape the development of regulation as well. So the self-regulatory steps that we’ve taken we feel are really important but they are just the start. We do feel that this new technology will need new rules and so we want to lean in, we want to share information about where the technology is going, it’s moving in a very, very fast pace, so how can we help others sort of understand exactly the trajectory of the technology. We want to share what’s working in terms of how we’re identifying and mitigating risks internally and also what’s not working quite frankly. And then finally we want to help build capacity. I think this is going to be a really key issue to underpin the development of governance frameworks and regulation in the coming years. How do you make sure that governments have the capacity to develop viable, effective regulation and then also critically how do you ensure that regulators have the capacity to understand how AI is going to impact their sector, whether it’s healthcare, whether it’s financial services and be able to address the risks that they may pose. So I’ll stop there for now and pass it back to the chair.

Moderator:
Thank you Owen. I think that the picture that it paints for me, it was very, very structured so thank you for that. It always makes the moderator’s job easy when there’s a clear one, two, three, four points in a speech. It really strikes me from what you say is that through these steps of responsible development, governance, regulation, capacity building, multi-stakeholder and cooperation between governments, private sector, civil society, the technical community, academics, research, it’s not one or the other. It’s not that one sector needs to do this and the other sector needs to do that, but we all need to be at the table at all of those levels to really make this work and then actually have the buy-in to be able to implement all of this when the rubber meets the road. You referenced how Microsoft dialogues and supports some of the UN initiatives, so I think that was a good segue to turn to Pratik and ask how does UNESCO think about all this. You’ve done a lot of work in coming up with the ethical guidelines on AI. We’ve missed the beginning of the section. We’ve had a little poll here to the audience to see how familiar the audience is with some of the AI policy frameworks out there, and UNESCO is closely second after OECD, so there’s a good understanding of what is in there, I think, in the guidelines, but I think it would be great to hear a little bit about what you do and how it works when you actually try and implement this, what are the lessons that you’ve learned, and what the challenges are in actually bringing those global principles into the national level, building capacity as Owen mentioned, and how is that working?

Prateek Sibal:
And how much time do I have? Five minutes. Okay. Right. Thanks, and apologies for being late. There was a scheduling conflict and I was hosting another session. So just very briefly on the UNESCO recommendation on the ethics of AI, it was developed through a multi-stakeholder process over a period of two years. The recommendation itself, we had a group of 24 experts which were selected from around the world who prepared the first draft. This draft was widely consulted with different stakeholder groups in different regions, in different languages, and then the document went through an intergovernmental process of about 200 hours of negotiations. And then we had this as the first global normative instrument on artificial intelligence which was adopted in 2021 by 193 countries. As far as the structure of this recommendation goes, maybe it’s worth spending a little bit of time of why we are talking about ethics. And so when we are talking about technologies, there are different kind of views of how we see technology. One is a very deterministic view of technology, that we will have technology, it will guide our life and it will do things. Then we have a very instrumentalist view of technology which is like, oh, it’s just a tool and it’s up to us on how we use it and what we do with it. And then there’s a third view of technology which is kind of like technology is a mediating force in society. So not only is technology influencing our actions, but also we are influencing how technology is shaping the world. So let me give you an example. At a very micro level, we know speed breakers, right? They force us to slow down. It’s a technology which is embedded with a script. At a macro level, if in a classroom you have a teacher and then you put a robot there to assist in the teaching, won’t our ideas of what teaching and learning looks like, what is the role of a teacher in our world, also start shifting? So there’s a shift which will happen in terms of norms. Now when we talk about ethics, it’s not just about saying, okay, these are the principles. We need to go into why. Because companies, developers, all the people who are developing and using AI, per se, are embedding technology with certain scripts. And these scripts need to be informed by ethical values and principles that we want. And this is what the recommendation does. It talks about values of human rights. It talks about leaving no one behind. It talks about sustainability. And then it goes into articulating the principles around transparency, explainability. And once we talk about these principles, it gives a clear indication to the developers, users, okay, this is how the technology should interface with us. Now it goes on further than to talk about the policy areas and what specifically needs to be done, for instance, in the domain of education, in the domain of communication and information. We have so much misinformation, disinformation going around. So it’s a very beautiful document, I would say. And I would invite you to look at it. Now how are we going to address the second part of your question, Dimya? How are we going about addressing the implementation part? Because that’s where the change would hopefully happen. The recommendation itself calls for the development of certain tools, a readiness assessment methodology which has been developed by UNESCO to look at where do countries stand vis-a-vis their state of AI development, vis-a-vis the policy areas and so on in the recommendation. And this is ongoing in about 50 countries around the world. And next year, in 2024, we’ll have the second global forum on the ethics of AI, which will be a platform in February, which will be a platform to learn from what’s coming up around in different parts of the world. Another tool is an ethical impact assessment. And there are so many of these tools. And that is wonderful to have so many diverse perspectives. This is really to guide companies, to guide governments who are procuring AI systems on what are the ethical aspects you need to look at, what at each stage of the AI lifecycle, what do we need to be concerned about. Going forward, I think capacity building was also mentioned by Owen. We don’t need to wait for these kind of regulations to be put in place to start working on capacity building. As an example, we are working with the judiciary. And the judiciary can actually, even in a lot of countries where you don’t have AI, in most countries, actually, we don’t have any kind of AI regulation, they can rely on existing human rights frameworks or other laws like data protection laws to start addressing the challenges around bias, discrimination, or privacy, and so on. So at UNESCO, we have been working with the judiciary for over 10 years. And we have reached over 35,000 judicial operators over these 10 years in 160 countries on issues related to freedom of expression, access to information, safety of journalists. And in 2020, we started working on AI and the rule of law. And we have now a massive open online course, which was used by about 5,000 judicial operators. And I say judicial operators means judges, lawyers, prosecutors, people working in legal administrations on what are the opportunities of using AI in the judicial system. So use cases around case management, we caution them about predictive justice. But also, what are the broader human rights and legal implications of this technology? And how can they address those challenges? Because you will also start seeing binding judgments, and we are already seeing this coming around. And we’re working specifically with regional human rights courts in Europe, in Latin America, in Africa, so that when we have those judgments coming on, it percolates down to the national level. And finally, I will say the work that we’re doing, we’re doing around capacities for civil servants. We keep on loading civil servants and governments with a lot of new work, complex, volatile, uncertain environments and ask them to work on regulation and implementing them without really equipping them with the necessary skills. We saw the case in Netherlands, even the Robodeb scandal in Australia, where AI systems were used and thousands of people were deprived of public benefits, and which had very serious implications. So these duty bearers, we need to work with them on strengthening their capacity. So we’ve developed an AI and digital transformation competency framework for civil servants. And in fact, I was this morning, we were launching a dynamic coalition on digital capacity building, which will focus on capacities for civil servants. So I’ll stop here and happy to go on later.

Moderator:
Thank you so much, Pratik. There was a lot there to unpack, but what particularly was striking in your comments that we don’t need to wait for AI regulation to start addressing some of those biases that we sometimes see coming out in AI systems. And I think that that is something to note here. We finally have a full panel, as you can see, we’ve expanded beyond the door, the table. I’m sorry that most of you are sitting so uncomfortably. But yeah, we really packed everything in here. Welcome, Dr. Senter, to the conversation. We’ve been really jumping between national and international frameworks, and how do we set some of these guidelines, norms, how do we implement them, and having this conversation. We started this morning with a poll on how aware some of the audience is here by some of the initiatives that are in place at national, international levels. We haven’t asked them about the White House framework, but we did ask them about the NIST, risk assessment framework. And I have to say, that was the one that the audience was least aware of. It was a 3.9 awareness on a scale of 10. So with that introduction, not to say that anything wrong with the framework, but it might need a little bit of refresher for the audience, and what the U.S. is doing. I think the audience is keen to hear from you on how the U.S. is positioning around AI governance, AI frameworks at the national, at the international level, and what are some of the approaches that you’re taking.

Set Center:
Thank you, and I’m sorry I was late. I think your taxi must have been faster than mine coming from our previous event. So speed and regulation are in natural tension. I think we know that in an AI context. We know that in any technological governance framework. It’s most conspicuous right now because of the intensification of political and cultural attention on AI, and the seeming inadequacy of governments to meet the moment or at least perception that they’re not moving fast enough. And so I think that dynamic is exactly the right one to frame this session around. In the United States, we have two foundational documents that preceded the chat GPT moment that came out in the last two years. One is the AI Bill of Rights, which provides a sociotechnical framework for dealing with automated systems that is sector agnostic. So in other words, how do we think about a risk framework for any kind of automated system? And the other is the Risk Management Framework, which scored a 3.9. What is the highest we can score? 5.9 with the OECD. Well, next year, we were going to come after the, if Audrey was here, we would come after her and go for a 5.9 next year. The Risk Management Framework, which shares some commonalities with the OECD framework insofar as it’s multi-stakeholder, 240 organizations contributed over 18 months. It was a rigorous effort to solicit views from industry, civil society, to create a framework for users, all the way from users to developers across the entire AI life cycle to manage risk. Those preceded the moment in which all of us are here filling these rooms to talk about AI. What happened, and I think this happened at a national level and a global level, is that many of us forgot all of the work that had been done prior to foundation models emerging and all of the hard work and valuable work that had been done before that. All of us lurched into a new political moment, a new cultural moment, precipitated by the belief that we’re in a new technological era, or at least new era of AI. In the United States, obviously, a lot of the leading companies that were developing large language models, frontier models, are located there. We felt it was incumbent on the United States to move quickly. As many of you know, it is unlikely that we will rapidly move to legislation. I think that’s the case in many countries. We realized that the moment required action, and it required action that defined the problem in a new way and then set obligations around the developers of frontier models. Over the course of the spring and summer, the White House talked to these developers of frontier models, tried to understand and define the nature of the unique risks posed by these models as distinct from, or at least in addition to, the more basic, substantial but understood risks posed by AI that we’ve been talking about for several years, and then tried to create a technically informed framework for dealing with them. That emerged as something called the voluntary commitments, which companies have signed down to in the United States in two waves. Essentially, it asks companies to undertake a series of obligations to responsibly manage, in a secure, trustworthy way, their AI systems. What does it entail? I think what we would now consider fairly understandable basic steps at a level of principles, but I think when we get into the question of implementation, it gets quite complicated very quickly. The commitments essentially require things like red teaming, information sharing among the frontier model developers, so that as they discover emergent risks, they can share them with each other, so each company or developer will understand them. It includes basic principles of cybersecurity and cyber hygiene, with the belief and logic that the model weights that essentially provide the power around these finished models are sufficiently important to protect, that companies need to treat them essentially as the crown jewels of their IP. It includes disclosure, public transparency and disclosure. In other words, if you think about a basic idea like a nutrition card for a food product, you would want to disclose the basic information about how a model’s been trained, how powerful it is, so everyone understands the power of the model itself. The idea is this combination of internal technical work and external transparency will generate the kind of trust and security that we need as these models continue to rapidly evolve, and prior to, or as a bridge to, a legal or regulatory framework in which we can deal with them in a more substantial way. This is a bridge. It’s a first step. I think if we were to solicit a poll, I think one thing people have focused on is the voluntary aspect, as opposed to the technical criteria underneath the voluntary aspect. If you actually look at the technical criteria, it’s quite a serious effort by the engineers, computer scientists, designers to come to terms with what they’re building, and I would suggest it probably represents the best technical framework for thinking about the era we’re moving into, even if I think there’s going to be extraordinary diversity in the kinds of legal approaches we take in the coming era.

Moderator:
Thank you so much for that and thank you for joining us in your very busy schedule. I think, I hope that this would, maybe we should ask the question again at the end and see if this improves the numbers of the understanding of some of the frameworks, but I think what’s very useful for us to know that commitments and work on this don’t just come out of the blue, it’s really built on a long-standing conversations around the topic, beyond the boom moment of when AI really became so user-friendly that we all of us are using it now on our phones every day, but really it’s based on considerations and conversations that have been coming from such a long while and it does bring together not just policymakers or the policy teams in companies, but the engineers and those who set some of the technical work and the technical standards as well. So with that, I think that’s a good segue to turn to Clara and ask how does this work in global standard setting bodies, which is IEEE and how do you think about some of the AI challenges in your work?

Clara Neppel:
Thank you for having me here, it’s a pleasure. So as it comes to the polls, the question is of course what is our role here? And I would like to maybe echo what we just heard, that it needs actually both a bottom-up approach as well as a top-down approach. And IEEE started probably the same time as Japan, already in 2016, to think about what are the ethical challenges of technology. And it came because we have a constituency of 400,000 members, we are the largest technical association, so we’re not only a standard setting body, but we are an association of technologies. So this realization that there is a responsibility in creating this technology, which really I have to say it’s not neutral, it is really embodying the values and business models of those that create it, that we have a responsibility. And this is how this initiative in IEEE of ethical airline design came to existence. It was about identifying issues and how we can deal with them, we as technologists, and how we can deal with them in discussions with regulatory bodies. So what happened since then is that we developed a set of socio-technical standards. We are already developing a lot of technical standards, including the Wi-Fi standard that you are using right now. But when it comes to socio-technical standards, it is a completely different discussion, because what we heard here, we need to have then this multi-stakeholder approach. So we have in our standard setting bodies now people that were not accustomed to this, and we had to develop a common terminology, of course, when it comes to value-based design. When it comes to what does it mean to transparency, everybody agrees that transparency is important, but what does it actually mean? And what is it, first of all, what we want to achieve, and how we are actually achieving it at the technical level. So the set of standards, as I said, one is on value-based design. It is really around identifying what are the values or expectations of the stakeholders of an AI system for a given context, and how you are actually prioritizing them, because you cannot have everything built into the system. How you are actually prioritizing them, and how you are translating them into concrete system requirements. And I think it’s about also practice and experience. Since then, we have several projects, including public-private partnership projects from UNICEF, but also including industry, which prove that it is a very valuable standard, and giving this methodology to developers and system designers, which actually influences the outcome of the systems. You actually come up with a different system, which takes this value into account. As I already said, when it comes to transparency, we have a standard which defines transparency. There are different levels of transparency, so it gives a common terminology when we speak about these terms. The same thing if it goes to bias. We are discussing about bias, and that we have to exclude bias and deal with it. But again, we don’t know what bias is for a certain context. For instance, when we are going to a medical application, we actually want to have a certain bias. We want to have, for instance, different treatment of women and men, because we have different symptoms. So we need to have that kind of context sensitivity when it comes to certain things that we all agree on. So this was the bottom-up approach. And the question is, how does it come together with a top-down approach, with all these different frameworks that we also heard, and the poll very clearly showed that people know more or less about it. So we engaged also from the beginning with the OECD, with the EU, at the high-level expert group, with the Council of Europe. So I would say when it comes to this question that, of course, industry has, what is the interoperability when it comes to regulatory requirements, I think the standards will play a very important role, because that gives the, let’s say, the very practical approach, how to move from principles to practice. And we are part of the discussion. So we are part of the network of experts, giving, let’s say, the technical background of what are the challenges, what is possible to implement, what is realistic, and also reflecting it in our standards. And I wanted to give also an example of how this complementary between regulation and standards can work. When it comes to children, we have, there is a code, which is in the UK, the Children’s Act, which gives, let’s say, and this is something where, of course, technologists cannot decide for themselves what is the right way to do it. I think this is something that needs to be done in a democratic process. So for the children’s codes, everybody agrees we need to protect children. But again, what does it mean on the technical level? And we have a standard, which is called age-appropriate design, which complements that regulation. And I think that this is, and which gives also very clear guidance on how to do this age verification and so on. So I think that this is the bottom-up approach, and a top-down approach needs to come together. And another example is, that we are doing at the EU level, is how to map the AI Act requirements to the standards. There is also a report, which came out from the joint research center of the European Union, which made this mapping for IEEE standards and actually said that it fills really this gap when it comes to ethics, because a lot of standards are still, of course, focusing on a more technical level, which is also important, but you need both. Yes, I would end up with capacity building. I think that is very important for our certification process, which complements the standards. We started building up an ecosystem, an ecosystem of assessors. We already trained more than 100 people. And what is also important, that we have to have certifications of product and services, but also certification of people. So we need these assessors, which are already in our registries, and we need also certification bodies. So we actually trained some people from certification bodies to be able to make these assessments as well. So I think that these are the things that we will continue doing. Thank you.

Moderator:
Thank you so much, Clara. And thank you for sharing all of that from your work and making that connection between the bottom-up and top-down approaches. I guess capacity building is also in the middle or around all of this to make sure it all fits and that we all have all the necessary skills and capacities to deal with that. I’m going to turn to our last two speakers for today. So I would like to encourage you to please go online and type your questions or comments into the chat box so that we can weave that into the discussion, as I will give a last round to the panelists to react to one another and to your questions. I’d love to see some of them pop up here. The chat has been a bit silent, but do go online and share your questions so that we make sure that we weave in some of the perspectives from the room as well. And with that, I’m going to turn to Maria Paz and ask, you are here on the panel representing civil society. You work a lot on some of these issues that we’ve mentioned, in particular human rights. How does that come in to the discussion from your perspective? What are some of the challenges and opportunities that we have here?

Maria Paz Canales:
Thank you. Thank you very much for the invitation for being here. I think that the benefit of being almost at the end of the round of speakers is that I can build on top of what already has been said and add the component that is specific for the perspective of the civil society organization that we work in, the space of digital governance. And I think that I am very pleased of many concepts that I have heard from different actors here around the table, starting for the concept of a sociotechnical approach to the artificial intelligence governance. And I would like to believe that it’s technology governance at large, not only for artificial intelligence. And I think that one of the other things that have resonated in all the intervention that I have heard so far is the need of this clarification in terms of the framework and how they translate to concrete way to engage with the manner in which the design, but also the deployment and the adoption for many countries of the artificial intelligence technologies is happening. So I think that we need, on top of some of the elements that Clara was mentioning, for example, about the concept of what we mean by transparency, what we mean by bias, also I think that we need to understand, have a more clear understanding in terms of frameworks about what kind of risk we are addressing. What we are considering really a risk. From the work that I represent in the organization that I work on, we try to infuse in this conversation the human rights approach. And we consider that the risk that we are addressing when we are talking about the impact of artificial intelligence, it’s the huge and wide spectrum of impact that this technology is having in the daily life of every people around the world that is touched by the technology. And it touches the exercise of civil and political rights in a very concrete manner, but also increasingly the exercise of economic, social and cultural rights also. So when we see that many places in the world, and particularly governments, are embracing the use of artificial technologies for developing policies in different areas, we wonder if we are talking about the right risk when we are measuring. And if, for example, in what Galia was mentioning, that it had been this very thorough work conducted by the OECD in terms of building the database, are we really concentrating enough in hearing also from the right people in terms of how those risks and possibilities of risk are being measured? So I think that in the example that Clara was giving related to technical community, the bottom-up approach implied to hear the technical expert, and also it’s something that was pointed out by Mr. Santer in terms of how you build this voluntary commitment, trying to be really strong in terms of the technical aspect of the recommendations, of the guidance. And you, Clara, were mentioning that it’s necessary to hear the technical expert in order to really make the assessment converge with what is technically feasible and technically understandable for the ones that are called to implement it. The same way we should be thinking how the risk is connected with different communities that is impacting for the deployment of the technology. And usually those are not in the room. Usually those are not consulted. And even, like, I acknowledge the good effort that had been done in many consultative process, for example, for building the UNESCO recommendations, ethical recommendation, and also in the process of the NIST, RISC-CAS and MENFREG more, that I have the privilege to participate representing some consideration from civil society groups. Usually who are part of those conversations are not the people that it’s the bottom line in terms of impact of the deployment of the technology in society. So I think that we should continue to make a conscious effort to have that people in the room, to bring them into the conversation. This is also related with the capacity building issue that had been raised by several of you. We cannot expect that people is able to talk about how artificial intelligence impact in their daily life if we don’t approach them to the topic in a way that is really concrete for them and understandable. We will not expect that they speak the language of the technical standard. We will not expect that they speak the language of the regulatory issues. But they will talk about how they have been discriminating in the access of housing or employment or access to health or education or how the use of this technology is being weaponized for political manipulation or state control and surveillance of opposition forces in a specific country. So I think that I am very happy of what I am hearing around this table, but I think that there is still some kind of additional work in terms of the clarity of the frameworks of what we are addressing, what we are understanding in terms of risk. And at the end, when we use the concept of responsible AI or trustworthy AI, for me those are not facts that are really useful in terms of like unpacking what it should be done in terms of addressing the governance artificial intelligence issues. Those will be the result of unpacking in a good way the governance issue. And as a result of a good governance, in my perspective with a human rights approach, we will have responsible technologies and we will have trustworthy artificial intelligence. So I will leave it there for the next round of reaction. I am very happy to be here and bring that perspective.

Moderator:
Thank you, Maria Paz. First round of applause. We are not at the end just yet, because we have to hear from Thomas. And I think you set him up for a very difficult question, because in addition to working on these issues in your home country in Switzerland, you are also chairing the process at the Council of Europe that is coming up with the first binding convention on AI and human rights. So I am not going to ask all the questions that Maria Paz asked of you, because it will be very difficult to answer in the five minutes that you have. But how is the process considering some of the human rights impact and what do you expect from it?

Thomas Schneider:
Yes, thank you. And it is actually good that we live in a hybrid world, so I was able to follow the discussion and the poll already from the taxi through new technologies like Zoom meetings. And so, yes, hello everybody. Happy to be here. First of all, thanks to the ICC for putting this together, because I don’t think it’s a problem that we have several ways, several instruments to try and cope with a new technology. And I think this is the only way to go, because there is not one single instrument that will solve all the problems or enable all the opportunities when it comes to a new technology. So we need a mix of instruments. And we also should not forget that we already have technical norms, legal norms, cultural norms that guide our societies and that have guided us with previous technologies that have also been maybe more or less disruptive. And so we do not have to reinvent the wheel every time something new comes up. We may have to adjust it or maybe add a little bit to the wagon, but not everything is completely new. So it’s good to see what is new and what is not new or what is maybe a new version of something that we may already know. And actually, if you look at AI, in many ways, you can actually compare it in its disruptiveness that AI is replacing cognitive human work through machines. AI is a driver, is an engine that drives other machines. You can actually compare it in a number of ways to actually the combustion engines and their disruptive effect that they had a few 150 or 200 years ago and actually until now. Because also there, you don’t have one law that is regulating all engines worldwide at once. You have thousands of technical, legal, and cultural norms that are actually most of them not focusing on the engine itself, but on the machine that the engine is driven by or on the people that are actually guiding the machine, on the infrastructure that the machines are using. And there’s different levels of harmonization between these rules. If you take machines that move people in the airline business, you have quite a harmonized set of rules across the world. If you take cars, looking at my German friends here, they think that they can still live without speed limits. And apparently, they don’t live that badly without speed limits. In my country, it’s slightly different. The U.S. has even lower speed limits than we do. And some people drive on the left side of the streets, but the British or the Japanese can drive on the right side. Swiss roads they just have to pay more attention. So there’s different levels of interoperability or harmonization according to the specific application of an engine in a particular machine that is used for a particular purpose. It’s also there’s a difference whether you move goods or you move people there may be different requirements and so on and so forth. And we are about to do the same with AI in the sense that we try not to regulate the tool itself but the tool in the context that it’s applied and the tool may evolve very quickly but maybe the context is not so quickly changing because in the end it’s people and people tend to not evolve so quickly as maybe the tools which should actually make it easier for us to try and understand the processes that we’re in. And what the Council of Europe is doing is trying to fill one piece to add to this set of norms where we have technical norms that may be one element to actually react quickly to developments and solve issues on a technical level to the extent it’s feasible to create a certain harmonization to create a certain level of security also of predictability of systems. The cultural norms is something else that I will not I don’t have the time to go beyond German motorways but we also may have different ways of dealing with risks in general in different societies. And then you have the legal space where it is great to have industry players taking on responsibilities to regulate themselves. It is good to have guiding soft law guidance from UNESCO from the OECD. The Council of Europe has also developed since 2018 a number of sectoral instruments like an ethical chart on the use of AI in the judicial system, a recommendation on human rights impacts of AI systems in the field of media and elsewhere. So this is all fine but it may not be enough because and I will be very curious to see how these voluntary commitments are followed in the US because voluntary means like you can but you don’t really have to. Well there may be if the incentives are right a voluntary system may actually work better than a so-called compulsory system if it’s too complicated or not workable. So I don’t necessarily say that this is not a good approach but again I think it’s good that we have different approaches and while the European Union has decided to develop an instrument that is basically a market regulatory instrument the Council of Europe for those that are not familiar with the European system the Council of Europe is not the European Union. It is something that is comparable to a UN for Europe and it is 46 member states that have agreed on a set of norms on human rights democracy and rule of law. There’s about 250 conventions and thousands of soft law instruments and so the latest one of the latest things is that we are trying to agree on a number of very high-level principles that are probably not that different from other high-level principles that we’ve already seen in a convention and the new thing is that this is the first intergovernmental agreement that states commit themselves to live up to these principles and these principles are based on the norms of human rights democracy and rule of law and what is special in this case but there have been others before is that this is not a convention just for European countries for member states for the 46 member states of the Council of Europe. It is an open process where we had already from the beginning a number of non-European countries that are leading in AI like the US, Israel, Japan, Canada. We have also Mexico on board since the beginning. We have a number of other countries joining in particular from Latin America so this is the Council of Europe has the opportunity to offer a tool where states from all over the world that respect the same values of human rights democracy and rule of law but may have different institutional arrangements to do this to join become parties to a convention where we agree on a number of principles that should guide us and it’s not just human rights it’s things that are slightly more complicated because they are not that clearly institutionalized like democracy and to agree on a number of principles and then this is one thing because that will be another paper but we also work together with the OECD together with a number of standardization institutions together with the EU and together also with we are also watching what the US colleagues are doing in the NIST framework because in the end you need to have something that operationalizes paper and so what we are calling this is a human rights democracy and rule of law impact assessment which is not a particular tool but it’s a methodology that should help us unite and like create interoperable systems in our countries like the convention is a legal tool that should help us create interoperable legal systems within our countries. Thank you.

Moderator:
Thank you so much Thomas and I very much like your analogy there with the roads systems. We need some of the common principles we all know what to do and learn how to drive and know how to drive and what are the general rules but then we need the flexibility to be able to adapt to context and I think that does apply nicely to some of these systems. We have, I’m going to be generous, 10 minutes left actually seven and I’m going we have a lot of speakers but the audience has been gracious with us because they don’t seem to have many questions so what I’m going to ask of you is to take on one minute and I’ve set the timer one minute each to take your last comments reactions to the other statements or to share one lesson and or recommendation from your side. I’m going to go from that side to that side and you’ll see how many minutes we are running over as we get to the end of the queue so please be respectful of your fellow panelists so that we do have time to go to everyone. Pratik. I would rather give my one minute if someone has a question because it’s too heavy on the panel. Yes, please.

Auidence:
Hi, yeah, Ansgar Kuhn from EY. I had problems with submitting the question. Yeah, my question was around the capacity building side and actually less the capacity building from the point of view of getting the skills of doing it but rather from the point of view of enabling various parties to engage with the process because of the time commitments that are related to that which means either cost for being able to afford to have somebody in your organization if you’re an SME or something or a civil society organization you might not be able to carry that cost of someone who isn’t contributing directly to whatever the product is that you’re creating or if in academia you may struggle to get academic credit for having engaged in this kind of process. So does anybody have any suggestions in that space?

Prateek Sibal:
So I will quickly answer within my 30 seconds left and give back the floor. So I think from our perspective whenever we are having multi-stakeholder conversations we are very sensitive to the fact that not everyone is coming with the same level of knowledge about the technology and we put out some guidance from UNESCO on how to do multi-stakeholder governance of AI in a manner which is more inclusive. So which includes first building awareness. We ourselves have launched quite a number of knowledge products to facilitate that process. For instance, in a fun way we have a comic book on AI because we hear this very often that when you are in a multi-stakeholder conversation not everyone is as familiar with the topic. Some people feel maybe a bit intimidated when you are having the technical experts, the government and everyone coming from the IOs talking in their own jargon which is sometimes very hard to decipher. So we are sensitive to this concern. We’ve also supported people to participate in different fora so that financially but also financially compensate them for their time because civil society ends up doing a lot of free work in these consultative processes without any kind of financial support. And I know colleagues that we are meeting here working on weekends, working nights in civil society which is not fair at all.

Moderator:
Thank you for the question. Thank you Pratik. We have four minutes left but I’m going to take one more question.

Auidence:
I think maybe it’s easier if we all ask the question then any panel member can just catch on it. In four minutes, yes sure. Go ahead. So my name is Liming Zhu. I’m from CSIRO, Australia’s national science agency. We’re working on the science of responsible AI. I have a question on the system level guardrails versus the model level guardrails. We all know that risks are context specific but a lot of people worry if we push the responsibility to the system level, to the users, then the tech vendors can provide unsafe models. So the response would be your legal plans to do that. On the other hand, all the model level guardrails because the general AI is hard to understand, it’s hard to embed specific rules inside a black box model and we need system level guardrails. I’m just wondering whether there’s any comments. Thank you. Thank you. Gentlemen, 30 seconds for your question please. Hi, I’m Steve Park from Roblox. I understand that previously the G20 process has created something like the data free flow with trust. I’m wondering what the Hiroshima process, is there an expectation for that sort of a principle approach for AI as well? Thank you. Thank you so much. I guess that the online system was not really good on taking questions but I’m so glad to have that engagement. 30 seconds each panelist to try and answer. All right, race is on.

Owen Later:
Very good question about models versus applications. You need safeguards at both levels. You need to make sure that you’re developing the model in a responsible way but then when you’re integrating that into an application, you also need to make sure that there are requirements at that level as well. Otherwise, the mitigations that you’ve put in at the model level can just be removed or circumvented. So we think there’s been great progress on the code this week for developers. We think ultimately you need to extend that to deployers as well. And I guess 15 seconds to offer some thoughts on a way forward in terms of global governance. I think there’s been a ton of progress made over the last 12 months and that’s been reflected in the conversation here. I think as we move forward, we should think about ultimately where do we want to get to, what do we want a global governance regime to do and what can we learn from existing regimes. So to offer a few thoughts, I think we want a sort of framework for standard setting globally. I think organizations like the International Civil Aviation Organization where you have a representative process for developing standards that are then implemented at a domestic level, really, really helpful. I think we want to have conversations to advance a consensus on risk. I think of organizations like the Intergovernmental Panel on Climate Change which might be a good model to follow. And then I think ultimately we want to keep building that infrastructure. So both the technical infrastructure so that we can advance work on evaluations where we have still really major gaps to address but also continuing to have these types of conversations. The point that you were making, making sure we have a representative way of having these conversations on global governance and pulling in perspectives from across the world, I think it’s going to be really important.

Moderator:
Thank you, Owen. Very efficient and speedy in response as the private sector generally is.

Nobuhisa Nishigata:
Thank you. And answering the question, maybe I would say like back in 2016, it’s more like Japan started, initiated the discussion but this time for the Hiroshima process, I would say it’s G7’s collective effort. And the main point is that the focus is generatively on the foundation model. I mean the back in the slide, it’s more like about the voluntary commitment. So maybe just you can see some shifting of the discussion even in G7. So what I expect is just more like started by the government of Japan but now it’s all inclusive dialogue around the G7 and we see the other four as well. So I’m just, the Japan would say that we are very happy to see what happened even though we already have some things that we didn’t expect in 2016 but still I would say that things are going well around and we have to continue this kind of dialogue among the whole world. Thank you.

Moderator:
Thank you, Nobu-san. Clara. So very quickly, I think that we need really to balance between voluntary and legal requirements.

Clara Neppel:
I think that it is not the responsibility of the private sector to have guardlays for democracy and rule of law, for instance. So if we want to have certainty on this, we need legal certainty as well. So we need to have regulation on these issues as much as possible and I think that this legal certainty is also expected from the private sector. I think that what is really bad right now is the uncertainty that they are facing. And just responding to one of the answers, I think that how to engage, what I think is going to be essential is to enable feedback loops. I think that this is going to be one of the most important things, especially working with generative AI, enabling these feedback loops and making sure that these feedbacks are actually taken into account by retraining systems or taking them into account when learning from the aviation industry. So the benchmarking I think is also important and common standards, of course. Thank you. So I’m going to take on your points also.

Maria Paz Canales:
One of the examples in which it’s evident that we need some level of complementarity between voluntary standards and legal frameworks, it’s particularly linked with one of the questions around this responsibility at different levels of the safeguard being in the design stage but also in the implementation and functioning of the system. So for example, that’s a topic that should be considered as a part of the regulation, how we distribute and we create obligations related with transparency in the information that were mentioned also by Mr. Senter related to the voluntary commitment, how we ensure that between the different operators in the change of use, production and use of the artificial intelligence, there’s also enough communication that is not against the competition rules also or the intellectual property rules, but they are a shared responsibility and the legal framework accounts for those different responsibilities. 15 seconds on the role. I talked a little bit more during my intervention about this need of the bottom-up approach in terms of societies and different stakeholders in society, but this also applies geopolitically at the global level. In this conversation that are unfolding at the global level of discussion of governance, we need to hear much more about global experiences from different stakeholders that make this process of identification of risk and relevant elements in context much more sensitive to different considerations.

Moderator:
Thank you so much, Maripaz, and don’t try to silence anyone, but we’re making our gracious host very, very anxious, so three minutes, please.

Galia :
Three minutes? One minute each, three minutes together. But just to say, I think this has been a really, really rich conversation. This also made me optimistic that I think it is possible. I think we do have all the elements combined. I really like what Thomas said about how all these things can be complementary. I do think there are still challenges at the implementation level, how we make these things work. I think mapping exercises like what we did with the OECD on the risk assessment front, I hope, can be helpful in this regard. I also think we can think about what we mean by, when we say global governance, then how global can we do it and maintain a credible value alignment, which I think is very important. I think we also have the element of the stakeholder engagement that’s really important. I think this kind of forum is really critical in advancing this kind of conversation, so thank you. Thank you so much.

Thomas Schneider:
Thank you. I try also to talk in two times speeds, like the YouTube videos that you can watch double the speed. Well, I think we need to work on several levels. One is we need to reiterate and see whether we still agree on fundamental values about how we want to respect our human dignity, how we want to be innovative while respecting rights, and see that we are all on the same page on this. Then we somehow need to break it down and see, okay, what are the new elements, what are the new challenges, what are the legal uncertainties that we need to clarify, and then how do we best clarify them without creating a burdensome bureaucracy? How do we clarify? What can we do with technical standards? What is the best tool for solving the problem? So we need to know what are the problems, and then we need to know what are the best tools, and again, it will probably be a mix of tools. Some will be faster, others will be more sustainable, and I think we’re all working on it, and we need to just continue and cooperate with all stakeholders in their respective roles. r,

Moderator:
Thank you so much, Tomas. Dr. Sente on the account of last words. I feel like that’s an awfully positive message to end on, so I will mercifully cede my 45 seconds back. Thank you. That’s very gracious. Thank you so much. Apologies for the next session and to the host for running five minutes over, but I really do want to thank the panel for making the time and coming here and for this really rich discussion, for the audience for sticking with us. I wish we had three more hours and we still couldn’t have stopped talking, I’m sure, but there is a main session on artificial intelligence, I’m told, so see you there and see you around. Thank you so much again. Thank you.

Auidence

Speech speed

179 words per minute

Speech length

410 words

Speech time

137 secs

Clara Neppel

Speech speed

164 words per minute

Speech length

1373 words

Speech time

503 secs

Galia

Speech speed

76 words per minute

Speech length

566 words

Speech time

444 secs

Maria Paz Canales

Speech speed

166 words per minute

Speech length

1291 words

Speech time

467 secs

Moderator

Speech speed

122 words per minute

Speech length

2365 words

Speech time

1165 secs

Nobuhisa Nishigata

Speech speed

147 words per minute

Speech length

1447 words

Speech time

591 secs

Owen Later

Speech speed

234 words per minute

Speech length

1374 words

Speech time

352 secs

Prateek Sibal

Speech speed

164 words per minute

Speech length

1500 words

Speech time

547 secs

Set Center

Speech speed

146 words per minute

Speech length

975 words

Speech time

401 secs

Suzanne Akkabaoui

Speech speed

124 words per minute

Speech length

775 words

Speech time

374 secs

Thomas Schneider

Speech speed

173 words per minute

Speech length

1648 words

Speech time

570 secs

Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

In the analysis, multiple speakers highlight key points regarding Digital Public Infrastructure (DPI) and its implications. The first speaker stresses the importance of sustainable DPIs that take into account environmental factors. They assert that the Global Digital Compact (GDC) claims for sustainable DPIs align with the G20 meeting in India, which established a common agenda. The positive sentiment suggests a consensus on the need for sustainable DPIs.

The second speaker focuses on the significance of DPI by design, implementation, and governance. They provide Estonia’s success in digital government and connected government as an example. By adopting DPI by design, Estonia has effectively demonstrated the value of integrating digital technologies into government functions, resulting in positive outcomes. This observation strengthens the argument for the importance of DPI.

On the other hand, the third speaker raises concerns about the potential impact of mass surveillance. Although no supporting facts are provided, the negative sentiment suggests that the speaker believes mass surveillance has detrimental consequences. This viewpoint serves as a cautionary reminder to consider the potential risks associated with DPI, particularly in relation to individual privacy and civil liberties.

The fourth speaker advocates for a specific framework on “human rights by design.” They emphasize aspects such as privacy, freedom of speech, dignity, and autonomy. By unpacking the concept of human rights by design, the speaker highlights the need for clarity and guidelines to ensure DPI does not infringe upon fundamental rights. This argument underscores the necessity to address potential ethical and legal concerns related to DPI.

The fifth speaker argues for safeguards that can potentially halt or reverse harmful systems, specifically mentioning the importance of safeguards for digital identification. They highlight the significance of the ability to reconsider, reevaluate, and reinstate changes to negate any harms associated with DPI. This perspective supports the idea that proactive measures should be in place to mitigate any adverse effects of DPI.

Lastly, the sixth speaker expresses concerns over the right to anonymity and stresses the need for private space to protect civil and political rights. They mention that the right to anonymity is essential for freedom of speech and other civil liberties. This negative sentiment suggests worries about potential infringements on individual rights and the necessity to protect them in the context of DPI.

In conclusion, this analysis presents a comprehensive overview of various perspectives surrounding DPI. The speakers highlight the importance of sustainability, design, governance, human rights, safeguards, and individual rights within the realm of DPI. While positive sentiments indicate consensus on certain aspects, negative sentiments caution against potential risks and encourage the implementation of necessary precautions.

Eileen Donahoe

The analysis explores several key points regarding the role of digital technology in achieving the Sustainable Development Goals (SDGs). It highlights the immense potential of digital technology in accelerating the attainment of these goals. However, it raises concerns that only a small percentage (2%) of government forms in the United States have been digitized. This lack of digitization not only leads to significant time wastage for the public but also results in the loss of $140 billion in potential government benefits each year. This underscores the urgency for governments to prioritize the digitization of their public infrastructure.

The discussion also emphasises the need to embed human rights into digital public infrastructure. While there is a strong desire to expand access to digital services, it is crucial to ensure that the most vulnerable and marginalised communities are protected. The risk of inadvertently developing into surveillance states through digitisation must be carefully mitigated.

Furthermore, the analysis underlines the importance of a global multi-stakeholder approach, where governments collaborate with the technical community, civil society, academic experts and the private sector. This approach fosters collective involvement and cooperation in setting digital technology standards and policies. Eileen Donahoe stresses the significance of this approach to enable a smooth transition from domestic to global multi-stakeholder processes. However, it is acknowledged that this transition can be challenging for governments.

To facilitate the multi-stakeholder input processes, it is recommended to use the Human Rights framework as a global standard. The universality and recognisability of human rights frameworks make them an ideal basis for collaboration and adherence to global standards. Governments are more likely to feel comfortable adopting these global standards as they have already committed to upholding human rights.

The analysis further discusses the importance of global open standards in increasing accountability. Global open standards provide a valuable tool for well-intentioned governments, ensuring that the standards deployed are of a consistently higher calibre compared to what would occur if governments were left to their own devices. Through a global digital compact process with follow-up, soft norms can be established, adding pressure and encouraging accountability.

In addition, the analysis supports the building of a mutual learning ecosystem in different regions. It suggests that Digital Public Infrastructures (DPIs) should be made more context-sensitive. Building on the progress achieved during the Indian presidency, there is a call for expanding and adapting these principles to new areas. Regular reviews and discussions, guided by a safeguards framework, act as reference points for evaluating the progress and efficacy of DPIs.

Lastly, the analysis emphasises the importance of a cross-disciplinary approach. It highlights the necessity for collaboration between experts in norms, law and technologists to effectively implement human rights by design. A shared language is crucial for enabling effective collaboration and understanding between these different disciplines.

Overall, the analysis underscores the need for robust and safeguarded digital public infrastructure, the importance of a global multi-stakeholder approach, and the significance of embedding human rights principles into digital technology. It also highlights the value of global open standards, mutual learning and a cross-disciplinary approach. These factors collectively foster accountability, context sensitivity and progress in the field of digital technology and its impact on achieving the Sustainable Development Goals.

Speaker 1

The speakers in the discussion emphasized the significance of digital public infrastructure (DPI) and the successful implementation of DPI in Estonia. They pointed out that Estonia has long been a digital leader, and the philosophical and practical origins of DPI can be traced back to this country. They highlighted the principles of openness, transparency, and inclusiveness as the basis for Estonia’s digital reform. The government of Estonia, in collaboration with the private sector, has worked on digital identity, which is an essential component of DPI.

Trust and collaboration were identified as vital elements for the successful implementation of DPI. The speakers emphasized that the Estonian government and private sector combined resources and contributed equally to DPI, which helped build trust among stakeholders. Equality in contribution and responsibility was deemed crucial for fostering collaboration in DPI projects.

The discussion touched upon the need for governments to focus on fixing fundamental aspects such as data governance and digital authentication before moving on to advanced concepts like artificial intelligence (AI), Internet of Things (IoT), and blockchain. The speakers argued that many governments and societies are currently discussing advanced concepts without having firmly established the basics. Therefore, DPI serves as a reminder to prioritize the foundational aspects of digitalization.

The importance of sharing and collaboration among governments in the field of digital public infrastructure was emphasized. The speakers noted that while many governments make their tools available, there is a lack of reusing and collaboration. However, some countries, such as Estonia, Finland, and Iceland, have made progress in this area by developing certain digital products collaboratively and making them globally available. The speakers called for a greater push for sharing, reusing, and collaboration among governments to enhance the effectiveness of digital products.

Sustainability and mindset change were identified as challenges in implementing digital public infrastructure. The speakers acknowledged that changes take time and that people can be resistant to change. They also emphasized the importance of continuity as various projects come and go. The example of Internet voting in Estonia was highlighted, as it took several years for it to become popular and widely accepted.

The discussion concluded by highlighting the global nature of guaranteeing privacy, security, and human rights in the digital realm. The speakers stressed that these issues require concerted efforts from both the government and the private sector. Ensuring privacy and security is not solely the responsibility of the government or the private sector. The speakers also emphasized the importance of global movements in addressing these issues.

In conclusion, the discussion shed light on the significance of digital public infrastructure and its implementation in Estonia. The principles of openness, transparency, and inclusiveness were identified as driving forces behind Estonia’s digital reform. Trust, collaboration, and equal responsibility were deemed vital for successful DPI implementation. The need for governments to focus on fundamental aspects before advancing to advanced concepts like AI and blockchain was highlighted, along with the importance of sharing and collaboration among governments. Sustainability, mindset change, and guaranteeing privacy, security, and human rights were identified as challenges that require joint efforts from the government and the private sector.

Moderator – Lea Gimpel

Digital Public Infrastructure (DPI) is defined as society-wide digital capabilities that are essential for citizens, entrepreneurs, and consumers to participate in society and markets. It serves as the foundation for public service delivery in the digital era. DPI does not refer to foundational software or physical infrastructure such as fibre optic cables.

During the COVID-19 pandemic, many countries heavily invested in DPI to facilitate faster and prompt response. This shift to digital platforms resulted in the adoption of services like digital ID, payment systems, data exchange systems, and civil registries. However, the speedy implementation sometimes compromised the security, safety, and inclusivity of the systems.

Lea Gimpel advocates for the safe, secure, and inclusive implementation of DPI. Attention needs to be given to avoid risks such as data privacy issues, mass surveillance, and exclusion of vulnerable groups. DPI has an extended risk due to its implementation at population scale, and technologies with societal functions have long-term impacts.

To address these risks, the UN Tech Envoys Office and UNDP launched the Universal DPI Safeguards initiative. It aims to develop safeguards against risks in DPI designs, implementation, and governance.

The movement around DPI focuses on people’s protection and service delivery, emphasizing the approach over technology. Eileen Donahoe advocates for the use of technology to advance the Sustainable Development Goals (SDGs) and expand access to digital services globally. Embedding human rights by design into digital technology is also important in preventing the unconscious drift towards becoming surveillance states.

The United States lags significantly in digitizing government communication and rebuilding its infrastructure. Only 2% of federal government forms have been digitized, resulting in unclaimed government benefits worth $140 billion each year. The implementation of DPI is not solely applicable to low and middle-income countries, high-income countries also need to discuss their approach.

Successful DPI implementation in Estonia highlights the need for DPI standards and the integration of human rights by design. Concerns about potential surveillance risks arise if DPI is not correctly implemented. The speed of DPI implementation often makes implementing safeguards difficult, and the right to anonymity in DPI and Digital ID systems should be ensured.

Sustainability is an important consideration in DPI, as discussed at the G20 meeting in India. Insufficient safeguards in digital ID systems call for DPI safeguards that allow for system stop, reconsideration, or rollback if harm is found. Enforcement is required to ensure compliance with DPI safeguard frameworks. Global open standards increase accountability and add value by monitoring everyone’s system.

The government’s role is crucial in the digital era. It needs to be a partner for citizens and the private sector to build an innovation ecosystem that is crucial for the evolution of digital economies. Existing initiatives like the Global Digital Compact and Open Government Partnership provide an opportunity to create commitment for the DPI Safeguards Initiative.

Cooperation, sharing of technology, and learning are important for effective implementation at scale. Changing mindsets during implementation is crucial for success.

In summary, DPI enables citizens, entrepreneurs, and consumers to participate in society and markets. The Universal DPI Safeguards initiative addresses risks and develops safeguards. The movement around DPI emphasizes people’s protection and service delivery, focusing on the approach over technology. Human rights by design and the use of technology to advance the SDGs are crucial considerations. The United States needs to digitize government communication and infrastructure. High-income countries also need to discuss their approach to DPI. Standards, safeguards, and the right to anonymity are important in DPI implementation. Sustainability, enforcement, and global open standards play crucial roles. Government partnership and cooperation are essential, and existing frameworks provide opportunities for commitment. Effective implementation at scale requires cooperation, technology sharing, and mindset changes.

Robert Opp

The concept of digital public infrastructure should be seen as an approach rather than a technology or set of technologies, with a governance structure and appropriate safeguards. This approach is important for solving immediate problems and ensuring the future success of infrastructure projects. The United Nations Development Programme (UNDP) aims to complement consultations with ground-level work in three to five countries, testing and applying the emergent framework. They will gather feedback from the field tests to inform the development of safeguards. The UNDP also plans to support countries in implementing the framework and addressing knowledge gaps. Mindset change and incentivizing collaboration are necessary to advance digitalization. Overcoming the current mindset issue requires collective effort and a shift in thinking. Collaboration is crucial for developing scalable ways to implement digitalization. The safeguard initiative is a positive step towards changing mindsets and promoting the implementation and scalability of digital solutions. Sharing and collaboration are crucial for developing and implementing digital solutions. It is important to incentivize people to collaboratively work towards digitalization. The challenge of keeping up with emerging technologies like artificial intelligence requires continuous learning and adaptation. Developing a global approach with safeguards is essential for the success of digitalization efforts. Rolling out this approach quickly with trust and sharing is necessary. In conclusion, the concept of digital public infrastructure as an approach, along with governance structures and safeguards, is important for solving problems and ensuring success. The UNDP’s efforts to complement consultations with ground-level work and support implementation are valuable. Mindset change, collaboration, and safeguards are necessary for digitalization. Sharing and collaboration are key for developing digital solutions. The challenge of emerging technologies like AI requires continuous learning. A global approach with safeguards is crucial, and it should be rolled out quickly with trust and sharing.

Amandeep Singh Gill

Amandeep Singh Gill emphasizes the need for a safeguards framework for digital public infrastructure (DPI) due to the risks and issues related to safety, security, data protection, and societal inclusion/exclusion. It is important to address these concerns to ensure the effective and ethical use of DPIs. Gill advocates for multi-stakeholder participation in building and managing the safeguards framework, including contributions from the private sector and civil society. This collaborative approach ensures diverse perspectives and expertise are considered, leading to more comprehensive and effective solutions.

The foundations of DPI are based on international human rights commitments and the Sustainable Development Goals (SDGs). Recognizing the significance of these frameworks, DPIs are designed to align with and support the principles and objectives outlined in these global agendas. By incorporating human rights and SDG frameworks into DPIs, it becomes possible to promote inclusivity, sustainability, and socioeconomic development.

DPIs not only target government services but also play a critical role in boosting the innovation ecosystem by reducing barriers to innovation. By providing an enabling environment and infrastructure, DPIs encourage the development and adoption of new technologies, fostering digital innovation across various sectors. This not only benefits the public sector but also stimulates economic growth and offers new opportunities for businesses, including the emerging Fintech sector.

Promoting a digital economy that includes businesses from different sectors, such as Fintech, is crucial in building robust DPIs. Lowering entry barriers to innovation and increasing demand for digital services and products are central to DPIs. By creating a dynamic national digital economy, DPIs contribute to decent work and economic growth as outlined in SDG 8.

DPIs should also incentivize integration and usage in sectors that may initially not see the need for digital services. For example, it is crucial to demonstrate the benefits of using DPIs in agriculture to farmers who may not immediately recognize the advantages. By showcasing the value and potential of DPI services tailored to their specific needs, farmers can be encouraged to adopt digital solutions, leading to increased productivity and improved agricultural practices.

The G20 presidency offers an opportunity to make the digital development movement more sustainable and contextually relevant in Africa and Latin America. By building on the work conducted during the Indian presidency and collaborating with international partners, it is possible to address the specific challenges and opportunities faced by these regions, ensuring inclusive and equitable digital development.

Regular discussions and soft pressure can be cultivated to improve DPIs. By facilitating ongoing dialogue and regular review of principles and action frameworks, it becomes possible to identify areas for improvement and encourage adherence to established standards. This continual accountability and open communication contribute to the evolution and refinement of DPIs, aligning them with evolving needs and technological advancements.

Strengthening mutual learning ecosystems in various regions is a priority. Promoting cooperative learning environments and exchanging best practices between different regions can help accelerate digital development and enhance the effectiveness of DPIs. The success of the Nordic region in creating a mutual learning ecosystem serves as an example to be emulated in other parts of the world.

An initiative has been launched to unpack the concept of the right to anonymity. Amandeep Singh Gill is open to ideas and assistance in exploring this concept further. This initiative aligns with SDG 16, which focuses on peace, justice, and strong institutions. Unpacking the right to anonymity is crucial in ensuring the protection of digital rights while maintaining a balanced approach to privacy and security concerns.

In conclusion, Amandeep Singh Gill highlights the importance of implementing a safeguards framework for digital public infrastructure. This framework should address risks and issues related to safety, security, data protection, and societal inclusion/exclusion. Multi-stakeholder participation, aligning with human rights commitments and SDG frameworks, boosting the innovation ecosystem, and promoting a digital economy across sectors are key elements in building robust DPIs. The G20 presidency provides an opportunity to make digital development more sustainable, while continuous discussions and follow-up can drive improvement and accountability. Strengthening mutual learning ecosystems and exploring the concept of the right to anonymity are further steps towards promoting inclusive and ethical digital practices.

Henri Verdier

France has been at the forefront of developing digital public infrastructure (DPI), even before the term was officially coined. Their focus has been on aspects such as digital identity and public APIs. They understand the importance of public service rules, such as neutrality, accessibility, and equal access, in constructing DPI. By incorporating these rules, France aims to create a reliable and fair digital environment.

Notably, France recognizes the significance of the digital commons and the need for a free and neutral internet. They argue that DPI and the digital commons are essential for achieving this goal. While DPI refers to the infrastructure, the digital commons encompass concepts such as free software, open standards, and shared resources. The convergence of DPI and the digital commons has the potential to create a powerful and inclusive digital space.

In contrast to France’s proactive approach, Europe has stopped building public infrastructure since the 1970s. However, France insists that Europe should continue developing public digital infrastructure as digital identity and infrastructure for payment are as crucial in the digital age as roads were a century ago.

France also recognizes the potential for innovation and value creation through the OpenGov movement. They highlight that unleashing innovation in government processes can create significant value. This has been demonstrated in the past with initiatives focused on data and source code, and now with infrastructure development. France believes that the OpenGov movement can serve as a third source for unlocking value in the digital landscape.

Moreover, France acknowledges that public service can be implemented by the private sector, as long as certain rules are respected. They emphasize that while the private sector can finance public services, it should not take all the added value. This approach allows for greater flexibility and efficiency in the provision of public services.

Cooperation and community-building are highly valued by France. They argue that more emphasis should be placed on these aspects, rather than simply sharing code. Good documentation and specific types of codes are considered important details in the pursuit of effective collaboration.

However, France also recognizes the challenges that governments face in implementing digital transformations. They find it difficult to adhere to open standards and build small, reusable pieces of infrastructure. The historical context of digital disruption further complicates these projects. France advocates for a simpler, more efficient, and sustainable approach to digital transformations in government.

Using simple, open, and reusable standards can lead to more inclusive and sustainable infrastructure. An example of this is the case of Tuk-Tuk drivers in Bangalore, who developed a cost-effective solution to call a rickshaw, using Indian rules and leveraging infrastructure such as UPI and bacon. France emphasizes the value of adopting such standards to ensure that infrastructure is accessible and beneficial to all.

While infrastructure is vital, France also recognizes that it alone cannot protect democracy if it is misused for malicious purposes. A strong focus on implementing rules within the infrastructure is insufficient. Additional measures are necessary to safeguard democracy and ensure its integrity.

Efficient enforcement of approaches is key to success. By prioritizing efficiency, implementation and enforcement can be carried out in the most effective manner. This approach helps drive progress and ensures that initiatives are impactful and sustainable.

Lastly, France firmly believes in empowering people, unleashing innovation, and guaranteeing fundamental freedom as the most efficient way to promote economic and social development within a country. They advocate for a holistic approach that considers the importance of technology, innovation, and freedoms in shaping a prosperous and inclusive society.

In conclusion, France’s approach to developing digital public infrastructure is rooted in the principles of public service rules, open standards, and the use of digital commons. They stress the importance of continuing to build public digital infrastructure, advancing the OpenGov movement, and promoting cooperation and community-building. France recognizes the challenges of implementing digital transformations, but also highlights the potential for inclusive and sustainable infrastructure through the use of simple and reusable standards. They underline the need to protect democracy beyond infrastructure and advocate for efficient enforcement. Ultimately, France believes in harnessing the power of innovation and empowering individuals to drive economic and social development.

Session transcript

Moderator – Lea Gimpel:
Good morning and a warm welcome. Since the room is not full yet, if you want, please also take a seat here at the table with us so that we can have a discussion with you. However, we are going to start now, since it’s already time. My name is Lea Gimpel, I’m with the Digital Public Goods Alliance, where I lead our work on country implementation and artificial intelligence, and I’m your moderator for the session today. And with me, I have Moritz Frommel-Jo, he’s with the UN, with the UN Envoys Office for Technology, who’s co-hosting the session today, and he’s the online moderator. So this session is about the effect of governments of open digital ecosystems, and we are going to talk about how to establish a framework for secure and inclusive digital public infrastructure today. And we have a fantastic panel of esteemed speakers here with us in the room, and I will introduce them in alphabetical order to you. First, we have Aileen Donahoe, the Special Envoy and Coordinator for Digital Freedom of the US Department of State, welcome. Then we have Amandeep Gill, the UN Secretary General’s Tech Envoy, who’s also the co-host of the session, as I said, welcome, Amandeep. We have Nele Liosk, the Digital Ambassador-at-Large from Estonia, thank you, Nele, for being here with us. Then we have Robert Opp, the Chief Digital Officer of the United Nations Development Programme, hi, Rob. And Henri Vรฉdier, last but not least, the French Ambassador for Digital Affairs. And before I give the word to my panel, I would like to start by a quick introduction, because the term of digital public infrastructure is not always well-defined, so I think it’s wise to first set the stage and speak about DPI and what we mean by this term for this panel discussion. By DPI, we basically mean society-wide digital capabilities that are essential to participation in society and markets for citizens, entrepreneurs, and consumers, and which are the foundation for public service delivery in the digital area. And this definition basically emphasises the functionality of digital public infrastructure, so it’s really about delivering services, both public and private. So what we don’t mean by this term, really, is foundational software, for instance, that underpins any other software solution, so that is one of the common misunderstandings. And we are also not talking about actual physical infrastructure, such as fibre optic cables, so just to be clear on that. And this kind of digital public infrastructure I was talking about, so digital public infrastructure with society-wide functions, such as digital ID, for instance, payment systems, data exchange systems, and civil registries, they have seen a real boost during the COVID-19 pandemic, because many countries invested in these foundational infrastructures heavily for pandemic response, so for instance, for cashless transfers. And part of this also involved that a lot of attention was given to speedy implementation, and maybe not so much attention was paid to actual, secure, safe, and inclusive implementation of these technologies. And I think the important thing to consider here, really, is that digital public infrastructure with society-wide function really has an extended risk to us, right? If we talk about technology that is implemented at population scale, we have risks such as data privacy issues, mass surveillance, and, for instance, deliberate or accidental exclusion of vulnerable groups. And that’s something that we really need to take care of, and also fix in case there was something implemented during the COVID-19 pandemic that didn’t really consider any of these due to the speediness of implementation and the need to react. And currently, what we see is a lot of momentum around digital public infrastructure, so you might have seen that IGF, it’s a topic that pops up in many of these sessions. And we need to discuss now how to make safeguards into digital public infrastructure design, implementation, as well as the governance of it. Because there’s also path dependency, right? So if we implement digital public infrastructure at population scale now, these kind of technologies will have an impact on people’s lives over many years. So we need to get it right at this very moment. And there’s a window of opportunity to do exactly that. And for this reason, we would like to talk today about the Universal DPI safeguards initiative that was launched by the UN Tech Envoys Office, as well as UNDP just recently, and basically discuss in the session what are the risks and how we can mitigate them, which good practices exist from already existing implementation and the lessons learned around this, as well as about the role of such a global DPI safeguards framework and what role it can play for design implementation and governance in the future of digital public infrastructure. And with that, I would like to pass on the word to my distinguished panel over there. And I would like to invite Amandeep to first tell us a bit more about the safeguards initiative that you recently launched. So why do you think there’s a need for something like this, for such an initiative, and what are your plans?

Amandeep Singh Gill:
Thank you very much, Lea. Thank you to you and Moritz for moderating this panel. Very pleased to be here with the distinguished co-panelists. Why is there a need? I think the need arises from the growing interest and the growing consensus on the importance of DPIs. They have proven themselves to be a powerful way to enhance inclusion in the digital space, to drive innovation, to improve government service delivery, reaching the last mile. COVID was the big moment, but it’s been coming for a long time. And we have here on this panel in LA, so Estonia’s experience, the experience of India, many other countries. So it’s been coming for a while, and now there is global recognition. For instance, the G20 understanding on a framework for DPIs. Now, as you put it, before we get too far down this road, because there’ll be path dependencies, it’s important to put together some safeguards to ensure that some of the risks and the problems that we’ve already seen with digital public goods, digital public infrastructure, for instance, safety online, the security, the cyber security aspect, data protection aspect, the aspects related to inclusion or exclusion, the optionality, opt-in, opt-out type of issues, the issues related to buy-in from society, issues related to the legislative framework in which DPIs are placed. So it’s good to have a global standard, a global guidance that helps the players in the DPI ecosystem move forward confidently. We can’t say that you should not have DPIs, because that has its own opportunity cost consequences. For instance, we don’t say, you know, let’s just shut down the digital platforms. They also have billions online, et cetera. But we have to work actively to ensure that they continue to serve everyone, they continue to serve human flourishing, rather than, you know, create problems further down the road. That said, I would very concretely point to the call by the Secretary General in his policy brief on the Global Digital Compact for a safeguards framework on digital public infrastructure. You know, given the, and I’m sure Rob will speak about it, the demand that the UN has been seeing from the ground, the issues that we’ve been kind of facing in country, I think it is time to have this kind of framework, and the Secretary General has given that call. He’s also outlined this problem of fragmentation overall. So if we want to avoid fragmentation in this space, this can be a kind of a unifying baseline, and it can give civil society and other partners a common reference point. So this is the reason why jointly with the UNDP, we’ve launched this initiative. But this initiative won’t be limited to these two UN entities, others will join in. It’s an open call to all those who are involved in the DPI ecosystem, from DPGA, GovStack, to Dial and other important players. Also a call to civil society and private sector, who will be helping build this out and manage the interface between the tech side of it, the governance side of it, and the community side of it. Yesterday, I think even someone said that this is a socio-technical infrastructure we’re talking about. So it’s almost like a socio-legal technical infrastructure we’re talking about. So obviously the path forward has to be multi-stakeholder. Thank you.

Moderator – Lea Gimpel:
Thank you so much for these initial explanations. You already mentioned UNDP, so I would like to pass on the word to Rob for more about the Safe Grids initiative, why UNDP decided to join, and what have been your takeaways so far from UNDP’s experience in supporting digital public infrastructure implementation?

Robert Opp:
No, exactly. Thanks, Lea. Just building on what Amandeep said, where we are with this whole kind of discussion around digital public infrastructure is we’re essentially in the process of coalescing a movement around that, taking what has been done over the last 10, 15 years, in many cases, in countries like Estonia, India, and others, as Amandeep said, and looking at how can we offer this in a way that will help accelerate digital transformation in countries that are, let’s say, not as well developed in terms of their infrastructure, their digital infrastructure. And what’s really key with the whole concept of digital public infrastructure is that it needs to be seen as an approach rather than a technology or set of technologies. And that approach needs to have the notion of a governance structure around it with the appropriate safeguards. And as you mentioned in your intro, Lea, as the COVID pandemic basically hit countries and countries needed to respond, and as they really mounted their response, it was really clear that the kinds of requests that we received out of countries over time shifted from being very solution-focused to much more thinking about what is the overall ecosystem looking like and how do we shape that. And I think it’s natural that countries, they focus, generally speaking, on trying to solve an immediate problem, and that’s where you get a focus on technology first. And I think what’s exciting about this safeguards initiative is that we have now a chance to embed the thinking around what do you need to, when you’re planning for your ecosystem and you’re trying to solve your problem, you cannot forget that it needs to be accompanied, the technology needs to be accompanied by these kind of set of safeguards that should be in place to protect people, to protect the future success of your infrastructure work. And so on the safeguards initiative, the role of UNDP is to really work with the tech envoy’s office and the convening power that they have and to kind of run the consultations on their side but complement that with what’s happening in the field. And as we prototype and create hypothesis principles and safeguards, we want to test them at the country level. So we’ll be taking three to five countries, looking at the framework as it’s developing, testing that and seeing what it would actually look like on the ground and what do we learn from that, so creating that feedback cycle. And then as the safeguards framework emerges out of the consultations, out of the feedback cycle, really looking at well what would it take, what is it going to take to support countries to be able to actually put these in place and what kind of capacity needs will there be, really trying to understand what are the knowledge gaps that need to be addressed and so that this becomes much more, let’s say, easy to adopt for countries. We can’t just leave it at global principles, we have to really understand how countries are actually going to be able to do this. So that’s what we’re really focusing on, that’s what we’re really looking forward to in all of this. Thanks.

Moderator – Lea Gimpel:
So in a nutshell, let me just summarize before we move on to the country speakers. It’s about creating a movement around DPI, something that I really like in order to ensure that we protect people and at the same time deliver services. So I really love this notion and as you’ve all heard, it’s an open call for everyone to participate in, so please do so. And what I also really like is this idea of talking about an approach rather than technology because I think a lot of this discussion currently focuses on technological solutions and not so much about the governance aspects of these. And as you’ve all heard, countries are central to this initiative. It’s about developing ideas, it’s about testing in the field and then going back and working with that feedback. So I would like to move on to our country representatives. And first, I have Eileen here with us. The White House recently announced that the US will work on deploying robust and safeguarded digital public infrastructure. How will this approach be reflected in your engagement for DPI that empowers people via technology while also protecting their freedom?

Eileen Donahoe:
Great. First, let me say thank you for including me. I am a real neophyte on this topic and I’ve already learned a lot just from listening to all of you in this room and recently participated in another event with a subset in this room and I feel like I’m excited about this. I have an instinct that it’s really important, but I’m still catching the thread. So I’ll just say that up front. As a human rights advocate, what attracts me to this? I feel like it represents a very innovative combination of using technology to basically advance and jumpstart the SDGs in effect and expanding access to digital services around the world. But also, if done right, embedding human rights by design, as everybody has said. I would emphasize more explicitly human rights by design. And that’s a further conversation about what are the terms we use. Everybody knows that there is a tremendous yearning around the world to expand access to digital services and that we really do need technology to be an accelerant to meeting the SDGs. That’s the aspiration and the hope. It would be terrible if we did that in a way that we were actually making the most marginalized, vulnerable communities more vulnerable. So that is why, as everybody’s emphasizing, the technology and the standards go together simultaneously. And you raised it so well at the top, the tension between speed and that yearning. But if you do it in the wrong way, you’re only making things worse. And I would also add, I’ve hinted at this already, I do believe the international human rights law framework should be the normative foundation for thinking about this. I will admit the hard part is, what does that look like in practice? It’s the how. We know what, the how is the hard part. And I will acknowledge that I was in a room recently with Marianne from Access Now and she raised the point, and she just said it very explicitly, the big risk is surveillance states and this unconscious drift to becoming surveillance states. So that’s the really dark vision of this and that’s why we all have responsibility. On the U.S. side, basically that was, I hadn’t seen it, I looked it up last night, I have it. It is September 22, 2023. So this was after UNGA, so after our last conversation. And what really jumped out at me is this is a vision for the American people, it is domestic. People may be surprised to know the U.S. is really behind on this. I think people would be stunned. And the vision is to transform the way government communicates with the American people and rebuild American infrastructure. Which maybe all governments say that’s what they want to do, but I think people would be stunned at how far we are behind. And a couple of stats jumped out at me, basically only 2% of the federal government in the United States forms have been digitized. And the public spends more than 10.5 billion hours each year completing government paperwork. And about $140 billion in potential government benefits go unclaimed every year. So that gives you a sense of how far behind we are. So in effect, the United States is in this with everybody and has a lot to learn.

Moderator – Lea Gimpel:
Yeah, I think that’s a very good point, that it’s not only about low and middle income countries, right, but that we are speaking about high income countries and their approach to DPI as well. And I really like this idea of human rights by design, so that’s definitely something that I think we need to discuss. And talking about terms, I want to pass on to Henri, because France is the country that is always speaking about digital commons. And I would like to know from you, Henri, how is this concept of digital commons and DPI overlaps, or how they connect to each other, and where safeguards come in?

Henri Verdier:
Thank you, and thank you for the invitation. I will start with DPI. I was thinking that probably France started developing DPI before we even knew there were DPI. We did develop different level of digital identity. We’ve got France Connect. We are working hard on geographical information, public API, and so on. And this work was probably based on two sources. One was government as a platform, and I can welcome the work of the British Government Digital Service 10 years ago. And the other was a very ancient tradition of public service, with all its rules of neutrality, accessibility, mutability, equal access, and so on. The more we did work on these issues, I will come very briefly to comments. The more we did work on these issues, the more we have seen them as a more universal challenge, because we think now that if we want to preserve an open, free and neutral Internet, and if we want also to be true democracies, so without big states or big tech, we need this small layer of public services that enables everyone to act in the digital economy and to be part of the decision process. That’s why now we consider those challenges are universal and related to democracy. And that’s also why this commitment for DPIs meets our commitments to the digital commons. We can’t stress enough just how important the commons are to the Internet as we know it. Free software, open standards, data, knowledge commons are the very core, the true core of Internet. And these two commitments can easily be linked, so they don’t overlap. It is possible to conceive good IPOs that are not commons, and some commons do not become infrastructures. But when the two ambitions come together, it can be very powerful. And if we, just to finish, if we come back to the question of safety, I think that we cannot conceive a real democracy without public infrastructure. But not every infrastructure will empower democracy. That’s simple. So we need to conceive, and that’s why we welcome this work, we need to conceive a set of rules. Probably we know most of them, but we have to order them, like real transparency, transparent governance, shared governance, security by design, privacy by design, etc. But we need to make a proper work and to share this broadly. Because, again, I finish with this, we cannot, we and not just me, France, we consider that we cannot conceive a fair development of Internet without good public infrastructure, but that not every

Moderator – Lea Gimpel:
infrastructure will empower democracy. Thank you so much for these comments. And I, yeah, I really, I think it’s a fair point to say that we, like, in a way we know the rules, we know the principles, right? We talk about these buzzwords, transparency, accountability, and such. But I think there’s a real good point in it that we need to move forward from principles to practice. And do you have experience in implementing DPI? And as you said, I mean, you’ve been implementing DPI without calling it DPI for quite a while. And I think the same is true for Estonia. We talked about this earlier before the session, that what you’re doing, you don’t call it DPI, but it is DPI, what you’re doing. So Nelle, from your experience, as Estonia as a global digital leader, what considerations should be integrated in such a safeguards framework? And how should such a process look like when we develop it over the next couple of months?

Speaker 1:
Yeah, thank you. Thank you, Lea, and thank you for the opportunity to be here. And I believe I will actually summarize or build on what all my good colleagues have already touched upon. But the first is really the notion of the DPI. That is, I would say, a rather new term, but actually standing for some of these important principles in digitalization. So from this point of view, I believe we all have good experiences. It’s not the usual suspect of Estonia and India and many others, but all countries have digitalized their societies to some extent. But to give answer to your question, I believe it might be useful actually to revisit some of these origins of what we call DPI, that is a new movement or a trend, as we have heard. And some of it actually Henry referred to, and this is really, I would say, one of these origins is philosophical, and the other is perhaps more practical. So the philosophical really comes to understanding the role of the government in our society. Is it only to serve the people by providing services? Is it also to act as a partner? Is it to share everything that the government does? So we can say that actually digitalizing, following these important principles that you also mentioned, openness, transparency, inclusiveness, it started really with rebuilding Estonian state in the 90s. It started with freedom of information, it started with privacy issues, it started with security issues, and then it moved to the digital sphere, where we started to talk about interoperability, open standards, and so forth. So I would say that it was really this logical continuation of what we had started to reform in reforming our state. But the other reason, it’s actually very practical, and this really, Estonian government sort of started to realize in the 90s that actually the needs of the government, but also private sector and other partners in digitalization are rather similar. And this came to joining forces and really sharing our resources. So in the 90s, Estonian government, together with the private sector, started to develop, for example, a digital identity that was mentioned, and we do use one digital identity across the government, but also private sector. And this is actually one of these important, or this led actually to one very important precondition for the DPI to work, and this is really the habit of working together and building trust. Because on the one hand, yes, we can build trust by setting principles, having a great legal framework, but it is definitely not enough. All the partners need to feel that they equally contribute, and they equally also take responsibility, and at times also risks. And the last thing we often forget, that in order to make things work, sometimes we must be ready to fail and take responsibility for this. So this would be perhaps some of the takeaways from Estonian side. Thank you so much. Well, I think

Moderator – Lea Gimpel:
failing for governments and taking responsibility, that’s definitely a challenge for many still, so that’s probably something we can also learn from Estonia, if there’s anything that you want to share. I really like this notion of, you know, building cooperation and trust, and involving everyone in this effort, right? So as Amandeep explained in his initial statement, it’s a multi-stakeholder initiative. So this idea of a DPI safeguards framework, and it’s really about involving everyone to take part in developing these principles, but also, you know, defining how to move into practice with that. And I would now like to, well, ask you as my panelists to react to what you’ve heard. So Nella already summarized bits of it, but we have a bit of time for a rather open discussion. Before, I will open it also to the audience and ask you to raise your questions, both online as well as here on site. So another 10 minutes for the open discussion among the panelists, and then I will open up, because I already saw a hand over there. So it’s definitely a topic where many questions exist and we need to discuss. But please, first, if any of you want to react to what you’ve heard, please go ahead. And if there’s nothing, I have a range of questions, of course, as we’re prepared.

Henri Verdier:
Maybe I could quote an Indian friend from Bangalore that told me recently, Europe, you did build your prosperity and your independence through public infrastructure, rails, roads, train, water. And then suddenly, you did stop at the end of the 70s, for some ideological reasons. And the economy did continue to evolve. And now, digital identity, infrastructure for payment, are as important as roads a century ago. And it did convince me.

Eileen Donahoe:
So I do have a question. We talked about the tension between speed and standards. Amandeep, you said it right at the top, that, you know, you’re talking about the need for global standards. And that works for me, because I think of the human rights framework as a global standard, as a basis and an anchor for thinking about these things, already universally applicable, and it’s an understood language. However, when I think about it, in the context, even of my own government, and the approach that I just described, it strikes me that the way many governments think about providing public services, pre-digital, is that it’s their job, it’s government providing services. And so I think all governments are, most governments, many governments, hopefully, are learning about multi-stakeholder process and understanding when it comes to digital technology, they need help. And that means including the technical community, civil society, academic experts, etc., private sector. I think there’s been progress there. If you add, then, the global multi-stakeholder approach to governments, that’s a bigger leap. So part of me, I raised this because I think that’s where the human rights framework can help. Because governments will be comfortable that they’ve already signed up for this, and they know what it is, and there’s a sense of trust. Otherwise, I think governments will be reticent to, and even in terms of political discourse domestically, the idea of including a global multi-stakeholder input process might seem challenging to people who haven’t been exposed to global multi-stakeholder process. So that’s one of the areas I see that we really need to help governments sort of jump ahead and collapse this tension between local, domestic and international.

Amandeep Singh Gill:
Yes, if I may jump in quickly, I think building on Eileen’s point, I think the foundations are essentially twofold. There is human rights, the international human rights commitments, and there’s the SDGs framework, leaving no one behind the 17 goals, gender equality, even good governance, zero hunger, removing poverty. So those are our foundations. And the opportunity from local to global is that we have next year the Summit of the Future. So the Global Digital Compact is going to be one of the deliverables for that summit. So how can we, when we are translating the vision of the Global Digital Compact on accelerating progress on the SDGs, addressing the digital divide, alongside how can this safeguards framework, this enabler, as Rob put it, for the DPI’s movement overall, how can that be a concrete offering to support that vision and to take it forward? So in a sense, we are going from that agreement among 2021 now in the G20, which is pathbreaking in New Delhi, to a larger framework where you have 193 countries, civil society, private sector coming together to endorse this movement. And my last point is on this, the private sector aspect, because it is not just government services, public services we are talking about. What DPI’s do is they create an innovation ecosystem. They, through this combination of common rails, guardrails, they lower the entry barriers to innovation, linking with Ahi’s point about the digital economy, the importance of a dynamic national digital economy, where you have, yes, citizen facing government services, but you have businesses, whether they are coming from FinTech, etc., who are boosting the demand for digital services and digital products. So there is a supply side paradigm on infrastructure that we are used to. But DPI’s are more complex and more sophisticated, because they play on demand as well. A farmer who really doesn’t have an incentive to connect, you know, can the DPI’s and the services provided on DPI’s, both publicly and privately, can they create that demand for that farm to say, okay, I need to, you know, plug in as well, in an empowered way.

Speaker 1:
Yeah, maybe just to stress some aspects, which I believe are important when we talk about this DPI movement or discourse. And what I really like about the DPI is it actually reminds of the basics. So when we look, for example, at the conversations here over the past days, and generally this year, and I’m sure also next year, it is all about very advanced things, AI, Internet of Things. There was a blockchain buzz a few years back. And at the same time, many of the governments and societies actually that are talking about AI and everything else have not yet actually fixed the very basics, data governance, digital authentication, and so many other things. So I think all the basic public infrastructure, so in this sense, I think DPI has spotted it well that we need to have this base in order to move further. And the second is actually related to the myth that also Amadeep and Henri and others pointed, and it’s really about the role of the private sector. So DPI does not, or good digitalization does not mean that the government is doing everything, but it’s really about sharing what it does, and really making sure that some of these important principles like security and privacy are guaranteed, because this is an obligation that is different from the private sector. So it is a public sector’s task to make sure that the people feel good in this virtual world. And the third one, and this is maybe a call actually also from my side, is really related to sharing and reusing. There are so many governments that are making their tools available, making their source codes available. We see a lot of sharing. We don’t see that much reusing or even doing together. And this comes to the question like why we don’t do that, and it’s probably trust issues, maybe readiness issues. There are different issues, but definitely a mindset issue. So from Estonia, we have a good example together with our great neighbors in Finland and Iceland, where we have come together, we have created a foundation. We make sure that we develop certain digital products together, because we need them. And we also make them available to the world, and also make sure that they are updated to security and other requirements. So I think this is a good example of a DPI safeguard. But definitely I’m calling us to share and use more what others do.

Robert Opp:
Yeah, and actually, Nelia, I was going to pick up on something you said earlier, but also this comment. My reaction, when I listen to these conversations, and in all of our engagement this year as part of the G20, as a knowledge partner, and in all of the other discussions that we have, I’m constantly thinking about how do we take it from, I think, as Amandeep, you said, the 20 or so to the 150, 170, 100, the countries that are out there that haven’t done this or done it in a sort of package, the way that we’re talking about as an approach. And what strikes me is we can get our technology packages, and we can get our standards packages, but there is still a mindset issue. And that will take some time to get people, let’s say, somehow, we have to learn how to incentivize people to share, to work together, change the mindset, and really understand that this is a movement that we’re pushing toward. And I think the safeguards initiative we’re talking about today is a step in the right direction to do that. The implementation and how we scale is what is kind of on my mind constantly. So that my call to the genius that exists in humanity out there is what are the scalable ways to ensure that we’re changing the mindset as we do this?

Henri Verdier:
I can make three brief comments. First to Eileen and Amandeep, yes, you’re right, we have two sources that are the SDGs and human rights, but maybe there is a third one, and you know it very well, Eileen, this is the OpenGov movement. What we did learn 10 years ago is that everything the government does can create much more value if we unleash innovation. We saw this with data, with source code, and now we can see this with infrastructure. And we have a lot of important lessons to learn from this movement. The second point regarding private sector, just to mention, when I did mention the long tradition of public service, public service can be made by private sector. Public service don’t have to be free. The thing is that the public service has to respect some rules and cannot be the man in the middle taking all the added value. It has to be neutral, to be equal access, you can finance it, but you cannot take the added value. So we can easily build it with the private sector. And the third and last point, Nele, you’re right, a lot of people try to share, a few people try to cooperate, and that’s one other way where we can be inspired by the common movements. Because to build community, to work all together, is not the same thing as just to share the code. And you have to think about cooperation, governance, but even if you go into the details, good documentation, certain kind of codes, this is not so easy. You cannot just open your code like this.

Moderator – Lea Gimpel:
Thank you so much, all of you. I think there’s a lot of appetite for discussion also amongst the panelists, and I think we could go on and on here. I would like to open it to the audience now with their questions. If you want to ask something, please line up at the microphone, and for those of you who are sitting more in this area, we might also pass around the mic if there are questions. Please go ahead, and Moritz, be prepared to also raise some online questions, please. Maybe we collect some questions and then we allocate them among the speakers. Yes, go ahead, please.

Audience:
Thank you. Good morning, distinguished speakers, Lea for bringing them here, everyone. I’m Ale, Costa Barbosa, I’m from Brazil. I’m a fellow at the Weizenmauer Institute in Berlin, and also a coordinator for the Homeless Workers Movement technology sector in Brazil. So I’m representing those who really rely on DPIs, let’s say. And just a quick moment on self-marketing. I had the opportunity to coordinate a research with Laotian in 2019 on identification for development somehow in Latin America. I think it’s really worth it. So it’s indeed a really old discussion, for instance. And recently, we’re about to launch a report on… on digital education in terms of infrastructure and sovereignty held by the Brazilian Internet Steering Committee, which I think should be considered not only as more sector applications. And I also like to hear the fact of geographic information systems being considered DPI somehow for you. But I’d like to hear the GDC, the Global Digital Compact, claims for sustainable DPIs. And taking into account that the last meeting of G20 in India somehow came up with a common agenda. And the following two hosting countries will be Brazil and South Africa. I’d like to hear how can we ensure that DPI will be sustainable, considering the environment, the last mile, if you’re not taking physical infrastructures into account. Thank you. Good morning, everyone. I’m Mahesh Perra from Sri Lanka. It’s good to hear that we have been talking about digital government, connected government to DPI. Now, in this journey, I mean, I think Estonia has quite successful in digital government, connected government from the inception. I mean, many countries have failed in the design, in the implementation, as the moderator said about in the governance. I mean, all three stages, we did mistakes and not achieved the expected result. Now, I’m quite pleased to hear that we are talking about DPI standards and DPI safeguard initiatives. I mean, I would like to see if we can talk about DPI by design, DPI by implementation, DPI by governance. I mean, it’s all about standards. It’s about giving standards, giving about certain measures that government and the implementers must follow by the design, by the implementation, and by the governance. I think we have plenty of questions for their later presence, for their feedback. But a couple of things that arise to my mind from listening to you, there’s so much opportunity for bringing up the potential for surveillance for the future. And this is a major concern when we talk about interoperability. Is it off? It was off. But I think that my voice was loud enough and everyone was kind of listening. Over there, Marina was listening over there. Oh, I have a good theater voice. OK, I was saying thank you so much for bringing up the potential for surveillance if we don’t get it right. So we have to think about, as it was mentioned, human rights by design from the stage, from design before implementation. This is a main concern because of the tension that we were mentioning earlier about the speed for implementation. We are really running towards it. And we cannot implement the safeguard at the same time we are implementing the infrastructure. It has to come before. Otherwise, it won’t work. And this comes then to the main concern that we have, which is the concept of human rights by design needs to be unpacked. We need to mention specifically what do we mean by that. And even though we do have, of course, human rights framework for the world, we have all of the declarations, it needs to be said specifically what it means when we talk about privacy, when we talk about freedom of speech, when we talk about rights that we usually do not touch on when we talk about technology, like the right to dignity, the right to autonomy. And all of this is involved because we are touching on very essential aspects of the human experience, basically. So when we build safeguards for these processes, safeguards are saying that we need to have safeguards. It’s not enough. The safeguards need to have a way of being implemented that allows for the systems, if we realize that they are causing harm, the systems to be stopped or even rolled back. And we do not have that in many, many places. I work on digital ID and the systems. The safeguards are about negotiating, maybe, the possibility of an eventual remedy, which is not enough at all. And we do need remedy. But we also need to be thinking about stop and rollback, if needed, to be able to reconsider, re-evaluate, and implement changes. And we had a lot of conversations yesterday around the fact that there is no model of digital ID that just works. And it is constant learning. It’s a process of constant learning in the context. So we need to be open for the infrastructure to be adaptive, to be responsive to what might or might not be working. And the question here, because I swear there is a question, I swear, is, are we giving any thought to the right of anonymity in here? Because I think that we all agree, because this is a standard, an international standard, that the right to anonymity is essential to freedom of speech and, generally, civil and political rights and also rights like autonomy, again. But if we create models that are all-encompassing in a way that they require to be identified at every step in places where we don’t necessarily need to have that amount of information of the person, then where is the space for the people to be private, to hide, to have that space that is needed as human beings? So my question is, we are implementing this infrastructure, yes, for the farmers, right? Because we need them to get the services that they need. But how much information do we need for the farmer?

Moderator – Lea Gimpel:
Thank you so much for all of these questions. I make a stop here and quickly summarize. And Moritz, if there’s anything that you can add to this package, please do say so. Just quickly, we had a question around sustainability of digital public infrastructure. We had a question around DPG standards and what it means to implement DPI in a way of human rights by design, and what is behind there, and what about the right of anonymity of people? Is there anything that you can add to this little package of questions? Yeah, there was one question online on the enforceability of the safeguards framework, because what good is a nice standard if we can’t enforce them on the ground? Please go ahead, whoever feels inclined to answer.

Henri Verdier:
OK, so very briefly, you say that sometimes countries have failed. And I totally agree. I don’t know if you know, but before being a diplomat, I was a state CTO for France. So I had to conduct some of the transformation. And the dirty little secret is that governments are not always able to make simple, to make it simple. And to respect open standards, to build small pieces, reusable, to build, to respect agile methodologies is not very usual for governments. I don’t know if you can imagine the state IT for France, for example. I had to disconnect some projects that did cost one billion euros, for example, and that did fail after 15 years of experiments. So, we have to, the dirty little secret is that there is also a digital disruption within the history of IT. And we have to change the way we do develop. That’s not simple, but we can do it. And that’s one important thing. And when we, the more simple we do develop, the more small pieces of clear standards we use, the more it’s easy to make an inclusive and sustainable infrastructure. You did mention homeless workers. In Bangalore, the tuk-tuk, you know, the rickshaw, they did alone using UPI and bacon. They decided to avoid Uber. So they did pay, but that was really inexpensive, maybe 50,000 euros, $50,000. They did develop a solution to call a rickshaw, and they did implement some Indian rules. So, for example, you can bargain, you can negotiate the price. And that’s very interesting because they told me, we are there decades before Uber, and we’ll be there decades after Uber. So, why should we organize ourselves through Uber? So, they decided to have a direct access to a very important infrastructure. Just in one word, I think very often about how can an authoritarian regime use infrastructure? I think that we can implement some securities, and we will. But, in fact, an infrastructure is an infrastructure. And if you have evil purposes, you will use the infrastructure for your evil purposes. That’s the same with trains, with the highway, with everything. So we cannot protect democracy just through while implementing rules in the infrastructure. So we need more.

Amandeep Singh Gill:
I don’t think there is anything to add to what Ami has just said. Maybe our Brazilian friends question about the incoming G20 presidency. So that’s an opportunity to kind of, as Rob put it, make this movement more sustainable and more, in a sense, contextually interesting in Africa, in Latin America. So, building on what has been achieved during the Indian presidency, and Ahi and many others have played a key role in that amazing outcome. So, that needs to be taken to new areas. Those learnings incorporate, as Nelly has said, we need to bring in those learnings. I mean, in the Nordic region, you have this kind of an ecosystem that of mutual learning. So can we build it in other places, make the DPIs work more regional and context-sensitive? And on the point about enforceability that came online, I think what we can do by leveraging the GDC process is to ensure that when there is a regular review and follow-up of the GDC principles, action framework, that there is a regular discussion as part of that on how we are doing on DPIs. So, where this safeguards framework acts as a reference point and there is a regular discussion. And there, obviously, we can create some soft pressure, some normative pressure on those who are falling behind or not living up to that standard.

Eileen Donahoe:
So, I will just underscore several points made by a colleague from Sri Lanka, Anri Amandeep. I’m hearing two, three added reasons that a global open standards are valuable. One is well-intentioned governments who may have failed or would otherwise fail need the help. And so, sharing the knowledge and the know-how is valuable. Second, to Marianne’s point, and Amandeep, global open standards actually increase the likelihood of accountability if standards deployed will be higher than would otherwise happen if governments are left to their own devices. And that applies to well-intentioned governments who do things the wrong way with inadequate standards and less well-intentioned. Amandeep, your last point, though, underscores why or how the global digital compact process itself with follow-up can add not full teeth. I mean, even the international human rights law framework is not fully enforceable, in fact, but the soft norms, that kind of pressure and global open eyeballs on everybody’s systems adds value. So, there’s a real benefit.

Moderator – Lea Gimpel:
I’m afraid we are running out of time here. Sorry for that. Eileen, you already did a great job in summarising what we’ve been discussing here. Thank you. I would like to add a few quick points as well before I give it over to my panellists again for a quick 30-second key takeaway message from each of you. So, what stuck with me specifically is that the DPI Safeguards Initiative is an opportunity also to reflect in a philosophical way on the role of government. So, what is it actually that governments need to do in the 21st digital era and how can we make sure that they deliver on that, both in terms of public service delivery as well as in being a partner for citizens as well as for the private sector. And this connects quite nicely to this pragmatic approach as well, which I would coin as a society-wide approach that enables everyone to build on top of it, including the private sector, including building an innovation ecosystem, which I think is a very important point in order to also help digital economies to evolve. Secondly, we also talked about vehicles and how to bring on and create commitment for the Safeguards Initiative. And the Global Digital Compact was mentioned as one of these vehicles, but we also discussed the Human Rights Convention as well as all the work that has been already done on the open government partnership and initiatives around this area. I think there’s a lot of legacy work, actually, that we can build on. And thirdly, as a DBGA, of course, I really like this idea of sharing technology, of sharing learnings, and of reusing these and building cooperation around these. And what stuck with me specifically is this idea of changing mindsets while implementing at scale. Thanks, Rob, for this great sentence here. And I think that’s exactly what we need to strive for when we are implementing. These are my key takeaways. Over to you, esteemed panel. What are yours?

Speaker 1:
Yes, I will actually end by responding to some of the questions, which I think were actually very important also to keep in mind when we talk about data public infrastructure. One is related to sustainability. And, you know, terms may come and go, and maybe in five years’ time we don’t talk about DPI anymore, or connected governance, or mobile governance. It’s actually important not to lose what has been done before, because we often see that projects come and go, and sometimes governments give up, civil society gives up. But we need to remember, actually, that changes take time, and actually people are rather conservative. So we see that actually from Estonia, that in order for certain services to be uptaken, some new change to be implemented, we may need six, seven, eight years. The Internet voting, that we still carry out in Estonia. First time we had 0.8% of votes coming via the Internet. Now it’s almost 50% of the voters. And the second one is really about how we can guarantee privacy, security, and now we have also human rights by design. It’s actually not one country’s issue, it is a global issue. And it’s not a government issue, but also private sector, and increasingly private sector issues. So there comes the role of these global movements that we have been talking about.

Moderator – Lea Gimpel:
Please, a short answer.

Robert Opp:
Okay, well I have many takeaways, but I’ll only, I’ll mention one, I guess. Maybe, no, I’ll mention one. You know, I think what strikes me in this conversation and kind of connecting dots with a lot of other things is although these things take time and people are inherently conservative, we have waves that are coming globally like artificial intelligence and other emerging technologies that will challenge the human ability to keep up. And at the end of the day, governments are a human endeavor, civil society’s human endeavor. And the question then becomes how might we really construct this approach globally with the safeguards and with the scalable packages of human rights by design, privacy by design, the other things that we know are important, how might we do this and roll it out as quickly as possible while creating trust and while creating that sharing. And this is just kind of what keeps me awake at night, kind of what drives us to work on this, which is why the potential is so strong for this approach. I’ll leave it there.

Henri Verdier:
One word. Someone did ask how will you implement or enforce this approach. And I will say we do it because it’s the most efficient way, and we will prove it. To empower the people, to unleash innovation, to guarantee fundamental freedom is the most efficient organization for the economic and social development of a country.

Amandeep Singh Gill:
I’d just say that this is the beginning of a journey, that we just announced the initiative last month, and this is in fact the first official consultation. So going back to your point, Mariana, about some of these granular issues around right to anonymity, unpacking the concept. So we’re at the beginning, and with your help, we’ll be able to do that. Thank you.

Eileen Donahoe:
Exactly that point. Unpacking the concept of human rights by design. What does that look like in practice to do it well? Obviously cross-regional, cross-stakeholder group, but I would underscore cross-disciplinary in the intellectual sense, because it really is about people who understand norms, soft norms, and hard law, how it works, and technologists, and the innovators. And bringing them together with a shared language is also part of the challenge.

Moderator – Lea Gimpel:
So it’s the beginning of a journey. Thank you so much, everyone, my speakers, the audience. If you want to know more about the DPI Safeguards Initiative, they have a website where you can read up on it, and also subscribe to their newsletter if you want to know more about the ongoing consultations. And with that, I would like to end it and wish you all a great day. Thank you. Thank you. Thank you. Thank you. Thank you.

Amandeep Singh Gill

Speech speed

139 words per minute

Speech length

1314 words

Speech time

566 secs

Audience

Speech speed

175 words per minute

Speech length

1128 words

Speech time

386 secs

Eileen Donahoe

Speech speed

136 words per minute

Speech length

1179 words

Speech time

518 secs

Henri Verdier

Speech speed

151 words per minute

Speech length

1303 words

Speech time

519 secs

Moderator – Lea Gimpel

Speech speed

171 words per minute

Speech length

2484 words

Speech time

873 secs

Robert Opp

Speech speed

168 words per minute

Speech length

1030 words

Speech time

368 secs

Speaker 1

Speech speed

149 words per minute

Speech length

1227 words

Speech time

495 secs