Internet’s Environmental Footprint: towards sustainability | IGF 2023 WS #21

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Gabriel Karsan

The analysis features several speakers discussing a range of topics related to climate change and sustainable development. One key point raised is the importance of recognising that we inhabit a common world with shared resources. The speakers stress the interconnected nature of humanity, emphasising that we all breathe the same air and live under the same sun.

Another topic covered in the analysis is the need for eco-friendly solutions within the internet industry. It mentions proposals such as using satellites and high altitude connectivity devices to make the internet more sustainable. The integration of technology with climate action strategies in India is cited as an example of potential solutions.

The analysis also addresses the issue of poor quality data leading to misinformation on the climate agenda. It suggests that the dissemination of inaccurate information can hinder effective progress in tackling climate change.

One of the speakers advocates for a multistakeholder approach to bridge the gap between grassroots movements and government. This inclusive approach is seen as crucial in ensuring effective and inclusive decision-making processes.

The analysis also explores the issue of addiction to hydrocarbons, acknowledging their extensive use in heating, transportation and electricity production. The shared challenge of dependence on hydrocarbons is highlighted as an issue that needs to be addressed.

In terms of creating change, one speaker believes in the power of group discussions. The importance of small groups in making a significant impact is emphasised, indicating the significance of collective efforts in addressing climate change.

Furthermore, the analysis underlines the interconnectedness of the world through the internet. It emphasises the need for boldness, vocal advocacy and accountability in promoting sustainable development, and it advocates for eco-friendly hardware design and sustainable practices on the internet.

In conclusion, the comprehensive analysis underscores the importance of recognising our shared world and limited resources. It stresses the necessity of eco-friendly solutions and accurate data in addressing climate change. The analysis supports a multistakeholder approach, highlights the challenge of hydrocarbon addiction, and advocates for the power of group discussions in effecting change. It also underscores the role of the internet in fostering interconnectedness and calls for boldness, accountability and eco-friendly practices in driving sustainable development.

Ihita Gangavarapu

Ihita Gangavarapu, coordinator of Youth IGF India and a board member of ITU Generation Connect, possesses extensive experience in the field of technology, with a specific focus on the Internet of Things (IoT) and smart cities. Her expertise in this area makes her well-equipped to address the issue of environmental sustainability.

Gangavarapu strongly emphasizes the importance of IoT in monitoring various environmental parameters. In particular, she advocates for the use of IoT sensors in homes to monitor carbon footprint. This technology has the potential to make individuals more aware of their environmental impact and take steps towards reducing it. Furthermore, Gangavarapu highlights the use of IoT in agriculture and forest health monitoring. By employing IoT-enabled devices, it becomes possible to gather real-time data on these crucial aspects of our environment. Additionally, the IoT has the potential to predict forest fires and aid in urban planning, further contributing to environmental conservation efforts.
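
To make the idea concrete, a back-of-envelope sketch of the kind of home carbon monitoring described above might look as follows. The grid emission factor and meter readings are illustrative assumptions for this report, not figures given in the session.

```python
# Hedged sketch: estimate a household's electricity carbon footprint
# from periodic smart-meter (IoT) readings. The grid emission factor
# below is an illustrative placeholder; real factors vary by country
# and by hour of the day.

GRID_EMISSION_FACTOR_KG_PER_KWH = 0.475  # assumed, illustrative value

def carbon_footprint_kg(readings_kwh, factor=GRID_EMISSION_FACTOR_KG_PER_KWH):
    """Sum metered consumption (kWh) and convert to kg of CO2-equivalent."""
    return sum(readings_kwh) * factor

# Example: one day of hourly readings from a hypothetical home sensor.
hourly_kwh = [0.3] * 8 + [0.8] * 12 + [0.5] * 4   # 24 hourly readings
daily_kg = carbon_footprint_kg(hourly_kwh)
print(f"Estimated footprint: {daily_kg:.2f} kg CO2e/day")
```

A real deployment would replace the hard-coded readings with sensor telemetry and a time-varying emission factor for the local grid.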

A key argument put forth by Gangavarapu is that consultations, cost incentives, and standardization are necessary to bridge the gap when government initiatives fall short in promoting environmental sustainability. She suggests that involved parties need to engage in meaningful discussions to ideate potential solutions. Furthermore, Gangavarapu believes that government incentives can motivate the private sector to develop environmentally conscious services and technologies. Additionally, she highlights the importance of standardization, particularly in developing IT standards for the environment. These measures are crucial in ensuring that environmental sustainability is prioritized and achieved in the absence of sufficient government initiatives.

Gangavarapu also stresses the role of individuals in reducing their negative environmental impact. By taking responsibility for their actions and making conscious choices, individuals can contribute significantly to environmental conservation. This aligns with the Sustainable Development Goal (SDG) 12: Responsible Consumption and Production, which emphasizes individual responsibility in achieving sustainability.

Moreover, Gangavarapu recognises the significance of environmentally-conscious design in artificial intelligence (AI). She highlights the integration of AI into everyday life and the need to incorporate an environmentally conscious dimension into its development. This aligns with SDG Goal 12: Responsible Consumption and Production and SDG Goal 13: Climate Action.

Another point raised by Gangavarapu is the importance of crafting and implementing policies right from the inception of a technology. She argues that a well-crafted policy can create awareness among all stakeholders in the technology supply chain regarding potential environmental repercussions. By starting policy development early in the tech development process, the carbon footprint consciousness of the entire chain can be influenced positively. This supports SDG Goal 9: Industry, Innovation, and Infrastructure.

Gangavarapu also highlights the importance of leveraging the internet to discuss environmental concerns and solutions. Utilizing the internet as a platform for communication allows for a wider reach and the opportunity to engage a larger audience in these discussions. This aligns with SDG Goal 9: Industry, Innovation, and Infrastructure.

Lastly, Gangavarapu advocates for supporting organizations and initiatives that focus on creating environmentally conscious products and services. By endorsing and investing in these initiatives, individuals and communities can contribute towards promoting sustainability.

In conclusion, Ihita Gangavarapu, with her expertise in IoT and smart cities, emphasises the importance of monitoring environmental parameters using IoT technology. She further advocates for consultations, cost incentives, and standardization to fill gaps in government initiatives towards environmental sustainability. Gangavarapu highlights individual responsibility, environmentally-conscious design in AI, crafting policies from the inception of technology, leveraging the internet for discussion, and supporting initiatives for environmentally conscious products and services. Through her various arguments and points, Gangavarapu underscores the need for collective efforts and conscious choices in achieving a more sustainable future.

Lily Edinam Botsyoe

The session primarily focuses on the issue of the internet’s carbon footprint and the need to explore sustainable alternatives. It is highlighted that internet carbon footprints currently account for 3.7% of global emissions, and this figure is expected to rise in the future.

Lily actively advocates for internet usage that is inherently sustainable, emphasizing the importance of considering the environmental impact of our online activities.

In terms of recycling, there is a discussion on the need for education and awareness in recycling processes. It is mentioned that informal e-waste recycling at Agbogbloshie in Accra, Ghana, has caused serious environmental harm. However, the government of Ghana, in collaboration with GIZ, is taking steps to create awareness about sustainable recycling methods.

Furthermore, there is support for alternative methods of connectivity through equipment refurbishing. Refurbished e-waste can be redistributed to underserved communities, providing them with access to the internet. This approach has already been implemented in several cities in the United States through inclusion strategies focusing on refurbishing, reuse, and redistribution of e-waste.

The importance of resources and skills for implementing the three Rs (refurbishing, redistribution, and reuse) is also emphasized. Putting them into practice requires gathering disused equipment, housing it, and preparing it for redistribution.

The session also acknowledges that technology, while contributing to environmental issues, can also play a crucial role in solving them. However, discussions on the economic benefits of technology often overlook its environmental impact. It is argued that a more balanced approach that considers sustainability beyond profit is needed.

Bottom-up approaches are highlighted as essential for sustainability solutions. Government-led solutions often fail to account for the importance of grassroots movements and the involvement of local communities.

When discussing artificial intelligence (AI), it is observed that the lack of policies guiding its implementation can lead to disadvantages to society. The fast-paced implementation of AI technologies contrasts with the lengthy and bureaucratic process of building policies. However, creating global policies adaptable to local contexts could ensure the right usage of AI for climate change initiatives, thus guiding its beneficial deployment.

There is agreement that awareness creation plays a fundamental role in addressing AI issues. Social media platforms are identified as a powerful tool for continuous and consistent awareness creation, facilitating the understanding of the issues at hand and generating demand for action.

Knowledge sharing is also emphasized as a significant way for individuals to actively address AI issues. By creating awareness and encouraging people to take action, individuals can contribute to spreading information about AI and its impact.

Technology is recognized as a double-edged sword in addressing climate change concerns. While it can contribute to environmental issues, it can also offer solutions to mitigate and adapt to the impacts of climate change.

Moreover, the act of connecting the unconnected is seen as inherently sustainable. By extending access to the internet and digital resources to underserved communities, it contributes to bridging the digital divide and promoting sustainable development.

Lastly, it is concluded that the conversation surrounding these topics must involve the participation of government, authorities, and people. Increasing awareness of the challenges and potential solutions is seen as a crucial step towards achieving sustainability goals.

Monojit Das

During the analysis, the speakers delved into several key aspects of internet governance and its impact on various topics. They explored its relevance to the 4th industrial revolution, sustainability, energy consumption, and environmental impact. One of the main discussions focused on the need for convergence and cooperation between different stakeholders.

Monojit Das, for instance, is researching internet-related issues, specifically examining the convergence between the multi-stakeholder and multilateral approaches. He believes that finding a convergence point is crucial for resolving debates in internet governance. Das emphasized the importance of participation from both online and offline participants in finding effective solutions.

The speakers acknowledged the complexity of the debate between the multi-stakeholder and multilateral approaches in environmental issues. They noted the inherent differences in positions held by multiple stakeholders and nations in environmental policy. Despite this complexity, they agreed that small collective actions can have significant long-term outcomes.

The analysis also focused on the significant energy consumption associated with internet use. Simple online activities, such as sending text messages, consume data and energy. Data centres, essential for internet infrastructure, consume substantial amounts of power. The speakers highlighted the need for energy efficiency and the promotion of renewable energy sources to power technology.
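
The scale of this consumption can be illustrated with a rough estimate of the energy and emissions behind data transfer. Both coefficients below are illustrative assumptions introduced for this report; published estimates of network energy intensity vary by more than an order of magnitude.

```python
# Hedged sketch: back-of-envelope energy/CO2 estimate for data transfer.
# Both coefficients are assumptions chosen purely for illustration.

ENERGY_INTENSITY_KWH_PER_GB = 0.06   # assumed network + data-centre intensity
GRID_CO2_KG_PER_KWH = 0.475          # assumed grid emission factor

def transfer_footprint(gigabytes):
    """Return (kWh, kg CO2e) for transferring the given volume of data."""
    kwh = gigabytes * ENERGY_INTENSITY_KWH_PER_GB
    return kwh, kwh * GRID_CO2_KG_PER_KWH

# Example: roughly a month of video streaming at ~100 GB.
kwh, kg = transfer_footprint(100)
print(f"{kwh:.1f} kWh, {kg:.2f} kg CO2e")
```

The point of such a calculation is not precision but visibility: it turns an abstract claim about internet energy use into a number an individual can reason about.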

Furthermore, the speakers recognized that the internet plays a critical role in the 4th industrial revolution. They stressed the interconnections between energy, transportation, and communication technologies. By harnessing the potential of the internet, advancements in these areas can be achieved.

The impact of internet infrastructure on the environment was another concern highlighted during the analysis. The laying of submarine cables needs to consider and avoid disrupting underwater habitats, such as coral reefs and marine life.

Regarding individual action, the analysis suggested that efforts to reduce environmental impact can begin with individuals. The panel included individuals from various professional backgrounds dedicating time to environmental issues, and personal compliance with carbon-neutral policies can raise awareness.

The discussion on artificial intelligence (AI) research revealed differing perspectives. While the importance of understanding the full potential and threats of AI was recognized, there were debates as to whether research should continue. Some argued for pushing AI to its limits to test its capabilities, while others raised concerns about its potential dangers.

The analysis noted the success and positive reception of the session, indicating promise for future discussions on digital leadership and internet ecology. One notable observation was the optimism expressed by the speakers about the growth and expansion of the platform for discussion in future sessions.

In conclusion, the analysis highlighted the multifaceted nature of internet governance and its impact on various aspects of society. From the convergence of different approaches to the significance of collective actions and energy consumption, the speakers presented a comprehensive overview of the challenges and opportunities within the realm of internet governance. The importance of research on AI, individual actions, and sustainable practices were also emphasized. Ultimately, the analysis revealed a mixture of optimism and realism regarding the potential for positive change and future growth in discussions on internet ecology.

Audience

The need for eco-friendly internet infrastructure and measuring its carbon footprint is of utmost importance. DotAsia and APNIC Foundation have been exploring this since 2020, with the aim of gauging the eco-friendliness of internet infrastructure across countries through the EcoInternet Index. This index aims to provide a measurement tool to assess the environmental impact of internet infrastructure. The argument put forth is that both a clear narrative and sound measurement methods for the eco-friendliness of internet infrastructure are crucial.

The internet not only has its own carbon footprint but can also contribute positively towards addressing climate change. The EcoInternet Index takes into account the balance between the digital economy and traditional carbon-intensive industries, highlighting the potential for the internet to play a significant role in the fight against climate change.
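
The session does not detail how the EcoInternet Index is computed, but composite indices of this kind are typically built by normalising several indicators and combining them with weights. The indicator names, data, and weights in the sketch below are purely hypothetical, not the index's real methodology.

```python
# Hedged sketch of a composite eco-index: min-max normalise each
# indicator across countries, then take a weighted average.
# Indicators and weights here are hypothetical placeholders.

def normalise(values):
    """Min-max scale a list of values to the range [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def composite_scores(countries, indicators, weights):
    """indicators: {name: [value per country]}; weights: {name: weight}."""
    scaled = {name: normalise(vals) for name, vals in indicators.items()}
    total = sum(weights.values())
    return {
        country: sum(weights[n] * scaled[n][i] for n in indicators) / total
        for i, country in enumerate(countries)
    }

countries = ["A", "B", "C"]
indicators = {                         # hypothetical example data
    "renewable_share": [0.6, 0.2, 0.9],
    "network_efficiency": [0.5, 0.8, 0.4],
}
weights = {"renewable_share": 2, "network_efficiency": 1}
scores = composite_scores(countries, indicators, weights)
print(scores)
```

Standardising choices like the normalisation method and the weights is exactly where the standardisation work discussed later in the session would matter, since different choices produce different country rankings.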

Improving internet network efficiency is seen as a positive step towards sustainability. However, no specific supporting facts are provided for this argument.

A grassroots movement can enhance awareness and foster a move towards a more carbon-conscious internet use. This includes a call for data centres to use renewable energy sources, which is seen as a positive step towards responsible consumption and production.

Digital inclusion in remote areas can be more sustainable by utilizing renewable energy sources such as solar energy. By incorporating sustainable energy solutions, digital access can be expanded while reducing the environmental impact.

Standardization is identified as playing a crucial role in shaping an inclusive policy for a sustainable and eco-friendly internet. It helps establish consistent frameworks for measuring the carbon footprint and provides a means to quantify reports on methodologies and data collections. This standardization is seen as vital in mitigating the risks associated with the internet and promoting a sustainable approach.

Collaboration between stakeholders, including the private sector, civil society, technical community, and government, is recognized as key in shaping a sustainable internet. Each stakeholder has a unique role to play, contributing insights, creating awareness, providing technology, and formulating policies.

Sustainable cyberspace efforts and the work of Dr. Monojit and Ihita are commended by the audience. However, no specific details or supporting facts are provided to further elaborate on this point.

The narrative and measurement of environmental issues are considered important, but no specific supporting facts or arguments are provided.

Sustainable housing and reducing the carbon footprint in infrastructure are seen as viable solutions. AMPD Energy in Hong Kong is mentioned as an example of using special materials and technology to reduce the carbon footprint in housing and warehouses.

Reducing paper waste and promoting recycling is highlighted as a viable environmental strategy. The Wong Pao Foundation in Hong Kong is noted for producing results within a year.

One of the challenges in implementing sustainability initiatives is the lack of financial support. It is mentioned that financial support is often difficult to obtain when trying to implement sustainability measures, which can hinder progress.

Hong Kong’s progress towards achieving the SDGs is deemed slow. Currently, the main recycling effort in Hong Kong focuses on recycling bottles, cans, and paper. No specific details or arguments are provided for this observation.

Further research is needed to understand the impact of AI on climate change. Although no specific supporting facts or arguments are given, the stance is that thorough research is necessary before further developing AI technology.

Digital literacy is considered important for global digital inclusion efforts. Despite good internet access in Hong Kong, digital literacy is not very high. It is mentioned that considerations should be made on how to contribute to other regions of the world facing different digital situations.

In conclusion, there is a growing awareness of the need for eco-friendly internet infrastructure and measuring its carbon footprint. Collaboration between stakeholders, digital inclusion using renewable energy, standardization, and efforts to reduce paper waste are advocated for sustainability. However, challenges in implementation due to a lack of financial support and the slow progress of Hong Kong towards the SDGs are noted. The impact of AI on climate change and the importance of digital literacy for global digital inclusion are areas that require further research and consideration.

Annett Onchana

The Africa Climate Summit, held in Kenya, placed significant emphasis on the importance of accessing and transferring environmentally sound technologies to support Africa’s green industrialisation and transition. This highlights the crucial need for the continent to adopt sustainable practices and technologies in order to reduce carbon emissions and mitigate the impacts of climate change.

The summit also highlighted the necessity for cooperation among different stakeholders in addressing the digital footprint. As our reliance on digital technologies continues to grow, the environmental impact of the internet cannot be overlooked. Thus, it is crucial for governments, businesses, and individuals to work collaboratively and find ways to reduce the carbon footprint associated with digital activities.

Furthermore, the summit discussed the influence of consumer habits on the environmental impact of the internet. It raised the question of whether people’s purchasing decisions are driven by trends or functionality. By examining consumer behaviour, efforts can be made to promote sustainable consumption and production. This means individuals can make choices that have a lower environmental impact, such as opting for more energy-efficient products or those with longer lifespans.

It is important to note that consumer behaviour can play a significant role in mitigating the environmental impact of the internet. If people shift their purchasing habits towards more sustainable options, it can contribute to the reduction of carbon emissions and waste associated with the production and disposal of electronic devices.

Overall, the Africa Climate Summit underscored the importance of addressing the environmental impact of the internet, promoting sustainable technologies in Africa’s industrialisation efforts, and encouraging individuals to make more conscious choices in their consumption habits. By working together and adopting sustainable practices, positive change can be driven, and the adverse effects of climate change can be mitigated.

Session transcript

Lily Edinam Botsyoe:
and thank you for being such a patient audience. We know you’re online and I hear that we have quite an audience. If you know anybody who would benefit from this session, kindly nudge them, ping them, just say the session has started so they can join in. My name is Lily Edinam Botsyoe, and in a short while, I’ll be introducing everybody here and there’s some people who also give interventions online. We are so thankful for the gift of the internet that allows us to do all of these, and so we want you to stay interactive. If you have any questions, please put them in the chat. We’ll come back to them as soon as possible. This session is the IGF Workshop 21, and it’s titled The Internet Environmental Footprint Towards Sustainability. Our panel is a blend of experts or young leaders who essentially have backgrounds in different areas related to green tech, and also just around e-waste and other areas that will be discussed in a short while. So I’ll do a bit of what we look to do for the aim of this, and then have our panel members do introduction. So we’re looking to use a hybrid format, that is what we have right now, you online, we on-site, to get you our conversation. What we’re looking to do is to see how the internet’s carbon footprint, or do we know that the internet’s carbon footprint accounts for 3.7% of the global emissions, and this figure is expected to rise. So some of the things we’re discussing here is, what are alternatives? We’re looking in the broad sense of climate, connecting the unconnected in a way that is inherently sustainable. So that is the discussion we’re having. So, let’s start a conversation. I’m gonna start with our panel members on-site, and then I’ll allow you online to also introduce yourself. So to my left, can you please give us a short introduction?

Monojit Das:
Hi, thank you, Lily, for giving us the brief outline. My name is Monojit, I am from India. I worked on internet. And my PhD topic was finding out the convergence in this call that probably can be other ways to find out solution to the debate between the multi-stakeholder and the multilateralism. So probably this can be one of the ways. And we would like to get more input from the participants online and offline because it’s mostly about the participation, not just the one-sided way of telling this. Thank you.

Lily Edinam Botsyoe:
Thank you so much. Like you said, we want the interaction across board. And then to my right, our first panel member.

Ihita Gangavarapu:
Hi, everyone. Good evening. I’m Ihita, I’m from India. I’m coordinator of Youth IGF India and also a board member of ITU Generation Connect. It’s an absolute pleasure to be here. Thank you, Lily. And I have a background in tech. I have experience working with Internet of Things across multiple applications and smart cities. And in this regard, I’ve come across this persistent issue of environmental footprint. And I’m really glad to have the opportunity to share some insights shortly.

Lily Edinam Botsyoe:
Thank you so much, and then to Karsan.

Gabriel Karsan:
Hello, my name is Gabriel Karsan. I’m from Tanzania, a computer scientist, and I have a background in internet governance for almost six years now. I am part of the African Youth IGF, and particularly we are here to explore the role of the multi-stakeholder approach, something that we have been all raised on and how we use this element to create sustainability, especially towards a more progressive humanity that uses energy not only in terms of consumption, but also creating more diverse bridges for the next generation to enjoy.

Lily Edinam Botsyoe:
And we also have our online participants. What I forgot to add before was that we have Karsan, who will co-moderate with me. For our online panel members, if you are on, that is Innocent and also Jowdy, let’s start with you, Innocent, and Annett too, yes. So Innocent, the floor is yours, a short introduction, we go to Jowdy, and then to Annett. Yes, so Annett, if you can hear us, please.

Annett Onchana:
Thank you for the platform, good evening, good morning, actually a few minutes to afternoon from Kenya, and I am an African youth, passionate and interested in internet governance, and I am happy to contribute to these discussions, and I’ll also be your rapporteur for the session. You’re welcome, and thank you for having us.

Lily Edinam Botsyoe:
Thank you so much, Annett, so excited to have you. So in a short while, the conversation is going to shift to Karsan, who is going to walk us through some guiding questions, but we’re going to take turns to share how climate sustainability relates to us and the work we do, and because we’re having to leave unfortunately, I’ll give a bit of context and then hand over to Karsan to have the conversation kick-started. So initially, I mentioned my name is Lily, I live and I was born in Ghana, and currently doing a PhD in IT at the University of Cincinnati. Now I’ll give you the context of how sustainability pretty much aligns with the work I do and my experience with it. First as a Ghanaian, I live in Ghana, and one of the places that has been tagged as the world’s largest dump site, a place that is found in Accra, and one of the things that we saw growing up was how people would carry scraps or break apart e-waste to be able to find valuable metals, go out there and sell to people in the hopes of making money. What people saw as trash, for people in Agbogbloshie, it was treasure, and the way that these materials were broken apart and the way that they were recycled to be able to get these precious materials in there was not done in a way that was environmentally friendly. These are people who only knew that they had to break apart some of these because they had to make a livelihood and they had to sell. So the issue of … in Ghana and some interventions came from the government of Ghana in collaboration with GIZ, that’s a development cooperation agency of the German government. So what happened was a time of pretty much creating awareness of how some of the activities, if recycling was not done in a good way, would harm the environment. It took, or it still takes a lot of, I mean, education and awareness creation for people who … recycling as a livelihood to see reason as to why the change in the world is pretty much sustainable. 
But that is just something that is currently ongoing and essentially it will take maybe some time to get people to understand the why. And when we are talking about connecting people, we want to see even how we can explore refurbishing some of these e-waste for communities that are underserved. And that is also pretty much an issue that we see in Accra and across Ghana. There are places that don’t have access, one of the biggest hindrances is access to devices. So if … a home, whether it could be refurbished and redistributed, who knows, they wouldn’t end up in dump sites to be recycled in such a way that in the long run negatively impacts the environment. So some of the things that I’m seeing from research and because I’m doing the research, I did my master’s thesis in the US, was essentially how across the US, there are cities that have implemented inclusion strategies. Some things related to e-waste management, and I will also switch from the context of Ghana to the US. So, the best practice in some of the cities could implement, it’s actually city level, is that the inclusion strategies that look at refurbishing, redistribution and reuse. All these three R’s are in such a way that it’s contributing to making sure that all of the things that are probably not in use do not end up in the dump sites and probably would find a home with people who probably would use it also for their work or just to learn, for students who need access to devices. In essence, these R’s can only come to fruition if there are resources for gathering these unused devices, housing them, if there were skilled aspects to prepare them and if there was a way, a program or something to be able to redistribute them. 
So, in essence, my perspective is from the US angle where I look at alternatives to connectivity through e-waste refurbishing, and it’s because of the background I have with the story I just shared about the case of Agbogbloshie that I found out the cities implementing these e-waste strategies and inclusion strategies in the US, some of these cities, they are tagged the trailblazer cities and they usually have reports that send out what it is that they’re doing that people can also implement for their own context, and it’s something that others can replicate and customize for their own situation. So, I know probably there are questions or comments that can come from this submission. I’ll drop it here and then hand over the floor to Karsan, who will give us some more nudges and guiding questions to drive the conversation towards the end of this session. Thank you.

Gabriel Karsan:
Thank you very much, Lily and that was quite interesting what you shared, the local context and concept. Canada does. Well, let me start with what is the solution to a problem if the solution is the problem? Because when I think climate change now and the internet and everything, that is where I lie. That was the context when we were discussing about this session. It is a high quality question that we really need to discuss and I believe here, all of us have a lot to discuss. There is the climate change agenda and there is the climate reality. Now, when we live in the fourth industrial revolution, when we have an intersection of energy, transportation and communication technologies, highly embedded with the internet, I believe that the internet could do more rather than contributing to the 3.6% of emissions that are happening. So I would like to start with you, Dr. Monojit. If you’d like to share your context to our obligation today, what do you think the people want to hear and what is the agenda behind in trying to create an internet that’s sustainable and serves its purpose to the people? Okay, thank you.

Monojit Das:
Thank you, Gabriel. I'd like to share my understanding, because I may not be the be-all and end-all of the internet, but I'd like to share what my understanding is. There are different stakeholders here, whether it's the government, academia, or civil society. But if you dig in a little bit, you see that government actually depends on civil society for votes, and civil society depends on the technical companies, the big giants, for funding, because it doesn't have its own. And if you go to academia, you still need subsidized funding from the government. So everyone is, in some way or other, interrelated with each other; we cannot deny that. So whenever we go to civil society events that talk about free speech or anything similar, in some way or other a company has its own agenda behind it, whether we accept it or not. And if countries are trying to develop their own internet, in terms of firewalls or anything like that, they are just trying to frame a security layer of their own. The whole concept of the internet, I believe, was introduced as a preventive measure under DARPA; it was a military thing altogether. So everyone has their own interests, and I believe that will continue; it's unending. But there can probably be some convergence on topics that can be worked on together, without any differences, and that is where we all come together, because here every stakeholder will agree. When you say good morning in the morning, it's not just a good morning text taking up a kilobyte of your data; it is going through a lot of infrastructure, because we don't have a dedicated satellite for ourselves, so we are using those data centers to, say, power
you or save your images, and that takes a lot of energy. And in most cases this energy is not coming from renewable sources, although Microsoft and others are trying to put data centers under water, it's not 100 percent. So what we have tried to find out is that there can probably be some areas where we can all collaborate, between this debate of multistakeholderism and multilateralism, to come up with and promote energy efficiency. That can largely concern the devices, as my co-panelist also mentioned about e-waste; she has experienced utilizing e-waste in a much better way. And the transition to powering this technology requires a lot of energy, so instead we can turn to solar power, using solar panels or similar means where the scientists and experts are there. In terms of regulatory measures, there are measures that moderate content, but in terms of the use of resources there can be uniformity; everybody can agree on this point. And particularly where submarine cables are concerned, we have to depend on someone or other, because there is always a debate between security and privacy. Initially I was, and still am, a student of security studies, so I understand the security perspective much better. We cannot ask for privacy when we don't have our own utilities; for using social media we still depend on social media to convey our message. So here is what we can do: as more and more submarine cables are being laid, we can at least make sure they do not disrupt coral reefs, and do not pass through sensitive areas rich in marine life.
So these can be some areas where we can all collaborate, and the rest of the debate can continue. I'm sure many of us know that the debate around the multistakeholder model is an unending process. With this, I'd like to say thank you to the moderator; please continue.
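Dr. Monojit's point, that even a tiny "good morning" message rides on energy-hungry data-centre infrastructure, can be put into numbers with a back-of-the-envelope sketch. Both constants below are illustrative assumptions, not measured or authoritative figures:

```python
# Rough estimate of the CO2 cost of moving data across the internet.
# Both constants are illustrative assumptions, not authoritative values.
ENERGY_PER_GB_KWH = 0.06      # assumed network + data-centre energy per gigabyte
GRID_CO2_G_PER_KWH = 475.0    # assumed average grid carbon intensity (g CO2/kWh)

def transfer_co2_grams(size_bytes: float) -> float:
    """Estimated grams of CO2 emitted to transfer `size_bytes` of data."""
    gigabytes = size_bytes / 1e9
    return gigabytes * ENERGY_PER_GB_KWH * GRID_CO2_G_PER_KWH

if __name__ == "__main__":
    # A short text message (~1 kB) versus a photo (~3 MB).
    print(f"text:  {transfer_co2_grams(1e3):.6f} g CO2")
    print(f"photo: {transfer_co2_grams(3e6):.4f} g CO2")
```

One message is negligible; the point of the sketch is that billions of such messages, multiplied through this kind of arithmetic, are what make the aggregate footprint worth measuring.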

Gabriel Karsan:
Thank you very much, Dr. Monojit. And I believe we live in one world, under one sun, breathing the same air together. So this is something that is for us; it is people-centered, it is a people problem. And we've heard so many solutions, like using satellites and high-altitude connectivity devices to help mitigate the climate risks that come with internet connectivity. So if we pivot to solutions, let me go to India, a place where you actually feel climate change, but which is also quite technologically progressive and might contribute to creating an eco-friendly, green, and sustainable internet.

Ihita Gangavarapu:
Thank you so much, and I think great points by all of my panelists here today. Excuse my sore throat. When we look at technology and connect it to the environment, we tend to think about how it affects the environment. But one major component, or maybe a different lens, would be how we leverage technology to monitor and track various environmental parameters, and add intelligence to it, so that when we conduct research and come up with standards and regulations, it is all data-backed. My personal experience is with Internet of Things (IoT) technology, back in India, working on smart cities. We were looking at IoT-enabled smart cities in India, with multiple verticals and use cases around air pollution monitoring, energy monitoring, and water quality monitoring. And it is not just at the city scale: even in a small room, how can we monitor CO2 levels? We have even placed IoT sensors in household kitchens and were able to find out the carbon footprint of, let's say, the type of bread being prepared there. So I feel the issue of carbon footprint gets worse if we don't have accurate measures for determining the emissions derived from certain processes, products, services, and technologies. That is where I want to bring in IoT. When you talk about the Internet of Things, it's the internet of all "dumb" things, that's how I would explain it. You have smart refrigerators, you have baby monitors; all of these items are connected to the internet, and they have sensors that monitor various parameters, so you have a lot of data from all of these IoT devices. One really interesting use case is IoT in the agricultural domain to monitor, let's say, soil moisture.
And you can have IoT for monitoring the health of a forest; you can even predict when a forest fire is likely to happen. All of these are really important applications of IoT that give you data-backed results and actions. In smart cities in particular, IoT also helps with urban planning, smart waste management, and traffic management. These may seem relevant only in a local context, but in a global context they have a lot of impact on the carbon footprint. Another important aspect I want to highlight is environmental compliance. Every country and region might have its own set of issues or factors contributing to the footprint, but compliance with global standards is really important. Back in India, I've seen people, NGOs, and even big organizations say that cost is a factor, and that is why they don't follow environment-friendly services or products. Having global standards around this will help streamline certain processes so that cost no longer stands as a barrier. That's the thought process I wanted to share with all of you, and I'll hand it over to Karsan now.
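The monitoring pipeline Ihita describes, collecting readings from many IoT sensors, aggregating them, and flagging problem spots, can be sketched in a few lines. The sensor names and the 1000 ppm CO2 alert threshold below are assumptions chosen for illustration only:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    """One measurement reported by an IoT sensor."""
    sensor_id: str
    co2_ppm: float

CO2_ALERT_PPM = 1000.0  # assumed indoor air-quality alert threshold

def summarize(readings):
    """Average CO2 per sensor and flag sensors whose average exceeds the threshold."""
    by_sensor = {}
    for r in readings:
        by_sensor.setdefault(r.sensor_id, []).append(r.co2_ppm)
    summary = {sid: mean(vals) for sid, vals in by_sensor.items()}
    alerts = [sid for sid, avg in summary.items() if avg > CO2_ALERT_PPM]
    return summary, alerts

# Example: a kitchen sensor trending high, a bedroom sensor within limits.
summary, alerts = summarize([
    Reading("kitchen", 1200.0),
    Reading("kitchen", 1000.5),
    Reading("bedroom", 500.0),
])
```

The same aggregate-then-threshold pattern generalizes to the soil-moisture and forest-fire examples: only the parameter and the threshold change.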

Gabriel Karsan:
Thank you very much, Ihita. You mentioned a very key element: having evidence- and fact-based mechanisms. And I caught something you said: we have to develop a metric for measuring internet traffic, because that can be a source that helps guide the study and research that's needed. There is so much centralization when it comes to technology and climate change, and I believe the only solution is grassroots voices, because in the end it is people, processes, and platforms that, when connected, help mitigate these issues. I'm really happy to see the multistakeholder approach with all your voices here; the climate debate can be sorted when we all speak. So I would like to welcome you, our dear audience, to share your remarks, share your questions, and most importantly share your voice, so the next generation can be held accountable, and so that 100 years from now it can be one earth, one internet, protected. So please, the audience, the floor is open for your remarks. Maybe someone has something to add; let me pass you the microphone.

Audience:
Hello, thank you. I think it's a great conversation, and, as a speaker mentioned, it is a conversation where younger participants and the youth can make a difference. One thing I will add is that the narrative, and how we think about measuring the eco-friendliness of internet infrastructure, is very important. At DotAsia we have been working on this since 2020 with the APNIC Foundation, on the EcoInternet report; you can check it online at ecointernet.asia. What we did is exactly the concept that was discussed: the internet of course has its own carbon footprint and its own impact on climate change, but what about the positive impact? How do we think about that? That is what the EcoInternet Index was trying to do, and this is the first year we are releasing a ranking of countries by how eco-friendly their internet infrastructure is. We look at three axes. One is the economy, where we look at the balance, or I guess the replacement, of the old, more carbon-heavy economy by the digital economy. The second is energy access, really the energy source that powers the internet. And the third area, which was touched on a little as well, is network efficiency, since the network itself can also be made more efficient. Beyond the ranking, what is more relevant to this conversation is what can be done. For users and young people, policy interventions are going to be important, but I think a grassroots movement can build the awareness that we can be more carbon-conscious users of the internet and call for data centers to use more renewable energy.
Also, one key finding in the report that we found really interesting is that digital inclusion, a recurring agenda here at the IGF, actually comes hand in hand with sustainability. In the most remote places it is better to use solar energy and off-grid energy, and build that capacity, rather than try to extend the more carbon-heavy grid into remote areas. So, in fact, these are things that can jumpstart people's awareness that the network and the infrastructure themselves can also help. The balance between them is what we want to achieve, and being conscious about it is very important.
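The three-axis structure described above could, in principle, be combined into a single country score as follows. This is only a generic sketch of a weighted composite index: the actual EcoInternet Index methodology is documented at ecointernet.asia, and the equal weights and the 0-100 scale used here are assumptions:

```python
def composite_index(economy, energy, efficiency, weights=(1/3, 1/3, 1/3)):
    """Combine three 0-100 axis scores into a single 0-100 composite score."""
    scores = (economy, energy, efficiency)
    if not all(0 <= s <= 100 for s in scores):
        raise ValueError("axis scores must be in [0, 100]")
    return sum(s * w for s, w in zip(scores, weights))

def rank(countries):
    """Rank {name: (economy, energy, efficiency)} by composite score, best first."""
    return sorted(countries, key=lambda c: composite_index(*countries[c]), reverse=True)

# Hypothetical axis scores for two made-up countries.
ordering = rank({"A": (80, 70, 90), "B": (40, 50, 60)})
```

The interesting design choice in any such index is the weights: shifting weight toward the energy axis, for example, would reward renewable-powered infrastructure over digital-economy growth.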

Lily Edinam Botsyoe:
Thank you very much. That submission was spot on, and it reminded me of something from the 2019 session on the environment. Someone had mentioned, like you shared, that there is interest in how we can use technology to solve climate issues, while at the same time technology contributes to the very problem we are trying to solve. That's always a dilemma: how do we balance all of that? But that conversation had come from the angle that for the longest time there has been a focus on the economic side of technology, how technology can foster development and bring profit, and little to no attention to sustainability. There were three angles mentioned, but the point was that everybody involved needs to reconsider what we focus on in technology beyond profit, and to see how it also impacts the world we live in. That continuous focus on the economic angle for the longest time didn't help, so the impact part, both positive and negative, is something people should consider looking at again, along with how we can leverage technology for building sustainability. While I say this, a question has come in the chat, and I'll read it, because I'm co-moderating with Karsan to start a conversation along this line. Somebody from Ghana has said that some of the solutions articulated in the past, spearheaded by government, are quite historic and don't take into account a bottom-up approach. How do we bridge the gap, then? That's a question for us.

Gabriel Karsan:
Thank you, thank you, Kaiga. First of all, just to add to that: I believe we are caught in misinformation about what the climate agenda is, and this is due to the poor-quality, un-fact-checked data that most of our governments hold. I'd like to thank everyone who just provided so many grassroots solutions. With that grassroots-and-government conversation, I think the multistakeholder approach could be a mechanism to build on. But I would also like to call on Dr. Monojit, because you have some experience working with government and academia. What are your points on this?

Monojit Das:
Thank you. On this point particularly, it has to start from somewhere, and probably that somewhere can be us. That is also why you see that all the individuals on this panel are associated with many other things: security, artificial intelligence, privacy. But we feel the environment is something where, as a contribution to the society and the government we live in, we can take a little of our time, not just for the networking we do outside, but actually on the ground, to find where the synergies to collaborate are, and why the dilemma exists between actual implementation on the ground and on paper. The debate between the multistakeholder and the multilateral will continue to exist for a long time; whether we like it or not, it is here to stay, because there are differences that are not going to be easy to solve. But along these lines, as we discussed, we can start small: finding out how much internet energy a communication from friend A to friend B consumed, or whether my home network complies with environment-friendly practices, or whether a household follows a carbon-neutral policy or not. These can be first steps, some ideas that can then be implemented, because the internet too started in a small lab, probably a little bigger than this room, with a few computers, in a world that did not yet have many telecom companies, and now it connects the most distant places. So this is the start, and we can make the change.

Gabriel Karsan:
Thank you very much. I'd like to go to Annett. Much has been said about policy and its role in mitigating our crisis here. Annett, please share, because Kenya is quite a pioneer in terms of the circular economy, and this is built on the traditions there. Just a fun fact: it comes from the Maasai culture, a people whose traditions are quite incredible in their ingenuity and their use of indigenous processes to cultivate a circular economy based on their own mechanisms. I believe that is where we can find the solution. Annett, you have five minutes to share your points, if you can hear me.

Annett Onchana:
Thank you. I would like to shift the discussion a little and give an example from Kenya. First and foremost, last month, in September, we had the Africa Climate Summit in Kenya. One of the call-to-action items was, and I quote, a call for "access to and transfer of environmentally sound technologies, including technologies that consist of processes and innovation methods to support Africa's green industrialization and transition." That was a step in the right direction. Tackling the internet's environmental footprint is key to driving that agenda, so this is a call for collaboration and cooperation among all the different stakeholders in tackling the digital footprint. Also, it is often said that we are creatures of habit, so we should question what we can do at the individual level to tackle our own internet environmental footprint. For example, when buying new gadgets, are we creatures of trend, or do we focus on functionality? Those are just some of the mitigation efforts we can use to reduce the internet's environmental impact. Thank you.

Gabriel Karsan:
Thank you very much. You talked about inclusivity, and that is key. Well, since I like putting people on the spot: Ernest, you're quite proficient in the field of standardization. How can standardization help mitigate risks and make the internet sustainable? Please, Ernest.

Audience:
Thank you very much, Karsan; you caught me off guard there. All right. I like what Ihita said about measuring the carbon footprint, so that's what I'm going to focus on. I believe one of the very important roles standards can play is providing a consistent framework for establishing and quantifying reports, methodologies, and data collection on how we measure the carbon footprint and other issues. By following these standards, organizations can manage their carbon emissions, identify areas of improvement, and make informed decisions. For example, we have various standards on environmental sustainability; I'm sure most of you know about the ITU. You can just look at the ITU-T standards on the environment. We have standards that address climate change, how we mitigate data issues and how we spend data on the internet, and our internet footprint and how we use the internet. So I believe standards play a role in shaping inclusive policy on how best we can shape our environment to be more sustainable and eco-friendly. The other thing I would like to suggest is that collaboration in everything is key. We have the private sector, civil society, the technical community, and government, so it's best that we collaborate to shape one ideology and one agenda to ensure that the internet becomes more sustainable. We can talk about being environmentally friendly, but it is the experts from the technical community who can give us insights into the best ways to do it.
Civil society can provide the awareness aspect and reach out to the masses; the private sector has good data and the technology at the end of the day; and government can provide policies and has the final say. They have the plumbers; some of us are the pipes, but they have the plumbers. Thank you very much.
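The "consistent framework" Ernest describes usually boils down to the pattern that emission reporting standards formalize: activity data multiplied by an emission factor, summed per category. A minimal sketch of that pattern, where the factor values are placeholders and not taken from any actual standard:

```python
# Minimal sketch of the "activity data x emission factor" pattern that
# carbon-reporting standards formalize. Factor values are placeholders.
EMISSION_FACTORS_KG_CO2E = {
    "grid_electricity_kwh": 0.475,  # assumed kg CO2e per kWh consumed
    "diesel_litre": 2.68,           # assumed kg CO2e per litre burned
}

def footprint_kg(activities):
    """Sum emissions for {activity: amount} using the shared factor table."""
    total = 0.0
    for activity, amount in activities.items():
        if activity not in EMISSION_FACTORS_KG_CO2E:
            raise KeyError(f"no emission factor for {activity!r}")
        total += amount * EMISSION_FACTORS_KG_CO2E[activity]
    return total

# Example: a small data centre's monthly electricity plus backup diesel.
monthly = footprint_kg({"grid_electricity_kwh": 1000, "diesel_litre": 50})
```

The value of a standard here is precisely the shared factor table and method: two organizations applying the same factors to their own activity data produce comparable numbers.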

Lily Edinam Botsyoe:
Thank you so much. We're going to come back online for anybody who has a question or a contribution, but because we also want to hear your recommendations, we'll pass a microphone around in a bit. Before we do that, we're going to ask Ihita to respond to the question of how to bridge the gap, especially when government initiatives aren't doing what they should, as was asked by Keja online.

Ihita Gangavarapu:
Actually, that's a really important question for us to think about. I'm not sure we have a definite answer that will go into action right away, but it's something we should all think about and discuss. The first thing that comes to my mind, looking at all of you in the room, is consultations: different stakeholders coming together and discussing the work. Every small or big effort will have a good or a bad impact, and we're looking for a good impact in terms of reducing the footprint and making sure the environment is sustainable and healthy. But what is a realistic start for us? That's something we have to ideate on in consultations that, let's say, a government facilitates in its country, making sure the private sector and civil society come together, and that those developing technologies incorporate this from the design point of view itself, in their supply chains, in their products, in various processes. So consultations is one point. The second point is cost. As I mentioned earlier, cost is considered a barrier for people developing environmentally conscious services and technologies, so incentives from governments to the private sector, or to those making an effort to reduce the footprint, are another point I'd like to highlight. And we also have a role to play: we have to be conscious of how our daily activities contribute to the global environment, what kind of effect they create, whether it is a bad effect, and how we propagate this. And I really liked that Ernest mentioned standards; he took the example of ITU standards for the environment. I think standardization is a third aspect, because a lot of standards come not just from the multilateral level but also from multistakeholder organizations.
So I think that is where an echo of how environmentally sound decisions need to be made in the digital space comes together. So yeah, those are my inputs.

Lily Edinam Botsyoe:
Thank you so much for sharing, Ihita. Now we're going to start hearing from you. Is there anyone who has heard of, read about, experimented with, or learned of a best practice with regard to sustainability in technology that they would want to share? If so, by all means put up your hand and I'll reach you with a microphone. We want, as much as possible, to also learn from you; if there is anything, please let us know, and we can add it to the recommendations that we have. Any idea whatsoever, you can put up your hand and I'll give you the microphone. Right, do you have a question? A question for the panel, maybe? No? Anybody who wants to contribute to this conversation so far? Oh, the professor would like to add to it?

Audience:
Yeah, sure. Though I'm speaking a little late, I really appreciate interacting with you, and I really appreciate the perception and the efforts you are putting into a sustainable cyberspace. Dr. Monojit and Ihita, all of you are working really hard on this. So asking a question of all of you would be challenging for me too, because everything is so fine and filtered. Right, okay.

Lily Edinam Botsyoe:
Thank you. Thanks for coming to support us; we appreciate you and the friendship. Does anybody want to ask a question about sustainability in terms of technology, green technology, clean technology, or clean energy? Put up your hand and ask, or if you have a submission or a recommendation, you can also put up your hand and I'll give you the microphone to share.

Gabriel Karsan:
We have one? All right, okay. Hello. We are all addicted to hydrocarbons, sadly, and this is something that powers us. In this situation, there are things which work and things which do not work. Our session is mostly about listening to all of you, so I will pass the mic. Tell us: what is the one idea you think we can work on? Because this is a small group, and this is where change happens. So please, respectfully, share just one thing with us, beginning here.

Audience:
I don't think we have a lot of ideas to add, because we are working on this as well, and we want to listen. But I think the narrative and the measurement are really important, and if we can come up with a way to really talk about this, that would be important. I would encourage you: tomorrow morning there's going to be a main session on the environment where we will be talking about this, so do bring some of this discussion there and ask the experts. Actually, I'll be moderating that session as well, so please bring today's ideas there. Now I will help you pass the mic.

Hi, I'm Jasmine; I'm actually with the admin team of DotAsia, Hong Kong. There's a solution I want to share; it's more of a sustainable housing solution. There's a company in Hong Kong called AMPD Energy. They've been using special materials and technology to reduce the carbon footprint and emissions in transitional housing and warehouse units. I'm not an expert, so I forget the term for the materials, but I'm just calling it AMPD Energy because I have been there before. This is one case I would love to share, especially around housing, which, as you know, is quite a serious problem in Hong Kong, so we definitely want to reduce the carbon footprint and impacts of housing materials and infrastructure. That's just a little thing I could contribute. I'll pass my mic to the younger generation, also from Hong Kong.

Yeah, I'm from Hong Kong. My name is Ethan, and I'm from the Wong Pao Foundation. The foundation aims to reduce paper waste and give books second lives.
We already have some results in one year, since it's a pretty new idea, or you could say a new company. We will have a lightning talk on Wednesday, so if you are free, you are very welcome to join. That's all.

Oh, okay. Hello, thank you for the invitation. I'm Danny, also from Wong Pao, from Hong Kong, the same group. I have some points to share about what I have encountered in Hong Kong. Everybody wants to talk about sustainability and protecting the environment, but when you want to implement it, you find that you always need a business model. You always have to raise funds; you always need people who will give you money. But whether you raise funds yourself or the money comes from the government, it's always very difficult. I find that many, many people have the awareness to protect the environment and take initiative, but in the final round there's no money, no business to support it. When you go into business, they always want to make an impact, but it's not social impact, it's business impact. So I think there is a gap between the commercial world and sustainability initiatives. And I believe that's why we are here: we have some synergy, and we have to develop many kinds of synergy. An ESG environment is possible; a lone initiative is always hard to sustain because there is no financial support. That's what I wanted to share, and I hope we can find some synergy or other ESG solutions. Thank you.

Oh, hello. We come from the same group.
Danny has already mentioned what happens in our company, but I want to address the broader situation. Sometimes people say Hong Kong is a well-developed city, but in terms of the SDGs we move at a very slow pace, because right now we are just doing things like recycling bottles, cans, and paper, and that's it. So what we are working on is how to make use of technology to move the sustainability path forward. That's why we came here: to absorb the technology and the work that has been done around the world, so we can bring it back and provide solutions, maybe to the government, and promote it within Hong Kong, to catch up with how the world is working on this. Those are the thoughts we wanted to share. Thanks.

Okay, thank you for giving me the mic. I'm a researcher at the University of Namibia. I was sharing a bit earlier at the main session on artificial intelligence, where I heard a speaker say we must use AI to address the most pressing issues, including climate change. In my opinion, it's always interesting to point out that while there are of course possibilities for AI to address issues like climate change, we always need to stress that AI itself has an impact on climate change. So a question could be: shouldn't we wait and first research the impact of AI before trying to develop these technologies to address the issues? That would be a question for you, actually. A question. That's a good one to ask. Thank you.

Thank you. My name is Terry, and I'm also from Hong Kong, so I'll skip that part. Throughout the talks, I keep thinking about one question: what can I do?
What can I, as a youth from the other side, do and contribute to the youth, to the people in the global South, in Africa? Because, as we know, we face different situations with digital technology, for example digital inclusion. In Hong Kong we have the most advanced access and availability of internet, but our digital literacy is actually not very high. In Africa, in the global South, it's another situation; maybe you are still working on how to get digital devices at all. So this is a lesson, or maybe a question, for me to think about: what can I do for youth, or for the other side of the world? Thank you.

Lily Edinam Botsyoe:
Thank you so much. We have two questions right now. The first is: shouldn't we be researching AI more, to ensure its impact wouldn't negatively affect the environment before we develop it? In other words, how do we make sure AI is serving us, and serving us well, rather than contributing to the issue? One of the things Ernest hinted at was policies. Usually when we say "policies" it feels far-fetched, but these are the recommendations we give to governments so they can see where they may be at a disadvantage with a technology they deploy. Because technology evolves fast, policy often lags behind: people adopt technologies, sometimes without even the basic infrastructure in place, without ensuring they are deployed within confines that protect humans and make them beneficial to society. That is the problem you are seeing: policies that ensure the right deployment and right usage are sometimes missing. And the process of building policy is often bureaucratic and lengthy, not swift to implement, so for a long time the disadvantage is felt before there is a curative approach to addressing it. It's usually not done before things happen; nobody thinks of it until it's really an issue, and that's what we are seeing. So if there were a policy, perhaps even a global one that countries learn from, adapt, and customize for their own context to deploy AI for climate change initiatives, who knows, it would probably be helpful. My best thought is policy, because it helps us create guiding principles for how some of these technologies are deployed. You can add to it, right? Ihita also wants to add to the point. Ihita?

Ihita Gangavarapu:
Yeah, thanks, Lily. She has given it a policy dimension, and I think guiding principles are really important. Looking at AI, it's definitely here to stay. We still use the term "emerging technology", but I think it has quite emerged by now and is an integral part of everyday life, especially for those who use such services regularly. In cybersecurity we use the term "security by design". Can we have something similar for the environment: environmentally conscious by design? I think it's a great question, because it starts at the very beginning of the supply chain, when you are designing and developing, and only then comes deployment. Can we have something there? If policy starts right at the beginning, with the technology itself, then the whole chain is conscious of what the repercussions on the carbon footprint could be of not following a particular policy. That's the kind of thought process that I have. I'd be happy to add more, but given the time I'll hand it back to the moderators.

Monojit Das:
In addition, there’s one thing: whether we should first focus on how much potential or how much threat AI can pose to us, and then we can think about the extension part. It is an unending question, because, you see, a few months back the very top CEOs decided, you know, we should stop researching AI for six months. You probably saw that news going around. But did they stop working? Did the companies stop? No, they didn’t stop. So it’s like they’re working and they want others to stop, you know. At least they want to be on the advantageous side. That’s what I feel. Because unless and until we research to what extent we can go, we cannot find out what the drawbacks can be. It should be parallel, you know. We have to build a house, and then we can find out whether it’s earthquake-resistant or not. Before you can shake it, you can’t make a two-story building and then think about whether a six-story one will fall or not. We have to build the six-story building, and then probably we can shake it to see whether it falls or not.

Lily Edinam Botsyoe:
I like the analogy at the end, with the two-story and the six-story: you’ve got to test it to see. That’s interesting. There was a second question, actually; it said, what can I do, right? We as individuals, too; we’ve heard demands from different stakeholder groups, and the question is, what can we do? I would start from awareness creation. So you’ve heard something here, and you want to go and, probably, you can use social media. That sometimes feels very little, but if it were consistent and continuous and created awareness in such a way that people are seeing it, then people get to know that there’s an issue and there is a demand for some hearing, for people to take some action. So I would take that angle; I know some others would want to add to it. For me, one of the things people need to be able to swing into action is knowledge: to be a knowledge sharer, to be the one to create awareness and help people to swing into action. Who wants to add something? What can individuals do?

Gabriel Karsan:
Thank you very much for those contributions. All I want to say is that what we should do is an important question. Be bold, be vocal, be accountable. It’s one internet, it’s one world. The solutions are here. We need to provide digital leadership on solutions. And I’m so happy for our colleagues from Hong Kong and all in the room who shared this perspective. It shows how much we are connected: the internet, interconnected networks, but behind those nodes are people. And the solutions are always people-centered. The first, second, even third industrial revolutions were mitigations based on challenges. We see how far eco-friendly design has gone in hardware. We can do it as well on the internet and make it sustainable. So I will welcome my panelists to give some parting words to finish. Mine, I remind you: be bold, be vocal, and let’s do it, let’s cooperate. And thank you all.

Monojit Das:
Thank you. Your turn. Well, thank you. I feel, if you see it this way, that those of you who have come to this topic might have attended other sessions as well. So this topic itself getting selected, and us getting an opportunity to present in front of you, and you opening up, speaks to the fact that this idea is a success. And I’m sure the success has already started to come from you. And with this, as my co-panelists have highlighted, it can start with small steps, taking initiative, making groups. And that will lead us, probably in the next session and next year, to having a bigger platform to discuss this. Thank you.

Ihita Gangavarapu:
Yeah. So I love the motivation and the points by Karsan. And I also agree with the points made by… That is something we’d be able to do with the internet, right? And at the end of the nodes, the people. So we’ll be able to talk about what solutions we require and what our concerns are, and we should leverage the internet for that. In addition to that, what I’d like to say is that there are organizations and initiatives that are working and putting in their efforts to create products and services that are environmentally conscious, right? And we should be identifying them and supporting them. So I think that is something we should be working towards, helping them. Thank you.

Lily Edinam Botsyoe:
So I get the last say, and this is a thank-you to all of you for being an engaged audience, and to you online, thank you for making the time to be a part of the conversation. We’ve come to see that technology can essentially be helpful for mitigating what we see when we talk about climate change agendas, and when we talk about the negative impact of technology, it can also maybe be contributing to the problem. And I’ll make the point clear that it behooves us to start the conversation, get government, get authority, get people to be aware. And at the end of this, let us know that connecting the next billion, connecting the unconnected, is inherently a matter of sustainability. Thank all of you for coming. The notes will be shared with all of you, and let’s create some change. Thank you. We can all come here and join us for a group picture, please. So please, let’s get a group photo, and thank you very much, arigato. Thank you.

Annett Onchana

Speech speed

118 words per minute

Speech length

252 words

Speech time

128 secs

Audience

Speech speed

149 words per minute

Speech length

2275 words

Speech time

916 secs

Gabriel Karsan

Speech speed

189 words per minute

Speech length

1153 words

Speech time

366 secs

Ihita Gangavarapu

Speech speed

179 words per minute

Speech length

1510 words

Speech time

506 secs

Lily Edinam Botsyoe

Speech speed

191 words per minute

Speech length

3182 words

Speech time

1000 secs

Monojit Das

Speech speed

190 words per minute

Speech length

1729 words

Speech time

545 secs

Increasing routing security globally through cooperation | IGF 2023 WS #339

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Annemiek Toersen

The Netherlands Standardization Forum plays a significant role in promoting interoperability and provides advice to the Dutch government regarding the use of mandatory open standards. The forum consists of approximately 25 members from various sectors, including government, businesses, and science. One of their key efforts is the compilation of a list of mandatory open standards, primarily focused on public sector organizations. This ensures effective communication and information sharing between different governmental entities.

Open standards are essential for secure and trustworthy data exchange, enabling seamless communication and compatibility between different systems and technologies. They also contribute to accessibility for all individuals, regardless of their technical capabilities, and promote vendor neutrality by reducing dependence on specific vendors.

The Netherlands Standardization Forum utilizes the internet.nl tool to monitor and measure the growth of internet security standards and other open standards. This tool helps conduct annual reviews of procurement tenders, assessing the government’s performance in implementing open standards. The forum reports these results to the cabinet, ensuring transparency and accountability in open standards adoption.

Annemiek Toersen, a supporter of the forum, advocates for the use of Resource Public Key Infrastructure (RPKI) to prevent internet route hijacking. To support its adoption, Toersen proposes sponsoring courses on RPKI to educate and train personnel within Dutch government institutions.

Education and workshops play a crucial role in promoting the adoption of open standards. By providing information and training, governments can make informed decisions and effectively implement these standards. The European Union (EU) also monitors the adoption rate of internet standards, including RPKI, to ensure that European countries stay up to date with the latest advancements.

Internet.nl, an open and accessible tool, is available worldwide for implementation. It has already inspired countries like Australia, Brazil, and Denmark to adopt it. The availability of an English version facilitates global cooperation, and the team behind Internet.nl offers assistance and support to ensure successful implementation.

For a procedure to be accepted, substantial deployment and support are necessary. The involvement of multiple organizations helps validate its efficacy and practicality for wide-scale implementation. Public discussions and workshops are necessary to improve routing security and advance technologies like RPKI.

In conclusion, the Netherlands Standardization Forum plays a vital role in promoting interoperability and advising the government on the use of mandatory open standards. Open standards facilitate secure data exchange, accessibility, and vendor neutrality. The forum uses the internet.nl tool for monitoring and measurement, and Annemiek Toersen supports the use of RPKI. Education and workshops are crucial for the widespread adoption of open standards, and the EU monitors the adoption rate of internet standards. Internet.nl is available worldwide, and the acceptance of a procedure requires substantial deployment and support. Continued efforts are needed to progress security measures and advocate for improved strategies in the digital realm.

Olaf Kolkman

Routing security is a critical concern when it comes to safeguarding the core of internet infrastructure. The argument is that protecting the routing space is vital, as it serves as the backbone of the internet. To address this issue, a prioritization of routing security is necessary.

The Mutually Agreed Norms on Routing Security (MANRS) have been established to tackle routing security challenges. MANRS offers a set of measures that participants in the routing system agree to adopt. Different programs are available for Internet Service Providers (ISPs), Content Delivery Networks (CDNs), Internet exchange points, and vendors. The MANRS Observatory helps track incidents and community adoption, ensuring transparency and accountability.

Another proposed measure is the implementation of certification schemes to enhance routing security. Participants can obtain certification through an audit scheme, potentially increasing their market value. The argument suggests that a certification scheme could create higher value in the market, thereby incentivizing participants to prioritize routing security.

Collaboration among routing system participants is emphasized as a crucial aspect in addressing common action problems. The lack of visibility among participants is seen as a challenge, but by making each participant’s commitment to routing security visible, this issue can be overcome. Increased visibility could incentivize the adoption of routing security measures and promote a more secure routing system.

Olaf Kolkman, although not directly involved in the process, raises a question about the specific Request for Comments (RFCs) used in the initiative. He suggests forwarding the question to individuals such as Bart or Rüdiger, who may have the answer. This demonstrates a willingness to seek expertise and knowledge from relevant sources.

In conclusion, securing routing is of utmost importance for protecting the core of internet infrastructure. Initiatives such as MANRS and certification schemes aim to enhance routing security. Collaboration, visibility, and certification can incentivize participants to prioritize and adopt routing security measures. Seeking input from relevant experts highlights the commitment to obtaining accurate information. An integrated approach is necessary to address challenges and ensure the secure functioning of the routing system.

Verena Weber

Routing vulnerabilities persist in the world of internet security due to various challenges. These challenges include the collective action problem, where the actions of one actor depend on others in the system. The cost of implementing routing security practices is also a challenge. Furthermore, available security techniques require a layered approach, which can increase the risk of mistakes.

To improve routing security, there is a need to enhance the measurement and collection of time series data on routing incidents. Governments can support this effort by funding and ensuring continuous measurement. Several countries, such as the United States, Netherlands, Brazil, and Switzerland, have shown a proactive approach towards routing security and can lead by example.

Governments can play a significant role in bolstering routing security by implementing best practices, facilitating information sharing, and defining common frameworks with the industry. Information sharing and wider adoption of implemented practices can also contribute to improving the situation.

At a broader level, awareness-raising and training at the EU level are important to equip individuals with the necessary knowledge and skills to tackle routing security challenges effectively.

In summary, routing vulnerabilities persist due to various challenges, but governments have an increased interest and can play a crucial role in improving routing security. By actively engaging in efforts to enhance data collection, implement best practices, and facilitate information sharing, governments can strengthen routing security. Additionally, awareness-raising and training at the EU level are essential for addressing routing security issues effectively.

Moderator

During the discussion, operators expressed concerns about the deployment of Resource Public Key Infrastructure (RPKI). Some operators were hesitant to pay for routing securities, raising doubts about the effectiveness and value of such investments. These concerns indicate a negative sentiment towards RPKI deployment. It was also noted that further steps, including ASPATH validation, are needed to enhance routing security measures. This suggests a neutral stance towards the need for additional measures to improve the security of routing.

Operators’ skepticism about investing in routing securities reflects their reluctance to allocate resources without clear benefits or guarantees. This negative sentiment emphasizes the need for persuasion and reassurance to encourage operators to adopt and invest in routing security measures.

Furthermore, there was a request for clarification regarding the tracking of governments on internet.nl. The concern raised implies uncertainty or confusion about the extent to which governments can monitor or track activities on the internet.nl platform.

On a positive note, it was highlighted that Annemiek Toersen’s team provides assistance and inspiration to other countries through the English version of internet.nl. This knowledge exchange among countries, such as Australia, Brazil, and Denmark, illustrates the positive impact Annemiek Toersen’s team has in promoting the use of internet.nl and its code.

Lastly, the moderator sought clarification from Annemiek on RPKI standards during the discussion, indicating a need for further understanding or insight into the implementation and impact of RPKI standards.

In conclusion, the discussion highlighted concerns and skepticism among operators regarding RPKI deployment and investing in routing securities. The need for additional measures, such as ASPATH validation, was emphasised to enhance routing security. There was also a request for clarification regarding government tracking on internet.nl. However, the positive contribution of Annemiek Toersen’s team in supporting and inspiring other countries with the English version of internet.nl was acknowledged. Further clarification on RPKI standards was sought from Annemiek, indicating a desire to gain more insights into this topic.

Katsuyasu Toyama

The deployment of Resource Public Key Infrastructure (RPKI), specifically the use of Route Origin Authorizations (ROAs), varies across regions. Europe and the Middle East have greater adoption of ROA, with approximately 70% usage, while Africa and North America lag behind with less than 30%. This difference was observed in data from APNIC Labs.

One of the contributing factors to the slower adoption of ROA is the lack of knowledge and skills among internet service provider (ISP) operators. In Singapore and Thailand, it has been reported that some operators lack the necessary expertise to effectively implement ROA. This skills gap impedes the deployment of ROA and highlights the need for more practical understanding in this area.

Another challenge arises from the operation of ROA cache servers, which are currently available as open-source software. Efforts in Japan are being made to provide ROA cache servers at Internet Exchange Points (IXPs), but concerns have been raised regarding the security of the communication channel between routers and the ROA cache. The absence of encryption raises security concerns and emphasizes the need for improved measures in this domain.

To encourage broader adoption of RPKI and ROA, it is recommended that organizations or governments issue recommendations for their deployment. In Singapore, for instance, governmental regulations have helped to some extent in promoting ROA implementation. Such industry or country-level recommendations can lead to wider adoption and improved routing security.

The occurrence of route leaks underscores the importance of striving for improved global routing security. Route leaks have negative impacts on internet stability and security. The need for enhanced security measures, such as Autonomous System Path (ASPATH) validation, is evident. However, ASPATH validation is acknowledged as an imperfect solution that requires further development to address existing limitations.

The enforcement of RPKI is currently driven by penalties imposed on non-compliant entities. Although this serves as a motivation for deployment, operators remain skeptical about investing in routing securities. Their skepticism may stem from concerns about practicality, effectiveness, and potential costs associated with implementing such measures.

In conclusion, the deployment of RPKI, particularly the use of ROA, varies across regions, with Europe and the Middle East leading in adoption. The skills gap among operators, challenges related to ROA cache server operation, and operator skepticism towards investing in routing securities present obstacles to wider adoption. However, recommendations from organizations or governments, improved global routing security measures, and ongoing efforts in ASPATH validation can contribute to broader deployment of RPKI and advancement in routing security.

Audience

The analysis examines the discussions surrounding the implementation and adoption of Resource Public Key Infrastructure (RPKI) and routing security. Various speakers shared valuable insights and perspectives on the subject.

One speaker highlighted the commitment of a non-profit organisation to provide free online training in technologies such as BGP security and RPKI. This initiative aims to assist individuals facing budget constraints that prevent them from travelling or attending physical training sessions. The organisation’s focus on social impact rather than profit-making reinforces their dedication to promoting knowledge accessibility.

Another speaker emphasised the flexible training programs offered by the organisation. They expressed a willingness to negotiate tailor-made programs to suit the community’s needs. Additionally, they were open to discussions about offering discounts for training sessions, considering factors such as the number of participants and potential impact.

The analysis also discussed the automation of RPKI, with contrasting viewpoints presented by two speakers. One speaker suggested that automation has facilitated the expansion of Public Key Infrastructure (PKI) with web servers, citing the example of Let’s Encrypt, which provided free certificates based on Acme. This automation was seen as a catalyst for PKI expansion. However, another speaker disagreed, emphasising the importance of resource holders personally signing statements within the portal. They argued that the process of signing statements is not so complex that it should be automated, underscoring the significance of individual responsibility in this regard.

A digital platform called internet.nl was mentioned, which currently checks only Route Origin Authorizations (ROAs) and not Route Origin Validations (ROVs). This limitation in checking ROVs was acknowledged, as it necessitates separate ISP space that has an invalid route to perform the check. This insight provides context to the capabilities and limitations of the internet.nl platform.

The European Union (EU) was mentioned as monitoring the adoption rate of modern internet standards, such as RPKI and MANRS. This observation indicates the EU’s interest in promoting the usage of these standards and highlights its commitment to enhancing internet security and infrastructure.

The analysis revealed the existence of several Request for Comments (RFCs) that have established RPKI-related standards. These standards pertain not only to the establishment of ROAs and origin validation but also introduce new objects in RPKI, such as the upcoming “ASPA.” The inclusion of these standards demonstrates ongoing efforts to develop and enhance RPKI.

The incomplete implementation of BGP-SEC, a standard specifically designed for RPKI, was a concern discussed by one of the speakers. They expressed their worries about the lack of comprehensive BGP-SEC implementation, which requires significant resources. This issue was described as often overlooked in discussions surrounding RPKI and routing security. This observation highlights a potential blind spot within the ongoing discourse and emphasises the need to address this gap to ensure the effective implementation of RPKI.

The audience also raised important points regarding the need for discussions and improvements in the implementation and deployment of BGP-SEC and routing security. It was suggested that the current focus seems to be on the immediately available options, potentially neglecting the necessity for further advancements and enhancements in the field.

Furthermore, resource allocation was deemed crucial for the future development and deployment of RPKI and routing security. The audience stressed the importance of securing necessary resources, including personnel and adequate security measures, to effectively drive advancements in these areas.

In conclusion, this analysis provides a comprehensive overview of the discussions surrounding RPKI implementation and routing security. The insights shared by various speakers shed light on the commitment of organisations to offer free online training and tailor-made programs, the potential of automation in RPKI, limitations of existing platforms, the EU’s monitoring efforts, the establishment of RPKI-related standards, concerns related to incomplete BGP-SEC implementation, and the need for discussions and resource allocation. These discussions contribute to a holistic understanding of the challenges, opportunities, and directions for improvement in the realm of RPKI and routing security.

Bastiaan Goslings

The analysis of the provided information reveals several important points regarding routing security and the adoption of open standards in the internet infrastructure. One key aspect is the Resource Public Key Infrastructure (RPKI), which offers a more secure method of routing security by using cryptography to verify the originating network of routing information. This prevents impersonation and unauthorised usage. Efforts to promote the use of RPKI and improve routing security are seen as crucial and should be intensified.

The MANRS initiative also plays a significant role in protecting the core of internet infrastructure by promoting routing security. Bastiaan Goslings, a proponent of the initiative, is positive about its next level, MANRS+. There is also an encouragement for participants to spread awareness and convince other networks to join MANRS. This highlights the collective effort required to enhance routing security.

RIPE NCC plays a vital role in providing training courses on RPKI and BGP security, which are essential for the adoption of open standards. They offer free online courses, conduct webinars and host meetings to educate individuals on RPKI and other routing security measures. Additionally, RIPE NCC is open to providing tailor-made trainings and considering discounts based on the potential impact and volume.

While RIPE NCC has not implemented an incentive programme like SIDN for adopting open standards, the idea is open for consideration. The decision to adopt such a programme would require the agreement of the members. This emphasises the importance of collective decision-making within member-based organisations.

The automation of creating RPKI space is not a straightforward process and may be perceived as technically complex or costly. However, it is worth noting that automation, as exemplified by the creation of “Let’s Encrypt,” has proved successful in facilitating the adoption of open standards in the Web PKI realm. This suggests that further advancements in automation could address the perceived complexity associated with implementing RPKI.

Regarding certificate validation, Internet.nl primarily checks Regional Internet Registry (RIR) and Autonomous System (AS) Operator certificates, rather than Route Origin Authorisation (ROA) certificates. This underlines the specific focus of certificate checking on the platform.

The analysis also emphasises the need for further improvement beyond the creation of ROAs and validation in internet regulation. Discussions have taken place regarding organising workshops for Dutch government policymakers and cooperation with RIPE to achieve these improvements. This signifies an acknowledgement of the necessity to go beyond the existing tools and approaches to enhance internet regulation.

In conclusion, the analysis reveals the importance of routing security and the adoption of open standards in the internet infrastructure. Efforts to promote the use of RPKI and improve routing security are crucial. The MANRS initiative plays a significant role in this regard, with supporters like Bastiaan Goslings actively encouraging participation and spreading awareness. RIPE NCC provides essential training courses and is open to considering incentives. Automation of the RPKI space and further improvements in internet regulation are also areas of interest. Overall, the analysis highlights the ongoing efforts and challenges in enhancing routing security and promoting the adoption of open standards in the internet infrastructure.

Session transcript

Bastiaan Goslings:
Good day, everyone. For those who are not seated yet and do intend to attend the session, please be seated. We’d like to start. We’re already a couple of minutes over time; we have a busy schedule. My name is Bastiaan Goslings. I work for the RIPE NCC, and I have the honor of coordinating and moderating a session called Increasing Routing Security Globally Through Cooperation. I think we are all very much aware, you know, we’re a couple of days into the IGF, and it’s been mentioned multiple times what the impact of the internet is and the essential role it plays in many of our societies. Whether it comes to work, leisure, education, doing business, even public services, more and more everything is being delivered online, and we’re so accustomed, you know, to using apps and devices to communicate and to consume content. It’s a given, like electricity coming out of a plug or water coming from a tap. The internet just works, which is a great thing. We saw, you know, what happened during the COVID crisis, with a lot more traffic being generated because people were working from home and learning from home. But because of all of this dependency on the underlying functionalities that support our use of online services and apps, we need to take a closer look. And in this case, we’re going to do so at the routing that underpins the internet. It’s actually one of the building blocks that everything else depends on, the actual exchange of internet traffic. So what we do is get some experts from different stakeholders on a panel and, you know, see what their different perspectives are, either regional or from a stakeholder view, and see, you know, what answers we can potentially provide, and hopefully also have a discussion with people in the room and people online. So I have the honor to present, firstly, Verena Weber, Policy Analyst for the OECD. And then here to my left, Katsuyasu Toyama.
He works for the Japanese Internet Exchange Point, JPNAP, and is also chair of the Asia Pacific Internet Exchange Point Association, APIX. Then on my right, I have Annemiek Toersen, who works for the Dutch Forum for Standardization. They’re doing some interesting work with regard to certain routing security tools, so she’s more than happy to share more of that with you. I will be providing a perspective from the RIPE NCC: what we do, both technically and in terms of engagement and community, and, you know, spreading the message. But I especially also want to thank the people who were involved in preparing this, Lauren Crean from the OECD and Benjamin Boersma from the Dutch Internet Standards Platform. This is the sequence of the speakers, and we aim to have at least half an hour of interactive dialogue with the audience. So we look very much forward to hearing what you think on this. But let me start off: routing security, RPKI specifically as the tool, which I’ll go into in a bit more detail later, and the role that the RIPE NCC plays as a regional internet registry. So what actually could you consider to be the internet? Well, in this case, we’ll use the definition that the internet is actually a collection of individually managed networks. In technical terms, those are called autonomous systems. There are more than 70,000 of those in the routing system. And for people to actually experience one internet, these networks need to seamlessly, or at least not visibly for those outside of this ecosystem, interconnect with each other in order to create end-to-end connectivity from every single endpoint to every single other endpoint. And in order to do so, these networks need to speak to each other. They need a common language. And that’s what we refer to in internet terms as standards and underlying protocols. There’s no central coordination; it’s actually an organic thing, the way that networks interconnect.
It’s mostly based on commercial business relationships and the need for reachability. But there’s no central management or authority, you know, that runs all of this. So the protocol, the language that these networks speak, is called the Border Gateway Protocol. This is actually quite an old protocol; it’s from the 90s, in the previous century. And in theory, using this protocol as a network, you’re the one, and that sounds maybe very obvious, but you’re the one that should be announcing your network identifier and the IP addresses behind that, right? That’s what the end users and the end devices use. You’re the one that should be announcing your network. But with this protocol, it’s actually technically possible for anyone to announce anything. This protocol actually assumes that everyone is telling the truth. There is no real hard built-in security in this protocol. So again, any autonomous system, any network, can announce any prefix, any subset of IP addresses. And, you know, all these 70,000 networks are not directly interconnected with each other, so most of the time traffic goes through a series of networks before it reaches its destination. This sequence of networks is called an AS path. And that’s also like a given: if you receive such an announcement, you will in essence accept it. There’s no way to actually verify whether it’s correct or not. When this information is not correct and people just share it amongst each other, it propagates to the entire internet. Again, as I mentioned, this is an old protocol, and implicitly it actually assumes that everyone, you know, that uses it and interconnects with each other is trustworthy. And when this was developed, people knew each other. So there was this peer review and, you know, what we call in Dutch social control, that was part of it. The main goal was just to make it work, no overhead.
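The trust assumption described above can be sketched in a few lines of Python. This is an illustrative toy, not a real BGP implementation: the AS numbers, prefixes, and the simple longest-prefix-match selection rule are assumptions made for the example, but they show why an unverified, more-specific announcement from the wrong network wins.

```python
from ipaddress import ip_network, ip_address

# Each announcement: (prefix, AS path). The rightmost AS in the path claims
# to originate the prefix; plain BGP gives routers no way to verify that claim.
announcements = [
    (ip_network("192.0.2.0/24"), [64500, 64496]),  # legitimate origin: AS64496
    (ip_network("192.0.2.0/25"), [64500, 64511]),  # hijacker announces a more specific
]

def best_route(dest, table):
    """Longest-prefix match: the most specific covering announcement wins,
    regardless of who originated it."""
    matches = [(p, path) for p, path in table if dest in p]
    return max(matches, key=lambda m: m[0].prefixlen, default=None)

prefix, path = best_route(ip_address("192.0.2.10"), announcements)
print(path[-1])  # the hijacker's /25 wins, so traffic is routed toward AS64511
```

Because the /25 is more specific than the legitimate /24, every router that accepts both announcements sends the traffic to the hijacker, which is exactly the class of incident (whether malicious or a typo) the speaker describes.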
And there were no ex ante security concerns here, because at the time there was no need for them. Again: no single authoritative source, no central control. That makes this susceptible to incidents. If you think of abusive behavior, an attacker can use this to impersonate another network, to intercept traffic from others, or to prevent another network from being reachable at all, basically making it disappear from the internet. And if you are able to redirect traffic, you can use it for other purposes: stealing credentials, stealing cryptocurrency, sending spam. But that is malicious use, where there is real intent to do something bad; most of the time these are actually accidents, people configuring routing sessions, configuring their routers, just making typos, and that wrong information then being propagated across the internet. So in order to make routing more secure, and this might sound quite obvious as well, you need to be able to verify the routing information you receive from another network. An announced prefix that you receive: has it actually been originated by the network that is entitled to do so? The sequence of networks that points to the originating network: is that correct, or has it been tampered with? You want to prevent the propagation of incorrect routing information. So where does the RIPE NCC come in? We are a Regional Internet Registry, a term I already used; there are five of those globally, and we cover Europe, the Middle East and central parts of Asia. Our members, mostly organizations running networks, traditionally ISPs, that need IP addresses and AS numbers to run their networks, come to us to receive those resources. And these are the resources that are needed to actually route internet traffic.
So what we do: we distribute those resources and register them in a public database, so everybody can check who is responsible for what. That way we can guarantee unique holdership: an IP address can only be distributed once and used for one endpoint, not multiple times. Combined with that, we can distribute certificates to our members, who can then cryptographically sign their IP addresses and the relationship with their autonomous system number, their network identifier, so that as a next step others can check who is entitled to use which IP addresses and which network number. And that is where Resource Public Key Infrastructure, RPKI, steps in as a tool. So how does RPKI improve routing security? Well, as I mentioned, it makes a cryptographic statement, with a certificate, about an AS number, a network identifier, and the IP addresses that are associated with it. These cryptographic statements can then be used by other networks: you can download them and use specific software tools for this, called validators, to verify whether the route announcements you receive from the networks you connect to are correct or not. That refers to the originator of an announcement; it does not really say anything about the path, the sequence of other networks mentioned in there. But that is actually something RPKI can also play a role in in the future, and that is called path validation. So the five RIRs, of which we are the one for our region, act as trust anchors here, and the whole signing of resources happens in a hierarchical fashion.
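As a rough sketch of what such a validator does with the signed statements, here is the route origin validation outcome in the style of RFC 6811. This is simplified and illustrative: the ROA contents are examples, and a real validator of course also checks the cryptography rather than trusting plain records.

```python
import ipaddress
from dataclasses import dataclass

@dataclass(frozen=True)
class Roa:
    """A Route Origin Authorization: AS `asn` may originate `prefix`
    and any more-specific announcement up to `max_length` bits."""
    prefix: str
    max_length: int
    asn: int

def validate(announced_prefix: str, origin_asn: int, roas: list[Roa]) -> str:
    """RFC 6811-style outcome: 'valid', 'invalid', or 'not-found'."""
    announced = ipaddress.ip_network(announced_prefix)
    covered = False
    for roa in roas:
        net = ipaddress.ip_network(roa.prefix)
        if announced.subnet_of(net):        # this ROA covers the announcement
            covered = True
            if roa.asn == origin_asn and announced.prefixlen <= roa.max_length:
                return "valid"
    return "invalid" if covered else "not-found"

roas = [Roa("193.0.0.0/21", 21, 3333)]
print(validate("193.0.0.0/21", 3333, roas))    # valid: right origin, right length
print(validate("193.0.0.0/21", 64666, roas))   # invalid: wrong origin AS
print(validate("193.0.4.0/24", 3333, roas))    # invalid: too specific for max_length
print(validate("198.51.100.0/24", 3333, roas)) # not-found: no covering ROA
```

A route is "valid" only if a covering ROA matches both the origin AS and the maximum prefix length; if ROAs cover the prefix but none match, it is "invalid"; with no covering ROA at all it is "not-found", and the receiving network decides what to do with each outcome.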
So the RIRs distribute certificates to their resource holders, their members, who can then use those certificates to sign their resources and create statements. Those statements are called Route Origin Authorizations, ROAs. To make more specific what RPKI can contribute here: there was already, and still is, an older system in place called the Internet Routing Registry. It's from the 1990s; RPKI was developed shortly after 2010, and I think the RFC was published in 2012. The IRR system is basically a set of distributed databases, I think there are 12 of them, where networks can register their routing objects and their routing policies: their AS numbers associated with the prefixes, the IP addresses, that they are responsible for. The thing is, if you use those databases you need to maintain them, which can be automated to some extent, but it is your responsibility to see to it that the information in there is accurate. And here too, the information is there and is very useful if it is accurate, but there is no hard way of verifying that what is in there is correct. That is where RPKI steps in. Because the RIRs are responsible for this system, they actually have control: they distribute these resources, they have insight into who can use what, so they have control over the accuracy of the data, over which network can use which IP addresses. And because cryptography is involved, it brings a hard form of trust. So as a mechanism it is quite powerful: it can prevent hijacks and route leaks, and I mentioned it is a stepping stone towards path validation. But the thing is, it is opt-in. On the one hand that is good: you are not going to enforce this, at least not where we are now. But there need to be incentives for people to actually start doing this.
So in terms of adoption, on our side you see it can differ quite substantially per region and per country. On average, on an aggregate level, close to 45% of allocated IP address space, IPv4 in this case, is covered by these statements. On the one hand that is good, and we see a growing line, but it is not going fast enough. So what are the potential factors limiting the adoption of routing security, and in this case specifically the adoption of RPKI? My colleagues here on the panel will go into more detail based on their experience. But you hear that implementing it is supposedly not technically trivial, especially if you have a quite complex network and customers and suppliers to deal with. The thing is that while many, many incidents happen, they do not really seem to have a visible impact, an impact that scares people and gives them a reason to act. And there is a collective action problem, so to speak: if you implement this, you basically help everyone else, but there is no immediate benefit, at least that is how people perceive it. You make the cost, you make the effort, and what does it then bring you? It makes things easier for others. On the other hand, if all your services are provided online and it is about continuity of service, and also about the reputation damage if things go bad, then there definitely is a reason to get your act together. And I think the OECD will go into this in a bit more detail, but it seems to be a bit of a challenge to get really robust, insightful data on this that others, and policymakers, can use. So briefly, before I end: what is the RIPE NCC doing here?
Well, it is embedded in our strategic goals to operate a resilient, externally auditable and secure resource certification trust anchor, and combined with that to promote the use of RPKI. We take our role of trust anchor very, very seriously. And to promote its use, not only are we here at the IGF to talk about this, but we also do a lot of training. We provide free online courses: anyone can go to academy.ripe.net, create an account and take the courses for free. For those that prefer a physical trainer, and initially we did this especially for our members, we travel around the service region to give in-house trainings, also with regard to routing security and best practices. And we host webinars, to make it less of an impediment for people who would otherwise have to travel; you can do it online. We also do so on request: if there is a need, as for instance we did with the Dutch government, we can organize tailor-made trainings. And then obviously there is the outreach and community building. We host many meetings where we talk about this and update the community on what we are doing and where we are. We host ROA signing parties: we just get people in one room, because it might seem quite a large threshold for people to actually do this, but if you take them by the hand and show them how easily it can be done through the portal, on the fly, then before you know it you have your ROAs and everything is green. That has been quite successful. Technically speaking, we are preparing ourselves for the introduction of ASPA, Autonomous System Provider Authorization, which is meant to take it a step further when it comes to path validation. Once the standards have been finalized and published, we will be ready to support this.
And with regard to the infrastructure itself, the hardening of its security, we are working on auditing it and having it formally certified. We are not regulated, but we actually try to act as if we are, for the benefit of us all. I also put in a slide on our internet measurement services, just a shameless plug here. There is a lot of data that we collect, via probes that we have installed all over the place, and via RIPEstat we give people a nice interface to get more insight. Also in terms of routing, there is the Routing Information Service, where we have 23 globally distributed collectors at internet exchange points that collect routing data and give people insights; that data has been collected since 1999. So there is a lot there, especially for researchers and academics, that might be useful. That was my introduction, a bit longer than I hoped for, but I hope it made sense. I tried not to make it too technical. And then I would like to hand over to Verena from the OECD. Thank you.
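An adoption figure like the roughly 45% ROA coverage mentioned earlier is, in essence, a ratio of covered to allocated address space. The following is a hedged sketch of how such a ratio could be computed; it is illustrative only, not the RIPE NCC's actual methodology, and it uses documentation prefixes and assumes ROA prefixes fall inside the listed allocations:

```python
import ipaddress

def covered_fraction(allocations: list[str], roa_prefixes: list[str]) -> float:
    """Fraction of allocated IPv4 addresses covered by at least one ROA.
    Illustrative only: assumes ROA prefixes lie within the allocations."""
    total = sum(ipaddress.ip_network(a).num_addresses for a in allocations)
    # Merge overlapping/duplicate ROA prefixes so no address is counted twice.
    covered_nets = ipaddress.collapse_addresses(
        ipaddress.ip_network(p) for p in roa_prefixes)
    covered = sum(n.num_addresses for n in covered_nets)
    return covered / total

allocations = ["203.0.113.0/24", "198.51.100.0/24"]  # documentation space
roa_list = ["203.0.113.0/25", "203.0.113.128/26"]
print(round(covered_fraction(allocations, roa_list), 3))  # 0.375
```

Real coverage statistics (such as those from the NIST RPKI monitor or APNIC Labs mentioned later in this session) are derived from validated ROA data and registry delegation files, but the underlying arithmetic is of this kind.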

Verena Weber:
Thank you, Bastiaan. And we're just trying to sort out the technical issues. So Bastiaan, could you log in on Zoom to share the presentation, so that our remote participants can see the slides as well. Meanwhile, I'll start. So good morning, good afternoon, good evening, everyone. My name is Verena Weber, and I'm working for the OECD, where I head the Communication Infrastructure and Services Policy unit. For those of you who don't know the OECD: we are an international organization currently composed of 38 member countries, with a further six countries in the accession process, and our membership spans the Americas, Asia and Europe. The idea of the OECD is really to provide a forum for member countries to exchange best practices and to advise on public policies. The organization as a whole covers a huge range of issues, from trade to education to digital policies, which is where my team sits. We have one working party that deals with telecommunication issues, the Working Party on Communication Infrastructure and Services Policy. Basically, we have a programme of work and budget in which our member countries tell us the key issues they would like to work on with us over the next two years, and you'll see that security was one of those priorities. We also do the broadband statistics for the OECD: if you go to the OECD broadband portal, you'll find all our broadband statistics for our member countries. And, as I mentioned, we had quite an important work stream with our sister working party on security in the digital economy, where we looked at how we can secure communication networks. This was a series of three reports. We had one more general report looking at the main trends, how communication networks will evolve and what that means in terms of security implications.
We had a more specific report on the DNS, and a third one, which is the one I want to present today, on routing security. And I would like to acknowledge my colleague Lauren Crean, who you can see now on the screen. Hi, Lauren. She was instrumental to the report. So let's dive right in. You could think it's quite strange that the OECD is looking into routing security. Why is that? Well, our members wanted to know more about the issues that Bastiaan already presented around routing security. Basically: what is the problem? What are the scope and scale of the routing incidents we are facing today? That was one important point we tried to address. Then, if we all agree that there are incidents when it comes to routing security, the next question is: how can we mitigate them? What security techniques have been proposed and are available, Bastiaan mentioned some of those, and how effective are they? And then one important point, the one I'll focus on during this presentation: what is the role of policymakers? What should their role be, in this multi-stakeholder community, in securing the routing system? I think one conclusion from Bastiaan's presentation is that routing vulnerabilities have been understood for many years now, but they persist; he already went into why that is the case. Bastiaan, could we move to the next slide, please? We're still figuring out the tech issues. Perfect. So what are the challenges we see? There is a great overlap with Bastiaan's talk, which is good news, because otherwise I think we would need to have a word on this panel. First of all, the internet is a network of networks, so, as he mentioned, collective action is needed.
That means that one actor's actions depend on the actions of the other actors in the system, but this is also why we're all here: to have a multi-stakeholder approach to discussing these issues. The next issue, which Bastiaan has mentioned as well, is that this actually costs money. Implementation costs money, but if you implement routing security techniques you're not directly benefiting from it, and you still have a problem if there are other actors in the ecosystem that don't do so. That's the second issue. And then there is now a set of different solutions out there to make routing more secure, but companies need quite a layered approach to secure their routing, which can also increase the risk of mistakes and misconfigurations. So there is not one thing at the moment that actors can do to fix the problem once and for all. This is the background we're facing. Next slide, please. We looked a bit at what countries in the OECD are doing, and what we do see is that our countries are becoming more interested in routing security. This is not surprising, given that more of our lives are being digitally transformed; all of our economies are going digital, so the internet is increasingly seen as critical infrastructure that we need to protect. On the slide you see just a couple of examples. For example, the FCC launched an inquiry in February 2022 about internet routing vulnerabilities and followed up on this notice of inquiry together with CISA: they held a workshop in August of this year and published a blog post outlining recent actions, one of which concerns the federal government's own BGP security practices, basically cleaning up their routing techniques a bit, including RPKI. Then we have Sweden, where the regulator, PTS, undertook quite an extensive monitoring of BGP vulnerabilities.
They looked at how well their companies are doing, and what they found in this exercise, which took them a few years, is that broadly speaking it's fine, but they had some recommendations for certain actors to improve. The third action I would like to mention is the one by ENISA, the European Network and Information Security Agency, which published a report on seven steps to shore up the Border Gateway Protocol. If we go to the next slide, please. Now, if we take a step back and ask what our governments should and could do, we identified four key pillars in the report. One important point I would like to make here is that this is really not about measures that place undue regulatory burden on operators; that is certainly not what we intend, nor do we intend to centralize control of the routing system. So the first of the four policy actions we identified: we need to get better at the measurement of routing incidents and the collection of time series data. Just during the period when we were actively working on the report, we found that some data collection had been discontinued. We found that data collection is heavily dependent on really interested individuals in the community, individuals who have been doing this for a very long time and are really passionate about it. But, for example, we found one person who changed jobs, and then suddenly we have a problem. That is not ideal. What we also see, and we show the different measurement efforts on routing incidents in the report, is that they vary quite a bit in terms of results. So basically we had to explain to policymakers: this is what is available, but it might not always be consistent, and yes, there are different measurement approaches.
So really one big action for policymakers would be to fund and ensure continuous measurement of routing incidents, and to build up a time series that we can work with. The second important area is that governments could lead by example by implementing good routing practices and promoting the deployment of available techniques, especially when it comes to government-owned IP addresses and autonomous systems. Even though, as I mentioned, all of these techniques are currently a bit incomplete, because none of them fully addresses the issue, they do offer a lot of protection against routing incidents. My third point is that governments obviously have an important role in information sharing between different stakeholders, for example through formalized feedback groups. We could also think about using established systems, such as the CERTs that we have across many OECD members, to enhance information sharing. And finally, governments could define a common framework with industry on how to improve routing security. There are a lot of different options for how to do this, ranging from formalized partnerships, to regulatory monitoring of implemented techniques, to voluntary guidelines, or finally, and that is the strongest step, to more defined secondary legislation. On the next slide I have a couple of examples that I would like to share with you. The United States has been doing great work in promoting the measurement and collection of time series data, through the NIST RPKI monitor that tracks the global implementation of RPKI.
And then, of course, we know that the technical community, including the RIPE NCC and APNIC, provides very useful data, but we can see that in some cases it makes sense for a government to complement and support that data collection effort. Now, when it comes to leading by example, we have one very successful case in the Netherlands, and we will hear more from Annemiek in a minute, so I won't go into further detail. I did mention the US: the National Cybersecurity Strategy commits the government to implementing good routing practices and security in its own IP space, which is basically one of the OECD recommendations. Australia is getting more active: through the Australian Cyber Security Centre they have guidelines for gateways that provide information and recommended actions to improve security, and they also provide information on BGP route security, namely RPKI implementation. Then, in our host country Japan, the Ministry of Internal Affairs and Communications, the MIC, sets standards for the safety and reliability of information and telecommunication networks, which propose further information sharing among operators, especially during security incidents, to, one, determine the cause of the incidents and, two, consider appropriate countermeasures. And finally, when it comes to defining a common framework with industry, we have a couple of countries, such as Brazil and the United States, that have quite good multi-stakeholder collaboration with industry and other stakeholders. The Japanese guidelines I just mentioned are an example of voluntary guidelines, but we also have more legal frameworks: Switzerland, for example, has broad general guidelines for communication service providers that aim to establish a minimum level of security of communication infrastructure and services.
And Finland, zooming in on BGP, has legislation that stipulates upholding basic security of the BGP. So you can see the measures range from pure consultation and cooperation with different stakeholder groups to legal requirements; that is the range we are seeing at the moment. If we move to the next slide, please: the main takeaways of this presentation. We all know that routing vulnerabilities are happening. Not all have severe effects, but some can, and they can affect the availability, integrity and confidentiality of communication services, and this is something we don't want to happen. Only what gets measured gets improved; at the OECD we are quite evidence-driven, so we really need better data on routing incidents. We do see several ongoing efforts to improve routing security, but no single technique at the moment meets all of the challenges. And finally, governments have an increased interest in routing security, and we propose several actions in the report to really improve overall routing security. Thank you very much.

Bastiaan Goslings:
Yeah, thank you very much, Verena. Very insightful. And I'm very glad that you could share this perspective with us.

Katsuyasu Toyama:
Next is Katsuyasu Toyama from JPNAP and APIX, with probably a more technical perspective. Yeah, thank you very much. My name is Katsuyasu Toyama. I'm from the operations community, operating the JPNAP internet exchanges in Japan, and I'm also chairperson of APIX, the association of internet exchanges in the Asia-Pacific region. So today, from this standpoint, I'd like to show you the situation in Asia and the world. Okay, next, please. I remember well that approximately six years ago we had a big failure caused by a big tech company, Google; maybe you remember. They leaked their peer routes to an upstream provider. Next, please. So they leaked the prefixes, and the traffic was rerouted over a much worse path. At the time, the communication between content and eyeball networks suffered loss and delay, so the quality of communication degraded. Okay, please. These kinds of misoperations, but also hijackings, happen frequently. Next, please. As Bastiaan mentioned, routing insecurity has been a very important issue for a long time, and network operators have been trying to secure our internet for a long time. At first there was route filtering with the IRR: routing information that is not authorized or certified, but we used that data for a long time. That data, though, was sometimes not up to date, obsolete. So in the 2010s RPKI started, and now we are moving to RPKI. Please go to the next slide. So how widely is RPKI deployed? ROA and ROV were already mentioned, so please. This is data from APNIC Labs, published on the web, summarized by region, basically the RIR regions: Africa, North America, Asia and Oceania, Latin America, Europe and the Middle East. For each region you can see a ratio of deployed ROAs: the green part is the share of ROA-enabled address space.
As you can see, in Europe and the Middle East a large share of the address space, approximately 70%, is already covered by ROAs. But in Africa, and in the North America region, it is still less than 30%. So depending on the region, the deployment or penetration of RPKI, especially the ROA part, differs quite a bit. Okay, please go to the next slide. This is the same comparison, but based on route objects. You can see some ROA-invalid routes; I think these are not hijackings but misconfigurations or misregistrations of ROAs, that kind of thing. But still, Europe is very widely covered by ROAs. Okay, go, please. So why have networks registered or deployed ROAs? I believe that some global tier one providers and also the big tech companies recommend registering ROAs, and sometimes they say to the eyeball networks: if you do not register ROAs, in the future we will reject your routes. So, for example, Japanese operators all fear losing connectivity, not being able to reach such famous and popular services, so they gradually started to deploy and register ROAs. I think that is the risk: if they do not register ROAs, maybe they will lose customers, and that means they lose money. Okay, next, please. As I mentioned, I chair APIX, and there I did a survey about ROA and RPKI matters: why are ROAs used or not used in your country or economy? We got not many, but a few replies. In Bangladesh, as you can see, ROA coverage has reached approximately 90%. They said there was no big challenge; networks are doing it by themselves and it has become normal. That's a great thing, I think. In the Singapore case, the government recommended it a few years ago, but it is not regulated; it is only a recommendation.
And coverage there has become approximately 60%. In the Thailand case, they are at 43%. As for the obstacles, what prevents or discourages doing ROAs: if you look at Singapore's answer, some ISP operators do not have the necessary knowledge or skill set, and in Thailand they say the same kind of thing. So at the operational level, people should learn more and become convinced about doing RPKI. I think at the management level in Thailand they are allowed to do it, but the engineers do not have much knowledge or skill about it. Okay, so these are the operators' reactions. Please go to the next. Then, how about ROV? As far as I know, not so many networks have deployed ROV. This is feedback from operators in the APAC region. I asked two IXP friends in the APAC region, and the Bangladesh people replied that they have not deployed ROV at their internet exchange yet, but they are in the deployment phase and it will maybe be deployed by the end of this year. Singapore already has it, I guess this is at SCIS, and in the Thailand case they are deploying ROV. So some of the internet exchanges are doing ROV on their route servers. But they also say that sometimes the knowledge is not sufficient to do that, and especially there is some fear of losing valid routes. Please go to the next. This is feedback from Japanese operators on why they do not deploy ROV. Sometimes they fear that a route will mistakenly be judged invalid, which would be very dangerous; that is one of the reasons. The other reasons: software engineers are still needed, because RPKI software is basically open source and not provided as an appliance, so you need more software engineers. And there are not so many network engineers either.
For example, small ISPs or cable TV operators do not have enough engineers; an operator with only one engineer is not a rare case. In that case they are very busy, so there is no time to learn about RPKI. Okay, please go to the next. So what can an IXP do here? Of course, ROV at an internet exchange is a typical case: we have been doing ROV for a long time, and invalid routes are not announced to the peers; we discard them. As I mentioned, several internet exchanges in the APAC region are doing this kind of ROV. This reduces the burden on the member networks, so that is another good thing. Please go on to the next. Not only that, we are running an experimental project to facilitate ROV. As I mentioned, some operators say they do not have enough software engineers to deploy it and the several kinds of software involved. So some of them say: internet exchange people, please operate it, please offer a service for ROA cache servers. The ROA cache server software is open source, and it is sometimes very difficult to operate. So in Japan we are now trying to provide ROA cache servers at the IXPs, which can be used by IXP users. But there are some difficulties in how the ROA cache should be operated: the communication channel between the routers and the ROA cache is in general not encrypted. Of course, there are options to encrypt it and to exchange information on top of that, but for that part we think there is no good standard or good implementation yet; we don't have that. Okay, so that is a concern. Please go next. So as a conclusion of my talk, I would like to suggest that, for ROA deployment, some organization in a country should recommend it.
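The ROV filtering at a route server described here can be sketched as follows. This is heavily simplified: a real deployment runs a full RPKI validator and feeds the routers over the RTR protocol, and ROAs cover prefix ranges rather than exact strings; the prefixes, ASNs and exact-match lookup below are illustrative assumptions.

```python
# Validated ROA payloads, simplified to exact prefix -> authorized origin ASNs.
VALIDATED = {
    "203.0.113.0/24": {64500},
    "198.51.100.0/24": {64501},
}

def rov_state(prefix: str, origin: int) -> str:
    """Simplified ROV outcome for one announcement."""
    if prefix not in VALIDATED:
        return "not-found"
    return "valid" if origin in VALIDATED[prefix] else "invalid"

def filter_for_peers(received: list[tuple[str, int]]) -> list[tuple[str, int]]:
    """Route-server policy: keep valid and not-found routes, discard invalid
    ones, so peers never receive routes with a provably wrong origin."""
    return [(p, o) for (p, o) in received if rov_state(p, o) != "invalid"]

received = [
    ("203.0.113.0/24", 64500),   # valid
    ("198.51.100.0/24", 64666),  # invalid: hijacked origin
    ("192.0.2.0/24", 64502),     # not-found: no ROA, still accepted
]
print(filter_for_peers(received))
# [('203.0.113.0/24', 64500), ('192.0.2.0/24', 64502)]
```

Dropping invalids at the route server is what reduces the burden on member networks: peers that have not deployed ROV themselves still never see the hijacked route.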
Now, I like the approach of industry doing this by ourselves, but sometimes there are cost issues, or there are not many engineers, so some justification is needed at a higher level, and it is easier to persuade people if there is some kind of standard or recommendation at the country level. So an NIR, a regulator or a government could make that kind of recommendation; that is one good thing, I think. And of course, the RPKI specification and its implementations should be updated; as I mentioned, there are some missing parts, and those should be implemented. And of course, global routing security is a long and winding road, as you know. The first case I told you about, the route leak, would need ASPA, AS path validation; that is the next step, or the step after next. We have to do a lot of things, but we should go for that goal. Okay, thank you very much.
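The ASPA idea mentioned here as the next step can be hinted at with a deliberately simplified check. The actual IETF verification algorithm handles peering relationships, unregistered ASes and both path directions; everything below (the provider registrations and the path-direction assumption) is an illustrative sketch of only the core idea: each AS registers its providers, and a hop on an upstream path that is not an authorized provider of the AS before it makes the path suspect.

```python
# Hypothetical ASPA-style registrations: each AS lists its provider set.
PROVIDERS = {
    64500: {64501},  # AS64500's only provider is AS64501
    64501: {64502},  # AS64501's only provider is AS64502
}

def upstream_path_plausible(as_path: list[int]) -> bool:
    """Check a customer-to-provider path. `as_path` is ordered as in BGP,
    nearest AS first and the origin last. Returns False if any hop is not
    a registered provider of the AS below it."""
    climb = list(reversed(as_path))  # origin -> ... -> receiving side
    for customer, provider in zip(climb, climb[1:]):
        if provider not in PROVIDERS.get(customer, set()):
            return False
    return True

print(upstream_path_plausible([64502, 64501, 64500]))  # True: matches registrations
print(upstream_path_plausible([64502, 64666, 64500]))  # False: 64666 is not 64500's provider
```

In the Google-style route leak mentioned at the start of this talk, a path like the second one, where the leaked route travels through an AS that is not a registered provider, is exactly what ASPA-aware routers could detect and reject.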

Bastiaan Goslings:
Thank you, thank you very much. I think that was very insightful: practical experiences of what operators are doing and what you can see at your internet exchange. It very much adds to, and gives perspective on, what we are talking about here, the evidence-based approach, so to speak. I suggest we take any questions and comments after the last presentation, from Annemiek, who will be speaking on behalf of the Dutch Forum for Standardization. The floor is yours. I thought I’d put it on.

Katsuyasu Toyama:
Thank you very much, both of you, all of you.

Annemiek Toersen:
Thank you for the compliments, Verena, for the whole thing, and for the background, Katsuyasu. My name is Annemiek Toersen from, yeah, you could call it the Dutch, but we have to say the Netherlands Standardization Forum. If you put the next sheet on the screen, you can follow everything. What is the Netherlands Standardization Forum? It is a think tank with about 25 members, focused on interoperability, and it advises the Dutch government as a whole. The members are involved in this forum in a personal capacity, and they have backgrounds in government, but also in business and science. The main focus of the forum is a list of mandatory open standards. This is our core business: mandating the open standards we were talking about earlier. The scope of this list covers only public sector organizations; of course, private parties can also use it. Next slide, please. Why are we using those open standards? Well, as you all might know, because you are joining this workshop, open standards serve interoperability, exchanging data safely and reliably, and security, in order to be trustworthy to society. The internet should also be accessible to everyone: 25 percent of our population in the Netherlands is not able to use the internet or has no access to it, so we should be aware of that. And, of course, vendor neutrality: we should not be dependent on vendors, so open standards are very important in your services. Next slide, please. Our adoption strategy for internet security standards has three items, which I have on a slide here. First of all, we focus on obligations. I already mentioned the mandatory standards; we do that with a comply-or-explain list, and I will come back to that later. That is, of course, comply-or-explain for new investments.
Furthermore, we have public commitments with implementation deadlines; later on I will show you what that means, and RPKI is one of those. And we have obligations made mandatory by law: recently, on July 1st, the Dutch government approved a law mandating HTTPS and HSTS, an open security standard. The second item is monitoring, and the third is cooperating. I will first go deeper into the obligations. I already told you about the comply-or-explain list: the list contains about 40 open standards, and 15 of them are security standards. For the list, we gather experts in order to evaluate those standards, and the criteria are mentioned here. You have to go to the next slide, please. Okay. Okay, let’s see, if you go back to the former sheet, please. Yeah. And then we have the security standards, adoption strategy item number two, and the monitoring. Yeah, sorry that I mixed things up, I had a different slide deck, but that is not a problem; I will go from the sheets. The mandatory standards we had by law. The second was cooperation. That means we cooperate a lot with public and private companies, and we have contact with vendors. An example is that we wrote letters to Microsoft asking them to implement DANE. And not only us: after we wrote letters, other European countries followed. And so this spring they announced that they will implement DANE next year, in 2024. We also exchange a lot of knowledge, and that is nice, because in that way we promote adoption. Monitoring, the last item, is that we use the tooling of internet.nl. Apart from that, we review tenders: if you procure an ICT service in the government, you must ask for open standards. If you do not, then you have to explain in your annual report, for instance that it would be too expensive; that could be a serious reason.
If not, then you have to use them. The measurements are published twice a year and offered to the cabinet. Next slide, please. So, oh yes: if only one company or one organization uses open standards, it does not work effectively. You only get the advantage if more organizations use open standards; that is why we say a critical mass is needed. Another thing is that end users do not know about this and cannot verify it, so more transparency and awareness are needed, and the information asymmetry has to be reduced. Next one, please. This one I recognize, sorry, I apologize, but this is okay. I was talking about criteria. The most important are openness, added value, market support, and proportionality. Apart from that, open standards also come in different categories, for instance internet and security standards; RPKI is one of them, but we also have document and web standards, e-invoicing and administration, and accessibility, where WCAG, for instance, is a famous one. When governments invest or procure, they must choose the relevant standards from the list; otherwise they should have a serious reason, to be explained in the annual report, as I just mentioned. If you go to the next one, please. I already talked about the internet security standards; here you see a couple of them. In total there are about 15. The one we mostly recognize is HTTPS; I already mentioned that it is mandatory. And of course, here we are for RPKI, but there are more. Next sheet, please. The second item: we cooperate. Let me see, because my slides are different. Cooperation includes contacts with vendors: I already mentioned that we have contacts with large suppliers. Here you have vendors and hosters, like Cisco, Microsoft, Open-Xchange, and Google; Akamai is there too, well, you can read it yourself here. But we also maintain international contacts.
Like last week, we were represented in workshops on modern email security standards with European governments, where the internet.nl code is reused. Other countries take notice of that, like Australia, Brazil, and Denmark: they use internet.nl for their measurements. It is very nice that we can inspire other countries. So if you are interested in also using internet.nl, please send us a mail and we can help you in the future. Next sheet, please. I also mentioned monitoring, so measuring. There are two parts. On procurement: once a year we go through all the tenders done in the Netherlands and review them. During the review we see what is happening and how the use of internet security standards and other open standards is growing, and we offer this report to the cabinet. So governments will be spoken to: they are named, and they will see whether they are doing well or not. The second part is that we measure via internet.nl. We do that twice a year, specifically for the internet security standards. Can we go to the next sheet, please? Okay, here you see how internet.nl works. It is actually very easy: you enter your URL or your email domain, and you find out on one page how you are doing. We also have a Hall of Fame: if people have a 100 percent score, they can get a special t-shirt from us, and it is a collector’s item, actually. What we do is more naming than shaming, and that works out quite well. Next sheet, please. Yeah, it is so slow. And if you don’t ask for it, you don’t get it. Yeah, well, anyway. Next sheet, please. Okay, before any questions, I would also mention that the reason we have RPKI on our list of open standards is that the Ministry of Foreign Affairs was hijacked. We are, unfortunately, one of the examples Katsuyasu mentioned in his story.
We were hijacked, and that was a big problem, because in November 2014 journalists found out and we were in the newspapers in the Netherlands. Later, in 2015, it resulted in parliamentary questions, unfortunately. It could have been worse, actually, but by using RPKI in the future we can prevent this kind of disaster. It was actually discovered by accident. Because of the hijack, the NCSC in the Netherlands submitted RPKI to us in 2019 to be considered for the comply-or-explain list. We were able to implement it in the internet.nl measurement tooling in 2022, so we now also check whether all governments use RPKI. That means we will have good visibility of RPKI among governments in the future, and that will be nice. But yeah, if there are any further questions about it, I would love to answer them. Thank you very much, and excuse us for the wrong presentation.

Bastiaan Goslings:
Yeah, thank you. Thank you very much, Annemieke. And yeah, I also feel somewhat uncomfortable, and apologies: I think you did really well despite the fact that the latest version of the presentation somehow did not end up in the slide pack, but the message came across very clearly. You did really well. So I want to use the remaining time we have according to plan, 25 minutes, to open up the floor for anyone who would like to contribute here: ask questions, share ideas, comments. Let me first check if there is anyone online who would… No? Okay, thank you. In that case, in the room, is there anyone who would like to… The gentleman here, Olaf, please go ahead.

Olaf Kolkman:
Yeah, Olaf Kolkman, Internet Society. I would be remiss if I didn’t talk about what I’m going to talk about. I strongly align, very strongly align, with everything the panel said. Routing security is a top priority if we want to protect the core of the internet infrastructure. The routing space… I’m big on that screen. The routing space needs protecting. And Lauren said it, and actually all of you said it: it is a collective action problem. And that collective action problem comes with a lack of visibility. It is very difficult to see whether a participant in the routing system deploys routing security measures, to make that visible, and thereby create a little bit more value in the market. Thinking about this, and this has been a discussion within the technical community for a couple of years already, I think five or six now, the community came up with a set of norms called the Mutually Agreed Norms for Routing Security, MANRS. Basically, these are a number of measures that participants in the routing system agree to take. They differ: we have different programs, for ISPs, for CDNs, content distribution networks, for internet exchange points, and for vendors, and there are somewhat different requirements in each. With this program we try to create visibility, in general terms, so people can understand whether someone is a good player in the routing space. We also want to see whether that has impact, so we have an observatory, the MANRS Observatory, in which we track incidents, but also how the community adopts and adapts to certain technologies. And yes, the incident reports come from data sets that may or may not all be trustworthy. And by the way, not all incidents are actually caused by malice. What more do I want to say?
Yes: taking the next step on creating value, the community is now looking at what we call MANRS+, a working title, whereby we are trying to identify stronger controls than the ones now in the MANRS program, controls that can actually be audited. With an audit scheme you can also imagine a certification scheme, and with a certification scheme you might create higher value: if you are certified, you probably have a higher value in the market. We hope that by making the consciousness of routing security more visible, we also create value for the participants when they sell their goods and their connectivity services. That is what I wanted to add, because I think the MANRS community, which we host at the Internet Society, is actually trying to further the incentives. And I know JPNAP is a member. Thank you.

Bastiaan Goslings:
Thank you for that, Olaf. Interesting. I was very much aware of MANRS, so good that you had the opportunity to plug it. I think it is a very important initiative, and taking it to the next level with MANRS+ is good; it has been around for quite a while. And it is a good thing that those entities, organizations, and companies that join MANRS, in the different programs you have, ISPs, internet exchange points, CDNs, and you might even have more by now, actually want to commit to this, and want to live up to the spirit of it and also the practicalities of what you then need to comply with. But what about the rest, those who are not there? I do hope the participants also go out to take the message to their respective communities. I had a slide here with some of the factors limiting adoption of routing security, especially for those that operate networks: the technicalities of using these types of tools, as well as potentially the costs involved to implement them. The projects can be quite significant, especially if you have a large country-spanning network with a lot of equipment, et cetera. So I do hope that the MANRS participants also help spread the gospel and work with other networks that are not yet on board, to convince them: hey, it’s not that complex, or it’s not that expensive, or I can help you do this or that.

Olaf Kolkman:
Yeah, I think it is a community of ambassadors, so to speak. We actually have MANRS ambassadors that try to do that. And mind you, that doesn’t exclude the things that internet.nl does. And for instance, the procurement approach that is being taken. Those are all kinds of additional things that help boost routing security.

Bastiaan Goslings:
And that’s why we’re in the game. I fully agree. No, thanks for that. Sorry, is there someone online who wants to? Yeah. You have to do that first and then Professor Muller.

Moderator:
It’s more of a comment from Benjamin Broersma, and then a question. The domain name registry of the Netherlands, SIDN (.nl), has an incentive program that gives discounts if the domain owner uses certain open standards, for example DNSSEC, DANE, et cetera, to improve adoption. And the question is: did RIPE NCC look into giving discounts to IP space holders? Thank you, Benjamin. I saw that one coming. I know, it’s a very fair question.

Bastiaan Goslings:
And I think SIDN, the Dutch ccTLD operator for .nl, did really great work there, and it had an impact. It is quite a low-margin business being a registrar, and most of those organizations do hosting and other things as well; there is not a lot of money to be made. So any discount they can get, and if it is relatively trivial to implement DNSSEC, then they will go for it. I think the situation for the RIPE NCC is somewhat different. SIDN is not a member-based organization like us, so the management can more easily decide, let’s do this. Combined with the fact that SIDN actually receives part of the fee that registrants pay registrars: per .nl domain name, part of it goes to SIDN, so they have room to give a discount. The RIPE NCC is a member-based organization, and we don’t charge based on the resources that our members receive or use or whatever they do with them. Everyone pays a flat membership fee, so whether you are a small host or one of the very big international tech companies, everyone pays the same. But this might be an idea, and I’m thinking out loud now: that would have to be decided by the members themselves, right? They set the membership fee, and they could consider whether this would be an interesting thing in order to help move adoption forward. But it is not up to the RIPE NCC itself to decide upon that. It’s a fair question, though, so thanks for that, Benjamin.

Annemiek Toersen:
Thank you very much. I can also thank RIPE on behalf of Dutch government institutions, because RIPE sponsored courses on RPKI in order to promote adoption of this open standard. So you can also sponsor in another way: not by giving a discount, but by giving courses about RPKI. That might be a suggestion for other environments, other countries, other governments. Thank you.

Bastiaan Goslings:
I’m very happy to take that on board, and maybe even happier to say that that is actually what we are already doing. I mentioned it briefly: we do give these courses. I don’t think it is scalable to give the face-to-face ones away for free, right? You have an actual trainer who travels somewhere and spends a number of days with people, and has a backup, et cetera. We are a nonprofit organization, so this is not a moneymaker for us anyway; it is more about spreading the message and helping people learn about these technologies and actually use them. For BGP security and RPKI we also have free online trainings, available to anyone. So if, for whatever reason, and I can understand that in terms of budget, it is a challenge to travel to an actual training, then the online courses are available free of charge. That, I think, would at least be a good first step for people to be aware of. If you are not aware of them, then we need to do a bit more marketing to spread the message that this is available. And on the other hand, and I think what we did with the Dutch public officials, people from the Dutch government and agencies, is a good example: we are more than happy to have a dialogue and to see what we can provide in terms of tailor-made training, maybe also including a discount, no problem, depending on the number of people and the impact we can potentially have. Anyone who has questions about that, feel free to contact me and see what we can do here. Thanks for the suggestion. Sorry, go ahead.

Audience:
So, we’ve been studying the web PKI, and one of the big things that facilitated the expansion of PKI among web servers was automation: the creation of Let’s Encrypt, which offered free certificates based on what later became the ACME standard for automating issuance. Is it possible for some kind of automation to happen in the RPKI space, or is it so different that you can’t use that model?

Bastiaan Goslings:
I’m not aware of the technicalities of the example, the analogy that you make with the web PKI. From what I’ve seen, this is something we are not going to automate, in the sense that we are not going to create ROAs, the signed statements, on behalf of the resource holders; that is something they need to do themselves. But seeing the way this actually works within the portal, “trivial” is maybe too big a word, you need to know what you are doing, but…

Audience:
You’re speaking very fast. Could you slow down and speak a little louder, so people actually understand what you’re saying?

Bastiaan Goslings:
Yeah. Okay, well, I’m sorry for that. The way our members create these statements within the portal is so easy that it should not be an impediment for them to actually do it. This is not something that we can automate and do for them automatically; it needs to be triggered by the resource holders themselves to actually create these statements. I don’t know if that answers your question; that would be my initial response. Well, I guess the question is: what is the impediment then? That is what we are discussing here: what is the reason for someone not to do it? Either they perceive it to be too technically complex, or, on the validation part, using the tools, it is too expensive to configure routers or other equipment they need accordingly. That is what they automated in the web PKI: they thought it was too complicated to manage certificates, so they created the ACME protocol. I just don’t know whether that model is applicable here at all. You need to make the choice, and then the tools of course support the choice that you make. And that part, the tools, is sufficiently mature: the way we do it via the portal, but also the validating software, et cetera. I’m not a network engineer, but I imagine, also from talking to network engineers, that this should not be the challenge in itself; if you can run a network, you can do this. It is not a technical challenge in itself, but they need to make the choice themselves and do it; probably the bigger challenge is getting management to approve implementing this. I have some comments and questions from Zoom.
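To make the questioner’s ACME analogy concrete: an ACME client automates proving control of a name and then requesting and renewing certificates, and a hypothetical “ACME for RPKI” agent would do the same for ROAs. The sketch below shows only the bookkeeping step, deciding which ROAs are missing; every name in it is invented for illustration, since no such standardized API exists today.

```python
def roas_to_create(announced, existing_roas):
    """Return (prefix, origin_asn) pairs that this network announces
    but that are not yet covered by a ROA with the same origin."""
    covered = {(r["prefix"], r["asn"]) for r in existing_roas}
    return [pair for pair in announced if pair not in covered]

# A hypothetical ACME-style agent would then, analogously to Let's Encrypt:
#  1. prove to the RIR that it holds the address space (ACME: domain control),
#  2. request signed ROAs for the missing pairs       (ACME: issuance),
#  3. re-issue them before they expire                (ACME: renewal).
announced = [("192.0.2.0/24", 64512), ("198.51.100.0/24", 64512)]
existing = [{"prefix": "192.0.2.0/24", "asn": 64512}]
print(roas_to_create(announced, existing))  # [('198.51.100.0/24', 64512)]
```

As the panel notes, in practice the ROA must still be created by the resource holder, typically through the RIR’s portal or API, so the open question is whether the issuance and renewal steps could ever be standardized the way ACME standardized them for the web PKI.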

Moderator:
First, a comment from Benjamin Broersma: for information, MANRS, the Mutually Agreed Norms for Routing Security, is currently in the procedure for deciding whether it will be put on the Dutch comply-or-explain list. I have quite a few questions; I’ll ask them one by one, so please keep me on the list. First question, from Bart Knubben: could we get to a point where RPKI is the default? For example, that networks do not accept routes that are not covered by RPKI ROAs. Shall I read everything?

Bastiaan Goslings:
I don’t know, may I ask my neighbour here? Because he gave an interesting example: you referred to tier-one operators and large content providers demanding from their customers that they have ROAs for their resources. So maybe you can answer that question.

Katsuyasu Toyama:
Yep. So, from the operator side: enforcement of that kind, a kind of penalty, is now driving the deployment of RPKI, and in a good way, but operators are still anxious. At this point in time, ROA and ROV only certify and validate the origin; ASPA and path validation are two or more steps further on, so it is not yet a perfect solution. Some kind of enforcement is necessary, but that is still on the way. And some operators are still sceptical about spending money on routing security. That, I think, is the problem. Thank you.
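The ASPA-based path validation mentioned here can be illustrated with a deliberately simplified sketch: each AS publishes which ASes are its authorized providers, and a route learned via a customer-to-provider chain is suspect if some hop is not an authorized provider of the AS before it. (The actual IETF ASPA algorithm also handles downstream path segments and an explicit “unknown” outcome; this toy version treats a missing attestation as unverifiable and lets it pass.)

```python
def upstream_path_plausible(as_path, aspa):
    """Simplified ASPA-style check for a customer-to-provider path.
    as_path is ordered [nearest_neighbor, ..., origin]; aspa maps a
    customer ASN to the set of its authorized provider ASNs."""
    hops = list(reversed(as_path))  # walk from the origin outwards
    for customer, provider in zip(hops, hops[1:]):
        providers = aspa.get(customer)
        if providers is None:
            continue            # no attestation: cannot judge this hop
        if provider not in providers:
            return False        # unauthorized "provider": likely a route leak
    return True

# AS64512's only provider is AS64510, whose only provider is AS64500.
aspa = {64512: {64510}, 64510: {64500}}
print(upstream_path_plausible([64500, 64510, 64512], aspa))  # True
print(upstream_path_plausible([64666, 64512], aspa))         # False: AS64666 not authorized
```

This shows why the speaker calls ASPA “two or more steps forward”: unlike ROV, which checks only the rightmost (origin) AS, it constrains the shape of the whole path, which is what catches leaks.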

Moderator:
I have another question, from Lauren Crayon, addressed to the Forum for Standardization: could you provide further details regarding the tracking of governments you mentioned on internet.nl? Will this track ROA creation and/or ROV? Thanks.

Bastiaan Goslings:
Hello. Yeah, the last part I couldn’t understand, sorry. Can you repeat that for me? You were mentioning tracking of governments on internet.nl: would you track ROAs? ROAs? What is it? No, we don’t check… Sorry, can I interrupt? This is pretty technical, about the ROA or the ROV indeed. On internet.nl, we only check the ROAs, so the certificates, at the moment.

Audience:
Actually checking the ROV is more complicated. It could be done, but we would need separate IP space that actually has an invalid route in order to do the check. Currently we don’t do that, and we use the APNIC data to report this back to the government in the yearly reporting.

Bastiaan Goslings:
Okay, thank you very much.

Audience:
I have another question from Mark Knubben. On its EU Internet Standards Deployment Monitoring website, the EU is also monitoring the adoption rate of modern internet standards like RPKI and MANRS. What could or should the EU do more? What could or should we do more about measuring or monitoring the adoption rate of modern internet standards? Well, what we do is connect governments in order to do so. We inform them, we give workshops, but I’m not sure what he wants to know.

Annemiek Toersen:
We just measure, we offer it, we inform. I don’t know exactly what he really wants to know, so, sorry, I can’t answer the question.

Bastiaan Goslings:
All right. Can you repeat it again?

Moderator:
Yeah, sure. So, the EU is monitoring: there is an EU Internet Standards Deployment Monitoring website, where the EU is monitoring the adoption rates of modern internet standards like RPKI and MANRS. The question is: what could or should the EU do more?

Annemiek Toersen:
EU, so this question for everyone, I guess. Or not.

Bastiaan Goslings:
Sorry. Yeah, I mean, it’s hard for me to speak on behalf of the EU and I would get in trouble for doing that.

Verena Weber:
So, basically, I mean, I think you mentioned one point, right, like information sharing is good. I mean, I think if you could present what you guys are doing, you know, and have this more widely adopted. I mean, that is probably like another issue, but I don’t know the site well enough to basically say, you know, okay, how well is it working? How many governments are implementing it? So, from our report, I know that, you know, some governments are really quite active, but there are quite a few that are not, right? So, I think, you know, like training, raising awareness and stuff might be also an issue on the EU level, but again, you know, I don’t want to speak for the EU.

Annemiek Toersen:
We inspire two other countries. You already mentioned that, it was Australia and Brazil, and Denmark also uses the English version of internet.nl, so therefore we exchange our knowledge about that and we inspire them. Also, if there are any other countries also here available, if they want to help for that, we can answer, we can help you in assisting using the code. So, the English version of internet.nl is available. So, if anyone needs that here, we would like to know and help you.

Moderator:
Thank you. Rüdiger Volk has his hand up. He also wrote the question in the chat, but maybe I can ask the technical desk to unmute him so he can ask it himself, if that’s possible. Okay. Okay, I can just read it then. “My question is to Annemieke. Which RPKI standards are you listing in your advice? Are you including advice on standards that still await implementation, or would you still need to fill gaps?” I don’t understand it, but…

Annemiek Toersen:
If the evaluation is positive, then it goes onto the comply-or-explain list. And if not, it can go onto another list, which we call the recommended list. But Olaf would like to give an answer to that.

Olaf Kolkman:
While I’m not involved with the process, I do understand the question. I think the question is which specific RFCs were input to this process. I’m not quite sure whether you know that, Annemieke, but I’m sure that Bart or somebody else in the office would be able to answer it, and I’d be happy to forward that to Rüdiger; Bastiaan knows him too. I think Rüdiger has been unmuted in the meantime.

Bastiaan Goslings:
I don’t see the red microphone. Rüdiger, can you speak? Yeah, if you can hear me.

Audience:
Yes, well, okay. Annemieke, as Olaf was saying, there are quite a number of RFCs that define RPKI-based standards, and so far I have only been hearing about the establishment of ROAs and the use of origin validation. I think Bastiaan mentioned the upcoming ASPA, which will be another object in the RPKI. And in fact, the RPKI design from the very beginning was targeting something that has been defined as a full standard for quite a number of years: BGP-SEC, which is not yet implemented, and which actually needs significant action and resources to get implemented and deployed. And people are usually not talking about it, which is a problem.

Bastiaan Goslings:
So I’m done.

Annemiek Toersen:
Okay, thank you very much for your question, or remark. Most probably my colleague Benjamin could say more about that.

Audience:
Yes, so I put the reference documentation in the chat. Currently, our RPKI entry, which went into the procedure, has one RFC attached to it, RFC 6380, but it also lists three other RFCs as recommendations. Regarding BGP-SEC, we don’t do that yet, because it needs to be put into the procedure by somebody.

Bastiaan Goslings:
Is that clear for you?

Olaf Kolkman:
Can I ask a clarifying question on this? For the procedure, in order for a standard to be accepted, there is the expectation that there is a reasonable amount of deployment of that particular standard or specification, is there not?

Annemiek Toersen:
That’s correct, Olaf. You need not just one organization using it; you have to find companion, fellow organizations in order for a standard to be accepted. It should be used in practice and it should be supported.

Olaf Kolkman:
I think that answers Rüdiger’s question because BGP-SEC is not very much deployed at the moment.

Audience:
Okay. Hi, it seems I’m still here. Yes, I’m very much aware that BGP-SEC is in fact essentially not implemented. I have learned over the past few years that the public discussion of making use of RPKI and improving routing security tends to stress only the things that are actually available today. In many cases, the advocates argue that what is available now solves the old-world problems, while ignoring the work needed on the improvements that are still necessary. One of the really bad problems is that development work on the future deployment standards needs serious resources, in particular if we want to progress path security, and that is not happening. The question is: how could we actually work on getting those resources available and in place, with the proper people? Thanks.

Annemiek Toersen:
That’s a good question. In the Netherlands, we organized a workshop in cooperation with RIPE: we opened a course that only policymakers of the Dutch government could join. That’s what we did together. Perhaps you have an addition, another possibility.

Bastiaan Goslings:
Sorry, Rüdiger. I think you make interesting points. Just to confirm: I personally definitely did not want to make the point that we can focus only on the tools that are there and that they will solve all of our problems. Definitely not. When it comes to creating the ROAs and doing the validation, we still have quite a long way to go in terms of adoption, but you are absolutely making a good point that there is a lot more that needs to be achieved and that we need to build on. I want to thank everyone here. I’m sorry to stop this quite abruptly, but the next workshop is going to start soon and people need to prepare for it. I really want to thank everyone for joining; I hope this was interesting. With regard to the topic, if you want to follow up, and I assume I speak for all the panelists, come to us, approach us, and let’s see how we can move this forward together. I want to thank all of my panelists here; it was great to have all of you here and contributing. I’m really happy. Thank you again, also to the audience for being here and participating, also online. Thank you. Thank you.

Speaker statistics

Annemiek Toersen: speech speed 144 words per minute; speech length 2127 words; speech time 887 secs
Audience: speech speed 104 words per minute; speech length 659 words; speech time 380 secs
Bastiaan Goslings: speech speed 189 words per minute; speech length 5211 words; speech time 1655 secs
Katsuyasu Toyama: speech speed 145 words per minute; speech length 1955 words; speech time 809 secs
Moderator: speech speed 112 words per minute; speech length 351 words; speech time 189 secs
Olaf Kolkman: speech speed 145 words per minute; speech length 738 words; speech time 306 secs
Verena Weber: speech speed 178 words per minute; speech length 2536 words; speech time 854 secs

How to enhance participation and cooperation of CSOs in/with multistakeholder IG forums | IGF 2023 Open Forum #96


Full session report

Pavlina Ittelson

A new initiative, Civil Society Alliances for Digital Empowerment (CADE), led by the Diplo Foundation and funded by the European Commission, has been launched. This project aims to enhance civil society participation in international Internet governance (IG) processes, with a particular focus on the Global South. The goal is to address the challenges faced in IG, including the fragmentation of forums, lack of capacity and understanding of human rights impacts, and the requirements of technological development.

The initiative seeks to promote a more inclusive approach to Internet governance by involving civil society in a multi-stakeholder manner, allowing for the inclusion of diverse perspectives. By doing so, it can bring attention to issues such as women’s rights, language and culture aspects, and the rights of indigenous groups, which are currently underrepresented in Internet governance forums. The lack of diversity and inclusion within specialist standardization bodies is also highlighted as a concern; efforts should be made to address these disparities and ensure that a wider range of perspectives is considered in decision-making processes.

Capacity building, grassroots participation, and engagement guidance are identified as key areas requiring attention if civil society organizations are to contribute effectively to IG processes and advocate for their interests. Partnerships between civil society organizations from the Global North and Global South are encouraged to facilitate knowledge sharing and collaboration towards a more equitable and effective approach to Internet governance. Seemingly trivial technical fixes, such as replacing cumbersome anti-bot measures, are seen as potential remedies for accessibility and user-experience problems that limit participation at scale. Opportunities for public engagement are found in the environmental sector and youth rights, where public involvement can contribute to progress.
While collective input from civil society organizations is valuable, it is important to strike a balance between collective action and the preservation of diverse opinions and perspectives. Ensuring that diverse voices are included is essential for effective decision-making processes. Collaboration between organizations and network building can greatly benefit civil society by amplifying their impact and creating a stronger collective voice. Navigating participation procedures in various international bodies remains a challenge for civil society, but strategic engagement in specific forums can help achieve their goals. Long-term involvement and understanding trends are considered crucial for success. In conclusion, the CADE project aims to empower civil society and promote their active participation in international Internet governance. Addressing the challenges of fragmentation, capacity building, diversity, and inclusion is crucial to achieving a more inclusive and effective approach to Internet governance. Through partnerships, collaboration, and strategic engagement, civil society can play a significant role in shaping the Internet’s future.

Viktor Kapiyo

Civil society organizations from the Global South face various challenges in their work. One major challenge is the difficulty in accessing global processes due to financial barriers. These organizations often lack the necessary resources to participate in international meetings and forums, limiting their ability to have their voices heard on important issues. Additionally, the limited internet reach in the Global South further exacerbates this problem, hindering their ability to engage in online discussions and access relevant information. Furthermore, there are only a few organizations in the region that focus on internet governance, isolating the voices of civil society in these discussions.

However, there is a positive sentiment towards the need for awareness and capacity building among more people in the Global South. The Kenya School of Internet Governance has played a critical role in this regard, having trained nearly 5,000 individuals on internet governance in just six years. The idea is to make internet governance conversations accessible to everyone, acknowledging that anyone with an email is a stakeholder. This approach recognises the unique contexts of local organizations and aims to amplify their voices in global discussions.

Collaborative approaches and coalition-building are also considered crucial in the field of digital rights. By forming partnerships and working together, organizations can address the lack of linkages that previously existed across various digital rights organizations. This collaboration allows for collective problem-solving, knowledge exchange, capacity building, and resource leveraging. By combining the competencies of different organizations, particularly in terms of physical presence and understanding local dynamics, their collective impact is strengthened.

Additionally, partnering with organizations from the Global North can benefit those in the Global South. Global North organizations often have established relationships with policymakers and a better understanding of local dynamics, which facilitates the presentation of views by Global South organizations. Such partnerships also lead to capacity building and knowledge exchange. Global North organizations possess technical resources, such as ICT skills, which Global South organizations can leverage to enhance their work.

Funders also play a crucial role in strengthening civil society organizations. However, disjointed and fragmented funding can create problems for these organizations. Global South organizations often find themselves competing for the same funds to address similar problems, hindering collaboration. Moreover, funders’ goals do not always align with the specific needs of organizations in the Global South. Therefore, it is essential for funders to coordinate their goals and understand the dynamics of Global South organizations to provide effective support.

Building good relationships with legislators and demonstrating expertise is crucial for civil society organizations when trying to influence legislation. It is important to establish these relationships before submitting views and demonstrate the potential value that the organization can bring to the legislative process.

Working collaboratively with other civil society organizations and presenting a united front can add weight to arguments. This approach demonstrates strength in numbers and increases the impact of advocacy efforts.

Finally, being prepared for potential counterarguments and understanding the local context are crucial for civil society organizations. By being well-prepared, organizations can effectively respond to opposing arguments and address the specific concerns and needs of their local communities.

In conclusion, civil society organizations from the Global South face various challenges, including limited access to global processes, internet reach, and organizational capacity. However, there is a positive sentiment towards the need for awareness, capacity building, and collaboration. Partnering with organizations from the Global North, coordinating funders’ goals, building relationships with legislators, working collaboratively, and being prepared are essential strategies for strengthening civil society organizations and making a meaningful impact.

Marlena Wisniak

Stakeholder engagement is considered a vital component of policymaking at all levels. It emphasises the need for collaboration, iteration, and inclusivity, ensuring that stakeholders’ voices are heard and that they have influence over the decision-making process. However, there is a clear power imbalance between stakeholders, hindering inclusivity in policymaking. This asymmetry of power is evident in the unequal playing field between civil society, the private sector, and states, as well as regional disparities and safety issues that impede activist participation.

To address these challenges, transparency in stakeholder engagement is essential, ensuring public visibility of participation mechanisms and discussion outcomes. Proper resourcing, including financial contributions and trainings, is crucial for effective multi-stakeholder participation, particularly for marginalized groups and non-digital rights organizations.

Participation in standardization processes, especially in the field of artificial intelligence (AI), is complex due to its technical nature. Civil society’s limited representation in standardization bodies, with a disproportionate focus on digital rights organizations and AI expertise, hinders diversity and inclusivity in these processes.

Global North organizations can learn from global majority-based organizations to incorporate diverse perspectives. However, stakeholder engagement faces resistance in many countries outside the United States and Europe, requiring innovative advocacy strategies.

International governance mechanisms may have limited influence in the European Union (EU) and the United States (US) but greatly impact national regulations within the global majority. UNESCO guidelines and recommendations from entities like the United Nations (UN) shape national regulations, though enforcement can sometimes become problematic.

The diversity of civil society and the global majority, including different languages and cultural norms, should be considered in policymaking and stakeholder engagement processes.

In the context of internet governance, there is a need for more inclusive perspectives. Incorporating learnings from initiatives like the Digital Services Act (DSA) Human Rights Alliance is important in shaping international internet governance.

While inclusive informal networks exist, coordination among these networks proves challenging, impacting effective stakeholder engagement and collaboration.

Privileged and well-networked organizations have the advantage of exposure and influence, creating an unequal platform for stakeholder engagement. This inequality must be addressed to achieve inclusivity.

Organizations need to take responsibility for bringing in new voices and perspectives, such as offering panel spots to others or having someone accompany them to ensure representation.

Understanding the UN advocacy process is challenging, even with dedicated UN advocacy officers, hindering effective stakeholder engagement in international policymaking.

In conclusion, stakeholder engagement is crucial for effective policymaking; it requires addressing power imbalances, promoting transparency and accountability, providing resources and training, and embracing inclusivity. Other considerations include international governance mechanisms, language and cultural diversity, and coordination within informal networks. The call for organizations to bring in new voices persists, and understanding the UN advocacy process remains key to effective engagement in international policymaking.

Jovan Kurbalija

The analysis of the speakers’ statements reveals several noteworthy points regarding how seemingly trivial details affect people’s participation in large systems. One speaker highlights the influence of the experience of navigating UN corridors on individuals’ sense of belonging and of being part of the process, suggesting that a positive and inclusive navigation experience can enhance engagement with the system.

Another significant concern raised by the speakers is the accessibility of documents from various international bodies such as the UN, EU, and others. It is argued that the lack of interactivity in PDF formats can hinder the accessibility of these documents. The speakers suggest that the static nature of PDF formats may limit the ability of individuals to engage with the content effectively, potentially excluding certain groups from participating fully.

Furthermore, the design of UN Secretary General policy briefs is highlighted as not being online-friendly. It is suggested that the current design may pose challenges for users trying to access and engage with the content online. This aspect negatively affects the user experience and may impede people’s ability to participate in policy development processes.

The sentiment among the speakers towards the current state of information accessibility in policy development processes is largely negative. The mention of the European AI Act exemplifies this sentiment, as its complex display hinders consultation, potentially limiting effective engagement. However, there is a positive aspect as well. The analysis reveals an acknowledgement of the importance of encouraging alternative thinking and creativity in the current world context. This suggests that fostering diverse perspectives and innovative approaches can contribute to more inclusive and effective policy development.

In conclusion, the analysis highlights the importance of considering how seemingly trivial details shape people’s participation in large systems. It emphasizes the need for a positive and inclusive navigation experience in institutional settings such as the UN, and underscores the significance of improving the accessibility of documents from international bodies, including the design of policy briefs. The overall sentiment about the current accessibility of policy development processes is largely negative, but there is recognition of the value of alternative thinking and creativity. These insights provide valuable considerations for policymakers and institutions aiming to enhance public participation and effective governance.

Peter Marien

The need for strong participation of civil society in global digital governance is emphasised in the context of the EU. The EU strongly advocates for civil society involvement, believing that without it, societies tend to drift off in directions that are not aligned with a human-centric model. Similarly, multi-stakeholder discussions are seen as positive and necessary: it is argued that certain discussions should not be limited to intergovernmental talks alone.

However, there is a negative aspect to consider as well: the lack of knowledge and capacity within civil society organisations. Similar challenges are observed internally within the European Commission, where there is likewise a general lack of know-how when it comes to global digital governance. This lack of expertise and capacity hinders the effective participation of civil society in shaping digital governance policies.

The importance of inclusion of the Global South in digital dialogue is seen as a positive argument. It is noted that there is currently a gap in participation from the Global South in global digital governance. An initiative led by the Diplo Foundation known as the Civil Society Alliances for Digital Empowerment (CADE) project is mentioned as being instrumental in addressing this gap.

Furthermore, it is highlighted that more capacity and resources are needed for civil society to participate meaningfully in internet governance discussions. The fast-paced nature of the internet scene and the increased global attention to these issues, particularly due to the COVID-19 pandemic, create a demand for civil society to possess not just the know-how, but also the necessary resources for participation.

Peter Marien, a supporter of civil society’s meaningful participation in internet governance discussions, emphasises the importance of investment in capacity building and resource allocation. He argues that adopting new technologies, such as artificial intelligence (AI), requires resources for meaningful participation, especially since AI has become a fundamental topic in internet governance.

Initiating consultation processes with the general public is seen as beneficial and positive. Notably, a Nobel laureate journalist actively interviewed a random selection of individuals, which had a significant impact. Extensive consultation processes, which sometimes receive thousands to tens of thousands of inputs from society and are sometimes even analysed by AI, occur in EU legislation processes.

Additionally, it is argued that consultation processes should also involve non-experts, as citizens, despite lacking expertise, can have a notable impact. This underlines the value of diverse perspectives and the democratization of public engagement.

In terms of diplomacy and communication, it is acknowledged by Peter Marien that sensitivity should be maintained when dealing with such matters. This implies that diplomatic interactions require a tactful approach to foster constructive dialogue.

Peter recommends seeking dialogue and creating a trusted relationship with the involved government when it comes to government relations and policy-making. He refers to experiences in other countries, like Kenya, where open dialogue has been beneficial.

Finally, it is suggested that reaching out through other organizations that may have better access to open dialogue can be fruitful. By collaborating with strategic alliances and other organizations, civil society can effectively enter the conversation and contribute to the discourse on internet governance.

In conclusion, the discussion highlights the need for strong civil society participation and multi-stakeholder discussions; the lack of knowledge and capacity within civil society; the importance of including the Global South; the necessity of increased capacity and resources, which Peter Marien argues require real investment; the benefits of consultation processes that include the general public and non-experts; the need for sensitivity in diplomacy and communication; the value of building trusted relationships with governments; and the option of reaching out through other organizations that have better access to open dialogue. Together, these aspects contribute to shaping effective global digital governance and promoting a human-centric model.

Tereza Horejsova

The arguments and stances presented emphasize the importance of civil society in policy processes. Civil society organizations are viewed as crucial actors in policy development, as they often prioritize the interests of individuals. They provide multiple perspectives and efficient coordination mechanisms, enabling policy processes to benefit from a wide range of viewpoints.

The Internet Governance Forum (IGF) is highlighted as a significant platform for civil society to engage with others and influence relevant issues. Traditionally, the IGF has been dominated by civil society participation, and it offers a safe space for civil society to have a say in the governance of the internet.

Moreover, the contribution of all stakeholders, including the private sector and civil society, is considered vital for the development of digital policy. It is argued that it would be absurd to discuss digital policy without consulting these stakeholders, as their involvement ensures a more inclusive and comprehensive approach.

The Global Forum on Cyber Expertise (GFCE) is recognized for acknowledging the importance of multi-stakeholder cooperation in capacity building related to cyber security. The GFCE serves as a platform for actors involved in cyber security to come together and work collaboratively towards building expertise in this field.

Despite these positive aspects, there are concerns about the effectiveness of consulting civil society organizations in a superficial or “tick-the-box” manner. Civil society organizations have varied agendas and objectives, making it challenging to consult them effectively, and some policy fora are criticized for conducting pro-forma consultations that do not necessarily lead to meaningful outcomes. A lack of sufficient coordination of priorities among donors is also seen as a barrier to the effective involvement of civil society organizations in policy forums.

On a different note, Tereza Horejsova’s perspective is highlighted, as she believes in the importance of introducing new and inexperienced voices in panels. She encourages experimenting with panel compositions to achieve fresh perspectives and downplays the risks associated with having first-time panelists. This approach fosters inclusivity and contributes to reducing gender inequalities and promoting diversity in panel discussions.

In summary, the arguments and stances presented emphasize the crucial role that civil society organizations play in policy processes. They bring valuable inputs, diverse perspectives, and efficient coordination mechanisms. The Internet Governance Forum and the Global Forum on Cyber Expertise are identified as important platforms for civil society to engage in relevant discussions. However, there are concerns about the superficiality of consultations and the lack of sufficient coordination among donors. Additionally, Tereza Horejsova’s perspective highlights the need for inclusivity and fresh perspectives in panel compositions. These observations underscore the significance of multi-stakeholder cooperation and the active involvement of civil society in policy development processes.

Audience

Technical standards bodies can be complex and overwhelming, making it challenging for new participants to navigate and contribute effectively. These bodies consist of numerous working and study groups within each organization, leading to a fragmented landscape. It can be difficult for newcomers to determine which meetings to attend and how to make meaningful contributions. The dominance of the United States and Europe in these bodies further complicates the situation, potentially marginalising participants from other regions around the world.

However, there are strategies and structures that can support and facilitate smoother participation. One such approach is the provision of engagement strategies and support, such as financial assistance and assistance with visa processes. For instance, Article 19’s Global Digital Program offers support structures that take care of finances and visa processes for participants. They also provide one-on-one mentorship to help participants understand complex concepts and bounce ideas off after meetings. This support helps participants overcome logistical barriers and align the priorities of civil society organisations with the needs and objectives of technical standards bodies.

Cooperation and collaboration between the Global North and Global South in technical standards bodies should embrace an inclusive approach and avoid a white-saviorist mentality. There is much to learn from the Global South, and creative advocacy strategies can flourish outside of the US and Europe. By embracing a collaborative approach that respects the knowledge and expertise of all regions, technical standards bodies can become more equitable and representative.

International governance mechanisms have a significant impact on national regulation, particularly in the global majority. Entities like UNESCO recommendations can disproportionately influence national regulatory frameworks, potentially shaping policies that may not be in the best interests of countries in the global majority. Therefore, it is important for these governance mechanisms to involve a diverse range of voices and perspectives to ensure fair and inclusive decision-making processes.

It is crucial to acknowledge that civil society and the global majority are not monoliths. There is significant diversity within regions, and even within a single country like India, there are multiple languages and perspectives. Recognising this diversity strengthens the ability to address inequalities and promote inclusivity within technical standards bodies.

Capacity building is a process that takes time and cannot be achieved in a day. This is particularly evident in areas like climate change, where developing the necessary expertise and infrastructure to meet the goals outlined in global agreements like the Paris Agreement is a long-term endeavour. Recognising the gradual nature of capacity building is crucial to avoid unrealistic expectations and foster sustainable progress.

Citizen participation at the local level plays a crucial role in addressing global issues. As demonstrated during the Paris Agreement process, citizen assemblies can provide valuable input and insights. Encouraging citizen participation in different parts of the world can foster capacity building and contribute to global efforts to address pressing challenges.

Institutional capacity building is vital for civil societies. By strengthening their institutional structures, civil society organisations can better engage with governments and stakeholders to influence policy making. For example, the pending implementation of India’s Personal Data Protection Act and Digital India Act highlights the need for a strong front when dealing with governments. These regulations will impact the digital activities of 1.4 billion people, emphasising the importance of civil society organisations advocating for their interests.

When engaging with governments and policymakers, alternative methods of engagement beyond traditional consultation processes should be explored. Consultation processes prior to the introduction of a bill in India, for example, have proven to be more fruitful in generating meaningful engagement. Finding ways to engage directly with parliamentarians and government officials can lead to more effective and impactful involvement.

While common input by civil society organisations can be valuable, it is important to strike a balance between shared perspectives and maintaining a variety of opinions and perspectives. Overlooking the diversity of opinions within civil society organisations can limit the range of perspectives presented and potentially hinder inclusive decision-making processes.

Collaboration between donors is crucial for promoting synergies and avoiding duplication of efforts. Donors such as the European Union and the State Department are often working on similar projects but may not be collaborating effectively. Encouraging collaboration among donors can lead to more efficient and coordinated support for initiatives and maximise impact.

Creating a wider network of civil society organisations can foster sharing and collaboration. This approach allows organisations to build upon each other’s work, share resources, and learn from one another’s experiences. By creating a supportive network, civil society organisations can collectively address challenges and contribute to social progress.

Rules for interaction in vast spaces, such as international forums and technical standards bodies, need to be shared and clarified to facilitate effective engagement. Currently, the lack of definitive rules and different interaction styles across spaces can hinder meaningful communication and collaboration. Workshops to brief participants on interaction techniques and establish common ground for engagement are proposed as a possible solution.

In conclusion, navigating and contributing to technical standards bodies can be challenging due to their complex nature. However, supporting engagement strategies, fostering collaboration, and promoting inclusivity are essential for facilitating participation and ensuring the effective functioning of these bodies. Empowering civil society organisations, embracing diverse perspectives, and building strong institutional capacity are key components of this process. By working together, stakeholders can foster meaningful dialogue, create impactful policies, and drive positive change towards achieving the Sustainable Development Goals.

Session transcript

Pavlina Ittelson:
and the inclusiveness of the spaces here in the right room. I see that this is a time where everybody would rather take a nap with a jet lag than discuss serious topics, but we do have a wonderful panel and a good topic today, so I hope you will all be engaged. And we see this as an open forum, as a dialogue, as a learning experience, and we’d like to hear as much from you. And I see a lot of expertise in the room as from our panel. And let me kick us off then with introducing myself. So my name is Pavlina Ittelson, I’m the Executive Director of Diplo-US, and I’ll be moderating this session and also speaking on behalf of Diplo Foundation. We have Peter Marien, Team Leader of Digital Governance, Unit 5 in Science Technology Innovation at DG INTPA. We have Tereza, IGF MAG member, GFCE Outreach Manager, and esteemed board member of Diplo-US. Then we have Victor Kapiyo, member of the Board of Trustees of Kenya ICT Action Network. And Marlena Wisniak, Senior Advisor on Digital Rights at the European Center for Not-for-Profit Law, ECNL. We also have online participation, and my colleague Sita Lakshmi is the online moderator who will come in with the questions from online participants. So what are we going to be talking about? We will discuss in this session a new initiative that Diplo Foundation is a part of: the Civil Society Alliances for Digital Empowerment (CADE) project, led by Diplo Foundation and just funded by the European Commission, working with nine partners globally to increase the participation of civil society in international IG processes. We have partners at this table and some in the audience as well. There are Forus International, ECNL, CIPESA, KictaNET, Sarvodaya Fusion, Vision for Change, SMEX, Fundacion Carisma, and PICISOC. So quite a big group and quite a diversity of views on our end. So our aim is to discuss how to improve and enhance engagement of civil society organizations in multi-stakeholder forums.
What challenges does civil society face in meaningful engagement? And also, how can we bring the perspective of Global South civil society into international multi-stakeholder forums? Specifically, we will also talk about standardization forums at the ITU, IETF, and ICANN. So with this, I would ask Peter to start us off with a short introduction, please.

Peter Marien:
Thank you very much. Good afternoon, everybody, or maybe for people online, good morning or good evening. Thanks a lot for giving me the mic. I think I’m probably the least knowledgeable in the room on the topic. But anyway, I’m glad to kick it off. So maybe a bit of context, because as was mentioned, we are moving into a new project on this topic, and I’d just like to shape a little bit how we got to that point. So about three years ago, the topic of digital became a priority for the European Commission. And in my DG International Partnerships, we were looking at how to best approach this topic. Specifically, we’re looking at this topic at the global level, national level, regional level, and of course, at different thematic levels. And when we looked at this specific topic, we always look at this through our lens of a human-centric digital development. And that means that, of course, the human is centric, not the state, not the company. And also, as you’ve probably heard many times before, we are aiming at tackling the digital divide. So very soon we came upon this topic of global digital governance. What does this mean? This was quite new to us. This is also why I have to stress we are still in learning mode. And another aspect of our approach is that we wanted to look at this topic from a multilateral point of view, and also from a multi-stakeholder point of view. And this is key. You know, the EU is a very strong proponent of the multi-stakeholder approach, IGF processes and others. But we also looked at this through the multilateral approach, and when we started looking at this multilateral level, we noticed that even though everybody claims to be proponents of the multi-stakeholder model, maybe not all the actors in the multi-stakeholder prism are there. And so specifically, we thought that civil society could be a topic that we would like to see worked on.
So I was talking about digital, but of course, on the other hand, the EU is a strong proponent of civil society in general. So I won’t go too deep into that, but for us, it’s clear that in the absence of a strong participation of civil society, we tend to see, if you look at history, or even today, we tend to see societies drifting off in directions that are not aligned with our human-centric model, let’s say, or with our free democratic societies, okay. So this is a bit where we come from. So then the question was, okay, on global digital governance, who needs to be around the table? So we were looking at this with the EU and agencies, and we also noticed that there are actors out there which are pushing some of these discussions into the intergovernmental sphere. Also at this IGF, I think this is a topic that’s coming up quite a lot. And so we just want to emphasize again that we really would like to have certain discussions, global discussions, at the multi-stakeholder level. We’re the first proponent worldwide for the multilateral system, don’t get us wrong, but certain discussions should not just be intergovernmental. And so this is where we are. Now, when we looked at, okay, how shall we approach then the topic of civil society, I’m sure we will get back in more detail into that later, but just a few things. On the one hand, we noticed possibly a lack of know-how on the topic and a lack of capacity. Now, I have to say, we face the same issues internally. So this is not something that is only for civil society organizations. Even in our own DG, in our own unit, there are very few people that actually know this topic, and we actually barely have resources to cover this. So it’s not unique. That’s the first thing. Second was that even though civil society was present, it was maybe not at the volume, the amount, that we wanted. So I’ll not go too much into that now.
Okay, just to emphasize also that for us, in our perspective, when we talk about digital, we link this to the topic of rights, fundamental rights. So it is fundamental for any of our discussions that whatever we talk about, in the end, has to be aligned with our views on the rights-based approach, basically also aligned with the UN Charter of Human Rights. And that underpins many of the discussions that we can have afterwards. And then another thing is that we wanted to make sure that the Global South is involved, because when we looked at the capacity, there are actually actors in civil society that are very knowledgeable, that have a track record, but we saw gaps, maybe, in the Global South. So we wanted to work on that. Last thing, I’m almost finished. We positioned this program within an overall program where we work on digital and the multilateral, and in that context, just for information, we’re also working with the ITU and UNDP, so we’re funding them for actions on digital and multilateral matters. ITU, UNDP, also the OHCHR on rights, UNESCO, and we’re also working with the Tech Envoy. Of course, we’re working with EU member states, and then, as was mentioned, and this is quite new, so this is the first time for us on this specific topic, we now have two actions that will start soon, and one is indeed under the leadership of, well, chaired by Diplo, as was explained. So thank you so much, and I’ll pass the word back.

Pavlina Ittelson:
Thank you, Peter, and we certainly appreciate your insight on how the European Commission views the participation of civil society. I think it resonates a lot with what we see in the field as well, and we certainly agree, working with small and developing states, that the capacity problem is not only on the side of civil society; with the fragmentation of different forums and shifting things, it is an overall problem which needs addressing. With that, let’s go to Tereza, with her IGF MAG hat, to tell us more about how the Internet Governance Forum sees this problem.

Tereza Horejsova:
Thank you very much, Pavlina. Thanks also, Peter. Well, first of all, congratulations. Not only to the grantee, and a good one, with an excellent consortium, but also to you as the donor for recognizing the issue and the problem, and deciding to make it a priority, because it is important. I will start with a few reflections on why I feel, in general, inputs of civil society are essential in the various policy processes that we are dealing with. Of course, many of the deliberations that are happening here, as Pavlina has mentioned, actually impact the individual. And it’s often the civil society organizations that have the interests of individuals really close to their heart. But beyond this kind of existential reasoning, I feel that more and more we are moving towards some kind of a general culture of multi-stakeholderism, and that leading, hopefully, to more efficient coordination mechanisms. So basically, with a few exceptions of some very hard policy issues, it’s very difficult to think of a policy process that wouldn’t benefit from multiple perspectives, from multiple stakeholder groups, obviously including civil society, which can ultimately lead to better and more informed policymaking. And even though we are talking here about civil society, think also about other stakeholder groups: for instance, how absurd would it be to discuss digital policy developments without being in touch with the private sector? I feel that the same absurdity would stand for not consulting civil society. So, I’m wearing a couple of hats today. As Pavlina mentioned, one hat is ex-Diplo, current board member of Diplo US.
The second one, Pavlina said, IGF MAG member, but actually, as of this morning, that’s not the case because I have served my three years, yes, but I hope that it still allows me to provide some perspectives on the current forum. So the IGF is very traditionally dominated by civil society participation. It’s not the stakeholder group that the IGF is struggling with; there are actually other stakeholder groups where the struggle is more of an issue. So in this sense, really, I feel it is a safe space, and also the magic space in a way for civil society, allowing it to engage with others without the pressure of necessarily having a negotiation or a very concrete outcome in this regard. So that’s something that definitely should be protected, you know, and I’m really curious, once this IGF is over, how the chart of the various stakeholder representation will look. But as usual, I will expect very, very heavy domination of civil society. That’s also why civil society, and maybe rightly so, is very defensive about, yeah, how to put it, maybe some concerns about the future of the IGF. So you will really hear a lot of voices, you’re hearing them already and will hear them in the coming months even more, because at this moment, there is no equivalent to a space like the Internet Governance Forum where civil society has so many opportunities to express itself and in a way to also influence the discourse on the issues that are here. I’m happy then to go into more detail about how it actually works, what’s the role of the MAG in this sense, but that’s maybe if we have time. And the last hat I’m wearing, and allow me just a very, very short mention, I currently work with the Global Forum on Cyber Expertise, the GFCE. For those of you not familiar, we are actually also a platform, a membership organization, for various actors that are involved in capacity building issues, particularly related to cybersecurity.
And I think from the whole vision of how the GFCE wants to bring these actors together, it’s also one of the organizations that has grasped how crucial it is to have various actors from all across the stakeholder spectrum get together and exchange on issues related to cyber in particular. So I’ll probably stop here and look forward to the discussion.

Pavlina Ittelson:
All right, now I’ll turn to Marlena, because we did have a very extensive position from Tereza on the engagement of civil society. So from the position of ECNL and its advocacy work, could you bring your perspective on the topic, please?

Marlena Wisniak:
Sure, thanks so much, Pavlina. Hi, everyone. It’s great to be here today. I’m Marlena Wisniak. I lead the emerging tech and AI work at the European Center for Not-for-Profit Law (ECNL), a civic space and human rights organization based in Europe. And I live in San Francisco, so a lot of interaction with the tech companies, which, as was mentioned, is a stakeholder that is often missing. So just a couple of opening remarks, and we’ll dive deeper into the conversation. At ECNL, we really see stakeholder engagement as a cross-cutting and necessary component of any kind of policymaking at the national, regional, and global levels. And we really see it as a collaborative process, so it’s not just a one-time mechanism where we hear someone speak, but an iterative process where folks have different ways to intervene depending on where they are, what their capacity is, what their resources are, and, fundamentally, whether they can meaningfully influence the process. And that’s something that’s hard to quantify for now. It’s one thing to listen. It’s another thing to actually have our voices heard and implemented. And of course, beyond the IGF, policymaking and regulatory mechanisms in particular lie with the state, so decision-making rests with member states or governments, but I think there’s more evidence-based research that should be done to really see how impactful these consultations are. There’s also something like stakeholder fatigue, where we have lots of consultations. And to be clear, ECNL always pushes for multi-stakeholder participation, and we are deeply concerned also about the future of the IGF in particular, including where IGF 2024 may be, for those who have heard.
But all this to say that it’s not enough to just have multi-stakeholder processes; they have to be properly resourced, including not only financial support for participation, but also trainings, especially for organizations that aren’t digital rights organizations, so that they can meaningfully participate. And I’m thinking especially here of marginalized groups, like feminist, queer, racial justice, immigration, and refugee groups, so that their voices can also be heard in a way that is meaningful. And fundamentally, there is an asymmetry of power between stakeholders, beyond resource and financial access. I don’t think it’s a secret that there’s no level playing field between civil society, the private sector, and states. And these sectors are not a monolith either. So there is no such thing as one singular civil society or one private sector. There’s obviously a regional disparity. I’m very privileged to work for a European-based organization while living in San Francisco. So I can pay my way to come to Japan. I don’t even need a visa. I have US and EU citizenship. So pretty much the entire world is open to me in terms of travel. That’s not the same for most of my colleagues. I’m also generally much safer. That’s not the case for a lot of activists and human rights defenders around the world. So having in place mechanisms that enable safe participation is just as important as enabling participation in and of itself. And I will just end here. I know the rest of the session will continue on these topics. Stakeholder engagement comes hand in hand with transparency. And that means that while closed-door meetings are important and often necessary, there also needs to be public, transparent information about where to participate, how, what has been discussed, and what the outcomes are, to enable true accountability. Thank you.

Pavlina Ittelson:
Thank you, Marlena. We certainly hear you on running marathons on sprint muscles. Yes, we do face the same issues, where the engagement of civil society in international forums is a long-term engagement, long-term work, often decades. So the proper mechanisms need to be in place not only on the side of international organizations, but systematically within the civil society organizations and within the funding schemes as well. Now, you mentioned being from a Global North organization and having certain privileges. I would like to turn to Viktor, from the Global South, to tell us more about the challenges faced by civil society organizations in the Global South in their participation.

Viktor Kapiyo:
Yes, thank you very much. I am from KICTANet, a think tank based in Nairobi. We seek to promote the multi-stakeholder approach in the work that we do and to ensure that outcomes are actually meaningful for communities at the local level. We believe that the multi-stakeholder model is important, but it is not there in all countries. We are working in environments where the relationship between civil society and government is not always good, which can affect the feedback or the responses to civil society proposals, because civil society has sometimes been labeled as noisemakers, and therefore when you present views, it’s just those noisemakers. So as much as we have these challenges at the local level, I think it is more difficult in global processes, where you have the burden of getting the air ticket and the visa and all those many kilometers that you have to travel to make your point, which is often not the case for Global North organizations. In Africa, for example, where we come from, the challenges of financing and cost are a barrier to access. There are also issues around technical capacity. We are aware that for many organizations, internet reach hasn’t been much of a focus, and mainstream organizations have not focused a lot on digital rights or internet governance issues. As a result, you have very few organizations working in the internet governance space, and they cannot solve all the problems, as much as the problems are well known. So you have few organizations, which are not always adequately resourced with the capacity, whether human, financial or technical, to respond to all the challenges or emerging challenges in the region. And so you’re only able to deal with, what is it called, the tip of the iceberg, right? So maybe that’s what they’re able to deal with.
But the bigger problems sometimes remain unaddressed, yet we have an increasing population that is getting online across the continent. And that means that we need to be able to get more people on board to speak up for all these new communities that are joining. A new challenge is that previously, it was easy to have multi-stakeholder processes for the internet because there weren’t so many users. Now everybody is a user. So who is the stakeholder? Who should be in and who should be out? And how can we bring these conversations to everybody? Because everybody with an email account is a stakeholder, right? So we need to get people to actually recognize that they do have a voice and they should be able to speak up and engage. I think that realization has not come for many people because of these barriers. And I think the other aspect is that local organizations work in very, very unique contexts and different realities from those of the Global North. And these perspectives are not always possible to articulate in the spaces where the decisions are being made. For example, I’ve participated in OEWG sessions and other sessions where you are in the room but you don’t get to speak. Or you’re in the room but you are allocated only three minutes to say what you need to say, and that’s not always enough. We are grateful for hybrid participation because it has really opened up the space for participation, but not everybody is aware of the situation. And I think sometimes organizations in our countries are dealing with other problems, like internet connectivity, so most of the time they’re looking down, trying to connect rural communities and trying to deal with the digital rights challenges at the local level, and they forget the big picture: that there are actually global and regional processes that they need to pay attention to.
So you end up dealing with home problems, and when you hear that decisions are being made, you’re like, but how am I supposed to get there and get my voice heard? So that’s the challenge of the disconnect between the local work and the regional and global processes, and even just being able to deploy resources to keep up with the number of initiatives that are ongoing at the same time. Even here, you speak to some people in the corridors and there is confusion: which session? You’re one representative, and there are how many meetings at the IGF? And you are the one person who has come and you want to make an impact, so you may not have the capacity to attend or figure out where to make the most impact. And of course that’s a resourcing challenge. Of course, now people can participate virtually, but global and regional processes can still seem like a big mountain to overcome. But, not to paint an all-gloom picture, this region has changed from 10 years ago. We now have more people, we have more voices, and we have quite a number of local initiatives and organisations that are actually working in the internet governance space. Just to give you one example: at KICTANet, we have been running the Kenya School of Internet Governance (KeSIG) for the past six years, and we have trained almost 5,000 people on internet governance, and that is just in Kenya. We hope that with more people knowing what is happening, they can make at least a chip in that iceberg to make a difference. Thank you.

Pavlina Ittelson:
Thank you, Viktor. There are a lot of points I could reflect on, but from the position of Diplo Foundation, the three main issues we see are these. First, the fragmentation of forums where internet governance is discussed; there are more and more of them. Second, these forums are going into more and more detailed discussions, requiring more resources and capacity from both civil society organizations and governments, because when we work with small and developing states, they just say, I don’t have the capacity to go in and speak every other week somewhere on this. So we definitely hear that. The third point is the lack of capacity, of understanding of the human rights impacts of technological development and of standardization. And of course, within civil society there is also the technical community; there the gap is from the other side, on the human rights implications, where the understanding of the technology is there, but the understanding of what that technology, once it is launched, can affect is sometimes not there. So we do have this gap we’re trying to bridge, and that is through capacity building, outreach and advocacy: basically creating networks of civil society organizations in the Global South and connecting them to Global North organizations so they can support and help each other, because Global North organizations also do not possess all the knowledge in the world. And bringing those issues which are not currently at the internet governance forums but are relevant on a regional or local level, related, for example, to indigenous groups, to women’s rights, to certain cultural or language aspects of internet governance, to the global level. So with that, what can we do? We all agree it’s beneficial to have civil society at the table. We all agree there are challenges to that. And what can we do? So Peter, if you could maybe expand on that and explain to us where the European Commission stands.

Peter Marien:
Thank you. Thanks very much. Again, I’m not sure that I’m the best to respond to that, but I’ll give some thoughts. I think actually quite a few things have already been said, so I might be repeating a few things. Okay, how can we make sure that there’s more participation of civil society, if indeed it is needed? I think the first thing, well, not the first, but one of the main things, is that the capacity has to be there. I mean, to participate in the discussion, you have to have the know-how to participate. Or I’ll say it differently: if you want to participate meaningfully, because indeed we can all participate, but to participate meaningfully, you might need to have a little bit more knowledge on certain topics. It doesn’t mean we have to become ICT specialists. Far from that. Actually, not at all. But we need to know the broader implications: where does it fit in society, in the processes, what are the political implications, who do we need to contact to have an impact, and all these things. So I think it’s about capacity. Efforts are being made at all levels, but the scene is simply moving so fast that we’re probably running a little bit behind, especially maybe in the last few years. I’m just guessing, but I think also with the COVID situation, the shift of society online has been quite dramatic. And this has maybe increased the world’s attention to these issues. And so to deal with this more adequately, this capacity has to follow. So I think that’s the first thing. I hope I’m responding to part of the question. Second is then, of course, apart from the know-how, even if you have the know-how, and it was mentioned here, there’s still the question of resources. Now you can follow hybrid, you can participate, we can do so many things thanks to digital technology. Nevertheless, even if you don’t travel, you need resources, you need people dedicated to the work.
And then, if you want to participate in events, of course, you need other resources. Maybe to come back to one of the elements in this project that will start soon: the idea is that CSOs have a meaningful participation at the IGF, but also at other fora, such as the IETF, ICANN, and the ITU. Well, to meaningfully participate in ITU working groups takes time. And so you need resources for that. Simple. At the moment, it’s mainly companies and states, with backing, that are able to do that and therefore to influence. Same for standard setting and so on. And if you don’t have the resources, financially also, then this is difficult. So we try, with this project, but there are many other ways, maybe, to partially respond to that. And then maybe to also underline why the voices were insufficiently heard: just to reiterate that maybe there has been an acceleration of events in the last couple of years because of COVID, but also simply because of the adoption of technology. We’ve spoken also about connectivity access, but then of course there are the new technologies. AI is now the hot topic. Maybe another day it will be something else. I mean, it’s a hot topic, but it’s fundamental. I’m not diminishing it, but to be able to participate in the discussions on AI and the big principles, I think everybody can do that; but to really be on top of it, again, you need to be able to invest in those topics. Thank you.

Pavlina Ittelson:
No, I heard you, and part of the project is basically the involvement of civil society in different standardization fora, as you mentioned: ITU, ICANN, IETF. And as we all know in this room, not all of them are equal. Some of them are more open to civil society participation, more transparent, and have human rights principles set in place for standard setting. The ITU, and ITU-T in particular, is one where civil society, when the door is closed, go through the window, basically, and become part of government delegations, finding different ways to get in. One example is Consumers International, who do it through consumer rights. So there are ways to get engaged. It’s not easy, I would say, but there are ways to get engaged once you have the capacity to understand where the connections are and how to advocate for certain human rights and human-centric values. But let me turn to Marlena to explain more on this from the advocacy perspective again. Thank you.

Marlena Wisniak:
Sure, thanks. So at ECNL we participate in some of the standardization processes, mostly on AI, which is what my team focuses on. And like Peter said, it can be highly technical. Talking about things at the level of high-level principles, such as transparency and participation, that is easier. But then, what does transparency mean in practice? What do we want to be transparent on? When we talk about the standards, it becomes much more technical, and that’s something that we’ve seen as a struggle, especially in getting more orgs involved. My team has expertise on these issues, but it really is a small group of people. And by small, for example, at the EU level, we’re talking about something like 10 people, compared to hundreds, if not thousands, of representatives from the private sector. So you can see the difference in numbers quantitatively. And AI, for example, as you mentioned, Peter, is a hot topic now. We started working on it in 2020, and I’ve been focusing on it since 2017. For a long time, it was incredibly niche, so even more closed to civil society. And it was a handful, and I literally mean a handful, of people working on it. And this year, probably because of ChatGPT, that’s my non-empirically-tested theory, it has become a big topic at the policy level, right? I don’t know if anybody was at UNGA in New York; AI was the topic, folks. And UNGA is not even digital-focused. So how do we ride these waves of hotness, to piggyback on your word, Peter, while at the same time having meaningful participation? It is a struggle. Specifically, at ECNL we participate in the ISO standard 42005 on impact assessment of AI systems. You can hear already how nerdy that sounds. And we have expertise in human rights impact assessments, which I think is an expertise more broadly shared across civil society, but it is still highly technical.
Those working on the UNGPs, for example, should participate more. We’re also part of the IEEE, which I don’t even know what it stands for, International Electronic something, I can Google it, in an AI risk management subgroup on organizational governance. So all of this is very technical as well. At the EU level, there’s CEN-CENELEC, which are the standardization bodies. And actually, we managed to get the European Commission to, let me get this right, mandate that CEN-CENELEC include civil society, and they have actually allocated resources. So CEN-CENELEC, the standardization bodies, have allocated resources to include external stakeholders, and yet they still don’t do it. So even when they are required to do so by the Commission and when they get funds, they’re still very reticent. And when it actually does happen, it’s really, really hard to participate. Right now, to give you an example, it’s mostly ECNL, Article 19, and the Center for Democracy and Technology that participate, plus a handful of academics. So it’s really a closed space, and even when it pretends to be open, it’s not, even though people may have the best intentions. And I’d say one positive case study that I’ve seen is in the US: NIST, the National Institute of Standards and Technology, I think. They have been very inclusive in engaging stakeholders in the risk management framework, and have also made it a little bit more welcoming. But again, you still see a disproportionate participation of not only digital rights orgs, but those with expertise on AI specifically. So there’s always this push and pull between inclusiveness versus needed expertise. And at ECNL, we really try to train other CSOs, both digital rights and non-digital rights, on these issues. If anyone here in this room is interested, check out our learning center.
So a shameless plug: Google the ECNL Learning Center, where we have a couple of courses specifically for CSOs on AI, with some specific topics like surveillance technology or platforms, so that you can participate a little bit more. And this is just the technical expertise, in addition to, obviously, the challenges of visas and funding and everything I mentioned before.

Pavlina Ittelson:
Thank you, Marlena. I’m happy there are also some positive examples, because it did for a while sound like doom and gloom here. We do not have any questions online. But if there are any questions in the room, please feel free. We’ll also have a Q&A session at the end of this block of questions. I see a lot of familiar faces. So please don’t be shy to come to the microphone. I know it’s a little scary to go in the middle of the room sometimes and ask a question. But feel free to also share your experiences, if you have any, with these processes and how they relate. Any takers so far? OK.

Audience:
Oh, it’s a bit tall for me, sorry. Hi, my name’s Don. I’m with Article 19’s Global Digital Program. We’ve worked to support civil society organizations and individuals, primarily from the Global South, to participate in technical standards-setting bodies. So everything you’ve said really resonates. What we’ve also found useful when working with civil society organizations is being able to identify their priorities and then align them with what would be useful within technical standards bodies, because we recognize, as you said, it’s a whole fragmented landscape. And even within standards-developing organizations, there are just dozens and dozens of working groups and study groups within each one. And it can be quite overwhelming for organizations to jump into, say, the IETF and work out which of the 36 meetings they should be attending, like you said. So we’ve actually been working to develop engagement strategies to be able to support them. And we institutionalize a lot of the support structures. So in terms of the financial capacity, in terms of working on the visa processes, we kind of take care of that, so that organizations and individuals from the Global South don’t necessarily have to put in time and effort to focus on that, but can rather scope out the work and understand the concepts that are being brought up. And then we’ve also done a lot of one-on-one mentorship and engagement, because we also recognize that these standards bodies have a monoculture. It’s a very technical space, but it’s also very Europe- and US-dominant. And so being able to have someone go with you to these meetings really helps, because after these meetings, a lot of the time people are processing what they’ve learned, what they’ve heard. And so it helps to have someone to bounce ideas and thoughts with afterwards.
So I was just sharing some of our experience. Thank you.

Pavlina Ittelson:
That resonates very well with the project we’re about to start, where a big study was done for us by one of the partners, which found that standardization bodies are male-dominated, white-dominated, English-speaking, and inaccessible to any variety of opinions by design. So that’s why we’re talking about running the marathon: it needs to be slowly chipped away at, introducing the different opinions presented by civil society organizations. You also mentioned, and I know Viktor spoke about it, the participation of Global South organizations and how we can help them. Part of what we will be doing is a three-pronged approach: one part is capacity building, which Diplo will be responsible for. One part is advocacy, bringing in grassroots opinions and networking between the civil society organizations. And another part is helping those who are ready to engage with these forums, for example through guidance on how to write a briefing. How do you go in and present it? What is the best strategy, when involved in these processes, to achieve the goals of your organization? With that, I would like to turn back to Viktor and ask if you could elaborate a little on the benefits of building partnerships between organizations, both Global North and Global South, and Global South and Global South civil society organizations.

Viktor Kapiyo:
Thank you. I think there are various advantages to this collaboration. First, it addresses the lack of linkages that existed across the various digital rights organisations. We realised that coalition building is very important, and collaborative approaches even more so, because we are working to solve the same or similar problems, so having that alignment means we are able to share our concerns collectively and figure out the key emerging themes we want to address. I think that is an important win. Second, it takes advantage of the competencies of both organisations. If, for example, there is an event in Geneva where one of the Global North organisations is based, it is easier for them to cross the road and present the views or ask for a meeting than it is for me to come from Nairobi, get a visa and struggle through trams to make the point. Scaling becomes easy because of physical presence and understanding of the local dynamics. The Global North organisations which work closely with policymakers, whether at the UN in New York or the EU in Brussels, have built relationships in those various offices. Therefore, when we come there, it is easy to hear: this is the person to talk to, do not go around, or the office is on the fifth floor, room number five. Simple things such as that make a big difference, because when you arrive at the UN it is a big space, and having someone with that local understanding really helps. It also helps in terms of capacity building and knowledge exchange between organisations. Global North organisations may not understand 100% the context in Global South countries, so this discussion helps in sharing knowledge and exchanging ideas: what works for us, what works for you, and how we can then build on this.
We’re able to present our views even when access to our government officials is difficult at the national level, because you can’t access a minister easily. But sometimes, if you’re able to participate in a global forum, you can meet the delegation there and still articulate the issues. So there is the benefit of learning from the organizations that have done it before, in terms of knowing what to say and how to say it, and maximizing those two minutes you will perhaps have with that person before they dash into the next meeting, to say these are the three things we need you to do. There is also leveraging the other partnerships we know within the global circles, the influential governments and so on, and the power mapping that the Global North organizations have already done, having understood who the power brokers are. I can have my three issues and know who to take them to, as opposed to going there and wondering where to start among 200 member states. So that partnership is very useful; it’s an advantage. And of course, more importantly, the resources. They’re able to leverage the technical resources in terms of skill. For example, bodies like the IETF and the IEEE are very technical, and Global South organizations might not have a technical person, an ICT person. It is becoming an important component that human rights organizations have not just lawyers: you must have the techies and the engineers, because of some of the issues. I remember once a government official told me, we are discussing spectrum, and I’m going there saying, yes, we want to hear about human rights concerns with this spectrum, and I’m like, okay, so who do I bring to say these things and to break down what spectrum actually means for the ordinary citizen on the street?
So, leveraging a partnership, we were able to get engineers who had done it before and had best practice, who could review the submission we were making and give some perspective. There are unique benefits to those alliances if we’re able to build strong coalitions between Global South organizations, but also with Global North organizations, because there is a certain power we can have when we work collaboratively. And lastly, for funders: there is a significant problem when different funders are funding different things, all disjointed and fragmented, supporting the same organizations who are competing for the same basket of funds to attack the same problem. When funders are not coordinated, it also creates problems for civil society organizations, because we are competing for the same EU grant. So do I partner with you, or do I go solo? Does collaborating affect the opportunities, and are the donors’ goals aligned with the specific needs of the organizations they want to help collaborate? I think it is important that funders appreciate the dynamics of Global South organizations, the impact of the funding and how they model those funds, in terms of ease of access and how they can help build and strengthen civil society in the Global South to actually make a strong impact.

Pavlina Ittelson:
Thank you, Victor. And I’m having a stereo here, one side Teresa and the other side Marlena, who want to both chip in on what you said. So Teresa, you start and then I’ll give word to Marlena.

Tereza Horejsova:
Okay, thank you very much. What you raised is very, very important, Victor, and I will comment a little on the donor experience, because you’re very right. First of all, for civil society organizations to be involved in some of these policy fora, it has to be deliverable in a project, because otherwise there is no way to make it work. But at the same time, among donors there might not be sufficient coordination of priorities in this sense. So if I can encourage donors who are interested in more meaningful participation of civil society in various policy fora: it is really important to talk to other donors and try not to overwhelm civil society organizations, with each donor coming with a specific, narrow project perspective. It could ultimately be more impactful if this is coordinated, while I’m aware it’s not an easy and intuitive task. Another point I would like to elaborate on super quickly is what you, Marlena, actually raised: this bad habit of a tick-the-box approach to consulting civil society. You also mentioned it in your intervention. It is actually tough to consult civil society, because civil society is not a single entity you can say you have consulted; it is so many organizations with various agendas and objectives. So it’s certainly not easy. But, on the other hand, we sometimes slide into this tick-the-box, pro-forma consultation that doesn’t necessarily lead to anything, but you can say you’ve done it, yes? And, let’s be honest, some policy fora are more prone to this than others. I don’t have a solution, but raising the voice of civil society more, and having donors that have realized and identified this issue, is a strong start.
Thanks.

Marlena Wisniak:
Following up on Victor’s intervention, I wanted to bring another perspective from a Global North organization: cooperation and collaboration between Global North and Global South, or global majority-based, orgs isn’t only, and definitely should not take, a white-saviorist approach where we uplift global majority-based orgs; there is also a lot for Global North orgs to learn. There is so much resistance in many countries outside the US and Europe, with really creative advocacy strategies, and I and my team learn constantly. I think global coalitions are inherently better when there are diverse perspectives, and it can be pretty easy to become complacent, or even lazy to some extent, when you live in the US or Western Europe; you forget many of the fundamental issues of organizing and influencing policymakers. So that’s something to remember. Another aspect is that a lot of international governance mechanisms, even UNESCO recommendations for example, and I will not offend UN people here, don’t have as much influence in the EU and US; however, they do have a disproportionate impact on national regulation in the global majority. For example, the UNESCO guidelines for digital platforms are often portrayed as a DSA-esque version of the EU’s Digital Services Act. There are also the recommendations on ethical AI. The EU has its AI Act, so there is binding regulation in the EU; the US, bipartisan politics aside, also has its own regulation. However, a lot of the recommendations from these entities, including the UN, UNGA, or OHCHR, really can influence, and are often weaponized to enforce, problematic regulation at the national level around the world. So that’s something to consider when we have these coalitions. And then, fundamentally, I mentioned before that civil society is not a monolith. The global majority is definitely not one either. It’s multiple regions.
The regions themselves are not homogenous, and even in terms of languages. One individual country, India, has, I don’t even know how many languages. 60? How many?

Audience:
27 official languages.

Marlena Wisniak:
27? OK, I thought it was more than that. Official, yes. So plus the dialects, right? So there are differences of languages, social norms and economies, between and within countries. That’s something to consider. And I’ll give an example, something we’ve been working on with Access Now, the Electronic Frontier Foundation, and a lot of other organizations, including, I think, Article 19: the DSA Human Rights Alliance, which involves global majority-based orgs in the implementation of the DSA, the leading EU regulation on digital platforms. What we’re trying to do with Diplo and the orgs represented here is really to bring learnings from that experience, and others, into international governance of the internet.

Pavlina Ittelson:
Thank you, Marlena. We have questions in the room, so. Jovan, please start, and then the lady in the blue dress. After you, you’re going, after you.

Jovan Kurbalija:
Okay, oh, you’re next, okay. Just one point, building on what Victor said, about what we can call the power of triviality. We very often discuss the big system, but sometimes it is about how to navigate the UN corridors in Geneva and New York, or which office to knock on. I still remember, when we had the programme for colleagues and students from developing countries, even questions like where you leave your jacket during the winter. It sounds completely trivial, but it affects the feeling of being part of the process. And I will give you a few examples we have recently been focusing on. PDF is the dominant format of the UN, the European Union, and other actors. With PDF, you cannot do a lot if you want to interact or display it nicely. We took the European AI Act, and we were in Brussels discussing with different negotiators. If you try to consult the EU AI Act draft, which is now being negotiated between the Commission, the Council, and the Parliament, you are simply lost, on a very trivial level: not on knowledge of AI, but on displaying three columns, four columns, in one 300-page PDF. What we did was organize it in a simple way so that you can at least read it, and then obviously bring expertise to analyse it and provide context. The same applies to the UN Secretary-General’s policy briefs. If you read these policy briefs, and you can consult them on Digital Watch, they are done by a designer who wanted a nice printed publication. You can see the mindset: for a meeting like the IGF, they will distribute a nice publication, as all of us do. But in reality, people will consult it online, on their mobiles. Therefore this power of triviality, which ultimately shapes people’s participation, is a big thing, and we plan to focus on these things during this project. One element is reporting from the IGF. You have here the paper; this session will be reported by a mix of AI and experts.
You have yesterday’s sessions with the main points. And why is this important for civil society? Because you simply have limited resources to navigate such a flow of information and sessions, in Kyoto but also over the last 18 or 19 years. Frankly, some issues, such as the digital divide, have basically been rehearsed every year, with more or less similar narratives. This is, again, one small thing. If a small NGO, like the one from Brazil with two or three people that we discussed two days ago, wants to know exactly what was said about child protection during the IGF, not necessarily AI or other issues, they should have access, and it’s not as easy as it looks. Therefore, with this reporting, which you can consult, we are trying to use a mix of AI, Diplo’s AI system, and our experts, mainly to help small developing countries, to bring them substantively into the discussion, but also small civil society and marginalized groups. Those are a few points I invite all of us here to reflect on, this power of triviality, and there are tools for it. And also to create a space, I call it AI hallucination or human hallucination, to think; I won’t go too far with how to hallucinate, but to create a space for a bit of alternative thinking. My criticism of all actors, and my own fault, is that we sometimes become domesticated in global fora. We basically start integrating the thinking of the IGF and governments, which is very human: you interact and you develop the thinking. And in the time we are facing, you open Al Jazeera, CNN, the news, and you see the world is not in the best shape. We need alternative thinking and creative inputs. I think this is the role for civil society and academia, and, I’m sorry to say, we are not contributing to this. Those are just a few points which influence Diplo’s approach.
And we hope that, together with the partners in this project, we will work on this power of triviality, making things accessible to people, and also try to create a space where we can think a bit out of the box for the benefit of governments, the public, and the global public good. Well, should I ask a question?

Pavlina Ittelson:
No. It’s OK. We’ll have a lady in a blue dress, and then the gentleman in a blue shirt, and my wish, and four questions coming up.

Audience:
Thank you for the interesting thoughts from different perspectives and experiences. I’d like to ask a question regarding capacity building. I am Emi from Japan, working at a private company providing OSINT-related services, giving more reliable information mainly to researchers. I think capacity building cannot be done in one day. For example, I’m familiar with the climate change process: around the Paris Agreement there were citizen assemblies, citizen congresses, with randomly sampled citizens without any expertise discussing the very important issue of climate change. That didn’t really solve all the problems, but it moved things forward somehow. So how would you think about such a process regarding this issue, and also the possibilities, threats or limitations of such citizen participation at a very local level? My personal hope is to have such congresses in different areas, in different parts of the world, so that people can build capacity and also participate at the global level. I’d like to know your opinions, thank you.

Pavlina Ittelson:
Any takers? It should be me on capacity building, but Peter, go first.

Peter Marien:
I’ll try, but I will pass to you afterwards for sure. First of all, thanks a lot. I’ll respond to that and maybe give some feedback on other points. I don’t know if this is planned or has been done, but I do think it’s interesting. You make me think of one of the speakers on one of the other panels recently, I forgot her name, the Nobel laureate journalist, who said that she had interviewed, if I understand correctly, quite randomly, a few hundred or a few thousand people: just citizens, as you say, who are not experts, and who also have their impact. I think that’s interesting as part of the consultation process. I’m not an expert in all these consultation processes, but I think there can be space for that; I’m sure academics will have more to say about it. I do know that in EU processes, when it comes to legislation, there are consultation processes, quite large ones, but maybe not so intimate: they are online and you can share your opinions. On some of these consultations we have thousands, if not tens of thousands, of inputs from society; some of them are even analyzed partially by AI, because it’s not possible to read every one of them. So I think there is some interest in that. But I think specifically it’s more for our partners in the CSO organizations, who might give a bit more, yeah.

Pavlina Ittelson:
Yes, so that makes me think of two things. One is capacity building. What we’re talking about is institutional capacity building. When we talk about increasing engagement, in a way we’re training people, but we are increasing the institutional capacity of the civil society organization. A person may come and go, but you need to have the knowledge and expertise, whether technical or policy, within the organization to engage. And we have ways to do that, which I won’t go into because we’re running out of time. But going back to consulting regular people in the process, and also Jovan’s point on the basic triviality of things: I remember one example on accessibility, where a blind person was going through an accessible government website to get their ID. They went through everything, got to the end, and were supposed to tick the boxes with the bicycles to prove they’re not a robot. It’s a trivial thing which can be replaced by a small technical solution, but because that trivial thing is not raised in a forum and not addressed, it causes issues of accessibility on a wider scale for certain groups of people. So it is a two-way thing. Now, the fora we were speaking about, internet governance and standardization, are far less open to direct engagement with the public or with individuals than, for example, the environmental sector, or the area of youth and the rights of youth to their future, which is much more open. There was also a court case in Germany which established the right of youth to their future in relation to environmental rights, so the movement there is a little different; I think the burning planet might have something to do with it, a little more urgency. But we do have three more questions in the room and I don’t want to forget about them. The gentleman in the blue shirt, okay, and then over there.

Audience:
Hi, Arjun Adrian D’Souza, Legal Counsel at the Software Freedom Law Center. I’m here with my colleague. We have spoken about capacity building as well, and we are a civil society organization based out of New Delhi, India. India is presently at an inflection point, a very opportune time: we’ve had a personal data protection act enacted, which is yet to come into force, and a Digital India Act which will regulate platforms. So the stakes are high; we are going to be dealing with 1.4 billion people who will be regulated through these provisions. As a civil society organization, we’ve seen the pushback in terms of engagement. So my question is two-fold. Firstly, in terms of putting forth a consolidated front on behalf of civil societies and think tanks: any strategies, any advice on that? And secondly, following the gentleman’s question, any alternative ways of engaging with parliamentarians and government? This may be a jurisdiction-specific issue for India, but the consultation process prior to the introduction of a bill is much more fruitful than the one which usually comes after it. So I just wanted your views on that.

Pavlina Ittelson:
Anyone? Victor go and then I’ll go.

Viktor Kapiyo:
Yes, that’s very interesting. We face similar situations in Kenya with consultation. Some of the strategies that have worked for us: one is to build a relationship with the legislators. It’s important to have that relationship before you submit your views, so that they know this body exists and what you do. The second is to demonstrate the expertise being brought to the table. I know a higher standard is usually placed on civil society organizations, but demonstrating that you actually have value to add will give weight to the comments you present. The third is to work collaboratively with other civil society organizations. Sometimes it helps to have 20 sign-ons as opposed to one, so identifying the common issues across the various groups is essential; in as much as there could be variances in positions, there should be some key things that everybody wants or feels are important to articulate, and you can coalesce around those. Lastly, be ready for the marathon: you need to go to the gym and work out, so you have your arguments and counter-arguments primed and are ready for the views of other stakeholders. Not everybody will agree with your positions; just because you are civil society doesn’t mean you’re always right or that your position will be taken. So it’s important to do the groundwork in terms of understanding the arguments, potential counter-arguments and other scenarios that the other stakeholders could bring forth, as well as the push and pull factors and drivers: what is driving this, who is driving this, and understanding that local context, which you probably do.
And then it helps that when you get to the floor and a question is asked, you understand it, you’re able to anticipate it, and you have a very good response to any scenario, because at least you have prepared for it, as opposed to just walking in and thinking it’s going to be a smooth ride. Most of the time it’s not, at least in our experience. Thank you.

Pavlina Ittelson:
Peter, did you wanna reflect on that?

Peter Marien:
Yes. Well, I should say I want to be a bit sensitive in what I say, because of course your question is to CSOs and I’m speaking in the name of the Commission. But from experience in some other countries, we have had governments, including Kenya for example, knocking on our door to discuss exactly this kind of topic in an open spirit, where we have then engaged some of our experts, for example our Directorate-General for Justice, to go into dialogue, and they even came on a visit. So I think if there is a way of creating this trusted relationship and this willingness for open dialogue, and it is of course difficult for you to create this, but if the right people or the right entry can somehow be found, then maybe that’s helpful. But here again, I want to be really careful not to give any wrong message: you might also knock on the door of other organizations that you think might have the entry. It can be a national organization or an international organization that is locally present with an office and might have better channels of access, and through that way maybe open the dialogue.

Pavlina Ittelson:
Yes, we have the last question. And while you’re going to the microphone, I’ll quickly reflect on the calls for common inputs by CSOs in different processes. I would like to caution that there needs to be a balance between one input by several organizations and leaving the variety of opinions and perspectives behind, because you are eliminating certain aspects in the process. That is something you need to be aware of as a CSO: what is it that you are not presenting when you are trying to make a certain point stronger? So it is a balancing exercise, at least in our opinion. Please, go ahead.

Audience:
Hi, so this is less of a question and more of a comment that speaks to everything we’ve discussed here. My name is Ariel Maguid; I work with Internews. Congratulations on your proposal. We also just won a very similar proposal from the Department of State, working in South Asia, so I’ll be implementing that as well, and would love to collaborate with you on bringing our civil society organizations together. But also, to speak to your topic: one part of our activities is creating an online space where all of our civil society and human rights groups can come together and work around what they’re learning, and at the end we bring them to sessions such as the IGF, where they can really advocate. I had one other thought as well, speaking to how donors are not collaborating with each other: you have the EU, we have the State Department, very similar projects. Obviously the work needs to be done everywhere, but it would be great to bring Internews into the reflection as well.

Pavlina Ittelson:
Absolutely, thank you. And I would love to talk to you in the corridor afterwards. It is our intention within this project to harness what is already in place and what is going on, to create a wider network of CSOs, build on each other, and cross-pollinate whatever is in the space currently and whatever will be in the space in the future, because we believe, beyond the project itself, that this is something we will be dedicating more and more time to as well. And with this, you can find us all individually if you want to talk more, but I believe we can close this session, if that’s okay with everybody, or are there more questions?

Audience:
Okay, sure, go ahead. I just wanted to share something, to build on what Victor and Marlena said. I come from SFLC India, and as you mentioned, we have a lot of languages. One problem we face is that my legal team and my technology team come up with these brilliant blog posts or write-ups to share information or create awareness, but sometimes the language is so complex that the people they’re trying to reach find it difficult. So one of the benefits of local partnerships is that when I meet other CSOs at these kinds of events, I realize that all of us are facing the same issue, especially in the comms and PR teams. So I just wanted to say thank you for bringing that up, because these kinds of local partnerships help bring to light issues that sometimes go unnoticed. Thank you.

Pavlina Ittelson:
No, thank you. Thank you for that point, and it’s another one. OK. Yes, yes, we do. We do have, I believe, still eight minutes to go, so we can go ahead.

Audience:
Perfect. I’m Camila, from EDAC in Brazil. And Pavlina, you were mentioning that beyond the substantive issues of how to participate in these spaces, we have formal rules for interacting in them, and this is so hard. We don’t have a book on it: in the UN you have one way to interact, in other spaces you have other ways. So how can we share more information about that? You mentioned, for example, that we could make a workshop on how to write a briefing for all these spaces. But how can we share this kind of knowledge? Thank you.

Pavlina Ittelson:
No, and you’re absolutely right that each of the processes, each of the open-ended working groups, has a different way of engaging civil society. We have seen it, for example, in the Open-Ended Working Group on cyber: from the first session to the second, the temperature in the room changed, and all of a sudden civil society was not automatically included in the conversation. So it is a challenge from the procedural side to see where the ways to get engaged are, and that is part of what we will be doing in different fora. For example, within the standardization processes, and Marlena, I know you do a lot of work there, so feel free to chime in later: there are, as I mentioned, certain standardization bodies which are more open to civil society participation beyond the technical community, and they have human rights principles and human-centric standard-making processes in place. There are others which are not open, and certain ones which are simply multilateral, and that is the world we live in. It would be ideal to have one way within, for example, the UN or any other body: this is how you go ahead and put in your input. Another part, once you overcome the challenge of how you are going to participate and put across your points, is whether those points are, as Teresa mentioned, a tick in the box or are taken seriously. That is a question of building partnerships and, at least in my opinion, of working in that space for a long time: knowing who does what, knowing the trends, knowing what’s coming up, where the place to be is, and in which forum you need to be strategically engaged to achieve your civil society goals. Not an easy one, definitely not. Marlena, you did some work on that, so maybe some lessons learned?

Marlena Wisniak:
I was going to bring up your last point: unfortunately, there are a lot of informal networks. I know many folks in this room, right? So that’s both a good thing and a bad thing. The same people on panels is generally bad. One good thing is that we have tight networks within civil society, so we’re able to text each other and have monthly meetups. There are also a lot of coalitions, so it’s hard to know who does what and how to be coordinated and aligned; that has always been a struggle. Our colleague from Internews mentioned that there’s a similar initiative, and we work very closely with Internews, we’re even part of the global internet freedom work, I think, and we didn’t even know about this, right? So it’s incredibly difficult, and also unfortunate, because it can become a cool kids’ club: if you’re in it, you have access, and generally, to have these connections you are already privileged and well networked, and then it gives you an even bigger boost to have the platform. So I think it’s the responsibility of the orgs in this room to actually bring in new voices. Something I’ve been experimenting with: if I’m invited to a panel, I either decline and give my spot to someone else, or I say yes on the condition that the other person comes too. I didn’t do that at this panel, apologies, or you’re welcome. So yes, bringing in more people. But unfortunately it is informal, and as someone mentioned, some of the orgs are better at inclusion than others. The UN is really difficult. I do a lot of UN advocacy and I don’t really understand it. We have a UN advocacy officer at our organization, and we’re very lucky to have him; he knows a lot on the procedural side, but he doesn’t have as much of the substantive expertise, and my team is the opposite.
We know AI and human rights, but we don't really know where to intervene or when, unless it's the very well-known venues; everybody knows the IGF, right? But for the smaller ones, it's hard to know. So, sorry, it's not a satisfactory answer. Basically: make friends, be social, and share your contacts and your privilege.

Tereza Horejsova:
Maybe I can quickly share a story from earlier this morning, on encouraging everybody to be more experimental in panel composition. I was on a panel, and I probably had some calming effect on two other women speaking on the same panel. They said, "This is my first time doing a panel; this is going to be a disaster." And I told them: no, first of all, it's not going to be a disaster, and you belong on this panel much more than I do, for instance. And by the way, they did great. So don't be afraid to experiment when you put panels together. It might seem easier to go with people you know or have worked with before, but you might actually get a much fresher perspective if you bring in new people.

Pavlina Ittelson:
Thank you. And on that note: there were three people on this panel whom I didn't know before; I only knew Tereza. So with this, I will invite you to get to know each other, and we'll close up. Thank you very much, everybody. Thank you.

Audience

Speech speed

177 words per minute

Speech length

1349 words

Speech time

458 secs

Jovan Kurbalija

Speech speed

175 words per minute

Speech length

861 words

Speech time

296 secs

Marlena Wisniak

Speech speed

166 words per minute

Speech length

2422 words

Speech time

877 secs

Pavlina Ittelson

Speech speed

160 words per minute

Speech length

2986 words

Speech time

1118 secs

Peter Marien

Speech speed

182 words per minute

Speech length

2373 words

Speech time

784 secs

Tereza Horejsova

Speech speed

156 words per minute

Speech length

1318 words

Speech time

508 secs

Viktor Kapiyo

Speech speed

187 words per minute

Speech length

2540 words

Speech time

813 secs

Impact of the Rise of Generative AI on Developing Countries | IGF 2023 Town Hall #29


Full session report

Audience

The discussion revolves around the impact of technology, particularly AI, on freedom and democracy. There is a neutral sentiment overall, with a focus on whether AI will lead to more freedom and democracy or have the opposite effect. One argument is that authoritarian rulers might use technology to establish more control rather than promote freedom. This raises concerns about the potential misuse of AI in authoritarian regimes.

Moving on to the potential application of generative AI in developing countries, the sentiment remains neutral. It is recognised that generative AI has the potential to benefit these nations, although specific evidence or supporting facts are not provided. Nonetheless, there is an interest in exploring the application of generative AI in developing countries, highlighting a desire to leverage technology for their development.

Another aspect discussed is the need to redefine the term ‘developing countries.’ This argument emphasises the existence of highly functional digital societies in Estonia, Finland, Norway, and the Netherlands. These societies serve as examples of how advancements in technology can lead to progress and development. The recommendation is to learn from these societies and implement their successes in other parts of the world. Young people are seen as crucial in this process, as they can observe and learn from these digital societies, then bring back knowledge to design and restructure their own societies. The high youth population in regions like India, the African continent, and the Middle East and North Africa (MENA) further amplifies the importance of involving the young generation in shaping the future.

The impact of AI on human creativity and its contribution to human resources is considered in a neutral sentiment, without specific arguments or evidence provided. The broader question of how AI will affect human creativity and its implications for the workforce remains unanswered.

Concerns are raised about the use of generative AI in developing countries with limited resources and infrastructure. The sentiment is concerned, with a focus on the potential widening of the technology gap in these nations. The argument questions whether generative AI will exacerbate inequalities and further marginalise resource-poor countries.

In conclusion, this analysis highlights various viewpoints on the impact of technology, specifically AI, on freedom, democracy, development, and creativity. While concerns are raised regarding the potential misuse of AI and widening technology gaps, there is still potential for positive outcomes through the application of AI in developing countries. The role of young people and learning from successful digital societies are also emphasised in shaping a better future for societies worldwide.

Atsushi Yamanaka

In a recent analysis, different viewpoints on the topic of Artificial Intelligence (AI) were discussed. Atsushi Yamanaka, a senior advisor on digital transformations at JICA, shared his belief that AI has both significant potential and notable threats. With 28 years of experience in the field, Yamanaka advises JICA on incorporating technology elements into various projects and supporting digital transformation initiatives.

One area where Yamanaka sees promising potential is the use of generative AI models for local African languages. He highlighted how this application could play a crucial role in promoting digital inclusion in developing economies. By developing AI models for African languages, barriers in digital literacy could be overcome, enabling more people to benefit from technology. Yamanaka’s colleague, who studied AI in Japan and now works at Princeton, is actively working on generative AI models for African languages.

However, the analysis also acknowledged the potential risks associated with the rise of generative AI. It highlighted the concern that this technology could lead to an increase in misinformation, making it increasingly difficult to distinguish between real and fake information. As a result, trust in digital technology may be undermined, presenting challenges for individuals and societies. These issues underscore the importance of responsible development and deployment of AI technologies.

Another argument made in the analysis was the need to establish a consensual framework for AI regulations. The participation of emerging countries was emphasized, as developing nations should play an active role in global discussions on AI regulations. The aim is to avoid creating multiple fragmented models or regulations and instead work towards a unified approach that addresses the concerns and interests of all stakeholders.

Concerns were raised about the potential impact of AI technology on labor. The recent work by the International Labor Organization indicates potential job losses resulting from the introduction of AI. In the United States, for example, the Screenwriters Guild has expressed concerns about AI replacing their jobs, sparking fears of a potential backlash reminiscent of the Luddite Movement of the 19th century. These concerns emphasize the need to consider the potential negative consequences on employment and to ensure that appropriate measures are taken to mitigate any adverse impacts.

Privacy invasion was another aspect discussed in the analysis. The Chinese AI-based scoring system was highlighted as an example of technology that invades privacy. The system reportedly monitors and scores every aspect of citizens’ lives. This raises concerns among privacy advocates and highlights the ethical considerations that need to be taken into account as AI technologies continue to evolve and become more integrated into daily life.

The analysis also touched upon the digital gap between developed and emerging economies. The argument was made that AI technology, particularly new technologies, could actually help reduce this gap. Unlike traditional barriers to communication, there are no interaction barriers in digital technologies, making their adoption in developing countries more feasible. Furthermore, emerging economies might even contribute more to the growth and development of these technologies.

Interestingly, the analysis noted that developing countries have the potential to be at the forefront of innovation in AI technologies. It emphasized that a significant amount of innovation is already emerging from these regions and suggested that they might contribute more to innovation in digital technologies than Western countries. This insight challenges the notion that developing countries will necessarily lag behind in the adoption and advancement of AI technologies.

In conclusion, the analysis delved into various aspects of AI and provided different perspectives on its potential, risks, and implications. It emphasized the need for responsible development, consensual regulatory frameworks, and the active participation of emerging countries in shaping AI technologies. While acknowledging the threats and challenges associated with AI, the analysis also highlighted the opportunities for promoting digital inclusion and reducing the digital gap. Ultimately, it asserted that each society should have the agency to manage its own governance in line with its specific needs and circumstances.

Robert Ford Nkusi

Robert Ford Nkusi is a prominent figure in the field of software testing qualifications in Rwanda. He is currently leading a software testing qualifications team and has made significant contributions to the Rwanda Software Testing Qualifications Board. This demonstrates his expertise and leadership in the industry.

Furthermore, Robert has been involved in the design of the implementation plan for the Child Online Protection Policy in Rwanda. Working under the United Nations, he played a crucial role in developing a comprehensive plan to safeguard children online. This highlights his commitment to promoting child safety and creating a secure digital environment for young users.

In addition to his work in software testing and child protection, Robert has also contributed to the regional framework for the one network area in the East African region. By circumventing data roaming costs, this initiative has greatly benefited individuals and businesses in the area. Robert was actively involved in setting up the one network area and played a vital role in the successful proof of concept testing.

Notably, Robert has led efforts in managing cross-border mobile financial services, which have become increasingly popular in the East African community. By facilitating convenient and secure transactions across borders, these services have contributed to economic growth and poverty reduction. Robert’s involvement in this area demonstrates his expertise in the intersection of finance and technology.

Currently, Robert is engaged with JICA (Japan International Cooperation Agency) in implementing an ICT industry promotion project in Uganda. The four-year project aims to build capacity in the ICT industry and foster innovation and infrastructure development. By leveraging international partnerships, Robert is actively working towards advancing the ICT sector in Uganda.

The potential of generative AI to predict and mitigate harmful online content was discussed. Robert highlighted how AI can help keep children safe online in the context of Child Online Protection. However, caution is required when implementing generative AI, as its seemingly accurate responses can make users less questioning, potentially leading to unforeseen negative effects. It is crucial to strike a balance between the benefits and risks of this technology.

Moreover, African countries have shown exemplary progress in implementing and regulating AI technologies, challenging the traditional divide between developed and developing economies. By successfully adopting and regulating AI, these countries have demonstrated their capability in technological advancements.

The debate on long-term leadership and its relationship with authoritarianism was also explored. The definition of democracy and authoritarianism can differ based on the context, and it was argued that long-term leadership is not necessarily synonymous with authoritarianism. This raises important questions about the nature of political leadership and the impact it has on governance.

Furthermore, the potential of generative AI to transform politics was highlighted. The use of AI in predicting political outcomes and shaping political discourse has the ability to revolutionize the political landscape. However, it is important to critically analyze the impact of AI on democratic processes and ensure that it is used responsibly and ethically.

An interesting observation arising from the analysis is the need for collective efforts and shared learning in policy-making for AI technologies. Developing economies, like those in Africa, have successfully implemented technological solutions such as mobile money. It is suggested that developed countries, such as those in the G7, can learn from these successes and collaborate in policy-making to ensure the responsible use of AI technologies.

In conclusion, Robert Ford Nkusi is a leader in software testing qualifications, with notable contributions in the fields of child protection, regional frameworks, cross-border financial services, and ICT industry promotion. The potential and challenges of generative AI were explored, along with the successful implementation of AI technologies in African countries. The complex relationship between long-term leadership and authoritarianism was discussed, and the transformative potential of AI in politics was examined. Overall, these insights shed light on the intersection of technology, governance, and societal progress.

Sarayu Natarajan

The discussion examines the implications of generative AI across various domains, acknowledging its advantages and disadvantages, especially in relation to technology and society. It highlights the study of algorithmic and platform-mediated work, digitization, and digital infrastructure in the context of generative AI, with a focus on effective government-citizen communication.

Data governance is identified as a critical area of focus within generative AI, necessitating exploration of sustainability financing, governance, and digital system replication worldwide. The discussion also raises concerns about the impact of generative AI on the labor market, particularly in developing regions, where workers involved in data annotation and labeling are often overlooked in broader AI conversations.

Furthermore, limitations and biases in data structures restrain the full potential of generative AI, particularly in addressing gender and race representation in the developing world. The potential for generative AI to propagate disinformation and misinformation is also highlighted as a significant concern.

To address these issues and ensure meaningful digital lives and futures, the discussion emphasizes the need for governance and regulation to be considered during AI deployment. Inclusive frameworks of governance and regulation involving global participants are deemed essential to manage the impact of AI across all regions and promote equitable outcomes.

Additionally, the role of generative AI in the creative domain is explored, with the recognition that it can assist in certain types of literature creation. However, it is underlined that the education system and society should continue fostering creativity to avoid over-reliance on AI.

Overall, the analysis delves into the multifaceted implications of generative AI, highlighting the importance of governance, fairness, and ethics. The discussion emphasizes the need for thoughtful and inclusive approaches to harness the potential benefits of generative AI while mitigating its challenges.

Tomoyuki Naito

During the IGF 2023 sessions, Prime Minister Kishida emphasized the importance of generative AI and knowledge sharing for all participants. This highlights the recognition that generative AI has the potential to greatly impact various sectors, and therefore, it should be accessible to everyone.

The discussions at the IGF have brought international experts together, who have acknowledged the threats posed by generative AI. This recognition has sparked thoughts on how to counter these potential threats. The fact that these concerns are widely recognized at an international level shows the serious consideration being given to generative AI.

Tomoyuki Naito, the moderator of the session, specifically emphasized the need to explore the opportunities and threats of generative AI in the context of global south economies. This highlights the importance of understanding how generative AI can impact the economic growth and development of these regions. By recognizing the specific challenges faced by global south economies, tailored strategies can be developed to leverage generative AI for their benefit.

The panel discussion aimed to gather expert opinions on the threats and opportunities presented by generative AI. The first half of the discussion was dedicated to capturing the perspectives and insights of the panelists to ensure a comprehensive understanding of the various viewpoints. The second half of the discussion, on the other hand, was planned to encourage public opinions and comments, fostering a more inclusive and democratic approach to addressing these issues.

Naito’s belief that experts are actively working on addressing the potential threats to privacy and security posed by generative AI is significant. It indicates that these concerns are not being overlooked, and there is a collective effort to develop strategies to mitigate these risks. The fact that many sessions at the IGF have already discussed the potential threats of generative AI further strengthens the notion that this is a widely recognized issue. Moreover, the concerns shared by international experts highlight the seriousness with which these potential threats are being taken.

One noteworthy observation from the discussions is the recognition that countries can proactively utilize new technologies, such as generative AI, for their own economic and social development. This signifies a shift in mindset, where new technologies are seen as opportunities rather than threats. By leveraging these technologies effectively, countries can drive economic growth and social progress.

In conclusion, the IGF 2023 sessions shed light on the importance of generative AI and knowledge sharing for everyone, specifically in the context of global south economies. The discussions recognized the potential threats posed by generative AI and emphasized the need for expert opinions and public engagement. However, there is a collective effort to address these threats and proactively utilize new technologies for economic and social development. Overall, the sessions provided valuable insights and highlighted the significance of inclusive and informed decision-making in the field of generative AI.

Safa Khalid Salih Ali

Generative AI has emerged as a powerful tool with the potential to revolutionize various sectors. The analysis reveals several key benefits and applications of generative AI. Firstly, it can significantly reduce the time spent on data analysis by automating routine tasks. This allows businesses and organizations to derive insights more quickly and efficiently. By eliminating the need for manual data analysis, generative AI enables professionals to focus on improving the quality of their work, ultimately enhancing productivity.

Furthermore, generative AI can play a crucial role in predicting and managing economic crises. By utilising generative AI, experts can develop prediction models that help identify potential crises and take preventive measures accordingly. This is particularly relevant to sectors such as finance and banking, where generative AI can aid in risk assessment and fraud detection. By analysing historical data, generative AI can predict consumer behaviour and help financial institutions make informed decisions to mitigate risks and enhance security.

In the realm of fintech, generative AI has the potential to enhance customer experiences. By providing immediate solutions in emergency situations, generative AI can improve customer satisfaction levels. Additionally, generative AI can democratise financial services by allowing all participants to easily access the services they need, such as virtualized access through chatbots. This fosters financial inclusion and reduces inequalities by ensuring that all citizens have equal opportunities in accessing financial services.

Another significant application of generative AI lies in policy simulation. By simulating the effects of different policies, generative AI can assist policymakers in addressing weaknesses and making informed decisions. Through simulation, potential issues can be identified and resolved before they negatively impact society. For example, the analysis highlights a situation in Sudan where a war could have been preempted if generative AI had been used to simulate the consequences of certain policies.

While the benefits of generative AI are clear, it is crucial to address certain challenges. Developing countries face significant data challenges and lack the necessary knowledge and infrastructure to fully harness the potential of AI. Therefore, it is essential to establish systems that support AI development in these countries. By doing so, they can benefit from the transformative power of generative AI and drive economic growth.

In conclusion, generative AI has immense potential to revolutionize various sectors and bring about significant benefits. Its ability to streamline data analysis, aid in predicting and managing economic crises, enhance customer experiences, simulate policies, and foster financial inclusion makes it a valuable tool for the future. However, ensuring that developing countries have the necessary capacity and resources to tap into this potential is crucial. Generative AI can truly transform industries and bring about positive socio-economic changes when effectively implemented.

Session transcript

Tomoyuki Naito:
Ladies and gentlemen, good evening. I know this is today's last session, which is why we don't have over 100 people here, but I hope you will enjoy it, ladies and gentlemen. Welcome to this town hall session, titled The Impact of the Rise of Generative AI on Developing Economies: Opportunities and Threats. I am the organizer and today's on-site moderator. My name is Tomoyuki Naito, Vice President and Professor at the Graduate School of Information Technology, Kobe Institute of Computing. Very nice to meet you, everyone. Let me quickly introduce the other panelists. On my immediate left, Ms. Safa Khalid Salih Ali, Senior Business Intelligence Engineer and Software Engineer at the Central Bank of Sudan, the Republic of Sudan. Safa, welcome. Next to Ms. Safa, Mr. Robert Ford Nkusi, founding partner and CEO of Orasoft Ltd., the Republic of Rwanda. Mr. Nkusi, thank you for coming. We had expected to have Ms. Kay McCormack, Senior Director of Policy at the Digital Impact Alliance, but due to an urgent engagement she couldn't make it. Instead, today we have the privilege of having Dr. Sarayu Natarajan, founder of the Apti Institute, India. Doctor, thanks for coming. And last but not least, on my left-hand side, Mr. Atsushi Yamanaka, Senior Advisor for Digital Transformation, Japan International Cooperation Agency (JICA), Japan. Mr. Yamanaka, thank you for joining us. I'm also looking at Zoom: we have quite a number of participants online, over 20 people. Thank you for joining today. So, as scheduled, we'd like to begin the session, The Impact of the Rise of Generative AI on Developing Economies: Opportunities and Threats. Let me begin with a very brief explanation of the background of this session.
Many of you here and online, as well as the panelists, have already heard many discussions over the past two days of IGF 2023 in Kyoto, including today's. The internet for all, or the internet we want, an internet for everyone, an internet for good: these are the common keywords of this year's IGF. On top of that, at yesterday's official opening ceremony, Prime Minister Kishida of Japan spoke about the importance of generative AI, of guidelines, of knowledge and information sharing, and of the collective effort to make AI work for good, for everyone. Against that background, many AI-related sessions have already taken place, and that discussion will continue until the end of IGF 2023 in Kyoto. This session focuses specifically on generative AI's impact on the Global South. I don't want to divide north and south, but the distinction matters here, because there are two aspects: opportunities and threats. On the threat side, many sessions have discussed the potential threats to privacy, human rights, and information security. Of course, all of those things are worrying. Generative AI collects information from everywhere and synthesizes all that data into one output, and we often don't know how it was synthesized. But as you have already heard throughout the sessions, many international experts are already aware of those threats, and they have shared their knowledge and their worries with everyone, including us, so that process is already underway.
I personally feel that we do not have to fear the threat aspect quite so much, because many experts and many people are already aware of those worries, so we can protect ourselves to some extent through collective wisdom, collective knowledge, and collective effort. On the other hand, on the opportunities side, I personally have not seen many discussions happening at this IGF, yet we know that many opportunities exist. So I will draw on these knowledgeable panelists, and I would like to invite all of your opinions, both on-site and online. I would like to allocate the first 20 or 25 minutes to hearing the opinions of the panelists here, and in the second half of this session, the last 20 or 25 minutes, I would really like to have your opinions or comments about the opportunities. And of course you can also share your views on the threats; that would be very welcome. So my first question to all the panelists, question number one, is a very fundamental one. In order for all of you to know the experts' backgrounds, knowledge, and wisdom, I will invite all panelists to answer. Question number one: what has been your background in the ICT sector, in terms of using the power of technology to make a better world? Let me invite you one by one, starting with Ms. Safa.

Safa Khalid Salih Ali:
Thank you, Sensei. I am truly honoured to be here today, and thank you all for attending this session. I am Safa Khalid from Sudan. I started my journey in ICT in 2010, working at the Central Bank of Sudan. My specialization is business intelligence and software engineering. Working in the Central Bank, we go through data analysis for multiple systems, and we can now see that generative AI can help us reduce a lot of working time and build prediction models for crises that sometimes happen in a country. Using generative AI also benefits any central bank, which can affect the economy of the country, for example by maintaining financial stability or promoting economic development, in Sudan or in any other country. Also, when you think about fintech, generative AI can help us with customer experience: it can help build customer experience systems that address an emergency situation for a customer, instead of the customer waiting until the next day to go to the bank and find a solution. It can also enhance any analysis or prediction tool for any module. This really happened, for example, last year, before the war started in Sudan: when we held auctions, you could use generative AI to build a simulation of the policy, instead of just applying the policy directly to the economy. If you implement such a simulation, it can help us find the weaknesses of the policy before it is applied to the economy. Thank you.

Tomoyuki Naito:
All right, thanks very much. Okay, Mr. Robert Ford, please go ahead.

Robert Ford Nkusi:
Thank you. My name is Robert Ford. I'm currently leading a software testing qualifications team in Rwanda, under the Rwanda Software Testing Qualifications Board, one of the member boards of the International Software Testing Qualifications Board. Before that, I supported the government of Rwanda, under the United Nations, in designing the implementation plan for the Child Online Protection Policy, which Rwanda adopted a couple of years ago. It is now a framework that helps the country set proper safety guidelines for children's online engagement. Before that, I participated extensively in the regional framework for the One Network Area in the East African region. For those of you who may not know the EAC framework, the East African Community is composed of six countries: Uganda, Kenya, Rwanda, Tanzania, South Sudan, and Burundi. At some point in that framework, they wanted to create a One Network Area, treating inbound data traffic as one network's traffic, so that they could circumvent data roaming costs for people in the region. Within a couple of years, we were able to set up the One Network Area, actually the first proof of concept of its kind ever tested anywhere in the world. In that work, I focused specifically on the component of cross-border mobile financial services, helping citizens of the countries in that region to transfer money and transact financially between nations using what is popularly known as mobile money. I have also participated at AFRINIC, the Internet numbers registry for Africa. And now I am currently engaged with JICA, the Japanese government's cooperation agency, in implementing a project in Uganda: an ICT industry promotion project, a four-year project with four major outputs that are supposed to help the country build strong capacity in the ICT industry.

Tomoyuki Naito:
All right, thanks very much, Mr. Ford. Okay, Dr. Sarayu Natarajan, could you go ahead, please?

Sarayu Natarajan:
Thank you very much. Thank you for enabling me to be part of this conversation. It's a very important one, both for understanding the advantages and disadvantages and some of the risks of generative AI. Thanks also to the audience. I know it's 6 p.m. here, and it must be a range of different times across the world, so thank you for joining online as well. My name is Sarayu Natarajan. I am the co-founder of the Apti Institute, an institution that works on questions at the intersection of technology and society. We have three big areas of work. We focus on algorithmic and platform-mediated work. We look quite extensively at digitization and digital infrastructure and the ways in which governments and states can reach their citizens, particularly on questions of sustainability financing, governance, and the replication of digital systems across the world, and also extensively at questions of data and data governance. And AI, particularly generative AI, is a theme that cuts across all of these areas, and we have been exploring it quite significantly; hopefully more will come out over the course of this conversation. Back to you, Dr. Naito.

Tomoyuki Naito:
Thanks very much, Dr. Sarayu, and Mr. Atsushi Yamanaka, please go ahead.

Atsushi Yamanaka:
Thank you so much. Well, it’s very hard to be as concise as my predecessors. Thank you so much for joining this session. You are all very brave, because you resisted the urge of the GIZ reception downstairs, so thank you for your effort in coming to this session. My name is Atsushi Yamanaka. I’m a senior advisor on digital transformation at JICA. At JICA, we’re trying to promote DX for the improved well-being of all, incorporating a lot of these technology elements into different projects and support initiatives. But prior to that, I’ve been in this field for quite a long time. This is my 28th year of pursuing ICT for development; in fact, it makes me feel so old. Initially I was in UNDP, and then I was involved quite intimately in the WSIS process. So it’s really personal for me to be here, and I’m really happy to see the IGF come to Kyoto, and to discuss how this process is going to, finally, hopefully, help change the world for a better place using these technologies. And AI certainly is one area with a lot of potential, and also a lot of threats as well. So this conversation is very timely, and I’m really happy to be part of it. Thank you.

Tomoyuki Naito:
Okay. Thanks very much. So for the sake of time, let me just quickly go to the core of today’s session. This is the core question, my second question, to all the panelists. Do you think that the rise of generative AI, represented by ChatGPT, is a good thing for the economic and social development of developing economies? Please answer yes or no, with your very succinct reasons, please. Maybe let me just start with Yamanaka-san.

Atsushi Yamanaka:
Thank you, Naito-sensei. Can I say maybe? Or “it depends”? Well, I understand that you want a very precise answer, yes or no, but I don’t think I’m qualified to say yes or no. So would it be okay? Yeah, it’s okay. It’s up to you. Thank you so much. Well, in a way, yes, because we actually had a colleague; we sent him to Japan. He studied AI, he did a PhD here in Japan, and he was doing research on generative AI models for African languages. He was at RIKEN, one of the top research institutes in Japan, but now, unfortunately for Japan, he has moved to Princeton to continue his work. Having this kind of local language model incorporated into generative AI could really open up opportunities for those people who are unconnected or not digitally involved. Because of high barriers in terms of digital literacy, this inclusion has been a major challenge, and it has been the issue for the last 20 years. The last 2.6 billion people are going to be the hardest people to reach. So for that, I think AI, and specifically local language generative AI models, could open up opportunities for them. So that’s the yes. The no part? Of course, there are threats. We talked a lot about misinformation, malinformation, and disinformation. There is even an industry, I think in North Macedonia, where a village churned out all this malinformation. It’s a business: people who want this malinformation hire the people in this particular village, and it became an industry. And it’s getting very, very difficult for us to distinguish whether a piece of information is real or not. That, I think, is going to be a huge threat in terms of information accuracy and the trust that we have in the internet, or in digital technology as a whole. So that’s yes and no.
And I’m sure that my fellow participants also have a similar yes or no moment.

Sarayu Natarajan:
Thank you. I will again give a response which is “yes, if”: yes, if we attend to questions of governance and regulation. I mean, yes and no. To think about generative AI without thinking about its consequences, harms, and injustices, which occur at several levels, would be limiting. One, there is the level of labor. Generative AI does not exist without the labor of many workers in very many parts of the developing world. So to talk about it abstracted from the way in which generative AI is itself created, which is the labor of data annotators and labelers, might be a bit limiting, and so that needs to be a part of the conversation. Second, I think, is the way in which data itself is structured: it often limits the presence of data pertaining to gender, race, et cetera, which may limit the capability of generative AI, so its applicability and use in contexts in the developing world may be limited. The third, of course, is what you mentioned, Mr. Yamanaka, which is the consequences such as disinformation and misinformation, which generative AI makes very easy. So with this framework of the kinds of injustice that generative AI may bring about, we can start to think about both what the use cases are and how we may govern them to ensure that all of us have meaningful digital lives and digital futures. I’ll pause here and look forward to the rest of the discussion.

Tomoyuki Naito:
Yeah, thanks very much. That’s a very good and very important point. Mr. Yamanaka emphasized the local language issues, and Dr. Sarayu, you mentioned starting with the use cases, from which we can deepen the discussion. That’s a very important and significant case. Then, Mr. Robert Ford, your opinion is always very important. Thank you.

Robert Ford Nkusi:
Can you hear me? I think I have a good voice now. Thank you. Just a day before I flew here, when I showed my niece the topic I was going to discuss as a panelist at this conference, she asked me a very intriguing question that I kept asking myself over and over while I was flying. She said: uncle, don’t you think the digital technology that we are consuming today has come earlier than it should have? That is, are the world and its people capable of consuming and using the technologies we have today profitably, for their own benefit? And I kept juggling my brain back and forth to see if that was right or wrong. So, now, talking about generative AI, it’s better to tackle this topic once we have deeply digested and understood what this animal is. When I took my computer science classes many years ago, we always wondered: when is there going to be a time when technology takes away from us the responsibility to write lines of code, so that something else does it? And AI is with us today. So, to answer your question: is it good or bad? Are we looking at dangers, or are we looking at a comfortable world in the time ahead? Like sensei, yes and no. I’ll speak to just one specific example. The predictive power of AI is going to help us, especially the Global South, to determine the kind of content that goes online before we even know the danger that content is going to cause us. And I speak from the line of authority that comes from Child Online Protection, for example. Today, when we look at how we struggle and fight to keep our children safe online, before we can put mitigation measures in place, the content is already online and children are consuming it. Generative AI now equips us with the capability to mitigate that, with tools we can use to mitigate that. That’s the positive part.
But on the negative part, some of the tools, like ChatGPT, give you such good responses that many times we don’t even need to question them. Each time you ask, the response matches your understanding so well that you don’t even want to question it. But behind that text, there is a lot that can be questionable, which puts us at the crossroads of what is good for us and what is not. Because the way generative AI works is that it uses machine learning to train software on data to be able to give you what it gives you. That’s how it works. So if what we get seems too good to be true, then we don’t question it. And then we stand at the edge, at the rift, about to fall off into oblivion by technology. That’s just the tool. Thank you.

Tomoyuki Naito:
All right, thanks very much, Mr. Ford. Hey, Safa, please go ahead. Your opinion.

Safa Khalid Salih Ali:
Okay, let me answer with a yes. Because I have been in ICT for this long, I will directly say yes. It can help. When you’re thinking about the economy, and thinking about a developing country, how long does it take just to analyze a data set? It takes a lot to get insight from a data set. But by using generative AI, we can go to the insight directly and get an impact on the economy directly, instead of wasting a lot of time on the routine work of just analyzing this data. When you think about that, think about how much you can save in cost, and how many employees you would need just to analyze this data. With generative AI, we can find the insight directly, and that’s helpful. Also, when we think about the economy, we need to think about financial inclusion and fintech. In the traditional way, you can’t deliver real financial inclusion to all the citizens of a country. But if you have generative AI, we can give all participants the opportunity to find the service they need, like using virtual access, like using chatbots, any type of AI tool. And when we think about ChatGPT, which can help you analyze data, think about how much time you need just to make a risk assessment for any bank, for any customer in the bank, before giving him a loan. By using generative AI, we can do fraud detection, make risk assessments, and build models which can predict for you the habits of this customer based on historical information. For all these reasons, to your question, I can directly say yes. Yes, we have drawbacks, but they can be covered by guidelines. Thank you, sensei.

Tomoyuki Naito:
Yeah, thanks very much, Ms. Safa. Actually, to the audience: as you fully understand, we have a variety in the nature of the countries as well as in the nature of the jobs here. Every panelist has a different occupation. So it is quite interesting and quite significant, I personally believe, to look at the power of AI, or the threats of AI, from different angles. Then, as a result, the four panelists here have all somehow answered on the yes side; no one said an obvious no. So, if I understand it like that, the four panelists here say yes, somehow. Let me just ask one more question. While G7 member countries, including Japan, are currently preparing basic guidelines for appropriate AI use in society, do you think that developing and emerging economies should also prepare their own original guidelines for AI use? How do you think about that? Let me ask this question to all the panelists, and let me invite Mr. Robert Ford first. Robert?

Robert Ford Nkusi:
Okay, I went to computer science class but not to audio class. So, thank you. Each time we have a discussion that draws a line between developing and developed economies, I struggle. I struggle to maintain that classification. And I will quickly give an example of how one part of the world is struggling to understand, or to draw lessons or benchmarks from, some of the success stories of the other class of people we are talking about. So when we talk about the developed world, I’ll give just a simple example involving what they call the developing economies. For as long as I can remember, the financial systems of the developed West have been based on data that runs on plastic cards, right? Each one of us is carrying a wallet with so many plastic cards. Africa, not more than a decade ago, leapfrogged from that, and people are able to transact using mobile money. They are circumventing the whole trouble of the environmental impact of keeping plastics in people’s wallets. If we were to collectively remove the plastics from people’s wallets and pile them together, we could probably fill up a country. And this is a very bad effect on the environment that we live in. But the other part of the world is failing, or dragging its feet, in quickly benchmarking on that and saying: why don’t we have a plastic-free world, at least for our simple cards, so that we can transact with mobile money? So, just giving examples. When the G7 is discussing legal and regulatory frameworks for generative AI, they probably need to go down there and see what there is down there that they can pick from. I know countries in Africa that have moved far ahead in designing and crafting regulatory frameworks for AI and for these other technologies, and they are moving very fast towards that.
So at some point, I don’t think you need to be a member of the G7 to determine the kind of policies that are going to guide and mainstream the thought process towards how we consume technology for a safer world tomorrow.

Tomoyuki Naito:
All right, thanks very much, Mr. Ford. Can I invite Dr. Sarayu? Yes. Please.

Sarayu Natarajan:
Thank you very much, and thank you for that. I largely agree. Not least because of the several advances made in many parts of the world in thinking about AI, I do not think this is a conversation, or these are problems, entirely unique to parts of the developing world. I think one of the strange things about technology is that it’s a great equalizer, both in positive senses and in some of the more harmful ones. And for us to think about these as unique problems, from a frame of exceptionalism, may be limiting. So building out frameworks of governance and regulation from an inclusive standpoint, which includes many global participants, is very necessary. My “yes, if” from before was deeply qualified, but I will speak to one specific theme that I mentioned among the injustices of generative AI, which is the question of labor. There are two sides to the question of labor in the context of generative AI. The first is that human labor is very critical to building generative AI. If somebody doesn’t label the data that is used to build a large language model, the large language model does not exist; and this is true of image models as well. Much of this labor is in very many parts of what is called the developing world. Now, those who build AI in that sense, who are responsible for the labeling, annotation, marking, and categorization of data, are never really a part of the conversation. That’s one significant injustice. So while there might be frameworks for governing and regulating generative AI use, the way in which generative AI is made itself has a significant geopolitical component. The second dimension of generative AI is a much more downstream effect, which is the question of job loss. And to the point Mustafa made, job loss is a complicated question that has different meanings in different countries.
In mine, which is India, we have a population of 1.4 billion, where challenges around employment and job losses are very significant. India is also an IT services economy, so a large part of the economy relies on IT service provision. Again, the question of job losses through the use of services like ChatGPT, and generative AI more broadly speaking, is a very significant one. All of this is to say that conversations about generative AI that do not start from how we are going to govern some of the harms, just as much as how we are going to deploy it, might be limiting. So we have to think very carefully about use cases, and very carefully about the first-order, second-order, and third-order consequences of the use of generative AI. That is not to say that there is no use case at all; there are several applications for it. But governance is a part of deployment, is where I’m coming from. I’ll pause here.

Tomoyuki Naito:
All right, thanks so much, Dr. Sarayu. Actually, thanks very much for touching upon the job aspect itself. Just to share some information with all the other participants here on site and online: recently the ILO, the International Labour Organization, released, a month or so ago, sometime in late August, an analysis using their ILO model of the impact of generative AI adoption across 59 countries. According to their results, the impact on the job loss side is much heavier in the advanced economies and much lighter in the developing economies. A lot of meaning is contained in that analysis. I strongly recommend it: if you are interested in the job impact of generative AI, the ILO released a very good working paper back in August. But anyhow, coming back to the question, let me just invite Yamanaka-san for your opinion.

Atsushi Yamanaka:
Thank you so much. Actually, there’s a lot to think about, yes. I agree with the doctor, and I agree with Robert: we don’t necessarily want another fragmentation, right? We had a discussion yesterday about data flow, or data governance, and in those sessions this question also came up: do we need to create another model, or other regulations, specifically for emerging economies? The answer is probably no. Why should it be? Instead, I think the so-called emerging nations should be part of the global discussions on the framework, or on case making, instead of trying to fragment further. It’s already a fragmented world. The internet has been fragmented; the agenda is fragmented. Why do we need to further fragment this field? So, AI regulations: whether we can succeed or not with regulations, I’m not sure, because there are so many different opinions. But I think it is really important to have a multistakeholder approach, in terms of giving emerging countries the opportunity to have their voices and inputs in the formulation of a regime or mechanisms to regulate, or to use, AI, for want of a better word. That, I think, is one of the things we need to do. Going back to, I think it was, the labor side: that’s going to be a very, very important component. That’s something I should have mentioned as a potential negative aspect of generative AI. Already, as Robert was mentioning, there are software engineers, or even illustrators, or even storytellers; I think there was a huge strike by the Screenwriters Guild in the United States, because they were saying, okay, generative AI can do their jobs. Now, I think we’re going to have a Luddite movement. Do you know the Luddite movement? It happened at the beginning of the 19th century, when industrialization came to the UK.
All the workers started destroying the machines. I think we’re going to see that if we do not come up with a model which is conducive. Another thing: if we do not give people the opportunity to change their business models or their skills, re-skilling them, if we don’t do that, I think we’re going to see another Luddite movement, for AI.

Tomoyuki Naito:
All right, thanks very much. Re-skilling, another word which the government of Japan is really emphasizing domestically. Okay, Safa, your opinion about my question.

Safa Khalid Salih Ali:
When we’re thinking about guidelines for developing countries, we need to think about what we already said about the data challenge. Really, we face a very critical issue with data in developing countries; we can’t find systems with fully clean data. So we need to think about what to do to make sure that we have data that can be used for the development of AI in developing countries. Maybe this is an issue you don’t find in G7 countries, so you need to put it in as a challenge for these guidelines. Also, we need to think about the socioeconomic differences between these countries and the G7 when you compare them. It is not about fragmentation of the guidelines, but the guidelines need to keep capacity building in mind. You need to think about how to solve the issue of infrastructure: you can’t just put a generative AI solution in front of someone who can’t use it. We also need to think about capacity building. A lot of people in developing countries don’t know anything about generative AI and can’t use it; maybe even big companies don’t know about it. So we need to think about how we can use generative AI in developing countries, and how we can use it for the development of the country. The third issue is data privacy and labor, as you said. Do you think it is worth wasting time on routine jobs, which can already be done in minutes, instead of thinking about the productivity and quality of the work? If your staff can focus more on the quality of the work, it’s better to use generative AI for the routine work.
For example, if you are thinking about a central bank: if a crisis happens anywhere, you need to calculate the impact of that crisis on your own economy. With generative AI, you can use a solution to make predictions about what the next action should be. We need to think about the next action, because it can be an emergency action you need to take, instead of going directly into the drawbacks.

Tomoyuki Naito:
Thank you. Thanks very much, a very important aspect; you beautifully mentioned the opportunity side. Now, I would like to open the floor for questions from the audience to our panelists. So, please, come to the microphone. I already have questions online, but let me just prioritize the on-site question first. Please kindly say your name and ask your question.

Audience:
So, I’m from Deutsche Welle Academy, Germany. I have so many questions, I don’t know which to ask first, because I recently heard a professor from Ghana speak on this topic, and I wonder why you are not mentioning it. But my question is this: we are discussing as if we live in a neutral political system, as if we have a kind of democratic system all over the world. So I’m wondering what you think about non-democratic states worldwide, in Europe also, and especially Sudan, South Sudan, and your region. Will these technologies, especially in the age of AI, lead us to more freedom and democracy, or will it be the opposite? If authoritarian rulers control them, will they still lead to more free societies? This is maybe too big a question, but… Yeah, thanks very much.

Tomoyuki Naito:
Big question, but very important question. Any panelists want to answer to this question?

Robert Ford Nkusi:
Thank you. I think it depends on how you define those values of democracy and authoritarianism, and whatever else you want to call them. What does democracy mean in my country, as against a democracy in India, or in the United States? For example, if a President stays in power four, five, six terms and is doing the right thing, to me that’s okay. Take Germany, for example. Germany does not have term limits, right? Germany can keep a head of the country for as long as the party keeps them in power. How my country looks at it is different from how my country would look at Germany. So in Uganda, or in Rwanda, or in Kenya, if a British-style system says you can stay in power for as long as your party keeps you there, that’s okay; it’s not authoritarianism. And there should be a way in which we validate political leadership: at what point do we define it as being authoritarian, or as not working in the interest of the masses? I’m not saying it’s all fair. I agree with you: we have countries that are working in the interest of the masses, and countries that are not. What I am saying is that the power of predictability that generative AI gives us is so immense that with it we could build great potential to change the way we run politics, today and in the future. But as for authoritarianism, and dictatorship, and God knows what: I have good friends in countries that have been called dictatorial for 30 years, and in other countries, in Britain even, they will keep someone in power for God knows how long.
I know countries in Africa that have very good elections, of course.

Atsushi Yamanaka:
Thank you, that’s a good question. Someone was saying that this IGF is so bland, there’s nothing controversial, no one is raising anything controversial. So thank you so much for asking those kinds of questions; they are interesting questions. For me: I also come from a polity where we have a so-called free election system, but I was told something that challenged that, when I asked questions about China, because the Chinese actually have a system of scoring, right, Dr. Song, yes? Now, for me, it’s a bit difficult to accept that kind of system, where I would be watched and scored every single moment. But I was told that for ordinary people, who do not bribe, who don’t have money to bribe or don’t have the connections, they said: well, if I am a very good citizen, if I do something good for society, my score goes up, and I get opportunities which I never got before. So, you know, I don’t have the answers to it. But at the same time, I feel that if we can keep privacy, if we can have basic human rights (okay, last ten minutes, so I have to be quick), then how the polity would manage, not control, but manage the society should depend on the society itself. We are here from so-called developed countries; we are not here to judge them. That I feel very strongly, having worked in developing countries for so long. I don’t know. I may get hammered for this, but…

Tomoyuki Naito:
Thanks very much. Yeah, thanks very much for the very good interaction, which went beyond my expectation; thanks for the very good question and answers. Now, I have recognized several people raising their hands online, on Zoom. But before that, let me just check the chat box. I got one question there from Mr. Mohamed Hanif Garanai, asking whether we could apply generative AI in developing countries, like the question from the audience. Some of the online participants have kindly answered already, so thanks very much for the spontaneous online interaction. So let me just invite the next online speaker. Can you hear me? Yes, I can hear you. Please go ahead. Thank you.

Audience:
Now that my video has come on, I’ll join all of you in person. Hello, I’m Debra Allen-Rogers, I’m in The Hague, and I’m originally from New York City. I have a nonprofit here called Find Out Why; it’s a digital fluency lab that promotes digital fluency. I wanted to ask a question, even after the sort of spicy comments that followed about Germany being dictatorial, which I don’t think is obviously true in this context. But I did want to ask whether we could rename “developing countries”. I know this is going to sound naïve, but just look at my age and know my background in design and so on, and I’m going to ask this question anyway. We’re in a transition now, and highly functional digital societies, for example Estonia, Finland, Norway, and the Netherlands where I’m living, are great places for young people to come and learn on travel expeditions. In Japan, post-war, there was a program (I was looking for its name) that sent students and young people around the world to observe and learn best practices and then bring them back to Japan, to design the society how they wanted it to be after the war. So aren’t we at another juncture like that right now? India has a lot to teach, the African continent has a lot to teach, MENA has a lot to teach, and those societies have young people; 70% are under the age of 30, going forward. So we who are working in this right now have to redefine some of these terms, so that we don’t fall back on “developing world countries”.
It’s a new day, and we don’t have to spend so much time talking about it (I’m kind of breaking my own rule by talking about it so much right now), but I’m saying that we should redefine it when we give our speeches and when we talk. Part of the work I do is to help young people travel the world to see best practices, bring them back home, and then decide how they want to design their own societies. And we do have highly functional digital societies, Estonia, Taiwan, the Scandinavian and Nordic countries I mentioned, that are right there for us to look at and watch. Thanks very much for taking my question.

Tomoyuki Naito:
≫ Yeah, thanks very much, Ms. Rogers, great comment. And one more person or organization, Ghana IGF remote hub, you are raising hand. Please go ahead, if you can hear my voice, please go ahead.

Audience:
Okay. I’m Dennis, from Ghana. My question is: does generative AI help or hurt human creativity? And if so, how, and what contribution is it making to human resources? And the second question: how do you think about the use of generative AI in developing countries with fewer resources and less technology? Could it boost their development, or might it make the technology gap between countries even bigger? Thank you.

Tomoyuki Naito:
Very sharp question. Any panelists who want to answer? Can I just invite the doctor to answer the first question?

Sarayu Natarajan:
≫ I’ll try answering the first question, on creativity, very briefly; I think the second one, around development gaps, has already been tackled a little bit. With respect to human creativity, it’s hard to predict the effects, but it’s important to remember that generative AI is itself built on the back of creative work by artists, by producers and makers of music, and to think about creativity as abstracted from how generative AI is made may be limiting. Generative AI could help, in the sense that it could offer a new way of creating, say, certain types of writing and literature on which further human creativity and ingenuity are applied. But it could also have deleterious effects in the absence of deliberate effort: it should not, in my mind, supplant the work of the education system and of human society in nurturing those skills in young minds. So we should not see this, to my mind, as a zero-sum game. And in particular, the question of creativity must look at how generative AI was born. Thank you.

Tomoyuki Naito:
≫ Thanks very much, Dr. Sarayu. Do you want to answer the second question?

Atsushi Yamanaka:
≫ To summarize the second question: will AI technology be useful for society, or will it widen the gap between countries? That’s a good question. I think earlier Dr. Sarayu mentioned that new technologies can actually reduce the gap, because the barriers to entry and interaction are much lower for digital technologies than for many other technologies. So I don’t think developing countries will necessarily lag behind. Rather, I think it’s going to be more interesting: a lot of innovation is actually coming from so-called emerging economies right now, and we may see much more innovation coming from the emerging economies than from the so-called western countries. I don’t necessarily like the term “reverse innovation”, because it’s very pretentious, but innovations coming from emerging economies and developing countries are going to be the trend that we see in the future.

Tomoyuki Naito:
≫ All right, thanks very much. We have only one minute left to the end of the session, so my apologies to the other participants online for not being able to take your questions. Let me summarize today’s session very briefly. As Mr. Yamanaka’s last comment mentioned, we don’t have to look only at the threat side. The opportunity for every country to utilize new technology as a source of innovation, leveraging economic as well as social development, is the key thing to keep discussing, and the key point to emphasize: not only the G7 countries providing leadership and guidance, but all the other countries, more than 190 of them, can proactively utilize it for their own businesses and their own futures. That could be the main message of today’s session. I’m sorry about my poor time management today, but it’s time to wrap up. Thank you very much, everyone, for being here. Let me conclude this session, since time is already up. Please join me in giving a round of applause to all the panelists. Thank you very much, and thank you to all the other participants, on-site and online. Thank you so much.

Atsushi Yamanaka
Speech speed: 186 words per minute | Speech length: 1778 words | Speech time: 575 secs

Audience
Speech speed: 281 words per minute | Speech length: 798 words | Speech time: 170 secs

Robert Ford Nkusi
Speech speed: 165 words per minute | Speech length: 1773 words | Speech time: 645 secs

Safa Khalid Salih Ali
Speech speed: 193 words per minute | Speech length: 1110 words | Speech time: 344 secs

Sarayu Natarajan
Speech speed: 190 words per minute | Speech length: 1317 words | Speech time: 417 secs

Tomoyuki Naito
Speech speed: 143 words per minute | Speech length: 2230 words | Speech time: 938 secs

Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196


Full session report

Owen Later

Microsoft has made significant efforts to prioritise the responsible use of AI technology. They have dedicated six years to building their responsible AI programme, which involves a team of over 350 experts in various fields including engineering, research, legal, and policy. Their responsible AI standard is based on the principles outlined by the Organisation for Economic Co-operation and Development (OECD), emphasising the importance of ethical AI practices.

In addition to their internal initiatives, Microsoft recognises the need for active participation from private companies and the industry in AI governance discussions. To foster collaboration and best practices, they have founded the Frontier Model Forum, which brings together leading AI labs. This forum focuses on developing technical guidelines and standards for frontier models. Microsoft also supports global efforts, such as those taking place at the United Nations (UN) and the OECD, to ensure that AI technology is governed responsibly.

Another crucial aspect highlighted by Microsoft is the importance of regulations to effectively manage the use and development of AI. They actively share their insights and experiences to help shape regulations that address the unique challenges posed by AI technology. Furthermore, Microsoft aims to build capacity for governments and industry regulators, enabling them to navigate the complex landscape of AI and ensure the adoption of responsible practices.

Microsoft emphasises the need for safeguards at both the model and application levels of AI development. The responsible development of AI models includes considering ethical considerations and ensuring that the model meets the necessary requirements. However, Microsoft acknowledges that even if the model is developed responsibly, there is still a risk if the application level lacks proper safeguards. Therefore, they stress the importance of incorporating safeguards throughout the entire AI development process.

Microsoft also supports global governance for AI and advocates for a representative process in developing standards. They believe that a global governance regime should aim for a framework that includes standard setting, consensus on risk assessment and mitigation, and infrastructure building. Microsoft cites the International Civil Aviation Organization and the Intergovernmental Panel on Climate Change as potential models for governance, highlighting the importance of collaborative and inclusive approaches to effectively govern AI.

In conclusion, Microsoft takes the responsible use of AI technology seriously, as evident through their comprehensive responsible AI programme, active participation in AI governance discussions, support for global initiatives, and commitment to shaping regulations. They emphasise the need for safeguards at both the model and application levels of AI development and advocate for global governance that is representative, consensus-driven, and infrastructure-focused. Through their efforts, Microsoft aims to ensure that AI technology is harnessed responsibly and ethically, promoting positive societal impact.

Clara Neppel

During the discussion, the speakers highlighted the ethical challenges associated with technology development. They emphasised the need for responsibility and the embedding of values and business models into technology. IEEE, with its constituency of 400,000 members, was recognised as playing a significant role in addressing these challenges.

Transparency, value-based design, and bias in AI were identified as crucial areas of concern. IEEE has been actively engaging with regulatory bodies to develop socio-technical standards in these areas. They have already established a standard that defines transparency and another standard focused on value-based design. These efforts aim to ensure that AI systems are accountable, fair, and free from bias.

The importance of standards in complementing regulation and bringing interoperability in regulatory requirements was emphasised. IEEE has been involved in discussions with experts to address the technical challenges related to AI. An example was provided in the form of the UK’s Children’s Act, which was complemented by an IEEE standard on age-appropriate design. This highlights how standards can play a crucial role in ensuring compliance and interoperability within regulatory frameworks.

Capacity building for AI certification was also discussed as an essential component. IEEE has trained over 100 individuals for AI certification and is also working on training certification bodies to carry out assessments. This capacity building process ensures that individuals and organisations possess the necessary skills and knowledge to navigate the complex landscape of AI and contribute to its responsible development and deployment.

The panel also explored the role of the private sector in protecting democracy and the rule of law. One speaker argued that it is not the responsibility of the private sector to safeguard these fundamental principles. However, another speaker highlighted the need for legal certainty, which can only be provided through regulations, in upholding the rule of law. This debate prompts further reflection on the appropriate roles and responsibilities of different societal actors in maintaining democratic values and institutions.

The negative impact of uncertainty on the private sector was acknowledged. The uncertain environment poses challenges for businesses and impedes economic growth and job creation. This concern underscores the need for stability and predictability to support a thriving and sustainable private sector.

Lastly, the importance of feedback loops and common standards in AI was emphasised. This includes ensuring that feedback is taken into account to improve and retrain AI systems. Drawing from lessons learned in the aviation industry, the development of benchmarking and common standards was seen as vital for enabling efficient and effective collaboration across different AI systems and applications.

In conclusion, the speakers underscored the importance of addressing ethical challenges in technology development, specifically in the context of AI. IEEE’s involvement in shaping socio-technical standards, capacity building for AI certification, and the need for transparency, value-based design, and addressing bias were highlighted. The role of standards in complementing regulation and promoting interoperability was emphasised, along with the necessity of legal certainty to uphold democracy and the rule of law. The challenges posed by uncertainty in the private sector and the significance of feedback loops and common standards in AI were also acknowledged. These insights contribute to the ongoing discourse surrounding the responsible development and deployment of technology.

Maria Paz Canales

The discussion on artificial intelligence (AI) governance highlighted several key points. Firstly, there is a need for a clearer understanding and clarification of AI governance. This involves the participation of various actors, including civil society organizations and communities that are directly affected by AI. Civil society organizations require a better understanding of AI risk areas, as well as effective strategies for AI implementation.

Another important point is the evaluation of AI’s impact on rights, with a specific focus on inclusiveness. It is essential to assess the effect of AI on civil, political, economic, social, and cultural rights. Unfortunately, the perspectives of communities impacted by AI are often excluded in the assessment of AI risks. Therefore, it is necessary to include the viewpoints and experiences of these communities to comprehensively evaluate the impact of AI. This inclusive approach to AI impact assessment can lead to the development of responsible and trustworthy technology.

The discussion also emphasized the importance of education and capacity building in understanding AI’s impact. The public cannot fully comprehend the consequences of AI without being adequately educated about the subject in a concrete and understandable way. Therefore, it is crucial to provide not only technical language but also information on how AI impacts daily life and basic rights. By enhancing education and capacity building, individuals can better grasp the implications and intricacies of AI technology.

Furthermore, it was highlighted that there should be some level of complementarity between voluntary standards and legal frameworks in AI governance. This includes ensuring responsibility at different levels, from the design stage to the implementation and functioning of AI systems. The relationship between voluntary standards and legal frameworks must be carefully balanced to create an effective governance structure for AI.

In addition, the discussion underscored the importance of accounting for shared responsibility within the legal framework. It is crucial to establish effective communication between the different operators involved in the production and use of AI. This communication should adhere to competition rules and intellectual property regulations, avoiding any violations. By accounting for shared responsibility, the legal framework can ensure ethical and responsible AI governance.

Lastly, the discussion emphasized the need for a bottom-up approach in AI governance. This approach involves the active participation of societies and different stakeholders at both the local and global levels. Geopolitically, there is a need to hear more about experiences and perspectives from various stakeholders in global-level governance discussions. By adopting a bottom-up approach, AI governance can become more democratic, inclusive, and representative of the diverse needs and interests of stakeholders.

In conclusion, the discussion on AI governance highlighted the importance of a clearer understanding and clarification of AI governance, inclusiveness in impact assessment, education and capacity building, complementarity between voluntary standards and legal frameworks, the consideration of shared responsibility, and a bottom-up approach in AI governance. By addressing these aspects, it is possible to develop responsible and trustworthy AI technology to benefit society as a whole.

Thomas Schneider

The topic of AI regulation is currently being discussed in the context of its application. The argument put forth is that instead of regulating AI as a tool, it should be regulated in consideration of how it is used. This means that regulations should be tailored to address the specific applications and potential risks of AI, rather than imposing blanket regulations on all AI technologies. It is believed that this approach will allow for a more nuanced and effective regulation of AI.

Voluntary commitment in AI regulations is seen as an effective approach, provided that the right incentives are in place. This means that instead of enforcing compulsory regulations, which can often be complicated and unworkable, voluntary agreements can be more successful. By providing incentives for AI developers and users to adhere to certain standards and guidelines, it is believed that a more cooperative and collaborative approach can be fostered, ensuring responsible and ethical use of AI technology.

The Council of Europe is currently developing the first binding convention on AI and human rights, which is seen as a significant step forward. This intergovernmental agreement aims to commit states to uphold AI principles based on the norms of human rights, democracy, and the rule of law. The convention is intended not only to ensure the protection of fundamental rights in the context of AI, but also to create interoperable legal systems across countries. This represents a significant development in the global governance of AI and the protection of human rights.

The need for agreement on fundamental values is also highlighted in the discussions on AI regulation. It is essential to have a consensus on how to respect human dignity and ensure that technological advancements are made while upholding and respecting human rights. This ensures that AI development and deployment align with society’s values and principles.

Addressing legal uncertainties and tackling new challenges is another important aspect of AI regulation. As AI technologies continue to evolve, it is necessary to identify legal uncertainties and clarify them to ensure a clear and coherent regulatory framework. Breaking down the new elements and challenges associated with AI is crucial to ensuring that regulations are effective and comprehensive.

In solving the problems related to AI regulation, it is emphasized that using the best tools and methods is essential. Different problems may require different approaches, with some methods being faster but less sustainable, while others may be more sustainable but take longer to implement. By utilizing a mix of tools and methods, it is possible to effectively address identified issues.

Stakeholder cooperation is also considered to be of utmost importance in the realm of AI regulation. All stakeholders, including governments, businesses, researchers, and civil society, need to continue to engage and cooperate with each other, leveraging their respective roles. This collaboration ensures that diverse perspectives and expertise are taken into account when formulating regulations, thereby increasing the chances of effective and balanced AI regulation.

However, there is also opposition to the creation of burdensome bureaucracy in the process of regulating AI. Efforts should be made to clarify issues and address challenges without adding unnecessary administrative complexity. It is crucial to strike a balance between ensuring responsible and ethical use of AI technology and avoiding excessive burdens on AI developers and users.

In conclusion, the discussions on AI regulation centre on the need to regulate AI in consideration of its application, rather than treating it as a tool. Voluntary commitments, alongside instruments such as the binding convention being developed by the Council of Europe, are seen as effective approaches, provided the right incentives are in place. Agreement on fundamental values, addressing legal uncertainties, and stakeholder cooperation are crucial aspects of AI regulation. It is important to strike a balance between effective regulation and avoiding burdensome bureaucracy.

Suzanne Akkabaoui

Egypt has developed a comprehensive national AI strategy aimed at fostering the growth of an AI industry. The strategy is based on four pillars: AI for government, AI for development, capacity building, and international relations. It focuses on leveraging AI technologies to drive innovation, improve governance, and address societal challenges.

Under the AI for government pillar, Egypt aims to enhance the effectiveness of public administration by adopting AI technologies. This includes streamlining administrative processes, improving decision-making, and delivering efficient government services.

The AI for development pillar highlights Egypt’s commitment to utilizing AI as a catalyst for economic growth and social development. The strategy focuses on promoting AI-driven innovation, entrepreneurship, and tackling critical issues such as poverty, hunger, and inequality.

Capacity building is prioritized in Egypt’s AI strategy to develop a skilled workforce. The country invests in AI education, training programs, research, and collaboration between academia, industry, and government.

International cooperation is emphasized to exchange knowledge, share best practices, and establish standardized approaches to AI governance. Egypt actively participates in global discussions on AI policies and practices as a member of the OECD AI network.

To ensure responsible AI deployment, Egypt has issued a charter that provides guidelines for promoting citizen well-being and aligning with national goals. These guidelines address aspects such as robustness, security, safety, and social impact assessments.

Egypt also recognizes the importance of understanding cultural differences and bridging gaps for technological advancements. The country aims to address cultural gaps and promote inclusivity, ensuring that the benefits of AI are accessible to all segments of society.

Overall, Egypt’s AI strategy demonstrates a commitment to creating an AI industry and leveraging AI for governance, development, capacity building, and international cooperation. The strategy aims to foster responsible and inclusive progress through AI technologies.

Moderator

In the analysis, several speakers discuss various aspects of AI and its impact on different sectors and societies. One key point raised is the need for more machines, especially AI, in Japan to sustain its ageing economy. With Japan facing social problems related to an ageing population, the introduction of more people and machines is deemed necessary. However, it is interesting to note that while Japan recognises the importance of AI, they believe that its opportunities and possibilities should be prioritised over legislation. They view AI as a solution rather than a problem and want to see more of what AI can do for their society before introducing any regulations.

Furthermore, the G7 delegates are focused on creating a report that specifically examines the risks, challenges, and opportunities of new technology, particularly generative AI. They have sought support from the OECD in summarising this report. This highlights the importance of international cooperation in addressing the impact of AI.

Egypt also plays a significant role in the AI discussion. The country has a national AI strategy that seeks to create an AI industry while emphasising that AI should enhance human labour rather than replace it. Egypt has published an AI charter for responsible AI and is a member of the OECD AI network. This showcases the importance of aligning national strategies with global initiatives and fostering regional and international collaborations.

Microsoft is another notable player in the field of AI. The company is committed to developing AI technology responsibly. It has implemented a responsible AI standard based on OECD principles, which is shared externally for critique and improvement. Microsoft actively engages in global governance conversations, particularly through the Frontier Model Forum, where they accelerate work on technical best practices. Their contributions highlight the importance of private sector involvement in governance discussions.

UNESCO has made significant contributions to the AI discourse by developing a recommendation on the ethics of AI. The recommendation was developed through a two-year multi-stakeholder process and adopted by 193 countries. It provides a clear indication of ethical values and principles that should guide AI development and usage. Furthermore, UNESCO is actively working on capacity building to equip governments and organizations with the necessary skills to implement AI systems ethically.

In terms of addressing concerns and ensuring inclusivity, it is highlighted that AI bias can be addressed even before AI regulations are in place. Existing human rights frameworks and data protection laws can start to address challenges related to bias, discrimination, and privacy. For example, UNESCO has been providing knowledge on these issues to judicial operators to equip them with the necessary understanding of AI and the rule of law. Additionally, discussions around AI governance emphasise the need for clarity in frameworks and the inclusion of individuals who are directly impacted by AI technologies.

The analysis also suggests that responsible development, governance, regulation, and capacity building should be multi-stakeholder and cooperative processes. All sectors need to be involved in the conversation and implementation of AI initiatives to ensure effective and inclusive outcomes.

An interesting observation is the need for a balance between voluntary standards and legal frameworks. Complementarity is needed between these two approaches, especially in the design, implementation, and use of AI systems. Furthermore, the analysis highlights the importance of a bottom-up approach in global governance, taking into account diverse stakeholders and global geopolitical contexts. By incorporating global experiences from different stakeholders, risks can be identified, and relevant elements can be considered in local contexts.

Overall, the analysis provides insights into various perspectives on AI and the importance of responsible development, global collaboration, and inclusive policies in shaping the future of AI.

Set Center

AI regulation needs to keep up with the rapid pace of technological advancements, as there is a perceived inadequacy of government response in this area. The tension between speed and regulation is particularly evident in the case of AI. The argument put forth is that regulations should be able to adapt and respond quickly to the ever-changing landscape of AI technology.

On the other hand, it is important to have risk frameworks in place to address the potential risks associated with AI. The United States has taken steps in this direction by introducing the AI Bill of Rights and Risk Management Framework. These foundational documents have been formulated with the contributions of 240 organizations and cover the entire lifecycle of AI. The multi-stakeholder approach ensures that various perspectives are considered in managing the risks posed by AI.

Technological advancements, such as the development of foundation models, have ushered in a new era of AI. Leading companies in this field are primarily located in the United States. This highlights the influence and impact of US-based companies on the development and deployment of AI technologies worldwide. The emergence of foundation models has disrupted the technology landscape, showcasing the potential and capabilities of AI systems.

To address the challenges associated with rapid AI evolution, the implementation of voluntary commitments has been proposed. The White House has devised a framework called ‘Voluntary Commitments’ to enhance responsible management of AI systems. This framework includes elements such as red teaming, information sharing, basic cybersecurity measures, public transparency, and disclosure. Its objective is to build trust and security amidst the fast-paced evolution of AI technologies.

In conclusion, it is crucial for AI regulation to keep pace with the rapid advancements in technology. The perceived inadequacy of government response highlights the need for agile and adaptive regulations. Additionally, risk frameworks, such as the AI Bill of Rights and Risk Management Framework, are important in managing the potential risks associated with AI. The emergence of technologies like foundation models has brought about a new era of AI, with leading companies based in the US driving innovation in this field. The implementation of voluntary commitments, as proposed by the White House, aims to foster trust and security in the ever-evolving landscape of AI technologies.

Audience

The discussions at the event highlighted the challenges that arise in capacity building due to time and financial commitments. Ansgar Kuhn from EY raised the problem of time commitments related to the capacity building process, which may bring additional financial burdens for various parties. Small to Medium Enterprises (SMEs) and civil society organizations might not be able to afford the cost of someone who isn’t directly contributing to their main products or services. Academics may also struggle to get academic credit for engaging in this kind of process.

To address these financial and time commitment issues in capacity building, participants stressed the importance of finding solutions. Ansgar Kuhn specifically asked for suggestions to tackle this problem, underscoring the need to explore feasible strategies to alleviate the burden that time and financial commitments place on different stakeholders.

There were also concerns raised about the implementation of responsible AI, particularly regarding the choice between system level guardrails and model level guardrails. The discussion highlighted worries about tech vendors providing unsafe models if responsibility for responsible AI is pushed to the system level. This sparked a debate about the best approach to implement responsible AI and the potential trade-offs associated with system level versus model level guardrails.

Moreover, the event touched upon the Hiroshima process and the expectation of a principle-based approach to AI. The previous G20 process, which focused on creating data free flow with trust, served as a reference point for the discussion. There was a question about the need for a principle approach for AI, suggesting the desire to establish ethical guidelines and principles to guide the development and deployment of AI technologies.

In conclusion, the discussions shed light on the challenges posed by time and financial commitments in capacity building and the need for solutions to mitigate these issues. Concerns about system level versus model level guardrails in responsible AI implementation emphasized the importance of balancing safety and innovation in AI. The desire for a principle-based approach to AI and the establishment of ethical guidelines were also highlighted.

Prateek Sibal

UNESCO has developed a comprehensive recommendation on the ethics of AI, achieved through a rigorous multi-stakeholder process. Over two years, 24 global experts collaborated on the initial draft, which then underwent around 200 hours of intergovernmental negotiations. In 2021, all 193 member countries adopted the recommendation, emphasizing the global consensus on addressing ethical concerns surrounding AI.

The recommendation includes values of human rights, inclusivity, and sustainability, serving as a guide for developers and users. It emphasizes transparency and explainability, ensuring AI systems are clear and understandable.

UNESCO is implementing the recommendation through various tools, forums, and initiatives. This includes a readiness assessment methodology, a Global Forum on the Ethics of AI, and an ethical impact assessment tool for governments and companies procuring AI systems.

The role of AI in society is acknowledged, with an example of a robot assisting teachers potentially influencing learning norms. Ethical viewpoints are crucial to align AI with societal expectations.

Prateek Sibal advocates for inclusive multi-stakeholder conversations around AI, emphasizing the importance of awareness, accessibility, and sensitivity towards different sectors. He suggests financial compensation to facilitate civil society engagement.

In conclusion, UNESCO’s recommendation on AI ethics provides valuable guidelines for responsible AI development. Their commitment to implementation and inclusive dialogue strengthens the global effort to navigate ethical challenges presented by AI.

Galia

The speakers in the discussion highlight the critical role of global governance, stakeholder engagement, and value alignment in working towards SDG 16, which focuses on Peace, Justice, and Strong Institutions. They address the challenges faced in implementing these efforts and stress the importance of establishing credible value alignment.

Galia, one of the speakers, emphasizes the mapping exercises conducted with the OECD regarding risk assessment. This suggests that the speakers are actively involved in assessing potential risks and vulnerabilities in the context of global governance. The mention of these mapping exercises indicates that concrete steps are being taken to identify and address potential obstacles.

Both speakers agree that stakeholder engagement is essential in complementing global governance. Galia specifically highlights the significance of ensuring alignment with values at a global level. This implies that involving various stakeholders and aligning their interests and values is crucial to achieving successful global governance. This collaborative approach allows for a wider range of perspectives and enables a more inclusive decision-making process.

The sentiment expressed by the speakers is positive, indicating their optimism and belief in overcoming the challenges associated with implementation. Their focus on credible value alignment suggests that they recognize the importance of ensuring that the principles and values underpinning global governance are widely accepted and respected. By emphasizing stakeholder engagement and value alignment, the speakers underscore the need for a holistic approach that goes beyond mere top-down control and incorporates diverse perspectives.

In summary, the discussion emphasizes the vital aspects of global governance, stakeholder engagement, and value alignment in achieving SDG 16. The speakers’ identification of challenges and their emphasis on credible value alignment demonstrate a proactive and thoughtful approach. Their mention of the OECD mapping exercises also indicates a commitment to assessing risks and vulnerabilities. Overall, the analysis underscores the significance of collaboration and the pursuit of shared values in global governance.

Nobuhisa Nishigata

Japan has emerged as a frontrunner in the discussions surrounding artificial intelligence (AI) during the G7 meeting. Japan proposed the inclusion of AI as a topic for discussion at the G7 meeting, and this proposal was met with enthusiasm by the other member countries. Consequently, the Japanese government has requested the OECD to continue the work on AI further. This indicates the recognition and value placed by the G7 nations on AI and its potential impact on various aspects of society.

While Japan is proactive in advocating for AI discussions, it adopts a cautious approach towards the introduction of legislation for AI. Japan believes that it is premature to implement legislation specifically tailored for AI at this stage. Nevertheless, Japan acknowledges and respects the efforts made by the European Union in this regard. This perspective highlights Japan’s pragmatic approach towards ensuring that any legislation around AI is well-informed and takes into account the potential benefits and challenges presented by this emerging technology.

Underlining its commitment to fostering cooperation and setting standards in AI, Japan has established the ‘Hiroshima AI process’. This initiative aims to develop a code of conduct and encourage project-based collaboration in the field of AI. Japan’s engagement traces back to G7 discussions that began in 2016 and has seen a shift from voluntary commitment to government-initiated inclusive dialogue among the G7 nations. Japan is pleased with the progress made in the Hiroshima process and the inclusive dialogue it has facilitated, and the process has continued to move forward successfully.

Japan recognises the immense potential of AI technology to serve as a catalyst for economic growth and improve everyday life. It believes that AI has the ability to support various aspects of the economy and enhance daily activities. This positive outlook reinforces Japan’s commitment to harnessing the benefits of AI and ensuring its responsible and sustainable integration into society.

In conclusion, Japan has taken a leading role in driving discussions on AI within the G7, with its proposal being well-received by other member countries. While cautious about introducing legislation for AI, Japan appreciates the efforts made by the EU in this regard. The establishment of the ‘Hiroshima AI process’ showcases Japan’s commitment to setting standards and fostering cooperation in AI. Overall, Japan is optimistic about the potential of AI to generate positive outcomes for the economy and society as a whole.

Session transcript

Moderator:
[audio not transcribed]

Galia:
[audio not transcribed]

Moderator:
[audio not transcribed]

Nobuhisa Nishigata:
[beginning of remarks not transcribed] … et cetera. And then now we had the Hiroshima AI process and the second round of our chair in the G7 this year. So then we had many things happening, like, for example, in 2019, the G7, under the French chair, introduced GPAI, the Global Partnership on AI. And the same year, Japan hosted the G20 meeting, and the G20 agreed on the G20 AI Principles, but it is essentially the same text as the OECD’s principles. Then we had some developments afterwards, and then it comes to this year, to 2023. So next slide, please. Next one, yes. So it’s more kind of history now. It’s seven years ago, the photo from Takamatsu. And then we had some discussion. Next slide, please. It’s going to show up some detail, yes. So it is the first time, at that time, that the minister, Ms. Takaichi, made the proposals and led the discussions among the G7 to talk about AI: what the risks are, what the opportunities are, and then what next. And then Japan wanted to have some common understanding, common principles to cope with this new technology at that time. So the bottom line, maybe touching upon the relationship between innovation and regulation, those kinds of things, just think about where Japan is right now. We are facing several big social problems, such as aging, and we need more people to sustain our economy. And then we need more machines. So Japan is, I think, in the kind of aged position where we need more machines to help us sustain the economy, the business, et cetera, or even daily life. So we are very much friendly to AI, but of course, we recognize some uncertainty in this technology. So then we started a discussion at the G7, kind of trying to see how they felt about AI at the time. Then fortunately, our proposal on the AI discussion was very well received by the members of the G7.
So the Japanese government decided to ask the OECD to continue the work further. Then they came out with the OECD principles in 2019. So that’s kind of the beginning, the whole history of what we have now. And then the next slide, please. So that’s just the introduction. Since Galia didn’t have the deck, I can do it for her. That’s the list of OECD principles. It’s very simple, 10 principles. The first five are more like the value-based principles, and the other five are more like recommendations to the policy makers of the governments. Just the 10 things. So the next one, please. And this year, as the chair of the G7, we hosted the first digital and tech ministers meeting in Takasaki in Japan. And it’s not only an AI ministers meeting, so we have like eight themes, and one of them is of course about AI. The third theme is responsible AI and global AI governance. And actually the ministers discussed more about the interoperability of AI governance. Looking at, you know, the European countries working hard to pass the AI Act. Of course, we know it. But on the other hand, Japan is still not one to introduce legislation over AI technologies yet. We would like more to see, you know, the opportunities or possibilities of this technology. So from Japan’s perspective, it’s too early to introduce legislation over AI. But on the other hand, we respect what the EU is doing. So then we tried to start the discussion about international interoperability at the governance level, so that, you know, we don’t put more burden on the business side. I mean, of course, multinational businesses should be able to work everywhere in this globe. So that was the thinking about the proposal. Then the next slide, please. Yes, thank you. Then, before the ministerial meeting, we were thinking more about the interoperability.
But now sometimes these kinds of things happen in the G7, like escalation to the leaders. So then what happened is the leaders agreed to create or establish what we call the Hiroshima AI process. And the discussion is more focused on generative AI and foundation models, the new technology. And then now again, we are asking the OECD for some support to summarize the report for the stocktaking and the risks and the challenges and the opportunities of the new technology, particularly generative AI. Then, of course, the goal is the development of a code of conduct for organizations, or project-based cooperation to support the development of new responsible AI tools and best practices. Then you can see the link here. This is the kind of ministerial declaration in September. And then the G7 delegates are working hard to compile the report, which is mandated to be reported to the leaders by the end of this year. And do we have more? Oh, no. So that’s about it. So in the end, coming back to the point from the moderator, Japan wants more to see what the new technology can do, particularly for our society. I mean, not only Japan, but also the whole world. So thank you very much.

Moderator:
Thank you. That was a really good overview of what influences, perhaps, the space that a national government or an organization has when we’re thinking about how we set international principles and guidelines. How do we make them? How do we bring them home? And then how do we bring our own issues that we have in our own societies and economies back into these spaces to shape some of those responses? So that was a very nice full circle there. To continue on this track of how national governments deal with the international policy space, and how they bring their own opinions into it, we’re going to move from Japan to Egypt, from the room to online. And we’re going to hear from Ms. Suzanne Akkabaoui. Suzanne, I hope you are well connected and you can hear us. We can see you and we can hear you. The floor is yours. Perfect.

Suzanne Akkabaoui:
Thank you so much. Thank you for the opportunity to take part in this very interesting discussion, and with such an esteemed panel of guests. I’m Suzanne Akkabaoui. I am an advisor to the Minister of ICT on data governance. So this speaks a bit about how we are moving towards creating an institutional, legal, and technical infrastructure that is in line with the technological advancements that are happening, and clearly in relation to AI as well. So just to give you a bit of background, Egypt has a national AI strategy that aims to create an AI industry in Egypt. It also wishes to exploit AI technology to serve Egypt’s development goals. The AI strategy was built on four pillars: AI for government, AI for development, capacity building, and international relations. It also has four enablers: governance, data, ecosystem, and infrastructure. The strategy was drafted according to a model that promotes effective partnership between the government and the private sector, in a way that creates a dynamic work environment, supports building digital Egypt, and achieves digital transformation led by AI applications. One of the main principles of the strategy is that AI should enhance human labor and not replace it. Unlike our friends in Japan, we are a very young society, and the majority of the population is between 16 and 45 years of age. So we face challenges with respect to the acceptance of AI and in showing that it has positive aspects other than taking away jobs from this young population. So one of our main principles for the strategy is that AI should enhance human labor and not replace it. And this requires that we conduct a thorough impact assessment for each AI product, focusing on whether or not AI is the best solution to the problem and what the expected social and economic impacts of each new AI system are.
The strategy also emphasizes the importance of fostering regional and international cooperation. As mentioned earlier, we are members of the OECD AI network, one of the 70 countries that were mentioned earlier. Recently we have published a charter for responsible AI. The charter is divided into two parts: general guidelines and implementation guidelines. The general guidelines give a layer of detail about how to implement the principles that were in the strategy. So in the general guidelines, we have a primary goal of using AI in government, and the purpose behind it is to promote the well-being of citizens and to combat poverty, hunger, inequality, et cetera, which is in line with the human-centeredness principle. The general guidelines also provide that any end user has the fundamental right to know that they are using and interacting with an AI system. Again, a reaffirmation that no individual should be harmed by the introduction of an AI system, especially with respect to job creation. There is a list of general guidelines present in the document, all in line with the OECD principles. The implementation guidelines, which are the second part of the charter, provide that AI should be robust, secure, and safe throughout the entire life cycle, that any AI project should be preceded by a pilot and a proof of concept, and that additional measures should be in place in the case of sensitive and mission-critical AI applications. So in short, this is where we stand. We are in line with the existing principles, guidelines, and frameworks that were decided on the international arena, and we have a clear understanding of our cultural differences, trying to find ways to bridge the cultural and sociological gaps that come with the technological advance.

Moderator:
Thank you so much, Suzanne, for sharing all of that and really emphasizing how AI approaches, policies, and frameworks need to be responsive to national context, cultures, local expectations while aligning with global values and also making sure that policies are interoperable with some of the other countries and regional and global initiatives so that we can manage towards truly global governance goals that we have. So as we’ve heard from our speakers from the national government side, I’m going to turn to some of our non-governmental stakeholders, and I’m going to go full circle and go back to international initiatives at the end, giving a bit of a breathing time to our newest speaker who joined us. Welcome, Thomas. So I’m going to turn now to Mr. Owen Larter from Microsoft and ask you, now that you’ve heard from the international space a little bit and how national governments cope with this challenge, how does that happen in a private company? How do you implement this? How do you come up with some of your own? And how do you dialogue with these initiatives?

Owen Larter:
Fantastic. Hello, is this on? Can people hear me? Excellent, thank you. Good morning, everyone. My name is Owen Larter. I work on responsible AI public policy issues at Microsoft. It’s a pleasure to be here and a pleasure to be able to join such an esteemed panel. So we are enthusiastic at Microsoft about the potential of AI. We’re excited to see the way in which customers are already using our Microsoft AI co-pilots to be more productive in their day-to-day life. And I think more broadly, we see a huge amount of promise in this technology to help us better understand and manage complex systems, and in doing so, respond to major challenges, whether it’s in relation to healthcare or the climate or improving education for everyone. But there are clearly risks. We feel as a private sector company developing this technology that we have a real responsibility to lean in and make sure that the technology is developed in a way that is safe and trustworthy. So I sort of wanted to talk about three buckets of responsibilities that I view Microsoft as having to contribute to. The first one is to make sure that we’re developing this technology in a way that is responsible. And so we’ve been building out our responsible AI program for six years now. We’ve got over 350 people working right across the company on these issues from a real diversity of backgrounds, which we feel is very important. So we have people who are deep in the engineering space, research, legal, policy, people with sociological backgrounds, all coming together to work out how we identify AI risks and then put together a program internally that can help address them. So we’ve got sort of the core of our program, our responsible AI standard. This is an internal standard. It is based on the OECD principles. It’s based around making sure that people are upholding our AI principles at Microsoft. We’ve got tens of requirements across our 14 goals, and it really is a rule book. 
So anyone at Microsoft that is developing or deploying an AI system has to abide by this responsible AI standard. We’ve also shared this externally now, so anyone could go out and find it online if you type in Microsoft’s responsible AI standard. We think this is really important, A, to show that we’re doing the legwork here, and it’s not just nice words, but B, so that others can critique it, and build on it, and improve it as well. And then we’re building out the rest of our responsible AI infrastructure at Microsoft as well. So we have a sensitive uses team, so that anytime we’re developing a higher risk system, we bring greater scrutiny and apply additional safeguards. We have an AI red team that is a centralized team within the company and goes product to product before we release it, making sure that we’re evaluating it thoroughly and that we’re able to identify, measure, and mitigate any potential risks. That’s that first bucket around responsible development. I also think we as a company, and we as an industry, quite frankly, have a real responsibility to lean into governance discussions like this. So we have recently founded the Frontier Model Forum with a number of other leading AI labs. We are trying to accelerate work around technical best practice on frontier models in particular. So these are the really highly capable models that offer a lot of promise but also pose some very significant risks as well. We want to develop that best practice, we want to implement that ourselves as companies, but we also want to share that externally to inform conversations on governance.
And we’re really pleased to be able to engage internationally in global governance conversations, very supportive of the work that is going on at the UN, you know, the UN doing a very good job I think of catalyzing a globally representative conversation, UNESCO’s recommendation on the ethics of AI, very supportive of it, and of course all the technical work that is being done by the OECD in the background, very supportive of that as well. And I think the final responsibility we have is to lean in and help shape the development of regulation as well. So the self-regulatory steps that we’ve taken we feel are really important, but they are just the start. We do feel that this new technology will need new rules, and so we want to lean in, we want to share information about where the technology is going. It’s moving at a very, very fast pace, so how can we help others understand exactly the trajectory of the technology? We want to share what’s working in terms of how we’re identifying and mitigating risks internally, and also what’s not working, quite frankly. And then finally we want to help build capacity. I think this is going to be a really key issue to underpin the development of governance frameworks and regulation in the coming years. How do you make sure that governments have the capacity to develop viable, effective regulation, and then also, critically, how do you ensure that regulators have the capacity to understand how AI is going to impact their sector, whether it’s healthcare, whether it’s financial services, and be able to address the risks that it may pose? So I’ll stop there for now and pass it back to the chair.

Moderator:
Thank you, Owen. I think that the picture that it paints for me was very, very structured, so thank you for that. It always makes the moderator’s job easy when there are clear one, two, three, four points in a speech. What really strikes me from what you say is that through these steps of responsible development, governance, regulation, capacity building, multi-stakeholder cooperation between governments, private sector, civil society, the technical community, academics, research, it’s not one or the other. It’s not that one sector needs to do this and the other sector needs to do that, but we all need to be at the table at all of those levels to really make this work and then actually have the buy-in to be able to implement all of this when the rubber meets the road. You referenced how Microsoft dialogues with and supports some of the UN initiatives, so I think that was a good segue to turn to Pratik and ask how UNESCO thinks about all this. You’ve done a lot of work in coming up with the ethical guidelines on AI. You missed the beginning of the session. We’ve had a little poll here with the audience to see how familiar the audience is with some of the AI policy frameworks out there, and UNESCO is a close second after OECD, so there’s a good understanding of what is in there, I think, in the guidelines, but I think it would be great to hear a little bit about what you do and how it works when you actually try and implement this, what are the lessons that you’ve learned, and what the challenges are in actually bringing those global principles into the national level, building capacity as Owen mentioned, and how is that working?

Prateek Sibal:
And how much time do I have? Five minutes. Okay. Right. Thanks, and apologies for being late. There was a scheduling conflict and I was hosting another session. So just very briefly on the UNESCO recommendation on the ethics of AI, it was developed through a multi-stakeholder process over a period of two years. The recommendation itself, we had a group of 24 experts which were selected from around the world who prepared the first draft. This draft was widely consulted with different stakeholder groups in different regions, in different languages, and then the document went through an intergovernmental process of about 200 hours of negotiations. And then we had this as the first global normative instrument on artificial intelligence which was adopted in 2021 by 193 countries. As far as the structure of this recommendation goes, maybe it’s worth spending a little bit of time of why we are talking about ethics. And so when we are talking about technologies, there are different kind of views of how we see technology. One is a very deterministic view of technology, that we will have technology, it will guide our life and it will do things. Then we have a very instrumentalist view of technology which is like, oh, it’s just a tool and it’s up to us on how we use it and what we do with it. And then there’s a third view of technology which is kind of like technology is a mediating force in society. So not only is technology influencing our actions, but also we are influencing how technology is shaping the world. So let me give you an example. At a very micro level, we know speed breakers, right? They force us to slow down. It’s a technology which is embedded with a script. At a macro level, if in a classroom you have a teacher and then you put a robot there to assist in the teaching, won’t our ideas of what teaching and learning looks like, what is the role of a teacher in our world, also start shifting? So there’s a shift which will happen in terms of norms. 
Now when we talk about ethics, it’s not just about saying, okay, these are the principles. We need to go into why. Because companies, developers, all the people who are developing and using AI, per se, are embedding technology with certain scripts. And these scripts need to be informed by ethical values and principles that we want. And this is what the recommendation does. It talks about values of human rights. It talks about leaving no one behind. It talks about sustainability. And then it goes into articulating the principles around transparency, explainability. And once we talk about these principles, it gives a clear indication to the developers, users: okay, this is how the technology should interface with us. Now it goes further, to talk about the policy areas and what specifically needs to be done, for instance, in the domain of education, in the domain of communication and information. We have so much misinformation, disinformation going around. So it’s a very beautiful document, I would say. And I would invite you to look at it. Now how are we going to address the second part of your question, Dimya? How are we going about addressing the implementation part? Because that’s where the change would hopefully happen. The recommendation itself calls for the development of certain tools, a readiness assessment methodology which has been developed by UNESCO to look at where countries stand vis-a-vis their state of AI development, vis-a-vis the policy areas and so on in the recommendation. And this is ongoing in about 50 countries around the world. And next year, in February 2024, we’ll have the second Global Forum on the Ethics of AI, which will be a platform to learn from what’s coming up in different parts of the world. Another tool is an ethical impact assessment. And there are so many of these tools. And that is wonderful to have so many diverse perspectives.
This is really to guide companies, to guide governments who are procuring AI systems, on what ethical aspects you need to look at and, at each stage of the AI lifecycle, what you need to be concerned about. Going forward, I think capacity building was also mentioned by Owen. We don’t need to wait for these kinds of regulations to be put in place to start working on capacity building. As an example, we are working with the judiciary. And the judiciary can actually, even in a lot of countries where you don’t have AI regulation, and in most countries, actually, we don’t have any kind of AI regulation, rely on existing human rights frameworks or other laws like data protection laws to start addressing the challenges around bias, discrimination, or privacy, and so on. So at UNESCO, we have been working with the judiciary for over 10 years. And we have reached over 35,000 judicial operators over these 10 years in 160 countries on issues related to freedom of expression, access to information, safety of journalists. And in 2020, we started working on AI and the rule of law. And we now have a massive open online course, which was used by about 5,000 judicial operators. And by judicial operators I mean judges, lawyers, prosecutors, people working in legal administrations, on what the opportunities are of using AI in the judicial system. So use cases around case management, we caution them about predictive justice. But also, what are the broader human rights and legal implications of this technology? And how can they address those challenges? Because you will also start seeing binding judgments, and we are already seeing these come out. And we’re working specifically with regional human rights courts in Europe, in Latin America, in Africa, so that when we have those judgments coming out, it percolates down to the national level. And finally, I will mention the work that we’re doing around capacities for civil servants.
We keep on loading civil servants and governments with a lot of new work in complex, volatile, uncertain environments, and ask them to work on regulation and implementing it without really equipping them with the necessary skills. We saw the case in the Netherlands, and even the Robodebt scandal in Australia, where AI systems were used and thousands of people were deprived of public benefits, which had very serious implications. So these duty bearers, we need to work with them on strengthening their capacity. So we’ve developed an AI and digital transformation competency framework for civil servants. And in fact, this morning we were launching a dynamic coalition on digital capacity building, which will focus on capacities for civil servants. So I’ll stop here and happy to go on later.

Moderator:
Thank you so much, Pratik. There was a lot there to unpack, but what was particularly striking in your comments is that we don’t need to wait for AI regulation to start addressing some of those biases that we sometimes see coming out in AI systems. And I think that that is something to note here. We finally have a full panel, as you can see, we’ve expanded beyond the table. I’m sorry that most of you are sitting so uncomfortably. But yeah, we really packed everything in here. Welcome, Dr. Center, to the conversation. We’ve been really jumping between national and international frameworks, and how we set some of these guidelines and norms, how we implement them, and having this conversation. We started this morning with a poll on how aware some of the audience here is of some of the initiatives that are in place at national and international levels. We haven’t asked them about the White House framework, but we did ask them about the NIST risk management framework. And I have to say, that was the one that the audience was least aware of. It was a 3.9 awareness on a scale of 10. So with that introduction, not to say that there’s anything wrong with the framework, but it might need a little bit of a refresher for the audience on what the U.S. is doing. I think the audience is keen to hear from you on how the U.S. is positioning around AI governance and AI frameworks at the national and international level, and what are some of the approaches that you’re taking.

Seth Center:
Thank you, and I’m sorry I was late. I think your taxi must have been faster than mine coming from our previous event. So speed and regulation are in natural tension. I think we know that in an AI context. We know that in any technological governance framework. It’s most conspicuous right now because of the intensification of political and cultural attention on AI, and the seeming inadequacy of governments to meet the moment, or at least the perception that they’re not moving fast enough. And so I think that dynamic is exactly the right one to frame this session around. In the United States, we have two foundational documents that preceded the ChatGPT moment and came out in the last two years. One is the AI Bill of Rights, which provides a sociotechnical framework for dealing with automated systems that is sector agnostic. In other words, how do we think about a risk framework for any kind of automated system? And the other is the Risk Management Framework, which scored a 3.9. What was the highest score? 5.9, with the OECD. Well, if Audrey was here, we would come after her and go for a 5.9 next year. The Risk Management Framework shares some commonalities with the OECD framework insofar as it’s multi-stakeholder: 240 organizations contributed over 18 months. It was a rigorous effort to solicit views from industry and civil society to create a framework, all the way from users to developers across the entire AI life cycle, to manage risk. Those preceded the moment in which all of us are here filling these rooms to talk about AI. What happened, and I think this happened at a national level and a global level, is that many of us forgot all of the work that had been done prior to foundation models emerging, and all of the hard work and valuable work that had been done before that.
All of us lurched into a new political moment, a new cultural moment, precipitated by the belief that we’re in a new technological era, or at least a new era of AI. In the United States, obviously, a lot of the leading companies that were developing large language models, frontier models, are located there. We felt it was incumbent on the United States to move quickly. As many of you know, it is unlikely that we will rapidly move to legislation. I think that’s the case in many countries. We realized that the moment required action, and it required action that defined the problem in a new way and then set obligations around the developers of frontier models. Over the course of the spring and summer, the White House talked to these developers of frontier models, tried to understand and define the nature of the unique risks posed by these models as distinct from, or at least in addition to, the more basic, substantial but understood risks posed by AI that we’ve been talking about for several years, and then tried to create a technically informed framework for dealing with them. That emerged as something called the voluntary commitments, which companies have signed on to in the United States in two waves. Essentially, it asks companies to undertake a series of obligations to responsibly manage their AI systems in a secure, trustworthy way. What does it entail? I think what we would now consider fairly understandable basic steps at the level of principles, but when we get into the question of implementation, it gets quite complicated very quickly. The commitments essentially require things like red teaming, and information sharing among the frontier model developers, so that as they discover emergent risks, they can share them with each other and each company or developer will understand them.
It includes basic principles of cybersecurity and cyber hygiene, with the belief and logic that the model weights that essentially provide the power around these finished models are sufficiently important to protect, that companies need to treat them essentially as the crown jewels of their IP. It includes disclosure, public transparency and disclosure. In other words, if you think about a basic idea like a nutrition card for a food product, you would want to disclose the basic information about how a model’s been trained, how powerful it is, so everyone understands the power of the model itself. The idea is this combination of internal technical work and external transparency will generate the kind of trust and security that we need as these models continue to rapidly evolve, and prior to, or as a bridge to, a legal or regulatory framework in which we can deal with them in a more substantial way. This is a bridge. It’s a first step. I think if we were to solicit a poll, I think one thing people have focused on is the voluntary aspect, as opposed to the technical criteria underneath the voluntary aspect. If you actually look at the technical criteria, it’s quite a serious effort by the engineers, computer scientists, designers to come to terms with what they’re building, and I would suggest it probably represents the best technical framework for thinking about the era we’re moving into, even if I think there’s going to be extraordinary diversity in the kinds of legal approaches we take in the coming era.

Moderator:
Thank you so much for that, and thank you for joining us in your very busy schedule. Maybe we should ask the question again at the end and see if this improves the numbers on understanding of some of the frameworks. But I think what’s very useful for us to know is that commitments and work on this don’t just come out of the blue; they are really built on long-standing conversations around the topic, beyond the boom moment when AI became so user-friendly that all of us are now using it on our phones every day. It is based on considerations and conversations that have been going on for a long while, and it brings together not just policymakers or the policy teams in companies, but the engineers and those who do some of the technical work and set the technical standards as well. So with that, I think that’s a good segue to turn to Clara and ask: how does this work in a global standard setting body like IEEE, and how do you think about some of the AI challenges in your work?

Clara Neppel:
Thank you for having me here, it’s a pleasure. So as it comes to the polls, the question is of course what is our role here? And I would like to maybe echo what we just heard, that it needs both a bottom-up approach and a top-down approach. IEEE started probably at the same time as Japan, already in 2016, to think about what are the ethical challenges of technology. And it came because we have a constituency of 400,000 members, we are the largest technical association, so we’re not only a standard setting body, but an association of technologists. So there was this realization that there is a responsibility in creating this technology, which, I really have to say, is not neutral; it really embodies the values and business models of those that create it. And this is how this IEEE initiative on Ethically Aligned Design came into existence. It was about identifying issues and how we can deal with them, we as technologists, and how we can deal with them in discussions with regulatory bodies. So what happened since then is that we developed a set of socio-technical standards. We are already developing a lot of technical standards, including the Wi-Fi standard that you are using right now. But when it comes to socio-technical standards, it is a completely different discussion, because, as we heard here, we need to have this multi-stakeholder approach. So we now have in our standard setting bodies people that were not accustomed to this, and we had to develop a common terminology, of course, when it comes to value-based design. When it comes to transparency, everybody agrees that transparency is important, but what does it actually mean? What, first of all, do we want to achieve, and how are we actually achieving it at the technical level? So among the set of standards, as I said, one is on value-based design.
It is really about identifying what the values and expectations of the stakeholders of an AI system are for a given context, how you actually prioritize them, because you cannot have everything built into the system, and how you translate them into concrete system requirements. And I think it’s also about practice and experience. Since then, we have had several projects, including public-private partnership projects with UNICEF, but also including industry, which prove that it is a very valuable standard, giving developers and system designers a methodology which actually influences the outcome of the systems. You actually come up with a different system, one which takes these values into account. As I already said, when it comes to transparency, we have a standard which defines transparency. There are different levels of transparency, so it gives a common terminology when we speak about these terms. The same goes for bias. We discuss bias, and that we have to exclude bias and deal with it. But again, we don’t know what bias is for a certain context. For instance, when we are going to a medical application, we actually want to have a certain bias. We want to have, for instance, different treatment of women and men, because we have different symptoms. So we need to have that kind of context sensitivity when it comes to certain things that we all agree on. So this was the bottom-up approach. And the question is, how does it come together with a top-down approach, with all these different frameworks that we also heard about, where the poll very clearly showed that people know more or less about them? So we engaged from the beginning with the OECD, with the EU at the high-level expert group, with the Council of Europe.
So when it comes to the question that industry of course has, about interoperability of regulatory requirements, I think the standards will play a very important role, because they give, let’s say, the very practical approach for how to move from principles to practice. And we are part of the discussion. We are part of the network of experts, giving, let’s say, the technical background on what the challenges are, what is possible to implement, what is realistic, and also reflecting it in our standards. And I wanted to give an example of how this complementarity between regulation and standards can work. When it comes to children, there is a code in the UK, the Children’s Code, and this is something where, of course, technologists cannot decide for themselves what is the right way to do it. I think this is something that needs to be done in a democratic process. So for the Children’s Code, everybody agrees we need to protect children. But again, what does it mean on the technical level? And we have a standard, which is called age-appropriate design, which complements that regulation and gives very clear guidance on how to do this age verification and so on. So the bottom-up approach and the top-down approach need to come together. Another example, which we are working on at the EU level, is how to map the AI Act requirements to the standards. There is also a report, which came out from the Joint Research Centre of the European Union, which made this mapping for IEEE standards and actually said that they really fill this gap when it comes to ethics, because a lot of standards are still, of course, focusing on a more technical level, which is also important, but you need both. Yes, I would end with capacity building. I think that is very important for our certification process, which complements the standards.
We started building up an ecosystem, an ecosystem of assessors. We have already trained more than 100 people. And what is also important is that we need certification of products and services, but also certification of people. So we need these assessors, who are already in our registries, and we also need certification bodies. So we actually trained some people from certification bodies to be able to make these assessments as well. So I think that these are the things that we will continue doing. Thank you.

Moderator:
Thank you so much, Clara. And thank you for sharing all of that from your work and making that connection between the bottom-up and top-down approaches. I guess capacity building is also in the middle or around all of this to make sure it all fits and that we all have all the necessary skills and capacities to deal with that. I’m going to turn to our last two speakers for today. So I would like to encourage you to please go online and type your questions or comments into the chat box so that we can weave that into the discussion, as I will give a last round to the panelists to react to one another and to your questions. I’d love to see some of them pop up here. The chat has been a bit silent, but do go online and share your questions so that we make sure that we weave in some of the perspectives from the room as well. And with that, I’m going to turn to Maria Paz and ask, you are here on the panel representing civil society. You work a lot on some of these issues that we’ve mentioned, in particular human rights. How does that come in to the discussion from your perspective? What are some of the challenges and opportunities that we have here?

Maria Paz Canales:
Thank you. Thank you very much for the invitation to be here. I think that the benefit of being almost at the end of the round of speakers is that I can build on top of what has already been said and add the component that is specific to the perspective of the civil society organization that we work in, in the space of digital governance. And I am very pleased by many concepts that I have heard from different actors here around the table, starting with the concept of a sociotechnical approach to artificial intelligence governance. And I would like to believe that it’s technology governance at large, not only for artificial intelligence. I think one of the other things that has resonated in all the interventions that I have heard so far is the need for clarification in terms of the frameworks, and how they translate into concrete ways to engage with the manner in which the design, but also the deployment and adoption in many countries, of artificial intelligence technologies is happening. So on top of some of the elements that Clara was mentioning, for example, about what we mean by transparency and what we mean by bias, I think we also need a clearer understanding, in terms of frameworks, about what kind of risk we are addressing. What are we really considering a risk? From the organization that I work in and represent, we try to infuse this conversation with the human rights approach. And we consider that the risk we are addressing when we talk about the impact of artificial intelligence is the huge and wide spectrum of impact that this technology is having on the daily life of every person around the world that is touched by it. It touches the exercise of civil and political rights in a very concrete manner, but also, increasingly, the exercise of economic, social and cultural rights.
So when we see that many places in the world, and particularly governments, are embracing the use of artificial intelligence technologies for developing policies in different areas, we wonder if we are talking about the right risk when we are measuring. And if, for example, in what Galia was mentioning, this very thorough work conducted by the OECD in terms of building the database, are we really concentrating enough on hearing also from the right people in terms of how those risks and possibilities of risk are being measured? In the example that Clara was giving related to the technical community, the bottom-up approach implied hearing the technical experts, and it’s also something that was pointed out by Mr. Center in terms of how you build these voluntary commitments, trying to be really strong on the technical aspect of the recommendations, of the guidance. And you, Clara, were mentioning that it’s necessary to hear the technical experts in order to really make the assessment converge with what is technically feasible and technically understandable for the ones that are called to implement it. In the same way, we should be thinking about how the risk is connected with the different communities that the deployment of the technology is impacting. And usually those are not in the room. Usually those are not consulted. And even, like, I acknowledge the good effort that has been made in many consultative processes, for example, in building the UNESCO ethical recommendation, and also in the process of the NIST Risk Management Framework, in which I had the privilege to participate, representing some considerations from civil society groups. Usually those who are part of those conversations are not the people on the bottom line in terms of the impact of the deployment of the technology in society. So I think that we should continue to make a conscious effort to have those people in the room, to bring them into the conversation.
This is also related to the capacity building issue that has been raised by several of you. We cannot expect that people are able to talk about how artificial intelligence impacts their daily life if we don’t bring the topic to them in a way that is really concrete and understandable. We will not expect them to speak the language of technical standards. We will not expect them to speak the language of regulatory issues. But they will talk about how they have been discriminated against in access to housing or employment, or access to health or education, or how the use of this technology is being weaponized for political manipulation or state control and surveillance of opposition forces in a specific country. So I am very happy with what I am hearing around this table, but I think there is still some additional work to do on the clarity of the frameworks, on what we are addressing and what we understand in terms of risk. And in the end, when we use the concepts of responsible AI or trustworthy AI, for me those are not terms that are really useful for unpacking what should be done to address artificial intelligence governance issues. Those will be the result of unpacking the governance issues in a good way. As a result of good governance, in my perspective with a human rights approach, we will have responsible technologies and we will have trustworthy artificial intelligence. So I will leave it there for the next round of reactions. I am very happy to be here and bring that perspective.

Moderator:
Thank you, Maria Paz. First round of applause. We are not at the end just yet, because we have to hear from Thomas. And I think you set him up for a very difficult question, because in addition to working on these issues in your home country in Switzerland, you are also chairing the process at the Council of Europe that is coming up with the first binding convention on AI and human rights. So I am not going to ask all the questions that Maria Paz asked of you, because it will be very difficult to answer in the five minutes that you have. But how is the process considering some of the human rights impact and what do you expect from it?

Thomas Schneider:
Yes, thank you. And it is actually good that we live in a hybrid world, so I was able to follow the discussion and the poll already from the taxi, through new technologies like Zoom meetings. So, yes, hello everybody. Happy to be here. First of all, thanks to the ICC for putting this together. I don’t think it’s a problem that we have several ways, several instruments, to try and cope with a new technology. I think this is the only way to go, because there is not one single instrument that will solve all the problems or enable all the opportunities when it comes to a new technology. So we need a mix of instruments. And we also should not forget that we already have technical norms, legal norms and cultural norms that guide our societies and that have guided us with previous technologies that have also been more or less disruptive. So we do not have to reinvent the wheel every time something new comes up. We may have to adjust it, or maybe add a little bit to the wagon, but not everything is completely new. So it’s good to see what is new and what is not new, or what is maybe a new version of something that we may already know. And actually, if you look at AI, you can compare it in its disruptiveness: AI is replacing cognitive human work with machines. AI is a driver, an engine that drives other machines. You can actually compare it in a number of ways to the combustion engines and the disruptive effect that they had some 150 or 200 years ago, and actually until now. Because there, too, you don’t have one law that regulates all engines worldwide at once. You have thousands of technical, legal and cultural norms, most of which focus not on the engine itself, but on the machine that the engine drives, on the people that are actually guiding the machine, on the infrastructure that the machines are using. And there are different levels of harmonization between these rules.
If you take machines that move people in the airline business, you have a quite harmonized set of rules across the world. If you take cars, looking at my German friends here, they think that they can still live without speed limits. And apparently, they don’t live that badly without speed limits. In my country, it’s slightly different. The U.S. has even lower speed limits than we do. And some people, like the British or the Japanese, drive on the left side of the street, but they can drive on the right side; on Swiss roads they just have to pay more attention. So there are different levels of interoperability or harmonization according to the specific application of an engine in a particular machine that is used for a particular purpose. There’s also a difference whether you move goods or you move people; there may be different requirements, and so on and so forth. And we are about to do the same with AI, in the sense that we try to regulate not the tool itself but the tool in the context in which it is applied. The tool may evolve very quickly, but maybe the context is not changing so quickly, because in the end it’s people, and people tend not to evolve as quickly as the tools, which should actually make it easier for us to try and understand the processes that we’re in. And what the Council of Europe is doing is trying to add one piece to this set of norms. We have technical norms, which may be one element to react quickly to developments and solve issues on a technical level, to the extent it’s feasible, to create a certain harmonization, a certain level of security, and also predictability of systems. The cultural norms are something else that I will not go into; I don’t have the time to go beyond German motorways, but we may also have different ways of dealing with risks in general in different societies. And then you have the legal space, where it is great to have industry players taking on responsibilities to regulate themselves.
It is good to have guiding soft law, guidance from UNESCO, from the OECD. The Council of Europe has also developed, since 2018, a number of sectoral instruments, like an ethical charter on the use of AI in the judicial system, a recommendation on human rights impacts of AI systems in the field of media, and elsewhere. So this is all fine, but it may not be enough, and I will be very curious to see how these voluntary commitments are followed in the US, because voluntary means you can, but you don’t really have to. Well, if the incentives are right, a voluntary system may actually work better than a so-called compulsory system that is too complicated or not workable. So I don’t necessarily say that this is not a good approach; again, I think it’s good that we have different approaches. And while the European Union has decided to develop an instrument that is basically a market regulatory instrument, the Council of Europe, for those that are not familiar with the European system, is not the European Union. It is something that is comparable to a UN for Europe, and it is 46 member states that have agreed on a set of norms on human rights, democracy and the rule of law. There are about 250 conventions and thousands of soft law instruments. And one of the latest things is that we are trying to agree on a number of very high-level principles, probably not that different from other high-level principles that we’ve already seen, in a convention. The new thing is that this is the first intergovernmental agreement in which states commit themselves to live up to these principles, and these principles are based on the norms of human rights, democracy and the rule of law. And what is special in this case, though there have been others before, is that this is not a convention just for European countries, for the 46 member states of the Council of Europe.
It is an open process where we had, already from the beginning, a number of non-European countries that are leading in AI, like the US, Israel, Japan and Canada. We have also had Mexico on board since the beginning. We have a number of other countries joining, in particular from Latin America. So the Council of Europe has the opportunity to offer a tool where states from all over the world that respect the same values of human rights, democracy and the rule of law, but may have different institutional arrangements for doing so, can join and become parties to a convention where we agree on a number of principles that should guide us. And it’s not just human rights; it’s things that are slightly more complicated because they are not that clearly institutionalized, like democracy. And agreeing on a number of principles is one thing, because that will be another paper, but we also work together with the OECD, with a number of standardization institutions, with the EU, and we are also watching what the US colleagues are doing with the NIST framework, because in the end you need something that operationalizes paper. So what we are calling this is a human rights, democracy and rule of law impact assessment, which is not a particular tool but a methodology that should help us create interoperable systems in our countries, just as the convention is a legal tool that should help us create interoperable legal systems within our countries. Thank you.

Moderator:
Thank you so much, Thomas, and I very much like your analogy there with the road systems. We need some common principles, we all learn how to drive and know what the general rules are, but then we need the flexibility to be able to adapt to context, and I think that does apply nicely to some of these systems. We have, I’m going to be generous, 10 minutes left, actually seven, and we have a lot of speakers, but the audience has been gracious with us because they don’t seem to have many questions. So what I’m going to ask of you is to take one minute each, and I’ve set the timer, for your last comments, reactions to the other statements, or to share one lesson and/or recommendation from your side. I’m going to go from that side to that side, and you’ll see how many minutes we are running over as we get to the end of the queue, so please be respectful of your fellow panelists so that we do have time to get to everyone. Pratik.

Prateek Sibal:
I would rather give my one minute to someone who has a question, because it’s too heavy on the panel. Yes, please.

Audience:
Hi, yeah, Ansgar Koene from EY. I had problems with submitting the question online. My question was around the capacity building side, and actually less capacity building in the sense of getting the skills to do it, but rather in the sense of enabling various parties to engage with the process, given the time commitments that are related to that. That means cost: if you’re an SME or a civil society organization, you might not be able to carry the cost of having somebody in your organization who isn’t contributing directly to whatever product you’re creating; or, in academia, you may struggle to get academic credit for having engaged in this kind of process. So does anybody have any suggestions in that space?

Prateek Sibal:
So I will quickly answer within my 30 seconds left and give back the floor. From our perspective, whenever we are having multi-stakeholder conversations, we are very sensitive to the fact that not everyone is coming with the same level of knowledge about the technology, and we have put out guidance from UNESCO on how to do multi-stakeholder governance of AI in a manner which is more inclusive. That includes, first, building awareness. We ourselves have launched quite a number of knowledge products to facilitate that process. For instance, in a fun way, we have a comic book on AI, because we hear very often that in a multi-stakeholder conversation not everyone is as familiar with the topic. Some people may feel a bit intimidated when the technical experts, the government and everyone coming from the IOs are talking in their own jargon, which is sometimes very hard to decipher. So we are sensitive to this concern. We have also supported people financially to participate in different fora, and also compensated them for their time, because civil society ends up doing a lot of free work in these consultative processes without any kind of financial support. And I know colleagues that we are meeting here, working weekends, working nights in civil society, which is not fair at all.

Moderator:
Thank you for the question. Thank you Pratik. We have four minutes left but I’m going to take one more question.

Moderator:
I think maybe it’s easier if we all ask the questions, and then any panel member can just catch on to them. In four minutes, yes, sure. Go ahead.

Audience:
So my name is Liming Zhu. I’m from CSIRO, Australia’s national science agency. We’re working on the science of responsible AI. I have a question on system-level guardrails versus model-level guardrails. We all know that risks are context specific, but a lot of people worry that if we push the responsibility to the system level, to the users, then the tech vendors can provide unsafe models. So the response would be your legal plans to do that. On the other hand, with model-level guardrails, because general AI is hard to understand, it’s hard to embed specific rules inside a black box model, and we need system-level guardrails. I’m just wondering whether there are any comments. Thank you.

Moderator:
Thank you. Gentleman, 30 seconds for your question please.

Audience:
Hi, I’m Steve Park from Roblox. I understand that previously the G20 process created something like the data free flow with trust. I’m wondering about the Hiroshima process: is there an expectation for that sort of a principled approach for AI as well? Thank you.

Moderator:
Thank you so much. I guess the online system was not really good at taking questions, but I’m so glad to have that engagement. 30 seconds each, panelists, to try and answer. All right, the race is on.

Owen Later:
Very good question about models versus applications. You need safeguards at both levels. You need to make sure you're developing the model in a responsible way, but then when you're integrating it into an application, you also need to make sure there are requirements at that level; otherwise, the mitigations you've put in at the model level can just be removed or circumvented. So we think there's been great progress on the code this week for developers; we think ultimately you need to extend that to deployers as well. And I guess I have 15 seconds to offer some thoughts on a way forward in terms of global governance. I think there's been a ton of progress over the last 12 months, and that's been reflected in the conversation here. As we move forward, we should think about where we ultimately want to get to, what we want a global governance regime to do, and what we can learn from existing regimes. To offer a few thoughts: I think we want a framework for standard-setting globally, and organizations like the International Civil Aviation Organization, where you have a representative process for developing standards that are then implemented at the domestic level, are really, really helpful. I think we want conversations to advance a consensus on risk, and the Intergovernmental Panel on Climate Change might be a good model to follow. And then ultimately we want to keep building that infrastructure: both the technical infrastructure, so we can advance work on evaluations, where we still have really major gaps to address, but also continuing to have these types of conversations. To the point you were making, having a representative way of holding these conversations on global governance and pulling in perspectives from across the world is going to be really important.

Moderator:
Thank you, Owen. Very efficient and speedy in response as the private sector generally is.

Nobuhisa Nishigata:
Thank you. Answering the question: back in 2016 it was more that Japan started, initiated the discussion, but this time, for the Hiroshima process, I would say it is the G7's collective effort. And the main point is that the focus is on generative AI, on foundation models. As in the slide earlier, it is more about voluntary commitments, so you can see some shifting of the discussion even within the G7. What I expect is that something started by the government of Japan is now an inclusive dialogue across the G7, and we see it in other fora as well. So Japan would say that we are very happy with what has happened; even though some things occurred that we did not expect in 2016, I would still say that things are going well, and we have to continue this kind of dialogue with the whole world. Thank you.

Moderator:
Thank you, Nobu-san. Clara.

Clara Neppel:
So very quickly, I think we really need to balance voluntary and legal requirements. I think it is not the responsibility of the private sector to provide guardrails for democracy and the rule of law, for instance. So if we want certainty on this, we need legal certainty as well; we need regulation on these issues as much as possible, and I think this legal certainty is also expected by the private sector. What is really bad right now is the uncertainty they are facing. And just responding to one of the answers on how to engage: what I think is going to be essential is to enable feedback loops. This is going to be one of the most important things, especially when working with generative AI: enabling these feedback loops and making sure the feedback is actually taken into account, by retraining systems, and by learning from the aviation industry. Benchmarking, I think, is also important, and common standards, of course. Thank you.

Maria Paz Canales:
So I'm going to take up your points as well. One of the examples where it is evident that we need some level of complementarity between voluntary standards and legal frameworks is linked to one of the questions about responsibility for safeguards at different levels: at the design stage, but also in the implementation and functioning of the system. For example, a topic that should be considered as part of regulation is how we distribute and create obligations related to transparency of information, which Mr. Senter also mentioned in relation to the voluntary commitments; how we ensure that between the different operators in the chain of production and use of artificial intelligence there is enough communication, in a way that does not conflict with competition rules or intellectual property rules, but reflects a shared responsibility, with the legal framework accounting for those different responsibilities. Fifteen seconds on the role: I talked during my intervention about the need for a bottom-up approach in terms of societies and different stakeholders in society, but this also applies geopolitically, at the global level. In the conversations unfolding at the global level on governance, we need to hear much more from different stakeholders about global experiences, so that the process of identifying risks and relevant elements in context becomes much more sensitive to different considerations.

Moderator:
Thank you so much, Maria Paz. I don't mean to silence anyone, but we are making our gracious host very, very anxious, so three minutes, please.

Galia:
Three minutes? One minute each, three minutes together. Just to say, I think this has been a really, really rich conversation, and it has made me optimistic: I think it is possible, and we do have all the elements combined. I really like what Thomas said about how all these things can be complementary. I do think there are still challenges at the implementation level, in how we make these things work. Mapping exercises like the one we did with the OECD on risk assessment can, I hope, be helpful in this regard. We can also think about what we mean when we say global governance: how global can we make it while maintaining credible value alignment, which I think is very important. We also have the element of stakeholder engagement, which is really important. I think this kind of forum is really critical in advancing this conversation, so thank you. Thank you so much.

Thomas Schneider:
Thank you. I will try to talk at double speed, like the YouTube videos you can watch at twice the speed. Well, I think we need to work on several levels. One is that we need to reiterate and see whether we still agree on fundamental values: how we want to respect human dignity, how we want to be innovative while respecting rights, and make sure we are all on the same page on this. Then we somehow need to break it down and see what the new elements are, what the new challenges are, what legal uncertainties we need to clarify, and then how we best clarify them without creating burdensome bureaucracy. How do we clarify? What can we do with technical standards? What is the best tool for solving each problem? So we need to know what the problems are, and then what the best tools are, and again, it will probably be a mix of tools. Some will be faster, others more sustainable. I think we are all working on it, and we need to continue and cooperate with all stakeholders in their respective roles.

Moderator:
Thank you so much, Thomas. Dr. Sente, for the last word. I feel like that's an awfully positive message to end on, so I will mercifully cede my 45 seconds back. Thank you. That's very gracious. Thank you so much. Apologies to the next session and to the host for running five minutes over, but I really do want to thank the panel for making the time and coming here for this really rich discussion, and the audience for sticking with us. I wish we had three more hours, and we still couldn't have stopped talking, I'm sure, but there is a main session on artificial intelligence, I'm told, so see you there and see you around. Thank you so much again. Thank you.

Speech statistics

Audience: 179 words per minute, 410 words, 137 secs
Clara Neppel: 164 words per minute, 1373 words, 503 secs
Galia: 76 words per minute, 566 words, 444 secs
Maria Paz Canales: 166 words per minute, 1291 words, 467 secs
Moderator: 122 words per minute, 2365 words, 1165 secs
Nobuhisa Nishigata: 147 words per minute, 1447 words, 591 secs
Owen Later: 234 words per minute, 1374 words, 352 secs
Prateek Sibal: 164 words per minute, 1500 words, 547 secs
Set Center: 146 words per minute, 975 words, 401 secs
Suzanne Akkabaoui: 124 words per minute, 775 words, 374 secs
Thomas Schneider: 173 words per minute, 1648 words, 570 secs

Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65


Full session report

Audience

In the analysis, multiple speakers highlight key points regarding Digital Public Infrastructure (DPI) and its implications. The first speaker stresses the importance of sustainable DPIs that take environmental factors into account. They assert that the Global Digital Compact's (GDC) calls for sustainable DPIs align with the G20 meeting in India, which established a common agenda. The positive sentiment suggests a consensus on the need for sustainable DPIs.

The second speaker focuses on the significance of DPI by design, implementation, and governance. They provide Estonia’s success in digital government and connected government as an example. By adopting DPI by design, Estonia has effectively demonstrated the value of integrating digital technologies into government functions, resulting in positive outcomes. This observation strengthens the argument for the importance of DPI.

On the other hand, the third speaker raises concerns about the potential impact of mass surveillance. Although no supporting facts are provided, the negative sentiment suggests that the speaker believes mass surveillance has detrimental consequences. This viewpoint serves as a cautionary reminder to consider the potential risks associated with DPI, particularly in relation to individual privacy and civil liberties.

The fourth speaker advocates for a specific framework on “human rights by design.” They emphasize aspects such as privacy, freedom of speech, dignity, and autonomy. By unpacking the concept of human rights by design, the speaker highlights the need for clarity and guidelines to ensure DPI does not infringe upon fundamental rights. This argument underscores the necessity to address potential ethical and legal concerns related to DPI.

The fifth speaker argues for safeguards that can potentially halt or reverse harmful systems, specifically mentioning the importance of safeguards for digital identification. They highlight the significance of the ability to reconsider, reevaluate, and reinstate changes to negate any harms associated with DPI. This perspective supports the idea that proactive measures should be in place to mitigate any adverse effects of DPI.

Lastly, the sixth speaker expresses concerns over the right to anonymity and stresses the need for private space to protect civil and political rights. They mention that the right to anonymity is essential for freedom of speech and other civil liberties. This negative sentiment suggests worries about potential infringements on individual rights and the necessity to protect them in the context of DPI.

In conclusion, this analysis presents a comprehensive overview of various perspectives surrounding DPI. The speakers highlight the importance of sustainability, design, governance, human rights, safeguards, and individual rights within the realm of DPI. While positive sentiments indicate consensus on certain aspects, negative sentiments caution against potential risks and encourage the implementation of necessary precautions.

Eileen Donahoe

The analysis explores several key points regarding the role of digital technology in achieving the Sustainable Development Goals (SDGs). It highlights the immense potential of digital technology in accelerating the attainment of these goals. However, it raises concerns that only a small percentage (2%) of government forms in the United States have been digitized. This lack of digitization not only leads to significant time wastage for the public but also results in the loss of $140 billion in potential government benefits each year. This underscores the urgency for governments to prioritize the digitization of their public infrastructure.

The discussion also emphasises the need to embed human rights into digital public infrastructure. While there is a strong desire to expand access to digital services, it is crucial to ensure that the most vulnerable and marginalised communities are protected. The risk of inadvertently developing into surveillance states through digitisation must be carefully mitigated.

Furthermore, the analysis underlines the importance of a global multi-stakeholder approach, where governments collaborate with the technical community, civil society, academic experts and the private sector. This approach fosters collective involvement and cooperation in setting digital technology standards and policies. Eileen Donahoe stresses the significance of this approach to enable a smooth transition from domestic to global multi-stakeholder processes. However, it is acknowledged that this transition can be challenging for governments.

To facilitate the multi-stakeholder input processes, it is recommended to use the Human Rights framework as a global standard. The universality and recognisability of human rights frameworks make them an ideal basis for collaboration and adherence to global standards. Governments are more likely to feel comfortable adopting these global standards as they have already committed to upholding human rights.

The analysis further discusses the importance of global open standards in increasing accountability. Global open standards provide a valuable tool for well-intentioned governments, ensuring that the standards deployed are of a consistently higher calibre compared to what would occur if governments were left to their own devices. Through a global digital compact process with follow-up, soft norms can be established, adding pressure and encouraging accountability.

In addition, the analysis supports the building of a mutual learning ecosystem in different regions. It suggests that Digital Public Infrastructures (DPIs) should be made more context-sensitive. Building on the progress achieved during the Indian presidency, there is a call for expanding and adapting these principles to new areas. Regular reviews and discussions, guided by a safeguards framework, act as reference points for evaluating the progress and efficacy of DPIs.

Lastly, the analysis emphasises the importance of a cross-disciplinary approach. It highlights the necessity for collaboration between experts in norms, law and technologists to effectively implement human rights by design. A shared language is crucial for enabling effective collaboration and understanding between these different disciplines.

Overall, the analysis underscores the need for robust and safeguarded digital public infrastructure, the importance of a global multi-stakeholder approach, and the significance of embedding human rights principles into digital technology. It also highlights the value of global open standards, mutual learning and a cross-disciplinary approach. These factors collectively foster accountability, context sensitivity and progress in the field of digital technology and its impact on achieving the Sustainable Development Goals.

Speaker 1

The speakers in the discussion emphasized the significance of digital public infrastructure (DPI) and the successful implementation of DPI in Estonia. They pointed out that Estonia has long been a digital leader, and the philosophical and practical origins of DPI can be traced back to this country. They highlighted the principles of openness, transparency, and inclusiveness as the basis for Estonia’s digital reform. The government of Estonia, in collaboration with the private sector, has worked on digital identity, which is an essential component of DPI.

Trust and collaboration were identified as vital elements for the successful implementation of DPI. The speakers emphasized that the Estonian government and private sector combined resources and contributed equally to DPI, which helped build trust among stakeholders. Equality in contribution and responsibility was deemed crucial for fostering collaboration in DPI projects.

The discussion touched upon the need for governments to focus on fixing fundamental aspects such as data governance and digital authentication before moving on to advanced concepts like artificial intelligence (AI), Internet of Things (IoT), and blockchain. The speakers argued that many governments and societies are currently discussing advanced concepts without having firmly established the basics. Therefore, DPI serves as a reminder to prioritize the foundational aspects of digitalization.

The importance of sharing and collaboration among governments in the field of digital public infrastructure was emphasized. The speakers noted that while many governments make their tools available, there is a lack of reusing and collaboration. However, some countries, such as Estonia, Finland, and Iceland, have made progress in this area by developing certain digital products collaboratively and making them globally available. The speakers called for a greater push for sharing, reusing, and collaboration among governments to enhance the effectiveness of digital products.

Sustainability and mindset change were identified as challenges in implementing digital public infrastructure. The speakers acknowledged that changes take time and that people can be resistant to change. They also emphasized the importance of continuity as various projects come and go. The example of Internet voting in Estonia was highlighted, as it took several years for it to become popular and widely accepted.

The discussion concluded by highlighting the global nature of guaranteeing privacy, security, and human rights in the digital realm. The speakers stressed that these issues require concerted efforts from both the government and the private sector. Ensuring privacy and security is not solely the responsibility of the government or the private sector. The speakers also emphasized the importance of global movements in addressing these issues.

In conclusion, the discussion shed light on the significance of digital public infrastructure and its implementation in Estonia. The principles of openness, transparency, and inclusiveness were identified as driving forces behind Estonia’s digital reform. Trust, collaboration, and equal responsibility were deemed vital for successful DPI implementation. The need for governments to focus on fundamental aspects before advancing to advanced concepts like AI and blockchain was highlighted, along with the importance of sharing and collaboration among governments. Sustainability, mindset change, and guaranteeing privacy, security, and human rights were identified as challenges that require joint efforts from the government and the private sector.

Moderator – Lea Gimpel

Digital Public Infrastructure (DPI) is defined as society-wide digital capabilities that are essential for citizens, entrepreneurs, and consumers to participate in society and markets. It serves as the foundation for public service delivery in the digital era. DPI does not refer to foundational software or physical infrastructure such as fibre optic cables.

During the COVID-19 pandemic, many countries heavily invested in DPI to facilitate faster and prompt response. This shift to digital platforms resulted in the adoption of services like digital ID, payment systems, data exchange systems, and civil registries. However, the speedy implementation sometimes compromised the security, safety, and inclusivity of the systems.

Lea Gimpel advocates for the safe, secure, and inclusive implementation of DPI. Attention needs to be given to avoid risks such as data privacy issues, mass surveillance, and exclusion of vulnerable groups. DPI has an extended risk due to its implementation at population scale, and technologies with societal functions have long-term impacts.

To address these risks, the UN Tech Envoys Office and UNDP launched the Universal DPI Safeguards initiative. It aims to develop safeguards against risks in DPI designs, implementation, and governance.

The movement around DPI focuses on people’s protection and service delivery, emphasizing the approach over technology. Eileen Donahoe advocates for the use of technology to advance the Sustainable Development Goals (SDGs) and expand access to digital services globally. Embedding human rights by design into digital technology is also important in preventing the unconscious drift towards becoming surveillance states.

The United States lags significantly in digitizing government communication and rebuilding its infrastructure. Only 2% of federal government forms have been digitized, resulting in unclaimed government benefits worth $140 billion each year. The implementation of DPI is not solely applicable to low- and middle-income countries; high-income countries also need to discuss their approach.

Successful DPI implementation in Estonia highlights the need for DPI standards and the integration of human rights by design. Concerns about potential surveillance risks arise if DPI is not correctly implemented. The speed of DPI implementation often makes implementing safeguards difficult, and the right to anonymity in DPI and Digital ID systems should be ensured.

Sustainability is an important consideration in DPI, as discussed at the G20 meeting in India. Insufficient safeguards in digital ID systems call for DPI safeguards that allow for system stop, reconsideration, or rollback if harm is found. Enforcement is required to ensure compliance with DPI safeguard frameworks. Global open standards increase accountability and add value by monitoring everyone’s system.

The government’s role is crucial in the digital era. It needs to be a partner for citizens and the private sector to build an innovation ecosystem that is crucial for the evolution of digital economies. Existing initiatives like the Global Digital Compact and Open Government Partnership provide an opportunity to create commitment for the DPI Safeguards Initiative.

Cooperation, sharing of technology, and learning are important for effective implementation at scale. Changing mindsets during implementation is crucial for success.

In summary, DPI enables citizens, entrepreneurs, and consumers to participate in society and markets. The Universal DPI Safeguards initiative addresses risks and develops safeguards. The movement around DPI emphasizes people’s protection and service delivery, focusing on the approach over technology. Human rights by design and the use of technology to advance the SDGs are crucial considerations. The United States needs to digitize government communication and infrastructure. High-income countries also need to discuss their approach to DPI. Standards, safeguards, and the right to anonymity are important in DPI implementation. Sustainability, enforcement, and global open standards play crucial roles. Government partnership and cooperation are essential, and existing frameworks provide opportunities for commitment. Effective implementation at scale requires cooperation, technology sharing, and mindset changes.

Robert Opp

The concept of digital public infrastructure should be seen as an approach rather than a technology or set of technologies, with a governance structure and appropriate safeguards. This approach is important for solving immediate problems and ensuring the future success of infrastructure projects.

The United Nations Development Programme (UNDP) aims to complement consultations with ground-level work in three to five countries, testing and applying the emergent framework. Feedback from these field tests will inform the development of safeguards. The UNDP also plans to support countries in implementing the framework and addressing knowledge gaps.

Mindset change and incentivizing collaboration are necessary to advance digitalization. Overcoming the current mindset issue requires collective effort and a shift in thinking, and collaboration is crucial for developing scalable ways to implement digitalization. The safeguard initiative is a positive step towards changing mindsets and promoting the implementation and scalability of digital solutions. Sharing and collaboration are equally crucial for developing and implementing digital solutions, and people must be incentivized to work towards digitalization together.

The challenge of keeping up with emerging technologies such as artificial intelligence requires continuous learning and adaptation. Developing a global approach with safeguards is essential for the success of digitalization efforts, and it should be rolled out quickly, with trust and sharing.

In conclusion, the concept of digital public infrastructure as an approach, along with governance structures and safeguards, is important for solving problems and ensuring success. The UNDP's efforts to complement consultations with ground-level work and support implementation are valuable. Mindset change, collaboration, safeguards, and sharing are necessary for digitalization, and the challenge of emerging technologies like AI requires continuous learning.

Amandeep Singh Gill

Amandeep Singh Gill emphasizes the need for a safeguards framework for digital public infrastructure (DPI) due to the risks and issues related to safety, security, data protection, and societal inclusion/exclusion. It is important to address these concerns to ensure the effective and ethical use of DPIs. Gill advocates for multi-stakeholder participation in building and managing the safeguards framework, including contributions from the private sector and civil society. This collaborative approach ensures diverse perspectives and expertise are considered, leading to more comprehensive and effective solutions.

The foundations of DPI are based on international human rights commitments and the Sustainable Development Goals (SDGs). Recognizing the significance of these frameworks, DPIs are designed to align with and support the principles and objectives outlined in these global agendas. By incorporating human rights and SDG frameworks into DPIs, it becomes possible to promote inclusivity, sustainability, and socioeconomic development.

DPIs not only target government services but also play a critical role in boosting the innovation ecosystem by reducing barriers to innovation. By providing an enabling environment and infrastructure, DPIs encourage the development and adoption of new technologies, fostering digital innovation across various sectors. This not only benefits the public sector but also stimulates economic growth and offers new opportunities for businesses, including the emerging Fintech sector.

Promoting a digital economy that includes businesses from different sectors, such as Fintech, is crucial in building robust DPIs. Lowering entry barriers to innovation and increasing demand for digital services and products are central to DPIs. By creating a dynamic national digital economy, DPIs contribute to decent work and economic growth as outlined in SDG 8.

DPIs should also incentivize integration and usage in sectors that may initially not see the need for digital services. For example, it is crucial to demonstrate the benefits of using DPIs in agriculture to farmers who may not immediately recognize the advantages. By showcasing the value and potential of DPI services tailored to their specific needs, farmers can be encouraged to adopt digital solutions, leading to increased productivity and improved agricultural practices.

The G20 presidency offers an opportunity to make the digital development movement more sustainable and contextually relevant in Africa and Latin America. By building on the work conducted during the Indian presidency and collaborating with international partners, it is possible to address the specific challenges and opportunities faced by these regions, ensuring inclusive and equitable digital development.

Regular discussions and soft pressure can be cultivated to improve DPIs. By facilitating ongoing dialogue and regular review of principles and action frameworks, it becomes possible to identify areas for improvement and encourage adherence to established standards. This continual accountability and open communication contribute to the evolution and refinement of DPIs, aligning them with evolving needs and technological advancements.

Strengthening mutual learning ecosystems in various regions is a priority. Promoting cooperative learning environments and exchanging best practices between different regions can help accelerate digital development and enhance the effectiveness of DPIs. The success of the Nordic region in creating a mutual learning ecosystem serves as an example to be emulated in other parts of the world.

An initiative has been launched to unpack the concept of the right to anonymity. Amandeep Singh Gill is open to ideas and assistance in exploring this concept further. This initiative aligns with SDG 16, which focuses on peace, justice, and strong institutions. Unpacking the right to anonymity is crucial in ensuring the protection of digital rights while maintaining a balanced approach to privacy and security concerns.

In conclusion, Amandeep Singh Gill highlights the importance of implementing a safeguards framework for digital public infrastructure. This framework should address risks and issues related to safety, security, data protection, and societal inclusion/exclusion. Multi-stakeholder participation, aligning with human rights commitments and SDG frameworks, boosting the innovation ecosystem, and promoting a digital economy across sectors are key elements in building robust DPIs. The G20 presidency provides an opportunity to make digital development more sustainable, while continuous discussions and follow-up can drive improvement and accountability. Strengthening mutual learning ecosystems and exploring the concept of the right to anonymity are further steps towards promoting inclusive and ethical digital practices.

Henri Verdier

France has been at the forefront of developing digital public infrastructure (DPI), even before the term was officially coined. Their focus has been on aspects such as digital identity and public APIs. They understand the importance of public service rules, such as neutrality, accessibility, and equal access, in constructing DPI. By incorporating these rules, France aims to create a reliable and fair digital environment.

Notably, France recognizes the significance of the digital commons and the need for a free and neutral internet. They argue that DPI and the digital commons are essential for achieving this goal. While DPI refers to the infrastructure, the digital commons encompass concepts such as free software, open standards, and shared resources. The convergence of DPI and the digital commons has the potential to create a powerful and inclusive digital space.

In contrast to France’s proactive approach, Europe largely stopped building public infrastructure in the 1970s. France insists that Europe should resume developing public digital infrastructure, since digital identity and payment infrastructure are as crucial in the digital age as roads were a century ago.

France also recognizes the potential for innovation and value creation through the OpenGov movement. They highlight that unleashing innovation in government processes can create significant value. This has been demonstrated in the past with initiatives focused on data and source code, and now with infrastructure development. France believes that the OpenGov movement can serve as a third source for unlocking value in the digital landscape.

Moreover, France acknowledges that public service can be implemented by the private sector, as long as certain rules are respected. They emphasize that while the private sector can finance public services, it should not take all the added value. This approach allows for greater flexibility and efficiency in the provision of public services.

Cooperation and community-building are highly valued by France. They argue that more emphasis should be placed on these aspects, rather than simply sharing code. Good documentation and the specific characteristics of the code are considered important details in the pursuit of effective collaboration.

However, France also recognizes the challenges that governments face in implementing digital transformations. They find it difficult to adhere to open standards and build small, reusable pieces of infrastructure. The historical context of digital disruption further complicates these projects. France advocates for a simpler, more efficient, and sustainable approach to digital transformations in government.

Using simple, open, and reusable standards can lead to more inclusive and sustainable infrastructure. An example of this is the case of tuk-tuk drivers in Bangalore, who developed a cost-effective solution for hailing a rickshaw under Indian rules, leveraging infrastructure such as UPI and Beckn. France emphasizes the value of adopting such standards to ensure that infrastructure is accessible and beneficial to all.

While infrastructure is vital, France also recognizes that it alone cannot protect democracy if it is misused for malicious purposes. A strong focus on implementing rules within the infrastructure is insufficient. Additional measures are necessary to safeguard democracy and ensure its integrity.

Efficient enforcement is key to success: rules embedded in infrastructure matter only insofar as they are implemented and enforced effectively. Prioritizing efficient enforcement helps drive progress and ensures that initiatives remain impactful and sustainable.

Lastly, France firmly believes in empowering people, unleashing innovation, and guaranteeing fundamental freedom as the most efficient way to promote economic and social development within a country. They advocate for a holistic approach that considers the importance of technology, innovation, and freedoms in shaping a prosperous and inclusive society.

In conclusion, France’s approach to developing digital public infrastructure is rooted in the principles of public service rules, open standards, and the use of digital commons. They stress the importance of continuing to build public digital infrastructure, advancing the OpenGov movement, and promoting cooperation and community-building. France recognizes the challenges of implementing digital transformations, but also highlights the potential for inclusive and sustainable infrastructure through the use of simple and reusable standards. They underline the need to protect democracy beyond infrastructure and advocate for efficient enforcement. Ultimately, France believes in harnessing the power of innovation and empowering individuals to drive economic and social development.

Session transcript

Moderator – Lea Gimpel:
Good morning and a warm welcome. Since the room is not full yet, if you want, please also take a seat here at the table with us so that we can have a discussion with you. However, we are going to start now, since it’s already time. My name is Lea Gimpel, I’m with the Digital Public Goods Alliance, where I lead our work on country implementation and artificial intelligence, and I’m your moderator for the session today. And with me, I have Moritz Frommel-Jo, he’s with the UN, with the UN Envoy’s Office for Technology, who’s co-hosting the session today, and he’s the online moderator. So this session is about the effective governance of open digital ecosystems, and we are going to talk about how to establish a framework for secure and inclusive digital public infrastructure today. And we have a fantastic panel of esteemed speakers here with us in the room, and I will introduce them in alphabetical order to you. First, we have Eileen Donahoe, the Special Envoy and Coordinator for Digital Freedom of the US Department of State, welcome. Then we have Amandeep Gill, the UN Secretary General’s Tech Envoy, who’s also the co-host of the session, as I said, welcome, Amandeep. We have Nele Leosk, the Digital Ambassador-at-Large from Estonia, thank you, Nele, for being here with us. Then we have Robert Opp, the Chief Digital Officer of the United Nations Development Programme, hi, Rob. And Henri Verdier, last but not least, the French Ambassador for Digital Affairs. And before I give the word to my panel, I would like to start with a quick introduction, because the term digital public infrastructure is not always well-defined, so I think it’s wise to first set the stage and speak about DPI and what we mean by this term for this panel discussion.
By DPI, we basically mean society-wide digital capabilities that are essential to participation in society and markets for citizens, entrepreneurs, and consumers, and which are the foundation for public service delivery in the digital era. And this definition basically emphasises the functionality of digital public infrastructure, so it’s really about delivering services, both public and private. So what we don’t mean by this term, really, is foundational software, for instance, that underpins any other software solution, so that is one of the common misunderstandings. And we are also not talking about actual physical infrastructure, such as fibre optic cables, so just to be clear on that. And this kind of digital public infrastructure I was talking about, so digital public infrastructure with society-wide functions, such as digital ID, for instance, payment systems, data exchange systems, and civil registries, they have seen a real boost during the COVID-19 pandemic, because many countries invested in these foundational infrastructures heavily for pandemic response, so for instance, for cashless transfers. And part of this also involved that a lot of attention was given to speedy implementation, and maybe not so much attention was paid to actual, secure, safe, and inclusive implementation of these technologies. And I think the important thing to consider here, really, is that digital public infrastructure with society-wide function really has an extended risk to us, right? If we talk about technology that is implemented at population scale, we have risks such as data privacy issues, mass surveillance, and, for instance, deliberate or accidental exclusion of vulnerable groups. And that’s something that we really need to take care of, and also fix in case there was something implemented during the COVID-19 pandemic that didn’t really consider any of these due to the speediness of implementation and the need to react.
And currently, what we see is a lot of momentum around digital public infrastructure, so you might have seen that at IGF, it’s a topic that pops up in many of these sessions. And we need to discuss now how to build safeguards into digital public infrastructure design, implementation, as well as the governance of it. Because there’s also path dependency, right? So if we implement digital public infrastructure at population scale now, these kinds of technologies will have an impact on people’s lives over many years. So we need to get it right at this very moment. And there’s a window of opportunity to do exactly that. And for this reason, we would like to talk today about the Universal DPI Safeguards initiative that was launched by the UN Tech Envoy’s Office, as well as UNDP just recently, and basically discuss in the session what are the risks and how we can mitigate them, which good practices exist from existing implementations and the lessons learned around this, as well as about the role of such a global DPI safeguards framework and what role it can play for design, implementation and governance in the future of digital public infrastructure. And with that, I would like to pass on the word to my distinguished panel over there. And I would like to invite Amandeep to first tell us a bit more about the safeguards initiative that you recently launched. So why do you think there’s a need for something like this, for such an initiative, and what are your plans?

Amandeep Singh Gill:
Thank you very much, Lea. Thank you to you and Moritz for moderating this panel. Very pleased to be here with the distinguished co-panelists. Why is there a need? I think the need arises from the growing interest and the growing consensus on the importance of DPIs. They have proven themselves to be a powerful way to enhance inclusion in the digital space, to drive innovation, to improve government service delivery, reaching the last mile. COVID was the big moment, but it’s been coming for a long time. And we have here on this panel, with Nele, Estonia’s experience, the experience of India, many other countries. So it’s been coming for a while, and now there is global recognition. For instance, the G20 understanding on a framework for DPIs. Now, as you put it, before we get too far down this road, because there’ll be path dependencies, it’s important to put together some safeguards to ensure that some of the risks and the problems that we’ve already seen with digital public goods, digital public infrastructure, for instance, safety online, the security, the cyber security aspect, data protection aspect, the aspects related to inclusion or exclusion, the optionality, opt-in, opt-out type of issues, the issues related to buy-in from society, issues related to the legislative framework in which DPIs are placed. So it’s good to have a global standard, a global guidance that helps the players in the DPI ecosystem move forward confidently. We can’t say that you should not have DPIs, because that has its own opportunity cost consequences. For instance, we don’t say, you know, let’s just shut down the digital platforms. They also have billions online, et cetera. But we have to work actively to ensure that they continue to serve everyone, they continue to serve human flourishing, rather than, you know, create problems further down the road.
That said, I would very concretely point to the call by the Secretary General in his policy brief on the Global Digital Compact for a safeguards framework on digital public infrastructure. You know, given the, and I’m sure Rob will speak about it, the demand that the UN has been seeing from the ground, the issues that we’ve been kind of facing in country, I think it is time to have this kind of framework, and the Secretary General has given that call. He’s also outlined this problem of fragmentation overall. So if we want to avoid fragmentation in this space, this can be a kind of a unifying baseline, and it can give civil society and other partners a common reference point. So this is the reason why jointly with the UNDP, we’ve launched this initiative. But this initiative won’t be limited to these two UN entities, others will join in. It’s an open call to all those who are involved in the DPI ecosystem, from DPGA, GovStack, to Dial and other important players. Also a call to civil society and private sector, who will be helping build this out and manage the interface between the tech side of it, the governance side of it, and the community side of it. Yesterday, I think even someone said that this is a socio-technical infrastructure we’re talking about. So it’s almost like a socio-legal technical infrastructure we’re talking about. So obviously the path forward has to be multi-stakeholder. Thank you.

Moderator – Lea Gimpel:
Thank you so much for these initial explanations. You already mentioned UNDP, so I would like to pass on the word to Rob for more about the safeguards initiative, why UNDP decided to join, and what have been your takeaways so far from UNDP’s experience in supporting digital public infrastructure implementation?

Robert Opp:
No, exactly. Thanks, Lea. Just building on what Amandeep said, where we are with this whole kind of discussion around digital public infrastructure is we’re essentially in the process of coalescing a movement around that, taking what has been done over the last 10, 15 years, in many cases, in countries like Estonia, India, and others, as Amandeep said, and looking at how can we offer this in a way that will help accelerate digital transformation in countries that are, let’s say, not as well developed in terms of their infrastructure, their digital infrastructure. And what’s really key with the whole concept of digital public infrastructure is that it needs to be seen as an approach rather than a technology or set of technologies. And that approach needs to have the notion of a governance structure around it with the appropriate safeguards. And as you mentioned in your intro, Lea, as the COVID pandemic basically hit countries and countries needed to respond, and as they really mounted their response, it was really clear that the kinds of requests that we received out of countries over time shifted from being very solution-focused to much more thinking about what is the overall ecosystem looking like and how do we shape that. And I think it’s natural that countries, they focus, generally speaking, on trying to solve an immediate problem, and that’s where you get a focus on technology first. And I think what’s exciting about this safeguards initiative is that we have now a chance to embed the thinking around what do you need to, when you’re planning for your ecosystem and you’re trying to solve your problem, you cannot forget that it needs to be accompanied, the technology needs to be accompanied by these kind of set of safeguards that should be in place to protect people, to protect the future success of your infrastructure work. 
And so on the safeguards initiative, the role of UNDP is to really work with the tech envoy’s office and the convening power that they have and to kind of run the consultations on their side but complement that with what’s happening in the field. And as we prototype and create hypothesis principles and safeguards, we want to test them at the country level. So we’ll be taking three to five countries, looking at the framework as it’s developing, testing that and seeing what it would actually look like on the ground and what do we learn from that, so creating that feedback cycle. And then as the safeguards framework emerges out of the consultations, out of the feedback cycle, really looking at well what would it take, what is it going to take to support countries to be able to actually put these in place and what kind of capacity needs will there be, really trying to understand what are the knowledge gaps that need to be addressed and so that this becomes much more, let’s say, easy to adopt for countries. We can’t just leave it at global principles, we have to really understand how countries are actually going to be able to do this. So that’s what we’re really focusing on, that’s what we’re really looking forward to in all of this. Thanks.

Moderator – Lea Gimpel:
So in a nutshell, let me just summarize before we move on to the country speakers. It’s about creating a movement around DPI, something that I really like in order to ensure that we protect people and at the same time deliver services. So I really love this notion and as you’ve all heard, it’s an open call for everyone to participate in, so please do so. And what I also really like is this idea of talking about an approach rather than technology because I think a lot of this discussion currently focuses on technological solutions and not so much about the governance aspects of these. And as you’ve all heard, countries are central to this initiative. It’s about developing ideas, it’s about testing in the field and then going back and working with that feedback. So I would like to move on to our country representatives. And first, I have Eileen here with us. The White House recently announced that the US will work on deploying robust and safeguarded digital public infrastructure. How will this approach be reflected in your engagement for DPI that empowers people via technology while also protecting their freedom?

Eileen Donahoe:
Great. First, let me say thank you for including me. I am a real neophyte on this topic and I’ve already learned a lot just from listening to all of you in this room and recently participated in another event with a subset in this room and I feel like I’m excited about this. I have an instinct that it’s really important, but I’m still catching the thread. So I’ll just say that up front. As a human rights advocate, what attracts me to this? I feel like it represents a very innovative combination of using technology to basically advance and jumpstart the SDGs in effect and expanding access to digital services around the world. But also, if done right, embedding human rights by design, as everybody has said. I would emphasize more explicitly human rights by design. And that’s a further conversation about what are the terms we use. Everybody knows that there is a tremendous yearning around the world to expand access to digital services and that we really do need technology to be an accelerant to meeting the SDGs. That’s the aspiration and the hope. It would be terrible if we did that in a way that we were actually making the most marginalized, vulnerable communities more vulnerable. So that is why, as everybody’s emphasizing, the technology and the standards go together simultaneously. And you raised it so well at the top, the tension between speed and that yearning. But if you do it in the wrong way, you’re only making things worse. And I would also add, I’ve hinted at this already, I do believe the international human rights law framework should be the normative foundation for thinking about this. I will admit the hard part is, what does that look like in practice? It’s the how. We know what, the how is the hard part. And I will acknowledge that I was in a room recently with Marianne from Access Now and she raised the point, and she just said it very explicitly, the big risk is surveillance states and this unconscious drift to becoming surveillance states. 
So that’s the really dark vision of this and that’s why we all have responsibility. On the U.S. side, basically that was, I hadn’t seen it, I looked it up last night, I have it. It is September 22, 2023. So this was after UNGA, so after our last conversation. And what really jumped out at me is this is a vision for the American people, it is domestic. People may be surprised to know the U.S. is really behind on this. I think people would be stunned. And the vision is to transform the way government communicates with the American people and rebuild American infrastructure. Which maybe all governments say that’s what they want to do, but I think people would be stunned at how far we are behind. And a couple of stats jumped out at me: basically only 2% of federal government forms in the United States have been digitized. And the public spends more than 10.5 billion hours each year completing government paperwork. And about $140 billion in potential government benefits go unclaimed every year. So that gives you a sense of how far behind we are. So in effect, the United States is in this with everybody and has a lot to learn.

Moderator – Lea Gimpel:
Yeah, I think that’s a very good point, that it’s not only about low and middle income countries, right, but that we are speaking about high income countries and their approach to DPI as well. And I really like this idea of human rights by design, so that’s definitely something that I think we need to discuss. And talking about terms, I want to pass on to Henri, because France is the country that is always speaking about digital commons. And I would like to know from you, Henri, how this concept of digital commons and DPI overlap, or how they connect to each other, and where safeguards come in?

Henri Verdier:
Thank you, and thank you for the invitation. I will start with DPI. I was thinking that probably France started developing DPI before we even knew there were DPI. We did develop different levels of digital identity. We’ve got France Connect. We are working hard on geographical information, public API, and so on. And this work was probably based on two sources. One was government as a platform, and I can welcome the work of the British Government Digital Service 10 years ago. And the other was a very ancient tradition of public service, with all its rules of neutrality, accessibility, mutability, equal access, and so on. The more we did work on these issues, I will come very briefly to the commons. The more we did work on these issues, the more we have seen them as a more universal challenge, because we think now that if we want to preserve an open, free and neutral Internet, and if we want also to be true democracies, so without big states or big tech, we need this small layer of public services that enables everyone to act in the digital economy and to be part of the decision process. That’s why now we consider those challenges are universal and related to democracy. And that’s also why this commitment for DPIs meets our commitments to the digital commons. We can’t stress enough just how important the commons are to the Internet as we know it. Free software, open standards, data, knowledge commons are the very core, the true core of Internet. And these two commitments can easily be linked, so they don’t overlap. It is possible to conceive good DPIs that are not commons, and some commons do not become infrastructures. But when the two ambitions come together, it can be very powerful. And if we, just to finish, if we come back to the question of safety, I think that we cannot conceive a real democracy without public infrastructure. But not every infrastructure will empower democracy. That’s simple.
So we need to conceive, and that’s why we welcome this work, we need to conceive a set of rules. Probably we know most of them, but we have to order them, like real transparency, transparent governance, shared governance, security by design, privacy by design, etc. But we need to make a proper work and to share this broadly. Because, again, I finish with this, we cannot, we and not just me, France, we consider that we cannot conceive a fair development of Internet without good public infrastructure, but that not every infrastructure will empower democracy.

Moderator – Lea Gimpel:

Thank you so much for these comments. And I, yeah, I really, I think it’s a fair point to say that we, like, in a way we know the rules, we know the principles, right? We talk about these buzzwords, transparency, accountability, and such. But I think there’s a real good point in it that we need to move forward from principles to practice. And do you have experience in implementing DPI? And as you said, I mean, you’ve been implementing DPI without calling it DPI for quite a while. And I think the same is true for Estonia. We talked about this earlier before the session, that what you’re doing, you don’t call it DPI, but it is DPI, what you’re doing. So Nele, from your experience, as Estonia as a global digital leader, what considerations should be integrated in such a safeguards framework? And how should such a process look like when we develop it over the next couple of months?

Nele Leosk:
Yeah, thank you. Thank you, Lea, and thank you for the opportunity to be here. And I believe I will actually summarize or build on what all my good colleagues have already touched upon. But the first is really the notion of the DPI. That is, I would say, a rather new term, but actually standing for some of these important principles in digitalization. So from this point of view, I believe we all have good experiences. It’s not the usual suspects of Estonia and India and many others, but all countries have digitalized their societies to some extent. But to give answer to your question, I believe it might be useful actually to revisit some of these origins of what we call DPI, that is a new movement or a trend, as we have heard. And some of it actually Henri referred to, and this is really, I would say, one of these origins is philosophical, and the other is perhaps more practical. So the philosophical really comes to understanding the role of the government in our society. Is it only to serve the people by providing services? Is it also to act as a partner? Is it to share everything that the government does? So we can say that actually digitalizing, following these important principles that you also mentioned, openness, transparency, inclusiveness, it started really with rebuilding the Estonian state in the 90s. It started with freedom of information, it started with privacy issues, it started with security issues, and then it moved to the digital sphere, where we started to talk about interoperability, open standards, and so forth. So I would say that it was really this logical continuation of what we had started to reform in reforming our state. But the other reason, it’s actually very practical, and this really, the Estonian government sort of started to realize in the 90s that actually the needs of the government, but also private sector and other partners in digitalization are rather similar. And this came to joining forces and really sharing our resources.
So in the 90s, the Estonian government, together with the private sector, started to develop, for example, a digital identity that was mentioned, and we do use one digital identity across the government, but also private sector. And this is actually one of these important, or this led actually to one very important precondition for the DPI to work, and this is really the habit of working together and building trust. Because on the one hand, yes, we can build trust by setting principles, having a great legal framework, but it is definitely not enough. All the partners need to feel that they equally contribute, and they equally also take responsibility, and at times also risks. And the last thing we often forget, that in order to make things work, sometimes we must be ready to fail and take responsibility for this. So this would be perhaps some of the takeaways from Estonian side. Thank you so much.

Moderator – Lea Gimpel:

Well, I think failing for governments and taking responsibility, that’s definitely a challenge for many still, so that’s probably something we can also learn from Estonia, if there’s anything that you want to share. I really like this notion of, you know, building cooperation and trust, and involving everyone in this effort, right? So as Amandeep explained in his initial statement, it’s a multi-stakeholder initiative. So this idea of a DPI safeguards framework, and it’s really about involving everyone to take part in developing these principles, but also, you know, defining how to move into practice with that. And I would now like to, well, ask you as my panelists to react to what you’ve heard. So Nele already summarized bits of it, but we have a bit of time for a rather open discussion. Before, I will open it also to the audience and ask you to raise your questions, both online as well as here on site. So another 10 minutes for the open discussion among the panelists, and then I will open up, because I already saw a hand over there. So it’s definitely a topic where many questions exist and we need to discuss. But please, first, if any of you want to react to what you’ve heard, please go ahead. And if there’s nothing, I have a range of questions, of course, as we’re prepared.

Henri Verdier:
Maybe I could quote an Indian friend from Bangalore that told me recently, Europe, you did build your prosperity and your independence through public infrastructure, rails, roads, train, water. And then suddenly, you did stop at the end of the 70s, for some ideological reasons. And the economy did continue to evolve. And now, digital identity, infrastructure for payment, are as important as roads a century ago. And it did convince me.

Eileen Donahoe:
So I do have a question. We talked about the tension between speed and standards. Amandeep, you said it right at the top, that, you know, you’re talking about the need for global standards. And that works for me, because I think of the human rights framework as a global standard, as a basis and an anchor for thinking about these things, already universally applicable, and it’s an understood language. However, when I think about it, in the context, even of my own government, and the approach that I just described, it strikes me that the way many governments think about providing public services, pre-digital, is that it’s their job, it’s government providing services. And so I think all governments are, most governments, many governments, hopefully, are learning about multi-stakeholder process and understanding when it comes to digital technology, they need help. And that means including the technical community, civil society, academic experts, etc., private sector. I think there’s been progress there. If you add, then, the global multi-stakeholder approach to governments, that’s a bigger leap. So part of me, I raised this because I think that’s where the human rights framework can help. Because governments will be comfortable that they’ve already signed up for this, and they know what it is, and there’s a sense of trust. Otherwise, I think governments will be reticent to, and even in terms of political discourse domestically, the idea of including a global multi-stakeholder input process might seem challenging to people who haven’t been exposed to global multi-stakeholder process. So that’s one of the areas I see that we really need to help governments sort of jump ahead and collapse this tension between local, domestic and international.

Amandeep Singh Gill:
Yes, if I may jump in quickly, I think building on Eileen’s point, I think the foundations are essentially twofold. There is human rights, the international human rights commitments, and there’s the SDGs framework, leaving no one behind, the 17 goals, gender equality, even good governance, zero hunger, removing poverty. So those are our foundations. And the opportunity from local to global is that we have next year the Summit of the Future. So the Global Digital Compact is going to be one of the deliverables for that summit. So how can we, when we are translating the vision of the Global Digital Compact on accelerating progress on the SDGs, addressing the digital divide, alongside how can this safeguards framework, this enabler, as Rob put it, for the DPI movement overall, how can that be a concrete offering to support that vision and to take it forward? So in a sense, we are going from that agreement among the 20 in the G20, which is pathbreaking, in New Delhi, to a larger framework where you have 193 countries, civil society, private sector coming together to endorse this movement. And my last point is on this, the private sector aspect, because it is not just government services, public services we are talking about. What DPIs do is they create an innovation ecosystem. They, through this combination of common rails, guardrails, they lower the entry barriers to innovation, linking with Henri’s point about the digital economy, the importance of a dynamic national digital economy, where you have, yes, citizen facing government services, but you have businesses, whether they are coming from FinTech, etc., who are boosting the demand for digital services and digital products. So there is a supply side paradigm on infrastructure that we are used to. But DPIs are more complex and more sophisticated, because they play on demand as well.
A farmer who really doesn’t have an incentive to connect, you know, can the DPI’s and the services provided on DPI’s, both publicly and privately, can they create that demand for that farm to say, okay, I need to, you know, plug in as well, in an empowered way.

Speaker 1:
Yeah, maybe just to stress some aspects which I believe are important when we talk about this DPI movement or discourse. What I really like about DPI is that it reminds us of the basics. When we look, for example, at the conversations here over the past days, and generally this year, and I’m sure also next year, it is all about very advanced things: AI, the Internet of Things, and there was a blockchain buzz a few years back. At the same time, many of the governments and societies talking about AI and everything else have not yet fixed the very basics: data governance, digital authentication, and so many other things. So I think DPI has spotted it well that we need to have this basic public infrastructure as a base in order to move further. The second point relates to the myth that Amandeep and Henri and others also pointed to, and it’s really about the role of the private sector. DPI, or good digitalization, does not mean that the government does everything, but it is really about sharing what it does, and making sure that important principles like security and privacy are guaranteed, because this is an obligation that differs from the private sector’s. It is the public sector’s task to make sure that people feel good in this virtual world. And the third one, and this is maybe a call also from my side, is really about sharing and reusing. There are so many governments making their tools and source code available. We see a lot of sharing; we don’t see that much reusing, or doing together. And that comes to the question of why we don’t do that: probably trust issues, maybe readiness issues, different issues, but definitely a mindset issue.
So from Estonia, we have a good example together with our great neighbors in Finland and Iceland, where we have come together, we have created a foundation. We make sure that we develop certain digital products together, because we need them. And we also make them available to the world, and also make sure that they are updated to security and other requirements. So I think this is a good example of a DPI safeguard. But definitely I’m calling us to share and use more what others do.

Robert Opp:
Yeah, and actually, Nele, I was going to pick up on something you said earlier, but also on this comment. When I listen to these conversations, in all of our engagement this year as a knowledge partner to the G20, and in all of the other discussions that we have, I’m constantly thinking about how we take it from, as Amandeep said, the 20 or so countries to the 100, 150, 170 countries out there that haven’t done this, or haven’t done it as a package in the way we’re talking about. And what strikes me is that we can get our technology packages and our standards packages right, but there is still a mindset issue. That will take some time: somehow we have to learn how to incentivize people to share, to work together, to change the mindset, and to really understand that this is a movement we’re pushing toward. I think the safeguards initiative we’re talking about today is a step in the right direction. The implementation, and how we scale, is what is constantly on my mind. So my call to the genius that exists in humanity out there is: what are the scalable ways to ensure that we’re changing the mindset as we do this?

Henri Verdier:
I can make three brief comments. First, to Eileen and Amandeep: yes, you’re right, we have two sources, the SDGs and human rights, but maybe there is a third one, and you know it very well, Eileen: the OpenGov movement. What we learned 10 years ago is that everything the government does can create much more value if we unleash innovation. We saw this with data, with source code, and now we can see it with infrastructure. We have a lot of important lessons to learn from this movement. The second point, regarding the private sector: when I mentioned the long tradition of public service, public services can be delivered by the private sector. Public services don’t have to be free. The point is that a public service has to respect some rules and cannot be the man in the middle taking all the added value. It has to be neutral, with equal access; you can finance it, but you cannot capture the added value. So we can easily build it with the private sector. And the third and last point: Nele, you’re right, a lot of people try to share, few people try to cooperate, and that’s another way we can be inspired by the commons movement. Because building a community, working all together, is not the same thing as just sharing the code. You have to think about cooperation and governance, and even, if you go into the details, about good documentation, certain kinds of code. This is not so easy. You cannot just open your code like this.

Moderator – Lea Gimpel:
Thank you so much, all of you. I think there’s a lot of appetite for discussion also amongst the panelists, and I think we could go on and on here. I would like to open it to the audience now with their questions. If you want to ask something, please line up at the microphone, and for those of you who are sitting more in this area, we might also pass around the mic if there are questions. Please go ahead, and Moritz, be prepared to also raise some online questions, please. Maybe we collect some questions and then we allocate them among the speakers. Yes, go ahead, please.

Audience:
Thank you. Good morning, distinguished speakers, Lea for bringing them here, everyone. I’m Ale Costa Barbosa, from Brazil. I’m a fellow at the Weizenbaum Institute in Berlin, and also a coordinator of the technology sector of the Homeless Workers’ Movement in Brazil. So I’m representing those who really rely on DPIs, let’s say. And just a quick moment of self-marketing: I had the opportunity to coordinate research with Laotian in 2019 on identification for development in Latin America, which I think is really worth it. So it’s indeed a really old discussion. And recently, we’re about to launch a report on digital education in terms of infrastructure and sovereignty, held by the Brazilian Internet Steering Committee, which I think should be considered, and not only as more sector applications. I also liked hearing that geographic information systems are somehow being considered DPI. But I’d like to hear about the GDC, the Global Digital Compact, and its claims for sustainable DPIs. Taking into account that the last G20 meeting in India somehow came up with a common agenda, and that the following two host countries will be Brazil and South Africa, I’d like to hear how we can ensure that DPI will be sustainable, considering the environment and the last mile, if we’re not taking physical infrastructure into account. Thank you.
Good morning, everyone. I’m Mahesh Perera from Sri Lanka. It’s good to hear that we have been talking about digital government and connected government up to DPI. Now, in this journey, Estonia has been quite successful in digital government, connected government, from the inception. Many countries have failed in the design, in the implementation, and, as the moderator said, in the governance. In all three stages we made mistakes and did not achieve the expected results. So I’m quite pleased to hear that we are talking about DPI standards and the DPI safeguards initiative.
I would like to see us talk about DPI by design, DPI by implementation, DPI by governance. It’s all about standards: giving standards, certain measures that governments and implementers must follow in the design, in the implementation, and in the governance.
I think we have plenty of questions for the panel later, for their feedback, but a couple of things arise in my mind from listening to you. Thank you so much for bringing up the potential for surveillance, which is a major concern when we talk about interoperability. Is this mic off? It was off, but I think my voice was loud enough and everyone was listening; Marina was listening over there. I have a good theater voice. As I was saying, thank you so much for bringing up the potential for surveillance if we don’t get it right. We have to think about human rights by design from the design stage, before implementation. This is a main concern because of the tension we mentioned earlier about the speed of implementation: we are really running towards it, and we cannot implement the safeguards at the same time as we are implementing the infrastructure. They have to come before; otherwise, it won’t work. And this brings us to our main concern, which is that the concept of human rights by design needs to be unpacked. We need to say specifically what we mean by that. Even though we do have the human rights framework, all of the declarations, it needs to be said specifically what it means when we talk about privacy, about freedom of speech, and about rights that we usually do not touch on when we talk about technology, like the right to dignity and the right to autonomy. All of this is involved, because we are touching on very essential aspects of the human experience.
So when we build safeguards for these processes, saying that we need to have safeguards is not enough. The safeguards need a way of being implemented that allows the systems, if we realize they are causing harm, to be stopped or even rolled back. And we do not have that in many, many places. I work on digital ID, and in those systems the safeguards are about negotiating, maybe, the possibility of an eventual remedy, which is not enough at all. We do need remedy, but we also need to be thinking about stop and rollback, if needed, to be able to reconsider, re-evaluate, and implement changes. We had a lot of conversations yesterday around the fact that there is no model of digital ID that just works; it is a process of constant learning in context. So we need to be open to the infrastructure being adaptive and responsive to what might or might not be working. And the question, because I swear there is a question, is: are we giving any thought to the right to anonymity here? I think we all agree, because this is an international standard, that the right to anonymity is essential to freedom of speech and, generally, to civil and political rights, and also to rights like autonomy. But if we create models that are all-encompassing, requiring people to be identified at every step, in places where we don’t necessarily need that amount of information about the person, then where is the space for people to be private, to hide, to have the space they need as human beings? So my question is: we are implementing this infrastructure, yes, for the farmers, because we need them to get the services they need. But how much information do we need about the farmer?

Moderator – Lea Gimpel:
Thank you so much for all of these questions. Let me stop here and quickly summarize, and Moritz, if there is anything you can add to this package, please do say so. We had a question around the sustainability of digital public infrastructure; a question around DPI standards and what it means to implement DPI in a human-rights-by-design way, and what lies behind that; and a question about people’s right to anonymity. Is there anything you can add to this little package of questions? Yes, there was one question online on the enforceability of the safeguards framework, because what good is a nice standard if we can’t enforce it on the ground? Please go ahead, whoever feels inclined to answer.

Henri Verdier:
OK, so very briefly. You say that sometimes countries have failed, and I totally agree. I don’t know if you know, but before being a diplomat, I was the state CTO for France, so I had to conduct some of this transformation. And the dirty little secret is that governments are not always able to make it simple. Respecting open standards, building small reusable pieces, following agile methodologies is not very usual for governments. Imagine the state IT of France: I had to shut down projects that had cost one billion euros and had failed after 15 years of experiments. So the dirty little secret is that there is also a digital disruption within the history of government IT, and we have to change the way we develop. That’s not simple, but we can do it. And the simpler we develop, the more we build small pieces on clear standards, the easier it is to make an inclusive and sustainable infrastructure. You mentioned homeless workers. In Bangalore, the tuk-tuk drivers, you know, the rickshaw drivers, on their own, using UPI and Beckn, decided to avoid Uber. They did pay, but it was really inexpensive, maybe 50,000 euros, $50,000. They developed a solution to call a rickshaw, and they implemented some Indian rules: for example, you can bargain, you can negotiate the price. And that’s very interesting, because they told me, we were there decades before Uber, and we’ll be there decades after Uber, so why should we organize ourselves through Uber? So they decided to have direct access to a very important infrastructure. Just one word: I think very often about how an authoritarian regime can use infrastructure. I think we can implement some safeguards, and we will. But, in fact, an infrastructure is an infrastructure, and if you have evil purposes, you will use it for your evil purposes.
That’s the same with trains, with highways, with everything. So we cannot protect democracy just by implementing rules in the infrastructure. We need more.

Amandeep Singh Gill:
I don’t think there is anything to add to what Henri has just said. Maybe just our Brazilian friend’s question about the incoming G20 presidencies: that’s an opportunity, as Rob put it, to make this movement more sustainable and, in a sense, more contextually grounded in Africa and in Latin America, building on what has been achieved during the Indian presidency, where Ahi and many others played a key role in that amazing outcome. That needs to be taken to new areas, and, as Nele said, we need to bring in those learnings. In the Nordic region, you have this kind of ecosystem of mutual learning; can we build it in other places, and make DPIs more regional and context-sensitive? And on the point about enforceability that came online: what we can do by leveraging the GDC process is to ensure that, when there is a regular review and follow-up of the GDC principles and action framework, there is a regular discussion, as part of that, on how we are doing on DPIs, with this safeguards framework acting as a reference point. And there, obviously, we can create some soft pressure, some normative pressure, on those who are falling behind or not living up to the standard.

Eileen Donahoe:
So, I will just underscore several points made by our colleague from Sri Lanka, by Henri, and by Amandeep. I’m hearing two or three added reasons that global open standards are valuable. One: well-intentioned governments who may have failed, or would otherwise fail, need the help, so sharing the knowledge and the know-how is valuable. Second, to Mariana’s point, and Amandeep’s: global open standards actually increase the likelihood of accountability, because the standards deployed will be higher than they would otherwise be if governments are left to their own devices. And that applies both to well-intentioned governments who do things the wrong way with inadequate standards, and to the less well-intentioned. Amandeep, your last point underscores how the Global Digital Compact process itself, with follow-up, can add, if not full teeth, then real pressure. Even the international human rights law framework is not fully enforceable, in fact, but the soft norms, that kind of pressure, and global open eyeballs on everybody’s systems add value. So there’s a real benefit.

Moderator – Lea Gimpel:
I’m afraid we are running out of time here, sorry for that. Eileen, you already did a great job of summarising what we’ve been discussing. Thank you. I would like to add a few quick points as well before I give it over to my panellists again for a quick 30-second key takeaway from each of you. What stuck with me specifically is that the DPI Safeguards Initiative is also an opportunity to reflect, in a philosophical way, on the role of government: what is it that governments actually need to do in the digital era of the 21st century, and how can we make sure that they deliver on it, both in public service delivery and in being a partner for citizens and for the private sector? This connects quite nicely to the pragmatic approach as well, which I would describe as a society-wide approach that enables everyone to build on top of it, including the private sector, including building an innovation ecosystem, which I think is very important in helping digital economies evolve. Secondly, we talked about vehicles for creating commitment to the Safeguards Initiative. The Global Digital Compact was mentioned as one of these vehicles, but we also discussed the human rights conventions as well as all the work that has already been done in the Open Government Partnership and initiatives around this area; there is a lot of legacy work that we can build on. And thirdly, coming from the DPGA, of course, I really like this idea of sharing technology, sharing learnings, reusing them, and building cooperation around them. What stuck with me specifically is this idea of changing mindsets while implementing at scale. Thanks, Rob, for this great sentence. These are my key takeaways. Over to you, esteemed panel. What are yours?

Speaker 1:
Yes, I will actually end by responding to some of the questions, which I think are very important to keep in mind when we talk about digital public infrastructure. One is related to sustainability. Terms may come and go, and maybe in five years’ time we won’t talk about DPI anymore, or connected governance, or mobile governance. It is important not to lose what has been done before, because we often see that projects come and go, and sometimes governments give up, civil society gives up. But we need to remember that change takes time, and people are rather conservative. We see from Estonia that for certain services to be taken up, for some new change to be implemented, we may need six, seven, eight years. Take Internet voting, which we still carry out in Estonia: the first time, 0.8% of votes came via the Internet; now it’s almost 50% of voters. And the second point is really about how we can guarantee privacy, security, and now also human rights by design. It is not one country’s issue; it is a global issue. And it is not only a government issue, but increasingly also a private sector issue. So there comes the role of these global movements that we have been talking about.

Moderator – Lea Gimpel:
Please, a short answer.

Robert Opp:
Okay, well I have many takeaways, but I’ll only, I’ll mention one, I guess. Maybe, no, I’ll mention one. You know, I think what strikes me in this conversation and kind of connecting dots with a lot of other things is although these things take time and people are inherently conservative, we have waves that are coming globally like artificial intelligence and other emerging technologies that will challenge the human ability to keep up. And at the end of the day, governments are a human endeavor, civil society’s human endeavor. And the question then becomes how might we really construct this approach globally with the safeguards and with the scalable packages of human rights by design, privacy by design, the other things that we know are important, how might we do this and roll it out as quickly as possible while creating trust and while creating that sharing. And this is just kind of what keeps me awake at night, kind of what drives us to work on this, which is why the potential is so strong for this approach. I’ll leave it there.

Henri Verdier:
One word. Someone did ask how will you implement or enforce this approach. And I will say we do it because it’s the most efficient way, and we will prove it. To empower the people, to unleash innovation, to guarantee fundamental freedom is the most efficient organization for the economic and social development of a country.

Amandeep Singh Gill:
I’d just say that this is the beginning of a journey, that we just announced the initiative last month, and this is in fact the first official consultation. So going back to your point, Mariana, about some of these granular issues around right to anonymity, unpacking the concept. So we’re at the beginning, and with your help, we’ll be able to do that. Thank you.

Eileen Donahoe:
Exactly that point. Unpacking the concept of human rights by design. What does that look like in practice to do it well? Obviously cross-regional, cross-stakeholder group, but I would underscore cross-disciplinary in the intellectual sense, because it really is about people who understand norms, soft norms, and hard law, how it works, and technologists, and the innovators. And bringing them together with a shared language is also part of the challenge.

Moderator – Lea Gimpel:
So it’s the beginning of a journey. Thank you so much, everyone, my speakers, the audience. If you want to know more about the DPI Safeguards Initiative, they have a website where you can read up on it, and also subscribe to their newsletter if you want to know more about the ongoing consultations. And with that, I would like to end it and wish you all a great day. Thank you. Thank you. Thank you. Thank you. Thank you.

Amandeep Singh Gill: speech speed 139 words per minute; speech length 1314 words; speech time 566 secs

Audience: speech speed 175 words per minute; speech length 1128 words; speech time 386 secs

Eileen Donahoe: speech speed 136 words per minute; speech length 1179 words; speech time 518 secs

Henri Verdier: speech speed 151 words per minute; speech length 1303 words; speech time 519 secs

Moderator – Lea Gimpel: speech speed 171 words per minute; speech length 2484 words; speech time 873 secs

Robert Opp: speech speed 168 words per minute; speech length 1030 words; speech time 368 secs

Speaker 1: speech speed 149 words per minute; speech length 1227 words; speech time 495 secs

Equi-Tech-ity: Close the gap with digital health literacy | IGF 2023


Full session report

Man Hei Connie Siu

The speakers at the discussion highlighted the persistent disparities in access to care, despite the progress made in digital health. They argued that digital health has not necessarily improved health equity and mentioned two key factors contributing to this issue: the digital divide and low digital health literacy.

The digital divide refers to the gap between those who have access to digital technology and those who do not. This divide disproportionately affects disadvantaged communities, including low-income individuals, rural populations, and marginalized groups. As digital health relies on technology, those without access are unable to benefit from its potential advantages. This creates a further divide in healthcare, perpetuating existing health inequalities.

Low digital health literacy is another barrier to achieving health equity. Many individuals lack the necessary skills and knowledge to navigate digital health information and services effectively. This can prevent them from accessing healthcare resources, making informed decisions, and actively participating in their own care. Addressing this issue requires comprehensive frameworks and assessment tools that capture and assess various dimensions of digital health literacy. By understanding individuals’ abilities and needs in this area, tailored interventions can be developed to enhance digital health literacy and bridge the gap.

Policy solutions were proposed as a means to bridge the digital divide and ensure that digital health truly advances healthcare outcomes for all. It was emphasised that these solutions should be inclusive and consider the unique needs and challenges faced by marginalized communities. By actively addressing these disparities, policymakers can promote equity and ensure that the benefits of digital health are accessible to all.

Throughout the discussion, the importance of promoting inclusivity and equitable access to digital health resources was stressed. It was highlighted that this not only requires action at the policy level but also requires advocacy for strategies that effectively address the unique needs of marginalized communities. By prioritising inclusivity and equity, digital health initiatives can contribute to reducing health disparities and improving overall healthcare outcomes.

In conclusion, while progress has been made in digital health, disparities in access to care persist. The digital divide and low digital health literacy contribute to these disparities, hindering efforts to improve health equity. Policy solutions, comprehensive frameworks, and tailored strategies are needed to bridge this divide, enhance digital health literacy, and promote equitable access to digital health resources for all individuals and communities. By addressing these issues, digital health has the potential to play a significant role in advancing healthcare outcomes and reducing health inequalities.

Audience

The current state of digital health needs to be improved in order to effectively handle future pandemics, according to experts. With the potential for another pandemic like COVID-19, it is crucial to address the shortcomings of the existing digital health infrastructure. The main concerns revolve around overcrowded healthcare facilities during pandemics, which can lead to increased transmission rates and overwhelmed healthcare systems. To mitigate these challenges, it is essential for individuals to receive accurate and timely medical advice and treatment remotely.

There is a growing need to provide accessible treatment and advice without physical visits, especially for vulnerable populations such as the elderly or those with underlying health conditions, who may face higher risks during a pandemic. The reliance on telemedicine and digital healthcare services has become necessary to ensure their safety and well-being.

The argument for improving digital healthcare in pandemic response is compelling. The current system falls short of meeting the demands and implications of a crisis like COVID-19. Enhancing virtual consultations, remote monitoring, and telehealth services would allow individuals to access medical advice, receive prescriptions, and monitor their health from the comfort of their homes.

Additionally, digital health should aim to provide consistent and accurate medical advice and treatment. The decentralization of healthcare during a pandemic can result in inconsistencies and disparities in the quality of care received by individuals in different locations. By standardizing and improving digital healthcare services, individuals can have confidence in the advice and treatment they receive, regardless of where they are located.

In conclusion, the current state of digital health needs to be improved in order to effectively handle future pandemics. The concerns over overcrowded healthcare facilities, the need for individuals to receive accurate and remote medical advice and treatment, and the importance of providing accessible healthcare for vulnerable populations all highlight the urgency of enhancing digital healthcare services. By integrating telemedicine and digital health into the healthcare system, it is possible to enhance access, ensure consistent care, and improve overall pandemic response capabilities.

Geralyn Miller

The analysis examines the perspectives of various speakers on topics related to health, technology, and social determinants. One key point is the importance of addressing social determinants of health to improve health outcomes. It is emphasized that social determinants, including economic policy, development agendas, and social policies, have a significant impact on health outcomes, contributing to around 30 to 55% of health outcomes. The argument put forward is that tackling these determinants is crucial for achieving better health outcomes.

Another important theme is the use of data and technology to understand and address health disparities. The Microsoft AI for Good team has developed a health equity dashboard that provides insights into disparities and outcomes. Partnerships between Microsoft and other organizations, such as the Humanitarian Action Program and Bing Maps, are highlighted as a way to map vulnerable areas. The argument is that data and technology play a crucial role in addressing health disparities.

The analysis also emphasizes the impact of partnerships on social determinants. LinkedIn’s Data for Impact program is mentioned as an example of a partnership that provides professional data to organizations like the World Bank Group. LinkedIn’s data has informed a $1.7 billion World Bank strategy for Argentina. The argument is that partnerships with various entities can have a significant impact on social determinants.

Additionally, the promotion of digital skilling is highlighted as a way to contribute to health equity. Microsoft’s Learn program offers free online learning resources, including role-based learning paths for AI engineers and data scientists. The argument is that digital skilling is important for advancing health equity.

Microsoft’s responsible AI initiatives are also highlighted, emphasizing their focus on fairness, transparency, accountability, reliability, privacy, security, and inclusion. It is crucial to ensure that AI systems and their outputs are understood and accountable to stakeholders, including patients and clinicians.

Furthermore, the analysis advocates for a policy of accountability in AI development, ensuring that products are safe before being released to the public. Brad Smith, Microsoft’s President, has testified in the US Senate Judiciary Subcommittee, stressing the importance of accountability and safe AI deployment. The argument is that technology creators should take responsibility for the impact of their technology.

The value of cross-sector partnerships is also highlighted, particularly during the pandemic. Different types of partnerships, including governance-sponsored consortiums, privately funded consortiums, and community-driven groups, have played a crucial role. The argument is that cross-sector partnerships are invaluable in addressing health crises.

Moreover, the analysis recognizes the importance of standards work during the pandemic. The use of SMART Health Cards to represent vaccine status, the development of SMART Health Links encoding minimal clinical information, and the efforts of the International Patient Summary group in standardizing clinical information for emergency services are underscored. The argument is that the momentum around this standards work should be maintained and expanded.
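
As a concrete flavour of that standards work, the SMART Health Cards framework specifies a numeric QR mode: the card's signed JWS is prefixed with `shc:/` and each character is replaced by the two-digit value of its character code minus 45. A minimal round-trip sketch (the sample JWS fragment below is made up; a real card carries a signed clinical payload):

```python
def shc_numeric_encode(jws: str) -> str:
    # Each JWS character c becomes the zero-padded two-digit number
    # ord(c) - 45, per the framework's numeric QR mode.
    return "shc:/" + "".join(f"{ord(c) - 45:02d}" for c in jws)

def shc_numeric_decode(payload: str) -> str:
    # Reverse the transformation: read digit pairs and add 45 back.
    digits = payload.removeprefix("shc:/")
    return "".join(chr(int(digits[i:i + 2]) + 45)
                   for i in range(0, len(digits), 2))

# Made-up JWS header fragment, purely to demonstrate the round trip.
sample = "eyJhbGciOiJFUzI1NiJ9"
encoded = shc_numeric_encode(sample)
assert shc_numeric_decode(encoded) == sample
```

The restriction to digit pairs lets the payload use the QR code's dense numeric mode, which is why the scheme fits a full signed record into a single scannable code.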

The analysis also acknowledges the challenge of keeping up with the pace of innovation. It emphasizes the importance of gatherings and dialogue among people with similar interests for advancing in the field, and advocates for the integration of technological training into the academic system.

In conclusion, the analysis highlights several key points relating to health, technology, and social determinants. It underscores the importance of addressing social determinants of health, utilizing data and technology to understand and address disparities, forming partnerships, promoting digital skilling, adhering to responsible AI initiatives, ensuring accountability in AI development, valuing cross-sector partnerships, acknowledging achievements in standards work during the pandemic, and addressing the challenges of innovation. It also recognizes the significance of gatherings and dialogue and the integration of technological training into the academic system.

Debbie Rogers

The analysis highlights the potential of mobile technology in Sub-Saharan Africa to improve health literacy, personal behavior change, and access to health services. When REACH was founded in 2007, a higher percentage of people in Africa had access to mobile technology than in the so-called global north or western countries, demonstrating the widespread availability of mobile technology in the region. REACH's maternal health program in South Africa has reached 4.5 million mothers, representing 60% of the mothers who have given birth in the public health system over the last eight years. The program has had several impacts, including improved uptake of breastfeeding and family planning.

Low-tech solutions, such as SMS and WhatsApp, can also empower individuals in their health. These low-tech solutions are highly scalable and can be designed with scale and context in mind. Given the ubiquitous nature of mobile technology in Africa, massive scale reach is possible, thereby increasing access to health information and services.
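
As a minimal sketch of how such a low-tech channel might tailor content, the snippet below selects a stage-appropriate maternal health message from gestational age. The message texts and trimester cut-offs are illustrative placeholders, not REACH's actual program logic:

```python
# Placeholder message bank keyed by trimester; a real program would
# localise language and literacy level per audience.
MESSAGES = {
    1: "Register at your nearest clinic for antenatal care.",
    2: "A check-up this month helps track your baby's growth.",
    3: "Know the signs of labour and plan transport to the clinic.",
}

def trimester(weeks_pregnant: int) -> int:
    """Map gestational age in weeks to a trimester (1-3)."""
    if weeks_pregnant <= 13:
        return 1
    if weeks_pregnant <= 27:
        return 2
    return 3

def message_for(weeks_pregnant: int) -> str:
    """Pick the SMS/WhatsApp text appropriate to the current stage."""
    return MESSAGES[trimester(weeks_pregnant)]

print(message_for(30))
```

Because the logic runs entirely server-side and delivers plain text, the same program works over SMS on a basic feature phone or over WhatsApp on a smartphone, which is what makes the approach scalable in low-resource settings.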

Additionally, designing digital health solutions with a human-centric approach and considering the larger system can enhance health literacy. By placing the human at the center and acknowledging their existence within a larger system, health literacy can be improved without widening the technology-related divide. Using appropriate language and literacy levels makes digital health services more user-friendly. Furthermore, making these services accessible for free or at a reduced cost decreases the barriers to access.

Ignoring the wider context and blindly implementing digital solutions can inadvertently increase the digital divide. It is important to build a contextual understanding of these solutions and their impact on the existing system; ignoring that context can lead to unintended consequences and exacerbate existing inequalities.

Addressing systemic issues is crucial for improving health in Sub-Saharan Africa. Currently, Sub-Saharan Africa has 10% of the world’s population, 24% of the disease burden, and only 3% of the health workers. Simply training more health workers without addressing these systemic issues will not improve the statistics and may even worsen the situation.

Telecommunication companies can play a role in promoting health equity and bridging the digital divide. The Facebook Free Basics model, for example, provides essential information that is free to access, and people who are given this free access to data then go on to use the internet more, making them more valuable customers. Collaborating with telecom companies to reduce message costs further enhances digital health access. As the reach of large-scale programs increases, the costs for telecom companies are reduced, benefiting both the companies and the access to health information for users.

Digital health solutions should work in harmony with the existing health system. Creating a digital health solution should not overburden the system, and feedback mechanisms are crucial to understand the impact of these solutions on the overall system.

Biases in creating digital health services can be reduced by having a diverse team. The biases that exist in these services are often a result of the people building them not being the ones using them. Having a team that is diverse in terms of gender and race can address these biases and ensure that digital health solutions are more inclusive and equitable.

During the COVID-19 pandemic, digital health played a crucial role in reducing the burden on healthcare professionals and empowering patients with information. Large-scale networks such as Facebook, WhatsApp, and SMS platforms provided quick and reliable information to people, proving the effectiveness and importance of digital health in times of crisis.

Long-term investment in digital health infrastructure is crucial for preparedness. Digital health platforms that served needs during the pandemic risk disappearing once the crisis passes, and they need to be maintained for future use. Another pandemic is inevitable, so preparation is necessary to ensure a timely and effective response.

Technology can be utilized as a great enabler to decrease health inequalities and improve digital literacy. By leveraging technology, health services can reach marginalized populations and bridge the gap in access to information and services. Digital health is a mature field with the potential for large-scale implementation, as evidenced by numerous case studies of successful implementations.

There is excitement and a positive view towards the role of youth in the evolution of the digital health field. Engaging youth and integrating their perspectives can lead to innovative solutions and advancements in the field. This aligns with the broader goals of SDG 3 (Good Health and Well-being) and SDG 4 (Quality Education).

In conclusion, mobile technology, low-tech solutions, and digital health have the potential to significantly improve health outcomes in Sub-Saharan Africa. Designing solutions with a human-centric approach, addressing systemic issues, collaborating with telecommunications companies, and considering diversity can enhance the effectiveness and inclusivity of digital health services. The COVID-19 pandemic has further emphasized the importance of digital health in reducing burdens on healthcare professionals and empowering individuals with information. Long-term investment in digital health infrastructure and harnessing the potential of technology are vital for achieving health equity, reducing inequalities, and improving overall well-being.

Rajendra Gupta

The analysis highlights the importance of digital health training for various stakeholders in the healthcare sector. Firstly, it emphasises the need for policy makers to be adequately trained in digital health. The International Society of Telemedicine and E-Health, with members in 117 countries, is an influential body in promoting digital health training. Additionally, the World Health Organization (WHO) established a capacity building department in 2019 to support policy makers in this area.

Moreover, it is essential for frontline health workers to receive affordable and accessible digital health training. In India, ASHA workers, who are the first responders in healthcare, will be offered training for as little as $1 in the next two months. This will enable them to effectively utilise digital health tools and technologies in their work.

Patients also need to be trained to use digital health technology effectively. They should be educated on how to open an app, use it, and understand privacy and security measures. The International Patients Union is actively involved in training patients to use digital technology, ensuring they can benefit from its potential in managing their health.

The analysis also highlights the role of governments in addressing health equity and the digital divide, particularly in low- and middle-income countries (LMICs). Governments such as India's have launched initiatives to provide digital healthcare access to underprivileged populations. For instance, India offers free telemedicine services through 160,000 health and wellness centres across the country. Additionally, the government has rolled out 460 million health IDs, with plans for 1 billion under the digital health mission. These efforts help bridge the gap in healthcare access and promote health equity.

A well-crafted policy and substantial government investment are deemed essential for the successful implementation of digital health programs. The Indian government, for instance, has established a national digital health mission and is investing in advanced systems like artificial intelligence and natural language processing to enhance telemedicine services. They are also rolling out the Ayushman Bharat Health Account number (ABHA number) to further support digital health initiatives.

Digital health is seen as a promising solution for health inequity and has the potential to bridge the gap between urban and rural healthcare service delivery. Technologies such as conversational AI and chatbots can offer basic health consultations for routine problems, while the creation of 460 million health records in India demonstrates the progress being made in digitising health information.

The analysis also acknowledges the role of technology during the COVID-19 pandemic. It highlights fast-track vaccine development through global collaborations and the use of artificial intelligence for repurposed drug use. The delivery of 2.2 billion vaccinations digitally through a COVID App further demonstrates the readiness of technology in responding to the pandemic.

The momentum of using technology in the health sector must be maintained, with government incentives and flexibility in telehealth during the pandemic playing a crucial role. Additionally, digital literacy is important for anyone in the health sector, with initiatives such as the Digital Health Academy collaborating with Google to create developers for health. Courses on robotics, artificial intelligence, and digital health are being developed to ensure that individuals at all levels of the healthcare sector possess the necessary skills.

It is further highlighted that those who do not understand digital health risk becoming professionally irrelevant. Therefore, it is crucial for healthcare professionals, including doctors, to stay updated on digital health developments to better serve informed patients.

The analysis points out that scalability is crucial in healthcare. This means that the ability to expand digital health initiatives and ensure they are accessible to all is of utmost importance in order to achieve the desired impact in improving healthcare delivery.

Overall, the analysis underscores the importance of digital health training for policymakers, frontline health workers, patients, and the broader healthcare sector. It highlights the role of various stakeholders, including private organisations, civil society, and governments, in promoting digital health literacy, addressing health equity, and bridging the digital divide. The analysis also highlights the potential of technology in managing healthcare, particularly during the COVID-19 pandemic. Moreover, it emphasises the need for digital literacy and scalability in order to maximise the benefits of digital health in the healthcare sector.

Yawri Carr

The analysis delves into several key topics related to digital health and technology. One of the main focuses is the Responsible Research and Innovation (RRI) Framework, which aims to harmonise technological progress with ethical principles. The framework advocates for policies that preserve digital rights and establish mechanisms of accountability. This is seen as crucial in guiding the development of digital health technologies, ensuring that they are ethically sound and aligned with societal values.

Ethical considerations in the development of digital health technologies are explored further. It is argued that in competitive environments, where efficiency, speed, and profit are prioritised, ethical concerns can be compromised. This tension between ethics and industry objectives highlights the need for a careful balance between technological advancements and ethical principles, ensuring that technology is developed in a responsible and sustainable manner.

The involvement of youth in digital health is highlighted as a significant factor in bridging the digital divide and enhancing digital health literacy. Youths can play a crucial role in the research process, ensuring that interventions are culturally sensitive and address the specific needs of their communities. Innovation challenges and mentorship programmes are seen as powerful tools for guiding youth in the development of their ideas. Additionally, digital health literacy programmes can be initiated to equip young individuals with the necessary skills and knowledge to navigate the digital health landscape effectively.

The analysis also emphasises the importance of youth participation in internet governance policies. By actively engaging in discussions and decision-making processes, young advocates can ensure equitable access to digital health resources. It is argued that youth coalitions can amplify their collective voice on topics such as digital health equity, ultimately driving positive change and promoting inclusivity in healthcare.

Innovation hubs are suggested as a collaborative platform where young innovators, healthcare professionals, and policymakers can come together to create solutions for digital health challenges. The involvement of supportive companies and resources can aid in filling innovation gaps and promoting meaningful advancements in the field.

During a pandemic, telemedicine and the implementation of robots are highlighted as crucial. Telemedicine enables the delivery of remote healthcare, minimising contact and reducing the risk of contagion for healthcare workers. Robots, on the other hand, can perform tasks considered dangerous or dirty, thus protecting the health of patients and medical professionals.

The analysis also supports the initiative of Open Science, emphasising the importance of open access to data and research. Costa Rica’s proposal for an open science initiative to the World Health Organization (WHO) is highlighted as a positive step towards facilitating collaboration and partnerships for the advancement of digital health technologies.

The role of technology in emergency situations is underscored in the analysis. It is argued that technology can help protect healthcare professionals and patients during emergencies, providing essential support and resources to mitigate risks and ensure effective healthcare delivery.

Finally, the analysis recognises the value of ethicists’ work and emphasises the importance of their active involvement in discussions about responsible AI. Ethicists are seen as vital in ensuring that the development and deployment of AI technologies align with ethical considerations and respect for human values.

In conclusion, the analysis provides a comprehensive examination of various aspects of digital health and technology. It highlights the importance of ethical considerations, youth engagement, innovation hubs, and the role of robots and telemedicine. The insights gained from this analysis further emphasise the need for responsible and inclusive development of digital health technologies, while recognising the value of collaboration, inclusivity, and ethics in driving positive advancements in the field.

Session transcript

Man Hei Connie Siu:
So, hi, everyone, both on-site and online. Welcome to our workshop titled Equity, Closing the Gap with Digital Health Literacy. My name is Connie. I’m a 22-year-old biomedical engineering student and also a United Nations International Telecommunication Union Generation Connect youth envoy with a passion for internet governance. So, in the next 85 minutes, we’ll be exploring how digital technologies have transformed healthcare, especially during the pandemic. However, despite progress, digital health has not necessarily improved health equity. Low digital health literacy and the digital divide are still persisting, in turn creating disparities in access to care. So, in this session, we will discuss strategies to enhance digital health literacy and identify measures to promote equitable digital health access. Our goal is to find innovative policy solutions that bridge the digital divide and ensure that digital health truly advances healthcare outcomes for all. Thank you all for joining us on this important journey and let’s get started. We have three key policy questions that will guide our discussion today. How can comprehensive frameworks and assessment tools be developed to capture and assess different dimensions of digital health literacy, ensuring holistic understanding of individuals’ abilities in navigating digital health information and services? What strategies towards health equity can be adopted to ensure digital health literacy programs effectively address unique needs and challenges faced by marginalized communities, promote inclusivity and equitable access to digital health resources? And also, how can partnerships between key stakeholders, including healthcare providers, educational institutions, technology companies, and governments be leveraged to enhance digital health literacy skills, foster collaboration and knowledge sharing to advance health equity? Our panelists will be addressing these issues today. 
So if you would like to ask a question towards the panel, we will have a Q&A session at the end for on-site participants. And online participants may use the Zoom chat to type and send in your questions. And my online moderator, Valerie, will be helping me with them. So without further ado, to kick off our discussion, I would like to introduce our esteemed panelists who will share their insights on these matters. First, joining us online, we have Ms. Geralyn Miller, an innovation leader driving change in healthcare and life sciences through AI. She is a senior director at Microsoft in product incubations, Microsoft Health and Life Sciences Cloud, Data and AI. And she's also the co-founder and head of AI for Health, which is Microsoft AI for Good Research Lab. And then we have Professor Rajendra Gupta joining us on site here today, a leading public policy expert with vast experience in policymaking. And he's been involved in major global initiatives on digital health and holds several key positions in the digital health arena. He's also the founder of many pathbreaking initiatives like Project Create and organizations working for digital health. And next we have Ms. Debbie Rogers joining us on site as well. She's an experienced leader in the design and management of national digital mobile health programs and the CEO of Reach Digital Health, aiming to harness existing technologies to improve healthcare and create societal impact. And last but definitely not least, we have Ms. Yawri Carr joining us online. She's an internet governance scholar, youth activist and AI advocate. And she's also a digital youth envoy for the ITU like me and a global shaper with the World Economic Forum with her work centering on responsible AI and data science for social good. Now let's begin section one of today's workshop on low digital health literacy and strategies. And I would like Ms. Geralyn Miller to take the floor first. 
So what research and development initiatives, for example, including the creation of comprehensive frameworks and assessment tools, is Microsoft pursuing to address the multifaceted challenges of low digital health literacy? And additionally, can you highlight your thoughts and innovative strategies and partnerships that Microsoft is employing or supporting to enhance digital health literacy among marginalized populations with a focus on inclusivity and equitable access, especially in low income and rural areas? Ms. Miller, over to you.

Geralyn Miller:
Yeah, great, thanks. And thank you for inviting me today to participate in this. So the lens I’m gonna take from this is really based on something that is known as social determinants of health. So I wanna start by defining and sanity checking that social determinant of health is a non-medical factor that influences health outcomes. So this is the conditions that people are born, work and live in, and the wider set of forces that shape conditions of our daily lives, right? So this includes things like economic policy and development agendas, social norms, social policies, racism, even climate change and political systems. And this affects about, from research, we know that this is about 30 to 55% of health outcomes. are actually really dependent on social determinants of health. So when you want to think about health equity in digital literacy, it’s really important for two things. First, to understand the problem based on data. And I’ll share a little bit about what Microsoft Research is doing in that area. And the second is to open your mind and have a willingness to address the underlying, often systemic problems that affect health outcomes. And that includes social determinants of health. So Microsoft has some things that we’re doing to understand the problem with data, including the Microsoft AI for Good team has built something that we call a health equity dashboard. That is essentially a Power BI dashboard that takes a number of public datasets and allows one to look at them from a geography perspective, slice and dice the data by rural, suburban and urban populations, and then also examine different health outcomes, including things like life expectancy. So that’s the first thing, right? Is really being able to understand and visualize the problem itself. So I invite you to actually have a look at that information. There’s a number of other things that from a Microsoft perspective, we’re doing to look at on the social determinants of health side. 
So I’ll point for example, to some of the work we’re doing on climate change. We announced a climate change research initiative that we call MCRI, which is really a multidisciplinary research initiative that is focusing on things like carbon accounting, carbon removal and environmental resilience. We also have our Microsoft AI for Good research lab and our humanitarian action program. They have, for example, worked with a group called Humanitarian Open Street Map Team or HOT, which partnered with Bing Maps to map areas vulnerable to natural disaster and poverty. So that’s an example of some of the work out of the research lab and the humanitarian action program coming together to help give relief teams information to respond better after disasters. There’s also a lot of work that we have happening from a Microsoft perspective that ties more directly to economic development and digital skilling. So we have some work out of LinkedIn, something called the Economic Graph, which is a perspective or a view based on data of more than 950 million professionals and 50 million companies. LinkedIn, which is a Microsoft company, also has a data for impact program. And this program makes this type of professional data available to partner entities, including entities like the World Bank Group, the European Bank and others. So it’s data on more than 180 countries and regions, and this is at no cost to the partner organizations. An example of the impact of this type of data, this data for impact information was able to advise and inform a $1.7 billion World Bank strategy for the country of Argentina. And then there’s also the Microsoft Learn program, which is a free online learning platform enabling students and job seekers to expand their skills. So role-based learning for things like AI engineers, data scientists and software developers, hundreds of learning paths and thousands of modules are localized in 23 different languages. 
So in summarizing, I just wanna say that we look at this from a holistic broad perspective as digital health literacy and digital skills as part of the social determinants of health and the work that we’re doing to support those.

Man Hei Connie Siu:
Thank you very much, Ms. Miller. And now moving on to Ms. Debbie, as an experienced leader in the design and management of national mHealth programs and the CEO of Reach Digital Health, can you share your thoughts on digital health literacy, digital divide and health equity, effective strategies for enhancing digital health literacy among marginalized populations, particularly in resource constraint settings? And additionally, how can partnerships between nonprofit organizations like Reach and private sector mobile operators be strengthened to promote digital health literacy among women and marginalized communities addressing gender-based barriers and limited resources while contributing to bridging the digital divide?

Debbie Rogers:
Thanks very much. So I think the first thing just to talk about is a little bit of the context. So we work primarily in Africa. To give you an idea around inequality in health in Sub-Saharan Africa, we have 10% of the world’s population, 24% of the disease burden, and only 3% of the health workers. And so we really do have the odds stacked against us in a time when we’re supposed to be going towards universal healthcare, which quite honestly is a pipe dream if you look at where things are at the moment. While we’ve made some progress in addressing maternal and child health and addressing infectious diseases such as HIV, we are getting an increased burden when it comes to non-communicable diseases. So the burden is just increasing, not decreasing. And so really, if we follow the same. patterns over and over again and we keep just training more and more health workers and not addressing the systemic issues or relieving the burden from the health system, then there’s absolutely no way that we’re going to be able to improve these stats. We’re going to go backwards and not forwards. And so I think I’m fairly optimistic actually because I think that digital, and particularly mobile, has the opportunity to really address some of these issues in a way that many other interventions don’t. REACH Digital Health was founded in 2007 with the idea that the massive increase in access to mobile technology in Africa, at the time more people in Africa had access percentage-wise to mobile technology than in the so-called global north or western countries, was a way for us to leapfrog some of the challenges that we’ve had in the global south and to actually address some of these issues. And we really have been able to see that. We have been able to see how the access to information and services through a small device that’s in the palm of many people’s hands has been able to improve health, both from a personal behavior change perspective but also health systems as a whole. 
And so what we primarily focus on is using really, really low-tech but highly scalable technology. So things like SMS, WhatsApp, these are the things that everybody uses every day to communicate to their family and friends. And we use that to empower them in their health, help them to practice healthy behaviors, to stop unhealthy behaviors, and to access the right services at the right time. And with the fairly ubiquitous nature of mobile technology in Africa, we’ve been able to reach people at a massive scale. So for example, we have a maternal health program with the Department of Health in South Africa. It’s been running since 2014. We’ve reached 4.5 million mothers on that platform, but that represents about 60% of the mothers who have given birth in the public health system over the last eight years, which percentage-wise is huge. And we’ve been able to see that this has had impacts such as improved uptake of breastfeeding, improved uptake of family planning, and really has seen not just an individual change but a more systemic change with the ability to understand what is the quality of care on a national scale for the Department of Health. in South Africa. And so we really do believe that if you harness the power of the simplest technology, if you design for scale with scale in mind, if you design with understanding the context, then you can actually use digital to be able to increase health literacy. And so it’s not all doom and gloom. It’s not just about the fact that digital is always excluding other people. It can be an enabler, but only, of course, if we consider the wider context and we don’t go blindly into things and ignore the fact that this could be something that increases it. And so I think I’ll talk a little bit later more about some of the strategies that can be used, but I think two things to remember is design with the human, not patient. 
I don’t like the word patient, but in digital health we tend to use that word, with the human at the center of what you’re trying to do. And design understanding that you are a part of a bigger system, and this is not something that exists by itself. And if you do those two things, not only will you be able to improve health literacy, but you’ll be able to do so in a way that doesn’t widen the divide that many technologies already put in place.

Man Hei Connie Siu:
Thank you very much, Ms. Debbie. Moving on to Professor Gupta, with your extensive experience in policy development, digital health education, and founding the world's first digital health university, can you share your thoughts and offer key policy recommendations that governments and international organizations should prioritize to comprehensively enhance digital health literacy, especially amongst marginalized populations? Additionally, can you share insights into successful and scalable educational strategies and approaches that have effectively improved digital health literacy, with a focus on adapting these methods globally to meet health care scaling needs for digital health?

Rajendra Gupta:
Thanks, Connie. Firstly, I congratulate you for picking up this very important topic. And secondly, I’m a little worried for. for such a long question because after 5 p.m. almost like I’m half asleep. It’s been an engaging session throughout the day, but yes, it’s a very important topic. It keeps me awake, but pardon me for my incoherence. But let me give you a little backdrop of why this topic is important. There is an international society called International Society of Telemedicine and E-Health. It’s been around for a quarter of a century and has memberships in 117 countries. So way back in 2018, I said that digital health has two opportunities and two challenges, but the two challenges are like we have reached a stage of technical maturity. Give me a challenge, I’ll give you 100 solutions. But where we lack is organizational maturity. People are not trained enough to leverage technology that’s available, so I said let’s look at capacity building. I think the issue that you brought up. So 2019, they formed the Capacity Building Working Group, which I chair, and post that, we have done two papers on capacity building. One is listing the kind of people we need to train across digital health, and second, we have done a deep dive and released that in partnership with World Health Organization. So there is, for those who are looking at what kind of capacity we need, the ISFDH website has a list, two papers written on this topic. And then 2019, WHO set up their capacity building department, which is a very recent thing. So I think there is a lot of focus. And now coming back to what my experience was. So having pushed various organizations to do that, but I still relied, we were just doing policy papers, and policies take time to translate. I mean, people like Debbie would need people to help her in technology. I mean, a policy paper can’t help her. She needs people trained in digital health. 
So in 2019, I set up the Digital Health Academy, which is now the Academy of Digital Health Sciences. We have started a course for doctors and for people in healthcare. It’s a global course, fully online, as a digital course should be. But to your point, that alone would not solve my biggest overall challenge. I am training doctors, and it is so shocking, and I’ll put a context to that: we had a half-page advertisement in a leading newspaper in India. A very senior doctor called me and asked, Rajen, what’s digital health? So I was shocked that even doctors first need to ask what the term digital health means. I’ll give you another example. There’s a company that works exclusively in the data domain. So I called the founder, who is a doctor, and asked, do you do digital health? He said, no Raj, we don’t do digital health. I said, do you use data? He said, we only use data. So I said, then you only do digital health. So the challenge is first that people should know the definition of digital health. That is the level we have to get to, and which is needed across the ecosystem. Right from the bureaucracies and the ministers and the ministries of health, they need to understand what digital health is, because they come for a fixed tenure or they get transferred. If they are sensitized at that level, then things flow down the line, because government makes policies which get implemented as programs. So that’s one level of competencies that I’ve told WHO to look at, because my experience in WHO meetings is that bureaucrats come, they spend two, three days in Geneva or New York, and then they go back and forget it. So there has to be a course for policymakers at the highest level, which probably WHO or any organization could do. The second level is the courses for doctors and health professionals. And the third and most important, which we are launching in the next two months, is frontline health workers.
But understand the challenge that frontline health workers are either doing voluntary service, like the ASHA workers in India, which is a million workers. They are our first line, our first responders. Don’t expect them to pay you $1,000 or $100. So we had to actually innovate and convince one of the Institutes of National Importance that we need to bring out $1 trainings. So we should train people for as low as $1, and this we’re doing globally. If I’m able to train frontline health workers, I think I would have addressed the biggest challenge for healthcare. Now one of the government’s agencies has approached us to work with us. As such, on capacity building, I think governments just focus on the program minus capacity building, which is a serious lapse. And I think this is across the board. I think we would agree that we are very focused on saying maternal health, mobile application; child health, mobile application; rural health, telemedicine; but who will do it? We don’t know. The people who are going to use these don’t even know how to use a mobile phone. They do not know how to log in to the account. So we need basic training, and I think this is where private organizations and not-for-profits come in, and then government steps in very late, let me tell you that. They are not the ones who would initiate, but once you go with the program and talk to them, they will partner. So as a policy, I’m glad, Connie, that you have put a session on this, something that our Digital Health Dynamic Coalition should have done, but they only allow one session for a Dynamic Coalition. So we had our session, which we are doing tomorrow, but now that you have taken it up, it puts the spotlight on this important topic. At ISfTeH, there are policy papers, they have been given to WHO, WHO set up the Capacity Building Department, but honestly, nothing much has moved between 2019 and 2023, four years.
We are still to look at it, and they’re still forming a committee, so I think it’s mostly going to be the civil society organizations and private sector that will take the lead. On the policy side, I have not seen documents that talk about it so far, so we will have to wait for normative guidance from WHO, which will still be, I think, a few years away. It takes time to build a document in WHO. How this will happen fast is like this. In India, we have a digital health mission, which has rolled out 460 million health IDs. This year, we will roll out one billion health IDs. Our health consultations, teleconsultations, have crossed 120 million. I think that is the first point, so I’m inverting the process from policy to let’s first have implementation. When the government rolls out at such a level and scale, automatically you will start feeling the need for trained people. I think this is one thing, but more than structured courses, it will be more a matter of continuous upskilling that everyone will need to do, because technology is also changing. Till last year, no one talked about generative AI. Now, people have started talking about generative AI. I think we need to keep that training fluid and make it more of a continuous upskilling program for people across healthcare. We are not waiting for government policies. We are rolling out, as the Academy of Digital Health Sciences, global programs. We are making them really affordable: $1 trainings for frontline health workers, for doctors, and for the industry, the postgraduate program. And we will announce undergraduate programs as well, because I think this is where we need to build capacity. So for now, I think policy interventions will happen. Overall, as part of health policy, everyone should include capacity building, and digital health is now an integral part of health. So digital upskilling is required for digital scaling.
So I think this is something that governments have to look at, and WHO should take a frontal role. So I would say more to WHO, and organizations like the one that Debbie runs, organizations like the ones that I run with my team. And more importantly, there are two people sitting in this room, Priya and Saptarshi. They run a patients union, the International Patients Union. Even if you train doctors, industry, and the frontline health workers, if patients are not trained, who will use digital? At the end of the day, they have to open an app and use it. They need to know what privacy is, what security is. So it’s on us, on people like them, to go and train patients in how to use digital technology. So it’s a multidimensional topic, and I’m happy that there’s a session dedicated to this. Unless we address this from a complete ecosystem perspective, we have not done justice to this topic. Thank you.

Man Hei Connie Siu:
Thank you very much, Professor Gupta. And now to Jari. As someone with expertise in responsible AI, digital rights, and a passion for the intersection of technology and society, how can policymakers craft regulations to ensure the responsible development and deployment of digital health technologies, especially for marginalized communities? And also, what role do you see for youth-led initiatives in enhancing digital health literacy, bridging the digital divide, and engaging with policymakers to drive policies that support equitable access to digital health resources? Over to you.

Yawri Carr:
Hello, everyone, dear organizers, participants, and guests. Thank you very much, Connie, for the organization, and thank you for inviting me. Well, in a world where technology and healthcare are more intertwined than ever, the responsible development and deployment of digital health technologies are of paramount importance. This is especially true when considering marginalized communities, where equitable access to healthcare is not just a goal, but a moral imperative. So in this case, I would like to mention the Responsible Research and Innovation framework as one of the guiding philosophies that serve as a roadmap for navigating the intricate terrain of AI in healthcare. At its core, RRI is a commitment to harmonizing technological progress with ethical principles. It places a premium on transparency and accountability, recognizing them as pivotal elements in the responsible development and deployment of AI technologies. In the realm of healthcare AI, RRI advocates for policies that not only uphold digital rights, safeguarding privacy and security, but also establish mechanisms to hold AI systems answerable for their decisions. It is a holistic approach that seeks to ensure that the benefits of innovation are realized without compromising ethical standards or jeopardizing individual rights. So who should be involved in a process of responsible research and innovation? Societal actors and innovators, scientists, business partners, research funders, and policymakers: all stakeholders involved in research and innovation practice, together with the public at large, from the early stages of R&I processes and through the process as a whole. And when? Through the entire innovation life cycle. And to do what?
So it is important to anticipate risks and benefits, to reflect on prevailing conceptions, values, and beliefs, to engage the stakeholders and members of the wider public, to respond to stakeholders, public values, and also the changing circumstances that are present in these kinds of processes, to describe and analyze potential impacts, reflecting on underlying purposes, motivations, uncertainties, risks, assumptions, and questions, and the huge number of dilemmas that could also emerge in such circumstances, to be open to reflection and collective deliberation in a process of reflexivity, and to integrate measures throughout the whole innovation process. And in which ways should we do this? By working together, becoming mutually responsive to each other, and of course, in an open, inclusive, and timely manner. And to what ends? What this framework proposes is to allow appropriate embedding of scientific and technological advances in society; to better align the processes and outcomes with the values, needs, and expectations of society; to take care of the future; to ensure desirable and acceptable research outcomes; to solve a set of moral problems; to protect the environment and consider impacts on social and economic dimensions; and to promote creativity and opportunities for science and innovation that are socially desirable and undertaken in the public interest. And how can these be applied specifically in the context of healthcare technologies? For example, there are academic projects and also societal projects. One example of an academic project is one from the Technical University of Munich, at which I am now studying. We have a project that is an AI-driven innovation, including a robotic arm exoprosthesis and an advanced version of a bimanual mobile service robot.
So to ensure the responsible and ethical integration of these technologies into broader healthcare applications, the developers from the Machine Intelligence Institute have collaborated with the Institute of History and Ethics of Medicine, as well as the Munich Center for Technology and Society. These teams are employing embedded ethics, incorporating ethicists, social scientists, and legal experts into the development processes. So they have initial onboarding workshops where these experts become integral members of the development team. They have been actively participating in regular virtual meetings to discuss technological advancements, algorithmic development, and product design collaboratively and interdisciplinarily. And when ethical challenges are raised, they are addressed as part of the regular development process, leading to adjustments in product design. An example involves the planning of model flats for a smart city, where initial designs focused on open-plan layouts. Embedded ethics highlighted in this case potential challenges for an elderly population unaccustomed to such arrangements, prompting a reconsideration of the layout, taking into account that this particular project had the elderly as its target population. This is why it is very important to look at the target population and actually see whether they are prepared for, and could adapt to, these kinds of technologies. So insights from this discussion influence the design process, emphasizing the importance of directly seeking future inhabitants’ perspectives in layout planning. And simultaneously, the project also involves interviews with various stakeholders, including developers, programmers, healthcare providers, and patients. Workshops, participant observations of development work, collaborative reflection, and case studies also contribute to active ethical consideration.
And while the project is also aiming to develop a toolbox to facilitate implementing embedded ethics in diverse settings in the future, several unresolved issues remain, relating to cultural settings and to corporate and organizational structures, because even in a research setting funded by public resources, the development of AI is predominantly situated in a fairly competitive landscape with a prioritization of efficiency, speed, and also profit. And also in the case of health, ethical considerations might be isolated, or are normally not given much importance when they directly clash with profit-driven motives. So taking ethical concerns seriously often creates a tension with industry objectives and faces the risk of being assimilated into broader corporate commitments to concepts like technological solutionism and market fundamentalism, which in the end prevent ethicists from actually doing their work and developing responsible healthcare technology. Normally, embedded ethicists may find themselves working within contexts that are characterized by pronounced power imbalances, particularly those of a financial nature. And it is probable that some form of enforcement measures will become very necessary in such environments, not just for the development of the technical aspects, but also for the work of the persons working on responsible development and deployment, so that regulatory frameworks, certification processes, or even voluntary initiatives within the organization can raise awareness of the kinds of issues that arise in these situations. And well, okay, I also needed to talk about youth-led initiatives, right? If I still have time. Okay, so, well, there are also a lot of ways in which youth-led initiatives and also marginalized communities could engage with responsible research and innovation.
So, for example, youth-led initiatives could connect with or try to participate in events such as this one. Universities and centers of education could also inspire the youth so that they can learn about telemedicine and how to develop telemedicine initiatives in their countries, especially in rural areas, as the professor was mentioning about India, where these populations don’t have the same access. Also, for example, community-based participatory research projects that involve communities in the research process, ensuring that interventions are culturally sensitive and address the specific needs of a population. Also, digital health literacy programs. And innovation challenges could be organized between students and youth so that they can also engage. And I also consider that the mentorship these students or youth can gain from experienced people is very important, because they need guidance, foundations, and examples of how they can develop their ideas. So thank you.

Man Hei Connie Siu:
Thank you very much, Jari. So while low digital health literacy is a challenge for all populations, it’s particularly harmful for marginalized communities. So in this section, we’ll discuss strategies for addressing health equity and the digital divide in the context of digital health. So let’s start this off with Ms. Geralyn again. In light of the session’s focus on health equity and the digital divide, could you share your thoughts and elaborate on specific policy measures and initiatives that Microsoft is advocating for or actively participating in to bridge the digital divide and promote equitable digital health access? And also, how is Microsoft addressing barriers faced by diverse populations, and how are these efforts contributing to advancing health equity? Over to you.

Geralyn Miller:
Yeah, thank you very much for the question. So I want to respond in this context to some of the comments that Dr. Gupta and Ms. Carr mentioned and really shine a light on the concept of artificial intelligence, generative AI, and what we at Microsoft call responsible AI as an example of policy. So one of my favorite quotes in this area is a quote by our Chief Legal Officer and President Brad Smith. And I’m going to paraphrase a quote I don’t have exactly, but Brad has a quote that basically says that when you bring a technology into the world and your technology changes the world, you bear a responsibility as the person that created that technology to help address the world that the technology helps create. And so from a Microsoft perspective, we look at this under the lens of something that we call responsible AI. Our responsible AI initiatives date back far before the birth of ChatGPT and generative AI and large foundation models and large language models, really back to about 2018, 2019. And we have a set of principles that we’ve established that are around how you design solutions that are worthy of people’s trust. So these are our principles, what we call our responsible AI principles. There are many people who have different principles around responsible AI. I’ll share with you ours. I would just offer that it’s something worthy of thought. And very often when I work with academic medical centers or healthcare providers who are starting to use AI or build and deploy AI models, I also offer to them, hey, you should have a position on responsible AI, right? Do your thought work, do your homework. You should have something that is consistent with your own values, your own entity’s values. But going back to what, from a Microsoft perspective, we believe those principles are. The principles are really based on fairness: treating all stakeholders equitably and making sure that the models themselves don’t reinforce any undesirable stereotypes or biases.
Transparency, so this is all about AI systems and their outputs being understandable to relevant stakeholders. And relevant stakeholders in the context of healthcare means not only patients who may be receiving the output of this, but also clinicians who may be using these as decision support tools or to do some type of prediction. Accountability: people who design and deploy AI systems have to be accountable for how the systems operate. And I’m going to do a click down on accountability in a second. Reliability: systems should be designed to perform safely, even in worst-case scenarios. Privacy and security: of course, those are underpinnings behind any technology, and AI systems as well should protect data from misuse and ensure privacy rights. And then inclusion, and this is all about designing systems that empower everyone, regardless of ability, and engaging people in the feedback channel and in the creation of these tools. There are some things I will drill down on a little on the inclusion front as well. So, as an example of the accountability piece, I’d like to share some things that our President Brad Smith was offering when he testified before the U.S. Senate Judiciary Subcommittee. This was back in the beginning of September, around September 12th, at a hearing entitled Oversight of AI: Legislating on Artificial Intelligence. So Brad highlighted a few areas that he is suggesting help shape and drive policy. One is really about accountability in AI development and deployment. Things like ensuring that the products are safe before they’re offered to the public. Building systems that put security first. Earning trust: so this is things like provenance technology and watermarks, so people know when they’re looking at the output of an AI system. Disclosure of model limitations, including effects on fairness and bias.
And then also really channeling research energy and funding into things that are looking at societal risk associated with AI. He also suggested that we need something called, you know, what he terms safety brakes for AI that manages any type of critical infrastructure or critical scenarios, including health. And, you know, when you think today we have collision avoidance systems in airlines, we have circuit breakers in buildings that help prevent a fire due to, for example, power surges, right? AI systems should have safety brakes as well. So this involves classifying systems so you know which ones are high risk, requiring these safety brakes, testing and monitoring to make sure that the human always remains in control, and then licensing infrastructure for the deployment of critical systems. And then from a policy perspective, ensuring that the regulatory framework actually maps to how these systems are designed so that the two flow together and work together. So that’s an example of the policy in action side of things. And from a Microsoft perspective, we put our responsible AI principles that I mentioned into action through our commitments at a policy level. Our voluntary alignment, for example, here in the US out of some of the things coming out of the White House. So voluntary alignment with commitments around safety, security, and trustworthiness of AI. And on one last point, I did wanna go back to the responsible AI principle and talk about inclusion. And so we’re doing some work from a Microsoft perspective in the health AI team that I am a product manager on to really look at how, when we have data that guides models, and either this is either custom AI models, or when we’re grounding large foundation models or large language models with data, how do we make sure that we understand the distribution and makeup of that data to ensure that their bias doesn’t creep in from the data perspective? 
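The kind of data-makeup audit described above can be illustrated with a minimal sketch. This is not Microsoft’s actual tooling; the attribute name, threshold, and toy records are illustrative assumptions only:

```python
from collections import Counter

def audit_representation(records, attribute, min_share=0.10):
    """Report each subgroup's share of the dataset and flag any
    group that falls below a minimum representation threshold."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        # (share of dataset, True if under-represented)
        report[group] = (share, share < min_share)
    return report

# Illustrative patient records with a hypothetical 'age_band' attribute.
data = ([{"age_band": "18-39"}] * 70
        + [{"age_band": "40-64"}] * 25
        + [{"age_band": "65+"}] * 5)
for group, (share, flagged) in sorted(audit_representation(data, "age_band").items()):
    print(f"{group}: {share:.0%}{'  <- under-represented' if flagged else ''}")
```

A check like this on grounding or training data would surface, before deployment, which populations the model has barely seen.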
And we’re also doing work, for example, on the deployment of models. How do you understand if models are performing as they intended? How do you monitor for things, something called model drift? So when models start to perform in a manner that isn’t how you think, right? When the accuracy starts to decline, and then what do you do when the models don’t perform that way? And this last part, the model monitoring and drift is some of the things that we have happening out of our research organization. So thank you.
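As a rough illustration of the model-drift monitoring described above, the sketch below tracks accuracy over a rolling window of recent predictions and raises a flag when it falls well below a baseline. The window size and tolerance are illustrative assumptions, not any vendor’s actual implementation:

```python
from collections import deque

class DriftMonitor:
    """Flag model drift when rolling accuracy falls well below a baseline."""

    def __init__(self, baseline_accuracy, window=100, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        # 1 = correct prediction, 0 = wrong; old entries fall off the window.
        self.outcomes = deque(maxlen=window)

    def record(self, prediction, actual):
        self.outcomes.append(1 if prediction == actual else 0)

    def drifting(self):
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough evidence yet
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance

# A model validated at 90% accuracy now gets only ~75% right in production.
monitor = DriftMonitor(baseline_accuracy=0.90, window=50, tolerance=0.05)
for i in range(50):
    monitor.record(prediction=1, actual=1 if i % 4 else 0)
print("drift detected:", monitor.drifting())  # prints: drift detected: True
```

When the flag trips, the usual responses are the ones the speaker alludes to: investigate, retrain, or pull the model while a human stays in control.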

Man Hei Connie Siu:
Thank you very much, Ms. Cherilyn. So now I want to move back to Ms. Debbie. Drawing from your experience in developing the digital strategy for a major telco in South Africa, how can telecommunication companies play a more significant role in advancing health equity and bridging the digital divide through innovative approaches and digital solutions? And also, what lessons can be learned from your work in South Africa that can be applied globally to improve digital health access?

Debbie Rogers:
Thanks. I think one of the most interesting examples of how mobile network operators have really had a big impact on decreasing inequities around health is the Facebook Free Basics model. You may not know what that was, but Facebook basically put together simple information through what looked like a little mobi site. And this was essential information that they felt everybody should have access to. And they worked with mobile network operators to zero-rate access to only that portion of Facebook, just that portion, not everything. And they were able to show that by providing essential information that is free to access, they were able to improve people’s literacy and use of data. So those people then went on to use more data and to use the internet more often, and therefore become more valuable customers to the MNOs. So by doing something like providing free access to essential information, there was also an increase in profit for the mobile network operators. And I think that’s a really interesting model to look at. I think very often we forget that it’s just as important for mobile network operators to be reaching as many people as possible as it is for those of us who are trying to improve health through something like digital health. And so if there are aligned priorities, then there are very good ways that you can work together. One of the ways that we’ve worked with mobile network operators in South Africa has been to reduce the cost of sending messages out to citizens of the country. And that’s been done not in a way that prohibits the mobile network operators from making a profit, but what it does do is make it completely free for the end user. So if it’s completely free for the end user, you’re reducing the barriers for them to be able to access this kind of information. But the reduced cost is then something that can be brought to the table because of the increased size of access.
So the more we scale out these programs, the more we’re able to see economies of scale, and the more worthwhile it then becomes for mobile network operators to engage with us. And so one of the very interesting models that’s been used was to reduce churn. If people can only access information, say, using an MTN SIM card, they’re less likely to switch to other SIM cards. And so being able to align the desires of a digital health organization or government with those of mobile network operators is incredibly important for being able to ensure that you’re working towards the same goal, but without anyone asking for any handouts, because that’s not going to work. I think when it comes to strategies for decreasing inequity, the one that we really need to talk about more is being human-centered. And that doesn’t just mean designing for people and occasionally having them attend a focus group. It means designing with them and ensuring that the service is actually something that they want to use, something that they love using. Make it easy and intuitive for them to use. No one starts a course on how to use Facebook before they use Facebook. We shouldn’t create services that need so much upskilling. We should create services that are simple and easy for people to use. You need to use appropriate language and literacy levels. And this is something that the medical fraternity often forgets about, because it is a very patriarchal society. Make it something that is at least close to free for people to access. We find that access to a mobile device is less of a problem than the cost of data, for example. So just because somebody has access to a device doesn’t mean that they’re going to be able to go and look up information, because they may not have data on their phones. So you can work very closely to reduce the cost or make it zero cost, and that’s really going to ensure that you reduce the barrier to access.
And then you really have to try and think about the system that you’re in. By creating a digital health solution, are you overburdening the health system that already exists, for example, or are you reducing the burden on it? Are you creating feedback mechanisms that mean you can understand the impact you’re having on the system itself, rather than working within a vacuum? Are you making sure that where a digital health solution may not be accessible to somebody, there is an alternative in place that does not rely on the digital health solution? We can’t just operate within silos; we have to think about the fact that digital health is just as much a part of health infrastructure as the physical facilities, for example. Until digital health is seen as just as much of an infrastructure, it’s going to be a fun project on the side and not something that’s going to drive systemic change. So it’s really important for us to think about that system. And then recognizing biases; I think Geralyn mentioned this. Very often the people who are creating digital health services are not the people who are using the digital health services. So this goes back to why human-centered design is so important, but it’s also important to understand that you will be introducing biases if the people who are building the system are not the people who are using the system. And so you have to look more systemically. Look at the makeup of your team. How diverse is the makeup of your team? I would assume, having been an electrical engineer myself, that it’s probably not particularly representative from a gender or race perspective. So look at the team that you have. How are you working to make your team more representative and therefore address some of the biases that are going to be put in place by having a non-representative team building out the systems? So there’s a bunch of things in there, but I guess in summary, build with the end user in mind.
Make it human-centered. Make it easy to use, appropriate, and intuitive. Design with the understanding that you work within a system and make sure that you don’t have unintended consequences and that you’re always feeding back to understand what the impact on the broader system is. And ensure that you think about the biases that are going to be inherent in the fact that the people building the system are not necessarily the people using the system.

Man Hei Connie Siu:
Thank you very much, Ms. Debbie. And now moving on to Professor Gupta. So based on your background in advising the Health Minister of India and drafting national policies, how can governments play a pivotal role in addressing the intersection of health equity and the digital divide, particularly in the context of health care access for marginalized communities and also what policy measures should be prioritized to ensure equitable digital health access?

Rajendra Gupta:
Thank you, Connie. This depends on the economic status of the country. So when you have an LMIC country like India, I’ll give you an example of what was done. We understand that there is a sizable population which is underprivileged, which is marginalized, so there was a scheme that was launched for 550 million people. And you have to understand that countries are at different phases of development, and they require investments in infrastructure, they require investments in health and education, and it’s not possible to give the amount that the sectors actually deserve. So what was done very carefully, and since I was involved in drafting the health policy I played a role in that, is that we carefully trod the path of saying let’s first make primary care comprehensive primary care. So first guarantee primary care that is comprehensive, which includes chronic disease management and all those things; then convert the sub-centres and primary health centres into health and wellness centres and put telemedicine in as a part of it. So what happens is that 160,000 health and wellness centres now across the country offer you telemedicine. Then we created the eSanjeevani programme, which is a telemedicine programme where you can get a doctor consultation for free, across specialities; that’s why it has 120 million consultations. And now what’s going to happen is we’re putting AI and NLP into that, so given that India has 36 states and union territories and people speak different languages and their dialects are different, a patient calling from a southern state will be heard by a doctor in a northern state in the doctor’s own language, and the doctor’s reply will be heard in the patient’s language.
So I think India has planned its strategy for addressing the vulnerable and underprivileged sections as it charts its course of development. One part is to integrate technology into care delivery right from primary care, and that has proven itself: as I said, 460 million health records, and 550 million people given insurance of a very decent amount, I would say, cover that typically only the middle class could afford. On the policy side, India, as we speak, is running probably the largest implementation of digital health happening anywhere. And I would make one point here: the government has not only to take stewardship but also ownership of investing in digital health. Debbie would understand very well that digital health is still figuring out its business model. That's why you see the largest companies have withdrawn from digital health; as many talks as they give at forums, their investments are in futuristic, probabilistic technologies, and the companies that forayed into it years ago don't exist on the map. So I think governments have to play a frontal role in investing, as the Indian government has done. They set up a National Digital Health Mission, rolled it out across states, and ensured that everyone has what is called the Ayushman Bharat Health Account number, the ABHA number. And we will probably be the first country to work towards what I have championed: let's work to make digital health for all by 2028. This is for those who work in health care, and more so in public health: forty-five years back in Alma-Ata, we promised health for all by 2000. Twenty-three years after the deadline, we are still not close to that. At least we can champion digital health for all by 2028.
If that is one objective we pursue as governments across the world, I think a lot of issues will get addressed, because a whole lot of planning will go into doing that. And it's doable. That's the only way you can address the issue of health equity. Because the practical part is that doctors who study in urban areas do not want to go to rural areas. They will not; even if you push them to, they will find a way to scuttle it. The only thing you can do is get technology into their hands through mobile phones. I think the systems are now fairly advanced. Tomorrow we are hosting a session on generative conversational AI in low-resource settings. So you can have chatbots interacting with people, addressing their basic problems, and 80% of the problems are routine, acute problems. So I think we need to leverage technology not only as a policy but as a programme. And there are best practices available, in India and in parts of Africa, but these are like islands of excellence. Forums like these are good places to discuss whether they can be mainstreamed from islands of excellence into centres of excellence, so we can replicate and scale those programmes. India probably has a good story as we speak about the scale-up of its digital health programme, but again, the key point is that the federal government has to be the funder of the programme. Where do you start? With a health helpline: if you really want to address the inequities, start a health helpline where people can pick up the phone, talk to a doctor or a paramedic, and get a consultation free of cost. Get into projects like eSanjeevani, which I think the country is offering to other countries as a goodwill gesture, where you connect district hospitals and ask doctors to allocate time for digital consultations. These programmes actually help you bridge the digital divide, along with the health and wellness centres.
A phenomenal experience: 160,000 health and wellness centres which have a telemedicine facility. So, picking up the cue, I would say it's time for implementation. Policy-wise, I think we all know it; we very clearly said it's getting integrated. In fact, I'd go a step further and say: if you're not into digital health, you're not into health care. Don't talk health care. That's the truth, actually. Thank you.

Man Hei Connie Siu:
Thank you very much, Professor Gupta. Finally, to Jerry, drawing from your experiences in speaking about youth in cyberspace and Internet governance, how can young advocates actively participate in shaping Internet governance policies to ensure that digital health resources are accessible and equitable for all, regardless of socioeconomic status or geographic location? And also, what are some successful examples of youth-driven initiatives in this context? Over to you.

Yawri Carr:
Thank you very much. Well, in the realm of youth in cyberspace and Internet governance, empowering young advocates to actively shape Internet governance policies is crucial for ensuring equitable access to digital health resources. Young advocates can play a transformative role in policy discussions by engaging in many ways. First, by participating in the IGF: with this active participation, we start to break the ice in how to discuss, how to have dialogues, how to ask questions. All of these activities, even though they seem routine to experienced people, are for youth ways to break the ice and gain confidence in participating in public debates. They also gain insights into current challenges and opportunities in digital health governance. Second, the formation of youth coalitions: young advocates can form coalitions or networks dedicated to digital health equity, and these coalitions can amplify the collective voice of young people advocating for policies that prioritize accessibility and inclusivity in digital health. For example, within the Internet Society we have a youth group, and regionally there are different youth initiatives; a chapter on digital health could also be opened, so that coalitions on this specific topic can deepen into it. Third, engagement with multi-stakeholder processes: not just the IGF, but also other kinds of processes led by governments, NGOs, or industry stakeholders. Their participation ensures that diverse voices contribute to shaping policies that consider the needs of all. And it is also important that the public sector, industry, and NGOs open these kinds of opportunities for youth and actively seek out youth who could participate in their processes as well.
Because if they don't do it in such a direct way, youth, as I mentioned before, could feel intimidated and think that they are not experienced enough to participate. Fourth, youth-led policy research: young advocates can initiate research projects to understand the specific challenges faced by marginalized communities in accessing digital health resources, because evidence-based research can be a powerful tool for advocating targeted policy changes. I think this is a real possibility in many countries that have the resources for research, but it is still very far behind in countries, for example in Latin America, where we don't have so much support from public foundations or from the government to do research, and we also don't have such a big research focus in our universities. So maybe one professor can bring this kind of perspective and inspire the students to form a research group. For example, universities in Brazil have student groups which meet on some day of the week or once a month and discuss specific topics. I think this is a good practice, so that youth can start to create, to discuss, and to bring this to the university and to other colleagues and classmates. Of course, it would be great if some countries could also start to help other global South countries, so that they can do more research and their students can participate more in these kinds of initiatives in their own countries. Fifth, innovation hubs for digital health.
For example, hubs in which young innovators, healthcare professionals, and policymakers can create solutions together. It would also be good to have funding from an organization or a company that can collaborate, so that these kinds of innovations can have a starting amount of financial resources, and so that youth can feel they are able to become innovators in this field. These kinds of innovations address gaps in digital health accessibility. Some examples of youth-driven initiatives are digital health task forces: in several regions, youth-led task forces focus on creating policy recommendations for integrating digital health into broader Internet governance frameworks. Also youth-led data privacy campaigns, in which youth create dialogues in various communities and raise awareness about the importance of robust data privacy measures in digital health technologies, so that ordinary patients can also understand why it is important to protect their privacy when they access some kind of digital health tool. And global youth hackathons for health, where health challenges drive the development of innovative apps and platforms addressing healthcare needs specific to these youths' own communities. I would also add paid internships, so that students can have access to internships that are paid and can participate equally in the practical application of what they are learning at university.
So, well, I think that by actively participating in these initiatives, young advocates contribute fresh perspectives, innovative solutions, and a commitment to digital health equity in Internet governance policies, because they are digital natives. I consider that they can rapidly understand how the technologies can help them, but also their challenges and issues, and they can become more active, as they are not just the future but also the present.

Man Hei Connie Siu:
So thank you. Thank you very much, Jerry, and also thank you once again to the panel for their responses. Now we'll move on to the Q&A session, so if any on-site participants would like to raise their questions, please feel free to walk up to the mic.

Audience:
Hello, I'm Nicole, and I'm a youth student in Hong Kong. In case of another pandemic like COVID-19, how do you think current digital health can be developed and improved to contribute to society in recovering, and to ensure each individual can receive accurate and consistent medical advice and treatment without physically visiting healthcare facilities, which would be crowded with a lot of people and elderly? Thank you.

Debbie Rogers:
I think one of the things that has really been a challenge in the work that we do is that we speak directly to citizens and empower them in their own health; given that the medical fraternity is quite patriarchal, that's not usually a priority. What we found is that when an issue is something that happens to somebody else, it isn't seen as a need to provide people with the right information. But when COVID-19 happened, everybody was affected and nobody had the information. It didn't matter if you were the president of the country or a student at a high school, no one had the information about the pandemic that was needed. And so we were able to use really large-scale networks that were already there, like Facebook, like WhatsApp, like SMS platforms, to get information to people extremely quickly, at a time when the information was changing on a daily basis. This wasn't something where you could take a lot of time, think through things, put up a website, and think about how things were going to be talked about. This was happening in real time, so you continually had to be updating things. People continually had to get the latest information, and without that, many more people would have died than did in the pandemic. What's important, though, is for us not to forget the lessons of COVID-19. As human beings, we very quickly forget when things go back to so-called normal; we very quickly forget the lessons that we learned. One of the really important things that needs to continue from COVID-19 is an understanding that knowledge is power in the patient's or citizen's hands, and this isn't something that needs to be hoarded by the medical fraternity. By giving information to people at a really large scale, you can improve their health, and you actually make your life easier at a time when you are most needed.
Digital health can't replace a healthcare professional, but it certainly can reduce the burden on healthcare professionals, and that's a really important thing that we need to continue to consider as we move on from COVID-19. The other thing to remember is that we built up digital health platforms that solved problems during COVID-19: screening for symptoms, for example, gathering data that could be used for decision-making, sending out large-scale pieces of information to people. Many, many people in the digital health space reacted very quickly and created incredible platforms that could be used to solve problems during COVID-19. Many of those no longer exist today. So we need to remember that there needs to be an investment in digital health infrastructure in the long term, so that we don't have to spin up new solutions every time there is a new pandemic, because there will be another one; it's not something that is going anywhere. How are we preparing so that when the next pandemic comes, we're not having to start from scratch all over again? I think that's something that we have very quickly forgotten. I want to take a minute and address that as well, if you don't mind.

Geralyn Miller:
A couple of things from the pandemic, and that's a really great question, because as a society we want to learn from the past. There are two areas I think are worth bringing forward from the pandemic. First, there is incredible value in cross-sector partnerships: public, private, and academic partnerships. We saw a lot of that during the pandemic, literally to light up research on understanding the virus and to do things like drug discovery. Some of these were government-sponsored consortia, others were more privately funded, and a third class was simply like-minded people coming together, what I would call almost community-driven groups. So this cross-sector collaboration is the first thing. Second, there is some good standards work done during the pandemic that could be brought forward. We saw the advent of something called smart health cards during the pandemic. Smart health cards are a digital representation of relevant clinical information; during the pandemic they were used to represent vaccine status. Think of it as information about your vaccine status encoded in a QR code. There has been an extension of that, something called smart health links, where you can encode a link to a source that holds a minimum set of clinical information. It's literally encoded in a QR code that can be put on a mobile device, or printed on a card for somebody to carry if they don't have access to a mobile device. Smart health cards also reinforce the concept behind some of the work being done by the IPS, or International Patient Summary, group, which is trying to drive a standard for representing a minimal set of clinical information that could be used in emergency services. So some of those things that happened in the standards bodies were very powerful during the COVID-19 pandemic.
And I would love to see more momentum around driving those use cases forward and also expanding them. Thank you.
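The QR payload Ms. Miller describes has a concrete shape. A minimal sketch of the numeric encoding, assuming the offset-45 scheme described in the SMART Health Cards framework (the signed JWS token becomes `shc:/` followed by each character's code point minus 45, zero-padded to two digits); the token string below is an invented fragment, not a real health card:

```python
def shc_numeric_encode(jws: str) -> str:
    """Turn a JWS compact serialization into an shc:/ numeric QR payload."""
    # Each character becomes two digits: its code point minus 45.
    return "shc:/" + "".join(f"{ord(c) - 45:02d}" for c in jws)

def shc_numeric_decode(payload: str) -> str:
    """Invert the numeric encoding back to the JWS string."""
    digits = payload.removeprefix("shc:/")
    return "".join(chr(int(digits[i:i + 2]) + 45) for i in range(0, len(digits), 2))

# Illustrative JWS header fragment (not a real, signed health card).
token = "eyJ6aXAiOiJERUYifQ"
encoded = shc_numeric_encode(token)
assert encoded.startswith("shc:/")
assert shc_numeric_decode(encoded) == token
```

In practice the digit string is rendered in a QR code's numeric mode, and a verifier's app reverses the mapping and then validates the JWS signature before trusting the clinical content.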

Rajendra Gupta:
Thanks. Firstly, another COVID shouldn't happen; that's first. Second, I don't think that technology failed at any time. Actually, it proved that it was ready, whether you look at the fast-track development of vaccines through researchers collaborating across the globe, or the way technology repurposed drugs using artificial intelligence; that's how we did it. Almost every country used a COVID app; our country delivered 2.2 billion vaccinations, totally digitally. So digital health proved that it was ready, and it is ready. Challenges will come, but technology is what saved lives. We wouldn't be sitting in this room, trust me, if technology wasn't around. The only thing we should do, through forums like this, is keep the momentum going. What we must not do is forget COVID and go back to the old ways. There were incentives given by governments, and there were flexibilities offered in terms of continuing telehealth regulations, like in the United States; I think those should become permanent. That's all we should do. Technology has already proved that it's ready; we were just waiting for COVID to shake us into using it. So technology is ready, and it will always be ready for anything that comes our way. Thank you.

Man Hei Connie Siu:
Jerry, would you like to provide a response?

Yawri Carr:
Yeah. I just wanted to say that in a pandemic situation, telemedicine and also the implementation of robots, as in the case I mentioned previously, are of huge importance and could be very useful, taking into consideration that it is very dangerous for humans to attend to or take care of people because of the risk of contagion. So in these specific scenarios, the application of telemedicine and robots is particularly useful. Of course, taking into consideration that it is an emergency, the robots should not be working alone; they should be guided by humans. But at least they also protect workers such as nurses, who are commonly a workforce that is not so valued in different societies, because the tasks that nurses do are often considered dirty or not of great importance. So these technologies can protect not just the health of patients infected by COVID or another pandemic, but also the work of medical professionals such as nurses, who are normally very exposed. On the other side, I also remember the Open Science initiative that my country, Costa Rica, proposed to the World Health Organization, so that the initiatives, projects, and research done in the context of a pandemic are opened up and kept available for every interested person, and the data can be accessed without having to pay and without patents. I consider this of extreme importance, because in an emergency we just don't have time for that, and we should really cooperate with each other and try to respond to the emergency in a holistic and collaborative way. Thank you.

Man Hei Connie Siu:
Thank you very much to the panel for your responses. Are there any other on-site questions? If not, then I'll take a question from the chat: what are some emerging trends and future directions in digital health literacy, and what do you suggest individuals do to stay informed and up to date in this rapidly evolving field, ensuring they have accurate guidance rather than outdated information?

Rajendra Gupta:
I'll take that, because of a couple of initiatives we are running. One is on the technical community side: within the Health Parliament that I run with my team, we have created CoLabs. We are creating developers for health, working with companies like Google and others, because I think what we need to do is create developers to solve problems. That's one initiative, for people who are enthusiastic about being technical contributors to the digital transformation of health. The other thing: in the next three months, we'll be starting courses for class eight students on robotics and artificial intelligence, an elementary course. We want to educate them very early on, so that they can choose what they want to do and be aware of what the opportunities are. In the same way, we are doing courses at a very elementary level, for people to understand rather than taking a deep dive into tech. And to everyone who is in health, I would strongly recommend: if you don't know digital health, you will hit a zone of professional irrelevance. Please update yourself; whatever you do, whether it's a one-week or two-week course, just make sure that you know digital health from an ecosystem perspective. Thank you.

Man Hei Connie Siu:
Would any other speaker like to take the question?

Geralyn Miller:
Yeah, just a few comments on that. It's always a challenge, at the pace of innovation we're seeing today, to keep current. So I want to call out our panel here today and the people who put the panel together and gave us this opportunity: this is one way that the dialogue starts and that information is shared. More opportunities for people of similar interests to come together will always help advance the state of our shared understanding. So, opportunities like this, and training as well; and not just training from tech providers, but training infused into the academic system too. So I would agree with what Dr. Gupta said there. But again, a call-out to the folks who put together this panel, because I think this is one way that that starts. Thank you.

Man Hei Connie Siu:
Thank you very much, Ms. Geralyn. So we have about five minutes left, so maybe we could go to closing remarks from each of the speakers, starting with Ms. Debbie.

Debbie Rogers:
I guess my closing remark would be that technology is a great enabler. It can actually be used to decrease the inequity that we see in health, but also in digital literacy. I am actually very positive about the future that we see with digital health. And I think Dr. Gupta is right: the technology is ready. We've seen many case studies where things have been done at a really large scale. This is no longer a fledgling area; this is now a mature and really large-scale area of practice. So I'm really excited to see what happens from this point, and I'm excited to see that we have youth involved in this panel, because, yes, absolutely, youth will be the people building the next evolution in this space. So I'm really excited to see how that works and how things evolve from here.

Rajendra Gupta:
I would say that in this age, patients are more informed than ever, if not more than anyone, about health conditions and treatment options. It is high time doctors learn these things before patients start telling them: "You don't know about it? Let me tell you. I saw this." So first, digital health is something that everyone in health care, whether a clinician or a paramedic, needs to learn. Second, if you're talking about digital health, scalability comes first. So continuously upskill and cross-skill yourself. And lastly, I must say thanks, Connie, for putting up this wonderful panel discussion.

Man Hei Connie Siu:
Ms. Geralyn?

Geralyn Miller:
Yeah, first off, I want to start by expressing my gratitude for being included in this; it was a wonderful opportunity. I want to echo the sentiment that youth play a huge role in this going forward, and I'm very appreciative that you brought everybody together under this umbrella. From a tech perspective, I agree with the panelists that digital health is here now. The one part I would add is that when we're thinking about new, evolving technologies like generative AI, let's do this in a responsible way and open the dialogue around policy. Discussion is always healthy. And let's make sure that this technology, which we're bringing to light with good intent, benefits everyone. Thanks.

Yawri Carr:
Well, in my case, in conclusion: let us strive to be digital health leaders equipped not only with technical skills but also with a profound commitment to equity. I consider that valuing the work of nurses is very important; even as technology evolves, human professionals will still be very necessary, and it is a fact that technology can help us protect them, as well as patients, in emergency situations. We should also value the work of ethicists: when they have something to say, they should not be undervalued but taken into consideration, including when there are conflicts with, for example, profit, so that ethicists can have an opinion on that and contribute to the mission of responsible AI, so that they are not just there as decoration but are actually taken into account. And of course, the role of youth is fundamental, as we see in all those youth-led initiatives that could strengthen the mission of digital health literacy, now and in the future, so that it develops in a very good environment that is inclusive and includes marginalized communities and the whole population. So I consider that healthcare, and digital healthcare, should no longer be a privilege but a right. And yes, I'm very thankful for the opportunity to be here, to express my opinions, and to talk about youth as well. Thank you very much.

Man Hei Connie Siu:
Thank you very much once again to the panel for your insightful responses; the workshop is now closed. Thank you very much for coming, and together we hope we can create a future where digital health resources are accessible and equitable and can empower individuals to navigate their health journey confidently online. Thank you.

Speaker               Speech speed   Speech length   Speech time

Audience              152 wpm        76 words        30 secs
Debbie Rogers         166 wpm        2880 words      1043 secs
Geralyn Miller        175 wpm        2721 words      933 secs
Man Hei Connie Siu    174 wpm        1802 words      620 secs
Rajendra Gupta        204 wpm        3459 words      1016 secs
Yawri Carr            141 wpm        3176 words      1354 secs

Elections and the Internet: free, fair and open? | IGF 2023 Town Hall #39


Full session report

Felicia Anthonio

Internet shutdowns have become a widespread problem globally, with detrimental effects on lives and democratic processes. The Keep It On campaign, which aims to combat internet shutdowns, has recorded over 1,200 incidents of shutdowns in approximately 76 countries since 2016. These shutdowns are typically carried out by state actors during critical moments such as elections, protests, and conflicts.

One of the main concerns regarding internet shutdowns is their impact on democratic processes, particularly during elections. The internet plays a crucial role in enabling active participation and promoting transparency and fairness in electoral proceedings. However, when shutdowns occur, it becomes challenging to effectively monitor and ensure the integrity of electoral processes.

Governments often justify these shutdowns as a necessary national security measure to prevent the spread of misinformation. However, in practice, the opposite tends to occur. Shutdowns tend to benefit incumbent governments, as they can control the flow of information and stifle opposition voices. This, in turn, often sparks public outrage and protests. Incidents in countries like Uganda, Belarus, and the Republic of Congo serve as examples of how shutdowns have been used for political gains and to suppress dissent.

Addressing this issue requires the collaboration of various stakeholders, including businesses, big tech companies, and governments. The fight against internet shutdowns necessitates a multi-stakeholder approach, emphasizing the importance of secure, open, free, and inclusive internet access during critical moments such as elections.

Furthermore, it is crucial to highlight that internet shutdowns do not contribute to resolving crises. On the contrary, they tend to exacerbate the situations at hand. Shutdowns provide an opportunity for governments and perpetrators to commit crimes with impunity. Moreover, in conflict situations, shutting down the internet in response to flagged dangerous content ultimately escalates the crisis.

The Keep It On Coalition, a prominent advocate against shutdowns, strongly condemns all forms of internet shutdowns. In addition, they call upon big tech companies to exercise responsibility in promptly removing violent content to ensure people’s safety.

In conclusion, internet shutdowns are an escalating issue that negatively affects lives and democratic processes. The Keep It On campaign’s documentation of a significant number of shutdown incidents highlights the magnitude of the problem. The justifications used by governments for shutdowns often raise concerns about political motivations and human rights violations. Tackling this issue necessitates collaborative efforts between various stakeholders, and it is essential to prioritize secure, open, and inclusive internet access during critical moments. Additionally, internet shutdowns have been observed to worsen crises rather than resolve them, underlining the need for alternative approaches. The condemnation of shutdowns by organizations like the Keep It On Coalition further emphasizes the importance of combating this issue and ensuring the responsible conduct of big tech companies in safeguarding online spaces.

Audience

The speakers discussed several key aspects related to free and fair elections and the issue of internet shutdowns. They emphasised the importance of communication and the role it plays in ensuring fair elections. They highlighted the significance of the internet, GSM networks, and blockchain networks as essential tools for facilitating communication during election processes. Additionally, they emphasised the need for independent observers, journalists, and international organisations to monitor elections and ensure their fairness. These independent entities play a crucial role in preventing election fraud and promoting transparency.

Another critical aspect discussed was the use of blockchain technology in elections. The speakers highlighted the immutability of election results that can be achieved by leveraging blockchain technology. They stressed that this feature is essential in guaranteeing the credibility of election outcomes. Furthermore, they emphasised the role of cryptographic protection in ensuring the security and safety of the election process. Robust cryptographic measures can prevent tampering or manipulation of sensitive election data.
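The immutability the speakers describe rests on hash chaining: each block commits to the hash of its predecessor, so altering any recorded result invalidates every later link. A minimal sketch in Python; the block layout and field names here are invented for illustration and do not describe any particular election system:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministic digest over the whole block, previous-hash field included.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, record: dict) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record})

def verify_chain(chain: list) -> bool:
    # Recompute every link; a tampered block breaks all links after it.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
append_block(chain, {"district": "A", "tally": 1042})
append_block(chain, {"district": "B", "tally": 987})
assert verify_chain(chain)

chain[0]["record"]["tally"] = 9999   # tamper with an earlier result...
assert not verify_chain(chain)       # ...and verification fails downstream
```

This shows tamper-evidence only; the cryptographic protection and scalability discussed above would additionally require digital signatures on each record and replication of the chain across many independent nodes.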

Scalability was identified as another crucial component of free and fair elections. The speakers pointed out that a scalable network is necessary to efficiently manage a large number of voters, such as a population of 300 million. This ensures that the election process can accommodate a significant number of participants without any disruptions or technical limitations.

While the discussion mainly focused on the positive aspects of communication, independent observers, blockchain technology, and scalability, concerns were also raised about governments resorting to internet shutdowns. The speakers highlighted that governments sometimes lack alternative tools to address legitimate concerns and therefore turn to internet shutdowns as a means of control. This practice was seen as problematic because it limits citizens’ access to information and disrupts the democratic process.

The potential economic impact of internet shutdowns was also discussed. Lack of reliable connectivity was identified as a significant factor that creates a difficult investment climate. Internet shutdowns and restrictions on data flows were acknowledged as factors that negatively affect a country’s economy.

The Internet Society’s efforts in developing a tool called Pulse to track and provide information on internet shutdowns and data flows were applauded. This tool aims to support activists and democracy by providing digestible information that can help address concerns related to internet shutdowns.

Concerns were also raised about potential misuse and the legitimisation of internet shutdowns in specific cases. It was acknowledged that legitimising internet shutdowns during religious ceremonies or events that might incite violence could encourage other governments to misuse the strategy. This highlighted the need to explore solutions that address the structural issues within governments that lead to internet shutdowns.

Furthermore, the speakers identified the spread of disinformation as a significant challenge during elections. Disinformation was acknowledged as damaging to the image of political leaders and the democratic process as a whole. It was proposed that internet service providers should be held responsible for controlling the spread of disinformation, and artificial intelligence could be used as a tool to achieve this.

Lastly, the role of digital technology in promoting government accountability and responsiveness was emphasised. It was suggested that the use of digital technology can enhance the accountability of governments, making them more responsive to the needs and concerns of citizens.

Overall, the discussions highlighted the multifaceted nature of free and fair elections. It was concluded that a comprehensive approach involving governments, internet service providers, political parties, and citizens is necessary to ensure the integrity of electoral processes. The discussions also shed light on the challenges and potential solutions related to internet shutdowns, disinformation, and the use of digital technology in elections.

Ben Graham Jones

The discussion revolves around the detrimental effects of internet shutdowns and the importance of safeguarding online rights. The primary argument is that the rights people enjoy offline should not be diminished when they are online. This argument is supported by the agreement at the UN General Assembly that there should be equality between online and offline rights. It is emphasised that internet shutdowns have a negative impact on communication, as they silence the entire population by cutting off their access to the internet.

Another argument put forward is that internet shutdowns exacerbate the problem of disinformation. This is because during shutdowns, state TV or selective channels often remain functional, thereby monopolising the sources of information available to the public. This concentration of information sources leads to a limited pool of information and increases the likelihood of disinformation spreading. The inability to access fact-based information compromises people’s right to access accurate information and undermines the integrity of elections.

The discussions also highlight the need for cross-context learning to effectively counter disinformation. It is suggested that there is considerable overlap in the types of disinformation narratives spread across different electoral contexts. To address this challenge, there is a call for organisations working in vulnerable contexts to learn from other contexts and enhance their preparedness for countering disinformation. This entails shifting efforts from response to prevention and providing fact-based information at an earlier stage.

Furthermore, risk forecasting is deemed crucial in addressing potential internet shutdowns. The discussions stress that by the time an internet shutdown takes place, it is often too late to take substantial action. Therefore, organisations need to map out potential risks and adjust their plans accordingly to minimise the impact of such shutdowns.

Additionally, the analysis reveals that election technology, including blockchain, can become targets for disinformation. While the details and evidence supporting this argument are not provided, it is suggested that election technologies may be vulnerable to misinformation campaigns, potentially undermining the credibility and integrity of elections.

Overall, there is a strong positive stance that internet shutdowns should be fought against. The primary reason cited is that these shutdowns impede the ability of fact-checkers and journalists to perform their roles effectively, thereby undermining freedom of information. The importance of preserving online rights and resisting the negative consequences of internet shutdowns is emphasised throughout the discussions.

In conclusion, the expanded summary delves into the various arguments and evidence related to the negative consequences of internet shutdowns and the imperative to protect online rights. Additionally, the need for cross-context learning, risk forecasting, and the vulnerability of election technology are addressed. The overall message conveys the importance of combating internet shutdowns and their detrimental impact on freedom of information and the integrity of elections.

Kanbar Hossein-Bor

Internet shutdowns have a significant impact on the flow of information, freedom of expression, and human rights. These shutdowns not only hinder individuals’ ability to express themselves online but also threaten the exercise of human rights. It is important to consider internet shutdowns in the context of broader issues, such as media freedom and misinformation.

Recognizing the gravity of the situation, the Freedom Online Coalition issued a joint statement focusing on internet shutdowns and elections. The UK has taken a leading role in addressing this problem by leading a Task Force on Internet Shutdowns as part of the Freedom Online Coalition. This collaborative approach involves stakeholders such as the UK’s Foreign Commonwealth and Development Office, Access Now, and the Global Network Initiative. The Task Force, chaired by Kanbar Hossein-Bor, advocates for a multi-stakeholder approach to effectively tackle internet shutdowns and disruptions.

Internet shutdowns not only impact individual rights but also pose a threat to the wider democratic process. By restricting access to the internet, these shutdowns hinder the exercise of offline rights online. Additionally, the economic costs incurred by societies affected by internet shutdowns are substantial.

Despite the challenges, there is a strong desire to support policymakers who may lack the capacity, but not the intent, to address internet shutdowns. This recognizes the need for collaborative efforts between various actors to tackle this issue effectively.

In the face of those with ulterior motives, it is crucial to stand firm and uphold principles of open internet access and the protection of human rights. The comprehensive impact of internet shutdowns has been highlighted by the Oxford statement, and the launch of the FOC statement further emphasizes the urgency of addressing this issue.

In conclusion, internet shutdowns pose a grave threat to the free flow of information, freedom of expression, and human rights. Addressing this issue requires a collaborative, multi-stakeholder approach, as advocated by Kanbar Hossein-Bor and demonstrated through the Task Force on Internet Shutdowns led by the UK. Policymakers must prioritize efforts to combat internet shutdowns, even when capacity is limited, but there is a strong intent to address the issue. It is essential to remain steadfast in the face of those seeking to restrict access to information and suppress rights.

Andrea Ngombet

The analysis highlights several key points concerning internet shutdowns and information control in Congo. During the 2021 elections, the government not only blocked the internet but also telecommunications, justifying this action as a measure against foreign interference and misinformation. However, this move has been widely criticized as an attempt by the government to control the flow of information.

Furthermore, anti-terrorism and cyber-criminality laws have been used to suppress opposition in Congo. Activists were arrested based on their social media posts during the internet shutdowns, raising concerns about the government’s use of legal mechanisms to target dissent and stifle freedom of speech.

The government of Congo is seeking assistance from the People’s Republic of China to acquire advanced tools for internet control, such as a firewall. However, this approach lacks technological sophistication, highlighting the need for aid in developing domestic technology and innovation.

One important argument made is that tech companies like META should play a role in preventing the spread of misinformation, particularly during elections. Through collaboration with META, Congo was able to establish the Congo Fact Check initiative, demonstrating the positive impact of cooperation between tech companies and local organizations.

Civil society organizations also have a crucial role in moderating hate speech and misinformation online. In Congo, META worked with civil society organizations to create a task force on elections, addressing hate speech and misinformation from both the opposition and government. The involvement of civil society organizations can serve as a middle ground, reducing the perceived need for the government to impose internet shutdowns.

Additionally, it is emphasized that big corporations should be encouraged to participate more actively in online moderation efforts. It is noted that these corporations often have a reactive approach to tackling online misinformation. By reaching out to them, local civil society organizations can facilitate their involvement in countering online misinformation and make their efforts more proactive.

In conclusion, the analysis reveals a concerning pattern of internet shutdowns and information control in Congo, which is seen as an attempt by the government to control the narrative during elections. There is a call for tech companies, civil society organizations, and big corporations to proactively work together to prevent the spread of misinformation and hate speech. By doing so, the likelihood of internet shutdowns can be reduced, ensuring the protection of freedom of speech and public access to information.

Nicole Stremlau

Internet shutdowns are increasingly seen as necessary measures to address concerns related to elections, such as interference, disinformation, and post-election violence. Research carried out in Africa has shown a growing acceptance of internet shutdowns as a means of controlling election-related issues. Historical practices like banning opinion polls and political campaigning near voting day have also contributed to this acceptance.

Governments in the global South express frustration with the perceived lack of response, engagement, and oversight from large social media companies. Internet shutdowns are viewed as a form of resistance and sovereignty against the dominance of these companies, which are often based in distant countries. This dynamic highlights the tensions between governments and technology companies in terms of information governance.

The decision to implement internet shutdowns is partly influenced by a lack of information literacy. Governments with limited experience and understanding of online content moderation may resort to internet shutdowns as a response. Oxford University has launched a training program aimed at increasing information literacy among policymakers and judges, promoting a better balance of competing rights and addressing information disorder within a human rights framework. The goal is to reduce reliance on internet shutdowns as a solution.

Policymakers in peripheral markets, such as Ethiopia and the Central African Republic, struggle to understand and engage with technology companies. This observation underscores the difficulties faced by policymakers in regions with limited presence and engagement, in contrast to countries like Germany, which have embassies in Silicon Valley. The complexities of the relationship between policymakers and technology companies contribute to the challenges of addressing issues like internet shutdowns.

In conflict-affected regions, internet shutdowns are becoming accepted by local populations as a means to combat online hate speech and incitement to violence. Research carried out in conflict-prone areas of Ethiopia shows that locals prefer internet shutdowns as a way to avoid exposure to harmful online content. The acceptance of internet shutdowns in these regions arises from a lack of effective alternatives to address widespread hate speech and incitement to violence online.

Overall, while internet shutdowns are increasingly seen as a response to election-related concerns, the lack of information literacy and strained relationships between governments and technology companies contribute to their implementation. However, efforts to enhance information literacy among policymakers and judges through training programs, such as the one initiated by Oxford University, offer a promising approach to reducing reliance on internet shutdowns. Finding effective and sustainable solutions beyond internet shutdowns requires striking a balance between addressing concerns and protecting rights within a human rights framework.

Sarah Moulton

Increased internet disruptions during elections have a detrimental impact on the work of ground observers and pose a serious threat to domestic observer networks. These networks play a crucial role in reporting on electoral processes and collecting vital data. The disruption of internet services hampers their operation, making it difficult to effectively monitor elections and gather accurate information.

Moreover, observers on the ground face higher risks, including the risk of being arrested. This underscores the urgent need to safeguard them and provide them with the necessary tools to measure and report data effectively. Without adequate protection and support, these observers may be deterred from carrying out their important work, compromising transparency and accountability in the electoral process.

The importance of political parties and policymakers engaging in the process is also highlighted. Attendees at the FIFAfrica (Forum on Internet Freedom in Africa) event in Tanzania displayed interest in the issue, emphasising the need for their active involvement. It is crucial for political parties and policymakers to recognize the significance of internet disruptions during elections and take proactive measures to address this issue.

Early collaboration is essential, with a particular focus on data collection relating to the economic and social impacts of shutdowns. The repercussions of internet shutdowns extend beyond the electoral process and can have a significant negative impact on healthcare and various economic sectors within a country. Therefore, it is essential to gather comprehensive data on these impacts to understand the full extent of the problem and develop effective strategies to mitigate them. Training programs for politicians and political parties can also be instrumental in preparing them for potential shutdowns and equipping them with the necessary skills and knowledge to respond effectively.

Accurate data that reflects the specific local context is vital in reports related to internet shutdowns. It is crucial that policy decisions are based on accurate and contextually relevant information, as the impact of internet disruptions can vary greatly between different regions and countries. The work being done through the Summit for Democracy highlights the recognition of this need and the ongoing efforts to ensure that data used for policymaking accurately portrays the local realities and challenges associated with internet shutdowns.

Collaboration between various stakeholders, including policymakers, civil society, internet service providers, technology platforms, strategic litigators, and international organizations, is paramount. Given the complex and multifaceted nature of internet disruptions during elections, a collaborative approach is necessary to address the issue effectively. All these actors must come together and share their resources, expertise, and data to build a comprehensive case and develop robust strategies for combating internet shutdowns, particularly during election times.

Furthermore, the platform created by the Internet Society is highly valued and supports the measurement of the cost of internet shutdowns. This platform plays a crucial role in helping to quantify the economic impact of internet disruptions and provides valuable insights into the true costs of such disruptions. By highlighting the financial consequences, the Internet Society facilitates a deeper understanding of the gravity of the issue and advocates for necessary actions to prevent or mitigate internet shutdowns.

In conclusion, increased internet disruptions during elections pose serious challenges for ground observers and domestic observer networks. It is imperative to protect and support these observers, provide them with effective tools, and engage political parties and policymakers in addressing this issue. Early collaboration, accurate data collection, and collaboration between various stakeholders are all crucial aspects of combating internet shutdowns during elections. The platform created by the Internet Society is instrumental in measuring the cost of internet shutdowns and emphasizes the need for action.

Session transcript

Kanbar Hossein-Bor:
Hi. Good morning, everyone. I think we’ll just give it about another couple of seconds. I can see some people are still entering the room. And then hopefully we will start. And just a reminder that this is a session on elections and the Internet, free and fair and open. I hope that’s the session you’ve come for. If you haven’t, you’re very welcome to stay. Fantastic. Well, let’s make a start. Firstly, a good morning to everyone here. And I know we’ve got a lot of colleagues online as well. A good morning from Kyoto to them, wherever they may be joining us. It’s a real privilege for me to be moderating this session today. My name is Kanbar Hossein-Bor. I’m the head of the Democratic Governance and Media Freedom Department of the UK’s Foreign Commonwealth and Development Office. We have a wonderful panel here today with you. I’m going to ask them each to introduce themselves when I hand them over to the floor to engage in this session. I’ll start off with making a few introductory remarks to set the scene, as it were. From the UK’s perspective, it’s a real privilege for us for this year, as part of the Freedom Online Coalition, to be chairing one of the task forces of the Freedom Online Coalition. In this case, the Task Force on Internet Shutdowns. And true to the multi-stakeholder spirit of the IGF, we’re delighted to be chairing that with the FOC Advisory Network members, Access Now, and the Global Network Initiative. We are chairing this task force because we passionately believe that internet shutdowns pose a significant threat to the free flow of information. They are a significant threat to the ability of everyone to express themselves online. They are a major source of censorship. And as all of you know, in a world where we are increasingly exercising our offline rights online, they are a fundamental impediment to the ability of us to exercise our human rights.
In that regard, we want to use our task force chairship to highlight the increasing prevalence and use of shutdowns and internet disruptions. And we passionately believe that the multi-stakeholder approach is the right one. But we also recognize that internet shutdowns need to be seen as part of a much broader set of issues, all of which are related. For example, we have the issue of media freedom, online violence against women, development, mis- and disinformation. All of them come together to pose a significant threat to the ability of all of us to exercise our rights and actually lead to the full exercise of the realization of development. So in that regard, I want to briefly, before I hand over to the panel, highlight for the benefit of all of you that there has been a joint statement on internet shutdowns and elections, which is actually going live today. So if you have a look at the screen. We have a quick snapshot of this statement, the first issued by the FOC. In that regard, I think it’s a great way to introduce the session today, a reminder of the determination of the FOC to take up the challenge that this issue poses. For all of you in the room, and I hope for all of you online, you can see the statement now. We will share a copy of that later. I’m very happy to discuss that as well during the Q&A. So insofar as today’s session is concerned, we’ve got, I think, five speakers. I’m gonna ask them each to come in. Firstly, with a few words of self-introduction, and then they’ll spend about three, five minutes reflecting on a particular point of this session. And then we will have, I hope, a good half an hour or so of discussion where we can answer questions or reflect on any points that you in the room or virtually are making. So without any further ado, I’m gonna ask Felicia Anthonio to start off and give a reflection on the Keep It On campaign and what some of the initial recommendations for policymakers are. So over to you, Felicia. 
Hello, can you hear me okay?

Felicia Anthonio:
All right, I’m Felicia Anthonio, Keep It On campaign manager at Access Now. And for those who don’t know what the Keep It On campaign is, it’s a global campaign that unites over 300 organizations around the world. And our objective is to fight internet shutdowns. And this campaign was launched in 2016 by Access Now and other stakeholders. And since then, we’ve monitored, documented, and advocated against shutdowns. I’m going to give a few highlights of what we’ve seen across the globe with regards to shutdowns in general, and then I’ll narrow my submission to election-related shutdowns and the impacts. So according to our data and monitoring at Access Now and the Keep It On Coalition, internet shutdowns are spreading, they are lasting longer, and they are also impacting lives. Since 2016, we’ve documented at least 1,200 incidents of shutdowns in about 76 countries worldwide. And these incidents of shutdowns are usually perpetrated by governments, state actors, warring parties, military juntas, or third parties, and they take place during very critical moments like elections, protests, and conflict situations. In relation to shutdowns documented around elections, we have seen at least 57 election-related shutdowns globally since 2016. Africa accounts for 44 percent of these shutdowns. That is, about 25 of these shutdowns happened in Africa. We also have countries like Iran, Bangladesh, Pakistan, Iraq, Belarus, Turkmenistan, among others, that have weaponized shutdowns during elections. We all know and believe that the Internet and digital platforms continue to enable and enhance fundamental human rights of people to access information, to express themselves, and to also enjoy their rights to freedom of assembly. In times of elections, the Internet plays a critical role in promoting free, transparent, and fair electoral process by providing political candidates avenues
to reach their supporters or audience, as well as allow equal access to communication channels for both the incumbent and the opposition to debate and highlight their political manifestos and policies. And for voters, keeping the internet and essential platforms on during elections enables them to actively participate in democratic processes, scrutinize policies put forward by political candidates, and also provides opportunities for people to hold their governments to account. Elections, particularly in growing democracies, are a critical time of transition, and active participation in the process contributes significantly to a credible democratic outcome. Journalists, human rights defenders, election observers, and other key stakeholders also rely on the internet and digital communication tools to monitor the electoral process. And shutdowns make it extremely difficult for all these actors to effectively monitor the electoral processes across the globe. Some governments have attempted to justify these shutdowns as relevant to prevent the spread of misinformation or hateful content, or as a national security measure. However, the opposite is true. When you shut down the internet during elections, it results in chaos, in the sense that it blocks alternative sources of information verification channels and seeks to benefit only the incumbent governments. Imposing shutdowns during elections is likely to also agitate people to protest and, in that regard, it calls into question the national security claims of governments trying to justify shutdowns. And according to a study that was done in 2019 by the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), shutdowns remain a go-to tool for governments who want to hold on to power. 
With examples in Uganda, Belarus, Republic of Congo, and most recently we saw this happen in Gabon when the internet was shut down and then the incumbent was announced as the winner of the elections, but there was a military coup which overthrew him, and so if that hadn’t happened we would have the incumbent in power for the next term. And then I think that although the number of election-related shutdowns around the world has reduced over the past few years, with some authorities in countries like Ghana, Kenya, Nigeria, Sierra Leone among others making commitments to keep it on during elections, I think it still remains a crucial priority for all actors working to advance democracy around the world. And next year has been described as the year of elections, with 50 or so countries scheduled to go to the polls, and so given the direct interference of shutdowns with electoral processes and the outcomes of elections, I think it’s important for all stakeholders including governments, regional and international bodies like the United Nations, the African Union, European Union, the Freedom Online Coalition among others to support the Keep It On Coalition and other stakeholders to ensure that governments do not normalize shutdowns during elections, and we welcome the just-published statement by the Freedom Online Coalition denouncing election-related shutdowns. And my other recommendation also goes to the businesses and telecom companies, as well as big tech companies, to ensure that people have access to secure, open, free, inclusive internet access throughout electoral processes, as well as ensure that these platforms are safe for people to be able to express themselves, and to also avoid giving governments reasons to justify their actions by shutting down the internet. So in conclusion, I think that the fight against shutdowns requires a collaborative effort, as we’ve seen. 
And so this is not just something that civil society alone is working on. We’ve seen the just-released statement by the Freedom Online Coalition, as well as statements denouncing the use of shutdowns by several governments and other institutions, which we appreciate as the Keep It On Coalition. And we look forward to working with all of you to push back against shutdowns. Thank you.

Kanbar Hossein-Bor:
Thank you very much, Felicia, for that really great overview of the elections and shutdowns. I’m now very pleased to hand over to our colleague joining us on screen, Andrea Ngombet, who will reflect on the impact of shutdowns on the ground, especially as seen from the Republic of Congo. So over to Andrea. Hey. Can you hear us? Can you hear me? Yes, we can hear you. Yeah, I can hear you. Yeah, we can hear you. Please continue. Thank you. OK.

Andrea Ngombet:
Let me stop the video. OK, it’s OK. So thanks for having me. I’m Andrea Ngombet from the Republic of Congo, leader of the Sassoufit Collective, which is an organization based in Paris, but working on democracy and human rights in the Republic of Congo. We started just for human rights and democracy, and then we extended into many topics such as anti-kleptocracy, and we have really worked with the Keep It On campaign since 2015. So what happened in the last election in 2021 in Congo was not just informative, but it followed up what Felicia said. The narrative first is about safety, the safety of the public against foreign interference, against misinformation coming from the opposition, but never about the misinformation coming from the government, of course. And by using this narrative of fighting the foreign influence into the electoral process, they are able to sell the internet shutdown as something as, oh, we are so weak. We are a weak democracy. We don’t have a tool to keep the internet on because we don’t have the necessary tool to block that misinformation. And during that election, it was surprising that this narrative was even effective in the public opinion, general public opinion of the Congolese. And it goes on for about one week without phone and internet, because they did not just block the internet, they also blocked the telecommunication directly in the country. And with that narrative, they extend it to the anti-terrorist activity. And my point here is to say that this internet shutdown is not just for the internet, it also has an impact directly on the people. Because of this new anti-terrorist and cyber-criminality law in Congo, they are able to arrest militants from the opposition because of social media posts. Even if the internet was blocked, if you post something earlier about the election process, they can go and arrest you. 
Three activists from the opposition were arrested and put in jail for about three or four months because of this internet shutdown and the information they spread. And on our side, Sophie, and this is what I was trying to make as a point, we work with people from META to say that during election time in Africa, because of the behavior of our government, they need to step up. I don’t ask for a full and permanent task force on elections, but during election time, because of the spread of hate speech from the opposition and from the government, someone needs to be in the middle, like a referee for the competition on the free flow of information. And we were able to secure a tacit way to work with META, and they put up something called Congo Fact Check to check on the information being put out during that special time. And they were able to block very vast disinformation coming from government-related Facebook accounts. And it was really shameful for the government to come up with this idea of blocking misinformation by shutting down the internet while being themselves at fault, because they use robots and bots to spread lies during election time. So this is what was happening. And because of that, I also think that the next move of this internet shutdown in Africa is not just about internet shutdown. It’s about control, control of the information coming in the country. So because they are not able to have the newest technology, they use the internet shutdown. But in the coming years, and in the perspective of Congo-Brazzaville, they are trying to have a set of tools coming from the People’s Republic of China, so they can have this kind of firewall and secure themselves from any kind of information coming from outside the world, inside the country. So this is what we need to be focused on, not just the regular internet shutdown, but this next step they are trying to make to block any kind of information coming from outside, inside the country. Thank you.

Kanbar Hossein-Bor:
Well, thank you very much, Andrea, for that really powerful reflection on the ground, especially some of those future challenges. Also, thank you for staying on time, and a special thanks for joining us. I think it’s one o’clock where you are in Paris, so we’re very grateful that you’ve dialed in, much obliged. I’m now going to hand over to Ben Graham-Jones to reflect on shutdowns and freedom of expression.

Ben Graham Jones:
Thanks ever so much, Kanbar, and thank you, other colleagues. My name’s Ben Graham-Jones. I am an elections consultant, I work on many elections every year, and I am an advisor to the Westminster Foundation for Democracy, a UK public body. Let me start by applauding the joint statement on internet shutdowns and elections by the Freedom Online Coalition. I think it really provides a sound basis for calling out the illegitimacy of internet shutdowns, wherever they may occur. I’m going to make three brief points today. The first is that I’d like you to imagine, if you would, a situation where you have an election, and at some point during the election process, perhaps on election day, or perhaps as the results are being counted and tabulated, nearly 10,000 journalists are locked up by government authorities. How much condemnation and opprobrium this would attract from the international community and from domestic actors, and rightly so. And yet it strikes me that when the communications of an entire population are silenced for that period, there is not always the same level and the same degree of condemnation. And perhaps we need to think carefully about how we can equate those two events: the legitimacy of the rights that we enjoy offline is in no way diminished when they are exercised online. And so this is the first point I wish to make. That equality between the rights we have offline and online has been agreed at the UN General Assembly. It’s something which I think we need to underscore. And I applaud the work of Access Now, of NDI, and of other partners here today, who do such an excellent job in raising awareness of that fact. Of course, internet shutdowns are not just about the right to freedom of expression. And the second point I would like to make pertains to disinformation, the right to access information, and the right to credible elections, all of which depend on having that basis of fact-based information.
And one of the things that I see as someone who specializes in countering disinformation is that when internet shutdowns occur, they amplify disinformation. How do they amplify disinformation? Because they concentrate the sources of information that people can access. Your state TV, for example, may remain on, or it may be that the channels that are closed down, or the means which are throttled, are selective. What that means for those of us working in counter-disinformation is that we need to be thinking seriously about pre-bunking, about moving our response efforts towards prevention efforts and mitigation efforts, about providing fact-based information at an earlier stage of the process where there is a risk of internet shutdowns. I want to very briefly suggest four actions that can help in that regard. Number one, when we’re working in contexts which may be vulnerable to internet shutdowns, we need to learn from other contexts. If you’re sat there in an election commission in Nigeria, let’s say, you may not be thinking about the recent elections that took place in Kenya or France or Kazakhstan. Your points of reference are probably the previous elections that took place in Nigeria. But actually, what we need to bear in mind is that we see quite a lot of overlap in the types of disinformation narratives that are circulated across different electoral contexts. I see this; I work globally across lots of different elections each year. And so by looking at other contexts, we can bolster preparedness for countering disinformation in advance of any internet shutdown and information monopoly being imposed. The second thing is to think about narrative forecasting. Our organizations, whether election management bodies, civil society organizations, or political parties, really need a plan for thinking about what types of narratives might be deployed at different points in the process, informed by that international best practice.
And then thinking about what the response might look like. Thirdly, overcoming selection bias. We know that people don’t seek out counter-disinformation. We know people don’t look to check whether or not their pre-existing opinion is correct; there are decades of psychological research on this. And so we need to find ways of bringing that fact-based information, before shutdowns occur, into the places where it needs to be, because the very people who will not otherwise seek out fact-based information are precisely the people you most need to reach. And fourth, thinking about drafting that preemptive response early. If you can draft effective infographics and videos to counter some of these narratives early on, then when they do come up, it’s going to reduce your response times and cut the virality of disinformation before any shutdowns are imposed. The third point I’d like to very briefly make is on risk forecasting. When we’re thinking about internet shutdowns, by the time one takes place, it’s often too late to do a lot of the things that can have substantive consequence, whether that’s the publication of telecommunications licensing agreements or applying concerted pressure, the sorts of things that the KeepItOn coalition does so effectively. And so we really need to be thinking, on a sector-wide basis but also within our individual organizations, about mapping out risks. So, for example, if you’re a body that sends election observation missions, you might ask: were there known risk factors for internet shutdowns present in particular contexts? And then prioritize the deployment of your missions to those places, so that you can serve as a counterweight to the monopolization of information.
Likewise, if you are a civil society organization whose communications plan depends on releasing a statement around election day, but you realize that there is a chance of an internet shutdown, then maybe you need to think a little more carefully about how to communicate your key messaging around the election if that’s not going to be possible. So, three key points: remembering that the rights we have offline apply equally online, thinking about farsighted disinformation response, and forecasting risk. Thanks ever so much.

Kanbar Hossein-Bor:
Thank you very much, Ben, especially for those pretty practical recommendations as well. We’re now going to go back online. We’ve got a colleague joining us, Nicole Stremlau, who will reflect on research on government decisions around internet shutdowns, especially in Africa. Over to you, Nicole.

Nicole Stremlau:
Good morning, everyone. I hope you can hear me. We can. Please continue. Okay, thank you so much. So I just wanted to take a couple of minutes to reflect on some of the research we’ve been doing at Oxford around internet shutdowns, particularly around elections and conflicts, primarily in Africa. We’ve been conducting research on government decision-making, basically asking why governments are choosing this relatively blunt tool of internet shutdowns compared with other forms of control. And specifically in Ethiopia, and I just returned from Ethiopia, we’ve also been looking at the impact and the perception of shutdowns in violence-affected communities. And actually, like Andréa, we found a growing acceptance, or acquiescence, that this is actually an important tool. In the process of our research, we’ve also sought to come up with a different reading of internet shutdowns: to look beyond the framing of this dichotomy of digital authoritarianism and ask whether it’s possible to identify alternative logics and rules, rather than the assumed motivations, that are actually driving shutdowns. And I also have three points, like my previous colleague. First of all, a somewhat obvious point is that we’re seeing a growing acceptance of shutdowns. They’re becoming increasingly normalized as a tool to address very legitimate concerns around election interference, concerns about disinformation, concerns about incitement to violence post-elections, and they’re seen as a useful tool or a necessary trade-off to protect the integrity of the electoral process. And by this, I’m talking about a lot of reflections from the research we’ve been doing on the ground in Africa. So I think it’s also helpful to remember that there have long been information controls around elections in different democracies.
So, the banning of public opinion polls within weeks of an election, as seen in Kenya, or the prohibition of political advertising or campaign rallies close to voting day, might arise in particular contexts in accordance with historical experiences. And the challenge of social media is that it makes imposing these kinds of silences around elections increasingly difficult. So shutdowns are a blunt, very crude tool for addressing some of these concerns, in a context of lacking the more precise tools, or not knowing what else to do, that might historically have been available for dealing with concerns around mass media, for example. And second, and I think most importantly, we see shutdowns as a growing form of resistance, an expression of frustration at the overwhelming power of large social media companies that are typically based in the US or China. And we see this frustration with the failures and the inequalities of online content moderation. To some degree this has become well documented; people have been writing and doing research about it, particularly around the failure of online content moderation in local African languages and the lack of attention given to resource-poor communities. So we see governments in these more marginal markets in the global South being frustrated with this inadequate response, the lack of engagement, the lack of product oversight from these large tech companies. And so shutdowns are seen by some, and it’s not always explicit, as a way of expressing sovereignty, as a way of pushing back against what are often seen as the arbitrary responses of incredibly rich companies deciding good and bad actors from a distance, and frustration with rules that are written in far-off countries according to certain logics that local authorities feel powerless to engage with or really to challenge.
And so, like Andréa, I agree there’s a lot of discussion and debate about what more these companies can do, not necessarily in Kenya, but more in the Central African Republic, for example, or regarding the failures of what’s been happening in Ethiopia. And third, I think we’ve also seen that the decision to implement shutdowns is partly an information literacy gap. To some degree this has been overlooked, but our research has shown that governments often resort to shutdowns because of a lack of experience of how to actually engage these large tech companies, or a lack of understanding of alternative ways of addressing the very legitimate concerns about the failures of online content moderation, particularly around elections or in cases of extreme violence, and of how to navigate the balance between competing rights, such as the responsibility to protect in cases of extreme violence, as well as freedom of expression or the right to information, as we’ve mentioned on this panel already. And if I can add a very tiny plug: at Oxford, we were just awarded a European Media and Information Fund award to launch a new program to train policymakers and judges through a new executive program on information literacy. We’re specifically going to be working on how to improve understanding among these key influencers, and how to address the very real challenges that information disorder poses, particularly in the context of generative AI, but through a human rights framework, ahead of elections and in contexts of extreme violence, hopefully reducing the need for, or the turn towards, the blunt, crude tools of censorship that internet shutdowns are. Thank you.

Kanbar Hossein-Bor:
Thank you so much, Nicole, really helpful there. And also grateful to you for joining us; I know it’s a difficult time zone where you are as well. We’re now going to go to our last speaker, Sarah Moulton. Before we open it up to you, the audience, for Q&A, Sarah will be reflecting on the multi-stakeholder coordination challenge. Over to you, Sarah.

Sarah Moulton:
Thanks. My name is Sarah Moulton. I’m from the National Democratic Institute, and I’m the deputy director of our democracy and technology team. NDI is a nonprofit, nonpartisan, non-governmental organization based in the US, but we work in about 50 countries around the world, and we come at this from an implementation angle. NDI works to support democratic processes and strengthen democratic institutions, and provides a lot of on-the-ground election support for many of the elections that have been discussed already. Primarily, we do a lot of work with domestic observation groups: independent groups on the ground who are deployed in advance of an election to report back on what they’re seeing at the polling station, and to report on the process and the results. Obviously for us, from a practical standpoint, it’s really important for the internet to be working so that they can transmit their findings and what they’re seeing throughout the day, allowing the observer group to then make a statement about the process, and hopefully verify that it was indeed democratic and properly run. That’s not always the case, however. What we’re seeing often these days is that these disruptions are making it more challenging for groups on the ground to do so. There’s definitely been a lot more concern about what might be happening, and about trying to plan in advance for potential shutdowns. And so one thing that we’ve really explored is how do we better utilize this network, which can often include thousands of observers, maybe up to 2,000 in some cases, deployed in all parts of a country. How do we take advantage of that distribution in order to collect better data on what we’re seeing across the country: whether there’s a shutdown, whether there’s just a disruption, or there’s throttling, or perhaps censorship of particular sites? That can lead to better data collection in the process. So how can we feed that data to the wider network of stakeholders that we’ve been talking about? Our topic here is multi-stakeholder collaboration, and how do we share that data with those who can perhaps do more direct advocacy, maybe at an international level, but also even domestically? Our concern with that particular group, obviously, is that there are higher risks to observers these days. We’ve been seeing, in a couple of recent examples, Sierra Leone and perhaps Zimbabwe being the more difficult one to talk about, observers arrested simply for the work they’re doing: independent analysis and verification, sometimes in the middle of what they’re doing on election day, in the case of Zimbabwe. And so we have to look at how we protect these groups who can collect this data, but also enable them to do so, because there’s a lot of opportunity; there are a lot of tools out there now to take these measurements and then report them up. The other side of this angle is that NDI also works with politicians and policymakers, and I think there’s a real opportunity for collaboration here, but it really needs to happen well in advance of an election. We need to get this process started now, yesterday even, especially with 2024; we’ve been talking about 2024 for years now, but we really have to actually start working towards it. The statement is a great start and a great recognition of what’s coming, and I know the KeepItOn campaign puts a lot of effort into planning and tracking which elections are going to be perhaps the most significant in their potential for shutdowns. But also, having just come from the FIFAfrica event in Tanzania last week, I think there’s a lot of interest from policymakers to engage in this process, but there’s also a lack of information at times, a lack of understanding of the environment. And sometimes the approach from civil society might potentially be aggressive, and they perceive it as us not being collaborative, as coming at them rather than working together with them. And frankly, sometimes there’s a challenge in trying to get policymakers to care about the issue. The prospect of freedom of expression may not really resonate, especially when, as Felicia referenced, national security is often the argument made. But I think where we can really make a difference is on the impact of a shutdown beyond that: looking at healthcare issues, looking at economic loss; it has a huge impact on a country. We need to collect that data, and use the data we’re collecting, to make the case, and to work earlier on not only with politicians as individuals but with political parties generally, because politicians during the time of an election are really more concerned about their election than about a potential shutdown. So how can you work with the wider political party ecosystem? And I think there are things we can do in preparation for that. There is a desire for training programs, for learning about these tools, for working together through multi-stakeholder approaches, whether that’s with civil society or others. And I think if we can make better efforts to connect civil society with political parties, and also with international initiatives, we can go a long way towards mitigating the potential damage that’s coming. So I’ll stop there.
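The kind of distributed measurement described here, distinguishing a total blackout from throttling or selective blocking of particular sites, can be sketched very crudely. This is a hedged illustration only, not NDI’s actual tooling; the hostnames are placeholders, and real measurement platforms (OONI Probe, for example) use far more rigorous tests:

```python
# A minimal sketch: probe a handful of hosts over TCP and give a
# first-cut classification of the network situation. Hostnames and
# the timeout are illustrative assumptions, not a real methodology.

import socket
import time

# Illustrative targets: one "control" host plus commonly blocked platforms.
PROBE_TARGETS = [
    ("control", "example.com"),
    ("social platform", "facebook.com"),
    ("messaging platform", "whatsapp.com"),
]

def probe(host, port=443, timeout=5.0):
    """Attempt a TCP connection; return (reachable, latency_in_seconds)."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.monotonic() - start
    except OSError:
        return False, None

def classify(results):
    """Crude reading from {label: (reachable, latency)} probe results."""
    reachable = [ok for ok, _ in results.values()]
    if not any(reachable):
        return "possible total shutdown (no targets reachable)"
    if all(reachable):
        return "connectivity looks normal"
    return "selective blocking suspected (some targets unreachable)"
```

An observer app could call `probe` for each target on a schedule and log the verdict from `classify` alongside polling-station reports, giving the wider stakeholder network timestamped, geographically distributed data points.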

Kanbar Hossein-Bor:
Thank you so much, Sarah, really helpful. We’re now going to open up for Q&A. I want to start with folks in the room first. If you can briefly introduce yourself and also briefly set out your comment or question, that would be great. I think the format is, I see a mic in the middle, so maybe we’ve got a colleague already there. The gentleman in the white shirt, if you could start off.

Audience:
Yes, hello. My name is Eugene Morozov, and I represent devoteusa.com, a voting solution. I want to thank this panel for bringing up two very important components of free and fair elections. One is the availability of communication. You talk about the internet, but there are, of course, other ways to communicate, like GSM networks or blockchain networks, which do not use the TCP/IP protocol at all. Then you talked about something else very important, and that is the availability of independent observers and journalists and international organizations, very important. But there are three other critical components of free and fair elections which this panel has not touched upon, so I just wanted to raise them. One of them is true immutability of election results, and that is achieved, for example, by using a blockchain, which is what we use. Then, of course, there is the issue of security and safety, and that is achieved by using cryptographic protection. And you also need scalable networks: to conduct elections in a country with, let’s say, 300 million voters, you must have a scalable network. So my question is: are there any thoughts on those other components of the free and fair election process that this panel is thinking about? And if not, of course, come talk to us. We can help. Thank you.
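As an aside, the "immutability" property raised here can be illustrated with a minimal hash chain, the core idea behind blockchain-style tamper evidence. This is a conceptual sketch only; it describes no vendor’s actual system, and the record fields are made-up examples:

```python
# Tamper evidence via a hash chain: each record is hashed together
# with the previous record's hash, so editing any past record breaks
# every subsequent link. Conceptual illustration only.

import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first link

def chain_records(records):
    """Link each result record to its predecessor via SHA-256."""
    chained, prev_hash = [], GENESIS
    for rec in records:
        # Canonical serialization so the same record always hashes the same.
        payload = json.dumps({"record": rec, "prev": prev_hash}, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        chained.append({"record": rec, "prev": prev_hash, "hash": digest})
        prev_hash = digest
    return chained

def verify(chained):
    """Recompute every link; editing any record invalidates the chain."""
    prev_hash = GENESIS
    for entry in chained:
        payload = json.dumps({"record": entry["record"], "prev": prev_hash}, sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True
```

Real systems add distribution and consensus on top, so no single party holds the only copy of the chain; the hash linkage alone only makes tampering detectable, not impossible.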

Kanbar Hossein-Bor:
Perfect. Thank you, Eugene. Some really powerful reflections there on the wider technological context of elections. I’m going to look to the panel in the room first to see if anyone wants to respond to Eugene. Ben’s volunteered. Ben, over to you.

Ben Graham Jones:
Sure, thanks, Eugene. I do want to keep us close to the remit on internet shutdowns, but just to say that I think there are probably two components to electoral legitimacy. There’s the actual process itself and how that pans out, and then there’s the perception of the process. And election technology is a classic target for disinformation, in part because it’s very difficult to explain your blockchain, and you can’t observe electrons. That makes it quite tricky sometimes, even though there are big tools for building confidence, like risk-limiting audits and cryptographic methods as well. But I think that’s exactly why it’s so important that we fight internet shutdowns: because when you’ve got that sort of disinformation that can be levied against election technologies in particular, you can’t actually fight it if the fact-checkers and the journalists don’t have the ability to do so.

Kanbar Hossein-Bor:
Great, thank you. We have another speaker now. Over to you.

Audience:
Hi everyone, Nikki Muscati. I’m from the US Department of State, Bureau of Democracy, Human Rights, and Labor, and I also serve as our focal point for the Freedom Online Coalition. Thank you so, so much for this panel. I was excitedly writing so many notes, and I have so many follow-ups that I’d love to have with all of you. I have a question that was sparked by some of Nicole’s comments, but I’ll really open it up to the whole panel. One of the things that I’ve also found is pretty consistent with the first finding you noted: the acceptance, it seems, of this tool internationally, really due to the fact that so many governments feel they don’t have many other tools to address what, again, are legitimate concerns. When we go through the list we often see, in Access Now’s KeepItOn reports, of the stated reasons an internet shutdown might be happening, one of the reasons that’s often cited, as an "I can’t believe that’s a reason," is to prevent cheating. So I am a little bit curious, just across the board: what are some of the solutions folks have been thinking about to address what really seem like institutional frameworks and structural issues within governments, issues that leave them unable to address some of these, again, legitimate concerns within the country, so that they turn to the internet and just bluntly use shutdowns to try to address everything, creating so many additional concerns that build on top of the original legitimate ones? Thank you.

Kanbar Hossein-Bor:
Thank you, Nikki, a really important point there about practical alternatives for policymakers. If I may, you suggested that Nicole had touched on some of these points, so I might ask Nicole online if you want to come in on Nikki’s question. Over to you.

Nicole Stremlau:
Sure, thanks for that, Nikki. Well, I think we see it at both levels, and maybe Andréa also wants to come in with what he’s seeing in the Congo. I think we do see it with policymakers not understanding, particularly in markets that are peripheral to the large tech companies. Here I’m not speaking about Kenya; I’m speaking about the Central African Republic, and to some degree about Ethiopia. They don’t have the same channels, the same lines open; they don’t have embassies in Silicon Valley like Germany does. It’s just a very different environment and relationship with these companies, and so they’re also not sure how to engage with them. And I think it’s not only at the level of the companies; it’s also an understanding about technology, an understanding about what other tools they have, how else they can deal with it other than shutting down the internet. And what is very concerning, and this is some of the research, as I mentioned, I just returned from Ethiopia, where we’ve been doing long-standing research in Hawassa and Shashamene, two conflict-affected areas, looking at how communities there are engaging with internet shutdowns and how they see their impact. We have seen there that there is an acceptance of these internet shutdowns, because people are so fed up with the content they’re receiving online, with the massive amounts of online hate speech, with the incitement to violence, and they’re also experiencing violence on the ground. So they’re just saying, and I’m putting it very crudely, our findings are more nuanced than this, but in the interest of ten seconds: they find there aren’t any other alternatives, so they’d rather not be exposed to what they see as inciting real-world violence; they’d rather just have it shut off.

Kanbar Hossein-Bor:
Great, thank you for that. I see some more hands up in the room, we’ve got two speakers, if the lady at the microphone could come in, and then afterwards, the gentleman here, if you could go after, we’ll take these two questions together in a bunch. So over to you.

Audience:
Wonderful, thank you. I’m Sally Wentworth, from the Internet Society, and I want to thank you for this great panel; I learned a lot, and there is a lot to be concerned about. We at the Internet Society are a more technical organization, and we’ve thought hard about what role we can play to support the work that many of you are trying to do to support freedom, democracy, and the free flow of information. Where we stand is that we like to look at it from the perspective of what we can see in the internet: is there information we can observe about shutdowns, about data flows, about cross-border connectivity, and can we make that information available in digestible ways that you all can use in your advocacy and promotion of democracy and free elections? Sarah, I was particularly struck by your comment about putting this in a broader context of impact: not just the immediate term with respect to the election, but what ongoing shutdowns do to a country’s economy. If we see governments saying we want to be an online economy, we want to be a digital marketplace, we want to have all these opportunities, but there’s no reliability of connectivity, that makes for a very difficult investment climate. And so that’s some of what we’re trying to do. We have a tool called Pulse, a little bit of a shameless plug, but really what we’re trying to do is create resources that are useful for activists doing this kind of important work. So I want to thank you for that and express our support and willingness to be helpful in this.

Kanbar Hossein-Bor:
Thank you so much. I’ll take the two further questions in the room in a bunch, and then we’ve got a hand up in the virtual room as well, and then we’ll do one final round of reactions from the panel. But over to yourself.

Audience:
Hi, I’m Jamil. I’m a barrister, but I’m also policy counsel for many of the tech companies in Pakistan. One of the things I found very effective was to actually run a timer, a clock that shows how much money is being lost in real time. It worked really well with ministers and other policy folks. My question really is to Nicole. I completely understand there are certain things we’re also seeing in countries like Pakistan, where there are religious ceremonies or religious days where there could be very serious violence, and so handling the internet in some way becomes important; and if they don’t know what to do, they will shut it down. That’s what’s happening every single time. My concern is that while we acknowledge that as a legitimate concern, what I’m hearing constantly in this room is this idea that there are actually good reasons. I’m concerned about certain governments, maybe not in Central Africa but in other places, who might take heed of this and say: wonderful, we have people who agree with us. So I’d just like to make sure we balance that out a little bit. Thank you so much.
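The "cost clock" described here can be approximated with a crude GDP-share formula. The sketch below is illustrative only; real estimators, such as the Cost of Shutdown Tool (COST), use a richer methodology, and every figure in this example is a made-up placeholder:

```python
# A hedged sketch of a shutdown "cost clock": prorate the internet-
# attributable share of GDP by the hour. Not any real tool's formula.

HOURS_PER_YEAR = 365 * 24  # 8760

def shutdown_cost_per_hour(annual_gdp_usd, digital_share, scope=1.0):
    """Rough hourly loss from a shutdown.

    annual_gdp_usd -- country GDP in USD (placeholder input)
    digital_share  -- fraction of GDP attributable to the internet economy
    scope          -- fraction of connectivity affected (1.0 = total blackout)
    """
    return annual_gdp_usd * digital_share * scope / HOURS_PER_YEAR

def running_total(hourly_cost, hours_elapsed):
    """The figure a live clock would display after a given duration."""
    return hourly_cost * hours_elapsed

# Illustrative only: a $30bn-GDP economy with 4% of GDP online.
hourly = shutdown_cost_per_hour(30e9, 0.04)
```

Driving a visible counter with `running_total` as the hours tick by is what makes the argument land with ministers: the loss is framed in currency per hour rather than as an abstract rights claim.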

Kanbar Hossein-Bor:
Thank you. A really good point there. And last but not least, over to you, sir. Thank you.

Audience:
Thank you. Let me introduce myself. This is Ganesh Pandey. I work for the government of Nepal, in the Prime Minister’s Office. When talking about free and fair elections and the internet, right now we focus only on internet shutdowns, but we should not forget that free and fair elections need a comprehensive approach. There is the government, there are internet service providers, there are political parties, and there are also the citizens. Sometimes the government intentionally or purposefully controls the media and the internet to serve some vested interest or to hide information. In that case, how can we make the government accountable? Sometimes the public criticizes a leader or a candidate in the elections. If that is disinformation, within one hour it spreads so fast that the image of, or the trust in, that leader goes down immediately. And we don’t have access to the internet service provider to check or restrict that disinformation, or to flag that it is false information. So how can we make internet service providers responsible, through the use of AI or other tools, so that such disinformation does not spread on social media? And the second thing is: how can we make the government more accountable and responsive through the use of digital technology? This is also very important. Thank you very much.

Kanbar Hossein-Bor:
Thank you very much. Some really important reflections from a policymaker’s perspective. I’m going to ask the panel to come in; I’ll introduce you and ask you to come in. But first, I think we have Andréa, one of our speakers online, who would like to come in. So Andréa, did you want to come in on those points, or another point?

Andrea Ngombet:
Yeah, thank you. So in the case of Congo, it was Sassoufit that reached out to META to have a kind of task force on the elections. And it’s not that we are trying to justify the reasoning of the government; we are identifying a pain. The pain is that there is hate speech from the opposition and the government; there is misinformation from every side. So as a civil society organization, we need to be the referee in the middle ground. And if we can, as a takeaway for the group, have more civil society organizations reaching out to those big corporations and doing this online moderation at the local level, because those companies won’t do it themselves, we need to push them. And by doing so, we will erase that argument of the government that the internet means violence, because we will have local civil society working against the hate speech, working against the incitement to violence. And if more of us are doing that work, the government won’t have this supposedly legitimate argument to say: oh, nobody is doing that work, so we have to shut down the internet. This is our way to address that specific problem.

Kanbar Hossein-Bor:
Thank you, Andrea. I think that addresses one of the points made just now about how to engage with internet providers and content platforms. But I think we had a question from the Internet Society around data and the economy and making that argument. Maybe I could ask Sarah to respond to that. And then we had a comment from our colleague based in Pakistan about the dangers of potentially giving arguments to those states who don't have, put bluntly, the best of intentions in this area, and the unwitting power we might hand over to them. Maybe I can ask Felicia to respond to that. So, Sarah and then Felicia.

Sarah Moulton:
Yeah, all I would say is thank you to the Internet Society. I know that there's been a lot of work done lately, especially through the discussions from the Summit for Democracy, the platform that's come out of that or been strengthened through it, and also the cost of internet shutdowns tool. Well, that's a different title, but that tool, I think, is really critical. And it's really about getting it into more hands, and about how we make sure that the data is accurate and reflects the local context, because that's the other situation we face. If we're going in and speaking to a particular policymaker, they want to make sure it reflects their situation and their context. And as I said, my main point is that these conversations, this work, need to start now, particularly for the elections coming up. My question is still: how can we have this collaboration point in advance, sitting down, figuring out what data you have and what data we have from on the ground, and making sure we're all sharing the information we're collecting and working with the right actors, whether it's policymakers, civil society, ISPs, the tech platforms, strategic litigators, or international bodies like the FOC? This is very critical for raising the alarm, and all of this data comes together to make the case. So thank you for that. I'm not sure if I'm answering this particular question, but I just want to note the importance of that platform and how much we value it.

Felicia Anthonio:
Yes. So for us in the #KeepItOn coalition campaign, we haven't seen any evidence of shutdowns contributing to resolving the crises that governments tend to cite. When you shut down the internet during conflicts in response to dangerous content being flagged online, it only escalates the crisis. It endangers more people. It provides an opportunity for governments and perpetrators to commit heinous crimes against people with impunity. And so we believe that what needs to be done is that, yes, where there is violent content on platforms, big tech companies need to be responsible in taking down violent, hateful, or dangerous content in order to keep people safe. And so I just want to emphasize that the #KeepItOn Coalition denounces all forms of shutdowns. We haven't seen shutdowns serve as a solution to any form of violence anywhere around the world. If anything, what shutdowns do, as I said, is provide an opportunity for governments and warring parties to perpetrate heinous crimes against people around the world. And so I just want to make this very clear on behalf of the #KeepItOn Coalition. Thank you.

Kanbar Hossein-Bor:
Thank you very much. We’re really reaching the end of it. I’ve got the unenviable task of trying to sum this up in about a minute or two. Three very quick points from me. Firstly, a big thanks to our speakers for coming in and setting out this very complicated issue for us. And also a big thank you to all of you, both online and in the room, for engaging in this. Secondly, I think for me, this is a reminder of the importance from a principles-based perspective. Namely, internet shutdowns pose a massive threat to not only the ability to exercise offline rights online, but they also pose a threat to the wider democratic process. fabric of society and also they pose significant economic costs to societies as well. Finally and the third more positive hope I want to end on is to say that there’s a lot of good intentions I’m hearing across this discussion about trying to support policymakers where they might not have the capacity but they have the intent to address these issues but also recognizing that we should stand firm in the face of those who actually don’t have the best of intentions here and next year we potentially have the fate of 2 billion people in about 50 or so elections to consider and the need to stand up for that on a norms-based basis. In that regard I really want to remind everyone of the FOC statement that we launched today. That’s a start about two weeks ago as part of the UNESCO International Day for Universal Access Information. There’s an Oxford statement also I like to bring your attention to I hope it will be on screen which highlights the comprehensive impact of these issues together and finally we hope through the FOC and the task force internet shutdowns we can through a multi-stakeholder approach bring all the expertise together the data together to come up with some practical measures to try and address the significant challenges that not only happening today but also will be facing collectively next year as well. 
So thanks again to all our speakers and to all of you for joining this session.

Andrea Ngombet

Speech speed

147 words per minute

Speech length

932 words

Speech time

381 secs

Audience

Speech speed

168 words per minute

Speech length

1401 words

Speech time

501 secs

Ben Graham Jones

Speech speed

177 words per minute

Speech length

1299 words

Speech time

441 secs

Felicia Anthonio

Speech speed

140 words per minute

Speech length

1276 words

Speech time

547 secs

Kanbar Hossein-Bor

Speech speed

162 words per minute

Speech length

2009 words

Speech time

744 secs

Nicole Stremlau

Speech speed

181 words per minute

Speech length

1330 words

Speech time

440 secs

Sarah Moulton

Speech speed

178 words per minute

Speech length

1392 words

Speech time

468 secs

Disrupt Harm: Accountability for a Safer Internet | IGF 2023 Open Forum #146


Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Moderator – Alexandra Robinson

Women and girls are subjected to significant levels of online harassment and gender-based violence, as highlighted by the analysis. This underscores the urgent need to prioritize their safety in digital spaces. Alexandra Robinson emphasizes the importance of addressing this issue through a combination of law, policy, and civil society movements.

There is a growing awareness in the international community regarding the prevalence of technology-facilitated gender-based violence. The Commission on the Status of Women dedicated a session to the intersection of gender and technology, and global outcomes documents now incorporate language on gender-based violence. These developments indicate an increasing focus on addressing this issue at a global level.

Several countries are taking action by implementing laws and policies to combat technology-facilitated gender-based violence, demonstrating their commitment to protecting the rights of women and girls. This is a positive step toward achieving gender equality and ensuring the safety of women online.

In addition, there is support for the progression of policy and legal systems concerning gender-based violence, highlighting the need for robust frameworks to effectively address and prevent such violence. This recognition of the importance of institutions and mechanisms to disrupt online harm experienced by women is encouraging.

In conclusion, the analysis highlights the high rates of online harassment and gender-based violence faced by women and girls. It emphasizes the significance of prioritizing their safety in digital spaces and the role of law, policy, and civil society movements in addressing this issue. The international community’s increasing awareness of technology-facilitated gender-based violence and implementation of relevant laws and policies offer hope for meaningful change. Achieving gender equality and combating gender-based violence require continued efforts and support for the progression of policy and legal systems.

Karla Velasco

The Association for Progressive Communications (APC) and its member organisations are actively involved in addressing various aspects of women’s rights, sexual rights, and feminist movements. Their work spans approximately 40 countries, mainly in the global south. A significant achievement of APC and its members is the successful recognition of online gender-based violence as a violation of human rights in 2022. This recognition is a result of their continuous efforts and advocacy.

APC aims to create a gender-inclusive internet that goes beyond providing access to the online world. They highlight the importance of understanding the challenges faced by women, as well as individuals from diverse genders and sexualities, online. Critical issues that APC addresses include online gender-based violence and technology-facilitated gender-based violence. They believe that discussions on these topics should not only raise awareness but also focus on the response and remedy for victims and survivors.

Intersectionality is another key focus for APC. They assert that a gender-inclusive internet should consider factors such as race, gender, identity, sexuality, class, and ethnicity. By highlighting these aspects, APC aims to create a comprehensive and inclusive digital space that addresses the needs and concerns of all individuals, regardless of their social backgrounds.

APC promotes a vision of transformative justice, emphasising values such as pleasure, sexuality, joy, and freedom of expression. They believe that promoting a more positive and empowering narrative around gender issues online can lead to societal transformation that respects and upholds the rights and dignity of all individuals.

One important observation is that APC urges the discussion to move beyond acknowledging and condemning online gender-based violence towards implementing measures that provide support and remedies for victims and survivors. They call for comprehensive discussions and actions on victim support and advocacy to ensure that those affected receive the necessary assistance and justice they deserve.

In conclusion, APC and its member organisations play a crucial role in advancing women’s rights, sexual rights, and feminist movements. Through their advocacy and initiatives, they have been instrumental in recognising online gender-based violence as a human rights violation. APC’s emphasis on a gender-inclusive internet, intersectionality, and transformative justice demonstrates their commitment to creating a more equitable and empowering digital world. Their call to prioritise victim support and remedies further reinforces their dedication to addressing the needs and challenges faced by individuals affected by online gender-based violence.

Martha Lucía Micher Camarena

The analysis highlights a concerning issue in Mexico, where digital violence is affecting women, adolescents, and girls. Startling statistics reveal that three out of ten women internet users in Mexico have become victims of cyberbullying. Furthermore, a staggering 74.1% of women victims of digital violence are between the ages of 18 and 30.

What’s more alarming is that the majority of the aggressors responsible for these acts of digital violence are known individuals, with former partners being the main culprits, accounting for 81.6% of the cases. This indicates that digital violence is not a random occurrence but often involves individuals with intimate or prior relationships with the victims.

Recognising the seriousness of this issue, the call for legislation to provide safety for women in digital spaces has been raised. One positive aspect is the existence of the Gender Equality Committee in the Mexican Senate, chaired by a prominent figure who is actively working towards this cause. The committee has successfully enacted important reforms that define digital violence and establish regulations for protection orders in cases of digital violence.

However, despite these positive steps, challenges remain in the judicial system, public ministries, and amongst judges. These institutions and individuals pose significant obstacles to achieving gender equality. Lack of adequate understanding, biases, and systemic issues still prevalent in the judicial system hinder progress in addressing gender-related issues effectively.

On a more positive note, the analysis also highlights significant progress made in the realms of gender equality and women’s rights over the past three decades. This progress is evident considering the participation in the Beijing 1995 Conference, which focused on gender inequality and highlighted various gender-related topics. Notably, these topics were once considered ‘crazy’ but are now internationally recognised areas of concern and focus.

In conclusion, the analysis sheds light on the issue of digital violence affecting women, adolescents, and girls in Mexico. Legislation is urgently needed to ensure their safety in digital spaces. Although advancements have been made in this regard, challenges in the judicial system and among public ministries persist, hindering progress towards gender equality. Despite these challenges, notable progress has been achieved in gender equality and women’s rights, with gender-related issues now receiving international attention and recognition.

Audience

This analysis explores several crucial topics related to gender-based violence, social justice, and the intersection of digital technologies. The speakers discussed the various risks and opportunities presented by the internet in combating gender-based violence and promoting social justice.

One speaker highlighted the work of the NGO Derechos Digitales, which operates at the intersection of human rights and digital technologies. They argued that the internet is a place of both risk and opportunity. On one hand, it allows for greater visibility and the potential for addressing social justice issues. On the other hand, it also exposes individuals to risks and potential harm, particularly in relation to gender-based violence.

Another speaker focused on the need for sensible legislation, enforcement, and understanding to address technology-facilitated gender-based violence. They emphasized that standardizing such legislation is currently under discussion. However, they also noted that the legislative aspect alone is not enough to combat this issue effectively.

In connection with this, the importance of legal frameworks that consider the privacy, freedom of expression, and access to information of survivors was raised. The speakers argued that it is not just the rights of offenders that should be considered, but also the rights and protection of those who have experienced gender-based violence.

Furthermore, an intersectional approach, which takes into account contextual and social differences, was advocated. The speakers acknowledged that social problems disproportionately affect individuals in vulnerable situations. Therefore, any efforts to address gender-based violence and promote social justice must consider these differences and work towards a more inclusive and equitable solution.

Lastly, the analysis included a notable call for age-based protections, particularly for adult women, within the legal system. It was highlighted that while there are existing protections for children up to the age of 18 in the speaker’s country, violence against adult women is often normalized and they are not always recognized as victims. This observation emphasizes the need for a comprehensive approach to tackling gender-based violence and ensuring justice for all individuals affected by it.

In conclusion, this analysis highlights the multifaceted nature of gender-based violence and the need for comprehensive strategies to combat it. It underscores the importance of legislation, legal frameworks, and an intersectional approach in promoting social justice and addressing the risks and opportunities presented by digital technologies. Additionally, it raises awareness about the need for age-based protections, especially for adult women. By considering these factors, society can take meaningful steps towards creating a safer and more equitable environment.

Julie Inman Grant

The analysis explores several crucial aspects of online harassment and the urgent need for effective measures to combat it. One notable observation is the disapproval of the term ‘revenge porn’, which is deemed to trivialize and victim-blame. Instead, there is an argument to adopt the term ‘image-based abuse’ to better convey the seriousness and harm caused by such actions. This emphasises the importance of using language that accurately depicts the nature and impact of online harassment.

Another significant finding is the intersectional nature of online harassment. The analysis highlights that indigenous Australians experience twice the amount of online hate compared to other groups. It also reveals the different challenges faced by urban and rural indigenous populations, as well as culturally and linguistically diverse communities. This underscores the necessity of understanding and addressing the unique vulnerabilities and perspectives of these groups to effectively tackle online harassment.

The analysis further emphasises the importance of co-designing preventive solutions with vulnerable communities. It stresses the need to consider diverse experiences and vulnerabilities when designing mechanisms to prevent online harassment. This promotes a more inclusive approach that is better equipped to address the specific challenges faced by different groups, thereby increasing the effectiveness of preventive measures.

Furthermore, the analysis highlights the successful implementation of deterrent powers in curbing online abuses. It indicates a 90% success rate in removing abusive content, which is a positive outcome. Moreover, it suggests that women who sought help had positive responses, affirming the effectiveness of these measures in providing relief and protection to victims.

Finally, an important observation from the analysis is the willingness of the eSafety Commissioner to collaborate internationally. Recognising that online harassment is a global issue, the Commissioner acknowledges the importance of a global approach to addressing it and ensuring a safer online environment for all. This demonstrates the recognition of the need for partnerships and information sharing to effectively tackle online harassment.

In conclusion, the analysis underscores the need for a comprehensive and inclusive approach to combat online harassment. It highlights the importance of using appropriate language, understanding the intersectional nature of online harassment, co-designing preventive measures with vulnerable communities, implementing effective deterrent powers, and collaborating internationally. These insights provide valuable guidance in tackling the complex issue of online harassment and ensuring a safer online environment for everyone.

Juan Carlos Lara Galvez

The internet is a space that presents both risks and opportunities. In the context of social justice and combating gender-based violence, the internet has provided a platform for giving visibility to social demands. It has allowed for the amplification of voices and the dissemination of information related to these issues. This is a positive sentiment as it signifies the potential for social change and progress.

To effectively address technology-facilitated gender-based violence, legal frameworks should take a balanced perspective. This means considering the rights of individuals, including privacy, freedom of expression, and access to information of survivors. This approach recognizes the need for sensible legislative efforts and standards that uphold these rights while addressing the issue of gender-based violence. It is a positive stance that acknowledges the importance of striking a balance between protecting survivors and ensuring their rights are respected.

In addition, an intersectional approach is necessary to address the contextual and social differences that exist within gender-based violence. This understanding recognizes that certain issues disproportionately affect people in situations of vulnerability. It highlights the need for a comprehensive and inclusive approach that takes into account factors such as race, class, and sexuality. In particular, it acknowledges that women also face such issues in the public sphere, further emphasizing the importance of an intersectional perspective. This stance is positive and highlights the significance of considering various dimensions of identity and vulnerability in addressing gender-based violence.

However, it is important to note that legislation alone cannot fully resolve complex social issues. While legal frameworks are a crucial component, enforcement and understanding throughout the system are equally important. This neutral sentiment indicates that a comprehensive solution entails not only enacting laws but also ensuring their effective implementation and creating a deeper understanding of the underlying causes and dynamics of gender-based violence. It is a reminder that a multifaceted approach is needed to address the complexity of these social issues effectively.

In conclusion, the internet has the potential to serve as a platform for social change and combating gender-based violence. Legal frameworks should take a balanced perspective, considering the rights of individuals, while addressing technology-facilitated gender-based violence. An intersectional approach is necessary to address the contextual and social differences that exist within gender-based violence and other social issues. However, it is essential to recognize that legislation alone is insufficient in resolving complex social issues and that enforcement and understanding are crucial factors in achieving meaningful change.

Eiko Narita

The analysis highlights several important points related to internet governance and the fight against harm online. One of the main arguments is the significance of multi-stakeholder conversations in this endeavor. These conversations involve various stakeholders such as governments, regulatory bodies, civil society organizations (CSOs), businesses, and rights-based organizations. By including diverse perspectives and expertise, multi-stakeholder conversations can lead to more effective strategies and solutions for combating harm on the internet.

Civil society organizations (CSOs) are specifically emphasized as crucial entities in internet governance. Their role in giving voice to ground realities is recognized, and organizations like APC are mentioned as examples. These CSOs play a vital part in ensuring that the internet remains a safe and inclusive space for all users.

Accountability in online digital technology crimes is identified as a significant challenge. The analysis highlights that holding individuals accountable for online crimes is much more difficult than accountability for crimes against humanity such as genocide. This observation sheds light on the complexities associated with addressing online crimes and the need for robust systems and mechanisms to ensure accountability.

The importance of continuing to use platforms like the Internet Governance Forum (IGF) is emphasized. These platforms provide spaces for interaction and the amplification of important voices. By engaging in ongoing discussions and collaborations through platforms like IGF, the momentum in addressing issues related to internet governance can be sustained.

Additionally, the analysis includes the UNFPA’s efforts to end gender-based violence. It is stated that the UNFPA is actively working with governments and policymakers in this regard. Their commitment to tackling this issue aligns with the Sustainable Development Goal 5 on Gender Equality.

Another noteworthy observation is Eiko Narita’s stance on cybercrime and online harassment. Narita emphasizes that if something is not acceptable to do in person, it should not be tolerated online either. This highlights the importance of creating a safe and respectful online environment and holding individuals accountable for their actions.

Overall, the analysis underscores the importance of multi-stakeholder conversations, the crucial role of civil society organizations, the challenges in accountability for online crimes, the significance of continuing to use platforms for interaction, the efforts of the UNFPA in combating gender-based violence, and the need to address cybercrime and online harassment. These insights shed light on the complexity of internet governance and the ongoing efforts to create a safer and more inclusive internet for all.

Sherri Kraham Talabany

Women and girls in Iraq and across the Middle East face significant risks of online violence, which are exacerbated by the high internet penetration in the region and social conservatism. This online violence poses serious threats to their safety and well-being. Approximately 50% of women and girls in Iraq have either experienced or know someone who has experienced online violence, highlighting the prevalence of this issue.

The consequences of online violence in Iraq are not limited to the digital sphere but often result in real-life tragedies, such as honor killings and increased rates of suicide. This demonstrates that the impact of online violence extends beyond virtual interactions, causing physical harm and loss of lives. Urgent interventions are necessary to address this issue effectively.

To tackle this pressing concern, a nationwide task force has been established in Iraq. This task force focuses on human rights-based legislation and policy to combat Technology-Facilitated Gender-Based Violence (TFGBV). Its objectives include enhancing access to safe and confidential reporting facilities for victims and survivors, as well as promoting skilled investigations into cases of online violence. The task force also aims to train local non-governmental organizations (NGOs) to better understand and respond to these unique crimes. These efforts represent positive steps towards providing support and justice to victims of online violence.

Tech companies play a crucial role in addressing and combating online violence. They are urged to establish survivor-centered, rights-focused redress systems that take into account how online violence in the Middle East can lead to real-world harm. Understanding the manifestations of online violence across the region is essential for developing appropriate responses suitable for the unique environment. Tech companies should proactively contribute to creating a safer online space for women and girls in the Middle East.

When formulating internet governance frameworks, it is vital to consider the unique challenges faced by women in the Middle East due to online violence. These challenges should be integrated into emerging policies or regulations concerning internet governance. Broad governance mechanisms must be incorporated to address the specific considerations and situations encountered by women in the Middle East. By doing so, a more inclusive and supportive online environment can be created, prioritizing the rights and safety of women.

In conclusion, women and girls in Iraq and across the Middle East face significant risks of online violence, with high internet penetration and social conservatism exacerbating the issue. The establishment of a nationwide task force in Iraq dedicated to addressing TFGBV represents a positive step towards combatting online violence. The involvement and commitment of tech companies are crucial for establishing survivor-centered redress systems and developing appropriate responses to effectively tackle this issue. Furthermore, integrating the unique challenges faced by women in the Middle East into internet governance frameworks is essential to create a safer and more equitable online space.

Session transcript

Moderator – Alexandra Robinson:
This is working? It’s working? Okay, thank you. Okay, it feels quite awkward sitting, but hi everybody. Thank you for joining today. My name is Alexandra Robinson. I’m the Gender-Based Violence Technical Advisor for the United Nations Population Fund, UNFPA. And today we welcome everyone to our event on Disrupt Harm, Accountability for a Safer Internet. So ending gender-based violence and harmful practices is at the center of what UNFPA do. And increasingly in a digital world, we realize that we can’t achieve that without ensuring that all women and girls are safe in all spaces, including online spaces and through their use of technology. So we are hosting the event today to explore those mechanisms through which law and policy and civil society movements are operating to disrupt that harm. experienced by women in online spaces and technology. And we’re gonna hear from a really amazing panel. I feel really privileged to be sitting here with such phenomenal people. But we will hear from a range of different perspectives, their wealth and experience across their work in doing exactly this and disrupting harm. We’ll then open for a Q&A both with online, we have an online presence, so we’ll have a Q&A with you in the room, but also with Q&A for people online. So, and we’re a relatively small room, so please don’t be shy in taking the microphone and asking. With that, I will turn to our first panelist, who is Senator Mata Lutia Misheka Morena, known as Malu Mishe. She is a staunch feminist. She is the Morena Senator for Guanajuato. She is a mother. She has been a federal representative on three occasions. Currently a legislator in the Congress of Union representing the state of Guanajuato. And she will speak specifically around the legal measures and regulations implemented in Mexico for the prevention and response of technology-facilitated gender-based violence. Thank you, Senator.

Martha Lucía Micher Camarena:
Thank you. Thank you, Alexandra. Thank you for inviting me. Nice to meet you, and thank you very much for this invitation. Good afternoon. I am Martha Lucía Micher Camarena, a Mexican Senator, and today I want to share the current situation of women, adolescents, and girls regarding information and communication technologies. I want to address an important and troublesome issue: digital violence. According to the UN, three out of 10 women internet users in Mexico have been victims of cyberbullying; that is approximately 10 million women. In addition, the National Front for Sorority and Digital Defenders, a Mexican civil society organization, an NGO, has indicated that 19 out of every 100 victims of digital violence are women, pointing out that 74.1% of women victims of digital violence are between 18 and 30 years old, 72.3% are university students, and in 81.6% of cases the aggressor is a known person, mainly a former partner. Among the main behaviors reported in this violence are dissemination of intimate content without consent, threats of dissemination, harassment and/or sexual harassment, extortion, sexual assaults not related to sexual intimacy, distribution of child pornography, production of intimate content without consent, dissemination of personal data offering sexual services without consent, and identity theft. The main formats in which digital aggressions occur are intimate photo-sharing groups or websites, direct messages, creation of fake profiles, and attacks from fake profiles. Currently, I chair the Gender Equality Committee in the Mexican Senate, a legislative space that has allowed me to create, contribute to, and adapt legislation to current times. Thus, we are not only concerned, but we have also legislated important reforms that provide for the safety of women in digital spaces. The reform was approved unanimously. How do you say unanimity? Everyone, uh-huh, see. The reform entails the following.
First, it defines digital violence as any malicious action carried out through the use of information and communication technologies by which images, audios, or real or simulated videos of intimate sexual content of a person are exposed, distributed, disseminated, exhibited, transmitted, marketed, offered, exchanged, or shared without their consent, approval, or authorization, causing psychological and emotional harm as well as damage to any area of the person’s private life or their own image. It also includes those malicious acts that cause damage to the intimacy, privacy, and/or dignity of women and are committed through information and communication technologies. Second, it regulates protection orders for digital violence cases, under which the public prosecutor’s office or a judge will immediately order the necessary protection measures, requiring digital platform companies, the media, social networks, website pages, individuals, or companies to interrupt, block, destroy, or delete images, audios, or videos related to the investigation. And third, it adds the crime of violation of sexual intimacy, punishable by a penalty of three to six years in prison, for anyone who discloses, shares, distributes, or publishes images, videos, or audios of intimate sexual content of a person of legal age without that person’s consent, approval, or authorization, as well as for anyone who videotapes, audiotapes, photographs, prints, or develops images, audios, or videos with intimate sexual content of a person without their consent, approval, or authorization. Well, I am convinced that one of the best ways to achieve safety for women, adolescents, and girls is to provide an applicable legal framework to face situations that cause serious harm to their lives. Never take one step back on women’s rights. Thank you very much.

Moderator – Alexandra Robinson:
Thank you. Thank you so much. I think that set the stage for the entire event very well. I will now introduce the other panelists who’ll be speaking with us today. We have Sherri Kraham Talabany, who is sitting right next to the senator and is the Executive Director of the SEED Foundation. Sherri is a human rights lawyer with over 20 years of experience as a policymaker, program manager, and advocate for gender and human rights and social justice. Today, she’ll be speaking to us specifically about contemporary legal frameworks and political discourses on technology-facilitated GBV in Iraq. And then, sitting on the other side of Sherri, we have Karla Velasco Ramos, the Policy Advocacy Coordinator at the Association for Progressive Communications. Karla has many years of experience in internet access, gender, and technology, and with APC plays a crucial role in convening CSOs, tech companies, and online platforms to address TFGBV. And then we will be speaking with the eSafety Commissioner, Julie Inman Grant, head of one of the only regulatory bodies of its kind in the world committed to keeping citizens safer online. The eSafety Commissioner has extensive experience in the non-profit and government sectors and has spent two decades working in senior public policy and safety roles in the tech industry, including at Microsoft, Twitter, and Adobe. As Commissioner, she plays an important global role as the chair of the Child Dignity Alliance’s technical working group and a board member of the WeProtect Global Alliance, and she also serves on the World Economic Forum’s Global Coalition for Digital Safety and on their ecosystem governance steering committee on building and defining the metaverse. I’m not sure.
And finally, we will conclude our panel discussion with Juan Carlos Lara, the Executive Director at Derechos Digitales, an organization working at the intersection of human rights and digital technologies. He is a lawyer by training with experience in legal and policy analysis and research on data privacy, surveillance, freedom of expression, and access to knowledge in the digital environment. So with that, I will now turn to you, Sherri. Thank you for being with us.

Sherri Kraham Talabany:
Thank you for hosting us. I think at the conference so far we’ve seen everything at a 10,000-foot level; you’ve been talking a lot about government structures and platforms, and I really wanted to drill down on some of that. Thank you. Surprisingly, I’m not very tech-savvy myself. So what I really wanted to do is drill down on what online violence means in Iraq, but I think also across the Middle East, because it’s a region where we see very high internet penetration but also very high rates of gender inequality and extremely conservative norms, which creates unique vulnerabilities for people who already have high vulnerabilities, and exacerbates those. So unique vulnerabilities to TFGBV, with real-life disastrous consequences for women and young girls. We see TFGBV as endemic and increasing across the Middle East and Iraq. Iraq is the fourth-worst country in the world when it comes to women’s peace, security, access to justice, women’s rights, and their safety, according to the Women, Peace, and Security Index, and that relates to their participation in every aspect of life. We have the highest rate of intimate partner violence in the world: 45% of women face violence in their home, so women aren’t safe at home. And we have conservative norms that shape and constrain what women and girls can do and how they behave, and we see adolescents and young women with extremely limited freedoms spending a lot of their free time online. We have the largest gender gap globally. It shows up in the economic sphere, in political participation, in education and health attainments, in their very survival, and we have very limited protections in place. At the same time, Iraq is very well connected. Seventy-five percent of the population are active on the internet. Almost everybody has a cell phone.
The gaps between women and men exist for sure, with the biggest gap in connectivity for rural women, and with women lagging behind in terms of digital literacy. Nonetheless, 50 percent of women and girls in Iraq say that they have experienced TFGBV or know someone who has experienced it. In this context, with these social and cultural dynamics, women and girls are extremely vulnerable to online violence, with a high likelihood that this violence shows up offline as well. So what are we worried about? Much as the senator just described: harassment, abuse, exploitation, trafficking. We also see these phenomena lead to murder, honor killings, and increased rates of suicide. So what do we see? We see image-based abuse, just as you described: a private photo, image, or film, sometimes real and sometimes manipulated, used for sextortion, used to traffic, used against women and girls of every economic stratum in our society. Besides the violence that women and girls face from the perpetrator, the person abusing them online, we also see them face extremely high rates of violence in their home life as a result of this threat and this violence. If their families find out, it can lead to honor-based violence and murder, and it has; we have many cases of this. Harassment, threats, and defamation are directed against women and girls generally, but they are especially a risk for women in the public space: academics, politicians, NGO leaders, and women of every walk of life, and they are intended to inhibit and constrain women and girls’ participation. And so we see them being harassed and intimidated online. We’ve seen a spate of murders of social media influencers for dressing in ways perceived to be provocative or for smoking, punishing them for going outside social norms. So it’s violent, and it’s scary, and it’s intended to keep women’s representation and participation low, and it’s very effective at that.
We have other challenges with predatory practices, including against children through gaming, child pornography, and child trafficking rings, but these are less documented and well known. And of course, the most obvious and horrific abuse is that we saw women sold like chattel by ISIS online, which fostered the trafficking of women during the ISIS crisis and continues online even today. So what do we need to do to address it? My organization two years ago started a nationwide task force, I think the only nationwide one, called the TFGBV Task Force, and you can find out about and connect to our task force here. We’re focused on human rights-based legislation and policy across the Middle East and Iraq. Legislation to protect against these harms is often used to decrease public expression and free media, and the response tends to be rules that inhibit and criminalize public expression. So we need to focus on the crime, but not on expression. We need increased access to safe and confidential reporting, along with investigations and protections from designated agencies with clear mandates and skilled personnel. We don’t have that in Iraq today. We do have some legislation, but we don’t have a designated agency, and we certainly don’t have investigations that are skilled or experienced. We also need skilled and experienced NGOs that understand this unique kind of crime and how it impacts women and girls across Iraq and, of course, the Middle East. And this requires serious training and capacity building, which we are undertaking. And then finally, we need to focus on the tech companies. They need to have proper redress that is both survivor-centered and rights-focused, including child-rights-focused, that understands how this type of online violence manifests as real-world violence across the Middle East in a unique way, and they need to develop appropriate, safe responses for the environment that we face.
So to close, we really need a regionalized local response in whatever internet governance architecture that emerges from these forums, whether it’s the Global Digital Compact or other thread, we need to address in these broad governance mechanisms the unique violence and considerations that we face across the Middle East. Thank you.

Moderator – Alexandra Robinson:
Thank you so much, Sherri. And really to build on your work as a CSO in Iraq, I’ll now turn to the Association for Progressive Communications, who have demonstrated a longstanding ability to mobilize communities and community organizations around the issue of addressing tech-facilitated GBV. I wondered if you could speak to the role of APC in shaping those movements, but also perhaps to some of the voices that you think might be missing from them.

Karla Velasco:
Yes, thank you. So I am Karla, and I’m the Policy Advocacy Coordinator of the Women’s Rights Programme, which is part of the Association for Progressive Communications. So today I’m going to speak on behalf of WRP and APC. The Association for Progressive Communications is a network, a members’ organization. We have around 70 member organizations that work in approximately 40 countries around the world, most of them in the global South, or the majority world. The work that we have done with our member organizations since APC’s inception, which was almost 30 years ago, has been, through the Women’s Rights Programme, to work on women’s rights, sexual rights, and feminist movements. And back when we started the Women’s Rights Programme, language like “online gender-based violence” didn’t even exist, right? So it is a celebration for us that, 25 years later, we get to see these issues on the agenda; we get to see that different governments are taking these very important issues into account, and they are being mentioned right now at the Internet Governance Forum, in the Global Digital Compact, and in the different feminist foreign policies that are currently pushing this subject. So for us, it has been a major achievement to have this. In 2022, online gender-based violence was successfully recognized as a human rights violation, thanks to the work of member organizations together with APC and with other organizations from the feminist movements, and it has been successful work for us to be able to find a pathway between feminist organizations and digital rights organizations, because that’s also a very big struggle right now.
So for us it is very important to bring into the digital space the voices of women and people of diverse genders and sexualities. And something that is very crucial right now is that, even though there is a discussion between the terms “online gender-based violence” and “technology-facilitated gender-based violence,” we need to go beyond the discussion of the term, and we really need to discuss response and remedy for victims and survivors where they are. For example, one of the things that I want to highlight here is that in many of the discussions we hear the phrase “access and digital skills for women and girls” as a possible solution to the gender problems that we have, and my urging here is: please go beyond that, because access is only part of the problem. What we really need to look at is the usage of the internet: how women and people of diverse genders and sexualities are connected, the issues that we face online, and the differentiated effects we experience when we are using the internet. And that crosses intersectionality: where we come from, where we are connecting from, and how it intersects with race, gender identity, sexuality, class, and ethnicity. We need to take all of these things into account. Once you look beyond the gender gap, you get to see that there are a lot of complexities around it, and we really need to focus on this. This is what the members are currently asking us to do: to bring the conversation beyond that, to bring technology-facilitated gender-based violence and gender disinformation into the discussion, but also to change the narrative a little bit, because we always think about the negative things, and we always see the negative effects and impacts. But in APC, for example, we have a vision of transformative justice.
So the proposal that we have here, and that we also show in our Feminist Principles of the Internet, is that by bringing in values such as pleasure, sexuality, joy, and freedom of expression, we get to change the narrative of how we see these issues that we are currently facing as women and people of diverse genders and sexualities. So my time is up. Thank you very much.

Moderator – Alexandra Robinson:
Thank you, Karla. And with that, thinking about another pathway to achieving safe spaces where women and girls can enjoy technology and online spaces, we have Commissioner Inman Grant. It would be lovely to hear from you about what a regulatory body looks like and how it is disrupting harm so that women can have a transformative experience online.

Julie Inman Grant:
Well, thank you. I’d also like to play off the really important discussion that has already been had and congratulate everyone for not using the term revenge porn. When I was announced as eSafety Commissioner, I was asked to set up a revenge porn portal, and I said, yes, I will, but no, I will not call it revenge. Revenge for what? And porn? It’s not for titillating purposes. We can’t be using language that trivializes or victim-blames. So it’s so good to see that in many languages and in many contexts, image-based abuse is being adopted as a much more empowering terminology. I think that’s really important. The role that we have actually gives me a legislative mandate to coordinate all online safety activity across the Commonwealth, but also to be the educator and the regulator for online safety. Now, I think it’s really important, and we’ve heard this: there is no one-size-fits-all. So when we’re talking about prevention and education, it’s really important to establish an evidence base and understand how the most vulnerable communities are being impacted and how it might be manifesting differently. For instance, in Australia, Indigenous Australians are twice as likely to receive online hate as the broader general population. And the way Indigenous communities use technology is different. They tend to share devices, they tend to share passwords. It’s a very familial base, but that also means that there are more imposter accounts, takeovers, and lateral violence. But you also can’t say there’s a one-size-fits-all for Indigenous communities. The experiences of urban Indigenous people are different from those in rural and remote communities.
By the same token, in culturally and linguistically diverse communities, when we looked at technology-facilitated abuse, not only are they experiencing the harm and the mental and emotional distress that the everyday Australian experiences from technology-facilitated abuse; they often have low digital literacy and low technology literacy. The man controls the technology in the home. There are additional threats of deportation. There may be mistrust of police and government organizations, and just general disenfranchisement from the community. And then when we look at those with intellectual disabilities, these women are afraid to tell the truth. They’re afraid that they will not be believed. And it’s often their carers or their partners who control their technology and threaten to cut them off from their peers and their friends. And they may not have the capability of knowing where to report or where to get help. So we do have this intersectional nature, and we have to make sure that we understand that we need to co-design prevention solutions with these communities. When we get to the protection side of things, to echo the senator’s comments: because we take complaints from the public around child sexual abuse material, image-based abuse, youth-based cyberbullying, and adult cyberabuse, every single one of those forms of abuse is gendered. The average age for girls being bullied used to be 14; we’re now getting reports from girls as young as 8 or 9 years old. I’ve just issued end-user powers against a group of six 14-year-old boys who were sending rape and death threats to another 14-year-old girl. We’re helping women in Iran and Pakistan with Australian connections get their image-based abuse material taken down, because they’re at risk of honor killings, and it is a terrible shame that we don’t experience it the same way in the Australian context. And so we’re now issuing some remedial directions against some of those people.
So using these deterrent powers and naming and shaming does have an impact. We have a 90% success rate in terms of getting this content taken down, and I can tell you that for so many women who come to us, that’s what they want. They’ve been to the police and they were told, why did you take the image in the first place, why didn’t you just get off the internet? So again, we need to learn from each other so that we can develop solutions that will work in every jurisdiction and every context. My time is up, but I just want to offer that we’re willing to work with all of you to help share our learnings. Thank you.

Moderator – Alexandra Robinson:
I will turn to our last panelist now, Juan Carlos, to speak to the significance of some work that UNFPA and Derechos Digitales are doing jointly around what rights-based law reform looks like to address TFGBV, and why this is an important piece of work.

Juan Carlos Lara Galvez:
Yes, thank you very much. Thank you to UNFPA for the invitation to participate in this. I am saluting you all from Derechos Digitales. We’re an NGO working at the intersection of human rights and digital technologies in Latin America, and I speak also on behalf of my wonderful colleagues who are working in this effort to provide guidance for law reform. I’d like to begin by highlighting the fact that, as a civil society organization based in the global majority, we understand that the internet is indeed a place of risk, but it’s also a place of opportunity; that the digital realm has allowed for more spaces to give visibility to social demands, to social justice demands, and also to the demands of combating and preventing gender-based violence, especially that which is facilitated by technology. At the same time, I do wish to acknowledge the significant contributions to this panel, which give a broad summary of the amount and the diversity of the violence that women, gender non-conforming people, and LGBTQIA+ people face daily on the internet. The work that Derechos Digitales is conducting tries to address the fact that we need sensible legislation; legislative efforts and standards are being discussed right now. However, how those apply to the internet and to the complexity of the social backgrounds of these types of violence is a very complex problem, and the legislative side of it is only one part. We need to take it into consideration in the right way, to balance rights and, of course, to provide the solutions that legislation by itself is able to provide. We also need to understand that complex social issues are not going to be solved just by virtue of enacting new laws; we also need enforcement, and we need a level of understanding throughout the system that should be reflected as well.
So we need to develop legal frameworks that address technology-facilitated gender-based violence from a perspective of balance, taking into consideration that the privacy of the survivors themselves, their freedom of expression, and their access to information are also relevant for them. It’s not just a matter of the rights of the people who are committing the offenses. Because these are social problems that disproportionately affect people in situations of vulnerability, and women in the public sphere, we need to defend an intersectional approach that addresses contextual and social differences, and also that there are groups

Audience:
that are being taken care of in the legal system. So there is at least protection for children up to the age of 18, in my country at least. And then from 18, you’re a woman, and your harm is normalized, violence against you is normalized, and you’re not even considered a victim. So those are my two statements. Thank you.

Moderator – Alexandra Robinson:
Thank you very much, Angela. Very quickly, on the global stage, I think we’re lucky to have Ellen here from Young Women, but last March the entire Commission on the Status of Women was dedicated to gender and technology, and there was a really strong focus on technology-facilitated gender-based violence, integrated into global outcome documents and language. So at a global level there is certainly movement around building international language and policy, and at a national level we’re very much seeing movement around different countries implementing laws and policies. I will pass to the senator.

MARTHA LUCIA MICHER CAMARENA:
Sí, bueno. Let me tell you that I was in Beijing in 1990. No, it’s 95. 1995. And everyone told us that we were crazy, locas. They told us, you are crazy. They didn’t want us to talk about violence against women or these kinds of issues about penalties. And now I see that we have advanced a great deal, if you can say so. A very great deal. Thirty years ago, this was a topic for witches; it was a forbidden topic. And now we are very advanced. But I believe the challenge is the judges and the public prosecutors’ offices; I believe that is where we have to work. So the senator is sharing that she has seen a lot of progress in the last 30 years, and that we should definitely take that into account; it has completely shifted over 30 years, so that’s something to remember. And the problem now is the judicial system, the public ministries, and the judges; this is the most important problem, as she shared just now.

Moderator – Alexandra Robinson:
Thank you. We do have to wrap up now, but I will ask Eiko Narita, the Representative of UNFPA’s Japan office, to close us out today. I’ll stand up so you can sit here, because I think there’s a

Eiko Narita:
Well, well, thank you so much. Six is a great number here, but I thought I would be the lucky seven barging in to close this session. Excellencies, leaders, wonderful colleagues, and friends within the community, thank you for being here for this really rich conversation around Disrupt Harm: Accountability for a Safer Internet. We keep talking about the importance of multi-stakeholder conversations over the last couple of days, but what does that really mean? I think you heard it here: more specifically, we need to come together across governments, regulatory bodies, CSOs, businesses, and rights-based organizations to collaborate more efficiently, to be able to disrupt this harm over the internet that we see so frequently. I think we also have to acknowledge at this moment the role and the power of civil society organizations. It’s really important that they’re here today at IGF, especially including those led by APC and Audrey. As I was saying to Alex earlier this afternoon, they belong here; they are entities that belong here, to really make that voice heard from the ground, because that’s really important. We’re not just talking in theory. So, just going over what we discussed today: we learned about the experiences of one of the only regulatory bodies of its kind, the eSafety Commissioner of Australia; about the legislative scenario in Mexico and all the steps taken there; from feminist digital rights activists whose global work inspires all of us; and from community leaders in Iraq, one of the toughest countries in which to face this gender-based violence issue, not only online but on the ground, in person.
And I think what this event did was really put a human face on topics that are often so high up in the technology world, and that’s really important. And it’s interesting for me, because with online digital technology, accountability is unlike other crimes, like crimes against humanity or genocide, where someone is held accountable. Who do we assign accountability to? When we have things like AI, suddenly that accountability is much more difficult to pin down. So, at UNFPA, we’re working really hard, as one of our transformative goals, to end gender-based violence. And as Maria Ressa mentioned earlier, if it’s not okay to do it in person, then it’s not okay to do it online either. So we work with governments and policymakers like ourselves, and I just want to finally say that you’re all here in your own positions, whether civil society or not. I think it’s good for us to continue to use this platform as a way to interact and to continue the momentum of the movement, so that this becomes a place of exchange and of amplifying the voices of what is really important. So with that, I know I have to do this: I have to extend my gratitude to everyone who made this event possible. Special thanks go to our Honorable Senator Micher Camarena, and also Julie Inman Grant, Sherri Kraham Talabany, Juan Carlos Lara, and Karla Velasco Ramos, and also Alex, Stephanie, and Eva, our team from UNFPA, and to all of you who have come here today to make this conversation very rich. So thank you so much.

Audience

Speech speed

206 words per minute

Speech length

67 words

Speech time

20 secs

Eiko Narita

Speech speed

166 words per minute

Speech length

656 words

Speech time

236 secs

Juan Carlos Lara Galvez

Speech speed

170 words per minute

Speech length

489 words

Speech time

173 secs

Julie Inman Grant

Speech speed

166 words per minute

Speech length

779 words

Speech time

281 secs

Karla Velasco

Speech speed

167 words per minute

Speech length

739 words

Speech time

265 secs

MARTHA LUCIA MICHER CAMARENA

Speech speed

128 words per minute

Speech length

871 words

Speech time

408 secs

Moderator – Alexandra Robinson

Speech speed

163 words per minute

Speech length

1163 words

Speech time

428 secs

Sherri Kraham Talabany

Speech speed

150 words per minute

Speech length

1117 words

Speech time

448 secs

Donor Principles for the Digital Age: Turning Principles int | IGF 2023 Open Forum #157

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Augustin Willem Van Zwoll

During the discussion, the speakers conveyed their positive sentiments towards USAID and IDRC, commending them for their high-standard multi-stakeholder processes. These processes were praised for their ability to connect unconnected topics and tie them into rights agendas. This approach was seen as a commendable effort in promoting human rights and digital development programming.

Another key point raised by the speakers was the need for locally driven action in human rights-centered digital development. They emphasized the importance of adapting donor principles into more concrete tools that can be effectively utilized by local communities. The aim was to empower communities by providing them with practical and actionable frameworks to address inequalities and promote inclusive growth. To achieve this, the speakers expressed their intention to collaborate with fellow members and share best practices to investigate how donor principles can be effectively applied at the local level.

Moreover, the speakers also discussed the integration of various components, including development work, digitalization, connectivity, security, and good governance. Particularly, there was a strong emphasis on integrating cybersecurity tools and good governance for the unconnected third of the world. The need for this integration was driven by the realization that connectivity and digital development can only be truly beneficial when accompanied by secure and stable environments. Combining cybersecurity measures with good governance practices aims to ensure a safe and reliable digital environment for the unconnected population.

To summarise, the speakers exhibited a positive outlook towards the USAID and IDRC’s multi-stakeholder processes, highlighting their ability to connect diverse topics to rights agendas. They also emphasized the importance of locally driven action and the adaptation of donor principles into practical tools for communities. Furthermore, the integration of cybersecurity tools and good governance was recognized as crucial for supporting digital development and connectivity in the unconnected regions of the world.

Audience

The discussion centres around the challenge of integrating human rights principles into the operations of donor governments and foundations without imposing additional burdens on grantees and implementing partners. The main concern is to find ways to incorporate these principles effectively without causing excessive workload or duplication of effort. This is particularly important for donor agencies like USAID and IDRC.

Another key aspect highlighted in the discussion is the need for a broader understanding of digital security and resilience. It is argued that a more comprehensive understanding of these concepts would facilitate their integration into the work with grantees, going beyond emergency training for specific actors. This would ensure that digital security and resilience become embedded in the programmatic activities of organizations.

Within this context, the Ford Foundation is praised as a good example of a donor that takes a holistic approach to digital security and safety. Their approach includes building capacity in their grants, considering economic, social, and cultural aspects of digital security. This indicates a commitment to comprehensive and sustainable approaches to digital security.

The discussion also emphasises the need for more creativity in community outreach efforts. It is suggested that organizations should go beyond reaching out to the usual suspects and actively include communities that are commonly marginalized. By adopting a bottom-up approach and collaborating with private foundations, organizations can enhance their outreach efforts and have a greater impact.

Moreover, it is argued that the principles of donors should not only be used to guide their funding decisions but should also serve to facilitate the transfer of funds without imposing excessive bureaucratic measures. The objective is to ensure that funds are efficiently distributed to those in need, without unnecessary delays or obstacles.

Concerns are raised about the potential funding uncertainty following the potential withdrawal of support by Open Society Foundations. It is noted that Open Society Foundations have been major contributors to human rights and digital rights organizations, particularly in global majority countries. Smaller organizations in these countries may face challenges in securing alternative funding sources to sustain their important work.

Furthermore, the discussion highlights the existence of countries where strong civil societies are lacking, resulting in prevalent digital human rights violations. Ratilo from Botswana draws attention to this issue, advocating for financial and legal assistance to protect individuals from such violations. He shares his own experience as a member of parliament, expressing a willingness to take legal action against his government over such violations, despite the financial constraints involved.

In conclusion, the discussion revolves around finding effective ways to integrate human rights principles into the operations of donor governments and foundations. It emphasizes the importance of a comprehensive understanding of digital security and resilience, along with practical mechanisms and tools to align strategies with these principles. The potential withdrawal of support by Open Society Foundations and the need to support civil society and digital rights organizations are also highlighted. Notably, the discussion highlights the challenges faced by countries lacking strong civil societies in combating prevalent digital human rights violations.

Vera Zakem

The donor principles, which have received the official endorsement of 38 member governments of the Freedom Online Coalition, play a crucial role in establishing an international framework for donor accountability. These principles also align with the ethical obligations of donors to ensure that their actions do not cause harm. Additionally, the donor governments have committed themselves to implementing procedures that protect local partners and communities from the potential misuse of digital technologies and data.

However, despite these commitments, the annual Freedom on the Net report released by Freedom House paints a concerning picture. The report reveals that global internet freedom has experienced a decline for the 13th consecutive year. This decline raises concerns about the state of digital rights and the potential threats faced by individuals and communities worldwide.

Nevertheless, there is an argument put forth that it is possible to achieve digital transformation without compromising digital rights. This argument highlights the importance of prioritising safety and security in addressing these issues. Donor governments are believed to better fulfil their mandate when they place safety and security at the heart of their approach to digital transformation.

Overall, these findings emphasize the importance of safeguarding international assistance from digital repression and upholding digital rights throughout the process of digital transformation. This requires a comprehensive and ethical approach that takes into account the potential harm caused by the misuse of digital technologies and data.

Moderator – Lisa Poggiali

During the discussion, several important points were raised by the speakers. The breakout groups were organized around internal and external components, with each group focusing on a different question. This structure allowed for a comprehensive exploration of the various aspects and perspectives related to the topic at hand.

The inclusion of online groups in the discussions was widely supported, with a commitment made to involve them in the conversation. This recognition of the importance of diversity and inclusivity in decision-making processes aligns with the goal of reducing inequalities (SDG 10).

One of the participants, Lisa Poggiali, expressed appreciation for the idea of clarifying roles among stakeholders and partners. This notion of clearly defining responsibilities and actions of different actors is seen as valuable in fostering more effective collaboration and accountability in digital development. Poggiali also advocated for concrete commitments and actions by individual governments within their legal and strategic frameworks.

In moving forward, Poggiali suggested the development of toolkits as the next step in implementing the Freedom Online Coalition’s donor principles. These toolkits would provide specific guidance and resources for different stakeholders, including civil society, diplomats, and development actors. This approach aims to empower and equip these actors with the necessary tools to promote digital freedom and security.

Concerns were raised regarding the uncertain landscape of donor funding. The indication that Open Society Foundations may decrease their funding for various organizations has raised questions about the future financial support for initiatives and projects in the digital rights sphere. It was mentioned that statutory donors often provide larger grants, but it is more challenging to secure their support for smaller organizations.

On a positive note, the potential for partnerships between the private sector and donors in addressing digital security issues was highlighted. Private sector organisations often possess more financial resources than traditional donors, making them valuable allies in efforts to enhance digital security.

The need for greater synergy between conversations about human rights and traditional cybersecurity was emphasised. It was acknowledged that these discussions have been somewhat siloed in the past, and there is a desire to bridge this gap and integrate human rights and democratic values into cybersecurity practices. The Global Forum on Cyber Expertise (GFCE), the International Telecommunication Union (ITU), Microsoft, and the government of Sweden were mentioned as entities already working towards mainstreaming digital security with a focus on human rights and democratic values.

The discussion also shed light on the silo effect in conversations about democracy and human rights in technology. These topics have often been isolated from broader global technology discussions, limiting the potential for comprehensive and integrated approaches. The Democracy, Human Rights, and Governance Bureau at USAID and other donors have recognised this issue and are actively seeking ways to address it.

The importance of supporting civil society in countries where they lack leverage or resources to hold governments accountable for human rights violations was emphasised. In some instances, digital human rights violations occur, but there is no strong civil society to protect the interests of the community. Additionally, the cost of taking legal action against the government can be prohibitive for individual members of society. Therefore, it was argued that support should be provided to these civil society organisations to empower them to advocate for human rights and hold governments accountable.

The speakers concluded by urging donors to heed the call to support civil societies. The principles discussed throughout the conversation can serve as a foundation for addressing critical human rights issues. Collaboration and support among stakeholders and partners are crucial in achieving the goals set forth in the discussion.

Overall, the detailed discussion highlighted the need for inclusivity, clarity, and collaboration in the digital development sphere. By involving diverse voices, clarifying roles and responsibilities, and fostering partnerships, the participants aim to create a more secure and inclusive digital environment that upholds human rights and promotes sustainable development.

Shannon Green

The Donor Principles for Human Rights in the Digital Age have been developed and endorsed by 38 member governments of the Freedom Online Coalition. Shannon Green, an advocate for digital rights and freedom, applauds this development, stating that the principles serve as a crucial blueprint to protect individuals’ rights in the digital world.

Green highlights the significance of partnership between donors and various stakeholders, including government, civil society, and the private sector. She believes that donors have much to learn from their partners in different sectors and stresses the importance of collaboration in shaping the global digital ecosystem.

The principles are seen as a means to promote safer and more secure environments for partners and local communities. By putting safeguards in place, donors can ensure the equitable distribution of programmes, addressing concerns of accountability and reducing inequalities.

Green also expresses enthusiasm for the Open Government Partnership’s prioritisation of digital governance. She believes that this focus will result in improved transparency and public oversight of artificial intelligence and data-processing systems. Green cites remarkable progress made under the commitments of the Open Government Partnership.

In conclusion, Green perceives the Donor Principles for Human Rights in the Digital Age as a significant contribution to a digital future that respects rights, promotes democracy, and ensures equitable sharing of technology benefits. She urges other donor governments to make concrete commitments aligned with these principles. Overall, the principles are applauded for their potential to protect and uphold individual rights in our digital world while fostering collaboration and safeguarding the equitable distribution of technology benefits.

Moderator – Sidney Leclercq

During a panel discussion, speakers from various countries and organizations provided insights into the implementation of donor principles. The Netherlands, represented by Van Zalt, a Senior Policy Officer, expressed their commitment to incorporating these principles as they assume the chairship in 2024. Immaculate Kassait, the data protection commissioner from Kenya, emphasized the importance of localized knowledge and evidence at the Internet Governance Resource Centre (IGRC) and highlighted the significance of considering diverse perspectives and contexts when implementing these principles.

Zach Lampell, a Senior Legal Advisor for the International Center for Nonprofit Law, outlined a comprehensive framework for implementing donor principles. He stressed the need for international, domestic, and technical approaches to effectively apply these principles to ensure their adherence across different jurisdictions and organizations.

Michael Karimian, the Director for Digital Diplomacy, Asia and the Pacific, at Microsoft, provided a private sector perspective on donor principles. He recognized the relevance and importance of these principles in promoting responsible and ethical practices within the digital realm.

Closing the panel discussion, Adrian di Giovanni, the team leader on democratic and inclusive governance at IDRC, shared closing remarks to acknowledge the contributions of all participants and their valuable insights. The discussion emphasized the need for collaboration and cooperation among stakeholders to ensure the effective implementation of donor principles and to promote inclusive and democratic practices in Internet governance.

Overall, the panel discussion underscored the significance of implementing donor principles in different contexts. It highlighted the importance of localized knowledge, international collaboration, and private sector involvement for effectively implementing these principles.

Michael Karimian

The analysis of the various speakers’ viewpoints reveals several important points regarding the role of businesses and the need for certain practices in advancing the Sustainable Development Goals (SDGs). One key point is the importance of businesses upholding international human rights norms and laws. Michael, who works on Microsoft’s digital diplomacy team, emphasises the need for responsible behaviour in cyberspace based on international law. This suggests that businesses should align their practices with established legal frameworks to ensure ethical conduct and protect human rights.

Transparency and accountability are highlighted as crucial aspects of businesses implementing human rights policies and grievance mechanisms. It is argued that companies should have publicly available human rights policies that are implemented by accountable teams. Additionally, businesses are encouraged to be transparent in their practices and engage with stakeholders while undertaking human rights due diligence. This approach ensures that businesses are open and receptive to feedback, allowing them to continuously improve their practices and address any potential violations of human rights.

The need for direct connections between businesses and local civil society stakeholders is also emphasised. Transnational private sector companies are often criticised for having weak connections with local communities. Platforms like the Internet Governance Forum (IGF) and organisations like Access Now are identified as potential facilitators in establishing and strengthening these connections. This suggests that businesses should actively engage with local stakeholders to ensure their operations align with local contexts and address the needs and concerns of the communities they operate in.

The importance of building products that align with human rights and democratic values is highlighted. Donors are encouraged to support products that incorporate “human rights by design” processes. This includes considering salient human rights risks such as privacy, accessibility, and responsible AI when developing new products. By prioritising human rights and democratic values in product development, businesses can contribute to building a more ethical and inclusive technological landscape.

The analysis also recognises the challenge and potential of professional codes of ethics for individuals, organisations, and institutions. It is acknowledged that incorporating ethical codes into university curricula can be difficult. However, continuous training for staff and access to experts within the company are identified as important interim steps. This indicates the importance of ongoing education and professional development to ensure that individuals and organisations are aware of ethical considerations and have the necessary tools to address them.

In the context of digital development and the SDGs, mainstreaming digital security is crucial for low- and middle-income countries. As these countries undergo digital transformation, the threat landscape for cybersecurity expands. Efforts by organisations such as the Global Forum on Cyber Expertise (GFCE), the International Telecommunication Union (ITU), Microsoft, and the government of Sweden are mentioned as initiatives aimed at addressing this issue. By prioritising digital security in the realm of digital development, low- and middle-income countries can mitigate risks and create a safer digital environment.

Lastly, it is argued that cybersecurity should be considered in the post-2030 agenda. The analysis does not provide additional details regarding this point, but it implies that cybersecurity is a significant concern that should be addressed in future planning beyond the current 2030 agenda.

In conclusion, the analysis highlights the importance of businesses upholding international human rights norms and laws, being transparent and accountable in their practices, and engaging with local civil society stakeholders. It also emphasises the significance of building products that align with human rights and democratic values. The challenge and potential of professional codes of ethics are recognised, and the importance of mainstreaming digital security in digital development is underscored. Additionally, the analysis suggests that cybersecurity should be factored into the post-2030 agenda. These insights provide valuable considerations for businesses and policymakers in their efforts to achieve the SDGs while promoting ethical practices and protecting human rights.

Juan Carlos Lara Galvez

Juan Carlos Lara Galvez, a member of an organization working on digital rights in the global majority, specifically in Latin America, emphasises the importance of engaging with governments and donor governments. These entities provide vital funding for organizations like his that strive to safeguard digital rights. Juan Carlos strongly believes that interacting with governments and donor governments is crucial for the success and sustainability of their work.

Regarding donor principles, Juan Carlos stresses the significance of not only formulating principles but also ensuring their implementation through concrete steps and actions. He highlights that the true measure of success lies in how effectively these principles are translated into tangible outcomes. He acknowledges that while the formulation of donor principles is an inspiring beginning, it is essential to monitor their progress and evaluate their impact on the ground.

An important aspect that Juan Carlos advocates for is stakeholder involvement, participation, and the recognition of human rights in various contexts, including technological development. He is pleased to see that the donor principles acknowledge the need for coordination with stakeholders. Juan Carlos believes that donor governments should actively foster collaboration between different stakeholders to promote and protect human rights. By involving diverse perspectives and including all relevant parties, these issues can be addressed more effectively.

Furthermore, Juan Carlos emphasizes that the priorities of advocacy should come from the ground level. He believes that advocacy organizations themselves, along with the individuals actively engaged in the work, hold valuable knowledge and insights into what is truly needed on the ground. By acknowledging and understanding this knowledge, officials can better advocate for and protect human rights. Juan Carlos highlights the importance of interaction and collaboration between stakeholders as a means to foster the promotion of human rights.

In conclusion, Juan Carlos Lara Galvez underscores the significance of engaging with governments and donor governments, implementing donor principles through concrete steps and actions, involving stakeholders in decision-making processes, and recognizing the importance of advocacy priorities that originate from the ground level. His arguments are rooted in the belief that collaboration and recognition of diverse perspectives lead to more effective promotion and protection of human rights.

Zora Gouhary

Zora Gouhary plays a crucial role in supporting the formation and smooth running of breakout groups for discussions. This process involves the creation of five groups, comprising four in-person groups and one online group. Each group will have its own moderator, ensuring effective facilitation and guidance during the discussions.

The breakout sessions will focus on four key questions, encouraging participants to explore and share their perspectives. These discussions are expected to last approximately 15 minutes, allowing for focused and in-depth conversations within each group.

Furthermore, Zora Gouhary actively facilitates the process of grouping participants. Participants are given the freedom to choose their own groups, potentially leading to a more diverse and engaging experience. Zora’s involvement in this process ensures that the formation of groups is well-organised and efficient.

All contributions made during the breakout sessions will be diligently summarised for later use. This summarisation enables the effective capture and consolidation of key ideas and insights generated during the discussions. By preserving these contributions, valuable information can be used to advance the next steps of the donor principles, indicating that the breakout sessions play a significant role in the overall decision-making process.

In conclusion, Zora Gouhary’s support in forming, moderating, and summarising breakout groups enhances the effectiveness and productivity of the discussions. The inclusion of multiple in-person and online groups, along with Zora’s guidance, encourages diverse perspectives, ensuring that the breakout sessions contribute meaningfully to the advancement of the donor principles.

Adrian di Giovanni

The discussion centres around the significance of donor principles on human rights in the digital age, particularly in response to the rapid advancements in technology. These principles are essential guidelines in establishing a framework to safeguard and ensure accountability for investments in digital initiatives. They are also designed to align with commitments to human rights and democratic values.

Digital technologies are recognized as powerful tools that facilitate information sharing, self-expression, and organization. However, they also present challenges, especially for marginalized and vulnerable communities. In certain cases, these technologies can be used to deny or diminish individuals’ rights, and there is a correlation between technological changes and the decline of democratic processes.

For this reason, it is crucial for donors to take responsibility for ensuring that their actions and investments in digital initiatives do not contribute to the erosion of human rights protections and democratic institutions. This necessitates adopting the principle of ‘do no harm’ when it comes to these investments. By embracing this principle, donors can mitigate adverse consequences and ensure that their initiatives have a positive impact on society.

The donor principles on human rights in the digital age provide an indispensable framework for safeguarding and ensuring accountability in investments related to digital initiatives. These principles are particularly critical in the face of fast-paced technological advancements, which continuously challenge existing norms and regulations. By aligning with commitments to human rights and democratic values, donors can contribute to the preservation and advancement of these fundamental principles.

In conclusion, the discussion underscores the importance of donor principles on human rights in the digital age. As technology continues to rapidly evolve, it is imperative for donors to proactively ensure that their investments do not undermine human rights protections and democratic institutions. This necessitates adopting the principle of ‘do no harm’ and utilizing the donor principles as a framework for safeguarding and accountability. Ultimately, by promoting responsible and ethical practices, donors can harness the full potential of digital technologies while upholding human rights and democratic values.

Allison Peters

The United States government has taken on the chairmanship of the Freedom Online Coalition, an international organization focused on promoting human rights in the digital landscape. This year, the U.S. Department of State and the government view the Coalition as a crucial partner in safeguarding and advancing human rights in the use of digital technologies globally. The U.S. government sees the Coalition as an important platform for global collaboration and sharing of best practices.

As part of its initiative, the Freedom Online Coalition has launched donor principles that provide guidance to donor governments in supporting human rights online. These principles aim to promote and protect human rights while guarding against the potential misuse of digital technologies. Donor governments, including the U.S., play an essential role in driving these efforts by responsibly investing in digital technologies with a focus on human rights.

Allison Peters, an advocate for digital rights, emphasizes the significance of donor governments investing in digital technologies while remaining vigilant against their potential misuse. The donor principles launched by the Coalition provide crucial guidance to ensure responsible investment and prevent any negative consequences that may arise from the misuse of these technologies. Peters highlights the importance of striking a balance between promoting accessibility and innovation in the digital sphere while also safeguarding against any destabilization and infringement of human rights.

Secretary of State Antony Blinken echoes similar sentiments in his speech at the United Nations General Assembly. He emphasizes the need to govern digital technologies in partnership with those who share democratic values. This approach is essential to address the challenges and potential risks associated with the misuse of digital technologies. By working together and upholding democratic principles, governments can protect human rights, maintain stability, and ensure the responsible use of digital technologies.

In conclusion, the U.S. government’s chairmanship of the Freedom Online Coalition reflects its commitment to promoting and protecting human rights in the digital age. Through the donor principles and collaboration with like-minded partners, the government, supported by advocates such as Allison Peters, aims to foster responsible investment and prevent any negative repercussions resulting from the misuse of digital technologies. This concerted effort aligns with Secretary Blinken’s call for governing digital technologies in partnership with those who value democratic principles. With these measures in place, the international community can work towards a digital landscape that respects and upholds human rights while promoting innovation and connectivity.

Zach Lampell

After conducting the analysis, three main arguments related to civil society organizations have been identified. The first argument emphasizes the importance of collaboration between civil society organizations and donor governments in shaping foreign assistance. It is suggested that civil societies should actively engage with donor governments to provide them with comprehensive information about the realities on the ground and the existing gaps in their country’s domestic legislation. By doing so, civil society organizations can influence the allocation of foreign assistance towards addressing these gaps and supporting initiatives that align with their objectives. The evidence supporting this argument includes the advice of Zach Lampell, who advises civil societies to utilize the Universal Periodic Review (UPR) process, ensuring that the voices and concerns of civil society are heard during the decision-making process on foreign assistance.

The second argument highlights the importance of civil societies pushing for inclusion in standard-setting bodies and integrating human rights protections into internet infrastructure. This argument acknowledges the increasing role of technology and the internet in today’s world, and the need for civil society organizations to actively participate in shaping the standards and practices that govern them. It is suggested that civil societies should seek assistance from the international community in developing their technical knowledge and expertise in this field. Furthermore, working with private companies is recommended to create systems that uphold human rights. This argument promotes the idea that civil society organizations have a crucial role to play in ensuring that technology and the internet serve as tools for peace, justice, and the protection of human rights. The evidence supporting this argument highlights the need for civil societies to leverage their partnerships and engage in collaborative efforts with relevant stakeholders to drive positive change in this area.

The third argument focuses on the significance of facilitating meaningful interaction with stakeholders in the process of drafting legislation. Civil society organizations are encouraged to work closely with donor governments and their own government to create open, public processes for the drafting of legislation. By actively engaging with stakeholders, civil society organizations can ensure that their perspectives, concerns, and expertise are taken into account during the development of legal frameworks. It is stressed that these legal frameworks should uphold international human rights standards and principles. The evidence supporting this argument underlines the importance of collaboration between civil society organizations and both donor and national governments to develop effective and inclusive legislative processes.

Overall, these three arguments analyzed in the research showcase the vital role civil society organizations can play in shaping policies and practices in various sectors. By collaborating with donor governments, pushing for inclusion in standard-setting bodies, and facilitating stakeholder engagement in legislation drafting processes, civil society organizations can contribute to the development of policies and initiatives that align with their objectives and promote peace, justice, and the protection of human rights. This analysis highlights the need for civil societies to actively utilize various platforms and opportunities to advocate for positive change and utilize their expertise to shape a better future for their respective communities and society as a whole.

Nele Leosk

Estonia has demonstrated the transformative potential of technology in various sectors. For the past 15 years, digitalisation has been a top priority for the country, allowing it to shift from being a recipient of aid to becoming a donor. This focus on digitalisation has played a crucial role in shaping Estonia’s development, economic policies, trade policies, and even its tech diplomacy efforts.

The integration of digital tools and processes has enabled Estonia to streamline its government services, making them more efficient and accessible for its citizens. Services such as e-residency, e-tax, and e-voting have facilitated a seamless and transparent democratic system. By placing digitalisation at the core of its development strategy, Estonia has successfully established a digital society that promotes democracy and empowers its citizens.

Moreover, Estonia has shown its commitment to supporting other nations in their development efforts, particularly through capacity building. A notable example is its 14-year partnership with Ukraine, where Estonia has helped it build a democratic system. Ukraine’s progress in this area has been remarkable, surpassing that of many other countries. This highlights Estonia’s belief that development assistance should focus on enabling countries to develop their own capacities, sometimes even exceeding those of the donors.

Estonia’s approach to development cooperation is characterized by three main priorities: gender equality, collaboration with the private sector, and openness. Gender equality is consistently integrated into all policies and action plans, including tech diplomacy. The country aims to bridge the gender divide and ensure equal opportunities for all. Additionally, Estonia values the use of open-source principles in its development cooperation initiatives, ensuring control and transparency while avoiding dependencies.

Furthermore, Estonia’s development agency, which is only two years old, emphasizes partnerships with private companies and other organizations. This collaboration allows for a broader range of expertise and resources, contributing to national development goals. By engaging the private sector, Estonia harnesses innovation and leverages its potential for driving economic growth and sustainable development.

To conclude, Estonia’s success story exemplifies the positive impact of technology in building democracy, enhancing the economy, rebuilding trust, and establishing transparency and openness. Digitalisation has become a pivotal driver in Estonia’s development strategies, enabling the country to shift from an aid recipient to a donor. Estonia’s commitment to capacity building, gender equality, collaboration with the private sector, and openness further strengthens its approach to development cooperation. Overall, Estonia serves as a model for other nations, showcasing the possibilities and benefits that can be achieved by harnessing the power of innovation and digitalisation.

Immaculate Kassait

In the era of digitisation, the importance of data protection is emphasised, as highlighted by the arguments presented. Kenya has taken steps to address this issue by establishing a legal and institutional framework for data protection. The Office of Data Protection in Kenya has enforced six penalty notices related to the misuse of personal data, demonstrating its commitment to safeguarding individuals’ information. The 2,761 complaints received regarding data protection issues further indicate widespread public recognition of the need for such measures.

However, challenges also exist in the realm of data protection. The newly established Office of Data Protection in Kenya faces operational and resource constraints, hindering their ability to carry out their responsibilities effectively. Additionally, there are concerns regarding the existing legal frameworks which may not adequately address the complexities posed by multinational companies operating in Kenya. The rapid progress of technological advancements, such as Artificial Intelligence, also presents additional challenges as the potential risks and implications on data protection need to be carefully navigated.

To overcome these challenges, collaboration and donor support are seen as crucial factors. Sharing expertise and best practices amongst stakeholders can enhance the regulation of data processing, allowing for a coordinated and effective approach to data protection. Donor support can play a vital role in aligning country-specific legal frameworks with international standards and providing the necessary resources for capacity building. This collaborative effort would enable Kenya to strengthen its data governance mechanisms and better protect individuals’ data.

In conclusion, the arguments presented highlight the significance of data protection in the digital age. While Kenya has made strides in establishing a legal framework and enforcing penalties for data misuse, challenges such as resource constraints, inadequate legal frameworks, and technological advancements remain. However, through collaboration and donor support, it is possible to address these challenges and enhance data governance practices. By doing so, Kenya can ensure the protection of personal data and align with global efforts towards sustainable development.

Session transcript

Vera Zakem:
I know it’s also early morning, but we really, really are grateful because we just really think this is such a momentous and exciting opportunity for us to roll out these principles and also what they mean for strengthening a rights-respecting digital ecosystem. So again, I’m delighted to welcome you to this event. I am pleased to announce that as of last week, the donor principles have been officially endorsed by 38 member governments of the Freedom Online Coalition, some of whom you will hear today. The donor principles establish an international framework for donor accountability and cooperation on digital issues that aligns with donor ethical obligations to do no harm. Earlier this month, Freedom House released the annual Freedom on the Net report, a survey and analysis of internet freedom around the world, and we see that global internet freedom has declined for the 13th consecutive year. The donor principles commit donor governments, including the United States, to reverse the trend. They call on donors to safeguard international assistance from digital repression by establishing procedures to protect local partners and communities from the potential misuse of digital technologies and data. Over the past two decades, USAID and other donors have supported many digital initiatives around the world with, dare I say, positive outcomes. We have assisted countries to digitize their public service delivery systems, from healthcare to education to participatory budgeting. We’ve also supported young entrepreneurs to develop financial technology, or FinTech, applications that have created new economic opportunities for those who have been excluded from traditional economic systems. At the same time, we have witnessed how governments have used digital data to target and threaten journalists and activists in Central America. We have seen how FinTech companies have weaponized the personal data of poor people through predatory digital lending practices.
We have learned how consulting firms have exploited citizens’ personal data to influence their voting behavior in ways that undermine freedom of thought and expression and fundamentally weaken public trust in democratic institutions. Such examples are common and are cause for concern, but digital transformation, we know, does not have to come at the expense of digital rights. As donor governments, we can best fulfill our mandate when we put safety and security at the heart of these issues and the values of democracy, respect for human rights and accountability really at the heart and the center of our work. Suffice to say, I’m very pleased to be here with colleagues and partners from governments, civil society and the private sector who have demonstrated their commitment to these values. I believe, and USAID believes, it’s only through this multi-stakeholder process and multilateral collaboration that we can fulfill the promise and the intent of these principles. I certainly want to thank the Freedom Online Coalition Support Unit who’ve made this event possible and the donor principles themselves. I also thank our panelists in the room and online. Where is Joost? I don’t think it’s, right here, thank you. Joost from the Netherlands. USAID, of course, is very much looking forward to working with you as the Netherlands takes chairmanship of the Freedom Online Coalition next year. Estonia’s digital ambassador that we have here, Nele Leosk, again, congratulations to you for hosting the phenomenal Tallinn Digital Summit and Open Government Partnership Summit last month in Tallinn. Kenya’s commissioner, online, okay, good. Kenya’s commissioner for data protection, Immaculate Kassait. We commend you for the work that you are doing to keep Kenyans safe and look forward to partnering with you on digital governance as Kenya begins co-chairmanship of the OGP, Open Government Partnership, Steering Committee.
And from the FOC Advisory Network, Juan Carlos Lara, the executive director of Derechos Digitales, and Zach Lampell, senior legal advisor from the International Center for Not-for-Profit Law. We deeply appreciate your support in drafting the donor principles and, of course, very much look forward to working with you. And, of course, Michael Karimian from Microsoft. We really appreciate your company’s commitment to democratic values and respect for human rights. I also want to express especially deep gratitude to our Canadian colleagues from the International Development Research Centre, who have co-chaired the Freedom Online Coalition’s funding coordination group with us this year and co-led the donor principles drafting and negotiating process, so huge thanks to you. The donor principles reflect the U.S. and Canada’s shared commitment to digital inclusion. With the support of the FOC Support Unit and the U.S. Department of State, USAID and IDRC co-led the first ever public consultation process for an FOC deliverable, which yielded inputs and insights from civil society, academia, the private sector, and various stakeholders from around the world. As a result, the principles better address the needs and desires of the communities that they seek to serve. And finally, USAID at large is so pleased to be here in partnership with our colleagues from the Department of State’s Bureau for Democracy, Human Rights, and Labor. It goes without saying that without your collaboration on everything with the Freedom Online Coalition, these principles would not be possible, so I am especially delighted to turn it over to the Deputy Assistant Secretary at DRL, Allison Peters, a dear friend and colleague who has been working hand in hand with all of us to bring these principles to life. Over to you.

Allison Peters:
Thanks so much, Vera, and especially to Lisa for your tireless leadership in getting these principles over the finish line. It is not ever easy negotiating anything in a multilateral, multi-stakeholder process, and we really appreciate your leadership. And also to Sidney and IDRC for your strong partnership in this effort. Thanks all for joining us. We know it’s an early morning. We hope everyone is well caffeinated, but this is a really, really momentous and exciting occasion to launch these donor principles, so we’re grateful that you took the time to join us this morning. The Department of State and the U.S. government as a whole view the Freedom Online Coalition as a key, indispensable partner in our efforts to promote and protect human rights in the use of digital technologies globally. Pretty much every issue set that we have heard discussed here at IGF is a core priority of the work that we’re doing with the other governments in the Freedom Online Coalition to promote human rights online. As the chair this year of the Freedom Online Coalition, the United States made a firm commitment to work within the FOC and with our partners and allies to promote and protect fundamental freedoms, counter the rise of digital authoritarianism and the misuse of digital technologies, advance norms, safeguards, and principles for artificial intelligence based on human rights, and support ongoing initiatives to promote safe online spaces for marginalized and vulnerable groups. As we heard from our Secretary of State, Antony Blinken, at the UN General Assembly, which feels like 100 years ago now but was just a couple of weeks ago, we are delivering. These principles launching today really translate these priorities into action, giving donor governments concrete guidance to hold fast to our commitment to invest in digital technologies only when it is possible to protect against their potential misuse.
They reinforce the Freedom Online Coalition’s shared vision to enable individual dignity and economic prosperity. Technology should be harnessed in a manner that is open, sustainable, secure, and respectful of democratic values and human rights. And these donor principles will help us take one step in that direction. They also demonstrate our shared commitment to advancing the UN’s 2030 Sustainable Development Agenda as we look to harness the power of digital technologies in a rights-respecting manner to advance our shared goals, from achieving gender equality to promoting inclusive and peaceful societies. As our Secretary of State stated at the UN General Assembly, we can develop the best technologies in the world. But if we haven’t determined how to govern them in partnership with those who share our values, these technologies are likely to be misused for repressive or destabilizing purposes, making our communities less peaceful, less prosperous, less secure, and unfortunately more prone to undermining human rights. They’re also less likely to be leveraged for advancing societal progress around the globe. So again, I thank you all for joining us today. We have an exciting panel with some key partners, and we’re thrilled to be joined by the government of the Netherlands, to whom we are turning over the chairship of the FOC next year. But we’re really thrilled to also join you in the breakout sessions to hear your thoughts on these donor principles and how we can move them forward through the FOC. So thank you again. And thank you again to Lisa and Vera for your leadership.

Moderator – Lisa Poggiali:
Thank you, Alison.

Moderator – Sidney Leclercq:
Thank you very much. And thank you to the US, really, for the commitment and dedication in getting those principles done. I think that was an important process and a decisive one. But you already made the transition, actually, to the first speaker in the panel, from the Netherlands. And I’ll turn over to you, I guess. Yes, Van Zwoll, who is Senior Policy Officer for Human Rights and Political and Legal Affairs in the Netherlands. As you take over the chairship in 2024, it’ll be interesting to hear from you the intention to implement the donor principles during that chairship. Over to you.

Augustin Willem Van Zwoll:
Thank you, Sidney. First of all, thank you USAID and thank you IDRC for really bringing something new to the table here at the FOC. I think it’s great that you were able to create these principles, not only tying together all these different important topics that we’ve been hearing about the last few days, really like connecting the unconnected, but tying that into the rights agendas that we have been discussing in our little side sessions the last few days. And I think it’s an important bridge, not only indeed in reaching the development goals, but it will also be an important step for us, at least from a policy side, to get where we need to be in order to have fruitful discussions reaching towards the GDC and the WSIS. So I think it’s a great and important step, not only from a digitalization perspective or an aid perspective, but also really connecting it to the more human rights-related discussions that we are having as well. And also thank you for setting a really high bar that will be very difficult to reach in the form of having an open multi-stakeholder process. I mean, you’ve done an excellent job in that and I really want to congratulate you. I would rather have had you do it after our chairship, because it will be so challenging to work to that high standard, but it’s a great inspiration for us and we’ll really try to continue that line of work next year. Under the guidance of USAID and IDRC, of course in partnership with the U.S. State Department, you really set up these important donor principles that encompass the basic conditions for human rights-centered digital development programming. However, at least for us, this would only be the beginning.
I mean, turning these principles into locally driven action that truly serves the target communities that we support, within the context of our very diverse coalition, that is really the big task that still lies ahead of us. During our upcoming chairship, the Netherlands therefore wants to see how we can adapt these principles into even more concrete tools that can be used by our community to practice and integrate into the activities that we support. This can only be done through cooperation between our members, in close cooperation with our local and implementing partners, whose needs and challenges are central to any solution. We will therefore also ask all of our Freedom Online member states to share their best practices, either as a donor or a recipient. Given the multi-regional build-up of the coalition, this would be a great chance to see it from both sides. Also for the Netherlands, these principles will be key and a great way of connecting our development work to the agenda that we have on digitalization, tying it to connectivity, security and good governance. Because we see sometimes that we have these high-level discussions at the OEWG that are very difficult, and we see that it’s a certain set of countries that are very active in that, and we need to reach out and make sure that the last third of the world that’s unconnected will be able to connect. But they will also have to have the cybersecurity tools to keep that structure secure, and then of course a good set of human rights principles to govern that structure, as Allison pointed out in much more detail. Thank you for that. I think I will leave it at this. Thank you so much.

Moderator – Lisa Poggiali:
Thank you so much, Joost, and I have no doubt that you will be able to even exceed the work that we have done this year in your FOC chairship next year. So we look forward to partnering with you. No pressure. So now I’m very pleased to introduce Nele Leosk, the Digital Ambassador at Large from Estonia. Nele, over to you.

Nele Leosk:
Thank you. Thank you so much, and I’m glad to be here at this very early hour and glad to see so many other people here. Actually, last month we celebrated a little birthday in our Ministry of Foreign Affairs, because 25 years had passed since Estonia moved from the recipient side to become a donor. So we have both experiences, and perhaps I will just complement these principles with some practical takeaways from our 25 years, out of which I would say 15 digital has been a priority. And I know that we have been discussing here over the past days, and also today, quite a bit of everything that can go wrong with technology. And in a way I believe it’s also increasingly trendy to talk about; it’s of course a very timely and very much needed conversation. But it seems to me that we are at the same time forgetting about everything that technology can bring. And in this sense Estonia, I believe, is a good reminder that technology actually can be used to build democracy. Technology can be used to enhance the economy, to rebuild trust, to build openness and transparency, and Estonia has done all of this. And this has, I believe, also been the reason for interest in our experience. Because it’s not about digitalization. It’s not to become the world leader in digital services. It’s really about democratizing your state and the opportunities it gives. So for us, digitalization and these principles that we’re also talking about here have actually been horizontally integrated into different programs. And not only, I would say, our development or economic policy or trade policies, but currently also our tech diplomacy. So these principles that we are talking about here somehow need to be implemented, because just talking about the principles will not get us very far. And actually, digitalization through development cooperation has been one of these very practical ways in which we build a democratic state.
And there were some examples here in these principles, for example data governance and management. So it is clear what it takes to introduce a data governance or management system. In Estonia, for example, we have this famous system called X-Road. It’s our interoperability layer that allows us to exchange data. In order for this to work, you need to create an ecosystem and a supporting legal framework and policies. You must have an Access to Information Act, you must have open standards, and so forth. So this, in a way, creates this, I would say, democratic ecosystem. But one other aspect that we were discussing yesterday evening at the party is actually that often we forget that development is not about us. In order to really reach these principles, it is actually about the receiving side as well. So we really need to put the emphasis on building the capacity of the others to our level and even beyond. And we have a very good, practical example from long cooperation with Ukraine. Over the past 14 years, we have been working closely in supporting Ukraine to build their democratic system. And we can see now that in many areas, they may also exceed all of us in this room. So it’s really about the other side, and not that much about us, in this journey. I believe my time is almost finished, but I wanted to bring up maybe just quickly three main priorities for us that are also horizontal issues. One of them is the gender divide, which is integrated into all our policies and action plans and is also a priority for tech diplomacy and, in a way, my own work. The other is working with the private sector. Our development agency is only two years old, so it has been mainly through partnership with private companies and other organizations that we carry out our policies. And the third is actually about openness, and that also translates to technological openness.
So we support open source in our development cooperation not to get anybody hooked and have also more control and transparency over these processes. So this is maybe very shortly about how we have approached it.

Moderator – Sidney Leclercq:
Thank you very much, Ambassador. And thanks for a great reminder of the democratic potential, the importance of open source, and of building capacity. You were saying that it’s really early; I’m afraid that for our next speaker, who is based in Kenya, it’s very late. But it’s even more my pleasure to introduce and to welcome Immaculate Kassait, the Data Protection Commissioner from Kenya. So Commissioner, over to you.

Immaculate Kassait:
Thank you. I hope you can hear me. You can hear me? Perfectly. All right. It’s very early. It’s actually 3 AM in Kenya. So thank you, Ambassador, and my fellow panelists from the USA and the Netherlands, for the opportunity to participate in this panel. I’ll try as much as possible to summarize. I think it’s a very exciting moment to be discussing key principles for donors in this digital era when we are discussing governance. And I liked what was spoken earlier, that if we are quickly evolving into digitization and are not talking about governance, this could lead to misuse and destabilize many economies. Of course, from a data protection perspective, we are often seen as the people who hold back development and interfere with innovations, because we are put there to ask questions as far as data protection is concerned. As an office, just a quick one, this office has been there for three years now. It was established in 2020, but the act came into force in 2019. And really, our role is to regulate the processing of personal data based on certain principles, which I would say are very common across all data protection authorities. Our task is to make sure that when we talk about the right to privacy, it’s not just a right we speak about; it’s a right that is actually implemented by the Kenyan government, one that safeguards the social justice orientation of the society. On top of that, as an institution, we have been mandated to establish a legal and institutional framework and provide for the rights of the data subject. Some of the key things that we’ve been able to achieve in this short time, of course, are guidance notes as an office. We are members of three international bodies. We will be hosting the Network of Data Protection Authorities in the coming year. We have established a register of data controllers, and we have a strategic plan.
What I’d like to just speak about is that we have had 2,761 complaints and have actually enforced almost six penalty notices. The recent one, which was about a week ago, had to do with people using personal photos of children, using people’s photos taken in social places, and sending unsolicited messages. And that comes to the point that many times, in the process of marketing, many controllers are not paying attention to the fact that this is personal information and they must be held to account. Of course, as an office, there are challenges, and I’m happy we are having this conversation. We find ourselves in a situation where we don’t have adequate laws in some cases; in the context of when we developed the Data Protection Act, we did not anticipate we’d have multinationals that have not registered in Kenya. Of course, being a new office, resources are never adequate, and with the advancement of technology, we are seeing AI as one of the issues. But coming now to the donor principles for human rights: what does it mean for us to commit to doing no harm in the digital age while enhancing technology and also increasing donors’ accountability? I see several areas of collaboration. When we say donor support and a country need to be aligned in terms of the legal framework, I see the need for support in reviewing current legal frameworks, and for those countries that don’t have an existing data protection framework, donors need to help them, so that we’re not leaving other countries behind as far as data governance is concerned. Sharing expertise: some countries are ahead, and I think it would be important to collaborate and come up with some guidance notes as far as this is concerned. We also need to leverage the government agenda on technology.
In our case, as a country, we are digitalizing over 5,000 government services, and there is a need for leveraging what others have done. Sharing best practices, of course: in terms of collaboration with the private sector, we see an opportunity there to facilitate partnerships between the private sector and recipient countries to encourage rights-based approaches, and I would see this also as more of the data protection by default and by design. Capacity building is another area for collaboration and technical support, supporting training programs. Of course, when it comes to fostering coordination, I see joint advocacy efforts as one of the things that we can do. On supporting the growth of rights-respecting technology as a principle, I see areas of collaboration in facilitating training initiatives, advocating for professional codes of ethics, and of course facilitating the exchange of information. When it comes to prioritizing digital security, there is the need to provide resources and, of course, capacity building. I don’t want to take too much of the time. I want to thank you once again for the opportunity, and I really welcome the conversation around the principles. Their being launched here is a really big milestone for donor countries and for partners, especially in this era of technology, where we are now being held to account and holding other people to account, so that it’s not just development, it’s not just technology for the sake of it, it’s technology that adheres to human rights. Thank you.

Moderator – Lisa Poggiali:
Thank you so much, Commissioner Kassait, for those remarks. I think you provided a really nice bridge for us to start thinking about implementation by offering some concrete ideas of how we could partner with countries around the world, not only donor countries, but all countries. We really appreciate that, and appreciate your remarks and the work that you do. So I wanted to now turn it over to Juan Carlos Lara, who is the Executive Director of Derechos Digitales and who has played an instrumental role in the drafting process for these principles. Juan Carlos.

Juan Carlos Lara Galvez:
Thank you, Lisa. Good morning, everyone. And good morning, evening, afternoon to people attending online. I wish to first introduce myself. I am a member of an organization that works on digital rights in the global majority, specifically in Latin America. And for us, it’s very important to interact with governments, and with donor governments especially, considering the role that they have in funding much of the work that organizations like mine do in the global majority, work that depends on the support that we can obtain from different funders. In that regard, it’s also heartening to hear so much about having countries be accountable, about principles that will lead to action, and other language that represents an intention to turn all the good intentions that countries often present into concrete steps, into concrete things. The donor principles in that regard are the product of an interaction, of an exchange of ideas and views, that in many ways represented what our priorities are as civil society in the global majority, understanding as well that we need support not just to conduct work that we like, but also to create change, to promote social justice, and to generate conditions for responsible development that is respectful of human rights and centered around people. Before I close my remarks, I wish to recognize those efforts and at the same time recognize that whether this turns out to be a fruitful step is going to be shown by the implementation process. As much as we would like to recognize this as the beginning of something very inspiring, we also need to see how it translates into action.
And to the question about the opportunities that this presents for advocacy organizations like mine, it’s also very positive to see that the principles recognize the need for coordination with stakeholders, the need to allow the participation of different people and different stakeholders, and the recognition of human rights in issues such as technological development. So I think that one of the most important things that we can see here is that when we put the priorities of states into action, the priorities that advocacy organizations need should come from those organizations and from the ground, from the people that are doing this work. Donor governments and donor institutions need to recognize that that is where the knowledge comes from: from what is needed on the ground. And the position of officials is better informed when they have that type of interaction and when they can foster collaboration between different stakeholders in order to promote human rights. So thank you.

Moderator – Sidney Leclercq:
Thanks very much, Juan Carlos, and we cannot agree more on the importance of localized knowledge and evidence, at IDRC for sure. I’ll now turn to Zach Lampell, Senior Legal Advisor for the International Center for Not-for-Profit Law, who is online, and I hope, yes?

Zach Lampell:
Yes. Thank you, Sidney. Can you hear me okay? Perfectly. Great. Well, thank you all so very much. My apologies that I could not be with you all in person in Kyoto, but I know and trust you are all having a great time. Before I begin my very brief remarks, I wanted to quickly introduce myself. I’m Zach Lampell, Senior Legal Advisor with the International Center for Not-for-Profit Law, where I lead our global digital rights programming, and where we work in over 100 countries to ensure that the legal framework supports civil society and promotes and protects the freedoms of expression, association, and assembly, and the right to privacy. I want to also thank the whole Freedom Online Coalition, the support unit, and the member states, and especially the U.S. government, USAID, and the U.S. State Department for their leadership in developing these principles, as well as Sidney and his team at IDRC, the co-authors and co-leaders of the principles. I’d also like to thank the Funding Coordination Group, the rest of the drafting committee, and finally, everyone who provided feedback, comments, and suggestions, especially all of those from civil society organizations in the global majority. I’d like to now briefly present three ways in which civil society can use the donor principles for advocacy. First, internationally. I would encourage all civil society organizations to collaborate with donor governments as those governments develop their strategic priorities and institutionalize the processes that shape their foreign assistance. Like Juan Carlos was saying, let them know what you’re seeing on the ground. Let these donor governments know what has worked, what concerns you have, and most importantly, articulate what gaps there are in domestic legislation. And finally, utilize existing processes like the UPR to obtain firm commitments from your governments to improve the legal framework. So that’s internationally.
Domestically, work with donor governments to encourage and facilitate real, meaningful, multi-stakeholder, open, public processes for drafting legislation. Be sure to reference all of the international legal obligations and frameworks on which these principles are based. And work with both your governments and the donor community to ensure that these principles and international human rights standards are being upheld in the legal framework. Finally, technically, and this is one of the principles, but work to push for inclusion into standard-setting bodies. If you or your organization or your partners do not have the knowledge base to effectively engage with these standard-setting bodies, reach out to the international community, donor governments, international NGOs, so you can develop and build your knowledge base. So that way you can impact the work of these technical bodies. Work to ensure that human rights protections are built into the infrastructure of the internet. Work with private companies to help create products, services, and design systems that place human rights at the forefront. So again, internationally, domestically, and technically, there are ways for civil society to use these principles to advocate for an improved legal framework, improved product and services, and an improved internet infrastructure, all of which we believe will lead to the change and support, promotion, and protection of democratic principles that we all seek. Thank you again so much, and I look forward to rolling out these principles and working with all of you then. Thank you so much.

Moderator – Sidney Leclercq:
Thanks so much, Zach. And you’ve probably given us the structure for implementing: internationally, domestically, and technically. So thanks so much. Let me turn to Michael Karimian, Director for Digital Diplomacy, Asia and the Pacific, from Microsoft, to also provide a private sector perspective on the donor principles.

Michael Karimian:
Thank you very much, Sidney, and indeed a private sector perspective, not necessarily that of the whole private sector, but thank you to FOC, USAID, and IDRC for the opportunity to join today’s discussion. It’s very nice to follow on from Zach. Zach and I did some work together a few years ago, and I have a lot of respect for him and his organization. I work on Microsoft’s digital diplomacy team, which seeks to advance responsible state and non-state behavior in cyberspace, grounded in international law and norms, including the international human rights regime. I previously worked on Microsoft’s human rights team, which seeks to uphold Microsoft’s corporate responsibility to respect human rights, grounded in the United Nations Guiding Principles on Business and Human Rights, and it’s great to see the UNGPs accurately integrated throughout the principles here. Indeed, as Sidney mentioned, I’ll offer a quick reflection on the current application of the principles and some of the ways to move forward where there are perhaps some gaps in application. So looking particularly at principle three, within that there’s a reference that donor governments should also emphasize the need for industry to remain accountable and to address critical feedback from civil society and human rights defenders. I think firstly that requires that donors are very specific in either encouraging or even mandating that companies uphold the second pillar of the UN Guiding Principles on Business and Human Rights, namely by having a human rights policy in place, signed off at the most senior level, publicly available, and implemented by accountable teams, and with the right degree of transparency. And that of course should include a commitment to respect the work of human rights defenders. Additionally, that also requires both states and companies to uphold the third pillar of the United Nations Guiding Principles, which is access to remedy, and you do that through grievance mechanisms, both judicial grievance mechanisms and non-judicial grievance mechanisms. 
So that’s a mix of mechanisms coming from the state, from law enforcement, and from regulatory bodies, as well as the more informal non-judicial grievance mechanisms which can be implemented by companies, civil society, or other actors. And again, companies should be expected to respect and participate in such processes and not to hinder them. There is an important recognition in the principles around the fact that transnational private sector companies often have weak direct connections to local civil society stakeholders. This is a huge challenge. This is where platforms such as the IGF come into play, as well as regional IGFs and local IGFs. I would also call out organizations which have tremendous civil society networks around the world, such as Access Now, and I’m pleased to see Brett Solomon is in the room. Access Now is an incredible organization with a tremendous network, which has certainly helped Microsoft to be better at having those direct connections with civil society organizations in global majority countries. Additionally, in the principles, there’s a reference that donors can and should hold private sector partners accountable. This absolutely goes back to the fact that donors, I think, should have a high expectation that companies are undertaking human rights due diligence so that actual inclusive, sustainable, and rights-respecting business investments are being made. And human rights due diligence requires that companies are undertaking ongoing practices which are transparent. They must include stakeholders, including civil society, to assess and address actual and potential human rights impacts. Turning quickly to principle seven, which is to support the growth of a rights-respecting technology workforce. Within there, there’s a reference that donors should encourage these products to be built in alignment with respect for human rights and democratic values, or, I should say, supporting inclusive human-rights-by-design processes. 
I would actually take that a step further and make sure that there’s a focus on so-called salient human rights, that is, the human rights that are most at risk from business activities. And that’s generally understood to be the human rights risks where there’s the highest degree of scale, scope, and remediability challenges posed by those business practices. And for most technology companies, that means privacy by design, accessibility by design, and increasingly responsible AI by design. And that requires having policies in place, accountable teams in place, and again, the right degree of transparency. Lastly, there’s mention in principle seven of a professional code of ethics for individuals, organizations, and institutions. This is a challenge. Many have looked at this before. So for example, can you have software engineers with a code of conduct that is taught in university courses? The challenge there is that in those university degrees, especially at the top universities, students have very little scope for optional courses. The mandatory courses are already very full, and so it’s hard to add anything into that curriculum. But actually, you don’t need to let the perfect be the enemy of the good. There are lots of interim steps. So donors should make sure that companies have the right standards of business conduct in place, and that there is the right degree of training for staff throughout the company so that they understand what their responsibilities are. They understand the structures that are in place to seek additional guidance if they need to. They should also have access to additional training if they want it. And most importantly, they should know where to go within the company for additional expertise on these subject matters. I’ll stop there and very much look forward to the breakout sessions.

Moderator – Lisa Poggiali:
Thank you so much, Michael. And what a rich set of remarks for us to think about when we start the implementation conversation in a minute. Thanks so much for that. So before we move into the second portion of our event, we will hear from, last but not least, Shannon Green, who is the Assistant to the Administrator for the Bureau for Democracy, Human Rights, and Governance at USAID. And she will be joining us, as you can see, via video. Thank you.

Shannon Green:
Hello. I am delighted to join you to celebrate the launch of the Donor Principles for Human Rights in the Digital Age. And I commend the 38 member governments of the Freedom Online Coalition who have endorsed these principles and supported their development. These principles provide an important blueprint to protect and uphold the rights of individuals in our digital world. They commit donor governments, including my own agency, to hold ourselves accountable for the role we play in shaping the global digital ecosystem. The principles encourage donors to examine our own internal structures and processes and introduce safeguards for all programs. These safeguards will help ensure that our programs are equitably distributed. They will also promote safer and more secure environments for partners and local communities. Donors have much to learn from our partners around the world in government, civil society, and the private sector. You heard earlier from Commissioner Kassait, who has been leading Kenya’s Office of Data Protection. These authorities are the safeguards that protect us from the darker aspects of the digital age. It is more important than ever that donors partner with them in their critical mission to better protect the public and increase transparency. USAID is also energized by the Open Government Partnership’s, or OGP’s, recent announcement of digital governance as a priority issue. This will strengthen the transparency of public oversight of artificial intelligence and data processing systems. We have seen remarkable progress under OGP commitments, and in this spirit, on behalf of USAID, I am pleased to issue a call to action for other donor governments to join USAID in making concrete commitments aligned with the donor principles. Internally, donors can make commitments to integrate human rights impact assessments into their program design and evaluation processes. 
They can also allocate dedicated funding to support partners and local communities’ digital security. Externally, donors can better support partner countries to develop and implement strong legal and regulatory frameworks, or equip oversight bodies to better protect the public and hold powerful actors accountable. Civil society and tech companies, large and small, should consider how they can most effectively use the principles to encourage responsible donor behavior. For more information, please visit the Freedom Online Coalition’s website. We look forward to hearing what concrete actions donors commit to at the Third Summit for Democracy in the Republic of Korea, where the United States government plans to launch its own efforts. The donor principles for human rights in the digital age help contribute to a digital future that respects rights, promotes democracy, and ensures that the benefits of technology are shared by all. Let us act with determination and vision to fulfill its promise.

Moderator – Lisa Poggiali:
Thank you, Shannon. And with that, we will conclude the official launch of the Donor Principles for Human Rights in the Digital Age, and we will now move into breakout groups. So, I’m going to invite Zora, who is over there in the corner, to facilitate the process of getting all of you into breakout groups. There won’t be too much movement. And then maybe I’ll also just say, if you have not signed in via the sign-in sheet that is going around, we will send it around again, and then we’ll leave it on the table right next to the entrance and exit so that we can continue to keep in touch around implementation of the donor principles after this event. Zora.

Zora Gouhary:
Hello. Can you hear me? Thank you so much, everyone, for joining us. As Lisa said, we will be going ahead with our breakout groups. So what we’re going to do is we’re going to be breaking out into five groups: four groups here physically, and then everyone who’s joining us online will have their own breakout group and their own moderator. So I would like to ask everyone who’s in the room just to move to the four different corners of the room. Make it your own choice; I’m not going to be separating you, so just direct yourself to one of the corners. I will be going around, and we have about four questions, which you can see now on the screen. I’ll hand over to Lisa in a bit to explain the questions. But one final thing from me is that we’ll have about 15 minutes for the breakout groups, after which we’ll come back into plenary just to quickly discuss what has been discussed in the breakout groups. We have our own facilitators who will be taking your contributions, after which we will be summarizing them and making sure that we use them towards the next steps following the launch of the donor principles. And I think that’s it for me. Thanks.

Moderator – Lisa Poggiali:
Hey, thanks, Zora. So just to provide a little bit of structure, as you heard many of our panelists note, there is sort of an internal component to the donor principles, and there is an external component. So on the one hand, we’re thinking about what donor governments can do internally in terms of their own processes and structures to uphold the donor principles. And then also we’re thinking about what donors can support externally and programmatically in order to uphold the donor principles. So we’ve structured each of the questions around that internal and external component. We’re going to run this kind of like a speed dating situation. So each group will have a few minutes to focus on each question. The group will remain the same and will just move to focus on a different question every few minutes; we’ll announce a loud buzz or something to indicate when to move. And so you’ll get to have a sort of cohesive conversation across the entire period of the breakout group. You can stay with your group and pick up on conversations that you had as the questions move along. And I think that is it. Anything? OK, and to our online group, we will do our very best to incorporate you in the discussion afterwards. And so don’t think we’re forgetting about you. We value that you are there as well. So let’s break out into groups. And if everyone can kind of migrate to the corner that you’re closest to, we’d appreciate that.

Audience:
[Breakout group discussions; the audio is largely inaudible.] I’m just wondering if you can speak to the outcome of any funding that’s going to be given to government as a result of civil society input, rather than necessarily a portion of the funding that’s going to be given to government, and whether that’s going to be effective in the long term? Thank you so much. 
I think the accountability, and the potential for accountability, means making sure that the structures that are needed in the country are in place to make that happen, but also the work that’s happening across the whole system, the structures in place, in order to do that. Hello again. I’m just asking everyone if they can move to the next question, if they haven’t already. May I ask everyone just to move to the last question? We have the last four minutes. Thank you very much. I will just ask everyone to go back to their seats so we can come back to plenary. Everyone can stay in the same chairs if you’re in the room and just turn them, or you can get up and move back, but we do need to move back to plenary at this point, and we’ll continue the conversation. We won’t just be reporting out; we’ll continue the conversation at this point.

Moderator – Lisa Poggiali:
Okay, so we’re going to have a continued discussion, so we won’t be reporting out necessarily from groups, but we’ll invite any of you to raise your hands either in the room or raise your hands online if you want to make a comment. And we’ll just start with one of the questions around implementation internally in donor governments. Eleni, who I don’t know where, oh I think she went to the bathroom. So Eleni from GNI asked a question about what it would look like for donor agencies like USAID or IDRC to implement these processes, and then somebody else whose name I’m forgetting, but feel free to chime in, asked a question around not burdening those who are receiving funding, such as implementing partners and grantees, with having to do extra work themselves in order to implement these principles. So I’d just invite anybody to maybe give thoughts on that. I’m sure this has come up in multiple groups, so we’ll just turn the floor over to anyone who has any ideas around that or wants to expand on that idea of implementing the principles internally without burdening grantees and implementing partners with additional labor. We can start with IDRC, maybe Sidney, or if you want to repeat what it was, Rahia, that you said in the session.

Audience:
I mean, first of all, I can’t speak for all of the programming that happens at IDRC, but I think for those of us who work in technology, we already take these things into consideration a lot. And I think what I would want to try to do is socialize this across my colleagues and begin to talk to them about, for instance, providing more digital security and digital resilience as a portion of a budget. And to work with grantees who are, you know, for instance, if it’s a health application and there’s a, you know, what are the data governance practices? Because not everyone’s on the same page with these issues, right? I mean, they’re thinking of different human rights outcomes around access to health or, you know, access to clean water. So how can we begin that conversation with an IDRC?

Moderator – Lisa Poggiali:
Anyone want to add to that or have a follow-on question? Quinn. And please just introduce yourself when you come on mic.

Audience:
Sure. Is this on? Yes, thanks. Yes, I’m Quinn McHugh, the executive director for Article 19. We work on implementing a freedom of expression approach and a human rights-based approach to bridge technology, policy, and human rights actors. I just wanted to echo what she was saying. One of the things that we see quite frequently when we are submitting grant proposals for Article 19 and in negotiations with donor governments is we will put in a line for safety and security, and it is one of the most frequently questioned lines we have in our proposal. People ask, what’s this for? Can this be only to demonstrate safety and security for specific actors in this program, and not for the organizations themselves to build robust digital security and resilience practices, which are about keeping our partners safe as well? And so that’s just something to echo a little bit. I think it would be really useful in terms of the implementation if there was maybe a broader understanding of the importance of these kinds of lines in the proposals that we’re submitting. And maybe this is something that can be echoed from yourselves down to your colleagues: having a bit broader understanding of digital security and resilience and how that programming should be incorporated into some of the work with grantees, so it’s not just, again, specific to someone being given an emergency training or something like that. It would be very helpful.

Moderator – Lisa Poggiali:
That’s really useful. Thank you. Are there specific actors, this can be directed to you or anybody else, that we should bring to the table or existing networks that we can leverage or bring in as partners in order to socialize these very issues to others across all of our respective development agencies who may not have the knowledge of what digital security might look like in a solicitation process and who should actually be involved and who should be protected?

Audience:
We work on this. I mean, Access Now, pretty much every organization that’s going to be here in civil society could provide something. But in terms of donors themselves, the Ford Foundation is actually really good at building the idea of capacity building into the grants that they give as well. I’m sure there’s other funders here, but that’s just one I’m very familiar with. They have a very open dialogue-based approach and more expansive in terms of looking at issues of security, not just from technical things, but like economic, social, cultural elements of digital security and safety as well, looking at the more kind of a holistic approach to it. So I would suggest if you’re looking for another kind of donor to speak to on some of their practices, they’ve been very good.

Moderator – Lisa Poggiali:
Thank you. What about… Go ahead, Daniela.

Audience:
Yeah, Daniela from GPD. Just wanted to echo that that came up in our group around being more creative in terms of reaching more groups and going beyond the usual suspects and reach communities that are usually marginalized, and that goes back to the very clear point that was made earlier about that bottom-up approach, but also we discussed how these principles can be leveraged not just with donor governments, but also increasing collaboration with private foundations. So that came up as well. So yeah, just echoing that and supporting that point.

Moderator – Lisa Poggiali:
Thank you. Is there a specific fora where maybe these ideas are not socialized as much? So thinking about other major development conferences or like even the G20 process, or other spaces where we might want to work on socializing these ideas so that our colleagues who work on digital health or digital economy can start to learn more about how they can facilitate more digital security? Yeah. Mm-hmm. Go ahead.

Audience:
Thank you. Apologies for my voice. Silvia Cadena, APNIC Foundation. I just wanted to say that there are so many events and principles and processes that small, medium, and large organizations are supposed to figure out by themselves, that when you talk about mechanisms and tools for implementation, it would be really good to have really practical things that allow organizations to see, okay, where do I align? Where do these align with my strategy? It feels a lot like chasing the strategy of others, instead of seeing how that is helping the strategy of each organization to actually deliver. Maybe in our case we can support, I don’t know, or do proper follow-ups of three or four of these principles, but not necessarily all. Same with the ROAMx indicators: you start looking, and it’s like, okay, which one do I choose? What do I do? And all the time you feel you’re doing it wrong, because you’re not following everything. So figuring this out, I really like the fact that you mentioned the principles of digital development, a tiny little thing at the end. Having things like that to say, for this principle, these other things are important, then you start feeling like you are connected and you’re contributing. And even encouraging people from a bottom-up approach to be able to participate in this process would be really good. 
I’m David Sullivan with the Digital Trust and Safety Partnership. One thing that occurs to me: principles are invaluable for building consensus, but in that process of building consensus you wind up with a fair amount of passive voice, and then the concern, of course, becomes that in that passive voice, responsibilities get driven down to implementers and their partners. And I was sort of thinking that you could almost have an accompanying tool for donor agencies to take the principles and then just add specifics in terms of who is responsible 
for each of these things, going from, you know, the actors to the events and opportunities and items or whatnot. And that could be particular to each government. And then you could sort of ensure, okay, we’re not going to, you know, take these responsibilities for human rights due diligence and add that on, you know, as on top of other things that the implementer has to do that gets pushed down to local partners in the field. But that’s something that gets built in at the strategy level within the agency with the right people involved. So just a thought in terms of how this could be operationalized in a way that you go from that sort of vague consensus to clarity about who does what.

Moderator – Lisa Poggiali:
Thanks. That’s very helpful. And I will say the idea for the call to action for donor governments is to allow individual governments to think about, within their own internal legal structures and processes and strategies, what commitments they might be able to make that are concrete, that kind of bring the principles down a level to concrete commitments and actions. And one of the things that we talked about in the drafting process was the potential for building out toolkits as part of the next year of implementation under the Freedom Online Coalition. And so I’m curious if anyone talked about that or has ideas around what kind of toolkit might be helpful. You know, there was a suggestion for different pieces of guidance that were more concrete, that speak to specific tools for different stakeholders, like maybe civil society for advocacy, or diplomats, or development actors who are doing the work out in the field. So, any ideas? Did anyone have those kinds of conversations? Online as well, feel free. Zora, is anyone from online wanting to participate? Okay. Go ahead, Brett. It’s not working.

Audience:
Hello. Hi. Brett Solomon from Access Now. Thanks a lot for the principles and for the donors who have worked on it and for civil society as well. I just wanted to… your point and I think also to David’s as well, is just if these principles serve as a tool to focus donors’ minds on how to get more money out the door and into the hands of the beneficiaries, then I think that’s a real plus. If what actually happens is that they become a bureaucratic roadblock to the delivery of money, then that’s a backfire. And I think in terms of the toolkits and the processes and the briefings and all of that, like the starting point should be, and I’m speaking from the perspective of civil society is, or from my perspective as a civil society member, is that civil society is currently so under-resourced and so under attack and so on the front line, particularly organisations in the global majority. And so whatever we can do to leverage these principles to facilitate the transfer of funds from those who have it to those who need it, then the better. And I would think that should be the starting point of any of the briefings or the processes for implementation.

Moderator – Lisa Poggiali:
It’s very helpful, thank you. Anyone else want to speak to that? Go ahead, Quinn.

Audience:
I’m sorry, I’m speaking too much, but taking off from what Brett just said, there’s something that all of us in civil society, particularly those working on digital rights and these issues, are acutely aware of, which is the big question hanging over all of us, particularly in global majority countries: what is going to happen with the Open Society Foundations. There are very strong indications they will be pulling away from funding a large number of the organisations they have supported in the past. And so the question is, if the donor community thinks it’s very important to have these organisations at the local and national level in global majority countries be strong, what is going to be the response from, as Brett was saying, those who have lots of funding. I mean, statutory donors typically provide larger grants, but it’s often harder to get smaller ones from them. And while these donor principles don’t necessarily talk about that issue, I do think, because this is a forum for donors here, it is useful maybe to reflect that there is a huge amount of uncertainty in the community, because Open Society has funded so many organisations at the human rights level, broadly and at a small level, and was very useful for sustaining and securing them. And with that question there is, as Brett was saying, a huge amount of uncertainty in the field about how we are going to sustain the momentum that we’ve had. And so in these donor conversations, it’d be very useful to think about that: how do we sustain and build the networks that are there when the funding environment is so uncertain at present. That’s all.

Moderator – Lisa Poggiali:
Yeah, that’s a really good point, changing landscape for sure. So I wanted to bring it back to the question that Daniela raised about private sector. See, are there any private sector partners who maybe could comment on how private sector organizations who do have even more money than donors do oftentimes could potentially partner with donors on digital security or any of the other issues raised in other principles? Invite those online or in the room. Michael, I don’t want to put you on the spot, but if there are no other private sector partners who want to speak, I will because I know you are one.

Michael Karimian:
Thank you, not appreciated. So I think your question speaks to a broader challenge, frankly, which is that in low and middle income countries, as they undergo digital transformation, that expands the cybersecurity threat landscape. And so there absolutely needs to be more effort, as some are already doing. For example, the GFCE and the ITU are looking at this as well, among others, Microsoft too, and the government of Sweden. How do we mainstream digital security and cybersecurity into the digital development arena? And as we start to now look at the post-2030 agenda, we need to be much more acutely aware of that than when the 2030 agenda was created in the first place, when digital transformation was undervalued as a means for achieving the SDGs. It’s kind of a conversation happening now, which is a bit too late. And so how we think about cybersecurity in the post-2030 agenda is absolutely a critical component of that conversation, which is starting now. The GDC process must be part of that, and whatever happens with the New Agenda for Peace as well. But yeah, absolutely. I mean, it’s much bigger than just what we’re looking at in these principles today, I think. Thank you.

Moderator – Lisa Poggiali:
Well, and that raises a good point about some of the other fora through which these conversations, and particularly the human rights and democracy affirming kind of perspective, could join forces with some of the more traditional cybersecurity conversations that have been occurring in the ITU and GFCE, et cetera. And so we’d love to hear if anyone is engaged in those processes currently, if there are any concrete recommendations for next steps for trying to engage in those spaces and networks that have thus far not been connected that well, at least from the space where I sit in the DRG, Democracy, Human Rights, and Governance Bureau at USAID. And I know from talking to other donors as well that the democracy and human rights issues on the technology side have been siloed oftentimes from many of these other technology conversations that are happening at the global level. So any insights from anyone in the room, or Michael, feel free to also respond, or anyone online as well.

Audience:
May I? Yes, please. Yes, I don’t know where to start. Thank you for being one of the participants in this launch. My name is Honorable Ratilo from Botswana; it’s around 3.05 in the morning here. What I want to say is that when you are talking about civil society, indeed civil society can play a critical role, but at the same time we have to try to understand a few things, because in most countries you will realize that there is no strong civil society in place, yet digital human rights violations are taking place. So how are we going to try to protect the people who are living in those countries? We can try to protect the interests of ordinary people or the community, but at the same time the donors cannot reach them, because no civil society is registered in their respective country. At the same time, because I’m a member of parliament, I keep on telling them that once a violation of human rights takes place on digital issues, I will take the government to court, but I don’t have enough financial muscle to protect the interests of ordinary people before the court of law, simply because of the financial muscle. Now I want to pose a question: how are we going to assist those types of countries that are not really vibrant in the line of civil society? Thank you.

Moderator – Lisa Poggiali:
So I think, if I’m understanding right, the question was: in spaces where civil society doesn’t have that kind of leverage with the government, or doesn’t have the resources, how can we support them in holding governments accountable when human rights are being violated? If you wanted to put something in the chat, we couldn’t hear everything, as some of the audio was breaking up. I think that’s an excellent question, and something donors can heed the call on to support civil society. And these principles certainly provide a foundation for doing that on these critical human rights issues in particular. So thank you for that. I will now turn it over to Sidney to close out the session, and he will introduce the last speaker.

Moderator – Sidney Leclercq:
Yes, time flies when we’re having fun. And so we’re a bit late, but I’ll introduce maybe Adrian DiGiovanni, our team leader on democratic and inclusive governance at IDRC. And he’ll be providing some closing remarks. And he’s online from Ottawa. Adrian, over to you.

Adrian di Giovanni:
Hi, good morning, everyone. Can you hear me okay? Perfect. All right, so I’ll just dive in, and it’s really just to say a few words of thank you. It’s bedtime here, so I managed to join in for the plenary discussion right now and I have a flavor of the richness of your discussion. So really, to our distinguished guests and panelists, ladies and gentlemen, it’s an immense pleasure for me to join you from Ottawa, Canada. We’re on the unceded, unsurrendered territories of the Algonquin and Anishinaabe people. We just passed our third annual National Truth and Reconciliation Day in Canada, so we always recognize the traditional custodians of the territories we’re on. And it’s a wonderful event, the launching of the donor principles on human rights in the digital age, and we’re really delighted to have been part of this effort. The principles couldn’t arrive at a more critical time. I don’t have to talk to a group of experts like yourselves about the fast and ever-accelerating pace of change with technology, and how it can be a double-edged sword. We always grapple in our work: do we talk about things as an opportunity or as a challenge? And we see it as both, especially for democratic values and human rights for the most marginalized and vulnerable communities in the majority world. Digital technologies, yes, are powerful tools for information sharing, self-expression and organization, but they can also be used to deny or diminish people’s rights. And again, I think within the room a lot of the threats have probably come up quite a bit. We’ve seen how digital technologies can play a key role in the decline or backsliding of democratic processes. And Vera, from what I understand (I read her opening remarks), mentioned how most often, where you see stresses online in the digital space, it reflects a broader decline in human rights and freedoms across the world. And we see that in our work at the Democratic and Inclusive Governance team at IDRC.
We see both the online stresses and those on the ground, and actually how they may feed one another, something we actively try to think about and understand. That’s why, at the International Development Research Centre here in Canada, we’re a funder and a champion of research for sustainable, inclusive development, and we’ve been supporting work to improve evidence and understanding of all these critical phenomena, like information disorder, technology-facilitated gender-based violence, and the online shrinking of civic space. For more on that, Steve Urquia in the room there is definitely a resident expert. And really, for us at IDRC, we focus on the experiences of populations and communities across the global majority. We have also aimed at strengthening the capacity of research institutions and civil society organizations, to build global knowledge networks and to better enable cross-learning and scaling of policy solutions. A couple of examples are the Feminist Internet Research Network and the Data-Free Development Network. And so many of the discussions just now definitely ring true about trying to reach local organizations and actors, and flowing our funding directly. We’re nimble enough; we often get to do it, and that’s really where colleagues like Sidney and Rahia find great joy in the work. We also see the power, and for us this is part of our contribution to a localization agenda. And on technology, we definitely see the gaps and opportunities, especially in terms of ensuring that strategies are tailored to context and to non-European languages, where, from what I understand, most of the action can be when it comes to some of the distortions and the shrinking of democratic governance. So, just to say, collectively as donors we have a responsibility to ensure that the actions and investments made in digital initiatives do not contribute to an erosion of human rights protections and of democratic institutions, processes and norms.
So, in other words, to echo the introductory remarks, donors must do no harm. And because we’re a research funder, that’s something we take seriously across every single project we fund. So it’s not an impediment to funding; to echo a comment earlier, it’s actually something we take very seriously, and it’s becoming harder to understand how to ensure we do no harm with the many threats out there to democracy around the world. I mean, this is why the donor principles are such an important step: they provide both a safeguarding and accountability framework to ensure alignment between investments in digital and innovative initiatives and commitments to human rights and democratic values. I’ll also emphasize the importance of inputs from government, civil society and the private sector throughout the consultation and drafting process of these principles. At IDRC, we’re kind of a public institution, we’re close to civil society, we engage with a variety of actors, and so we really see these kinds of multi-stakeholder settings as key. And I want to take this opportunity to thank all the colleagues who have taken the time to provide feedback, and really to improve the principles and to arrive at the version that you see now. And of course, the adoption of the principles is just the start, and that’s why, together with U.S. colleagues, we wanted this launch to be not just about presenting and discussing the principles, but already to begin to dive into the critical questions of “so what”, “now what” and “what next”, especially through the breakout groups you’ve had, and, you know, I’ve had the pleasure to hear your debriefing just now. This idea, again, reflects our mix of what we think is needed for effective change going forward.
So, as you’ve all just done in this session, you’ve started to address the issues around what the principles might actually mean in practice, what kind of internal and external change is required, how to go about implementation, who we need to engage with, and how we can measure progress once it is made. This is vital to translating the principles into action and impact. And I have to say, the large majority of the work that we support on human rights is about the implementation gap. You can have many great principles and frameworks and constitutions around the world; it’s really then about ensuring that they get implemented in the spirit of human dignity, as was mentioned in the opening remarks. So, if you do have further input to provide, we really encourage you to share any comments or suggestions you have after this launch, including through the dedicated email address colleagues from the FOC have created. I imagine someone in the room can point you to it, but it’s donorprinciples at freedomonlinecoalition.com. And so, let me just conclude by thanking again all of the panelists and presenters who came before; I believe they have already been thanked. And also really to end on a note of gratitude to our U.S. colleagues, who have shown incredible dedication and commitment throughout the development, consultation and negotiation of the donor principles. It’s with a debt of gratitude that I’ll end. Blame Sidney if I’ve gone over time.

Moderator – Sidney Leclercq:
Thank you so much, Adrian. And thank you to everyone for the launch. Thank you very much, Lisa. Thank you.

Speech statistics

Adrian di Giovanni: speech speed 183 words per minute; speech length 1319 words; speech time 433 secs
Allison Peters: speech speed 177 words per minute; speech length 651 words; speech time 221 secs
Audience: speech speed 178 words per minute; speech length 2727 words; speech time 920 secs
Augustin Willem Van Zwoll: speech speed 169 words per minute; speech length 673 words; speech time 239 secs
Immaculate Kassait: speech speed 164 words per minute; speech length 1024 words; speech time 376 secs
Juan Carlos Lara Galvez: speech speed 165 words per minute; speech length 517 words; speech time 188 secs
Michael Karimian: speech speed 214 words per minute; speech length 1245 words; speech time 349 secs
Moderator – Lisa Poggiali: speech speed 164 words per minute; speech length 1823 words; speech time 665 secs
Moderator – Sidney Leclercq: speech speed 164 words per minute; speech length 398 words; speech time 145 secs
Nele Leosk: speech speed 145 words per minute; speech length 790 words; speech time 327 secs
Shannon Green: speech speed 162 words per minute; speech length 498 words; speech time 184 secs
Vera Zakem: speech speed 146 words per minute; speech length 1010 words; speech time 415 secs
Zach Lampell: speech speed 157 words per minute; speech length 620 words; speech time 237 secs
Zora Gouhary: speech speed 176 words per minute; speech length 244 words; speech time 83 secs