Internet Governance in Times of Conflict | IGF 2023 Open Forum #152

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

The analysis examines a series of speeches discussing internet shutdowns and their implications. The speakers express grave concern over the seriousness of internet shutdowns and advocate for the imposition of sanctions on those responsible. They argue that internet shutdowns violate human rights and have far-reaching consequences for safety, health infrastructure, and access to information.

One speaker emphasises the need for better early crisis warnings and suggests integrating internet shutdown indicators into forecasting procedures for crisis surveillance. By recognising internet shutdowns as crisis indicators, governments and relevant authorities can respond more effectively to impending crises.

The importance of private sector governance in protecting the internet against political pressures is highlighted. The Internet Corporation for Assigned Names and Numbers (ICANN), which operates as a private corporation with multi-stakeholder representation, is praised for its ability to resist pressure to remove domain names or participate in political sanctions. This private sector governance is valued as a key safeguard against the subordination of the internet to military and political ends.

The analysis raises concerns about internet access and digital transformation in conflict areas. It highlights the negative impact of the Taliban takeover in Afghanistan, where the digital transformation project funded by the World Bank was halted. Furthermore, the potential problem of the military owning internet service providers is mentioned, as it raises concerns about impartiality and the potential for censorship.

The role of social media networks and platforms in crisis management is discussed, with a call for better coordination with stakeholders and civil society. It is observed that people are more likely to rely on social media apps for reporting incidents or following news rather than traditional websites.

One speaker emphasises the need for clarity on how international humanitarian law should be applied in digital warfare situations where physical force is not involved. The use of cyberspace, spyware, and internet shutdowns in warfare creates challenges in interpreting and applying the rules of distinction, targeting, proportionality, and humanity.

In conclusion, the analysis highlights the gravity of the issue of internet shutdowns and advocates for sanctions as a means to address the problem. It calls for better early crisis warnings, the integration of internet shutdown indicators, and recognises the importance of private sector governance in protecting the internet. The challenges surrounding internet access and digital transformation in conflict areas, the potential concerns with military-owned internet service providers, and the pivotal role of social media networks in crisis management are also discussed. Finally, there is a need to clarify the application of international humanitarian law in the context of non-kinetic warfare involving cyberspace, spyware, and internet shutdowns.

Roman Jeet Singh Cheema

Access Now, a renowned digital security organization, receives daily requests for assistance in digital security issues. They prioritize global cybersecurity policy and advocate for the application of human rights law in Internet Governance. Access Now has observed an alarming increase in surveillance-related measures and spyware attacks targeting civil society, posing risks to individuals and communities.

One of Access Now’s key concerns is internet shutdowns. They strongly oppose decisions that lead to blacking out the internet in specific regions, seeing this as a dangerous precedent. The organization actively works against internet shutdowns, recognizing the dangers they bring.

Access Now emphasizes the importance of reaching global consensus on various aspects of internet governance. They argue for the protection of cybersecurity incident response teams during times of conflict, asserting that emergency responders should not be targeted. They believe this principle should extend to incident response teams in government, civil society, and technical administration alike.

Regarding international governance conversations, Access Now supports the idea, advocating for stricter standards against cyber destructive activity. They express concern about separating and creating different internet standards, preferring a reduction in conflict over a permissive approach. They emphasize the need for preventive measures to address cyber conflict and establish stronger international governance norms.

Access Now highlights the universal unacceptability of internet shutdowns, noting that they are often used to hide impunity, violence, and targeting. They call for consequences for states that consistently perpetrate internet shutdowns and urge member states to demonstrate stronger commitment.

Lastly, Access Now advocates for active prevention of internet shutdowns through international media attention and domestic challenges. They believe the UN system, including organizations like the WHO, should play a more active role in addressing and preventing internet shutdowns.

In conclusion, Access Now, as a digital security organization, assists with digital security issues and pushes for global cybersecurity policy. They emphasize the application of human rights law in Internet Governance and oppose decisions that black out the internet. Their goal is to establish stricter standards against cyber destructive activity and prevent internet shutdowns through international governance conversations, media attention, and domestic challenges.

Mauro Vignati

During armed conflicts, the internet infrastructure often experiences disruptions, which have negative implications. The ICT infrastructure is frequently targeted or taken down, causing significant disruptions in communication and information flow. This poses challenges for civilians, as their ability to receive relief operations and maintain contact with their families is severely affected.

Furthermore, the absence of specific technologies in conflict zones hampers the work of international organizations. These organizations need to operate on both sides of a conflict to provide critical assistance and support. However, the disruption of technology makes it difficult for them to coordinate and execute relief operations effectively.

One of the underlying issues is the lack of distinction between civilian and military internet use. The internet architecture was not originally built to differentiate between these two categories. Consequently, during conflicts, civilian infrastructure often becomes disrupted, as it is not protected or prioritised. To address this, there is a need to establish clear guidelines and mechanisms to distinguish and protect civilian infrastructure from military targets.

Addressing this issue, it is recommended that the state takes measures to segment data and communication infrastructure used for military purposes from civilian ones. This segregation would help protect civilian infrastructure and ensure a more efficient and secure digital environment during conflicts. Additionally, tech companies should also consider implementing segmentation when providing services to military or civilian entities to prevent unintentional disruptions or compromise of civilian infrastructure.
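One way such segmentation can be made operational is for a state to publish which of its network blocks carry civilian services, so that operators and belligerents can identify infrastructure that must be spared. The sketch below illustrates that idea using Python's `ipaddress` module; the address ranges and the classification scheme are purely hypothetical, and this is only one possible mechanism, not the ICRC's specific proposal:

```python
import ipaddress

# Hypothetical address plan: a state declares which of its network blocks
# carry civilian services and which are segregated for military use.
CIVILIAN_BLOCKS = [
    ipaddress.ip_network("203.0.113.0/25"),    # e.g. hospitals, schools (example range)
    ipaddress.ip_network("2001:db8:100::/48"), # example IPv6 civilian block
]
MILITARY_BLOCKS = [
    ipaddress.ip_network("203.0.113.128/25"),  # segregated military segment
]

def classify(addr: str) -> str:
    """Classify an IP address as civilian, military, or unknown
    based on the published address blocks."""
    ip = ipaddress.ip_address(addr)
    if any(ip in net for net in CIVILIAN_BLOCKS):
        return "civilian"
    if any(ip in net for net in MILITARY_BLOCKS):
        return "military"
    return "unknown"

assert classify("203.0.113.10") == "civilian"
assert classify("203.0.113.200") == "military"
```

The design choice here is that protection follows the address plan: once civilian infrastructure lives in declared, separate blocks, it becomes technically distinguishable from military traffic, which is the precondition for applying the IHL rule of distinction to networks at all.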

Looking towards the future, it is vital to carefully consider how the digital infrastructure should be structured. As conflicts continue to evolve and technology advances, it is crucial to establish a robust and resilient digital framework that ensures the smooth operation of critical communication and information systems.

The International Committee of the Red Cross (ICRC) plays a significant role in conflict management and humanitarian efforts. They work in more than 100 countries and are devoted to upholding International Humanitarian Law (IHL) during conflicts. This includes the protection of critical civilian infrastructure and refraining from targeting civilian objects.

The ICRC also advocates for the consideration of data protection within IHL. They aim to convince states to include data protection as an essential aspect of international humanitarian standards. Recognising the importance of data as an object to be protected aligns with the increasing reliance on digital infrastructure during conflicts and the need to safeguard sensitive information.

In conclusion, during armed conflicts, the internet infrastructure is often disrupted, impacting civilian access to vital services and hindering the work of international organizations. The differentiation between civilian and military internet use, along with the segmentation of data and communication infrastructure, is crucial to protect civilian infrastructure and ensure an efficient and secure digital environment. As conflicts continue to unfold, it is essential to consider the future of digital infrastructure and uphold International Humanitarian Law to safeguard civilian lives and maintain connectivity in conflict zones.

Moderator – Regine Grienberger

Global internet governance is facing significant challenges due to conflicts occurring between different groups and nations. These conflicts include the ongoing Ukraine war, terrorist attacks in Israel, and military coups in the Sahel region. The competition between authoritarian and liberal systems further exacerbates these conflicts, along with the global north-south divide concerning justice issues.

Regine Grienberger highlights the negative impact of these conflicts on global internet governance, as they impede the stability and functionality of the internet. In response, Grienberger emphasizes the importance of protecting the global, free, and open internet. Governments often intend to preserve internet freedom, but their actions can inadvertently undermine these efforts. Additionally, interfering with the architectural characteristics of the internet poses significant dangers.

Regarding potential solutions, it is noted that sanctions should not be the first response against internet shutdowns. Sanctions are viewed as a complex diplomatic instrument and not the primary course of action in addressing this issue. Instead, it is crucial to integrate internet shutdowns as a crisis indicator in early warning and forecasting procedures. By incorporating this information into crisis management protocols, social upheaval, riots, and civil wars can be prevented.

In the context of the digital divide, it is revealed that nearly 12 percent of the Sustainable Development Goals (SDGs) have regressed rather than progressed. Despite this, collaboration between countries through digital partnerships remains a viable solution. Estonia, for example, actively engages in digital cooperation with almost every country, including initiatives in Afghanistan. This showcases the potential of digital cooperation to address global challenges in an increasingly divided digital landscape.

In conclusion, conflicts between groups and nations pose a significant threat to global internet governance. The need to protect the free and open nature of the internet is emphasized, alongside urging governments to be mindful of unintentional interference. Furthermore, while sanctions should be approached cautiously, integrating internet shutdowns as crisis indicators and fostering digital cooperation contribute to mitigating the challenges presented by conflicts in the digital realm.

Nele Leosk

The analysis examines the role of cyberspace and technology in modern conflicts, focusing on the war in Ukraine. It highlights the negative impact of cyberattacks on Ukraine’s telecommunication infrastructure and their ripple effects on other countries. These attacks disrupted telecommunication services, and they often preceded physical attacks during the war.

The analysis also emphasizes the importance of digital society and secure infrastructure in combating cyberattacks, citing Estonia’s secure digital identity system as an example. It stresses the need for collaboration between the private sector and governments to maintain data and services during conflicts, and it addresses the increasing targeting of everyday services, like hospitals and schools, by cyberattacks and their detrimental effects on individuals.

Finally, it highlights the significance of public goods and digital public infrastructure in democratizing and making states more accessible. Estonia’s collaboration with Finland and Iceland on digital solutions is also discussed, emphasizing the benefits of global collaboration. Overall, the analysis underscores the urgent need for robust cybersecurity measures in modern conflicts and advocates for cooperation and innovation to address these challenges effectively.
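The value of a secure digital identity mentioned above lies in tamper evidence: a signed record can be verified even if the service holding it is attacked. Real eID systems such as Estonia's use asymmetric PKI with smart cards and X.509 certificates; the sketch below is a deliberately simplified stand-in using a symmetric HMAC from the Python standard library, with a hypothetical key and record format, just to show the verify-before-trust pattern:

```python
import hashlib
import hmac

# Hypothetical shared secret standing in for a citizen's key material.
# Real eID systems use per-person asymmetric key pairs, not a shared key.
SECRET = b"demo-key-not-for-production"

def sign_record(record: bytes, key: bytes = SECRET) -> str:
    """Produce a MAC over a record, analogous to digitally signing it."""
    return hmac.new(key, record, hashlib.sha256).hexdigest()

def verify_record(record: bytes, tag: str, key: bytes = SECRET) -> bool:
    """Check that the record has not been altered since it was signed."""
    return hmac.compare_digest(sign_record(record, key), tag)

record = b"patient=1234;prescription=ibuprofen"
tag = sign_record(record)
assert verify_record(record, tag)                              # intact record verifies
assert not verify_record(b"patient=1234;dose=tampered", tag)   # altered record fails
```

The point of the pattern, as in the Estonian example, is that every medical practice does not need to be individually hardened: as long as records are cryptographically bound to an identity, tampering is detectable wherever the data ends up.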

David Huberman

The functionality of the internet on a global scale is attributed to the adoption of common technical standards, which guarantee interoperability. These standards are developed by engineers around the world who contribute their expertise to ensure the quality and efficient operation of the internet. The internet’s infrastructure relies on a system of routing and domain name system (DNS), which everyone voluntarily adopts. This system enables the internet to work uniformly across different regions.

Another crucial component that supports the functioning of the internet is the root server system. This system ensures that all DNS queries work smoothly, allowing users to access websites and online services. Even during times of conflict, if regional root servers are taken down, it does not significantly impact internet users. This resilience is a testament to the robustness of the root server system and its ability to maintain the internet’s accessibility.
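The resilience described above follows from a simple property: a resolver needs only one reachable root server out of many, and each of the 13 root identities is itself an anycast network of many instances. A minimal Python simulation of that failover logic (the server names are the real root identities, but the query itself is mocked for illustration):

```python
import random

# The 13 root server identities, a.root-servers.net through m.root-servers.net.
# Each is in reality an anycast cloud of many instances worldwide,
# modeled here as a single entry for simplicity.
ROOT_SERVERS = [f"{letter}.root-servers.net" for letter in "abcdefghijklm"]

def query_root(server, down):
    """Mocked root query: returns a referral unless the server is unreachable."""
    if server in down:
        return None
    return "referral-to-TLD-servers"

def resolve_with_failover(down):
    """A stub resolver tries root servers in a random order; any single
    reachable server is enough for resolution to proceed."""
    for server in random.sample(ROOT_SERVERS, len(ROOT_SERVERS)):
        answer = query_root(server, down)
        if answer is not None:
            return answer
    raise RuntimeError("all root servers unreachable")

# Even with most servers gone (say, an entire region's), resolution succeeds.
regional_outage = set(ROOT_SERVERS[:9])
assert resolve_with_failover(regional_outage) == "referral-to-TLD-servers"
```

This is why taking down every root server instance in one region has no visible effect on users: resolvers simply fail over to any of the remaining instances, wherever they are.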

The governance model of the technical layer of the internet plays a pivotal role in keeping the internet online for everyone, even when individual systems go offline. This governance model is particularly effective during times of conflict, ensuring that the internet remains operational and accessible to users. It provides a framework for coordination and cooperation among various stakeholders to address challenges and maintain the internet’s functionality.

Building and securing the internet is no longer solely an engineering endeavor. It requires a collaborative effort involving multiple stakeholders, including civil society, government, academia, and engineering. The internet has become a matter of national security for countries, and the preservation of its public core must be achieved with neutrality. Recognizing the real-world implications, stakeholders from different sectors come together to ensure the security and stability of the internet.

Economies in transition or remote areas prioritize the construction and development of the internet to connect their people and share information with the rest of the world. Once initial construction is complete, securing the internet becomes a crucial focus to prevent vulnerabilities that may compromise the economy and infrastructure.

The Internet Corporation for Assigned Names and Numbers (ICANN) is tasked with ensuring the security, stability, and resilience of a part of the internet through its multi-stakeholder model. This model has proven effective in maintaining the internet’s functionality during times of conflict, as acknowledged by David Huberman when appreciating Dr. Mueller’s explanation of its efficacy.

In conclusion, the internet’s functionality and continued accessibility are made possible by the adoption of common technical standards, the resilience of the root server system, and the effectiveness of the governance model of the technical layer. The collaboration of multiple stakeholders and the recognition of the internet’s security implications play a vital role in building and securing the internet globally.

Session transcript

Moderator – Regine Grienberger:
So welcome, everybody, to this session with the title Shaping Internet Governance in Times of Conflict. My name is Regine Grienberger. I’m the German Cyber Ambassador, and I’m going to moderate this session. One of our panelists is still missing, so let’s see how we manage. But we will start with somebody else anyway. So when we look at what we have now, a global internet, the question that we are going to pose in this session is, just let me look up my notes. Where is it? So we live now in times of conflict, and this conflict is both imminent, sometimes even violent, a violent conflict, like the Ukraine war, like the terrorist attacks that we’ve seen in Israel these days, like the military coups in the Sahel. But there is also a strategic competition between authoritarian and liberal systems. And there is also, I would call it a redistribution conflict or a conflict over global justice issues between the global north and the global south, and also within societies. In these conflicts, people take sides. They are forced sometimes to take sides. They want to protect what they regard as theirs. And sometimes they lash out to push back others who are threatening their interests. Even institutions and conversations, sometimes also conversations that we have here at the IGF, are becoming increasingly politicized. You have seen or heard and remember, of course, that Ukraine, in March 2022, requested ICANN to block Russian internet domains. There are also requests from governments in a more general way to take back control of the internet to shield societies. Here he is, Roman. To shield societies, for example, from disinformation, from hate speech, from fake news. And sometimes governments do this even in good faith, unintentionally breaking what they actually want to protect, which is the global free and open internet. 
In this conversation that I’m going to have with my four partners here on the panel, we want to describe the pressure on internet governance in times of conflict and the dangers of fiddling with the architectural characteristics of the internet. But we would like to also highlight what keeps us together. What are the elements that help us to maintain this global internet? Which tools can we use to stabilize cyberspace in times of conflict? I would like to start to my right with David Huberman from ICANN. I would like to ask you to please introduce yourself first so that we know where you are from, your affiliation. And then perhaps share your experiences with what happens to internet governance in times of conflict from your point of view. What is the issue? What is the tissue that keeps us together? And what does ICANN do to protect this tissue? Thank you.

David Huberman:
Thank you, Dr. Grienberger. So good afternoon, everybody. My name is David Huberman. I work at ICANN. I am based in Washington, DC. And I have spent the last 25 years of my life, 24 years of my life, building internet and ensuring that people around the world and societies around the world, from the richest to the poorest to the newest and the oldest, have functioning internet. And what do I mean by that? Well, when you pick up your phone and you pick up your mobile device and you launch TikTok and you watch a video, to me, that’s not the internet. When you open up your laptop and you send an email, to me, that’s not the internet. What we are talking about is two very different layers. There is the layer where all these platforms exist, where your social media walled gardens exist, where your government ministries have their intranets and their information and the information for the society exists, where your email and where your videos exist. But all of these run atop a different layer. And this is the technical layer that underpins the entire internet. You mentioned a global internet. And that’s very important because the entire world, the entire internet works because it uses common standards. Every time you do anything in an application, any time you do anything on your computer, underlying it is a system of routing and a system of the DNS, the domain name system. These are protocols that everybody in the world has chosen to voluntarily adopt. And they’ve adopted the same standards. And what that buys us is something called interoperability. Your internet in your country, in your home, works the same as my internet in my home and the same internet everywhere around the world because of common standards. A very smart person told me yesterday that it is this system that is designed to unify the world in times when conflict tries to divide us. And I want to give you a really interesting example of it. 
There is a system on the internet that pretty much none of you have ever heard of that is the most important system on the internet, or one of the most important things on the internet. You rely on it every time you do anything, and you don’t even know it. It’s called the root server system. And it is essentially what allows every DNS query to work. And even if you don’t know what it is, a DNS query is what allows you to get to every site you visit and allows your application to get to the places where it wants to deliver the data that you’re asking for. And everybody in the world relies on this root server system to work 24 hours a day, 365 days a year, for the 40 years that it has existed. So here’s the thing. It’s in a time of conflict where if you think about a region, chaos has erupted. Violence has erupted. People are being killed. And you think, OK, between the bombs and the sabotage, we can take systems like this down, right? If the root server system here in Japan were to completely disappear, all of the different servers in the root server system in Japan disappeared, or in the Middle East right now, there’s a lot of chaos. There’s a lot of war. If all of the root servers in the Middle East were to go away, do you know what would happen to all of the internet users still connected? Nothing. Nothing would happen. They wouldn’t notice it at all. And that’s because the governance system behind this technical layer has been designed, has evolved, and has been hardened to ensure that jurisdictional concerns, no. Application concerns, data concerns, no. It’s about the engineering, the interoperability, and the promotion of open standards that every developer can use to create whatever it is they want on the content layer, on the second layer. At the technical layer, the internet is not fragmented. The internet works because engineers in China, and engineers in Japan, and engineers in South Africa, and engineers in Israel, and engineers in Palestine, and engineers in Ukraine, and engineers in Russia, all come together on mailing lists and sometimes at face-to-face meetings to create these standards upon which the internet is built. And they do so with an emphasis on quality engineering, openness, and interoperability. And so I’d like to close by remarking that in times of conflict, this is when the governance model of the technical layer actually shines, because the output of this governance model is what keeps the internet online for everybody, even as systems go off.

Moderator – Regine Grienberger:
Thank you, David. I would now turn to Roman from Access Now. Please first introduce yourself. And then what are your observations from your professional experiences with internet governance in times of conflict? And what do you do? What does your organization do to maintain this open, free, global internet that is an important means for the civil society?

Roman Jeet Singh Cheema:
Thank you so much, Ambassador. And I’m very happy to introduce myself partly, because I think it relates to this conversation and what the organization does. My name is Roman Jeet Singh Cheema. I am Senior International Counsel and Asia Pacific Policy Director at Access Now, where I also coordinate our work on global cybersecurity policy. Access Now is an international civil society organization that seeks to defend and extend the digital rights of individuals and communities at risk. And we are born from this basic understanding that tech can empower. Tech is critical to enabling human rights online and offline. But technology can also place people at risk. And that’s a very important reason why we believe this community needs to engage in this conversation and why we exist. Access Now was, in fact, born as a digital security organization. In 2009, Access Now’s co-founders during the Iranian Green Revolution protests and many other moments after that realized that technology was helping activists, mobilizers, this wide civil society sphere that many of us depend on in autocracies, democracies, and everywhere in between, that tech was enabling them but also putting them at risk. People’s devices were being attacked, compromised. People were being monitored. So that requires digital security assistance, but also public conversation around what is acceptable when it comes to recognizing how to prevent tech from doing harm. And what that means is today is we are a global digital security provider. 
We run a 24 by 7 digital security helpline that receives every day, as I speak, requests from civil society, journalists, and others, requests for assistance from basic things on what sort of device should I use to prevent myself from being attacked, or I believe I’m seeing something malicious here, or my friend who’s an activist or journalist has been detained, I’m worried about the information they may have on the device or in their online accounts, to many more complicated requests. Alongside that, we do public policy work and advocacy, and we also convene. I work in the advocacy arm that tracks and manages this. But why I mention this is that it’s therefore important to recognize that if tech empowers, and it also puts people at risk, how do we approach the question of the internet governance questions that we look at or the internet governance architecture? How does it address issues of contestation and times of conflict? And I think it’s very important to recognize that we specifically are so clearly aware that technology can be used to oppress and attack people, whether it’s in moments of general peacetime in terms of political contestation within countries, but also during moments of protest, of activism, but also conflict, whether it be internal armed conflict or cross-border situations. We particularly have been just interested in the rapid growth in the number of surveillance-related measures and spyware attacks that we have seen and this global hack-for-hire industry that exists, but also the more weaponized usage of cyber attacks more generally. And I know that some of my other panelists will go further into this in detail, but I did want to talk about this very briefly. It’s very important in the internet governance space as well to sometimes bridge it to the cybersecurity conversation, a conversation that you, Ambassador, and many others are at very often about what rules apply when it comes to human rights generally. 
And many of us are very intimately aware on how human rights law already applies to what governments and corporations and other actors do every day when it comes to technology and the importance of internet governance coming from a human rights framework. But also, how does humanitarian law apply during peacetime, internal armed conflict, and moments of general warfare? And why I mention this is that it leads to interesting moments. We’ve had such an interesting, robust conversation in the UN system when it comes to what sort of cyber norms should apply and the work of the UN Open-Ended Working Group on state-related ICT behavior. as well as other processes, but you still see those interesting moments pop up in the internet governance space, for example. I think it’s important to address the sort of conversation that came up in relation to the Russian invasion of Ukraine and then the conversation around the role of ICANN there, where many actors, including actors that we work with on digital security, actors directly involved in the situation, said it’s important for groups to take a stand. And it is our belief there, for example, that the internet governance ecosystem should not be making those sorts of decisions of taking entire entities out. And part of that comes from our own existing human rights work. We work on the issue of internet shutdowns, working together with the global community, the Keep It On Coalition, and we know the danger that happens when you decide to try to black out the internet. And if these decisions go into the technical administration of internet resources, we believe that was a very dangerous precedent that would cause problems. That does not mean that internet governance should not be accountable or should not be having conversations around who is in these spaces and what takes place. 
And I thought I would share this based on my experiences in the Asia-Pacific region, where civil society groups in countries which have gone through coups, which have gone through dramatic moments of transition, have asked which government bodies, for example, would ICANN recognize as legitimate or not? If officials are there who have sanctions or human rights measures applying or human rights claims against them, should they be allowed into meetings or not? And this is an important conversation that we think can happen further because conflict and human rights violations, unfortunately, are growing day by day. I just did want to end with noting this. It is very, very crucial for us to recognize that when it comes to these conversations, they are based on what is happening every day. So the ultimate reality is that we see a massive increase in the number of attacks targeting civil society when it comes to the use of cyber weapons and tools. And I know others may also speak to this. Why I mentioned this is that it’s therefore critical to recognize that there are certain areas that we should even more strongly say so. The work of cyber policy experts to say that the public core of the Internet, including the domain name function, should not be targeted is something that we must apply here. And I would end with noting this, that there are certain things we can agree upon further. Cyber security incident response teams within government, in civil society, or those who work in technical administration should not be targeted during times of conflict. That sort of idea that there are emergency response actors who should never be targeted, you don’t try to target the fire service, is something we need to understand more. And these basic understandings that are there between these different communities need to be synthesized further, which is why I’m very happy that we are having this conversation here.

Moderator – Regine Grienberger:
I would like to turn now to Nele, my dear colleague from Estonia, Digital Ambassador. Please introduce yourself, and then I would like to ask you, because you are working closely with Ukraine, and Estonia has itself also experiences with attacks on the internet in cyberspace, I think it was in 2007. So that would be one aspect I would like to hear from you about. And then the other one is, you’re also looking at the bright side of digital technology, and all the possibilities and opportunities that governments with government applications can build on the internet, and the potential of digital cooperation also with your partners. So I would like to hear from you, what do you think should internet governance bring also as a contribution to this kind of overcoming of conflicts and providing for opportunities for, for example, achieving the sustainable development goals in the long run?

Nele Leosk:
Hello everybody, and thank you, Regina, for a nice introduction. So I'm the digital ambassador of Estonia, and indeed, when I started in my position, I thought I would only deal with the bright side of digital technologies, because we also have a cyber ambassador, whose task is to deal with all the dark sides of the internet and technologies. But over time, unfortunately, some of these grayer and darker issues have also landed on my table, and increasingly so. So I would say that it has become part of my everyday job to make sure that technology is used for good and not for bad, and there are several global, multilateral, and multi-stakeholder processes for this. One of them is why we are here, the IGF. There are several others that are emerging, so our task as tech ambassadors is to make sure that the internet remains, as it was stated, open, free, interoperable, and accessible to everybody, but also that the technologies are not misused. But before I go to the second part of Regina's question, I would perhaps point to some of the takeaways from what we have seen in the current war in Ukraine and beyond. And I guess we can conclude that actually both layers that you referred to, the technical layer and what is on top of it, are under attack, and are also very fragile. We can see clearly that the digital space, or cyberspace, has become part of the war. The war in Ukraine did not start with physical missile attacks. It actually started with the cyberattack on Viasat that affected telecommunications in Ukraine and had a huge spillover effect on several other countries, including in the European Union. We also saw that data centers in Ukraine were attacked, and the internet service provider Triolan was attacked during the very first days of the war that started on the 24th of February.
So what has resulted? We of course also know that connectivity has been vital and has actually depended on very few actors in Ukraine, if not to say currently mainly only one actor. And this brings other players into our diplomatic fora, which perhaps used to focus mainly on negotiations with states. Now the private sector and big tech, of course, are a big part of it, because they have enormous power over both layers. But we have also seen several good practices coming out of this unfortunate war. Governments and the private sector have joined forces to help Ukraine to keep its data, but also to keep its services running. The other takeaway from the war in Ukraine is that, and this is what we have seen also in Estonia, while years back cyber attacks were mainly targeted at critical infrastructure, government databases, perhaps government websites and services and so forth, this is changing. Increasingly, the services that regular people use every day are coming under attack, be it attacks on hospitals, kindergartens, schools. And this means that cyberspace is no longer actually about cyber security per se; it is also about how we digitalize our society. So for us in Estonia, it has been really useful that every single person in Estonia uses, for example, a secure digital identity, because it would be almost impossible to protect every single medical practice from these attacks if they were not using a secure digital identity. And I think this finally comes to Regine's second question about the importance of digital cooperation. We have indeed had a long-term cooperation with Ukraine over the past 14 years, building their digital infrastructure, building data governance and data centers, helping to introduce a secure digital identity, and helping it to be compatible with EU regulations and standards.
And this has also very clearly helped Ukraine to provide services to people under very extreme circumstances. People in Ukraine can access their documents even if they live somewhere abroad and so forth. So this, I would say, long-term cooperation and really focusing on your digital society has become actually more crucial in times of conflicts than perhaps we would have thought before. I have some other comments, but I guess we can keep them for the second round.

Moderator – Regine Grienberger:
Thank you. And now to Mauro, here to my left. You’re from ICRC. I’d like to ask you also to introduce yourself. I wanted to have you on this panel because I wanted to highlight another element or another means that helps us to keep the world together and unified also in times of conflict. And in my understanding, that is also the role of international law and especially international humanitarian law, which are basically rules of the road for societies, for governments, to minimize harms to civilians in times of war and to perhaps also solve conflicts in peaceful ways. So please, could you explain a little bit where you stand, what you observe as threats to the global internet, and what can be done to prevent this harm or this damage done to the internet?

Mauro Vignati:
Thank you, Regina. My name is Mauro Vignati. I'm a senior advisor on digital technologies and warfare at the International Committee of the Red Cross, based at the HQ in Geneva. I think there is no need to introduce the ICRC. So I would like to start with two remarks. The first one is about what you said, Regina, at the beginning. We have armed conflicts on one side, and then we have political and economic tensions that are going to impact the internet. What we see, logically, is that the two are influencing each other: armed conflicts are increasing the polarization of political and economic tensions, and we also have the opposite situation. This is the first comment. The second one is about what David said in the introduction. We should talk about internet governance, and we should talk about digital governance; that is, about the infrastructure and about what is built on top. So, referring to the infrastructure: what we see at the beginning of each conflict is that one of the first infrastructures to be disrupted is the ICT, the information and communication technology infrastructure. It could be antennas, it could be cables. Those are the first pieces of the infrastructure that are taken down and disrupted. And here we have a first consideration. The internet was never built with the idea of splitting what is to be used for military purposes from what is to be used for civilian purposes. When we talk about that, we immediately receive a reaction: hey, the internet is like that, you want to modify the internet. It is not our goal to think of having two internets, but we have a consideration on that. The internet is not just a bubble. The internet is 77,000 autonomous systems that are connected to each other. They are autonomous because they can work without interconnection to the others; if they want to connect to the others, okay, then we create the internet.
But just to be aware that we are talking about independent systems that are connected to each other. And with the fact that military armed forces are using more and more civilian infrastructure, we have a situation where, logically, civilian infrastructure will be taken down because it is used by armed forces. So this is the first consideration in terms of international humanitarian law. There is a principle called the principle of distinction: civilians and civilian objects must at all times be distinguished from combatants and military objectives. In this case, we have an intermingled situation where the infrastructure is not clearly defined and split between combatants and civilians. This is the first part, for what we can call internet governance. For the digital part, what we see is that during armed conflict, and even before the armed conflict starts, we have a disruption of what we call the global digital supply chain. This disruption of the global supply chain happens through sanctions and restrictions, and lately we even see self-exclusion of companies from territories that are in conflict. And this has a huge impact on civilians, and also on the relief operations that international organizations like ours are carrying out. When we talk about self-exclusion, we see companies that are exiting specific territories and no longer providing services that civilians are accustomed to using for several reasons: for keeping contact with families, and for being able to understand and receive information about the relief operations that we deploy in the territory. And with that, civilians will be isolated in terms of information; they will no longer receive information coming from outside the territory of the conflict. So this is the first consideration about the disruption of the global supply chain. As an international organization, we have to work on both sides of the conflict.
If the technologies that we are accustomed to using in those territories are no longer there, how can we operate in the two territories and coordinate our relief operations? This increases the difficulties for NGOs and for international organizations like the ICRC to operate in conflict zones.

Moderator – Regine Grienberger:
Thank you. I would like to put a second round of questions to my panelists now, but while I do this, please think about questions that you might want to raise. My first question goes to David. You've heard about the roles that the other players in the game take for themselves: governments, international organizations, civil society. What do you make of this burden sharing for maintaining the global internet? And perhaps you can also give us your take on the role of international law in this, because we were having a conversation about that before, and what David said about it was interesting.

David Huberman:
Thank you, Regina. For 30, 40, 50 years, when we were building the internet, in the 1960s and the 1970s, the 1980s and the 1990s, this was an engineering endeavor, okay? These were smart men and women from engineering backgrounds pursuing an engineering pursuit. But it's 2023, and the internet is a matter of national security for countries, just like you said, with the intermingling of both military and civilian purposes over the same wires, over the same routers. My friend from Access Now talked very pointedly, very passionately about the public core of the internet and how we must maintain this neutrality, right? We can't do this as an engineering endeavor anymore, because there are real-world implications in what happens and how we build and evolve the internet and internet standards. It's too much a part of our lives today. And so it has never been more important that civil society, government policy makers, and engineers work together to develop standards, to build the internet, and, today, equally important, maybe more important, to harden the core of the internet to help prevent attacks from state actors, from bad actors, from terrorists, from whomever it is who wants to lower the quality of our lives by interrupting our digital lives. It is so important that we now start securing, or better secure, what we have built and what we are building. This is what ICANN does. The actual mission of ICANN is to increase the security, stability, and resilience of the part of the internet that we are in, the unique identifier system. And we do so through the multi-stakeholder model. And the multi-stakeholder model is only enriched through the spirit of collaboration, cooperation, and the shared expertise of governments, of civil society, of academia, and of engineering, to better build this internet that we all rely on. As for international law, she's making me uncomfortable on purpose. You can see her smiling. ICANN is an organization.
It’s based in the United States. ICANN respects international law. The ICANN ecosystem is made from stakeholders all around the world. Of course, we hope they respect international law. But that’s really easy for us to say, because I’m an American, and you’re a German, and you’re an Estonian, and you are Swiss. And we live in this global north in very advanced societies where we believe in and want to progress our values that comport with what our idea of international law is. But the truth is, when I go into countries, economies in transition, many of which are in the global south, many of which are remote, many of which are found in remote corners in oceans, what we’re trying to do is build internet that works for their people so that they can connect and get information and share information with the rest of the world. And in these economies, the priority is on the initial construction, the initial development. And then the focus becomes on securing it. So these do not become vulnerable islands on the internet. And these discussions about international law, that’s not in the discussion. It’s not part of it, because we’re just trying to get connected, stay connected, and build connectivity that is secure. So it’s a very north-centric, it’s a very, very Western and European-centric type concept. And while ICANN strongly believes in these precepts, we have to understand that in the end, we are building this public core and trying to make it good for everybody.

Moderator – Regine Grienberger:
Thank you. And of course, Mauro has to respond to this.

Mauro Vignati:
Thank you. So the ICRC had, during two years, a so-called global advisory board of high-level persons from around the world, and in a week we are going to publish the final report of the consultations we had with those persons. We have prepared recommendations for combatants, for states, for tech companies, and for humanitarian organizations. One of the recommendations we have for states is that states should, to the maximum extent feasible, segment data and communication infrastructure used for military purposes from civilian ones. So I fully understand the argument that the principal goal is connectivity, and guaranteeing connectivity. But that connectivity is subject to the state, on a voluntary basis: if the state decides to disrupt this connectivity, it can do so. So what we recommend to states here is to start to think about segmenting and segregating the communication systems and the data that they are providing on their territory. We ask the same of tech companies: when satellite companies, when cloud infrastructure providers are providing services to the military or to civilians, they should start to think about how to segment those. How would we like to have the internet in the future? How would we like the digital space to be structured in the future? Do we want to go on without distinguishing what is military from what is civilian, or would we like to think now about evolving to a new level where data and infrastructure are separated between what is civilian and what is military?

Moderator – Regine Grienberger:
Thank you, Mauro. I saw Roman nodding and you were mentioning the cyber norms in your initial statement. So I would like to hear from you what role do you see for the international law in comparison to the other elements of the connective tissue that we have mentioned?

Roman Jeet Singh Cheema:
Thank you so much. I think it's important, when you go into this conversation, to recognize that governance and law are linked but are not the same topic. So, for example, when you mentioned that many countries in the majority world may not want international law, I would dispute that, some do, but many of them do want international governance conversations. In fact, the IGF in a sense exists because they brought up these conversations through the WSIS process and elsewhere. And why I mention that: it is very important to recognize where that imperative comes from. So that's the first part; I just want to note that. I think that's what we also see in many of these cybersecurity conversations. We see many countries, sometimes countries that may not always be fully informed, or that may be asking for something counterproductive to other elements of national strategy, but that say: we want this discussed in the UN system, we want more clarity. And I think it has been useful that the cyber processes have been slow, very evolutionary, but have outlined certain things: that there is an applicability of the UN Charter and international law to cyber behavior. It's not a blank slate. There are elements there that are clearer than perhaps other parts are, including some of the conversation on the public core, which I acknowledge is not a fully recognized principle for every actor in the UN system, which is unfortunate. And perhaps we need more clarity there. But what's really important to recognize is that, fundamentally, we do need to acknowledge that we need more evolution of this. I'm a bit cautious about the idea of saying that we need to separate and create different standards. Here I want to step back a moment to the conversation we see domestically in many countries on internet shutdowns.
Many countries have conducted internet shutdowns, and the position of Access Now and many of our colleagues in civil society is that internet shutdowns are never acceptable, full stop. They are not legal, in our view, under international human rights law, and ethically, and in terms of efficacy, they are also ineffective. But the argument that some governments have taken is: look, we know that internet shutdowns disrupt public services and access to critical care, so maybe let's create different layers. We'll have government systems that are not shut down during the shutdown, and other systems that are. And I was just thinking about this when we're discussing it now in the context of conflict, and I recognize, very importantly, the principles that the ICRC is proposing here about careful targeting. But in a sense, the main principle we actually have is that this kind of cyber conflict, per se, is never acceptable. And just thinking about that, Regina, I know that's the tension we see in the UN processes themselves. Many states are saying: we actually do not want cyber conflict to regularly take place; we do know that it's already happening, and if we don't acknowledge certain rules of the road, operators will go and conduct all sorts of problematic behavior. But this is why I was thinking, when it comes to this concept of internet governance and separating layers and doing other parts there: the basic principle is that we want to reduce conflict rather than encourage more of it. I therefore think we need stronger standards saying that all sorts of cyber-destructive activity, by states, by state-linked actors, or by non-state actors, is problematic. I think one of our challenges as we apply traditional humanitarian law in this space is that, while we are trying to make sure that when conflict takes place it is very limited and targeted, this also gives a permissive nature to certain actors.
And I acknowledge that when states sometimes say this, they don't always mean it; there are other political reasons why they're saying it. But this is the tension I see. Because, for example, I've seen so many actors within global South countries that say: we know shutdowns are problematic, but we can never stop doing them, so can you let us say that certain government systems will always be accessible and the rest will be shut down? So that is the parallel that came to mind. And I know it's a crude parallel; it perhaps, in many ways, does not recognize the sophistication of many of the conversations the ICRC and others have had. But I thought I'd mention it as a sort of challenging reality check of how many governments may actually take these rules and apply them.

Moderator – Regine Grienberger:
Thank you, Roman. I would like to touch briefly on something that Nele mentioned before, and that is that what is built on the internet is, you know, public infrastructure. And I would like to invite you to elaborate a little bit on digital public goods and digital public infrastructure, and perhaps also digital commons, as elements that could also be seen as connective tissue on the basis of the internet.

Nele Leosk:
Thank you, Regina. Now you have opened an entirely new and big conversation. I don't know how many of you are familiar with these terms that have emerged over the past years, digital public goods and digital public infrastructure; in the EU we mainly use the term digital commons. But I would perhaps summarize it from two different angles. One, I would say, is more philosophical, and the other is a little more practical. The first one is really related to how we see the role of governments, but of everybody, in building a digital society. In Estonia it started really with rebuilding our state, with really democratizing our state, with the understanding that whatever the government does, it does not do it alone, and it does not do it for itself, or just to serve certain stakeholders in society. It started with access to information. It then moved, I would say, to the open data movement and the reuse of data, but ultimately also to the technological sphere, to open standards, interoperability, and so forth. But the other part of this is very practical. We realized in Estonia, for example, that the needs that governments have, but also that civil society and the private sector have, are quite similar. For example, in digitalization, we would all need to authenticate ourselves virtually at some point to provide a service, whether it's a bank, an electricity company, or the Ministry of the Interior. So what we did, we combined our forces, and there are some aspects of digitalization that we really do together, as a government, as the private sector, as every other participant in our society. And this, in a way, has also helped us to develop the habit of working together, sharing what we have done, and reusing what we have done. So ultimately, it comes back to resources.
But I think it is now maybe also related to creating a global good that the internet, and digital technologies more broadly, allow us to create and everybody to use. There are several examples from different countries of solutions or products that have been made available for everybody to use. We have several such products from Estonia that are used globally, also in Ukraine, and that actually help us to save resources. And this is something that we would like to see taking place more. We have some systems created in Estonia, for example, where we are working together with Finland and Iceland; there are some digital solutions that we develop together and maintain together. It allows us to save not only money but, increasingly, human capital; it is very challenging to find data architects or specialists. But surprisingly, we don't see that much of this yet, although we are starting to talk about it. I think the movement on DPI, public goods, and commons is another push towards this. So I am a little bit optimistic that we will perhaps start sharing more of the good things we do.

Moderator – Regine Grienberger:
Thank you, Nele. So now the floor is open for your questions. We have two microphones at both corners and also a hand mic here. So ask your questions. Who's first at the mic? Okay, then it's your question first. No, Milton, let the gentleman ask first, and then it's your turn.

Audience:
It’s your turn. Hi, thank you. My name is Gerald James and I have some questions just specifically on governance and actual action. I think there’s been an attempt to try and get some kind of discussion going on this, but I think there’s just a lot of fear around it. And I think that that’s kind of doing a disservice to those of us in the room who have lost family members or friends due to Internet shutdowns, which I’m not sure if there’s anyone else in here, but I have. And so I do think that there’s not a seriousness that’s taken to account with the idea of an Internet shutdown. And I think that they are often like an ancillary feature to society for a lot of different cultures. And it’s often something that we don’t actually necessarily give to everyone. It’s certain countries, like we mentioned Ukraine, we mentioned Iran, where we have interests from a global Western perspective, those people we drop internet to. We get them satellites and we make sure to take care of that. But I wonder where do we actually see governance action around sanctions on internet shutdowns? If you shut down the internet, why is that not a sanctionable offense? And especially if you are a UN member state or an IGF major stakeholder country, why would we endorse in any capacity your shutdown? And so I think that would be my first question and the bulk of my question. And then going forward, I guess the next part is how do we relate the actual damage that’s done to women’s safety, to family safety, to health infrastructure, and show the whole world that that is something that’s kept in place by the internet and by freedom of access to information. And when it’s turned off, people freely abandon their humanity. That is something that I’m very curious to hear about, and I don’t necessarily think that there’s a lot of parallels being drawn between the actual dangers to people and the way that the internet is used to ensure those dangers.

Moderator – Regine Grienberger:
Roman, would you like to talk about internet shutdowns once again? And I can then add something on sanctions.

Roman Jeet Singh Cheema:
So I just want to acknowledge, firstly, that I think I'm in complete agreement on the fact that internet shutdowns are, period, not acceptable. I think the problem is that we have outlined that international legal or human rights position, but the follow-through and application of that by member states and the international community is incredibly lacking. The purpose for which, for example, the Keep It On Coalition has to exist is not to track the number of internet shutdowns; it is to prevent internet shutdowns from happening at any point in time. Because any internet shutdown is normally not just disproportionate; it is used, as you mentioned, as a cloak for impunity, for violence, for targeting, and very often, in fact, the claims made for internet shutdowns, to prevent violence or for some other state purpose, are exactly the opposite of the real reason: they are actually conducting problematic things. I think the challenge is that, even as we have discussed and recognized that shutdowns are unacceptable, in comments from the Freedom Online Coalition, from the UN Human Rights Council and elsewhere, we don't see enough consequences. And being very honest here, I think we also have to say that even like-minded states, whether in the West or across Western alliances with colleagues in the majority world, need to take a much, much stronger position on this. That means there should be consequences in discussions around digital public goods, and around participation in the internet governance ecosystem, if actors are consistent perpetrators of internet shutdowns, not just in terms of numbers, but in terms of intent and actual effect. So it's definitely a conversation where we're seeing changes. But I'd say, in fact, that sometimes the best prevention of a shutdown has been international media attention and domestic challenges, and not enough, of course, in, say, multilateral processes. And we do sometimes need to see more of that happen.
In fact, I’m worried sometimes in a cyber context where people have said disinformation is a cybersecurity problem, because I can see that legal argument being constructed to say shutdowns are therefore a defense against disinformation, which they aren’t, by the way. And I’m happy to share more data on that. But we do now need to go from defensive conversation on shutdowns to actual action, literally a consistent, I hesitate to use program of action, because that means something very specific, but a clear political action plan from strong states taking positions on this. We say there will be consequences if you consistently shut down the internet, and it’s just not acceptable. And I do want to just note that the tracking of violence and impunity due to shutdowns, the initial steps being made, but there is much more that, in terms of political mandates or resources, could be given, even in the UN system, to the WHO and other actors to do more of that. They’re right now trying to do their best with a very limited set of resources and no actual mandate to track the effect of shutdowns.

Moderator – Regine Grienberger:
I would like to give you an answer also from my government's point of view. Sanctions, of course, are part of the diplomatic toolbox, but they are a rather complex instrument, and certainly not the first thing that we would use. But, for example, the German government is part of both the Freedom Online Coalition that Roman mentioned and the Declaration for the Future of the Internet, and both initiatives, alliances, contain a commitment to push back against internet shutdowns in diplomatic relationships with the countries that use this instrument. And I can assure you that I am not timid about raising this issue with partners and with my interlocutors. But what I wanted to add is that we also have to understand internet shutdowns not as isolated events. We would like to integrate them into our early crisis warning, into the forecasting procedures that we use, and to take the information that we get from organizations like Freedom House, for example, on internet shutdowns as a crisis indicator for a specific country or region, so that we can step in earlier with diplomatic measures, and not only after the social upheaval, the riots, the civil war, or whatever has happened. So this is also something that I am working on, bringing this information to the people who should hear it. But now to here. Oh, I thought I was next. Let him be. Yeah, go ahead. Okay, Milton.

Audience:
So I do want to contribute something very important to the discussion, which I think has been overlooked. And it really relates to this business of why aren’t governments who shut down things sanctioned? And that is, to put it bluntly, there is no such thing as international law. There is international anarchy. Governments are sovereign, and there’s no world government that can impose sanctions upon them, although powerful governments try, such as the United States. And how is this relevant to ICANN and to the so-called public core of the Internet? Well, what we’ve done in order to get global governance of the public core, we have removed that whole governance problem from nation-states, and we’ve put it in the private sector. And that’s extremely important to understand. This is how we have internationalized governance of the domain name system. So David described ICANN as a multi-stakeholder organization, and everybody likes that term, so he’s probably trying to make ICANN look good. But the point is multi-stakeholderism is not the key characteristic of ICANN. The key characteristic is it is a private corporation that has multi-stakeholder representation and participation, and that gives it the ability to do things like say to Ukraine, sorry, we’re not going to remove .ru from the root, or we’re not going to participate in U.S. sanctions on countries that the U.S. doesn’t like, or we’re not going to make the Iranians, you know, succumb to their particular political agenda. So I think it’s really important to understand the role of private sector governance in this kind of protecting the Internet against subordination to military and political ends.

Moderator – Regine Grienberger:
Thank you. Another question, and then I will give you the possibility to answer. Yeah. Thank you.

Audience:
My name is Farzana Badi, Digital Medusa. I wanted to ask the panel: how can we actually approach conflict so that we leave no one behind? We talk a lot about Ukraine, but I have not heard a lot about Afghanistan and the situation that is going on there, and Sudan is in conflict as well. In Afghanistan, the digital transformation project that was funded by the World Bank stopped after the Taliban took over. The Taliban is a sanctioned entity, and if they become the government, there will be a lot of problems providing internet access to them. So how can we come up with a more inclusive approach, so that we do not leave anybody behind? Because people there are struggling and have nowhere to go. Another thing I wanted to mention: in many countries the internet service providers are owned by the military, so if you want to separate this out and perhaps sanction the military later on, it will be a very, very difficult task, so I suggest thinking about that. And one last point, on using sanctions: I think we need to come up with better mechanisms for resolving conflicts, especially when it comes to internet governance. Sanctions just cannot be targeted that precisely on the internet, and I have done some research on that. Especially at the infrastructure layer, we need measures so that these systems can function and

Moderator – Regine Grienberger:
operate. Thank you. Thank you. Let’s take the other two questions, then close and give the panelists an opportunity to answer. Please be brief. Yeah, Dan Arnotto

Audience:
from the National Democratic Institute. I guess this is kind of an extension of the private sector point, looking at the role of social media networks and platforms. I’d be curious to hear from the panel: we’ve developed guidance around crisis management, and a key component, I think, is actually working with different stakeholders, civil society coordinating with, say, a Meta or an X, in terms of that aspect of dealing with critical situations. And, for better or worse, these platforms have become a component of the infrastructure. People are going to the app; they’re not going to a website anymore to report something, or to follow the news, or to communicate with someone about a critical issue. So I think we have to consider the specific role of the platforms, and those specific elements of the private sector, in these considerations. I would be curious on your perspective there. My name is Chantal Joris, I’m with Article 19, the freedom of expression organization, and we’ve also been looking into many of these issues and where the gaps are. One of the things I would be curious to hear your perspectives on concerns international humanitarian law, in terms of a gap in international law. The law still seems much clearer when it comes to attacks involving kinetic force, involving physical force, but many of the methods of warfare now involve cyberspace, the use of spyware, internet shutdowns, and there seems to be much less clarity around whether the rules of distinction apply in the same way, the rules on targeting, or how the proportionality and humanity principles can apply in this context. So perhaps also a question to the representative of the ICRC on the process to further clarify how these rules should be applied, also by digital companies and

Moderator – Regine Grienberger:
states. Thank you. We start with David and then you pick the questions you would

David Huberman:
like to answer. Thank you. Well, I mean, we’re just about out of time, so I really just wanted to thank Dr. Mueller for very eloquently stating what I was trying to get across about how the governance model at ICANN really shines during these times of conflict.

Mauro Vignati:
Yeah, I first want to give a brief answer to the person who talked about other conflicts. From the ICRC perspective, we are working in more than 100 countries and in so many conflicts; we take care of all the conflicts where we are working, not focusing specifically on one or the other. And this is something that we are working on: we opened a delegation for cyberspace in Luxembourg to do research and development exactly in this respect, to understand whether we, as the ICRC, are able to develop technologies and capabilities that can be deployed in conflict territories and provide connectivity to populations that are no longer able to have it, or to safeguard the information that they are transferring, because privacy is very important for the beneficiaries of the ICRC. We started this R&D last year. And about sanctions not being the right method: I’m not saying that I agree or disagree, we are neutral, but among the recommendations that we are delivering next week, one recommendation for tech companies is to understand that they have this major role in managing the infrastructure and the digital solutions on top of it. What we recommend to them is, if they are not under a sanction of a state and want to make their own decision anyway, to consider keeping up the fundamental functioning and maintenance of the network and communications. This is something that we strongly try to advocate with tech companies; we reach out to them, and we would like to stress the fact that we are also working with them in this regard.
Apropos the intervention of the representative of Article 19: I already considered some of the points that you raised. With the recommendations to follow IHL that we are going to publish and re-stress next week, what we ask of states is to respect IHL. IHL already foresees respect for civilian critical infrastructure: not taking it down, keeping it up, and not targeting civilian objects that are fundamental for the operations of the ICRC and for civilians. So we think IHL already covers these aspects, even though there is not a common understanding about data. We think data is protected in terms of international humanitarian law; the positions of some states do not go in this direction. So what we try to do is include data in this respect: if data could be considered by all states as an object to be protected, we would achieve what we would like with respect to shutdowns and other new topics that are not the classical cyber operations for disruption, but also the exfiltration of information that can cause harm to civilians, or access to that kind of information. So the work that we are doing is to try to convince states to recognize

Moderator – Regine Grienberger:
data as an object to be protected. I would just like to respond, and maybe end on a somewhat positive note, though still starting with the negative. What you mentioned is, of course, not only an issue of conflict zones: the divide is increasing, not decreasing, and on 12 percent of the SDGs we have gone backwards, not forward. But regardless of everything, I do believe that digital is actually one of those areas where we can cooperate with almost every country, because it brings so many similar issues but also similar solutions. So from the Estonian side, we cooperate on digital with almost every country, through different partnerships with the UN, the EU, different banks and so forth, including also Afghanistan, where we have some digitalization initiatives going on. So I end with a more positive note. Thank you. So time is out; we have to conclude this session. I thank the people on the panel very much for their contributions. It was very interesting. Thank you also for your questions, some of which we will have to take home to think about more thoroughly. I hope to see you again outside of this room. Thank you.

Audience

Speech speed

172 words per minute

Speech length

1307 words

Speech time

455 secs

David Huberman

Speech speed

165 words per minute

Speech length

1609 words

Speech time

586 secs

Mauro Vignati

Speech speed

160 words per minute

Speech length

1704 words

Speech time

638 secs

Moderator – Regine Grienberger

Speech speed

146 words per minute

Speech length

1852 words

Speech time

759 secs

Nele Leosk

Speech speed

135 words per minute

Speech length

1385 words

Speech time

614 secs

Roman Jeet Singh Cheema

Speech speed

221 words per minute

Speech length

2788 words

Speech time

755 secs

Internet Human Rights: Mapping the UDHR to Cyberspace | IGF 2023 WS #85


Full session report

Michael Kelly

The analysis explores two main topics: the importance of defining digital human rights and the roles of big tech companies ahead of the AI revolution, and the preference for a multistakeholder approach to internet governance over a multilateral approach.

Regarding the first topic, it is argued that as human rights transition from physical to digital spaces, regulation is needed to protect and promote these rights. The AI revolution necessitates a paradigm shift towards creativity-based AI platform regulation, and defining digital human rights and tech companies’ responsibilities is crucial in this evolving landscape.

The analysis emphasises the proactive definition of digital human rights and the roles of big tech companies to establish clear regulations governing the interaction between technology and human rights. This approach is essential to ensure responsible and ethical use of evolving technologies.

Regarding the second topic, the analysis supports a multistakeholder approach to internet governance. This approach involves involving various stakeholders, including governments, tech companies, civil societies, and individuals, in decision-making processes. It aims to ensure diverse perspectives and interests are considered for balanced and inclusive governance.

Concerns are raised about a multilateral approach that may exclude big tech companies and civil societies from decision-making processes, hindering effective internet governance. The analysis also identifies a draft cybercrime treaty proposed by Russia as a potential threat to digital human rights, potentially limiting freedom of expression and privacy online.

In conclusion, the analysis highlights the importance of defining digital human rights and the roles of big tech companies in the AI revolution. It emphasises proactive regulation and creativity-based AI platform regulation. It supports a multistakeholder approach to internet governance and raises concerns about exclusions and threats to digital human rights. This comprehensive analysis provides valuable insights into the challenges and considerations at the intersection of technology, human rights, and internet governance.

Peggy Hicks

The discussion centres around the relevance of human rights in the digital space and the potential impact of government regulations on online activities. It is acknowledged that the human rights that apply offline also extend to the online realm. However, there is ongoing deliberation regarding their practical implementation.

The significance of the human rights framework in the digital space is highlighted due to its universal applicability and legally binding nature. This framework encompasses obligations that the majority of states have committed to. Additionally, a multistakeholder and multilateral approach plays a key role in addressing human rights in the digital realm.

There are concerns about potential government overreach and its negative impact on free speech. Many legislations globally are viewed as hindering human rights rather than protecting them, raising apprehensions about government interference and censorship.

The responsibilities of companies in respecting human rights, particularly within their supply chains, are recognised. Companies are urged to understand and mitigate risks associated with human rights violations in their operations. The UN Guiding Principles on Business and Human Rights outline the role of states in regulating the impact of companies on human rights and establishing accountability and remedy mechanisms.

However, there are also concerns about legislation on content moderation, which is seen as often leading to the suppression of free speech. The push for companies to take down excessive content can result in the repression of opposition or dissent. The Cybercrime Convention is highlighted as an area where potential overreach is observed, which can curtail rights.

The implications of legislative models, such as the German NetzDG statute, in different global contexts are discussed. It is noted that exporting these models without considering the varying contexts can lead to problems and conflicts with human rights principles.

Furthermore, worries are expressed about regulatory approaches in liberal democracies that could potentially compromise human rights and data encryption. Measures such as client-side scanning or undermining encryption are viewed as problematic, as they could have adverse global impacts.

The breadth and severity of punitive measures under the Cybercrime Convention also raise concerns. Instances where individuals have been imprisoned for three to four years for a single tweet prompt questions about the proportionality and fairness of these measures.

While negotiation processes are still ongoing, there is a recognised need for continued dialogue to address concerns and improve the Cybercrime Convention. Multiple states share the concerns expressed by the Office of the United Nations High Commissioner for Human Rights (OHCHR).

In conclusion, the discussion highlights the importance of upholding human rights in the digital space and cautions against excessive government regulation that can impede these rights. The responsibilities of companies in respecting human rights are emphasised, along with concerns about the negative effects of content moderation legislation. The need for careful consideration of context when enacting legislative models and the challenges posed by regulatory approaches in liberal democracies are also brought to light. Ultimately, ongoing negotiations are required to address concerns and enhance the Cybercrime Convention.

David Satola

The analysis explores the importance of upholding equal rights in the digital space, irrespective of an individual’s identity. It stresses the need to establish virtual identity rights prior to the impending AI revolution. The fast-paced progress in AI technology adds a time constraint to defining these rights, making it crucial to formulate and establish them promptly.

One of the key arguments in the analysis emphasizes that while everyone theoretically enjoys the same rights in physical spaces regardless of their identity, the emergence of a new front in the digital space necessitates extending principles of equality and non-discrimination to the virtual realm.

Another aspect highlighted in the analysis concerns the rights of avatars and posthumous social media accounts, raising questions about the legal framework and rights that should govern these virtual identities, particularly in the context of the AI revolution. Addressing these issues in advance becomes essential to safeguard individuals’ virtual identities within a legal framework that ensures equal rights and protections as in the physical world.

Furthermore, the analysis underscores the potential challenges to the universality of rights brought about by the migration of our daily lives into cyberspace. As our activities and interactions increasingly occur online, it becomes crucial to ensure the preservation of fundamental human rights in this digital domain as well.

Additionally, the incorporation of national or regional laws without adequate context may pose a threat to online rights. This observation underscores the importance of crafting carefully designed and globally aligned legal frameworks governing the digital space, to prevent discrepancies and inconsistencies that could undermine the universality of rights.

In conclusion, the analysis emphasizes the need to guarantee equal rights in the digital space, highlighting the significance of defining virtual identity rights in anticipation of the AI revolution. It also discusses the challenges posed by the migration to cyberspace and the potential threats to online rights in the absence of cohesive global legal frameworks. Given the rapid advancements in AI, it is essential to act swiftly in establishing these rights to pave the way for a fair and inclusive digital future.

Joyce Hakmeh

Joyce Hakmeh, Deputy Director of the International Security Programme at Chatham House, moderated a session presenting a report by the American Bar Association’s Internet Governance Task Force. The task force is co-chaired by Michael Kelly, a professor of law at Creighton University specializing in public international law, and David Satola, Lead Counsel for Innovation and Technology at the US Department of Homeland Security.

In the session, the speakers discussed the complexities of internet governance, stressing the need to find the right balance of responsibilities. They highlighted concerning practices of some autocratic countries that suppress dissent and violate human rights. They also drew attention to regulatory approaches proposed by liberal democracies, which raised human rights concerns, such as breaking encryption for legitimate purposes.

Peggy Hicks, Director at the Office of the UN High Commissioner for Human Rights (OHCHR), participated in the session as a discussant. She raised questions about the responsiveness of countries at both national and global levels to the concerns raised by the speakers. Her inquiries covered issues related to autocratic countries and potential human rights implications of regulatory measures proposed by liberal democracies.

The session also touched upon the Cybercrime Convention, with Peggy Hicks noting that the OHCHR has been actively engaged in publishing commentary and providing observations on the content and progress of the convention. Although specific details of the convention’s progress were not explicitly covered, they discussed its complexity and potential for abuse, particularly regarding procedural powers and broad criminalization.

In conclusion, the session emphasized the importance of raising awareness about the complexities of internet governance and the potential for human rights abuses. The discussion shed light on various perspectives and challenges related to this issue, contributing to a better understanding of the topic.

Session transcript

Joyce Hakmeh:
I’m going to turn it over to Joyce Hakmeh, who is the Deputy Director of the International Security Program at Chatham House. Good morning, everyone. May I please ask you to take your seats? We’re about to begin. So good morning again. My name is Joyce Hakmeh. I’m the Deputy Director of the International Security Program at Chatham House, and I have the pleasure of moderating this short but very important session looking at the Internet Governance Task Force. And this is the result of a report done by the American Bar Association’s Internet Governance Task Force that is co-chaired by Michael Kelly, who’s sitting on my left, and David Sattola, who is joining us online. So Michael is Professor of Law at Creighton University in the U.S., where he specializes in public international law, and David is Lead Counsel for Innovation and Technology at the U.S. Department of Homeland Security. And David is the Director of the International Security Program at Chatham House, where he specializes in connectivity and cybercrime prevention strategies. In addition to the two speakers who will be presenting the findings from their research, we also have a discussant with us today, Peggy Hicks, who is the Director of the Office of the U.N. High Commissioner for Refugees. So welcome to all of you. So we have half an hour together. So the way we will do this is we will hear first from Michael and David about the research, which has been just published in Volume 26 of the University of Pennsylvania Journal of Law and Social Change. And then we will hear some reactions from Peggy and perhaps a question to the speakers, and then we’ll end the session. So without further ado, I will now turn to Michael. And just a quick reminder that this session is being recorded and can be downloaded from the IGF website. So over to you, Mike.

Michael Kelly:
Okay. Thank you, Joyce. And if we could bring David Satola up online; he is also presenting with us. Christina, please advance the slide. We want to start with a New Yorker cartoon, because they can mean anything. In 1993, you see the famous cartoon of the dog saying to the other dog: on the Internet, nobody knows you’re a dog. That was 1993. Today, in 2023, this cartoon was updated: remember when, on the Internet, no one knew who you were? That’s a paradigm shift. And we’re going to talk about another paradigm shift in the field of digital human rights that’s coming up with the advent of AI and the revolution that is on the verge of happening. Christina, please advance. Why are we interested in which human rights are manifesting online? Well, because that’s where we spend most of our time. This Pew Research Center poll from 2019 demonstrates that daily over 80% of us are online almost all the time. This used to vary by generation, but as the generations go forward, we see that gap compressing at the far end. You can just look around the room and see who’s on devices of one type or another. So you live your daily life in physical space, but you also live your daily life in digital space. And that’s not always, or even mostly, work space. Human rights manifest on both sides of this equation. The question is, which ones follow us from physical space into digital space? How do they manifest? How are they regulated? How are they defined? And then, of course, the other end of that is, how are they enforced? Christina, next slide, please. The Universal Declaration of Human Rights, as you all know, recently celebrated its 75th birthday, which is a huge passage to mark. It’s made up of both freedoms and rights, and these come about in multiple contexts. Freedoms you’re familiar with: speech, movement, assembly, religion, freedom from discrimination.
Rights you’re also familiar with: equality, privacy, security, work, liberty, democracy, education, property, fair trial and national security. But in digital space, these rights really are rendered meaningless, or less useful, if you don’t have core rights that exist to actually animate them. And by core rights, we mean connectivity and net neutrality. What good are digital human rights to you if you’re not online? Not much. What good are digital human rights to you if you’re not online meaningfully? And that, of course, is the net neutrality discussion. Again, not as much. And that certainly implicates the equality prong right out of the box. The other thing that we look at from a framework perspective is whether the normative equivalency paradigm is the right paradigm to think about the transference of human rights from physical space to digital space. The normative equivalency paradigm is basically moving the rights into a digital format without really altering them much. Other paradigms have been proposed out there. Probably the one that has gained the most attention is actually according human rights to digital entities themselves. But you get into all kinds of definitional issues in that regard, and I’m not sure that we’re there yet. I don’t know that we’re going to be there soon, but it could be on the horizon. Nevertheless, we don’t take a stand in our research on which paradigm is the appropriate one, because our research basically creates a matrix. It is a mapping exercise that hopefully will be useful to policymakers, human rights advocates, and jurists as well. Christina, next slide, please. Exhibit A is the right to be forgotten. In our physical space, this is the right to privacy, and the European Court of Justice confirmed in 2015, to Google’s consternation, that it exists in digital space, at a much higher level than it does in physical space.
This was a case that was brought by an individual in Spain who wanted some content delisted from search results about him, because he had already served a criminal sentence for fraud. Spain, of course, has a very forward-looking social justice mechanism for rehabilitation, and people are supposed to get a fresh start after they emerge from the criminal system. But people kept looking up the one article about this individual that tainted his ability to do that. This was litigated all the way up to the European Court of Justice. The ECJ said yes: Google, you are required to effectuate and moderate this human right on your platform and delist material that is irrelevant, no longer useful, or mistaken. Google’s argument, of course, was, well, this is censorship, and shouldn’t that be the job of a government, not a corporation? The ECJ confirmed: no, actually, Google, it’s your job, because we’re telling you it’s your job. And so Google found itself not only in a moderation role, but an enforcement role throughout the EU; whether that extends throughout Google’s global reach was the subject of later litigation that I don’t have time to go into today. But the internal corporate process that Google had to set up to actually have a company moderating human rights in digital space was one where they had to figure out what is the interplay between humans and algorithms. And we haven’t even inserted AI into the process at this point. But review committees for each EU member state were set up. Now there are over 5 million web page delisting requests since the advent of this process. And the vast number of them implicate content on social media, specifically YouTube, Facebook, and Twitter. So now you’re in a situation where you’ve got a company not only defining, moderating, and enforcing a digital human right on a space it owns in cyberspace. It’s actually moderating other companies’ content, right? Because when Google takes down, delists an item on Twitter, Twitter’s affected.
So now you’ve got cross-pollination happening. And is conversation happening across those platforms and across those corporations? Not at the level that it should be. So this raises, obviously, a larger question on the propriety of corporate enforcement. Which, of course, is by terms of service. You agree when you read every line of those terms of service before you click accept, which I know everyone in this room does, that you will comply with what the corporation thinks about your content that you’re uploading onto its platform. Christina, next slide please. We selected here a half dozen articles from the UDHR. You can look at the University of Pennsylvania Journal of Law and Social Change article for the complete matrix of all 30 articles just to give you a bit of a comparative perspective. Article 1, freedom and equality, manifests usually as connectivity and net neutrality. Codification is in progress in some states, not in others. Regulation is in progress in some states, not in others. In the United States, you see this going back and forth in a bit of a ping-pong ball fashion between administrations. The Obama administration moved forward on this. The Trump administration moved backward on net neutrality. The Biden administration is now moving forward again, not unlike some other areas of law. Article 12, which I just covered, the right to privacy, manifests as the right to be forgotten. It’s codified as an EU regulation. EU member states enforce it per the European Court of Justice, but Google is the actual arm. It’s under court order, though, to do so, so there’s an interplay between the state and the company. Freedom of movement we’ll come back to. There’s an asterisk there. Article 17, the right to property, digitally manifests as property in lots of different ways online. 
If it happens to be intellectual property, well, there’s a treaty framework for that through TRIPS, and so this is regulated by states and enforced by states, but if you look at speech and assembly, Articles 19 and 20, with speech, it’s access to social media platforms, and the regulation is via the tech corps and your terms of service, and it matters whether or not it’s a public corporation or a private corporation. If it’s a public corporation, there’s likely to be a process. If it’s a private corporation, well, Elon Musk decides whether or not you get your speech rights on his platform. With assembly, it’s access to groups, again, via terms of service. The reason we marked Article 13, and there are a couple of other articles, is that there are no positive regulations in this area yet. There’s no positive digital manifestation of this as a right or a freedom yet, frankly, because it’s assumed you have freedom of movement across the Internet. Well, that assumption is incorrect, and what it does is it leaves a gap. My British colleagues are familiar with the term mind the gap. Yeah, mind the gap, because in the absence of this, that leaves room for negative regulation, and authoritarian regimes can wall you off from certain areas of the Internet and restrict your freedom of movement in digital space. So we have to look at these gaps as well as where regulations are positively manifesting. Next slide, please, Christina. Okay, here are your corporate protectors of Internet human rights, and I’m just going to kind of pause this here for a minute for you to take a look at these guys. Google, of course, we saw resisted its role as an enforcer and definer of digital human rights, but it is doing so, and I think it’s doing so effectively under court order. But I think, you know, the Microsoft approach, where the company actually embraces something about human rights. You all remember a few years ago Brad Smith calling for a digital Geneva Convention. 
That voluntary embrace of their new role, policing cyberspace, I think is where we need to go if we’re going to get effectively at the 20 to 25% of human rights listed in the UDHR that have corporate fingerprints on them. Next slide, please. Maybe we trust those guys more than we trust this guy. The broader context, if we back up a few paces, is the back and forth between multistakeholderism versus multilateralism as the effective paradigm for Internet governance, and we’re all here in a multistakeholder environment. Authoritarian regimes want that replaced with the multilateral approach, where only states are sitting at the table, not companies, not civil society. I’m civil society. I’m with the American Bar Association, although I don’t represent their views at this conference, and I would not have a seat at this table if the authoritarian multilateralists had their way. Why should big tech care? Because they will lose their seat at the table. The conversion of them from objects to subjects of international cyber law will have a profound impact on them and on their bottom line. Russia’s draft cybercrime treaty, which some of you in the room listened to discussion of for an hour and a half prior to this session, was criticized as the beginning of the end for multistakeholderism. It wasn’t really so much about cybercrime as about possibly repressing human rights. Whether it undermines the Budapest Convention, and whether or not it could suppress digital human rights, is something the valiant people working on this through the UN process are discussing in New York City and Vienna every few months, and although the prior panel struck an optimistic note on that, I’m not sure I completely share it.
And so this opens up all kinds of other issues, the broader issues, and that’s why now is the time to crystallize what these digital human rights are, and secondly, what big tech’s role is in defining, regulating, and enforcing them ahead of the coming AI revolution, because that will change everything. There will be a paradigm shift when AI actually matures to the point that creativity-based AI platform regulation replaces logic-based algorithmic platform regulation. Let me say that again. When creativity-based AI platform regulation replaces logic-based algorithmic platform regulation, that’s the sea change, and we have to get ahead of that. We have to get ahead of that for defining digital human rights, and we have to get ahead of that for defining the roles of companies in this process and convince them that it’s in their interest to do so. Just as an example, a policing example, AI will be a more effective cop for companies policing their platform, because it’s much more difficult to get around. You can get around a logic-based algorithm, but the bad side of that, if you just flip it around, is it also can be a more effective tool for authoritarian regimes to repress your digital human rights. Just like everything else, this is a double-edged sword. I should pause and let David chime in, Joyce, I think, if there’s … Yeah.

David Satola:
Thank you, Mike. Thank you, Joyce, and thank you, Peggy. I think in the interest of time, we should probably move ahead to the commentary and hopefully leave some time for questions at the end. The only remark I would underscore that Mike already made is the multi-stakeholderization of the enforcement of human rights that we’ve seen. It’s on our slide seven, where we see that actually the human rights are being examined and enforced by private actors. This was something that I don’t think anyone anticipated back when the Internet Governance Forum started. With that, I’ll turn it back to you in Kyoto.

Joyce Hakmeh:
Thank you. Thank you very much, David and Michael. Now, we turn to you, Peggy, since I messed up your introduction. Why don’t you introduce yourself and share your views on what’s been said? Thank you.

Peggy Hicks:
No problem. Thanks so much. Yes, I’m Peggy Hicks. I work at the UN Human Rights Office in Geneva, where we’re focusing on many of these issues, and I’m very grateful to Michael and David for taking this look at the digitization of human rights in their scholarly work. It’s a theme that we talk about quite a bit in Geneva. It’s been many years now since the Human Rights Council first said that the human rights that apply offline apply online. What that means in practice, of course, has yet to be worked out. It’s really interesting to look at this mapping approach that goes through the different articles and really looks at some manifestations of how that has developed in real terms. I want to emphasize part of the reason we talk about the human rights framework as being so relevant in the digital space, because you focused, Michael, on the battles between a multistakeholder and multilateral approach. Part of what we think is crucial about the framework of human rights is its universality and the fact that it involves legally binding obligations that the vast majority of states have subscribed to already. We avoid using it at our peril. It’s part of what can help us work through some of the challenges that are presented by the analysis that we’ve heard. It also already includes accountability mechanisms. One piece I really want to emphasize, which I think is quite relevant to the research here, is frameworks like the UN Guiding Principles on Business and Human Rights, which really link up company responsibilities to the legal obligations that states have. Under the guiding principles, you have three pillars. One looks at how states have a responsibility to regulate how companies impact human rights in their actions, and then, of course, the chapter that’s best known goes through what companies need to do to better respect human rights, including understanding and mitigating risks within their supply chains in different ways. 
And then the third pillar relates to accountability and remedy on those sites. And one of the things we’ve been really working on within our office is there’s been a lot of work done on how those principles apply in industries like extractive industries or the apparel industry. What does it mean in the context of the digital space, software applications that are mass marketed and used by millions of people globally? What does a tech company have to do with how that software might be misused at some point in time? So we’ve been working with a community of practice of a number of the largest tech companies to really work through some of those issues and figure out how we can better have them take on some of these responsibilities that are outlined in this report more effectively. But I think it also goes to this tension that the mapping shows of, you know, how much responsibility do we want at the corporate level, and what do we want states to do to better tell companies how they ought to handle things? So a good example is the terms of service that you referred to. It is the case that companies set those terms of service, but there are things that they are legally required to do within them in terms of unlawful content that might be on their platforms. So you know, how far do we take those relationships and what are we looking for from governments in terms of content regulation I think is a big question. Before I close, I have to say that one of our big concerns is that governments go too far in that regard, and that’s what we’ve seen playing out when we look at content moderation related legislation globally. The vast majority of legislation that’s been adopted across the globe tends to overreach and do more to undermine human rights than to protect them. So it allows and almost pushes companies to take down too much speech because they want to repress opposition or dissent or free speech in various ways. 
So we have to be very careful about what we ask governments to do and what we’re expecting of companies. But in both places, we have a lot of work to do to make sure that the digitization process goes forward as we’d like to see. And I agree, the Cybercrime Convention is an interesting area in which some of these issues are playing out. Some of the potential overbreadth we see in the work being done under the Cybercrime Convention is similar to what we’ve seen in other efforts to legislate online speech, in areas like counterterrorism, where sometimes those statutes as well are used in an overbroad way to repress rights. So those are just some initial comments, and thanks again for the efforts.

Joyce Hakmeh:
Thank you very much. We have five minutes, maybe a little bit more, for a quick discussion. But maybe a follow-up question to you, Peggy, and maybe, Mike, if you want to respond, and David as well. First of all, you both outlined the complexity of this issue and the great importance of getting the balance right in terms of who the onus should fall on and how you get to a place where the responsibilities are clear. And in the context of what you described, you talked about some of the practices that some autocratic countries are following in terms of suppressing dissent and not respecting human rights. But we also see some regulatory approaches and initiatives coming out of liberal democracies suggesting some human-rights-concerning approaches, from breaking encryption for ostensibly legitimate purposes and so forth. So how concerned are you about that, and how responsive do you find these countries to the concerns that you raise with them, whether in a national context or more globally?

Peggy Hicks:
I think it’s a really good question and one that not only our organization, but I think many of the civil society organizations that are here today are really looking at that. I think part of what tends to happen is that governments naturally and understandably rightfully want to adopt legislation that works in their context. But the reality is that those models are then exported globally in contexts that can be very different, where there is not the same infrastructure to support and ensure that those laws are interpreted and used in a human rights-respecting way. The example that’s always given is the German NetzDG statute, which was replicated in a variety of ways in a variety of places. But we worry about that now, and the point that you made on legislation that will potentially allow for client-side scanning, for example, which we see as incredibly problematic, given the importance of end-to-end encryption, is a really good example where we understand the concerns that are leading to that type of legislation, but feel very strongly that adoption of measures in that direction could have really deleterious impacts globally and could lead to a much broader problem with the limitations or undermining of encryption.

Joyce Hakmeh:
Thank you. Mike, if you can answer this question while also addressing what could be done in order to sort of understand and avoid those unintended consequences.

Michael Kelly:
Right. Well, at base—and this, of course, is hand in glove with the policy approach from the United Nations—by virtue of the fact that you’re a homo sapiens, you get the same bag of rights. It doesn’t matter what your race is, your gender, your religion, or whatever, and everyone is theoretically bound by that in physical space. Now, we know that’s not always true, and that always doesn’t play out, but what about in digital space? Does everyone get the same bag of rights by virtue of the fact that you’re a digital homo sapiens? Well, what is a digital homo sapiens? Is your avatar, Peggy, going to get the same rights that you do, or Joyce, or does your Facebook account go on after you die, and does it continue to have the rights that you enjoyed while you were alive? We’re in a new frontier here, and it is a huge balancing question, but it also is a definitional question. Where are we? And that’s why the definitions need to be nailed down before the AI revolution comes, and it’s coming very quickly. So there’s a temporal component to this that we really have to be mindful of.

Joyce Hakmeh:
Thank you. David? David, are you still online?

David Satola:
Yes, I am. If I could just add one very quick comment, and it hopefully relates back to something that Peggy mentioned about the universality of rights. One of the things that intrigued Mike and me when we went into this research was: does the migration of our daily lives into cyberspace in any way challenge that very basic concept of the universality of rights? And while we recognize that context matters, and, again, to Peggy’s point about the Brussels effect and other national or regional laws that have been exported and incorporated out of context, does that also pose a threat to the universality of rights online? We don’t have the answers to those questions, but I think they’re worth thinking about. Thank you.

Joyce Hakmeh:
Thank you, David. And maybe one final question to you, Peggy. You mentioned the Cybercrime Convention, and the OHCHR has been quite active on that front, you know, publishing sort of commentary and making some observations on the content and how the convention is proceeding. Can you share with us sort of your latest view on where the process is? I don’t know if you yourself covered that specifically or not, but maybe sort of share with us what you think about where we are at the moment and what sort of, you know, where do you think we might be heading?

Peggy Hicks:
Oh, thanks. It’s a good question, but I smiled only because I don’t think it’s a one-minute question. It’s a bit more complex than that. We do still have some issues. We’ve been consistently raising concerns over how the convention might have the same sort of overbreadth problem. The fact that the types of offenses included are those punishable by three to four years, for example, is something that raises questions for us, because we see people being put in prison for three to four years for a single tweet. So I think there are some concerns that are still there in terms of criminalization, in terms of the breadth of investigative powers, and in terms of the effective safeguards that need to be there. But, of course, as has been said, the process is still ongoing, and there’ll be lots of opportunity for those who share those concerns with us, including a number of states, to put them on the table. And hopefully the negotiation process will continue in a way that moves the convention in the right direction.

Joyce Hakmeh:
Brilliant. Thank you. And I guess the silver lining from all of that is that the process has raised awareness about the complexity of the issues and the potential for abuse when it comes to not just the procedural powers, but also a very broad scope of criminalization. So I think with that, we will end the session. It was a short session, but very important. Thank you, Michael, David, and Peggy for joining us today. And thank you to everyone who attended. And, yeah, we’ll see you later. Thank you.

David Satola
Speech speed: 170 words per minute
Speech length: 476 words
Speech time: 168 secs

Joyce Hakmeh
Speech speed: 181 words per minute
Speech length: 861 words
Speech time: 286 secs

Michael Kelly
Speech speed: 169 words per minute
Speech length: 2401 words
Speech time: 850 secs

Peggy Hicks
Speech speed: 177 words per minute
Speech length: 1317 words
Speech time: 446 secs

Internet Fragmentation: Perspectives & Collaboration | IGF 2023 WS #405

Full session report

Elena Plexida

The internet is currently not fragmented at a technical level, thanks to the presence of unique identifiers such as domain names, IP addresses, and Internet protocols. These identifiers play a crucial role in keeping the internet connected and functioning smoothly. The Internet Corporation for Assigned Names and Numbers (ICANN) is an organization dedicated to ensuring the stable and secure operation of these identifiers. They work in cooperation with other organizations like Regional Internet Registries (RIRs) and the Internet Engineering Task Force (IETF) to maintain the integrity of the internet.
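The point about unique identifiers can be illustrated with a minimal sketch using Python's standard `ipaddress` module. The addresses below are arbitrary examples chosen for illustration, not drawn from the session: the sketch simply shows the distinction between the single, globally coordinated public address space that keeps the internet one network, and private ranges that are only meaningful inside a local network.

```python
import ipaddress

# The internet stays a single network because its identifier spaces are
# globally coordinated: every public IP address is unique and routable
# from anywhere, while private (RFC 1918) ranges are only local.
public = ipaddress.ip_address("93.184.216.34")  # an arbitrary public address
private = ipaddress.ip_address("10.0.0.1")      # from a private range

print(public.is_global)    # True: part of the single, shared address space
print(private.is_private)  # True: reusable inside any local network
print(private.is_global)   # False: not routable on the global internet
```

A second, politically created root or address registry would break exactly this property: the same identifier could then refer to different resources depending on which "internet" a user is connected to.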

However, concerns have been raised about the potential for internet fragmentation due to political decisions. It is feared that politicians could decide to create alternate namespaces or a second root of the internet, which would undermine its uniqueness and coherence. The increasing politicisation of the world is seen as a factor that could influence the unique identifiers of the internet. If political interests begin to shape the internet’s architecture, it could lead to fragmentation and potentially hinder global connectivity.

It is important to distinguish content limitations from internet fragmentation. Content limitations, such as parental controls or restrictions on certain types of content, are related to user experience rather than the actual fragmentation of the internet. Referring to content-level limitations as internet fragmentation can be misleading and potentially harmful. Such a misinterpretation could create a self-fulfilling prophecy of a truly fragmented internet.

The preservation of what is needed in the internet is considered crucial. Mentions of data localisation, islands of secluded content, and shutdowns are seen as threatening to internet freedom. These issues highlight the need to protect the openness and accessibility of the internet. Adverse effects can also occur at a technical level due to legislation aimed at addressing content issues. While the technical community acknowledges the necessity of legislation, it is important to ensure that unintended consequences do not disrupt the basic functioning of the internet.

In recent years, there has been a trend towards attempts to apply sovereignty over the internet. This raises concerns among those who advocate for a global and open internet. The application of sanctions over IP addresses is used as an example to illustrate the potential negative impact of applying sovereignty over something inherently global like the internet. Maintaining the global nature of the internet is seen as essential to foster innovation, enable collaboration, and promote peace and justice.

In conclusion, while the internet is currently not fragmented at a technical level, there are concerns about potential fragmentation caused by political decisions or misunderstandings about content limitations. The preservation of what is necessary in the internet and the resistance against the application of sovereignty over its inherently global nature are key issues to consider in order to maintain a stable, secure, and open internet for everyone.

Javier Pallero

The main purpose of the Internet is to connect people and facilitate global communication, as well as providing unrestricted access to information across borders. It serves as a platform that allows individuals worldwide to interact and exchange ideas, irrespective of their geographical location. This positive aspect of the Internet promotes connectivity and enables access to knowledge.

However, the perception of Internet fragmentation is not solely influenced by technical factors but also by policy decisions and business practices. These factors contribute to the fragmentation and create barriers to the free and open exchange of information. Government policies and business practices shape the functioning of the Internet, often resulting in restrictions and limitations on access.

While these factors are significant in understanding the overall landscape of Internet fragmentation, they may not fully define it from a technical perspective. It is important to consider different aspects of Internet governance, such as protocols and policy levels, which have their own areas of discussion and involve various stakeholders. However, there should be more attention and engagement specifically in the technical aspects of internet governance to mitigate the issues related to fragmentation and ensure a more cohesive and inclusive Internet experience.

One of the main threats to internet fragmentation is posed by governments. Governments sometimes seek to control the Internet and have the power to limit access or manipulate content. The multi-stakeholder model, which involves the participation of various stakeholders, including governments, businesses, and civil society, can be an effective approach to counter these governmental threats. By revitalising this model and denouncing government advancements in controlling the internet, valuable contributions can be made towards maintaining an open and inclusive internet governance structure.

Furthermore, informing users and promoting their participation play a crucial role in putting pressure on governments to uphold internet freedom. When users are aware of their rights and the potential negative impacts of government control, they can actively voice their concerns and strive to protect their online freedoms. By empowering users with information and encouraging their active participation, the internet community can collectively work towards preserving an open and accessible internet.

In conclusion, while facilitating global communication and access to information remains the primary purpose of the Internet, the challenges of internet fragmentation must be addressed. This requires considering not only technical factors but also policy decisions and business practices. By focusing on the technical aspects of internet governance and reviving the multi-stakeholder model, as well as promoting user awareness and participation, progress can be made towards a more unified and inclusive internet structure.

Sheetal Kumar

During the discussions regarding the challenge of preserving the core values and principles of the internet while allowing for its adaptation and evolution, it was noted that both intended and unintended actions have affected internet properties and user autonomy. Government regulations and corporate decisions have played a significant role in shaping the internet landscape. The growth of internet shutdowns has particularly impacted the principle of connectivity, causing concerns about maintaining a free and open online environment.

Sheetal Kumar, a strong advocate for preserving and evolving the internet, emphasized the importance of compliance with the original vision and user experience. To address the issue of internet fragmentation, the Policy Network on Internet Fragmentation was established. This network aims to navigate the future of the internet by developing a comprehensive framework that covers the technical layer, user experiences, and governance of the internet. One of the network’s key recommendations is the need for coordination and communication among non-inclusive bodies to tackle the challenges posed by internet fragmentation.

The speakers agreed that we are currently on the wrong path and moving away from the original concept of the internet. This disruption to the internet has raised concerns about its future, emphasizing the need for collective understanding and implementation of recommendations to improve the current state. Recommendations from the Internet Governance Forum (IGF) and the multi-stakeholder policy network have been put forward to address these concerns. Implementing these recommendations could not only ensure the preservation of the core values of the internet but also contribute to achieving Sustainable Development Goals 9 (Industry, Innovation, and Infrastructure) and 16 (Peace, Justice, and Strong Institutions).

In conclusion, the discussion highlighted the challenge of preserving the fundamental principles of the internet while adapting to its evolving nature. It is crucial to address internet fragmentation and promote coordination and communication among non-inclusive bodies to ensure the internet remains a free and open space. By collectively implementing recommendations, we can work towards improving the current state and realizing the original vision and user experience of the internet.

Moderator – Avri Doria

Internet fragmentation is a contentious and intricate topic that invites diverse opinions and definitions. It is an important subject to understand, particularly with the fast-paced advancements in technology and the increasing interconnectedness of the world. However, experts and scholars continue to study this matter to gain a more comprehensive understanding of it.

Avri Doria, an advocate for open participation, brings attention to the significance of involving all individuals in the discussion on Internet fragmentation. Doria emphasizes that fostering dialogue and collaboration can lead to a better comprehension of this phenomenon. This inclusive approach aims to generate diverse perspectives and broaden the scope of analysis.

Internet fragmentation refers to the division or separation of the internet, resulting in distinct networks or restricted access in different regions or countries. Several factors contribute to this fragmentation, including government censorship, technological barriers, and varying policies and regulations across jurisdictions. The consequences of Internet fragmentation can range from limitations on freedom of expression and access to information to hindrances in international cooperation and economic development.

The ongoing study of Internet fragmentation signifies the collective efforts towards understanding its implications and finding solutions to mitigate its negative effects. Researchers and policymakers are exploring ways to address the challenges posed by fragmentation while preserving the open nature of the internet. This requires a multi-stakeholder approach involving government bodies, civil society organizations, and private sector entities.

In conclusion, Internet fragmentation remains a topic of great importance and interest due to its wide-ranging implications. The existence of divergent definitions and opinions highlights the complexity of the issue and the need for further research. Avri Doria’s emphasis on inclusive participation provides a valuable framework for fostering dialogue and collaboration, ultimately enhancing our understanding of Internet fragmentation. By working together, we can strive towards a more open and globally connected internet that benefits societies worldwide.

Umai

Discussions surrounding internet fragmentation have primarily focused on the technical layers of the internet. However, there has been a noticeable oversight of the social layer, which encompasses network engineers and their informal communities. This neglect is concerning because it fails to recognize the vital role that these individuals play in the maintenance and sustainability of internet networks.

The social layer of the internet is made up of network engineers who are responsible for the day-to-day operations and upkeep of the internet infrastructure. They work diligently to ensure the optimal functioning of networks, addressing issues, and implementing necessary updates and enhancements. Their efforts are often supported by informal communities where knowledge sharing and collaboration take place.

It is worth noting that discussions on internet fragmentation often overlook the social layer. This is particularly significant given the ageing community of network engineers, sparking concerns regarding the future capabilities of this workforce. As these engineers retire, it may become increasingly challenging to find skilled replacements with the expertise required to effectively maintain internet networks.

To address this issue, further research is required to explore the capabilities and potential of network engineer communities in maintaining internet networks. This research should not only focus on technical aspects but also consider broader factors such as industry, innovation, and infrastructure. Additionally, considering the role of education in nurturing skilled professionals, the research should emphasize the importance of quality education in fostering a new generation of network engineers.

In conclusion, discussions on internet fragmentation need to widen their scope to include the social layer, comprising network engineers and their informal communities. The ageing workforce of network engineers raises concerns about the future maintenance of internet networks, highlighting the need for further research in this area. By examining the capabilities of these communities and addressing the challenges posed by an ageing workforce, we can ensure a sustainable and resilient internet infrastructure for the future.

Dhruv Dhody

Internet fragmentation is an important issue that has attracted attention from experts and policymakers. The main concern is its impact on interoperability, which refers to the ability of different systems and devices to effectively communicate and work together. One argument suggests that not all forms of fragmentation pose the same threat, and therefore, a more nuanced approach should be taken to address the issue. It emphasizes the need to differentiate between various types of fragmentation before finding solutions.

While the negative consequences of fragmentation have been widely discussed, it is important to consider the positive aspects as well. Certain forms of fragmentation can enhance privacy, security, and local autonomy. Understanding this dual nature of fragmentation is vital for a comprehensive analysis of the issue.

However, there is an opposing viewpoint that argues against grouping together different forms of internet fragmentation. This perspective suggests that examining each form individually would provide a better understanding of their unique implications. Although supporting facts are not provided, this argument implies the importance of considering the specific characteristics of each type of fragmentation.

In conclusion, internet fragmentation is a complex issue that requires careful consideration. While interoperability is a major concern, it is crucial to recognize the varied nature and potential consequences of different forms of fragmentation. By taking a more nuanced and targeted approach, policymakers and stakeholders can effectively address this multifaceted challenge.

Michael Rothschild

In the internet’s early days, around 1983, it was composed of separate network fragments in various countries. There was no cohesive internet as we know it today; instead, there were isolated segments of services and networks. To overcome this fragmentation, gateways were introduced to interconnect these fragments.

However, using gateways to connect the different networks had its drawbacks. It became clear that gateways could be inefficient, posing challenges to the smooth flow of information and communication. Additionally, there were concerns that gateways could potentially filter or restrict certain data or content.

Furthermore, it is important to note that the use of gateways carries implications for several Sustainable Development Goals (SDGs). Specifically, SDG 16, which focuses on Peace, Justice, and Strong Institutions, is relevant in this context. The inherent risks associated with filtering and potential restrictions through gateways could hinder the principles of justice, transparency, and freedom of expression.

Despite these challenges, there is optimism that technological advancements will provide solutions to address internet fragmentation. It is believed that future technical innovations will overcome the limitations of gateways, allowing for more efficient interconnections between networks and reducing the risks of filtering or restrictions.

In conclusion, the early stages of the internet consisted of fragmented networks that required gateways for interconnection. However, gateways proved to be inefficient and carried the risk of filtering. Nonetheless, there is hope that technical solutions will emerge to solve the problem of internet fragmentation and pave the way for a more interconnected and accessible internet.

Aha G. Embo

Internet fragmentation refers to any factor that impedes the free flow of the internet, and it can occur at various levels, including the technical, governmental and business levels. One concern of legislators is avoiding ambiguous legislation that may hinder innovation; they strive not to stifle innovation with legislation of any kind.

Efforts are ongoing to streamline internet governance legislation globally. The objective is to develop a cohesive framework that ensures a safe, secure and integrated connectivity across different jurisdictions. Fragmentation is viewed as an impediment to this objective, as it disrupts the seamless flow of information and inhibits the integration of different parts of the internet.

On the other hand, internet shutdowns are seen as a form of internet disruption, where specific applications or services are intentionally halted. This practice is perceived as a roadblock to the free flow and integrated connectivity of the internet. It restricts access to information and inhibits communication and collaboration on a wider scale.

The conclusion drawn from the analysis is that maintaining an open, interconnected internet is crucial for enabling innovation and fostering global communication and collaboration. Fragmentation and internet disruptions pose threats to the free flow of information and the integration of the internet. Therefore, efforts are being made to address these challenges and establish a safe, secure and integrated internet connectivity worldwide.

It is worth noting that while the sentiment of the sources is generally neutral or negative towards internet fragmentation and shutdowns, there is a positive sentiment towards the importance of ensuring a safe, secure and integrated connectivity in the context of the internet. This highlights the need to find a balance between regulation and innovation to achieve the desired outcomes.

Nishigata Nobu

In his discussions on internet fragmentation, Nishigata Nobu acknowledges the challenges that this issue presents. He emphasises the problems that exist within the current internet system, particularly with regards to user interface type fragmentation, such as echo chambers and filter bubbles. These issues are detrimental to the online experience as they limit exposure to diverse opinions and information.

Furthermore, Nishigata highlights the importance of government intervention in addressing internet fragmentation. He reveals that the Japanese Government is actively following up on internet fragmentation issues, underscoring their recognition of the significance of this problem. Nishigata also points out that government intervention is often necessary to ensure public safety, economic development, and national security.

In advocating for government accountability, Nishigata stresses that governments should take responsibility for their actions in relation to internet usage. He insists that governments need to be held accountable for upholding open and free internet principles, which are essential for promoting peace, justice, and strong institutions. Nishigata supports the Declaration for the Future of the Internet, launched by the United States and partner governments, as a means to guide and govern internet usage.

Additionally, Nishigata recognizes the limitations of government intervention alone in solving internet-related issues. He believes that collaboration between the government and technical experts is crucial in finding solutions. Nishigata advocates for partnerships and emphasises that the collaboration between the two parties will yield better outcomes than government intervention alone. He acknowledges that technical expertise is necessary to address complex internet challenges effectively.

To conclude, Nishigata Nobu’s discussions highlight the challenge of internet fragmentation and the problems within the current internet system. He acknowledges the efforts of the Japanese Government in addressing this issue, supports the Declaration for the Future of the Internet, and advocates for government accountability in internet usage. Nishigata emphasises collaboration between the government and technical sector as a key approach in finding solutions to internet-related problems.

Jennifer Chung

Internet fragmentation can occur at different levels, including technical, user experience, and policy. This phenomenon has implications for the development and accessibility of the internet. At the technical level, fragmentation refers to the division of the internet into separate networks or platforms with limited interoperability. This can result from differences in protocols, standards, or infrastructure. User experience fragmentation, on the other hand, refers to the divergence in user interfaces, applications, and available content, leading to an uneven online experience.

One argument suggests that internationalized domain names (IDNs) may contribute to internet fragmentation. While IDNs allow users to utilize native scripts and characters, promoting inclusivity, there is a risk of fragmentation if their implementation is not effectively managed. Ensuring compatibility and consistency across different networks and platforms is crucial for the integration of IDNs.
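The mechanism that keeps IDNs anchored to the single global namespace can be seen concretely: every native-script label is converted to an ASCII-compatible ("xn--") form before it reaches the DNS, so resolvers worldwide still query one shared root. A minimal sketch using Python's built-in `idna` codec (which implements the older IDNA 2003 rules; production registries follow IDNA 2008, available via the third-party `idna` package); the domain below is illustrative, not a real registration:

```python
# Sketch: how internationalized domain names (IDNs) map onto the
# single ASCII DNS namespace via ASCII-Compatible Encoding (ACE).
# "bücher.example" is a hypothetical domain used for illustration.

unicode_name = "bücher.example"

# Encode each label into its ASCII-compatible ("xn--") form,
# which is what actually travels inside DNS queries.
ace_name = unicode_name.encode("idna")
print(ace_name)  # b'xn--bcher-kva.example'

# The mapping is reversible, so user interfaces can display the
# native script while the DNS itself sees only ASCII labels.
round_trip = ace_name.decode("idna")
print(round_trip)  # bücher.example
```

Because the conversion is deterministic and happens on the client side, a well-managed IDN never creates a second namespace; it is simply an alternate presentation of the same ASCII entry in the one global root.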

Policy decisions also play a role in internet fragmentation. For example, government-imposed internet shutdowns or restrictions on access to certain websites or services can disrupt the interconnected nature of the internet, negatively impacting its functioning.

Mitigating the risks of internet fragmentation requires dialogue and coordination among stakeholders. Engaging in conversations and collaboration can help address the challenges. Furthermore, it is important to avoid silos in discussions by incorporating diverse perspectives and actors to ensure a comprehensive and inclusive approach.

In summary, internet fragmentation can occur at different levels, including technical, user experience, and policy. The implementation of internationalized domain names and policy decisions, such as internet shutdowns, can contribute to this phenomenon. To overcome these challenges, dialogue, coordination, and inclusive approaches are essential to ensure a connected and accessible internet for all.

Julius Endel

The analysis reveals a prevailing negative sentiment towards the current system of running the internet and providing data. Critics argue that while the costs for running the internet and providing data are socialised, the profits generated from these operations are largely privatised and benefit only a select few companies. This has raised concerns about the fairness and equity of the current system.

Furthermore, the privatisation and socialisation effect of the internet and data provision has led to a form of fragmentation. This fragmentation is seen as a consequence of the unequal distribution of profits among a handful of companies, which further exacerbates existing inequalities in the industry. The negative sentiment towards this system stems from the belief that the benefits and advantages of the internet and data provision should be accessible to a wider range of stakeholders, rather than being concentrated in the hands of a few powerful entities.

Another issue highlighted in the analysis is the practice of data scraping. It is argued that companies are actively collecting and utilising user data to their advantage while reaping significant profits, while the public does the majority of the work in generating and providing this data. This raises questions about the fairness and ethics of such practices, as well as the need to address the disparities in profit distribution within the industry.

Overall, these issues are seen as contributing to inequalities in the industry and a lack of justice in the current system. The analysis suggests that efforts need to be made to address the socialisation of costs and the privatisation of profits, as well as reevaluate the practices of data scraping to promote a more equitable and fair system.

An interesting observation from the analysis is the connection between these issues and the Sustainable Development Goals (SDGs), specifically SDG 9 (Industry, Innovation and Infrastructure) and SDG 10 (Reduced Inequalities). It suggests that the current system of running the internet and providing data is not aligned with these goals, and calls for a more inclusive approach that takes into account the wider societal impact and benefits.

In conclusion, the analysis highlights a negative sentiment towards the current system of running the internet and providing data, with concerns surrounding the socialisation of costs, privatisation of profits, fragmentation, and data scraping. It underscores the need for a more equitable and fair system, considering the wider societal impact and goals of reducing inequalities and promoting sustainable industry practices.

Robin Green

In a positive stance, Robin Green argues against the belief that content distribution networks (CDNs) contribute to internet fragmentation. She asserts that CDNs effectively connect people to services globally and ensure the resilience and fast access of internet services. Green’s argument is supported by the notion that CDNs play a crucial role in creating a robust and interconnected internet infrastructure.

On the other hand, Green defines internet fragmentation as a negative phenomenon that occurs when the user experience becomes segmented and prevents individuals from exercising their fundamental rights. This definition highlights the importance of a unified and inclusive internet experience, where all users can freely access and navigate digital content without facing barriers or restrictions.

Furthermore, Green addresses the regulatory implications associated with internet fragmentation. She identifies data localisation requirements, restrictions on cross-border data flows, limits on encryption, content takedowns, and geoblocking as potential components of fragmentation. According to Green, these regulatory measures not only impinge on the user experience but also hinder peace, justice, and strong institutions, aligning with SDG 16.

Green’s observation is important as it emphasises the need to address both technical and user experience aspects of internet fragmentation. She suggests that regardless of the nature of the restrictions, be they technical or user experience-oriented, they should be examined and resolved to promote a more unified and inclusive internet.

In conclusion, Robin Green offers a positive stance on the role of content distribution networks and their impact on internet fragmentation. She argues that CDNs contribute to global connectivity and internet resilience. Additionally, Green highlights the negative effects of internet fragmentation on the user experience and the infringement of fundamental rights. She advocates for addressing regulatory measures associated with fragmentation to achieve a holistic solution. By considering both technical and user experience aspects of internet fragmentation, a more inclusive and connected online environment can be realised.

Jorge Cancios

A recent analysis explores the impact of geopolitical tensions on the unity of the internet. It reveals that, as global tensions intensify, the focus has shifted from digital interdependence to fragmentation. This shift is a response to the charged atmosphere of the current global landscape.

The analysis stresses the importance of trust and network effects in achieving internet interoperability. It explains that the internet consists of numerous networks that rely on trust to stay connected. However, increasing geopolitical pressures may undermine this trust and erode the network effects, potentially leading to fragmentation.

The analysis also highlights that the maintenance of internet unity depends on binary decisions made by various stakeholders, including individuals, networks, companies, and governments. These decisions can either promote unity or contribute to fragmentation. Therefore, the report underscores the significance of thoughtful decision-making at different levels to foster unity and prevent the erosion of the internet structure.

Overall, the analysis advocates for careful and well-considered decisions by all parties to promote internet unity and prevent fragmentation. It suggests that authorities should invest in the right direction to hold the internet together, rather than contributing to its erosion. By doing so, the internet can continue to serve as a platform for collaboration, innovation, and progress.

In conclusion, the analysis sheds light on the impact of geopolitical tensions on the unity of the internet. It highlights the shift from digital interdependence to fragmentation and emphasizes the importance of trust and network effects for internet interoperability. The report underscores the role of binary decisions made by stakeholders in either promoting unity or contributing to fragmentation. Ultimately, it calls for careful decision-making to preserve internet unity and prevent erosion.

Ponsley

The discussion centred around the concept of internet fragmentation, highlighting that it is not simply a technical issue, but also encompasses other factors. Speakers pointed out that internet fragmentation is not only related to technical disruptions, but also to human rights abuses, harmful internet use, and political aspects. This means that it goes beyond connectivity problems and involves potential violations of digital rights and freedoms online.

Additionally, it was argued that specific political situations can contribute to internet fragmentation. Ponsley provided examples of how internet services can be intentionally disrupted or shut down for political gain or to create unrest. This demonstrates the link between political motivations and the fragmentation of the internet. Manipulation of the political landscape using the internet by leaders can result in the shutdown of internet services and limited access to information.

Overall, the discussion highlighted the significance of internet fragmentation from both a technical and a human rights and political perspective. By exploring these different aspects, it is clear that internet fragmentation is a complex issue that requires attention and consideration. These issues raised during the discussion are particularly relevant to SDG 16, which focuses on promoting peace, justice, and strong institutions. The internet plays a crucial role in achieving these goals, and any form of fragmentation can hinder progress in these areas.

An important observation from the analysis is that internet fragmentation poses significant challenges to achieving an open and inclusive online environment. It underscores the need for robust policies and international collaboration to effectively address this issue. Additionally, the discussions draw attention to the impact of political instability on internet connectivity and availability, highlighting the importance of maintaining a stable political environment to ensure uninterrupted access to the internet.

In conclusion, the discussion on internet fragmentation emphasises its multidimensional nature, including human rights abuses, harmful internet use, and political considerations. Political situations can contribute to internet fragmentation, leading to disruptions and even shutdowns of internet services. These issues have implications for SDG 16, which aims to establish peace, justice, and strong institutions. Addressing internet fragmentation requires a comprehensive approach that takes into account technical, human rights, and political dimensions.

Raul Echeverria

In this analysis, the speakers delve into the complex issue of internet fragmentation and government interference. They highlight that, in some countries, there are disparities in access to certain applications, leading to a fragmented internet experience. This is considered problematic as the internet should ideally function uniformly across the globe.

Furthermore, laws passed in many countries have had negative impacts on the way the internet operates. These laws are seen as detrimental to the overall functionality and accessibility of the internet. The supporting evidence provided showcases specific examples of the negative consequences of such laws on the user experience. It includes the impact on certain applications and restrictions on online activities.

However, a different viewpoint emerges, arguing that the internet should operate uniformly worldwide, aligning with SDG 9: Industry, Innovation, and Infrastructure. This positive stance emphasizes the importance of a consistent and accessible internet for all users, regardless of their geographical location.

On the other hand, there is a negative sentiment towards government interference in internet activities. The speakers express the belief that interference from governments in deciding what users can or cannot do on the internet should be minimized. This perspective suggests that users should have greater freedom and autonomy in their online activities. The negative sentiment is also supported by the observation that some policymakers prioritize political decisions or industry protection over the potential negative impact on the internet user experience.

Additionally, it is argued that measures taken by governments to restrict access to certain types of information should be proportional and reasonable. This stance aligns with SDG 16: Peace, Justice, and Strong Institutions, highlighting the importance of policies that safeguard user rights and promote transparency.

Moreover, the analysis points out that new laws and public policies in democratic countries can significantly affect user experiences on the internet. The supporting facts emphasize that certain measures aimed at protecting intellectual property or as a result of taxation have adverse effects on users. Furthermore, the lack of understanding by policymakers regarding the potential negative impact of these policies is seen as a significant concern.

In conclusion, the analysis highlights the consensus that governments, both democratic and otherwise, pose a threat to the consistent user experience due to implemented policies. The speakers argue that policymakers should prioritize the needs and rights of internet users, and policies should be informed by an understanding of the potential negative consequences on internet functionality and accessibility. It is evident that internet fragmentation and government interference are complex issues that require careful consideration to ensure that the internet remains a free and accessible platform for all users.

Tomoaki Watanabe

The debate surrounding the splintering of the internet, commonly known as the “splinternet,” has raised concerns about the potential impact of political or democratic motivations driving internet regulation. This issue is particularly relevant as even democratic countries face challenges such as terrorism and civil unrest which may necessitate some level of internet regulation. While it is crucial to find a balance between freedom and regulation, the argument emphasizes that the splinternet can be alarming when driven by political or democratic reasons.

The nature of the free and open internet is also a focal point of the discussion. On one hand, proponents highlight the achievements of an open internet, recognizing its capacity to facilitate global connectivity and promote the exchange of ideas and information. However, it is also acknowledged that the free and open internet can have negative consequences. It is important to reflect on these characteristics and consider potential drawbacks and implications.

Another argument put forth asserts that a unified internet has the potential to bring about social change. Advocates argue that a unified internet can empower individuals and communities to drive positive transformations in society. However, it is essential to note that even countries that support a unified internet and advocate for democracy face their own set of issues. To better comprehend the impact of a unified internet on social change, a more comprehensive investigation of these issues is required.

Artificial intelligence (AI) also benefits from a unified internet. AI systems, particularly large language models, heavily rely on a massive training dataset, made possible by the unified internet. This enables AI to continuously develop its capabilities and offer advanced services and solutions.

In the realm of communication, AI can provide advanced translation abilities and overcome challenges. This highlights the positive impact a unified internet can have on enhancing communication capabilities and bridging language barriers.

Interestingly, the debate suggests that while technical layer fragmentation is considered significant, the ability of governments to heavily regulate online communications may diminish the impact of such fragmentation. In other words, if governments possess the capability to regulate online communication extensively, the effects of technical layer fragmentation may be less significant.

In conclusion, the debate surrounding the splintering of the internet, or the splinternet, raises concerns about how political or democratic motivations may drive internet regulation. The nature of the free and open internet is discussed, revealing both its achievements and potential negative consequences. Supporters argue for a unified internet, as it has the potential to bring about social change and benefit artificial intelligence. However, it is important to acknowledge that even countries supporting a unified internet and advocating for democracy face their own set of issues. Additionally, the impact of technical layer fragmentation may be mitigated by governments’ strong ability to regulate online communications.

Paul Wilson

The analysis provides valuable insights into the fragmentation of the internet and the significance of preserving its integrity. One aspect examined is the role of Content Delivery Networks (CDNs) in the internet ecosystem. While CDNs facilitate access to specific services and content, it is important to note that they do not encompass the entire internet itself. This highlights the need to distinguish between accessing services and maintaining overall internet connectivity.

Another crucial point discussed is the lack of interoperability between similar services, such as instant messaging (IM) or social media platforms. The analysis reveals that there is generally a dearth of interoperability among these services, which can contribute to the fragmentation of the internet. To address this issue, it is suggested that service companies should be required to change their interoperability behavior. This would involve encouraging and enforcing interoperability between different services, ultimately enhancing the connectivity and usability of the internet as a whole.

Furthermore, the analysis underscores the importance of end-to-end internet connectivity. The COVID-19 crisis has served as a reminder of the necessity for seamless connectivity to ensure efficient remote communication and access to vital services. Point-to-point video communications during the pandemic have demonstrated the imperative need for maintaining the end-to-end model of the internet. The argument put forth by Paul Wilson promotes the preservation of the internet layer’s integrity, emphasizing that the end-to-end model is fundamental to the internet’s functioning.

One significant observation made in the analysis is the potential over-fragmentation of the internet if proactive measures are not taken. The quality of the internet varies, and it is crucial to undertake ongoing work to prevent excessive fragmentation. This highlights the importance of maintaining a balance between the diverse services and content offered on the internet and ensuring seamless connectivity and interoperability.

In conclusion, the analysis provides an in-depth understanding of the fragmentation of the internet and calls for concerted efforts to preserve its integrity. It emphasizes the distinct role of CDNs, the importance of interoperability between similar services, the need for end-to-end internet connectivity, and the significance of preventing over-fragmentation. By addressing these key issues, it is possible to maintain a high-quality and interconnected internet ecosystem that supports innovation and provides reliable access to services and information.

Tatiana Trapina

The analysis of the discussion on internet fragmentation reveals two main perspectives. The first perspective argues that the technical layer of the internet remains fully global and capable of providing connectivity, even in the face of censorship. This position is supported by the fact that TCP/IP and the system of unique identifiers continue to dominate, with no alternative having displaced them. Furthermore, technical tools, such as transition mechanisms that allow IPv4 and IPv6 addresses to interoperate, have been developed to ensure global connectivity. The argument is that the internet’s technical layer is not fragmented and continues to function globally.
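The IPv4/IPv6 compatibility tooling mentioned above can be illustrated with a small sketch. One such mechanism is the IPv4-mapped IPv6 address range (`::ffff:0:0/96`), which lets an IPv4 address be represented inside the IPv6 address space; Python's standard `ipaddress` module exposes the mapping directly. The address used here is from the IPv4 documentation range (192.0.2.0/24) and is chosen purely for illustration:

```python
# Sketch: IPv4-mapped IPv6 addresses, one of the transition
# mechanisms that keeps the IPv4 and IPv6 address spaces connected
# within a single global addressing system.
import ipaddress

mapped = ipaddress.ip_address("::ffff:192.0.2.1")

# Formally this is an IPv6 address...
print(mapped.version)       # 6

# ...but it carries an ordinary IPv4 address inside, which the
# standard library extracts without any custom parsing.
print(mapped.ipv4_mapped)   # 192.0.2.1
```

The point of mechanisms like this (alongside dual-stack deployment and translation gateways) is that the two address families coexist under one routing and identifier system rather than splitting the network in two.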

Contrarily, the second perspective raises concerns about the potential for real internet fragmentation due to government regulations and control. There is a belief that government restrictions, whether intentional or unintentional, could impact the technical layer of the internet. These restrictions may be motivated by political preservation or the protection of citizens. It is argued that such regulations could erode trust or disrupt the technical underpinnings of the internet, leading to fragmentation. The sentiment towards this argument is negative, suggesting that the looming danger of government regulations could pose a threat to the global connectivity of the internet.

It is worth noting that the discussion also touches upon the labeling of government censoring as fragmentation. Some argue that this labeling is inaccurate and that it should be more appropriately described as human rights abuses. The concern here is that by labeling it as fragmentation, it may become a self-fulfilling prophecy and create further division. Therefore, caution is advised when using the term “fragmentation” to describe government censorship.

The proposed solution to preventing internet fragmentation lies in upholding global connectivity and trust. It is emphasized that the technical layer of the internet operates based on trust and the commitment to global connectivity. This is supported by the fact that the technical layer was adopted by a multi-stakeholder community. It is believed that if the foundations of trust and commitment to global connectivity are preserved, any problem that arises can be solved. The sentiment towards this solution is positive, suggesting that maintaining global connectivity and trust is essential for preventing internet fragmentation.

Another noteworthy observation is the importance placed on the preservation of what makes the internet unique and interoperable. This uniqueness includes technical identifiers, protocols, and other aspects that ensure the internet’s smooth operation across different platforms and devices. This preservation is seen as paramount to uphold the internet’s integrity and prevent fragmentation.

Additionally, the multi-stakeholder model of governance is highlighted as a key aspect of managing the technical layer of the internet. The sentiment towards this model is positive, as it recognizes the importance of involving multiple stakeholders in decision-making processes. It is argued that commitment to this model is crucial for preserving trust and effectively managing the technical layer of the internet.

Finally, there is a belief that feasible fragmentation may occur due to regulations specifically targeting the technical layer of the internet. The concern here is that the erosion of trust and the introduction of different governance frameworks could lead to a scenario where fragmentation becomes a reality. The sentiment towards this argument is neutral, suggesting a cautious acknowledgment of the potential risks associated with regulations that specifically target the technical layer.

In conclusion, the analysis of the discussion on internet fragmentation highlights two main perspectives. One viewpoint argues that the technical layer of the internet remains fully global and provides connectivity, while the other expresses concerns about government regulations potentially leading to fragmentation. The proposed solution emphasizes the importance of upholding global connectivity and trust, preserving the unique aspects of the internet, and committing to a multi-stakeholder governance model. To prevent internet fragmentation, the key lies in maintaining the global nature of the internet while addressing potential risks posed by government regulations and control.

Timea Suto

The Internet is not currently fragmented, but there are real dangers of it becoming so due to pressures at the technical and policy governance layers. Decisions made at political, content, and policy governance layers can affect the technical layer, potentially causing fragmentation. Concerns about the potential fragmentation of the Internet are driven by the crucial role of the digital economy, which relies on the free movement of data across borders. Barriers to these data flows present a form of Internet fragmentation. There is strong opposition to data localization and the fragmentation of the upper layers of the Internet. Data localization and fragmentation can hinder the benefits of the Internet, and concerns about trust leading to data localization are seen as risky. It is important to handle policy matters with care to prevent unintended consequences that could hinder the open and global nature of the Internet.

Dušan

Dušan expresses frustration over the misinterpretation and misuse of the term ‘fragmentation’ in the context of internet governance. He argues that this catch-all term encompasses a broad range of issues, such as filtering, balkanization, and IDN domain names. According to Dušan, the technical layer that connects everything on the internet is still protected, and governments have been granted the right to legislate within their respective jurisdictions.

In response to this, Dušan suggests that discussions on internet governance should focus on specific issues, like filtering and blocking, rather than relying on the vague concept of ‘fragmentation’. He believes that the current high-level discussions lack substance and cautions against engaging in them without a specific focus. He advocates for a more targeted approach, particularly emphasizing the need to explore filtering, blocking, and other similar specific topics in greater detail.

Overall, Dušan’s main argument revolves around the importance of addressing specific issues in internet governance, rather than using a general term like ‘fragmentation’ that can lead to ambiguity and insufficient understanding. By focusing on individual topics, he suggests that policymakers and stakeholders can engage in more meaningful and productive discussions on the subject.

It is noteworthy that Dušan’s stance aligns with SDG 9: Industry, Innovation and Infrastructure, which aims to promote resilient and inclusive infrastructure development, increasing access to information and communication technologies (ICTs). By addressing specific issues within the realm of internet governance, it becomes possible to strengthen and enhance the overall infrastructure and accessibility of the internet, thereby contributing to the broader goals of sustainable development.

In conclusion, Dušan’s frustration stems from the misuse of the term ‘fragmentation’ in discussions on internet governance. He advocates for a shift towards addressing specific issues such as filtering and blocking to bring substance and clarity to these debates. By focusing on targeted topics, policymakers and stakeholders can work towards developing more effective and inclusive internet governance frameworks that align with the broader goals of sustainable development.

Session transcript

Moderator – Avri Doria:
the bottom of the hour, so I guess we should start. So welcome to this session on, what is the exact title? It’s on Internet Fragmentation, Perspectives and Collaboration. So that’s a good clue as to where we’re heading. It’s been an interesting topic to watch being talked about this week. Lots of opinions, lots of definitions. The beginnings of a new framework for how to understand it and discuss it, which is still growing and still being thought of. So there’s really a lot of really interesting people, knowledgeable people on this roundtable, and we really wanna get a discussion going of all the people around the table. And also, as we move on, all of you that are sitting back here. So anybody that’s gonna wanna talk is gonna have to come. There are only a few microphones, so you will have to come up and get a microphone when you wanna talk. But anyhow, so I want to welcome you all, and I really wanna get started. And as opposed to me saying a lot more, because you all have a lot more to say. So Elena Plexida from ICANN, would you like to start us off with a view?

Elena Plexida:
Okay. I’m on. Hello, everyone. Thank you, Avri. Yes, I can kick off with some more opinions and some more definitions, which I’m sure that you’ll be very happy to hear. So as you know, I work for ICANN, representing a technical organization. And therefore, if you will, that is the perspective I will come here with. And in my personal experience, I believe the Internet works the way that I would have hoped it to work; it’s wonderful. But if your goal is 100%, I don’t believe you can get there. It’s about doing everything to get as close to that goal as you can; that is the challenge. So, what is it that binds it together into what we call today the global Internet? And this is none other than the unique identifiers, the domain names, the name space, the IP addresses, and the Internet protocols alongside. Okay, I’m not a technical person, so I think of it as some sort of common technical language that all devices speak so they can find each other on the network. It’s this uniqueness that gives us the global Internet. As long as different networks and devices are connected on the same unique set of identifiers, we have one Internet. And that’s, of course, ICANN’s mission, one Internet: to ensure the stable and secure operation of the Internet’s unique identifiers. We do that together with our sibling organizations, the RIRs, the IETF, et cetera. So that is the global Internet, doing what it was created to do. But what is, what would be, Internet fragmentation? At the content level, there are already limitations. Content is not available to everyone, everywhere. That’s been happening for years, and it’s even desirable in some cases. Think of parental controls. Of course, it’s not desirable in other cases. But that’s not Internet fragmentation. It’s not Internet fragmentation, it’s actually user experience fragmentation, if you will. 
But it’s not Internet fragmentation, and it’s actually confusing, a little bit inflammatory, and to my mind dangerous to keep referring to content level limitations as internet fragmentation. Because people leave a discussion with the impression that the internet is already fragmented. And that can become, if you will, a sort of self-fulfilling prophecy. I’m talking from my own experience. I’ve been discussing with parliamentarians about internet fragmentation, and they say, well, the internet is already fragmented, so why would we, what is there? That’s why I’m saying it can become a self-fulfilling prophecy. The internet is still there, it’s not broken. Fragmentation, if we take the example of the postal service, would be if the postal service stops being there. If I tell my postman that I don’t want to receive letters from Avri, that’s not internet fragmentation. I don’t want a specific part of the content. Fragmentation is when the internet breaks at a technical level, when you don’t have interoperability. So, is the internet fragmented today? No. At the technical layer? Absolutely not. Can it be fragmented? Yes. I think it might. It might. Alternative namespaces. If we have that, the uniqueness is gone. A second root of the internet. The uniqueness is gone. I will not go into the technical side of it, because, first of all, I’m not technical. And second, and most importantly, because I think that, although fragmenting the internet is a technical issue, it will not come, if it comes, from the technical world. It will come from the political world. Deliberately or by accident, with the latter, the accident, being what concerns me the most. The million dollar question, if you will, is: will the global internet survive the fragmented world? So, you know, we live in a world that is not the same as it used to be before. 
There’s a lot of politicization around a number of issues. And we start to see this politicization over the unique identifiers as well. Them getting drawn into the geopolitical agenda. And that can be dangerous for the very global nature of the Internet. I’ll stop here. I hope.
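Elena’s distinction between content restrictions and a second root of the internet can be sketched in miniature. The following toy Python snippet is an illustration only, not how DNS resolution is actually implemented, and the names and addresses in it are made up; it shows why a single shared namespace is what keeps answers consistent, and what an alternative root would break:

```python
# One authoritative mapping that all resolvers consult (toy model of the
# single DNS root), and a hypothetical competing namespace.
SHARED_ROOT = {"example.org": "93.184.216.34"}
ALTERNATIVE_ROOT = {"example.org": "203.0.113.99"}

def resolve(name, root):
    """Look a name up in whichever root table this resolver trusts."""
    return root.get(name)

# With one shared root, every resolver gives every user the same answer.
print(resolve("example.org", SHARED_ROOT))       # 93.184.216.34

# With a second root, the *same* name can mean different things to
# different users: the uniqueness that defines "one Internet" is gone.
print(resolve("example.org", ALTERNATIVE_ROOT))  # 203.0.113.99
```

The point of the sketch is that blocking a letter from Avri (filtering content) leaves the table intact, whereas a second table is fragmentation at the identifier level.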

Moderator – Avri Doria:
Thank you. Thank you, Elena. The next person I have is Jennifer Chung from DotAsia. To basically give her impressions, definitions, et cetera. Thank you.

Jennifer Chung:
Thank you, Avri. My name is Jennifer Chung. I work for the DotAsia organization, which, obviously, is the registry operator of the DotAsia top level domain. I guess from my point of view, what I can add to the definition, or add to the controversy of this discussion that hopefully we’ll have here, is that DotAsia is a registry. We, of course, sit on the application layer of the technical part of the Internet. And I think perhaps we are quite clear on what technical fragmentation might be, you know, if we start with the baseline, when we’re looking at where we are starting this assessment from. If the assumption is that the primary benefit of the core features is to be able to have universal connectivity and interoperability, I’m really bad with this word, between these consenting devices, then I think there’s that very baseline that we can agree upon. I don’t think a lot of people are really confused about it or would argue against this part. I think where we’re coming from now is that many different definitions try to bucket fragmentation into different categories. I see a lot of papers and research and also opinions saying that first, there’s a fragmentation of the technical layer, which hopefully is not controversial. Secondly, there’s a fragmentation of the user experience, more on the end user side, how we experience, how we navigate. And thirdly, there’s fragmentation mainly on the policy level, which is governed more in the places where decision making happens, in governments making decisions on policy, regulations, legislation that could aim to destabilize or could fragment the internet as we see it. I think what is really important is that we should also remember what isn’t fragmentation. I think the word fragmentation is now widely used. 
It is very important to use this word carefully; if we use this word to describe every single thing that is different, I think it behooves us to actually pull back and realize that, no, this is actually something that is good for the development of the internet. One example I’d like to bring out from the .asia point of view is that a lot of people see internationalized domain names and say, hey, what’s going on here? Could there be a threat of fragmentation? Is this actually already fragmentation? And I would like to posit that, actually, internationalized domain names, which means domain names that can be seen in scripts such as Urdu or the Han script, which Chinese, Korean, and Japanese writing also use, allow you to see the domain name in the native script. The threat here really is that if this is not implemented well, then we have the possibility or the danger of a fragmented internet; it is not the fact that we are implementing this that fragments the internet. So that’s one thing I’d like to really bring up first. And the second thing is when we’re looking at a different part of fragmentation, and now I’m talking more about the policy level. Where we’re sitting right now, at the Internet Governance Forum, we’re talking about these things. But when we’re looking at bodies that decide regulations and upcoming legislation, what we really have to remember is that when these actions and legislations aim at the content and user layer and cause Internet fragmentation, they also threaten the technical layer, because the implementation then comes down; there’s a knock-on effect where things like Internet shutdowns come down from the policy level, or things like when people ask certain bodies to shut down portions of the Internet. So those are the geopolitical concerns and pressures that we have to resist when we talk about fragmentation as well. 
And I think I want to end a little bit more with, at least for my first intervention, to mitigate these risks really requires a lot of, first of all, conversation and coordination, but also not duplicating all this conversation into different silos where nobody’s talking to each other and not quite getting the part where we need to coordinate well. So I’ll stop right here.
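Jennifer’s point that internationalized domain names extend rather than fragment the namespace can be seen in how IDNA works: a native-script label is converted to an ASCII-compatible “punycode” form before it ever reaches the DNS, so the DNS itself still speaks one language. A minimal sketch using Python’s built-in IDNA codec (which implements the older IDNA 2003 rules; the label here is just an illustrative example):

```python
# A native-script label as a user would type it.
label = "bücher"

# What actually goes into the DNS: the ASCII-compatible (punycode) form.
ascii_form = label.encode("idna")
print(ascii_form)                        # b'xn--bcher-kva'

# Round-trip: the user sees the native script, the DNS sees plain ASCII.
assert ascii_form.decode("idna") == "bücher"
```

Because the wire format stays ASCII, resolvers that know nothing about Urdu or Han script still resolve these names, which is exactly why a well-implemented IDN is not a second namespace.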

Moderator – Avri Doria:
Okay, thank you. It was interesting to me to hear IDNs included in the list of possible fragmentations. So thank you for bringing that up, and I’ll be interested to hear more about the whole notion of implementation being something that could cause that. The next person on the list is Timea Suto, ICC BASIS.

Timea Suto:
Thank you, Avri. Thanks, everyone. Yes, Timea Suto, Global Digital Policy Leader at the International Chamber of Commerce. For those of you who don’t know us, we’re representatives of global industry. We have around 45 million members in over 170 countries across all different sizes and industries. What does fragmentation mean to me from this perspective, and to us? And I have to react to what the others have said before, because I think that’s the whole point of this conversation. On that baseline layer that Elena, you were talking about, is the Internet fragmented? No. No, it’s not. It’s working. But attempts have been made, and were sort of successful, to disconnect and prove that it can, it really can. And I think I need to agree with Jennifer, your last point there, that certain pressures that come, not at the technical layer, but at the top of all of that, at the content layer, at the data layer, at the policy layer, the governance layer, have very real impacts on that technical layer and the Internet itself, this network of networks. And I think disregarding that and saying the Internet works, it’s not fragmented, is putting our heads in the sand. Because there are real dangers of the Internet fragmenting if we buy into the idea that we can fragment the top of it. Because it’s really easy for that to then go down. That is maybe a bit of a controversial view, but I think we cannot disregard this. Especially when we are at forums like this and others that don’t have the technical expertise, maybe this forum has it because it’s multi-stakeholder, but other forums that make decisions at the political layers, at the content layers, at the policy and governance layers, might not have all that really technical background. So it’s easy, first of all, for them to confuse things. And secondly, to think that if it can be done at the top, why not do it elsewhere? What is there to lose? And I think those are very dangerous questions to ask. 
So for us on the business side, to bring it back to my official talking points. For us, what matters here is the digital economy that was built on top of the Internet and digital technologies and everything that the Internet enables. And when I talk about the digital economy, it’s not just about GDP or the bottom lines of business, but the society, the development goals, the growth, both personal, for communities and for economies, that was fueled by the Internet. And that really depends for us on the ability to move data across borders, to make sure that data supports global trade, information exchange, commerce, health care, medicine, research, everything that is built on top of data being able to flow across borders. And the barriers to those data flows, for me, are real examples of Internet fragmentation. Maybe I don’t have a better word to call it, so we can put that challenge to the audience here if you have better ways to call it. But there are barriers to data flows coming from various concerns, concerns mostly about trust on the Internet, whether it is I don’t trust my data to go outside my region because the privacy protections are not the same, or the IP protections are not the same, or the consumer protections are not the same, or just because I think I can create more value by keeping it here and not letting others share it, access it, process it. I think those are very dangerous thoughts, thoughts that lead to data localization and fragmentation. Barriers at this layer, I think, first of all, hamper a lot of the benefits of the Internet; even if it works technically, the benefits don’t come. And it’s not a user choice, right? It’s not Avri saying I don’t want to receive letters from you. I cannot receive letters from you because others have made that choice for me. And that’s also another question that we might want to delve into later. So I’ll leave it at that. And I hope I answered your question.

Moderator – Avri Doria:
Thank you. Yeah. You’re all starting to answer the question. And we’re also all starting to have a little bit of the discussion, though all these people talking about not sending me letters is going to get sad. But anyhow, next, I’d like to go to Javier Pallero, who’s a consultant on digital rights, tech and culture. So Javier is remote. So is he available to talk? Yes, please. Go.

Javier Pallero:
Oh, fantastic. Thank you so much. Thank you for the opportunity to be here. I am connected from Argentina. So hi to everyone there. So let me go with my attempt at responding to this very difficult and specific question. What I would say is that I have to agree with all that was said before. I think this is a very complex issue that starts with a very specific definition that is technical. Right. And listening to Jen, I have to agree. Right. About thinking about what makes the Internet one. Right. Which is the unique identifiers, the protocols and the common language that is spoken. But one of the aspects that I would like to bring up from a civil society perspective is not only what makes the Internet unique or one, but also why that happens. What is the reason to be that the Internet has, at least for most of us as users? And that’s actually the ability for us to be able to communicate and to connect with everyone, right? So that’s, I think, what is at the core of this confusion or this idea, right? That for example, politicians would say, oh, the internet is already fragmented, right? Because there is this perception that the reason to be of the internet has been changing fast. It has become more closed; seemingly, or in perception, it has become more disconnected, more unable to provide that sensation of connection and, you know, the ability to express yourself without borders and to access information and so on. So I would dare to go a bit further and say that it’s actually not a confusion, this idea that intertwines the political, application, technical and protocol levels of the discussion. It’s not a confusion, it’s something that happens because the thing that goes across all of these dimensions is the reason of the internet to be, right? And the reason to be of the internet is connection and, you know, it’s technology that enables the enjoyment of rights and so on. 
So that apparent confusion is actually a part of the problem, and also, as the last speaker before me, Timea, said, many of these situations, these decisions at the policy level or the application level as well, when a private company becomes a dominant actor in one area of internet services, for example, all of that ends up affecting somehow technical decisions, right? So for example, politics can mandate shutdowns or data retention or national gateways, right? But also certain companies, for example, can exert more and more influence into certain protocols. A key example that comes to mind is the DRM protocols that have been added to the W3C discussions about web protocols, for example, right? And many of that comes from private parties, not necessarily governments, right? Also, another example when it comes to government that goes beyond the extreme example of shutdowns could be the censorship attempts that are done through, you know, mandating changes to the DNS resolvers, right? Or putting pressure on DNS servers, right? So all of that is just a way of saying that this dimension is important, even if it is not part of the technical, specific, concrete definition of fragmentation, which I share is more of a technical specific discussion that maybe can benefit, you know, from being correctly framed and limited. But all of these aspects that I’ve just been mentioning are also important. They may not be part of the definition, but they are part of the problem. And then a part of the perception that has to do with the idea that we have about the internet and how we think that we want to use it. I think that when it comes to working on this, we will have to make a big effort to make a distinction about these different dimensions, maybe focus on some of them, like the protocols one, because the other ones tend to have their own areas of discussion, right? The ones about censorship of applications or the ones about bad policies, right? 
All of those are properly covered, let’s say by some other actors that have activity, discussion, regulation, civil society actors that are actually very active on that. But on these other areas, there’s not that much engagement. And maybe that’s where a more narrow definition of the issue can be of service, right? Just to inspire more attention to the underrepresented dimension, if you may. But the fact is that everything is important and should be considered. So I would stop there for the initial intervention. And thank you again for the invitation and for the opportunity to be there virtually. Thank you.
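Javier’s example of censorship attempts through DNS resolvers can be sketched as follows. This is a toy model, not real resolver code, and the names and addresses are made up; the point is that the blocking lives in a local resolver’s policy while the shared namespace underneath stays intact:

```python
# A toy model of the shared, global name table (intact for everyone).
GLOBAL_TABLE = {"news.example": "198.51.100.7", "blog.example": "198.51.100.8"}

# A block list imposed on one particular resolver by local mandate.
LOCAL_BLOCKLIST = {"news.example"}

def censored_resolve(name):
    """Resolve against the shared table, unless local policy blocks it."""
    if name in LOCAL_BLOCKLIST:
        return None  # an NXDOMAIN-style refusal imposed locally
    return GLOBAL_TABLE.get(name)

print(censored_resolve("blog.example"))  # 198.51.100.8: still reachable
print(censored_resolve("news.example"))  # None: blocked by policy, not protocol
```

Switching to an uncensored resolver restores the answer, which is why several speakers treat this as a rights and policy problem layered on top of a still-interoperable technical layer.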

Moderator – Avri Doria:
Thank you, Javier. And thank you for saying it was virtual when I said remote. That was an old-fashioned word that we’re not really supposed to use anymore, so it’s virtually, or online. So I really appreciate the correction. The next person I’d like to go to in this initial set of discussions is Nishigata-san from the Japanese government. Where’s the microphone going to go? Okay, it was going to go there. So please, thank you, now that you have a microphone, please.

Nishigata Nobu:
Okay. Good afternoon, everybody. Thank you for the kind introduction. My name is Nobu Nishigata from the Japanese government. I’m working at the Ministry of Internal Affairs and Communications, which hosts this IGF event. So thank you for coming. And we do appreciate everybody’s participation and contribution, which made this event good, really good. Thank you very much again. So getting back to the point of internet fragmentation: since I’m a government official and not a tech person either, I can write the regulation, though, not the code. But, you know, it’s a confusing matter to me, right? As some people already mentioned, there are many, many different definitions. And that’s the nature of the government person; we need a definition before we start talking, right? But I tried my best today. And maybe, you know, it is part of my job to follow what is happening on the internet every day. And I do recognize that some, or maybe most, of the recent issues fall under the category of internet fragmentation. I’ll refrain from speaking particular names of countries, but I do recognize that there is some frustration in the tech community, particularly, you know, about the several kinds of fragmentation. And from the government perspective, I would understand the frustration, particularly against government intervention and its forceful type of fragmentation, for example, an internet shutdown during an election period, that kind of thing. However, this is not the job of our ministry, but of some other part of the government. We have to do some jobs, particularly from a public safety perspective, particularly within the border, you know, that the government has to solve. You know, the internet is global, and this is great, but on the other hand, the border matters to the government. 
So then, not only for public safety, the government may take some actions that frustrate you guys in the tech community, for the sake of other high-level policy agendas like economic development or national safety, et cetera. So this could be one of today’s discussion points: how far the government can go, or be allowed to go, in doing these jobs. And I understand that some communities hate even a single government intervention in the internet; however, we do understand that we have to be accountable for these actions. In Japan’s case, fortunately, we don’t see the severe cases yet, and of course, the government of Japan respects the open and free internet. You can see much evidence: we support the Declaration for the Future of the Internet published by the US government, and Japan chairs the G7 meetings this year, and the G7 agreed to support the DFI under our chair’s leadership. And the Japanese and US governments got together and just finished our day zero session on the Declaration for the Future of the Internet, to evangelize it to people at the IGF venue here. So I understand that some government actions in general may frustrate the internet people, but on the other hand, it is not only the internet people that get frustrated; the government also sometimes gets frustrated, or I would say at least is not satisfied with the current internet, and there are some issues that should be solved. For example, there are issues, I would say, regarding fragmentation, maybe at the user interface level, like the filter bubble, or the echo chamber, these kinds of things. And it is not only the phenomenon itself; these things bring some bad side effects from these internet services, right? So this is the issue that we are not satisfied with and have to tackle, but on the other hand, the government cannot solve these things by ourselves. 
We need particular technicians and technical people, and maybe other parts of society; we need some other help to tackle and solve these problems. And in this intervention, maybe let me say that we had our Prime Minister Kishida come to the IGF meeting, if you were aware, and he committed to our effort to maintain an open, free internet, particularly because we shouldn’t leave anyone behind from the benefits that the internet has brought for 30, 40 years. Thank you.

Moderator – Avri Doria:
Thank you. It’s been an interesting first set of comments in that it started very low with a very precise, we’ve added nuances as we’ve sort of moved up the scale, and then it’s almost flowered to the point of anything that interferes with an open and free internet can perhaps be seen as a fragmentation. And so that is a very good representation. of sort of the blossoming of this conversation, the blossoming of the differences that many of us have. I’d like to now go and call on some other folks. We’ve got really an amazing number of folks around this table that have probably good things to say, and dig a little deeper. Perhaps there’ll be other nuances and other extensions that’ll get added, but also to dig a little deeper into some of what’s been said. And next, I’d like to go to Aha G. Embo, who’s a member of parliament of the Gambia. So you, okay, you got that one, okay.

Aha G. Embo:
Thank you very much for the introduction. I’m Honorable Aha G. Embo from the Gambia, member of parliament, and also the vice chair of the African Parliamentary Network on Internet Governance. I think we are discussing a very important topic, a topic that is actually confusing some people because of definition, but I quite agree that anything that can interfere with the free flow of the internet, you can actually call internet fragmentation. I’m a lawmaker, but I came from the tech community, so we will be okay with that. Now, as we try to have a stable and integrated internet, this fragmentation may happen at the technical level, at the government level, or at the business level, because these are the three areas where you can actually see the fragmentation happen. So we may have an issue at the level of legislation, because legislators don’t want to legislate anything that’s ambiguous. We want to be very clear on what we are trying to legislate. And again, the internet is such that you don’t want to put in any kind of legislation that would actually hamper or stifle innovation. And again, when you have these fragments of the internet, like these little islands that are not talking to each other regularly or optimally, then we may have an issue. The cause, you know, could be political, by our own governments, but on the side of legislation I think there could be an issue here, because we are trying to have free flow and we are also trying to streamline our legislation across the world. That’s the reason why you see here that we have African parliamentarians, we have some from the European Parliament, and we’ve been talking: what can we do together to ensure that we have a safe, secure and integrated Internet? 
So bringing in these splinter groups would actually cause us more problems, because we already have issues in terms of legislation; now bringing more divisions in this area is actually going to cause us more problems. So I think this is something we really need to discuss, to see what we can do together to ensure that we leave it the way it is and that we promote more secure and integrated connectivity. In that case we can work together as legislators or as policy makers to ensure that we can streamline what we do. Now, you just mentioned the Internet shutdown, but personally I would actually call it Internet disruption, because what is happening right now is they are trying to stop particular applications from running, not the entire Internet. So that is actually disruption. You’re not really shutting down the Internet completely; you are just stopping particular applications from running, and I think all those things are something we really need to look at, to see that this disruption of the Internet stops, and this fragmentation would actually propel it more than anything. So on the side of legislation I think it’s better we leave it the way it is and support it more, to ensure that we have free flow, that we have it integrated, and that we have it more secure for everybody. Thank you.

Moderator – Avri Doria:
sort of the different behaviors and the different problems. And that perhaps is helpful. Next I’d like to go to Tomoaki Watanabe, who’s from the Center for Global Communications in Tokyo. And by the way, thank you everybody for helping play the game with pass the microphone. So please.

Tomoaki Watanabe:
Thank you. My name is Tomoaki Watanabe. I’m an academic based in Tokyo. So the way I thought about this issue is when is splinternet bad or bad enough? And I kind of agree, or I kind of resonate with the idea that the democratic or politically motivated splinternet is the one we should get most concerned about. But then maybe we should be aware of the fact that democracy, even in some of the most democratic countries, is these days a challenge to an extent or another. I think things like war against terrorism or measures against rioting or civil unrest, those things are not that foreign to some of the most democratic countries. And I’m sure that some level of internet regulation is desired by governments of those countries. So I don’t, and also let me add. One more thing, having a free and open internet, in principle, I tend to think that’s a good thing. That’s a condition for a better society. But also, I think these days, a lot of questions are asked. How good it is, or is this enough to bring about good changes? Or, sometimes, as many people have already mentioned, it causes really serious adverse effects. And in light of those things, I think we really have to think carefully how to proceed, in a way. Because I think it’s not really so simple as to say, only certain countries are problematic, and these countries are more pro-freedom, pro-unified internet. Because I think, upon closer inspection, even in those countries which are pro-democracy, pro-unified internet, there are serious problems and concerns. And maybe studying those things more closely, discussing about those things closely, might give us a better way to think about, maybe, a more comprehensive package. Maybe the unified internet is just one of, or part of the package, that would bring about good social changes. Maybe I spoke long enough. Thank you.

Moderator – Avri Doria:
Okay, thank you. So, it actually starts to get even a little more confusing, in terms of free and open, and what is not always free and open and certainly not always good. And we’ve certainly heard that before. And I’m sort of starting to feel that it’s starting to cover a lot and not cover much at the same time. So it’s becoming actually a more and more interesting conversation. Next I have Tatiana Tropina from Leiden University, who needs a microphone. And please, can you sort of help bring it in a little bit?

Tatiana Tropina:
It’s working now. Thank you very much, Avri. And I was listening to everybody and thinking about these 100 flavors of fragmentation, as Avri put it, or 50 shades of fragmentation. And I think the confusion comes from the fact that we place our belief, our faith in global connectivity, on different layers of the internet. And I’m taking here a purely technical, purely technocratic approach. For me, from this perspective, from the perspective of the technical layer, nothing has challenged the global dominance of TCP/IP and the system of unique identifiers. And if something has challenged it, for example, the incompatibility of IPv6 and IPv4 addresses, this has been fixed. The technical tools have been developed for global connectivity to win. So for me, the glue that brings all these layers together and fulfills this promise of global connectivity is still there. But I do understand that different speakers put their faith and definition of global connectivity somewhere else, below the technical layer. Then internet shutdowns become internet fragmentation. Or above the technical layer. Then various content regulations, restrictions, censorship. 
And to me, the danger is that by trying to regulate, by trying to territorialize information flows, by trying to exercise control for various reasons, be it preservation of political system or legitimate concerns about protecting their citizens from various threats, governments start imposing restrictions that might intentionally or unintentionally impose regulation that might intentionally or unintentionally tackle the technical layer. And here I have to go away from my technical technocratic approach and say one thing. We like to think about technical layer like unique identifiers, TCP, IP, so it’s all connected, it glues it together, it’s working. But we have to think that this doesn’t exist on its own. It exists not because the government’s imposed it, not because regulation imposed it, it exists because the community, technical community, multi-stakeholder community put faith in it at some point by adoption of these protocols, by adoption of this system of unique identifiers. And it runs purely based on trust. And away from my technical technocratic approach, if regulation destroys either technical underpinnings or this trust, this is where internet is going to fragment. And I do believe that this is a danger here. And I would like to circle back to what Elena said about self-fulfilling prophecy. We talk about definitions a lot here. I do believe that at some point we start talking about solutions. And to me, one of the solutions would be to be very careful saying that Internet is fragmenting or fragmented because it’s some sort of perpetuating debate. What we have to do, we have to start thinking about basics and basic commitments. I know that it’s hard to fix government censoring the Internet. And sometimes we have to stop labeling it as fragmentations because sometimes it would be just purely human rights abuses. And it’s much fancier to say fragmentation, right? But we have to look into the core. And at the core would be global connectivity and trust. 
And if this session can start any debate about steps forward, I would say that it would be commitment by governments, by technical community, by anybody else to these basics. And once we preserve these basics, we can solve any other problem because the global connectivity will prevail. Thank you.

Moderator – Avri Doria:
Thank you. So we get to a point where we really are starting to overload the term, and we’ve overloaded it with all of our frustrations and unhappiness and everything else, and that gets us in trouble. Next I want to move, and I’ve got a couple more before we come back around, so many good people to talk to here, to Sheetal Kumar from Global Partners. And please pass the microphone. No, that way. Thank you.

Sheetal Kumar:
Hi, everyone. I’m sorry I’m late. But I’m really glad I got to catch Tatiana’s input there, because I think it was very helpful. And the more I listen to these discussions, the more I feel like we are actually getting somewhere, as long as we’re happy to navigate choppy waters. I think one of the challenges that we’re facing is that we’re talking about something we’re trying to preserve and also evolve. And so we’re trying to figure out, perhaps, as you were saying, Tatiana, what we need to preserve. I think that’s very helpful to identify and agree on, and then how we evolve it, given what we need to preserve. The issue is that, whether through unintended or intended actions, and as you mentioned, a lot of those can come from regulation, there are challenges to preserving what we have. When it comes to the internet, those critical properties, the values and principles of openness and connectivity, and indeed user control and autonomy, are being impacted, or could be impacted, by regulation and decisions and the normalization of actions like shutdowns, for example. So that, to me, is the challenge. And then how specific or how broad we are, I think, comes from identifying and agreeing on what we need to preserve, identifying what the challenges to that are, and then ensuring that we can continue to evolve the internet accordingly. So just quickly on where I think we’ve come to: I co-lead the Policy Network on Internet Fragmentation, and we have developed this framework, which was probably referred to before. There we have some recommendations under each of the elements of the framework: the technical layer, where we refer to the critical properties of the internet; user experience, which combines the impacts of government regulation and corporate actions on user experience and develops recommendations based on those; and then governance as well. 
That covers the challenge of duplicative mandates, or of bodies that are not inclusive and therefore don’t coordinate and communicate with each other. Now, I believe we could take any of those recommendations, and if we did some of that, it would help to ensure that we are both preserving and evolving the internet in a way that keeps its original vision; it is also possible to do part of that and still move along the pathway. So I think what we’re trying to figure out here is what the pathway is, and to have some sort of compass for it. What I hope is that the Policy Network’s contribution, which builds on the contributions of many others who have worked on this topic, whether it’s the World Economic Forum paper or the work of the Internet Society, helps to form that compass, to both preserve and evolve what we have so that we can move along the right pathway. Thanks.

Moderator – Avri Doria:
Thank you. I just wanted to mention, we’ve got two more in this initial list. Then we have an online comment, and then I want to move into a more organic conversation, and I’ll mention to the people there, there are empty seats around the table. So if you’re going to want to say something, find yourself a seat, because it’s really easy to pass. Next, I have Raul Echeverria, and the microphone should start moving towards him from Asociación or something, I’m pronouncing it wrong, Latinoamericano de Internet. So Raul, please. I tried.

Raul Echeverria:
Excellent pronunciation. Okay. This is a very interesting discussion. I said yesterday in a meeting that we have to escape from the issue of definitions, because this is where we are stuck. But we are clear about how we want the Internet to behave and about the things that we don’t want to happen on the Internet. One of the things we want is for people to have the same experience on the internet around the world, and it is not happening. I have experienced that myself; as another colleague said before, let’s not put names on the countries, but I have been in countries where I have not had the same access to the same applications that I use often in my country or in most of the world. And, as Tatiana pointed out, there is risk in many policies that create negative impacts on the way the internet works. So I think we could spend two years discussing what is fragmentation and what is not, and probably we would not reach a consensus. So I will not spend time saying whether the internet is fragmented or not. We have problems. That’s the point. And we know what the problems are. Okay, we can say: don’t say that the internet is fragmented, because that creates the idea that if it is already fragmented, then what’s the problem? But we have to be careful, because in fact there are policies already adopted in many countries that create a negative impact, that carry a huge risk of fragmentation. By not naming that, we can also create the opposite spirit, the idea of saying: people complained when we passed this law and nothing happened; everybody now is saying that the internet is not fragmented, so what’s the problem? So let’s focus on the things that we don’t want to happen. We want people to have the same experience on the Internet, anywhere in the globe, to take advantage of the full power of connectivity. 
We don’t want interference from governments deciding for us what we can or cannot do. At the same time, there are legitimate interests and rights of governments to take care of some things; there is a common understanding in the world about child pornography, terrorism and other things. But the measures taken to prevent access to this kind of information should be proportional and reasonable, and not use a big bomb to kill an ant, you know? There are problems. This is the point. And we have to focus on that, instead of discussing whether the Internet is fragmented or not.

Moderator – Avri Doria:
Thank you. So we move from quibbling over the definitions, the multiple definitions, towards, and I really like this notion of combining two things, sort of a pathway to solutions almost: finding the actual problems and then working on solving them. The next person I have, and it’s the last one of this first round, as it were, is Paul Wilson from APNIC. Do you have a microphone? I can give you this one, or that one’s coming.

Paul Wilson:
Hello, I’m Paul Wilson from APNIC. We’re a member of the technical community, one of the regional Internet address registries. We’ve spoken about fragmentation at all these nuanced and high levels, and I’m not sure that we want to get back down into the nitty-gritty of the internet layer. I just do want to say that ensuring we have an internet layer in that technical sense, the underlying layer that supports everything else, that is unfragmented, that can continue to grow and continue to operate without the various kinds of fragmentation that can happen, is continuous work that needs to be done. As I tried to say in yesterday’s panel, it’s something that shouldn’t be taken for granted, because it can be eroded. Looking at fragmentation as a whole, and even at an individual layer and an individual case, I think we’re all learning that fragmentation is not just a condition of the internet; it’s a quality that varies, that comes and goes, by layer, by context, by geography and so on. And that goes for the internet layer as well: what we’re continually trying to do, through policy making, through the IPv6 transition, through the management of the last supplies of IPv4, is to preserve the integrity of the internet. So if anyone wants to talk about specific aspects of that, like IPv4 versus IPv6 for instance, then we can, but I feel like we’re past that. I wanted to make just one observation about the changing nature of the internet and how these things really do need to be tracked, observed and analysed as the internet grows and changes. 
There’s been a huge trend towards a kind of fragmentation of the internet over the last decade: CDNs, content distribution networks, which take copies of content and move those copies close to the consumers so that they can be accessed quickly. That’s a type of fragmentation because it breaks the model where the user is accessing a service which somehow exists somewhere on the internet. That service, even though it looks like one service, even on one IP address, doesn’t exist in one place anymore according to the classical model; it’s distributed, it’s fragmented, and the original end-to-end model is kind of fragmented by that situation. So that’s a huge trend, and a huge amount of the traffic on the internet is these days delivered through CDNs. To the extent that the APNIC scientist Geoff Huston asked recently whether we were seeing the death of transit on the internet, that is, of the ability of the internet to negotiate a connection from any one point to any other point through transit networks. And it’s a good point, because if you are no longer demanding transit, if you’re no longer demanding genuine end-to-end connectivity, then it may well fade. But then along came COVID, and the incredible plethora of end-to-end, point-to-point video communications that became a necessity of everyday life pointed out the real importance of that end-to-end internet, the ability of any endpoint to effectively connect to any other endpoint. I was struck, actually, by the remote, virtual participation by Javier before, from the other side of the world, on this HD connection: absolutely beautiful, perfect. We have a point-to-point, end-to-end, unfragmented internet that’s allowing that kind of connectivity to take place. So I think that should still be pretty remarkable to all of us, and something not to take for granted. Thank you.

Moderator – Avri Doria:
Thank you. You’re right. It is rather miraculous that we can do that, and that we assume it’s going to work and get kind of flustered when it doesn’t. Okay. We’re going to move on to the next part. We’ve used about an hour, and we’ve basically had a fair number of people give a fair number of good views that have boiled the ocean a little bit for us. Adam’s going to read a comment that was online. Up to now it’s been just talk until you’ve said what you wanted to say; now, with a half hour left, if you’ve got brief points to make vis-a-vis what other people said, Adam will carry around a microphone. And I also wanted to ask if there are other participants here who weren’t the assigned speaking participants but would like to speak and have something to say: please let us know so that we can get a microphone to you and you can speak. Either sit here or Adam will bring you a microphone. So please, Adam.

Dhruv Dhody:
Thank you. Yes, this is on. I do need the exercise, so please call for the mic; I’d love to rush over and give it to you in a second. There is an online comment. It actually covers something that Tanya also mentioned. It’s from Dhruv Dhody, and he says: while all of these can be called internet fragmentation, would you agree that they are not all equal, and that fragmentation at the technical layer that does not allow interoperability at all is a bigger threat than content moderation? Thus, is there a need for us all to be more nuanced when talking about internet fragmentation, rather than clubbing them all together, which does not serve us well? So that was the comment, and I think Tanya touched on some of those issues. Who would like a microphone and to get me moving? Yeah, keep putting her hand up, and then, yeah. Okay, all right.

Ponsley:
Okay, thank you. Ponsley speaking, Gambia NRI. I just want to go back to Elena. She raised something, and all the other speakers have really talked more about it, that this is not really a technical issue, which we know. Even when you try to unpack fragmentation, you discover that most of what people are actually talking about is really something else. There are thousands of ways that the internet is harming people online. There are hundreds of ways that human rights and digital rights have been abused, whether short-term or otherwise. My question is on this political stuff, which is likely what will break the Internet: what type of political situation are you seeing? Because people might interpret these differently, and I’m not sure what the political scenario is. Think of someone who was using the Internet; later he’s overthrown, and when there were elections, he started using the Internet to make some noise to get himself released. Some people, especially in parts of Africa, will consider that shutting down the Internet is fragmentation: you are depriving some people. But actually, it’s not quite that. It’s a matter of the different ways that political situations, political points of view, can cause a breakdown into fragmentation, cutting off a whole continent or a whole sector. Thank you.

Sheetal Kumar:
We’re on the wrong path, and we’re moving away from something that maybe isn’t perfect, but we’re moving away from it. And so my question, or provocation, would be: we all know we’re not taking it for granted, but do we know, perhaps to Raul’s point, what we need to do? Do we agree on what the main issues are? As I said, some of us have been putting together recommendations, including from this IGF, from the multi-stakeholder policy network, for what can be done. Is that useful? Is it helpful to say that if we implemented those recommendations, things would get better? Do we have that common understanding? Because we are all clearly concerned about something.

Moderator – Avri Doria:
Thank you. Thank you. Did you want to address anything directly? It seemed, and then I have the gentleman there, and then I have, no, he’s already got a microphone.

Julius Endel:
Yeah, I’ve got one. Thank you very much. I’m Julius Endel from the W Academy in Germany, and, I forgot your name, sorry, the lady in the black and white, you. So, I don’t know who wants to answer it. How would you connect the discussion about fragmentation and AI? Because what I see is that all the costs of running the internet and providing all the data on all the servers and in the cloud are socialized, while all the profits are privatized by a very small number of companies. So we are kind of doing all the work, and they are scraping all of our data and sucking in the profits. Isn’t that also a kind of fragmentation? So, how would you connect these two, or do you not want to see this kind of connection?

Moderator – Avri Doria:
Okay, trying to keep track of the hands and the order in which I see them. I did have Jorge, but if somebody wanted to respond to what was just asked, yes. Yeah, please.

Elena Plexida:
I wanted to respond to what was asked previously, not right now. And thank you very much for the question; I was actually taking notes of what the other people were saying and trying to react to that, and I was going to get to that point anyway. But let me take it from the beginning. We can keep debating how to define things, of course, and by the way, you know we do it because we were asked to do it. But I really liked Tanya’s point here: let’s look at what we need to preserve. The issues that Temea or Javier online brought up are very, very important. We do have issues with data localization, islands of secluded content, shutdowns, what have you. I’ll try to put them in perspective in order to get to what we need to preserve. If the Internet were split into two, three, four different Internets, then the problem would become of a whole other magnitude, and the frustration that we’re talking about would be of a whole other level. You said before that we assume it just works; so what we need to preserve is the Internet itself. Temea and others also mentioned that legislation meant to address content issues can have an adverse effect at the technical level. I agree, that happens. And no one in the technical community will say that legislation is not needed. But such an effect is usually unintentional, and when you discuss with legislators and you explain, they fix it. 
The trend, and that is what worries me and goes back to the question asked by the gentleman over there, is that while so far legislation has touched the basics of the Internet, the fundamentals, the identifiers, only unintentionally, we now see an effort to apply sovereignty over something that is by definition global. An example I can give is sanctions over IP addresses, as an action that goes in that direction. So yes, that is something that we need to avoid in order to preserve what we need to preserve. Going back to Tanya. Thanks.

Moderator – Avri Doria:
Thank you. Okay, I’m starting to build up a queue here. I’ve got you at the microphone there, then I’ve got Jorge, then I’ve got Raul, then I’ve got Michael, and then I’ve got Tatiana. So that’s the order I’ve managed to build. If I didn’t get it quite right, I apologize.

Robin Green:
It’s on. Thank you. My name is Robin Green. I work with Meta on internet fragmentation issues as well as a range of human rights issues tied to encryption, surveillance and law enforcement access. I thought this was an absolutely fantastic discussion; thank you for hosting it. One of the things that I’ve heard a few times over the course of this IGF, and the many internet fragmentation conversations that we’ve had, is this idea that content distribution networks are fragmenting the internet. And I want to push back on that a little bit, because those networks are ultimately often necessary to actually connect people to services all over the world, to make sure that those services are resilient, and to make sure that people have access to fast internet service. At the end of the day, when we’re talking about internet fragmentation, in my view, what we’re really focused on is the effect. So whether you’re talking about regulatory fragmentation of the Internet that has technical implications, or a core technical fragmentation of the Internet, like some folks have talked about, the thing that we actually care about is the user experience: is the user experiencing fragmentation? 
And if the goal of the Internet, which at least is the goal in my mind, is for people to be able to exercise their fundamental rights, whether those are economic rights, expressive rights, accessing information, engaging in assembly and things like that, then at the end of the day, if their user experience is becoming fragmented in a way that means they can’t fulfill those goals, to me that is internet fragmentation that needs to be addressed. So we can have this larger umbrella of internet fragmentation while still looking at things from a technical perspective and then a user experience perspective. But I think it would be a mistake to step away from the concept of internet fragmentation just because something isn’t directly mandating a technical fragmentation, even where the user experience still winds up being fragmented. And so that’s where I do see things like data localization requirements and other kinds of restrictions on cross-border data flows, restrictions on encryption that implicate users globally, and similarly content takedowns, geoblocking and other restrictions of free expression: those are all elements of internet fragmentation. Whether they’re technical or user-experience oriented, there’s a difference there, but they’re still things that I think we all need to consider.

Moderator – Avri Doria:
Thank you. Okay. Next I had Jorge, and I was also reminded that there was a pending question in the air, about the connection between AI, which has been a favorite subject. Oh, you’re going to have one? Fantastic. Well, we’ll get there. I just, I’m reminded that I had not made sure the… I want to respond directly to that question. Okay, so we’ll get there. Oh, you want to do it now? I can. Okay.

Tatiana Trapina:
So, just so it doesn’t get lost, because it was asked, I wrote it down: how would you connect the discussions on fragmentation and AI? And I would say I would not connect them. I’m sorry. So that’s my answer. Thank you. But I’m still in the queue for other issues.

Moderator – Avri Doria:
Thank you. Jorge, please.

Jorge Cancios:
Okay, after this commentary. Jorge Cancios, with government. On this question, so many things have been said that it’s difficult to add something, but I couldn’t resist. I think here, as we are in the UN, at the IGF, we have to connect this also to the discussions we are having at the global level, to what’s happening at the global level. If you look at how the situation is evolving: just four years ago, we had a report from the High-Level Panel on Digital Cooperation, which was called The Age of Digital Interdependence. So the focus was really laid on what unites us, on how dependent we are on each other through the digital tissue that unites many things. I wanted to mention that because the situation today is a completely different one. I guess that even if such a panel tried to name its report the same way today, it would be criticized as being completely out of touch with the reality we live in, with very fundamental geopolitical tensions. So I just wanted to share that, and also to recall that we are in the midst of this process towards a Global Digital Compact, where internet fragmentation is one of the topics to be considered. And going back to something that Paul and others said before: internet interoperability at the technical level is not a given. It’s not something that we should take for granted. Apart from this history of trust, of building this network, it relies on huge network effects, on incentives and benefits for everyone connecting to this unique network. But the pressures are mounting at the geopolitical level. So there may come a time where those pressures, perhaps joined by alternatives at the standards level and at other levels, become so important that this delicate fabric of trust which holds the tissue together, built by millions of networks, begins to erode. This, I think, is really the fundamental level of internet fragmentation. 
And in the same way that this is a fabric of millions and millions of networks, it is also in the hands of those millions of people taking decisions, with their networks, with their companies, with their governments, who can take decisions going in the right direction or in the wrong one, and can decide to invest in holding that tissue together or to continue eroding it in a direction that may end up in fragmentation. So perhaps this recommendation of investing in the right direction, which is something in the hands of many of the people coming here, could be something for the Policy Network on Internet Fragmentation, and for some good, useful recommendations coming out of this IGF and flowing into the GDC. I hope that was helpful after so many thoughtful inputs.

Moderator – Avri Doria:
Thank you. Starting to have a very long list here in a very short amount of time. I’ve got Raúl next.

Raul Echeverria:
Yes. The colleague who raised the point about political situations left the meeting, but I wanted to come back to that point, because of course, in countries that are not democratic, or with weak democracies, we are accustomed to seeing it treated as normal to impose restrictions on access to content. And we should not be. But it’s not the only problem, and that part is something we expect; now we are facing problems in democratic countries, strong democracies, that are passing laws and developing public policies that are really affecting user experiences. Sometimes it’s based on measures that try to protect intellectual property on the networks, or on taxation or other things. But sometimes those things are adopted out of a lack of awareness of the effect the policies could have on the internet. I’m sorry to say that my experience with policymakers is not as successful as the colleague from ICANN says: many times we explain to policymakers and we are not successful in changing their minds, because, as I said yesterday in another meeting, the incentives of policymakers are diverse. Sometimes they have political decisions to protect an industry, or to protect it from disruption. So they have commitments. They have to move ahead with decisions even knowing that they are creating a negative impact. As a friend of mine usually says, sometimes when policies don’t fit with reality, what some policymakers try to do is change the reality instead of changing the policies. I wanted to come back to the political situation because it’s not only a problem of dictatorships or authoritarian regimes. The risk of having this fragmented experience is really big, and that is what matters, because in the end we care about what the people can do on the internet. So this is what really matters.

Moderator – Avri Doria:
Thank you. Next, I’ve got Michael Rothschild.

Michael Rothschild:
Thank you. Hi, my name is Michael Rothschild. I’m from the Association of the Internet Industry in Germany. I’ve been working with the internet since 1983, and when we started, there were only fragments. There was no internet as a whole; it consisted only of fragments in the various countries, fragments of services, fragments of networks, everything. And what we did at that time was build gateways. Of course, that may not be efficient, and there is a risk of filtering, I admit. But I’m pretty sure that if fragmentation at the technical level goes on, someone will find a technical solution for it, and then we only have to deal with the political stuff. Thank you.

Moderator – Avri Doria:
Thank you. I think I come from that same generation where, for me, the internet’s constantly becoming. Tatiana.

Tatiana Trapina:
Unfortunately, the gentleman who asked the question about a feasible scenario for fragmentation left, but I wanted to say that we can think about a technical scenario, like, I don’t know, some alternative root, or alternative standards, or an alternative system of unique identifiers. Strictly speaking, I do not believe in this, exactly because I think the technical community has enough experience of connecting things by coming up with technical solutions, so that connectivity wins even if something is imposed by governments. Though I don’t think it is completely unrealistic: maybe in the future, if one region, like the European Union, decides to go with absolutely different technical standards, it might happen. Where I see the feasible scenario, and this is, I think, where it becomes very important, is when regulation targets the technical layer in a way that erodes what Jorge called the fabric of trust. When something on the technical layer, be it the root zone service, be it unique identifiers or IP addresses, has different frameworks for governance in certain regions; when the multi-stakeholder governance does not cover it all, or there are frameworks competing with what we have now. And this brings me to the point of what we want to preserve. I think we want to preserve essentially what makes the internet the internet. We want to preserve this uniqueness, this glue, the technical identifiers, the protocols, and ensure that any developments keep them interoperable. But I think much more important, in terms of the feasibility of any, I reluctantly say this word, fragmentation scenario, is that we need to preserve this trust. We need a firm commitment to the multi-stakeholder model of governance. Not of engagement, not of discussion, but of governance, because this is how the technical layer has been governed, and this is what we have to constantly recommit ourselves to. Thank you.

Moderator – Avri Doria:
Thank you. As we come closer and closer to the end, with seven minutes left, please, Tomoaki.

Tomoaki Watanabe:
Yes, thank you. I wanted to answer two questions. One is the split internet discussion and how it relates to AI. I have a slightly different take, and there are two relations that I can think of. Number one, AI, especially the current large-language-model kind of AIs, are built on massive training data sets, which are enabled, in a way, by the unified Internet. So if we want to keep some of the benefits of AI, we have to remain able to communicate and be as connected as we are right now. And also, more in the political domain, I think it’s good that some of the AIs can provide at least some advanced capacity to translate and overcome the challenges that we face in the world. So that’s my take on the relationship. The other question I wanted to address was whether technical-layer fragmentation matters more than content-layer fragmentation. I think the answer is basically yes, but not in a simple way. Suppose all the world’s governments had a very strong, granular and speedy capability to regulate online communications of any kind; then a government wouldn’t really need to shut down Internet connections, because such a measure is always a blunt instrument, and too many people rely on the connected internet to communicate and to provide help. Thank you.

Moderator – Avri Doria:
Thank you. Okay. I think I have four people left. The next one is Javier online, then I’ve got Paul, then Dušan, then Uta, and if I get all those in, we’re really doing great. That means you’re speaking briefly. So Javier, are you ready?

Javier Pallero:
Yes. I hope you can hear me. One thing that I would like to add is more on the side of solutions, just to try to move into something different. When it comes to what we have identified as the core fragmentation threats in terms of what happens with the protocols and identifiers, we have heard that the main threats on that front come from governments, right? Governments that sometimes feel impotent when it comes to trying to control the internet in order to execute some of their public policies or priorities. So maybe a reinvigoration of the multi-stakeholder model, more attention to it, an active denunciation of those extreme advancements by governments, and getting more information to users, or to those who can exert pressure on their own governments, is a way of offering a solution. Maybe we should be doing that. Just thinking about what the main threat to the more specific technical aspect of this is, with more participation and active denunciation of that advancement by governments, we can make a valuable contribution. Thanks.

Moderator – Avri Doria:
Thank you. Paul, please.

Paul Wilson:
CDNs are useful, and I didn’t mean to indicate they aren’t, but they don’t help people to access the internet. They help people to access the specific services and content of specific CDNs and nothing more than that. If we’re talking about user experience, fragmentation to me as a user is a lack of inter-operation between similar services. To that end, I want to have a single instant messaging account, like I do an email account, and still exchange messages with others who choose to use different services, whether they are WhatsApp or Signal or anything else, and I’d say the same about social media. Those services could interoperate, and they generally don’t, and that’s a choice of the companies concerned. I think that will continue, and that will continue, to me, to represent a fragmentation of my experience on the internet. Service companies will continue to do that until they’re required to change that behaviour, and I wouldn’t mind seeing that day come. Thank you.

Moderator – Avri Doria:
Thank you. Please, Dušan.

Dušan:
I’ll sit here. Dušan, for the record, from Serbia. I would like just to express one frustration that I have about fragmentation: we call everything fragmentation. We call filtering fragmentation. I remember, and I agree with the previous talks, that when I first got involved in the internet, it was fragmented. Later on, we were talking about balkanisation, if you remember, around 2014, 2015 and 2016. IDN domain names, for example, are still fragmenting the internet. The technical layer is still the protected layer, as I would say, and it is connecting everything. But we have allowed governments to legislate in their part of the internet, so that part of the internet can be and should be fragmented. On the other side, when we are fragmented by filtering or blocking, let’s not call it fragmentation; let’s talk about those particular topics. Otherwise, we will have a high-level discussion on everything without substance. Thank you.

Moderator – Avri Doria:
Uta, you get the next to last word, because mine will be last.

Umai:
Oh, my. So, since we’ve been collecting indicators for fragmentation and writing up a research agenda about this, I’d like to put one more point next to this: we’ve been focusing very much on the technical layers. And while that, of course, is very important, I find it important to mention that there is, if you will, a social layer underlying the internet as a network of networks. It consists of the network engineers who maintain these systems and form a huge informal community, with informal values and forms of coordination, that may be aging. So, if we are looking at this in the future, then we may want to look at this community as well, and its capability of actually keeping things together.

Moderator – Avri Doria:
Thank you very much. And thank you all for a great conversation. And I’m certainly not going to sum it up, because that would take forever. But this simple mind of mine walks away with fragmentation as a four-letter word with lots of nuance and lots of use. So, thank you very much. Thank you.

Speaker statistics

Speaker                   Speed (wpm)   Length (words)   Time (secs)
Aha G. Embo               178           618              208
Dhruv Dhody               197           178              54
Dušan                     101           182              108
Elena Plexida             229           1376             361
Javier Pallero            174           1110             382
Jennifer Chung            173           810              282
Jorge Cancios             115           527              275
Julius Endel              144           163              68
Michael Rothschild        158           132              50
Moderator – Avri Doria    180           1760             588
Nishigata Nobu            159           808              305
Paul Wilson               181           898              297
Ponsley                   204           262              77
Raul Echeverria           134           895              401
Robin Green               178           466              157
Sheetal Kumar             172           696              242
Tatiana Trapina           162           1145             424
Timea Suto                154           749              292
Tomoaki Watanabe          133           667              301
Umai                      180           150              50

Leveraging the FOC at International Organizations | IGF 2023 Open Forum #109



Full session report

Veronica Ferari

The analysis explores a range of important points discussed by the speakers. One significant topic highlighted is the importance of multi-stakeholder engagement in shaping Internet policies. Both APC and FOC support and encourage people to use and shape the Internet. This involvement ensures that policies are representative and inclusive, taking into account the diverse needs and perspectives of different stakeholders.

Another key point raised is the significance of incorporating the voices of marginalized groups in decision-making processes. APC, FOC, TIFER, and the Digital Equality Task Force are actively working towards this goal. They have made commendable efforts to include and amplify the voices of marginalized communities, who are often underrepresented in decision-making arenas. Recognizing that decision-making should be inclusive of marginalized voices is crucial for reducing inequalities and promoting gender equality.

The discussion also highlighted concerns regarding AI and emerging technologies. APC draws attention to the fact that these technologies have the potential to create or exacerbate existing inequalities. It is crucial that norms and frameworks governing the use and development of AI and emerging technologies take into account the potential societal implications, ensuring that they do not reinforce inequalities or promote discrimination.

FOC’s role in coordinating international discussions on cybersecurity and cybercrime is recognized as pivotal. The importance of taking a human-centric approach to cybersecurity, one that prioritises human rights and builds on international human rights frameworks, is emphasised. The Joint Statement on the Human Rights Impact of Cybersecurity Laws, Practices, and Policies from 2020 underscores this need. It is suggested that FOC could build on existing language and positions where consensus already exists, further strengthening its role in promoting cybersecurity while safeguarding human rights.

The speakers also touch upon the significance of prioritising cybercrime treaty negotiations. It is agreed that this should be considered a key priority, given the growing threat of cybercrime and the need to ensure effective international cooperation to combat it. Furthermore, concerns are raised regarding the weakening of human rights language in cybersecurity negotiations. This observation highlights the importance of maintaining strong human rights principles within the context of cybersecurity discussions.

The need for multi-stakeholder and civil society participation in the GDC (Global Digital Compact) negotiations is strongly advocated. It is argued that inclusive participation from different stakeholders, including civil society, is essential to ensure that decisions and policies are informed and representative of global perspectives. A civil society meeting held on day zero of the GDC is mentioned, indicating efforts to coordinate and include civil society voices in the negotiation process.

Visa issues are identified as a barrier to global majority voices participating in the conversation. The inability of staff from APC and others to attend the event due to these issues highlights the need for more inclusive and accessible processes to allow for the equal representation of all voices in global discussions.

The analysis also reveals support for regional inclusivity in multi-stakeholder representation. The experience with Canada during the chairship, which involved organising regional consultations, is cited as evidence of this support. Regional representation ensures that the perspectives and needs of specific regions are taken into account when formulating policies and making decisions.

Another important observation made during the analysis is the need for better coordination between different forums and initiatives. The presence of numerous organisations following similar processes suggests the potential for duplication and inefficiency. Improved coordination can enhance collaboration and avoid unnecessary overlaps, enabling more effective and streamlined progress towards common goals.

In conclusion, the analysis highlights the significance of multi-stakeholder engagement, the inclusion of marginalized voices, the potential inequalities associated with AI and emerging technologies, the importance of a human-centric approach to cybersecurity, the prioritisation of cybercrime treaty negotiations, concerns over weakening human rights language in cybersecurity negotiations, the need for multi-stakeholder and civil society participation in the GDC negotiations, the impact of visa issues on global majority voices, support for regional inclusivity, and the necessity for better coordination between different forums and initiatives. These insights underscore the importance of inclusivity, representation, and cooperation in shaping Internet policies and digital cooperation globally.

Audience

The discussion highlights the importance of including diverse voices in decision-making processes, emphasizing that it is crucial for creating inclusive and fair outcomes. The audience member, who works at the U.S. Department of State and has experience in chairing discussions and decision-making processes, stresses the significance of diverse perspectives in shaping policies and initiatives.

However, challenges arise in bringing together global majority voices due to the presence of multiple forums and processes. The audience member’s experience at the U.S. Department of State reflects these challenges. Hence, it is essential to address these challenges in order to effectively listen to and represent the voices of the global majority.

During their chairship year, the Dutch government is advised to adopt a focused approach and actively engage with the existing global majority voices. By doing so, they can ensure a more inclusive and representative decision-making process. The example of the Canadian government is cited, wherein consultations were conducted with every region to gather comprehensive and diverse input.

Moreover, it is emphasized that strengthening the existing voices in the coalition is crucial for encouraging new members to join. By supporting and amplifying the existing voices, the coalition can attract a wider range of perspectives and enhance its impact. The value of collaboration and partnership is also highlighted as a means of strengthening the coalition.

Overall, the discussion underlines the significance of inclusivity in decision-making processes and addresses the challenges in bringing together global majority voices. It suggests adopting a proactive and focused approach to engaging with and strengthening existing voices while attracting new members. In doing so, decision-making processes can become more equitable and representative.

Alison Petters

The US government’s chairship of the Freedom Online Coalition (FOC) has played a pivotal role in bolstering international engagement and coordination in technology-related issues. Through collaborative efforts with global partners, the US has effectively addressed key technological challenges and promoted a rights-respecting approach to technology-related policies. This positive sentiment is reinforced by the fact that the US government has shown strong commitment to the FOC, with both presidential commitment and the active engagement of the Secretary of State in the coalition’s activities.

One of the notable achievements of the FOC under the US chairship is its successful response to new issues concerning human rights online. The coalition has issued a statement on the threat of surveillance technologies and has developed guiding principles on the government’s use of surveillance technology. These efforts demonstrate the FOC’s dedication to safeguarding human rights in the digital space.

However, there are challenges that the FOC needs to address. Integrating human rights perspectives with digital sectors and increasing the visibility of the FOC are two issues that require attention. It is crucial to consider diverse perspectives when making decisions and to ensure that the FOC’s activities are visible and impactful.

To achieve a holistic impact on governments worldwide, there is a need for more diversity in the FOC’s member countries. The challenge lies in bringing more countries from the global majority into the coalition. By including a broader range of countries, decisions made by the FOC will have a more comprehensive impact on governments globally.

The FOC also has potential as a key voice in the ongoing negotiations of the UN’s cybercrime treaty. Alison Petters, an advocate for a tight scoping of the treaty, supports the FOC as a mechanism to coordinate perspectives among like-minded partners. This demonstrates the value of the FOC in shaping global discussions on cybercrime.

Additionally, the FOC recognizes the importance of protecting human rights and marginalized groups online. Alison emphasizes the need to not undermine existing human rights frameworks and highlights the importance of continuous consultation with stakeholders to represent their perspectives.

The FOC supports the multi-stakeholder model in online governance processes, recognizing the need for meaningful engagement from various stakeholders. Alison emphasizes the importance of getting the modalities right so that multi-stakeholders can effectively contribute to the decision-making processes.

Adapting to address new threats to human rights online is crucial for the FOC’s continued relevance. Surveillance technologies and artificial intelligence pose new challenges, and the FOC must stay ahead to effectively protect human rights in the digital realm.

While expanding the diversity of the advisory network is crucial, efficiency should not be compromised. Balancing the inclusion of diverse voices with maintaining productivity is essential for the effective functioning of the advisory network.

The FOC has demonstrated successful engagement with global majority governments and has actively included non-FOC members in discussions about technology and human rights. This intensive dialogue and ongoing engagement contribute to the FOC’s mission of promoting global cooperation on these critical issues.

Furthermore, the FOC recognizes that governments with limited resources can still be involved through support and by understanding the benefits they stand to gain. This approach ensures that all governments have the opportunity to participate and contribute.

Lastly, civil society plays a crucial role in supporting the FOC’s mission. Beyond providing additional support, civil society organizations should also help expand networks and contribute to consultations. The advisory network serves as a key source of support for the FOC, and the coalition has actively consulted civil society in key countries to gather diverse perspectives.

In conclusion, the US chairship of the FOC has strengthened international engagement and coordination in technology-related issues. The coalition has successfully addressed new challenges concerning human rights online but faces obstacles in integrating human rights perspectives with digital sectors and increasing its visibility. There is a need for greater diversity in the FOC’s member countries to ensure a comprehensive impact on governments worldwide. The FOC can also play a significant role in negotiating the UN’s cybercrime treaty and advocating for the protection of human rights and marginalized groups online. The multi-stakeholder model in online governance processes is supported, and the FOC must adapt to new threats to human rights in the digital space. The advisory network is essential but expanding its diversity should be balanced with maintaining efficiency. The FOC’s engagement with global majority governments has been successful, and governments with limited resources can still participate with support. Civil society’s involvement goes beyond additional support and should contribute to network expansion and consultations.

Ernst Norman

The analysis focuses on the Freedom Online Coalition (FOC) and its efforts to promote human rights and digital cooperation. Various speakers expressed their support for the coalition and highlighted specific aspects of its work.

Ambassador Ernst Norman expressed his support for FOC’s initiatives in training policymakers on complex technical topics related to artificial intelligence. For example, Canada has trained FOC policymakers and applied this knowledge in diplomatic negotiations. The FOC’s Joint Statement on Artificial Intelligence and Human Rights was praised for its continued relevance.

The United States was commended for energising the coalition and bringing important issues to the table. They played a significant role in including FOC in the Summit for Democracy and updating the coalition’s terms of reference, preparing it for the next decade.

The Netherlands emphasised the importance of a multi-stakeholder approach to Internet governance, with a strong focus on human rights. The diverse membership and multi-stakeholder structure of the FOC were highlighted. The Netherlands aims to coordinate its positions in future digital governance forums through the FOC.

Ambassador Ernst Norman also advocated for expanding digital equality and connectivity. He proposed broadening the FOC’s membership, particularly with like-minded countries from the global majority. The FOC’s global representation and network can support this endeavour.

Enhancing engagement with all stakeholders was deemed crucial. The FOC’s advisory network involves stakeholders providing advice on governance aspects. Ambassador Ernst Norman aims to ensure widespread support and realistic positions for the FOC in negotiations on the Global Digital Compact (GDC).

The importance of agenda setting and internal coordination in addressing human rights and digital threats was highlighted. It was suggested that the agenda for discussing human rights and digital threats should involve not only the presidency but also all stakeholders, including member states and the advisory board.

Inclusivity and the reduction of civic space were viewed as important topics that require extensive discussion. There is concern about the diminishing civic space and the marginalisation of NGOs in many countries.

Furthermore, the decrease in online civic space was considered crucial given the digital threats to human rights. It was observed that the FOC needs to strike a balance between embracing diversity and maintaining its effectiveness in addressing these issues.

However, it was suggested that the FOC should avoid trying to mimic the United Nations. Including all countries and engaging in impossible negotiations were viewed as an undesirable approach. Instead, the FOC should focus on maintaining meaningful exchanges and taking effective positions.

Overall, the analysis presented the support and opinions of various speakers on different aspects of the Freedom Online Coalition. It highlighted the importance of training policymakers, energising the coalition, upholding human rights, expanding digital equality, and engaging stakeholders. It also underscored the need for agenda setting, internal coordination, inclusivity, addressing the decrease in online civic space, and maintaining a balanced approach within the FOC.

Irene

During the discussion, the speakers highlighted the significant role played by the Freedom Online Coalition (FOC) in coordinating multi-stakeholder discussions on AI and Human Rights. The FOC served as a crucial platform for connecting various communities with differing levels of capacity and knowledge, facilitating the sharing of information and experiences. The speakers emphasized the importance of inclusivity and a proactive approach in organizing these discussions, despite the challenges they presented. The process of organizing the multi-stakeholder discussions often took longer than expected due to the complexity of the issues involved. However, it was acknowledged that although inclusivity can sometimes lead to discomfort, it is a necessary aspect of the process.

The speakers also discussed the tendency of governments to not naturally adopt a consultative approach. One speaker, Irene Xu, observed that governments often do not have a natural inclination towards being consultative. This observation highlights the need for deliberate efforts to foster consultation and engagement between governments and various stakeholders.

The rise of digital technology has brought technical issues to the forefront of political discourse. It was noted that even developed countries like Canada find it difficult to track all digital and tech initiatives. The complexity and ever-changing nature of these initiatives require continuous efforts to promote awareness and understanding.

Furthermore, there was a call for more specific guidance to engage with global majority countries and civil society. The importance of two-way communication and understanding the specific engagement requirements of these groups was emphasized. It is crucial to develop strategies that take into account the unique challenges faced by these communities.

The speakers also discussed the value of capacity building, technical expertise, and understanding of international systems in engagement efforts. An improved understanding of international systems like the UN in New York can provide valuable insights and contribute to more effective engagement. Efforts should be made to provide capacity-building opportunities and technical expertise to strengthen engagement and ensure meaningful and productive interactions.

It was suggested that being more creative with multi-stakeholder collaborations and multilateralism can help address capacity issues efficiently. Collaborative initiatives such as the FOC, International Idea, and Media Freedom Coalition were cited as examples of successful partnerships that have enabled the development of important initiatives like the Global Declaration on Information Integrity.

In conclusion, the speakers expressed their overall support for multi-stakeholder collaborations as they lead to efficient outcomes. The FOC, along with other collaborations, has shown that productive results can be achieved through such partnerships. These collaborations have facilitated the exchange of knowledge and the development of initiatives that contribute to the promotion of AI and Human Rights.

Boye Adekoke

The analysis emphasises the importance of inclusivity and multi-stakeholderism within the Freedom Online Coalition (FOC). It showcases the diverse range of expertise that the FOC has within its membership, which greatly contributes to the development of digital rights and effective governance. The involvement of stakeholders from various backgrounds, including governments, civil society organisations, and private sector actors, ensures that a wide range of perspectives are considered in decision-making processes.

One notable aspect of the FOC is its strong accountability demonstrated through the statement development and engagement process. This process involves rigorous consultations with stakeholders and ensures that decisions are made collectively, thereby enhancing transparency and legitimacy. As a result, the FOC’s outputs are seen as reliable and trustworthy due to the inclusive and participatory nature of their development.

In contrast, the analysis raises concerns about the over-reliance on multilateralism as a solution to global challenges. It highlights the potential for inequitable power dynamics in multilateral forums, which can lead to disproportionate influence by powerful nations. This imbalance may result in the limited ability of small or less powerful countries to shape global norms that align with their interests and needs. Additionally, the complexity and sometimes contradictory nature of multilateral rules can make it challenging for countries to navigate and adhere to them effectively.

However, the FOC is presented as a potential solution to mitigate the challenges associated with multilateralism. Due to the diverse range of stakeholders involved, the FOC is capable of providing a more balanced perspective on digital rights and governance issues. The coalition’s strong accountability system ensures that decisions and actions are held to high standards, further enhancing its credibility. Moreover, the FOC’s active involvement in global processes has proven beneficial, as it leverages the inputs and expertise of its diverse members.

In conclusion, the analysis underscores the importance of inclusivity and multi-stakeholderism in the FOC for effectively addressing digital rights and governance challenges. It acknowledges the strengths of the FOC, such as its diverse expertise, strong accountability, and legitimate outputs. While caution is warranted in heavily relying on multilateralism, the FOC can serve as a valuable platform for mitigating the risks and complexities associated with it.

Maria

The discussion held at the IGF focused on leveraging the experience of collaboration and multi-stakeholder engagement to shape global norms and advocate for the rights of human rights defenders, civil society journalists, and other stakeholders. The importance of the Freedom Online Coalition (FOC) as a valuable platform was emphasized, with recognition of its capacity to progressively enlarge and welcome more diverse participants. The discussion highlighted the need for interoperability and the use of existing frameworks, instead of establishing new regulations, to shape FOC priorities. Incorporating the inclusion agenda was seen as a key area for FOC to make an impact, promoting reduced inequalities and partnerships for the goals. The FOC’s role in coordinating international discussions on cybersecurity and its commitment to inclusivity through diverse stakeholder engagement were also emphasized. The use of sub-entities within the FOC to shape priorities, improving diplomatic network coordination, and government coordination for capacity building and inclusivity were identified as critical. In summary, the FOC’s work should be prioritized and improved to enhance inclusivity, ensure the implementation of global norms, and promote the rights of all stakeholders.

Session transcript

Maria :
We are starting the last day of the IGF 2023 with this very relevant conversation about how we can leverage the Freedom Online Coalition at international organizations and support the exercise of rights. For this conversation, I am Maria Paz Canales, head of legal policy and research at Global Partners Digital, and I have the pleasure of being the moderator, joined by a distinguished panel of representatives from the governments of Canada, the United States and the Netherlands, which held the chair of the Freedom Online Coalition in the previous period, hold it in the current period, and, in the case of the Netherlands, will take it over for the following one. I am also joined by distinguished members of the advisory network of the Freedom Online Coalition, who represent civil society organizations and bring their perspective to this very important conversation about the opportunities and challenges of using the FOC to shape global norms and advocate for human rights defenders, civil society, journalists and other stakeholders in multilateral institutions and processes. So, a very relevant conversation for all the moving parts that we are confronting at this moment. Because of that, in my opening remark I want to concentrate on this idea of interoperability, which we usually associate with a more technical concept; today, more than ever, we are seeing the need to ensure interoperability also regarding frameworks and efforts: how we can leverage the experience of collaboration at the multilateral level, together with all the experience and richness that come from multi-stakeholder engagement, which is the natural strength of the Freedom Online Coalition. 
So in that sense, it’s very important to remember that achieving true interoperability requires common objectives, and the Freedom Online Coalition has championed precisely the identification of common goals and objectives among the like-minded states united in this coalition. It has also championed the idea of progressively enlarging and welcoming more diversity in that participation, with new states recently joining the Freedom Online Coalition. One of those very relevant common objectives is the approach of protecting and promoting human rights that unites all the members of this coalition, captured in a slogan that many of us human rights advocates in the digital sphere have pursued for many years: make the same rights valid online and offline. And for this, it is not necessarily imperative to establish entirely new regulations. Sometimes we need to take advantage of what we have already developed in many frameworks. To that end, the advisory network has consistently supported the work of the governments, trying to leverage all the advocacy work, experience and interpretation that come from the international human rights system in order to enhance this collaboration and promote more effective protection of rights. With that note, I want to give the floor to the relevant people in this conversation: the ones that represent, as I mentioned at the beginning, the past experience in leveraging the value of this network, the ones that represent the present experience, and the one that comes with a lot of new plans, hopes and brand-new possibilities to continue this very fruitful collaboration. So first, we will welcome Ms. Irene Xu, the representative from the Canadian government, to give us a little bit of her thoughts on the experience of the Canadian government leading the efforts of the FOC. Please, Irene.

Irene:
Great, thank you, everyone. So I'll speak a bit about Canada's experiences. We were the chair during the 2022 year, and before that, in the lead-up to the UNESCO Recommendation on the Ethics of AI, we were the chair of the Task Force on AI and Human Rights. And before the negotiations really started, we thought that we had to take a proactive and deliberate approach to having a multi-stakeholder discussion that would inform all of our engagement in the negotiations. So we started with a briefing from UNESCO to the FOC Paris diplomatic network, which our mission to UNESCO leads. One of the benefits of the FOC is that, even though we're very like-minded in terms of values and principles, it is fairly cross-regional, with differing levels of capacity and knowledge on these issues. So that first briefing was really important to get everyone on the same page about what was at stake, the main issues, and UNESCO's goals for the recommendation. And from there, it was very much an iterative and sometimes messy process. We had regular meetings within the task force involving countries, civil society, tech companies and the advisory network. And it was really helpful, because these issues are necessarily multidisciplinary. We're used to that when we're formulating national negotiating positions, having to talk to different departments, civil society, industry. So you had to bring together people with policy experience, with multilateral experience, and with technical expertise, and very few people know all three and could try to bring those together. Even in the negotiations themselves, which were unfortunately all virtual due to COVID, it was very much a multi-stakeholder negotiation, with some delegations even represented by professors with expertise in AI.
And so you had to do a lot of translating between the different communities, but that's why the FOC was such a valuable place to do that coordination, bringing all those communities together. And then just maybe some lessons learned. Like I said, being inclusive and multi-stakeholder really needs to be a deliberate and proactive decision from the start. It's not going to happen on its own. Plan for it to take more time than you think it will. The issues are complex. There are always going to be tensions. And as we like to say, if you're not uncomfortable, you're probably not being inclusive enough. And as governments, it's not always our natural tendency to want to be consultative, which is why it needs to be a deliberate decision to step out of normal practices to make that happen. Thanks.

Maria:
Thank you very much, Irene. And on that note, about the value of multi-stakeholder engagement, and even the value of feeling a little bit uncomfortable, which I think is very important, I would like to invite Veronica Ferrari from APC, the Association for Progressive Communications, who is a member of the advisory network, to comment a little bit on the benefits of this multi-stakeholder dialogue and on what the experience has been of championing it through the work of the advisory network in collaboration with the FOC.

Veronica Ferrari:
Great. Thanks, Maria Paz, and thanks for the remarks from Canada. I would like to thank the FOC for incorporating the inclusion agenda and putting it at the center of the FOC. That was continued under the U.S. chairship, and we hope it will be continued in the next chairship. So, again, thank you for the opportunity and the invitation to speak today. I'm glad to be here. As Maria Paz was saying, I am a member of the advisory network, where I represent APC. APC is a multi-stakeholder network, with members in over 40 countries located mostly in the global majority. We are a network committed to creating a just and sustainable world by supporting people to use and shape the Internet. So APC advocates and works towards more robust and meaningful multi-stakeholder collaboration, where those who are affected by digitalization and digital policies, particularly marginalized populations, have a voice in the decision-making process. And I think this has been really important for us at both the national and international levels. In this sense, APC sees the FOC as a valuable platform to advance this goal of multi-stakeholder collaboration. As I just said, the increased emphasis at the FOC on digital inclusion and on incorporating the voices of marginalized groups is, we think, a really positive step, though there is more to be done in that sense. I also wanted to highlight the role of the task forces and the sub-entities as good examples of multi-stakeholder collaboration within the FOC, both in addressing specific focus areas and in translating principles and statements into practical actions.
So, for example, in the case of AI, we believe that when discussing norms related to artificial intelligence and emerging technologies, the focus, from APC's perspective, should be on the implications of these systems for human rights, for social justice, and for sustainable development. The norms discussion should not be only about technology but also about the inequalities that these technologies can create or even exacerbate. So for the FOC, when working on new and emerging technologies and AI, it is key to work with the Task Force on AI and Human Rights but also with the Digital Equality Task Force to incorporate perspectives from marginalized groups. The role of the sub-entities has been key in shaping FOC priorities and the program of work, in informing discussions on these issues through learning opportunities, and as a means to engage other groups that are not necessarily part of the FOC and the advisory network. And I wanted to bring an example of what we believe in, in terms of setting norms that build on multi-stakeholder collaboration. I wasn't part of the FOC at that moment, but my colleagues from APC were really involved in this: the FOC Joint Statement on the Human Rights Impact of Cybersecurity Laws, Practices, and Policies from 2020. That statement contains recommendations for national cybersecurity practices and international processes and draws on the input of a multi-stakeholder FOC working group on that topic. It underscores the importance of a human-centric approach to cybersecurity and the need to build on international human rights frameworks when shaping international cyber norms. So, this leads to my final point.
We believe that the FOC could play a key role in coordinating international discussions, for example on cybersecurity and cybercrime, particularly at the UN and some of the processes there, building on language and positions that already exist and on which there is consensus. That was one of the points I wanted to mention. I know that we are going to talk about more processes and connections, so I may stop here, but I just wanted to highlight the importance of multi-stakeholder collaboration, how the FOC has proven to be a key platform for that, and how we can still do better and more in that sense. So thank you, Maria Paz.

Maria:
Thank you very much, Veronica, for those remarks. Now I'm going to give the floor to Alison Petters, the representative of the U.S. government, which has held the chairship of the FOC in this period, to also share a little bit more about this engagement through diplomatic efforts and how you have tried to make this coordination more effective. That is always a challenge: when a group starts small it may be easier, but when you need to accommodate and welcome new members and new realities and new contexts, there are challenges in the coordination itself. So please share a little bit of the U.S. government's experience leading during the last chairship of the FOC.

Alison Petters:
Well, first, good morning, everyone. Thank you so much, Maria. It's really a pleasure on the last day of IGF to be joined by two very close friends, partners in crime, the government of Canada and the government of the Netherlands, as well as our friend Veronica from the advisory network of this global coalition focused on human rights online. The United States has been tremendously thrilled to be the chair of this global coalition this year. We set out at the first Summit for Democracy to bolster both our own engagement and work through the Freedom Online Coalition, and the work of the coalition as a whole, in terms of impacting multilateral and multi-stakeholder processes around the globe focused on technology-related issues. I think we learned a lot of hard lessons. We saw both challenges and successes. Perhaps the most important lesson learned for us is the need to have very strong political will at the top of our leadership chain as the chair of this coalition. We had a presidential commitment to chair the coalition. We've had engagement from our Secretary of State. We recently hosted a ministerial-level conversation at the UN General Assembly High-Level Week. We've been able to bring more members into the coalition as a result of that strong engagement and political leadership at the top of our chain. Advancing a rights-respecting approach to technology-related issues is central to the Biden administration's approach to technology policy, and that has really helped us as the chair to keep holding ourselves accountable and to get a lot done this year. Certainly, we saw a lot of successes in terms of building the capacity of the Freedom Online Coalition during our chairship. Perhaps most importantly, building up our diplomatic networks both in Geneva and New York, and working to expand them in other cities as well, in order to coordinate in advance of some really key multilateral processes.
So we've coordinated through the Freedom Online Coalition in New York around the UN cybercrime treaty negotiations, for example, working closely with the advisory network to bring their perspectives into those treaty negotiations. We've coordinated through the Freedom Online Coalition in advance of an emerging technology resolution being considered in the Human Rights Council, making sure also that we're talking through which priorities, and perhaps which threats to human rights, are most critical to address in any such resolution. There's been coordination in UNESCO around the guidance for digital platforms, making sure that we're bringing in the perspectives of the advisory network there in particular, as there have been very strong views amongst stakeholders on that process. The second piece we've seen in terms of successes is really making sure that we are giving opportunities for members to facilitate coordination not just in capital cities but also through our embassies in countries around the globe. That is perhaps most critical when we're talking about responses to threats to human rights online, such as particular cases of internet shutdowns, or particular cases where we've seen human rights defenders targeted by digital attacks. Making sure that we are strengthening our coordination in those countries, directly with our diplomats serving at our embassies, has really been a key success of our chairship this year. And third, in terms of successes, I think we have been successful in bringing new issue sets into the Freedom Online Coalition as it continues to evolve. We had the entire Freedom Online Coalition issue a joint statement in the Human Rights Council most recently looking at the threat of surveillance technologies. We were able to gain a host of additional governments. We're so pleased.
I think the number is nearly 60 now that have joined on. So we've taken this issue set, we've coordinated in this coalition, and then we've taken it out to other governments to join us. Similarly, we issued a set of guiding principles on government use of surveillance technologies. There are, of course, and I don't need to tell many of you in this room, a suite of surveillance technologies that are front and center in terms of the threats to human rights defenders, journalists, political opposition figures, dissidents, you name it. This is a particular threat when we talk about artificial intelligence systems embedded in those surveillance technologies, and we have been successful in developing these guiding principles on government use of surveillance tech that really establish what rights-respecting use of these technologies would look like. We were pleased that the whole FOC joined on to these guiding principles, and then, again, we were able to take that into the Summit for Democracy context and gain additional support. Certainly, though, we're not without challenges and room for improvement. I think one of the biggest challenges we have is just linking human rights folks up with cyber and digital folks in each and every one of our governments. I'm sure we would all agree. Sometimes we can be siloed, and so the need to bring in both of those perspectives when we're making decisions and developing outputs of this coalition has really been a challenge for us, and, I think, for every government at this table and in the FOC. A second challenge, of course, is visibility of the Freedom Online Coalition. There are a lot of coalitions out there.
I'm engaged in a number, as I'm sure both of you are as well, and Veronica from the advisory network, and so making sure that we are keeping the Freedom Online Coalition front and center in a lot of these policy discussions is really a place where I think we continue to feel there's room for improvement. And last, I'll say we continue to see room for improvement in terms of diversifying our membership, bringing in more countries from the global majority. This is something that has been a critical priority for us. To your point, Maria, the aim is not to grow this to too large a size, where coordination just becomes near impossible, but really to grow it with diversity of perspectives in mind, so that we are not just making decisions that impact governments in one region or another, but making decisions that are holistic of the entire globe. This continues to be a priority, and we look forward to working with the government of the Netherlands to bring more members of the global majority into the conversation. I could probably go on with the challenges and areas for improvement, but we wouldn't have been able to achieve the successes that we did without the support of you all at the table and so many in this room. So thank you very much.

Maria:
Thank you very much, Alison. And with that, I think this is the perfect segue to Ambassador Ernst Norman, who represents the Netherlands, the new chair of the Freedom Online Coalition for the following year. Precisely with all these relevant lessons and experiences that have been shared by your colleagues who previously held the seat of the chairship: what are your views and your perspective on the challenges, the opportunities and the plans that you bring as the new leadership of this coalition? Thank you very much for being here.

Ernst Norman:
Thank you very much, Maria. I'm so glad to be invited to this table as a newcomer. It's fascinating to attend this Internet Governance Forum, to meet so many people, and to have all these interesting discussions, like the one we are having this morning. And I want to give a special mention to my colleague sitting in front of me, who is actually responsible for all the work on the Freedom Online Coalition from our side. So thank you for that. And I want to thank Irene and Alison for sharing your lessons learned on the Freedom Online Coalition. Canada has shown us how Freedom Online Coalition policymakers can be trained on difficult technical topics like artificial intelligence, and how this knowledge can then be used in diplomatic negotiations. Although written two years before the public launch of generative AI, the FOC Joint Statement on Artificial Intelligence and Human Rights still holds up. This is due to the fact that Canada organized, during the pandemic, virtual classes for the policymakers, where they were informed by experts in academia, tech companies and NGOs on AI and machine learning. This knowledge base ensured that the FOC was in an excellent position to coordinate its position around the UNESCO negotiations on AI and ethics. The US has done magnificent work on further energizing the FOC, and Alison mentioned a number of issues that have been brought to the table. You have brought the FOC to the Summit for Democracy and underlined at the highest level that digital human rights is one of the biggest challenges of our time. Also, the US has done some important housekeeping on the coalition. With new and updated terms of reference, the coalition is ready for the next 10 years. Although this might not be the most sexy subject from a PR perspective, it is among the most difficult tasks for a diplomatic network. Getting everyone to agree on these changes was no small feat. So thank you very much for doing this important work; it makes our work easier again.
Thank you. Now it's up to us to continue these important lines of work, and that in a key year for digital governance. The GDC, as we have all been seeing in the last few days, will be the internet governance event for next year, followed quickly by the WSIS Plus 20 review. The next 18 months will be pivotal for the future of the internet and for making sure that it will be the internet we want. The Netherlands aims to keep the Internet multi-stakeholder organized, with a strong focus on human rights as a cross-cutting theme. For us, the FOC will be a key coalition for coordinating our positions in these important forums, for three good reasons. First, the FOC has played a key role in earlier processes and has proven to be an essential force in protecting the multi-stakeholder model of the Internet. Secondly, and possibly more important, the FOC is a global, inter-regional coalition with countries from all continents. As we have heard in the last few days, digital equality and expanding connectivity are still challenges not sufficiently addressed by past IGFs. If we want to move forward on Internet and AI governance, we must include a global majority perspective. We will therefore seek to broaden the FOC's membership, I would say further expand it, because you have done excellent work already on that topic, but especially with like-minded countries from the global majority, and have them engage actively in the discussions. Holding the FOC presidency, we want to make sure that the FOC's position in the GDC negotiations is widely supported and realistic. And thirdly, with the FOC we have a long history of engaging with all the stakeholders. They are involved through the FOC advisory network and provide us, governments, with solicited and unsolicited advice on these important governance aspects.
These elements of the FOC, our history, our expertise, our diverse membership and our multi-stakeholder structure, make the FOC an excellent coalition for coordinating our positions on important themes such as Internet governance and AI. I would also like to thank all the countries who have supported this important declaration. Thank you.

Maria:
Thank you very much, Ambassador. And I think that with that final remark on the value of multi-stakeholder engagement for digital technologies governance, I would like to bring in someone from the advisory network who joins us online, to make this conversation even more inclusive. We have Boye Adekoke from Paradigm Initiative, who will provide some additional remarks and thoughts about how we can think about the multi-stakeholder model in digital governance, the challenges that exist for this model with the shift towards multilateralism, and how the value of this model, supported so strongly, as we have heard, by the past and present chairships, can be enhanced and can be an opportunity to continue advocating for and advancing the mission of ensuring a human rights-based approach in all digital technologies governance. So, Boye, can you hear us, and can you bring your perspective? Thank you.

Boye Adekoke:
Yeah, thank you very much. I hope I'm audible and you can hear me over there. Thank you very much for the opportunity to contribute to this conversation, and thanks to the FOC support unit for putting together this session and for asking me to share a perspective. I think a lot has been said. Again, congratulations to the government of the Netherlands for assuming the chairship, and great work by the United States government over the past year. A lot has been said during this conversation, and some of the things I was going to mention have already been mentioned, so I will gladly skip those so that I can make the other points. The point about inclusivity has been stressed, and that of multistakeholderism, in terms of the power of the FOC to bring all of these to the table. But I also want to say that one of the benefits the globe can gain from leveraging what the FOC has done in the past year is that the FOC is also a platform with diverse expertise. So apart from the fact that the FOC has done a great job in ensuring inclusivity and multistakeholderism, what you will also find within the FOC is a diverse kind of expertise. Even within different stakeholder groups, you find different people coming with different expertise. For example, I'm a member of the Advisory Network. I represent an organization, but I will tell you for free that even while I represent an organization, we also represent a network of a number of other civil society organizations focusing on different kinds of expertise within the digital inclusion and digital rights space across the globe. So I think this is also one of the values that the FOC can offer some of these global processes in terms of setting global norms.
But something else I've seen working with the FOC is accountability. The FOC has a very strong accountability system in terms of how the Advisory Network and the FOC itself, as a coalition, engage in developing statements and comments on the many processes the FOC has been involved in. And I think this is very valuable for setting reliable global norms and creating effective systems: it can help prevent abuses of power and ensure that global norms are implemented effectively. Another benefit that I see in this context is legitimacy. A lot of the statements that the FOC has put out have gone through a rigorous process involving Advisory Network members and the FOC nations themselves. So at the end of the day, what we have a lot of the time are legitimate outputs. And I think this principle can also be mainstreamed into how global norms are set, in terms of getting that legitimacy from a diverse group of stakeholders. These are a few points about how the FOC operates that I think global norm-setting can benefit from. And, very quickly, before I keep quiet, I would also like to say that we have to be very careful, in terms of the digital policies or digital norms being set in this day and age, to avoid the mistakes of the past. There might be that temptation to resort absolutely to multilateralism. I have just come back from the ad hoc session on the cybercrime convention that the UN is currently working on.
And I see that temptation a lot of the time to resort overly to multilateralism, by not giving civil society voices enough opportunity to make contributions during some of these sessions. I was in the room, so I see this happen practically. So I think that's something we need to be very careful of. And I think that's where the FOC can also come in again, because whether we like it or not, within the multilateral setup there are always inequitable power dynamics in multilateral forums. Powerful nations may have disproportionate influence, potentially leading to norms that primarily serve their interests. Similarly, smaller or less powerful nations may have very limited ability to shape global norms to their advantage. And I think the FOC provides a platform whereby this can be mitigated to a very large extent. I already mentioned accountability earlier, but I also want to mention what I call the problem of fragmentation and complexity, because over time multilateralism can lead to a proliferation of agreements, treaties and norms, creating a complex and sometimes contradictory web of rules that can be challenging to navigate and implement. So I also think that the FOC can really come in handy in helping to mitigate some of these challenges with multilateralism, and in avoiding the mistake of assuming that nations can just come together on their own and develop norms and rules to guide behaviors within the international context, which can be very problematic. The FOC, in that instance, I think, is also a very great platform that can help mitigate those types of challenges.
Because as a member of the FOC, I've seen how many of the processes that we've been involved in, and many of the FOC's engagements even in global processes, have benefited a lot from the inputs of the different members of the FOC, inputs from diverse groups and from the representation of diverse communities that the FOC embodies. So I think this is very important, and I think it's just a great opportunity for the world to benefit from, and to leverage, the amazing work being done by the FOC. And I also see the FOC as an evolving, emerging platform that continues to improve and continues to expand. So at this point, I'll just stop here. Thank you very much.

Maria:
Thank you very much, Boye, for being part of this conversation today. And I think that with your remarks we have a good segue to open the conversation a little bit more to the ones at the table, but also in the room, and even to participants online, if there is someone who wants to jump into the conversation. Let's explore a little bit more, as the Ambassador has reminded us, the importance of the housekeeping. We have heard from each one of the chairships, the past, the current and the future, about how important it is to address the issue of coordination, because even when we are aligned in terms of the values and are like-minded in terms of the goal of promotion and protection of human rights related to digital technologies, this needs an operational layer. And that's the challenge that is up to the coalition to figure out every year in order to continue developing the great work that it has been doing. So in that sense, in a more operative way: what are the key subject matters or processes that you identify as the ones that will need to be prioritized in the following year for the improvement of diplomatic network coordination ahead of any relevant negotiation in any of these identified processes? And in that same line, how can the FOC leverage its previous and upcoming work to help ensure that governance processes take a meaningfully inclusive approach to majority-world voices and to the multi-stakeholder nature that, as we have discussed, is the essence of really having a process that can be fully aligned with the best protection of human rights? So I invite anyone around the table to react, but also from the audience. I don't know. You want, yeah.

Alison Petters:
Well, I think this is kind of the heart of this discussion. We've had several days now of IGF, and I think all of the processes that probably every single one of us would list have been on the agenda for IGF sessions this week. So we do have right now an ad hoc committee going between Vienna and New York negotiating a UN cybercrime treaty. It is critical that we have a tightly scoped criminal justice instrument that protects a rights-respecting approach to the investigation of cybercrimes, and there, having the Freedom Online Coalition be a key voice and mechanism through which we can coordinate our perspectives amongst like-minded partners is going to be really critical. We've heard a lot of discussion about the Global Digital Compact process. We have also, of course, looked ahead to WSIS Plus 20 as one of the central processes where internet governance issues are going to be on the agenda. And then we have other processes that have been discussed, things like the High-Level Advisory Body on Artificial Intelligence. I think the key to each and every one of these processes, the most mission-critical thing that this coalition does, is to make sure that we are focused on protecting human rights online. That means protecting the existing human rights instruments that guide all of our work in the UN system and multilateral institutions, and not taking us backwards. So that's first and foremost: not undermining the existing frameworks that we have. Second, it is going to be mission-critical that we're focused on protecting marginalized and vulnerable groups. There continue to be efforts in multilateral and multi-stakeholder fora around the globe to undermine protections for women and girls in all their diversity, for LGBTQI+ individuals, and for other marginalized and vulnerable groups.
And we can use this coalition to make sure that we are continuing, at every turn, to put those groups at the heart of the human rights agenda, and that we are also consulting with those stakeholders to make sure that we're representing their perspectives. Third, we heard a lot of discussion, and I think Boye also talked a lot about this, on the need to make sure that we're protecting the multi-stakeholder model. We have processes in the United Nations that in some ways are inherently governmental, because the United Nations is a composition of member states. But it is very critical that we protect efforts to ensure that multi-stakeholders can engage. And that means getting the modalities right for a number of those processes, so that multi-stakeholders are able to not just be there at the table, but actually meaningfully engage. I've heard a lot of discussion here this week about that as well. And then last, I'll just say I think it's going to be really critical that we keep evolving as a coalition, that we are not just focused on the traditional threats to internet freedom that we've been looking to protect against since this coalition was founded over a decade ago, but that we are putting on the table new threats to human rights online. So I spoke about some of our efforts on surveillance technologies; certainly, when we look at issues around artificial intelligence governance, there are both opportunities and threats to human rights there. Continuing to make sure that we are really evolving as a coalition and putting the most critical priorities on the table in these processes is going to be really important. And we can't do that without the advisory network in this room and beyond here at IGF, holding us accountable to actually doing so.

Maria :
Thank you very much, Alison. I don’t know if any of the other representatives want to react. And also, Veronica, I think that maybe it will be very relevant to hear from you about particularly this challenge of being truly inclusive and having a really effective way to engage these marginalized communities. It’s a challenge; it’s not easy. We know, as civil society organizations, that we are in so many cases representative of these marginalized communities’ interests and perspectives, but being truly inclusive means bringing the affected people into the conversation, and that is itself a challenge. So, Veronica, your take on that.

Veronica Ferari:
Yeah, yeah, thanks, Maripaz. I was listening to Alison, and I just wanted to say plus one to some of the things she just mentioned. So, in terms of priorities: the cybercrime treaty negotiations, I agree with that. Also UN cybersecurity-related processes like the Open-Ended Working Group, where we are seeing language on human rights being weakened in negotiations, so that would be a good space for coordination, and the FOC could be a key platform for that. And we’ve been hearing during these days how the next years are critical for internet governance. So, again, as Alison was saying, WSIS Plus 20 is a key and foundational process for internet governance, with the IGF, as one of its main outcomes, a symbol of the multi-stakeholder model and something that we need to protect as a community. Another key process is the GDC and its negotiations. We had a civil society meeting on day zero for coordination around the GDC, and the need for multi-stakeholder and civil society participation in the GDC negotiations was raised; we also believe that the FOC can play a key role in facilitating that. Again, as Alison was saying, there are too many forums and initiatives to follow, so connections and a bit more coordination, or knowing a bit more about what’s happening in all these coalitions and spaces, would help. I was thinking about the FOC, but also Tech for Democracy and the global partnership: how we can better coordinate around these efforts, since a lot of the organizations are following these same processes. And I wanted to raise a point connected with the idea of a meaningfully inclusive approach, and the idea of global majority voices being heard and their perspectives being taken into account in the conversations. I also heard during this week the need for inclusive thinking at the regional level. So multi-stakeholderism is not only about different stakeholders, but also about different regions being represented. 
I remember, and we discussed this a lot, that Canada during its chairship organized regional consultations, and that is a good experience that it would be good to see also in the FOC and in different processes. And I wanted to take the opportunity to raise one main obstacle to meaningful inclusivity and the presence of global majority voices, which is visas. A lot of staff from APC, and also people from our members, couldn’t come to Japan because of visa issues. We experienced the same with negotiations at the UN in New York, and in Europe too. This is not an isolated thing. We see this, of course, as the product of a systemic issue, but it’s important to address it when we talk about inclusivity and global majority voices in this conversation: how that is a barrier, how to think of alternatives, and also how to address this structural problem. Those were some of the points I wanted to raise. Thank you.

Maria :
Thank you very much, Veronica. And I invite you, anyone else want to jump in on this question, but also bring new questions to the conversation. Ambassador, please. Thank you.

Ernst Norman:
What Alison already mentioned, the number of subjects, I mentioned them too, but what she also said that was relevant is that the agenda must be evolving. And on agenda setting, it’s not only the presidency who is responsible for that; it is indeed to be discussed with all involved, not only the member states in the coalition, but the advisory board as well. On human rights, I would like to stress that it’s important also for us governments at home to coordinate with our other human rights departments, because the digital threats to human rights are one vehicle, but this is happening also in the real world. And we have to be sure that it’s connected in our offices as well, that we don’t have a separate discussion. So please involve all your colleagues who are addressing this issue of human rights. And on inclusivity, there is a topic which is widely discussed also in our ministry: the reduction in civic space. And I think that maybe should also be discussed. We can talk about meaningful inclusivity, et cetera, but it’s a wider problem that in many countries there is a reduction in civic space, in the possibility for NGOs to work; they’re being kicked out, et cetera. So that is also a serious threat. We can work on inclusivity in the coalition, but we maybe also have to address this reduction in civic space online. Thank you.

Maria :
Thank you very much. I don’t know if anyone wants to take the floor from the audience for commenting or bringing a new issue. If I may add a comment from my side, I think something very relevant to that inclusivity is what you mentioned in your intervention, Alison, related to the coordination of different bodies inside the government, as in the case of human rights protection on different fronts, but also different levels of expertise. For inclusivity we also need to create capacity, and one key role of the collaboration that is coordinated and created through the FOC is to bring more information about where the right entry points are. It’s difficult to figure out who is the most appropriate interlocutor for having a given conversation inside the government. So the role that the coalition can have in facilitating that coordination, internally in governments or across governments, is also about providing that information for more effective advocacy by civil society on this issue. A really interesting point on that. I don’t know if anyone else wants to add something along those lines? Or maybe Irene, do you want to jump in?

Irene:
Sure. So, the only other thing I wanted to mention is, even for Canada, it’s hard to keep track of all the different digital and tech initiatives. These used to be mainly technical issues with some political implications, and now they’ve become political issues that happen to be facilitated through tech. And I think if we want greater participation from both global majority countries and civil society, we need to be much more specific about what we want from them, the kind of engagement we want from them, and also to bring something of value to them. So, whether that’s greater capacity building, better understanding of, I mean, how does New York UN work? I don’t think anyone really knows. And also the technical expertise. So, I think it’s not just about what we want, but it needs to be much more of a two-way conversation between those who are trying to engage and what our goals are.

Maria :
Super important point. Like, how to bring people in the process, but in something that is valuable for everyone around the table. So, we have one comment or question from the audience. Please go ahead and introduce yourself.

Audience:
Sure. Hi, Nikki Muscati. I’m from the U.S. Department of State; I work on the FOC. I have both a question and a comment together. We spent a lot of time in our chairship thinking about how to include various voices in the decision-making process for the priorities that we had. And as our Deputy Assistant Secretary, Allison Peters, noted, there is a challenge in how you can bring these voices together, particularly within the global majority, because of the sheer number of forums and processes that are happening that our governments are all engaged in. So, I don’t want to put you on the spot, Ambassador, but I am curious whether you’ve thought a little bit about how you might narrow your focus during your chairship year next year, and, in that, how you’re hoping or planning to engage some of the already existing global majority voices within the coalition to bring them into these processes and conversations. And, not to put my own leadership on the spot, but also to the Canadian government: what advice might you have for the Dutch government in terms of how to engage these other voices? We spent a lot of time thinking about this this year, and, as was noted, the Canadian government did consultations with every region, so maybe there could be a little bit of back-and-forth there, if you all would indulge me in the last few minutes, because it is really important that we bolster the existing voices that we have in the coalition, or else, why would someone join?

Ernst Norman:
I would first like to listen to the recommendations of others, because, as I mentioned, I’m here to learn, and I can say very strong things, but without the experience within the coalition.

Alison Petters:
So, the advice first. Well, I’m happy to start and then turn it over to my colleague from the Government of Canada. This is the heart of the challenge, right: expanding the tent without expanding it so large that we can’t get anything done, and making sure that we are expanding the diversity of the advisory network. It would be near impossible to bring every single voice into the advisory network, so getting that right as well is a key challenge that we have. I don’t think hope is lost. First and foremost, in terms of engaging global majority governments, we have seen successes in bringing additional governments into the fold. Some are not full members yet but may be joining, and we have already been in very intensive dialogues with them about their priorities as they relate to technology and human rights. There, I think, we learned a good lesson, which is bringing them into the discussions with other FOC members as partners, on an equal footing. So at our event at the UN General Assembly recently with the Secretary of State, we invited non-FOC members, some key countries that have important perspectives to bring into the fold as it relates to technology policy, some of which are strong democracies with strong records on human rights, but which maybe have not engaged with the FOC previously. And so bringing them to the table to add their voices and perspectives has been really important to get them interested in the work of the FOC, to familiarize them with the FOC’s work. And then following that up with capital-level engagement. As you know well, Nikki, we have really enlisted the support of our ambassadors and our diplomatic corps in these key countries to continue the dialogue. It can’t just be a one-off ministerial-level event at the UN after which we say, thanks so much, please join us. It really needs to be a constant dialogue. 
Second, I would just say a recognition that resources are scarce. I mean, in every government, but some governments have more resources than others. So, continuing to work with those governments that may feel they don’t have the resources to engage, to support them and perhaps help them find those resources, but also making the case of what’s in it for them. Like, what are you going to get out of this if you prioritize it over something else? I mean, we heard a lot about the proliferation of different processes here. Third, I would just say, in relation to not just governments but expanding the work that we’re doing with civil society, the advisory network is a key source of support to us, but, and I’m sure you would agree, it shouldn’t be the only source of support, right? So, for example, our close friends and partners in the Freedom Online Coalition at USAID, our development agency, launched this week a set of donor principles in the digital age. Not only did we consult the advisory network, our multi-stakeholder component in the Freedom Online Coalition, but we went way beyond that in terms of consulting with civil society in key countries, building out broader networks of stakeholder voices from global majority countries in particular. I think that was another example of how we can do this: starting with the advisory network and then building out from there. 
I would also say: leveraging the fact that we are 38 governments and we all have our own networks. It shouldn’t just be the chair’s networks in different countries that we’re consulting; coming to the government of Canada or to the Netherlands and asking, who do you know that we should talk to in these countries, is really important. But this is the heart of the question that we’ve been asking all year, and I’m sure, for the Canadian government, the heart of the question they were asking themselves as well, as chair.

Maria :
Thank you so much. Final thoughts on that from the ambassador from the government of Canada? You can go ahead; we have three minutes.

Irene:
Okay. I’ll try to be quick and talk about a couple of examples. During last year’s UNGA High-Level Week, we organized an event between the Freedom Online Coalition, International IDEA, and the Media Freedom Coalition. I think being more creative with our multi-stakeholderism and multilateralism would also help with the capacity issue, so that different countries don’t have to join four or five different coalitions outside of the UN processes. And when we recently developed the Global Declaration on Information Integrity, we used both International IDEA and the Freedom Online Coalition to try to get agreement among democratic and rights-respecting countries on what that would look like. So, different ways of trying to be creative with how we approach these things, to make things easier for everyone.

Ernst Norman:
I think what I just realized is that the FOC is not the IGF, where there’s lots of like-mindedness but no text to be negotiated. And you hear some countries being very like-minded with us on the subject of human rights, and you wonder: okay, but what’s happening at home? So it is different. We truly want to be like-minded, and at the same time we don’t want to be a copy of the UN, in the sense of including all countries and having impossible negotiations. You want to have a meaningful exchange with each other, so that we are able to take positions in the end, to convince the broad majority, the global majority, and so that we are indeed effective in our work. And that’s a balance we have to find, also in enlarging the group. That’s a challenge, because maybe you want to have certain countries join, but at the same time it can become more complex. So indeed, it’s a delicate balance we have to find with each other, to make sure that the FOC stays effective in the coming years. Thank you.

Maria :
Thank you very much. Thank you to all the speakers for being part of this relevant conversation today. I think we have captured relevant learnings, and in particular reinforced that there is clarity in the main values that hold this coalition together: the promotion and protection of human rights, the commitment to inclusive and meaningful stakeholder engagement, and the need for effective coordination, for being creative, and for continuing to expand and deepen the action that has already been developed in terms of all the interoperabilities I mentioned at the beginning of the conversation: interoperability inside different government bodies, and between the different governments that are joining and enlarging the coalition, with the challenges the ambassador just pointed out of being more diverse, being mindful, and accommodating different contexts, but without sacrificing the basic values. So on that note, thank you very much for being part of this conversation, and have a good final day of IGF. Thank you.

Audience:
Thank you. Thank you all for having me. We are done; now you can go. Everyone is going to have a chance to leave for the group event. So thank you to all of you for being here. Thank you all very much.

Alison Petters

Speech speed

173 words per minute

Speech length

2762 words

Speech time

958 secs

Audience

Speech speed

194 words per minute

Speech length

426 words

Speech time

131 secs

Boye Adekoke

Speech speed

193 words per minute

Speech length

1280 words

Speech time

398 secs

Ernst Norman

Speech speed

169 words per minute

Speech length

1303 words

Speech time

463 secs

Irene

Speech speed

163 words per minute

Speech length

836 words

Speech time

309 secs

Maria

Speech speed

156 words per minute

Speech length

2188 words

Speech time

843 secs

Veronica Ferari

Speech speed

179 words per minute

Speech length

1300 words

Speech time

435 secs

Harnessing AI for Child Protection | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

During the discussion, multiple speakers expressed concerns about the need to protect children from bullying on social media platforms such as Meta’s. They raised questions about Meta’s efforts in content moderation for child protection across various languages and countries, casting doubt on the effectiveness of its strategies and policies.

The discussion also focused on the importance of social media companies enhancing their user registration systems to prevent misuse. It was argued that stricter authentication systems are necessary to prevent false identities and misuse of social media platforms. Personal incidents were shared to support this stance.

Additionally, the potential of artificial intelligence (AI) in identifying local languages on social media was discussed. It was seen as a positive step in preventing misuse and promoting responsible use of these platforms.

Responsibility and accountability of social media platforms were emphasized, with participants arguing that they should be held accountable for preventing misuse and ensuring user safety.

The discussion also highlighted the adverse effects of social media on young people’s mental health. The peer pressure faced on social media can lead to anxiety, depression, body image concerns, eating disorders, and self-harm. Social media companies were urged to take proactive measures to tackle online exploitation and address the negative impact on mental health.

Lastly, concerns were raised about phishing on Facebook, noting cases where young users are tricked into revealing their contact details and passwords. Urgent action was called for to protect user data and prevent phishing attacks.

In conclusion, the discussion underscored the urgent need for social media platforms to prioritize user safety, particularly for children. Efforts in content moderation, user registration systems, authentication systems, language detection, accountability, and mental health support were identified as crucial. It is clear that significant challenges remain in creating a safer and more responsible social media environment.

Babu Ram Aryal

The analysis covers a range of topics, starting with Artificial Intelligence (AI) and its impact on different fields. It acknowledges that AI offers numerous opportunities in areas such as education and law. However, there is also a concern that AI is taking over human intelligence in various domains. This raises questions about the extent to which AI should be relied upon and whether it poses a threat to human expertise and jobs.

Another topic explored is the access that children have to technology and the internet. On one hand, it is recognised that children are growing up in a new digital era where they utilise the internet to create their own world. The analysis highlights the example of Babu’s own children, who are passionate about technology and eager to use the internet. This suggests that technology can encourage creativity and learning among young minds.

On the other hand, there are legitimate concerns about the safety of children online. The argument put forward is that allowing children unrestricted access to technology and the internet brings about potential risks. The analysis does not delve into specific risks, but it does acknowledge the existence of concerns and suggests that caution should be exercised.

An academic perspective is also presented, which recognises the potential benefits of AI for children, as well as the associated risks. This viewpoint emphasises that permitting children to engage with platforms featuring AI can provide opportunities for growth and learning. However, it also acknowledges the existence of risks inherent in such interactions.

The conversation extends to the realm of cybercrime and the importance of expertise in digital forensic analysis. The analysis highlights that Babu is keen to learn from Michael’s experiences and practices relating to cybercrime. This indicates that there is a recognition of the significance of specialised knowledge and skills in addressing and preventing cybercrime.

Furthermore, the analysis raises the issue of child rights and the need for better control measures on social media platforms. It presents examples where individuals have disguised themselves as children in order to exploit others. This calls for improved registration and content control systems on social media platforms to protect children’s rights and prevent similar occurrences in the future.

In conclusion, the analysis reflects a diverse range of perspectives on various topics. It recognises the potential opportunities provided by AI in various fields, but also points out concerns related to the dominance of AI over human intelligence. It acknowledges the positive aspects of children having access to technology, but also raises valid concerns about safety. Additionally, the importance of expertise in combating cybercrime and the need for better control measures to protect child rights on social media platforms are highlighted. Overall, the analysis showcases the complexity and multifaceted nature of these issues.

Sarim Aziz

Child safety issues are a global challenge that require a global, multi-stakeholder approach. This means that various stakeholders from different sectors, such as governments, non-governmental organizations, and tech companies, need to come together to address this issue collectively. The importance of this approach is emphasized by the fact that child safety is not limited to any particular region or country but affects children worldwide.

One of the key aspects of addressing child safety issues is the use of technology, particularly artificial intelligence (AI). AI has proven to be a valuable tool in preventing, detecting, and responding to child safety issues. For example, AI can disrupt suspicious behaviors and patterns that may indicate child exploitation. Technology companies, such as Microsoft and Meta, have developed AI-based solutions to detect and combat child sexual abuse material (CSAM). Microsoft’s PhotoDNA technology, along with Meta’s open-sourced PDQ and TMK technologies, are notable examples. These technologies have been effective in detecting CSAM and have played a significant role in safeguarding children online.
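To make the detection approach described above more concrete, the sketch below illustrates the general perceptual-hashing technique that underlies tools like PhotoDNA and PDQ. It is not their actual algorithm: it is a deliberately simple "difference hash" (dHash), shown only to convey the idea that visually similar images produce nearby hashes, so known abusive material can be found by comparing compact hashes rather than exact bytes.

```python
# Illustrative sketch of perceptual-hash matching (a simplified dHash),
# the general idea behind systems such as PhotoDNA and PDQ.
# This is NOT those algorithms; it only demonstrates the technique.

def dhash(gray):
    """Hash a 2D grayscale array by comparing horizontal neighbours.

    Real systems first resample the image to a small fixed size
    (e.g. 8 rows x 9 columns); here `gray` is assumed already small.
    Each bit records whether a pixel is darker than its right neighbour,
    so the hash captures the image's brightness gradients.
    """
    bits = []
    for row in gray:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return bits

def hamming(a, b):
    """Count differing bits between two hashes of equal length."""
    return sum(x != y for x, y in zip(a, b))

def is_match(candidate, known_hashes, threshold=10):
    """Flag a likely match if any known hash is within `threshold` bits."""
    return any(hamming(candidate, h) <= threshold for h in known_hashes)
```

In practice, a platform hashes each upload and compares it against a database of hashes of known material; a small Hamming distance flags the upload for human review. Because only hashes, never the images themselves, need to be exchanged, the same mechanism supports the cross-company hash sharing described in this session.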

However, it is important to note that technology alone cannot solve child safety issues. Law enforcement and safety organizations are vital components in the response to child safety issues. Their expertise and collaboration with technology companies, such as Meta, are crucial in building case systems, investigating reports, and taking necessary actions to combat child exploitation. Meta, for instance, collaborates with the National Center for Missing and Exploited Children (NCMEC) and assists them in their efforts to protect children.

Age verification is another important aspect of child safety online. Technology companies are testing age verification tools, such as the ones being tested on Instagram by Meta, to prevent minors from accessing inappropriate content. These tools aim to verify the age of users and restrict their access to age-inappropriate content. However, the challenge lies in standardizing age verification measures across different jurisdictions, as different countries have different age limits for minors using social media platforms.

Platforms, like Meta, have taken proactive steps to prioritize safety by design. They have implemented changes to default settings to safeguard youth accounts, cooperate with law enforcement bodies when necessary, and enforce policies against bullying and harassment. AI tools and human reviewers are employed to moderate and evaluate content, ensuring that harmful and inappropriate content is removed from the platforms.

Collaboration with safety partners and law enforcement is crucial in strengthening child protection responses. Platforms like Meta work closely with safety partners worldwide and have established safety advisory groups composed of experts from around the world. Integration of AI tools with law enforcement can lead to rapid responses against child abuse material and other safety concerns.

It is important to note that while AI can assist in age verification and protecting minors from inappropriate content, it is not a perfect solution. Human intervention and investigation are still needed to ensure the accuracy and effectiveness of age verification measures.

Overall, the expanded summary highlights the need for a global, multi-stakeholder approach to address child safety issues, with a focus on the use of technology, collaboration with law enforcement and safety organizations, age verification measures and prioritizing safety by design. It also acknowledges the limitations of technology and the importance of human interventions in ensuring child safety.

Michael Ilishebo

Content moderation online for children presents a significant challenge, particularly in Zambia where children are exposed to adult content due to the lack of proper control or filters. Despite the advancements in Artificial Intelligence (AI), it has not been successful in effectively addressing these issues, especially in accurately identifying the age or gender of users.

However, there is growing momentum in discussions around child online protection and data privacy. In Zambia, this has resulted in the enactment of the Cybersecurity and Cybercrimes Act of 2021. This legislation aims to address cyberbullying and other forms of online abuse, providing some legal measures to protect children.

Nevertheless, numerous cases of child abuse on online platforms remain unreported. The response from platform providers varies, with Facebook and Instagram being more responsive compared to newer platforms like TikTok. This highlights the need for consistent and effective response mechanisms across all platforms.

On a positive note, local providers in Zambia demonstrate effective compliance in bringing down inappropriate content. They adhere to guidelines that set age limits for certain types of content, making it easier to remove content that is not suitable for children.

Age-gating on platforms is another area of concern, as many children can easily fool the verification systems put in place. Reports of children setting their ages as 150 years or profiles not accurately reflecting their age raise questions about the effectiveness of age verification mechanisms.

META, a platform provider, deserves commendation for their response to issues related to child exploitation. They prioritize addressing these issues and provide requested information promptly, which is crucial in investigations and protecting children.

The classification of inappropriate content poses a significant challenge, especially considering cultural differences and diverse definitions. What might be normal or acceptable in one country can be completely inappropriate in another. For example, an image of a child holding a gun might be considered normal in the United States but unheard of in Zambia or Africa. Therefore, the classification of inappropriate content needs to be sensitive to cultural contexts.

In response to the challenges posed by online child protection, Zambia has introduced two significant legislations: the Cybersecurity and Cybercrimes Act and the Data Protections Act. These legislative measures aim to address issues of cybersecurity and data protection, which are essential for safeguarding children online.

To ensure child internet safety, a combination of manual and technological parental oversight is crucial. Installing family-friendly accounts and using filtering technology can help monitor and control what children view online. However, it is important to note that children can still find ways to outsmart these controls or be influenced by third parties to visit harmful sites.

In conclusion, protecting children online requires a multifaceted approach. Legislative measures, such as the ones implemented in Zambia, combined with the use of protective technologies and active parental oversight, are essential. Additionally, close collaboration between the private sector, governments, the public sector, and technology companies is crucial in addressing challenges in policy cyberspace. While AI plays a role, it is important to recognize that relying solely on AI is insufficient. The human factor and close collaboration remain indispensable in effectively protecting children online and addressing the complex issues associated with content moderation and classification.

Jutta Croll

The discussions revolve around protecting children in the digital environment, specifically addressing issues like online child abuse and inappropriate communication. The general sentiment is positive towards using artificial intelligence (AI) to improve the digital environment for children and detect risks. It is argued that AI tools can identify instances of child sexual abuse online, although they struggle with unclassified cases. Additionally, online platform providers could use AI to detect abnormal patterns of communication indicating grooming. However, there is concern that relying solely on technology for detection is insufficient. The responsibility for detection should not rest solely on technology, evoking a negative sentiment.

There is a debate about the role of regulators and policymakers in addressing these issues. Some argue that regulators and policymakers should not tackle these issues, asserting that the responsibility falls on platform providers, who have the resources and knowledge to implement AI-based solutions effectively. This stance is received with a neutral sentiment.

The right to privacy and protection of children in the digital era presents challenges for parents. The UNCRC emphasizes children’s right to privacy, but also stresses the need to strike a balance between digital privacy and parental protection obligations. Monitoring digital content is seen as intrusive and infringing on privacy, while not monitoring absolves platforms of accountability. This viewpoint is given a negative sentiment.

Age verification is seen as essential in addressing inappropriate communication and content concerns. A lack of age verification makes it difficult to protect children from inappropriate content and advertisers. The sentiment towards age verification is positive.

Dialogue between platform providers and regulators is considered crucial for finding constructive solutions in child protection. Such dialogue helps identify future-proof solutions. This argument receives a positive sentiment.

Newer legislation is seen as more effective in addressing child sexual abuse in the online environment, and the discussions suggest it should focus more on doing so. For instance, Germany amended its Youth Protection Act to specifically address the digital environment. The sentiment towards this is positive.

The age of consent principle is under pressure in the digital environment as discerning consensual from non-consensual content becomes challenging. The sentiment towards this argument is neutral. There are differing stances on self-generated sexualized imagery shared among young people. Some argue that it should not be criminalized, while others maintain a neutral position, questioning whether AI can determine consensual sharing of images. The sentiment towards the stance that self-generated sexualized imagery should not be criminalized is positive.

Overall, the discussions emphasize the importance of child protection and making decisions that prioritize the best interests of the child. AI can play a role in child protection, but human intervention is still considered necessary. It is concluded that all decisions, including policy making, actions of platform providers, and technological innovations, should consider the best interests of the child.

Ghimire Gopal Krishna

Nepal has a robust legal and constitutional framework in place that specifically addresses the protection of child rights. Article 39 of Nepal’s constitution explicitly outlines the rights of every child, including the right to name, education, health, proper care, and protection from issues such as child labour, child marriage, kidnapping, abuse, and torture. The constitution also prohibits child engagement in any hazardous work or recruitment into the military or armed groups.

To further strengthen child protection, Nepal has implemented the Child Protection Act, which criminalises child abuse activities both online and offline. Courts in Nepal strictly enforce these laws and take a proactive stance against any form of child abuse. This indicates a positive commitment from the legal system to safeguarding children’s well-being and ensuring their safety.

In addition to legal provisions, Nepal has also developed online child safety guidelines. These guidelines provide recommendations and guidance to various stakeholders on actions that can be taken to protect children online. This highlights Nepal’s effort to address the challenges posed by the digital age and ensure the safety of children in online spaces.

However, ongoing debates and discussions surround the appropriate age for adulthood, voting rights, citizenship, and marriage in Nepal. These discussions aim to determine the age at which individuals should reach certain legal milestones. The age of consent, in particular, has been a subject of court cases and controversies, with several individuals facing legal consequences due to age-related consent issues. This reflects the complexity and importance of addressing these issues in a just and careful manner.

Notably, Ghimire Gopal Krishna, the president of the Nepal Bar Association, has shown his commitment to positive amendments related to child rights protection acts. He has signed the Child Right Protection Treaty, further demonstrating his dedication to upholding child rights. This highlights the involvement of key stakeholders in advocating for improved legal frameworks that protect the rights and well-being of children in Nepal.

Overall, Nepal’s legal and constitutional provisions for child protection are commendable, with specific provisions for education, health, and safeguarding children from various forms of abuse. The implementation of the Child Protection Act and online child safety guidelines further strengthens these protections. However, ongoing debates and discussions surrounding the appropriate age for various legal milestones highlight the need for careful consideration and resolution. The commitment of Ghimire Gopal Krishna to positive amendments underscores the importance of continuous efforts to improve child rights protection in Nepal.

Session transcript

Babu Ram Aryal:
somewhere around the world and good afternoon, maybe late evening somewhere. This is Babu Ram Aryal. I’m a lawyer by profession, I’m from Nepal, I lead the Digital Freedom Coalition in Nepal, and I am moderating this session today. I have a very distinguished panel here to discuss artificial intelligence and child protection issues in the contemporary world. Let me briefly introduce my esteemed panelists. Next to me is senior advocate Gopal Krishna Ghimire. He is the president of the Nepal Bar Association and brings more than 30-35 years of litigation experience in Nepal. Jutta Croll is a very senior child rights protection activist; she leads her organization and contributes through the Dynamic Coalition on Child Rights. Sarim Aziz is policy director for South Asia at META, with long experience on platform issues, the protection of child rights, and other issues as well. Next to Sarim is Michael, who deals directly with these issues. He is a senior official in the Zambian police, focusing especially on cybercrime investigation and digital forensic analysis. Having introduced my panel, I also have my colleague Ananda Gautam, who is moderating the online participants. I would like to begin with a very brief note on the objective of today’s discussion. Right in front of me I am seeing two kids; coincidentally, they are my kids as well. They are very passionate about technology and very keen on using the internet, and we have had a big discussion about whether to give our kids access to technology and connectivity. Our experience shows that allowing them onto the platforms is an opportunity for them. They are growing up in a new regime, a new world, and they have created their own world in their own way. Sometimes I fear whether I am leading my kids into a very risky world or not, and this led me to engage with this issue: technology and risk, and technology and opportunity.
Now, artificial intelligence has taken over much of the work of human intelligence in various areas such as education, law, and other professions. Artificial intelligence offers lots of opportunities, but simultaneously there are some risks as well. So in this discussion we will take up the artificial intelligence issues, the child protection issues, and harnessing artificial intelligence for child protection. There are various tools available around the world, and these are accessible to all segments of people, including children and elderly people. So at the beginning I will go to Michael, whose responsibility is dealing with these kinds of issues regularly. Michael, what is your personal experience from your department? What are the major issues that you have experienced? Once we hear from you, we will take this discussion to a further level.

Michael Ilishebo:
Good afternoon, good morning, good evening. As a law enforcement officer dealing with cybercrime and digital forensic issues, moderating content online, from both the human side and the AI side, has posed a challenge to our little ones. Speaking as somebody from the developing world, we are mostly consuming news or any other form of entertainment or content online that is not generated in our region. Of course, we are not generating our own content, but the aspect of being a gatekeeper as parents, or using technology to filter content which is not supposed to be shown or exposed to the little ones, has become a bit of a challenge. I’ll give you a simple example. If you are analyzing a mobile device from a child who is maybe 16, the content that you find in their phone, the data that they have in terms of browsing history, shows there is no control. So whatever is exposed to an adult ends up being exposed to a little one. As a result, it has become a challenge in terms of addressing issues of content moderation on both fronts. Of course, there could be some aspects of AI that could help moderate some of this content, but if we remove the human factor out of it, AI will not be able to address most of the challenges that we are facing right now. Further on, in terms of combating crime or combating child exploitation incidents, you will find that most of the sites that host this content, despite having clear guidance and policies on gatekeeping in terms of age, our children still find their way into places online where they are not supposed to be. Of course, there is no system that will detect, from the use of a phone, the user’s age or gender as a human being would.
It still remains a challenge in the sense that once a phone is in the hands of a minor, you don’t have control over what they see or what they do with it. So basically it has become a serious challenge for the little ones, and for us policing cyberspace to ensure that the minors are protected from content that is not supposed to be exposed to them. Thanks, Michael. I would like to know your experience. I belong to Nepalese society, and Zambian society might be similar in terms of education and all these things. What are the trends of abuse cases in Zambia? Do you remember any trends? So basically, in terms of abuse, Zambia, like any other country, has those challenges. I’ll give an example. Of late, the talks on child online protection have been gaining momentum. There have been some clear guidelines from government to ensure that issues of child online protection, data privacy, and the safety and security of everyone, including the little ones online, are addressed through the enactment of various laws. We have the Cybersecurity and Cybercrimes Act of 2021, which has now clearly outlined the types of cyberbullying which are outlawed. So if you go on social media platforms such as Facebook, TikTok, Instagram, Snapchat and the rest, most of the bad actors who engage in these bad activities, of either sending bad images to children or any other content that we deem inappropriate, most of them have been either arrested or talked to, depending on their age range. They share these things among minors. If it’s a minor, of course you talk to them, you counsel them, you try to bring them back to sanity in terms of their thinking. But if it’s an adult, you have to know their intentions. So one of our experiences is that the law itself is slowly addressing some of these challenges that we are facing. But it does not stop there.
There are a lot of cases or scenarios that remain unreported, so it is difficult for us to fully address those challenges. But in a nutshell, I would tell you that the challenges are there, the problems are there, but of course addressing them is not a one-day issue. It’s about continuous improvement and continuous use of the law and of technology, especially from the service providers, to address some of these challenges.

Babu Ram Aryal:
Thanks, Michael. I’ll come to Jutta. Jutta, you have been engaged in child protection for a long time, and you have good experience. We have seen each other at the IGF several times and shared discussions as well. You are also a member of the Dynamic Coalition on Child Rights. So what is your personal experience of protection issues and the ethical and legal issues of protecting children online, especially when AI is significantly contributing to and intervening on these platforms? Jutta.

Jutta Croll:
Yeah, thank you so much for not only inviting me but posing such an important question to me. First of all, I would like to say you introduced me as an expert in child protection issues, and you may know that the Dynamic Coalition even changed its name from the Child Online Safety Coalition to the Children’s Rights Coalition in the digital environment. I think it’s important to put that right from the beginning: children have a right to protection, to provision, and to participation. So we always need to look for a balanced approach across these areas of rights. And of course, when it comes to artificial intelligence, I would like to quote from General Comment No. 25 to the Convention on the Rights of the Child. You may know that the rights of the child were laid down in 1989, when, although the Internet was there, it was not designed to be used by children. And the UN Convention doesn’t refer in any way to the Internet as a means of communication, of access to information, and so on. So that was the reason why, four or five years ago, the UN Committee on the Rights of the Child decided to issue such a general comment in regard to children’s rights in the digital environment, to take a closer look into what it means that children are now living in a world that is mainly affected by the use of digital media, and into how we can protect them. And in one of the very first articles of this general comment, it says explicitly that artificial intelligence is part of the digital environment. It’s not a single, separate thing; it’s woven into everything that now constitutes the digital environment.
So it’s therefore necessary to look at whether artificial intelligence can be used to improve the digital environment for children, whether it can help us to address the risks that have already been mentioned by Michael, and whether it can help to detect content that is, on the one hand, harmful for children to watch on the Internet, but also content that is directly related to the abuse of children, which is where we are talking about child sexual abuse imagery. But nowadays, and that is also due to the use of artificial intelligence and new functionalities and technologies, the Internet is used to perform live online sexual abuse of children. And that is also where we have to look at how artificial intelligence can be beneficial in detecting these things, but also where it might pose additional risks to children. I’ll stop at that point, and I’m pretty sure we will go deeper into that. I’ll come to the detection side in the next round. Jutta, can you share some more on the ethical and legal side? Can you shed some light on this? You mean the ethical and legal side of detecting harmful child sexual abuse imagery in general? The ethical issues of use and misuse of technology and platforms. Okay, I do think that the speaker on my left side has much more to say about the technology behind that. What I can say so far from research is: we need both. We need to deploy artificial intelligence to monitor the content, to find and detect the content for the benefit of children. But still, I’m pretty much convinced that we cannot give that responsibility to the technology alone. We also need

Babu Ram Aryal:
human intervention. Thanks. Initially in my sequence, Mr. Gopal was next to you, but as you just referred to him, I’ll go to Sarim first and then come back to Gopal. So Sarim, you now have two very significant opinions on the plate to respond to, and I would like to ask the same questions. Meta’s platforms are significant not only for kids but for all of us, but kids are also coming to various platforms, and not only Meta’s platforms; we discuss platforms neutrally here. So what are your thoughts on this? What are the major issues on the platforms, including Meta’s platforms, and what are the opportunities? Of course, as you rightly mentioned, first come rights, and only if there is any violation does protection come in. So, Sarim, can you share some of your thoughts? Thank you,

Sarim Aziz:
Babu, and honored to be here to talk about this very serious issue, and humbled, obviously, by the speakers here. As they have previously said, I just want to reiterate that this is a global challenge that requires a global response and a multi-stakeholder approach. Law enforcement alone can’t solve this; the tech industry alone cannot solve it. So this is one where we require civil society. We need families. We need parents. And that’s how we at META have approached this issue. We work on all those fronts. In terms of industry, I think this is also a good example where the child rights and child safety work can actually be an example for many other areas, like terrorism and others, because we are part of a tech coalition, which was formed in 2014. Microsoft and Google are also part of that. That’s been an excellent forum for us to collaborate, share best practices, and come together to address this challenge. And in 2020, as part of Project Protect, we committed to expanding its scope to protecting kids and thinking about child safety, not just preventing the most harmful type of content, which is CSAM, but also keeping kids safe generally. So if I were to summarize META’s approach, we look at the issue in three buckets, and AI has a role to play in all three areas. The first is prevention. When you think about prevention, we have something called, for example, search deterrence. So when someone is going out there on our platforms trying to look for such content (I think Michael at one point talked about pre-crime), the typeahead suggestions are based on AI as well, in terms of what people are typing. We prevent such searches from coming up within Facebook, Instagram, and other search mechanisms, to prevent such content from surfacing.
And if people are intentionally trying to type this stuff, we actually give them warnings to say: this is harmful and illegal content that you are trying to look up, and we divert them towards support mechanisms. So that’s pretty important for prevention. Also, think about bad behavior: sometimes kids are vulnerable, and they might get friend requests from adults, or from strangers they are not even connected to. So now we actually have in-app advice and warnings popping up to tell them: you shouldn’t accept friend requests from strangers; this person is not even connected to your network. Those are things that AI can help detect and surface, like in-app advice, safety warnings, and notices, and also preventing unwanted interactions. So we do intervene and disrupt those types of suspicious behaviors when we detect them using AI. So prevention is one bucket where we are optimistic and excited about what AI can do to prevent harm from occurring. The second bucket is the large part of the discussion that we’ve seen already, around detection. Detecting CSAM has been a large focus for the industry for over a decade, using technology like PhotoDNA, which was initially built by Microsoft. We have built on top of that, and we now have photo and video matching technology that Meta has open sourced, I believe just recently. That’s called PDQ, as well as TMK, which is for video matching. It’s been open sourced on GitHub. A bit of clarification about PDQ and TMK, as the audience may not know. Yeah, those acronyms are easy to Google. These are basically built on top of the PhotoDNA idea, but they have been open sourced so that any platform, any company can use them. Meta truly believes in open innovation, because bad actors will use technology in multiple ways.
I think our best defense is to open source this technology and make it accessible to more safety experts and more companies out there. You don’t have to be as large as Meta to be able to implement child safety measures. If you’re an emerging platform in Zambia or in any other country, you can take this technology and both prevent the spread of this type of CSAM content and detect it, sharing hashes and digital signatures to detect CSAM. So it helps for both photos and videos. It’s called PDQ for photos and TMK+PDQF for videos, and it’s been open sourced on GitHub for any developers and other companies to take. This also helps with transparency, which speaks to the earlier point about ethics: it shows the tech that we use, so we can be externally audited on the kind of technology we use for detection. This is also technology we use internally for training our algorithms and our machine learning detection, to ensure that we are able to detect these kinds of content. And lastly, the most important area where AI is also helping is response. That’s where law enforcement comes in, along with civil society and safety organizations like the National Center for Missing & Exploited Children (NCMEC), a very important partner for Meta and other companies. Any time we do detect CSAM content, we help them build a case using the same technology I mentioned. And for youth, for example, who are dealing with non-consensual imagery that they have put up themselves, there is a project called Take It Down that has been launched by NCMEC. It’s cross-platform: Meta is on there, TikTok is part of it, other companies are part of it, and those images can be prevented from spreading. So those are important initiatives.
And that response, closing the loop with NCMEC, which works with law enforcement around the world, is really critical; their CyberTipline helps law enforcement in their responses. So I’ll just pause there. But those are the three areas where we see technology, and AI in particular, playing a very important role in preventing, detecting, and responding to these child safety issues. Thank you.
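The hash-matching idea described in the discussion (comparing compact digital signatures of images rather than the images themselves, as PhotoDNA and Meta's open-sourced PDQ do) can be illustrated with a minimal sketch. This is a toy "average hash", not the actual PDQ algorithm; the images, threshold, and function names here are illustrative assumptions only.

```python
# Toy sketch of hash-based image matching, in the spirit of perceptual-hash
# systems such as PhotoDNA or PDQ. NOT the real PDQ algorithm: a simple
# "average hash" used only to illustrate matching signatures, not pixels.

def average_hash(pixels):
    """Reduce a 2D grayscale image (list of lists, values 0-255) to a bit
    string: each bit records whether a pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming_distance(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def is_match(h1, h2, threshold=2):
    """Treat images as near-duplicates if their hashes differ in at most
    `threshold` bits. Real systems tune this against labeled datasets."""
    return hamming_distance(h1, h2) <= threshold

# A known image, a slightly altered copy, and an unrelated image:
known = [[10, 200], [220, 30]]
altered = [[10, 200], [220, 35]]    # one pixel brightened
unrelated = [[200, 10], [30, 220]]

h_known = average_hash(known)
print(is_match(h_known, average_hash(altered)))    # True: near-duplicate
print(is_match(h_known, average_hash(unrelated)))  # False: different image
```

The point the panel makes follows from this design: platforms can share the short hash strings (not the abusive images themselves) to block re-uploads across services, which is how cross-platform efforts like hash-sharing databases operate.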

Babu Ram Aryal:
Thank you, Sarim. One very interesting issue is that governments in the developing world complain that platform operators are not cooperating on investigations, especially when a developing country doesn’t have much technology to catch the bad actors. Michael, I’ll come back to Gopal again, but you just sparked that question, so that’s why I’m going to you. Michael, what is your experience while dealing with these kinds of issues, and especially, what are the responses from platform providers on online child abuse cases?

Michael Ilishebo:
So basically, that depends on which platform the content is on. Facebook has been fairly responsive; they are responding. Instagram, they’re responding. TikTok, being a new platform, we’re still trying to find ways and means of engaging their law enforcement liaison department. Also, we’ve seen an increase in local providers. For those, it’s much easier to bring down the content, and much easier for them to follow the guidelines of putting an age limit on whatever they are posting. If it’s a short video that contains a bit of violence, some nudity, or any other feature we deem inappropriate for a child, they are required to do the correct thing of restricting it by age so it cannot be accessed. Because if I joined Facebook and entered my age as 13, that content would not appear on my timeline or in my feed because of my age. But as I said earlier, it’s difficult to monitor a child who has opened their own Facebook account, because they’ll just make themselves 21. You’ve seen on Facebook there are people who are 150 years old; you check on their birthday and it says this person is 120 years old. The platforms themselves, like Facebook, do not actually help us in addressing the issues of age-gating. So, as a way of addressing most of these challenges, I’ll restrict myself to META, because they can answer any issue I’m going to raise, since they are part of the panel. I can’t discuss Google or any other platform which is not here. META has been responsive, though at times it is slow. But through their law enforcement portal, issues of child exploitation are given priority. Issues to do with, say, freedom of expression may be a little slower.
But on META’s part, I would still give them 100%, because within the shortest period of time, when you request either a takedown of content or the information behind an account, they will provide it. So my experience with META so far has been OK. Thank you.
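The age-gating behavior Michael describes (content carrying an age restriction does not appear in the feed of an account registered as 13) reduces to a simple filtering rule, and the weakness he points out is visible in the same sketch: the rule only sees the declared age. The field names (`min_age`, the post dictionaries) are hypothetical, not any platform's actual API.

```python
# Illustrative sketch of feed-level age-gating as described in the session:
# each post carries a minimum-age label, and the feed shows only posts the
# viewer's *declared* age satisfies. Hypothetical field names, not a real API.

def visible_feed(posts, declared_age):
    """Return only the posts whose age restriction the viewer satisfies.
    As the panel notes, this depends entirely on the declared age being
    truthful, which is exactly the age-verification gap under debate."""
    return [post for post in posts if declared_age >= post.get("min_age", 0)]

posts = [
    {"id": 1, "min_age": 0},   # general content
    {"id": 2, "min_age": 18},  # marked violent/nudity: adults only
    {"id": 3, "min_age": 13},
]

print([p["id"] for p in visible_feed(posts, 13)])  # [1, 3]
print([p["id"] for p in visible_feed(posts, 21)])  # [1, 2, 3]
```

A 13-year-old who simply declares 21 sees everything, which is why the discussion turns next to age verification tools rather than age-gating rules alone.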

Sarim Aziz:
Thank you for that; that was not pre-scripted. I had no idea what Michael was going to say, but thank you for the feedback. I did want to comment on the age verification issue. That’s something that is obviously in discussion with experts around the world, and in different countries lots of discussions are going on. But at META we are testing some age verification tools, which we started testing in June in some countries. Based on initial results, of the teens who tried to change their birth date, we were able to stop about 96% of them from doing that. Again, I don’t think any tech solution is going to be perfect, but attempts are being made to figure out what works. This is on Instagram, by the way, this age verification tool. And based on those results, we hope to expand it further, to prevent minors from seeing content that they shouldn’t be seeing, even if they’ve tried to change their age. Just wanted to comment on that.

Babu Ram Aryal:
Thanks, Sarim. Now, finally, I’ll come to Mr. Gopal. We have discussed various issues from a technical perspective, and some from a direct enforcement perspective as well. Jutta has discussed certain issues and also referred to the Child Rights Convention. As a long-practicing lawyer, what do you see from your country’s perspective, from the Nepalese context? What are the major legal protections for children, especially when we talk about protection on online platforms? Yeah, please.

Ghimire Gopal Krishna:
Thank you, Babu. Thank you very much for giving me this opportunity to say something first about my country. Of course, I am representing the Nepal Bar Association, an institution of human rights protectors. Basically, we focus on four subjects. First, human rights: we deal with human rights. Second, democracy. Third, the rule of law. And the fourth issue is the independence of the judiciary. Of course, being a human rights protector, we have to focus on child rights issues too. This is our duty. You know, in our present constitution, Article 39 explicitly sets out the rights of the child. Every child shall have the right to a name, birth registration, and recognition, along with his or her identity. Every child shall have the right to education, health, maintenance, proper care, sports, entertainment, and overall personality development from the family and the state. And every child shall have the right to early childhood development and child participation. No child shall be engaged in any factory, mine, or similar hazardous work; this is an important right for a child in Nepal’s constitution. No child shall be subjected to child marriage, transported illegally, kidnapped, or taken hostage. No child shall be recruited or used in the army, police, or any armed group, or be subjected, in the name of cultural or religious traditions, to abuse, exclusion, or physical, mental, sexual, or other forms of exploitation or improper use by any means or in any manner. And no child shall be subjected to physical, mental, or any other form of torture in the home, at school, or in other places, under any condition whatsoever. So these are the constitutional rights. You mean a very clear protection of children, including against online abuse, is reflected in the constitution? Yes.
In our constitution, we have clear provisions for the protection of child rights. And we have a Child Protection Act also. The Child Protection Act criminalizes child abuse activities, both online and offline. We have had pedophile cases, and the courts in Nepal very strictly prohibit such activities; this is clearly our courts’ practice. And we have online child safety guidelines as well, which explicitly provide recommendations for action to the different stakeholders. And though we have not yet addressed AI, and have not even thought about it, I would say, our constitution and whatever legal and constitutional provisions we have are very close to the child protection issues. Child rights are our focus, and child rights are the core issue of our constitutional and legal provisions.

Babu Ram Aryal:
Thank you. I’ll go to the next round of discussion in this session. Basically, when we proposed this workshop, it was titled harnessing AI for child protection, right? So I’ll come to Sarim first. How is technology leveraging the protection of children online, especially now that AI tools are available? What are the tools? What are the models? And how can these help in protecting children online?

Sarim Aziz:
Thanks, Babu. So yeah, I’ll go a bit deeper into the overview I mentioned. As I said, AI has been a critical component of online child safety, prevention, detection, and response for a very long time. Even though the gen-AI discussion has perhaps hyped the interest around AI, it has long been a critical component of the child safety response. The most obvious area, as I mentioned, is CSAM, child sexual abuse material. It started with Microsoft 10 years ago with the PhotoDNA technology, which has evolved, and we’ve open-sourced our own since then. That work on detection is the most crucial, because it also helps with prevention, detecting things at scale. We have a platform of 3.8 billion users, so we want to prevent people from even seeing such content, and from even uploading it. And that still requires a lot of human expertise; that’s important. Humans are very much involved: making sure you have large, high-quality datasets of CSAM material to train the AI to detect this requires a lot of human intervention, and we still need human reviewers for things that AI cannot detect. There is definitely a challenge with gen AI on the production side, where people might be producing more of this material more easily, but on our side we have the defenses ready to build on and improve, to make sure we are able to leverage AI to detect those kinds of things as well. There is a lot more work to do in that space, but the industry has done well in terms of leveraging AI on the detection side.
The prevention side is, to us, more exciting, because that is something new we have focused on: user education, youth education, and preventing interactions that are suspicious — with strangers and adults that young people shouldn’t be having. The issue of parental supervision is an interesting one. We have parental controls built into our products, into Facebook and Instagram, and we believe that parents and guardians know best for their kids; but at the same time there are obviously privacy issues we also have to consider, so those ethical discussions are ongoing. Prevention and detection are in good shape. On the response side, child safety is one of the few areas where partnerships like NCMEC and multi-stakeholder responses are so critical, ensuring we can work with safety partners and law enforcement all around the world. We also have a safety advisory group of around 400 experts from around the world who advise us on these products and our responses.

Babu Ram Aryal:
A very quick follow-up question, sorry. You just mentioned that you have safety partners. How does that work for protecting children, given the different community standards involved? The CRC sets a specific age for minority and majority, and in my country there have been debates in the recent past: the CRC says 18 years, and our local child legislation also says 18 years, yet in Parliament there was discussion that we should reduce the threshold between minority and majority. So how do platform operators deal with different legislative regimes when combating these kinds of issues?

Sarim Aziz:
Those discussions are ongoing as we speak in many countries: what is the right age, and at what age do you require parental consent? Everyone will have a different opinion on that. On Meta’s platforms, for most products you need to be at least 13 to create an account; in some jurisdictions it is different, and we obviously respect the law wherever we operate. But our focus, regardless of whether a user is 13 or 14, is on the nature of the interaction and on making sure they are safe. If there is violent content, we have something we call “marked as disturbing” — even for adults. We make sure minors don’t see content like that at all, and AI actually helps us flag content that might make even an 18-year-old uncomfortable. Age is obviously a number, but at the same time we need to make sure the protections and systems are in place to protect youth in general, whether they are 13, 14, or 17.
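The PhotoDNA-style detection of known CSAM that Sarim describes rests on perceptual hashing: an image is reduced to a compact fingerprint that survives small edits, and that fingerprint is compared against a list of hashes of already-classified material. PhotoDNA itself is proprietary, so the sketch below uses a generic "average hash" over a grayscale pixel grid and Hamming-distance matching; all function names and the threshold are illustrative assumptions, not Meta's actual implementation.

```python
# Illustrative sketch of hash-based known-image matching. An image is
# represented as a 2D list of grayscale values; the hash sets one bit per
# pixel depending on whether it is brighter than the image mean.

def average_hash(pixels):
    """Perceptual-style hash: one bit per pixel (1 if above the mean)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def matches_known_set(pixels, known_hashes, max_distance=2):
    """True if the image's hash is within max_distance bits of any known
    hash -- tolerating small edits, which exact (cryptographic) hashing
    would not."""
    h = average_hash(pixels)
    return any(hamming(h, k) <= max_distance for k in known_hashes)

# Usage: a known 3x3 image, and a copy with one pixel slightly altered.
known = [[10, 200, 10], [200, 10, 200], [10, 200, 10]]
variant = [[12, 200, 10], [200, 10, 200], [10, 200, 10]]
known_hashes = {average_hash(known)}
print(matches_known_set(variant, known_hashes))  # True: near-duplicate found
```

The design point this illustrates is why the panel treats known-image detection as the "solved" tier: matching against a curated hash list is cheap at scale and has a very low false-positive rate, whereas previously unseen material has no fingerprint to match against.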

Michael Ilishebo:
I’m just adding on to what he has said. Among the issues that are a little bit challenging is the classification of inappropriate content. I’ll give an example. On Meta platforms, 13 years is the minimum age at which one can join Facebook, but based on the laws and standards in the countries we come from, a 13-year-old is deemed to be a child who probably can’t even own a cell phone. The second part is the content itself. An image of violence — perhaps in cartoon form, or music with vulgar or violent content, or anything that may be deemed inappropriate for Zambia — might actually be deemed appropriate, say, for the US. A child holding a gun in Zambia, or in Africa, whether under the guidance of a parent or not, is literally unheard of; but in the US we have heard of children going to school with guns. We have seen images that would worry you as a parent, yet they are on Facebook, accessed by children in other jurisdictions where they are not deemed offensive. So issues of classification have themselves played a challenging role. That is just to add to what he said.

Babu Ram Aryal:
Thanks, Michael. Jutta?

Jutta Croll:
Thank you for giving me the floor again. I would like to go a bit more into where AI can be used, and probably also where it cannot. Some of you may know that there is already a new draft regulation on child sexual abuse in the European parliamentary process, which distinguishes three things to be addressed. The first is already-known child sexual abuse imagery, which Sarim has described very well. It is possible to detect that with a very low false-positive rate thanks to PhotoDNA, and the improvements that Meta and other companies have made over recent years have also made it possible to detect video content, which was quite difficult a few years ago; it has become much better. The second part is not-yet-known child sexual abuse imagery — new material, arriving in huge numbers of images and videos uploaded every day. It is, of course, much more difficult to detect imagery that has not previously been classified as child sexual abuse imagery, and the false-positive rate is much higher there. The third part, the most difficult, is the detection of grooming processes, where children are groomed into contact with a stranger in order to be abused, either online or by producing self-generated sexualised content of themselves and sending it to the grooming person. So these different areas respond differently to different artificial-intelligence strategies. The most difficult is grooming, where, if you do not have the means to look into the content — because the communication might be encrypted — you need other strategies: detecting patterns, for example one sender addressing a thousand different profiles in the expectation that at least one per cent of those addressed will react to the grooming process and get in contact with that person. And that, I think, is where shared responsibility comes in. It could not be done by the regulator, it could not be done by the policymaker, but it could be done by the platform providers, because you have the knowledge and, I do think, the resources to look deeper into these new developments and to find a technological way, based on AI, to address these issues. So I push the ball back to you, because I’m pretty sure you can answer that.
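The content-blind signal Jutta describes — one sender initiating contact with a large number of unrelated profiles — can be sketched as a simple behavioural aggregation that never reads message content. The threshold and field names below are hypothetical illustrations, not any platform's real parameters.

```python
# Minimal sketch of behavioural grooming-pattern flagging: count how many
# distinct recipients each sender has initiated first contact with, and
# flag outliers. No message content is inspected at any point.

from collections import defaultdict

def flag_mass_contacters(events, max_new_recipients=50):
    """events: iterable of (sender_id, recipient_id) first-contact events.
    Returns the set of sender_ids who initiated contact with more distinct
    recipients than the threshold."""
    recipients = defaultdict(set)
    for sender, recipient in events:
        recipients[sender].add(recipient)
    return {s for s, r in recipients.items() if len(r) > max_new_recipients}

# Usage: one account messages 60 strangers, another messages 3 friends.
events = [("acct_A", f"user_{i}") for i in range(60)]
events += [("acct_B", u) for u in ("friend1", "friend2", "friend3")]
print(flag_mass_contacters(events))  # {'acct_A'}
```

This is why such signals survive end-to-end encryption: the metadata (who contacted whom, how often) is enough to surface the "one sender, a thousand profiles" pattern without looking inside any message.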

Sarim Aziz:
Thank you — that is exactly our area of focus in recent times: preventing grooming, and that is where AI is playing a very key role, preventing unwanted interaction between an adult and a teen. We have changed things; for example, the default settings of a youth account now prevent messaging from strangers. Even on comments on public content: a comment made by a youth, for example, will not be visible to an adult. So we are trying to reduce that sort of unwanted interaction. It is still early days, but we have already taken measures — we haven’t waited, because we know this is the right thing to do to ensure that adults are not able to discover teen content. On Instagram, for example, you won’t see any youth content in the discovery reel. And whenever we detect an attempt of this sort — a friend request from someone who is not in your network, as I mentioned — we give warnings to teens, which is an opportunity to educate them: this person is a stranger, you shouldn’t be accepting this friend request. So I think you’re right: the right focus for us is to keep using technology and AI to prevent grooming and to protect against suspicious and unwanted interactions between teens and adults.

Babu Ram Aryal:
A very significant issue. Jutta just referred to the detection of grooming processes on platforms, and I have dealt with certain such cases in Nepal myself. Michael also raised the classification of uses of platforms, and there are various categories of users who connect with platforms. My question may sound like a law-enforcement issue, but from an accountability perspective: if it is seen that a platform has been used for the prolonged grooming of a child, leading to significant abuse of that child, then — as you rightly mentioned shared accountability — do you see the platform as also sharing accountability for that serious incident, and not only as a matter for law enforcement? Or is my question not clear?

Sarim Aziz:
I think platforms definitely have a responsibility to keep their users safe, and, as Michael alluded to, this is a global issue that requires a global response. We have to do our part, and we do that by building safety into the product — safety by design. Some of the changes we are making are literally safety by design: when we develop features, we ask how they will be used and how we can keep users safe. For example, we don’t suggest adults as friends to young people. That is safety by design, right in the product. Beyond that, when something bad happens, we absolutely work very closely with law enforcement around the world, including through NCMEC. When we see a situation where a child is in danger — and you often won’t read about it in the papers — platforms do cooperate and reach out to law enforcement with the information they have, to ensure that the child, or anyone else, can be kept safe. That is my view; I obviously can’t speak on behalf of every platform, but that is how we operate at Meta.

Babu Ram Aryal:
I have two questions for the panellists — they will be on privacy and future strategy — but before going to those, I’ll take a few questions from the audience. I open the floor: if you have any questions, you are welcome. Yes, please — and please introduce yourself first.

Audience:
Thank you so much for the conversation. My name is Sumana Shrestha; I am a parliamentarian from Nepal. When it comes to protecting children, one of the other things we need to protect them from is bullying. You have so many different languages, so what is Meta, for example, doing about content moderation in the different countries in which it is used? It would be great to know. Thank you.

Sarim Aziz:
Thank you for the question, and for joining this discussion. We have very clear policies against bullying and harassment on our platform, across all our surfaces — the same policy on Facebook, Instagram, and the rest — so the same protections apply to everyone, youth and adults alike. Of course, our threshold is much lower when it comes to kids and youth: when a minor is involved in a bullying situation, our enforcement is much harsher, both in the actions we take and in the strikes against the accounts engaged in that behaviour. We apply a variety of enforcement actions, not just stopping the behaviour but also restricting further abuse from those accounts. That said, bullying is a difficult one. We have made progress, but compared with CSAM or terrorism, AI has not been as successful here — we don’t have a 99 per cent action rate — because the nature of bullying can be so varied. It may not be obvious to a stranger that bullying is going on, because context is so important: the relationship between the two individuals involved, the cultural context. So the policies are clear, we do enforce them, and we do remove and prevent such content, but we rely largely on our human reviewers — people from around the world, including experts from Nepal, who review content in local languages and help us enforce. We also rely on the community to report, because if no one reports it, platforms will not know that something is bullying. That is why context and intervention matter, including through safety partners and civil-society partners. We have partnerships in many countries with local safety organisations, including in Nepal, where victims of bullying can report such content to local partners, who can ensure that Meta’s services take action against it quickly.

Babu Ram Aryal:
More questions? Audience? Any online questions? Okay, we have one online question.

Audience:
There is a question from Stacey: what are the currently accepted norms for balancing human rights with privacy and security, and are we good at it?

Babu Ram Aryal:
Is it addressed to any specific person? No, they did not mention one. So — Sarim and Jutta?

Jutta Croll:
Okay, I’ll take that one first. I already wanted to refer to children’s privacy, because I think it is the most ambivalent part of the UN Convention on the Rights of the Child: children have a right to privacy of their own. That also means — and it is made very clear in General Comment No. 25 — that with the digital environment and digital media it has become more difficult for parents to strike the balance between respecting their children’s privacy on the one hand, which would mean not looking into their mobile phones, as Michael was describing before, and, on the other hand, their task and duty to protect their children. So it is very difficult, in the child’s social environment, in the family, to balance the right to privacy against the right to be protected. But also when we look into regulation — for example the EU regulation underway that I have been quoting, but in other regards as well — it is quite difficult, because the moment we ask for the monitoring of content, we know that is always an infringement of the privacy of the people who produced that content or who are communicating. Looking into people’s private communication would infringe their right to privacy, and that would also mean infringing the rights of children and young people, because they have that right to privacy as well. And on the other hand, if we don’t do that, how could a platform like Meta, or any other platform, fulfil its responsibility and accountability for protecting its users? It is an equation that does not come to a fair solution; we need to tackle it from different directions to try to find a balance.

Sarim Aziz:
Yes, just to add to that — this is a really important one. Your question reminded me of the Google case where a parent took a nude photo of their child to send to a doctor during COVID, and Google’s AI marked it as harmful content and reported the situation to law enforcement. So there is definitely that balance between the rights of the child and the rights of parents, and that is an interesting one. But I do want to say that the industry’s view is quite firmly against scanning private messages, because all the numbers seem to indicate that we don’t actually need to do that. Everything I mentioned on prevention and detection is based on behavioural patterns, not necessarily on content — known CSAM aside, which does require content matching. If we focus our energy on public surfaces, where users come, and on the kinds of behaviour we are trying to prevent, such as grooming behaviour, there is plenty of opportunity for technology, civil society, and experts to work there, without breaking into private messaging. In fact, a good statistic: in Q1 of this year — and I am only quoting Meta’s numbers; the global totals across platforms are even higher — we sent 1.2 million reports of child-related CSAM material to NCMEC without invading anybody’s privacy. That is a staggering number, and that is just Meta. So I don’t think we need to go there; it brings a lot of unwanted side effects. If we focus our energy on behavioural patterns and public surfaces, there is enough opportunity to prevent grooming behaviour and keep kids safe.

Babu Ram Aryal:
In previous conversations Michael mentioned privacy, and before opening the floor I said I had a separate question on it, so let’s discuss privacy further. In the USA there was a big debate around COPA and CIPA — the Child Online Protection Act and the Children’s Internet Protection Act — which were widely contested; those debates went to the Supreme Court and clearly set child protection on one side and the freedoms of adults on the other. So how can we reach a better position, especially from a developing-country perspective like Nepal’s or Zambia’s? What kind of legislative framework would be most efficient? Many countries do not have specific legislation on online child protection: there might be certain provisions in a general Child Protection Act, but no clear position on online child-protection issues. Michael, I’ll come to you first. What is your experience of the Zambian legal regime — how does the Zambian legislative framework address these kinds of issues?

Michael Ilishebo:
So basically, as I said earlier, in 2021 our Electronic Communications and Transactions Act — which contained aspects of cybercrime, cybersecurity, electronic communications, and other ICT legislative issues — was split. We came up with two further pieces of legislation separated out of the ECT Act: the Cyber Security and Cyber Crimes Act, and the Data Protection Act. The Data Protection Act covers matters of privacy. But privacy is a generic term. At the end of the day, what privacy does a ten-year-old need when they are under the control of a guardian or parent? They may not know what is good and what is bad, because of their age and state of mind. And coming back to security and safety: children become vulnerable the moment issues of privacy come in. If you ask a child, “Let me have your phone, let me see whom you’ve been communicating with,” the child may say, “I have the right to privacy.” What do you do? It’s true: once you have deemed that child fit to own a phone, you have allowed them a bit of privacy. But it also depends on which platform they are using. I’ll give the example of my own kids. Back home, for YouTube or any Google product, they use a family account. That allows me to regulate which apps they install: even if I am not there, I receive an email saying this child wants to install this application, and it is up to me to allow or block it. The same applies to YouTube. I have taken that step because human oversight has limits — I will not always be there to see what they are doing — but technology, through AI, helps me filter and brings to my notice things it deems above their age.

There are some games online that appear innocent in the eyes of an adult, but as a child keeps playing, a lot of bad things — images of a sexual nature — are introduced in the middle of the game; when you look at it as an adult, you won’t see anything. Providers like Google have ways of knowing which applications, on the Play Store or any other platform, are appropriate for a child. So as a step to protect my kids, I have allowed them to use only a family-friendly account, where, despite my being in Japan, I can know if they viewed a video I deem inappropriate; I can block it from here, or talk to them and say never visit that page again. Of course, Microsoft may also come up with its own policies, through its browser, for blocking certain sites and pages. But again, it comes back to the issue of human rights and privacy: to what extent are we able to supervise our kids? Do they share a single device in the house — one using it in the morning, another in the evening — or do they have their own devices on which we have installed a family-friendly account that enables a parent to monitor them? And it is not always enough, because a child is an adventurous person. They always find ways and means to bypass every control; they seem to know more than we do. The same applies to crimes where a child is the victim: a child may be groomed by somebody they are chatting with, told “place this, place that,” and they will bypass all the controls you have put in place.

As much as you put privacy protections and safety rules around how they navigate the online space, there is a third party out there taking control of them and making them do, and visit, things they are not supposed to.

Sarim Aziz:
I think this comes back to the prevention aspect. In the last example Michael mentioned, we have changed our default settings for youth accounts exactly to prevent that kind of interaction. Prevention is really a good strategy, together with making sure safety is designed in — and this is where AI is helping. On the ongoing debate: as Michael said, kids are digital natives; they are good at circumventing all this. But it helps if safety is designed into the products and services they use — and we also have parental supervision tools on Meta’s platforms, so parents are aware whom their children are communicating with and what type of content they are interacting with. By default, kids don’t see any advertising on Facebook, which is obviously important. Likewise, content that is violent, graphically violent, or otherwise inappropriate is not visible to them; as I said, we even mark such content as disturbing for adults, so they don’t have to see it by default. It is an ongoing discussion, but I think the solution is safety by design, and youth safety by design, in products — because kids are sometimes the earliest adopters of whatever comes along — and making sure it keeps them safe. If we keep them safe, we actually keep everyone safe, not just kids.

Jutta Croll:
Yes, I have to respond to one thing that Sarim said: when you say kids don’t see advertising on Meta, that holds only when they have been honest about their age. When they have lied about their age, they might see advertising. We have already been talking about age verification, or age assurance, and I would say it is key to solving the issue. We need to know the age of all users: not only whether a child is a child, but also whether an adult is an adult, to know when an inappropriate communication is going on. I am pretty sure that in the near future we will have privacy-preserving methodologies to establish the age of users so as to protect them better. But coming back to the question you also posed to Michael, I can answer in one sentence: talk to each other. Parents and children have to talk to each other; it is always better to negotiate what is appropriate for the child than to regulate. And I think the same applies to policy, platforms, and the regulator: talk to each other and try to find a constructive solution together.

Babu Ram Aryal:
Jutta, I don’t know whether it is proper to ask you this or not. You mentioned the upcoming legislation in the European Union. Can you share some of the domestic practices of European member states on online child protection? I wanted to ask that question earlier, but the sequence developed differently — sorry for that. If you can, share any member-state perspective on online child protection.

Jutta Croll:
Do we have two more hours to talk about all the strategies we have in Europe? Of course it differs from country to country. What we have seen for several years is that countries that are starting to legislate now, or started two, three, or five years ago, focus much more on the digital environment and on legislating against child sexual abuse in a way appropriate to it, while countries with longer-standing child-protection laws that did not address the digital environment need to amend their laws, and that takes time. So the newer the legislation, the better it is fit for purpose to address child sexual abuse in the online environment. In Germany, in 2021 we got an amended Youth Protection Act that refers much more to the digital environment than before, and it takes the approach I was just describing — it is called dialogic regulation. It does not simply impose obligations on platform providers; it asks for a dialogue with them to find the best solution. I think that is much more future-proof than conventional regulation, because you can only ever regulate the situation you face at the moment of legislating, whereas we need to look forward. And again I turn to the platform providers: you are in the position to know what is in the pipeline, which functionalities will be added to your services. If you do safety by design — and, as Sonja Lewitt put it in another session, it should be a child-rights-based design — then the regulator would probably not have so much work to do.

Babu Ram Aryal:
Gopal, you want to say something on this?

Ghimire Gopal Krishna:
Some time ago we asked: what is the right age of adulthood? That question is now debatable in the Nepalese context. The age of marriage is under debate — our parliamentarian colleague raised it, though I think she has left. What is the correct age for marriage? We have a provision that when a child completes 16 years of age, we provide him or her a citizenship certificate — a kind of certificate of adulthood, issued at 16. A person can vote for their representatives in Nepal on completing 18 years of age. And third, the age of marriage is 20 years. In my years of practice I have seen many cases now pending before the courts — rape cases — where the question is: what is the age of consent? Many people are in jail today because they consensually engaged in a relationship before the age of 18, and that is questionable. That is why this matters so much, and it is now debated in our civil society too. Although we have settled principles and examples, what is the proper age for marriage, and could a similar age be settled internationally? This is a very important question for us now, which is why I raise it with our fellow panellists.

Babu Ram Aryal:
To link to this issue, I’ll go to Sarim very briefly. Sarim, when litigation arises and law-enforcement agencies see different ages, age groups, actors, and content — especially content involving a sexual relationship or something similar — and, as Gopal was noting, some legislation allows such relationships and the material could be used as evidence, the debate plays out differently in different societies. How easy is it for platform providers to deal with such awkward situations, and how do platform providers respond to these kinds of issues?

Sarim Aziz:
Yes — these child-safety issues are definitely top of mind for our trust and safety teams at Meta, and I’m sure for other platforms too. The NCMEC number I shared earlier is a good proof point of how we cooperate with civil society and law enforcement. There are cases where we don’t wait for NCMEC: if a child is in imminent danger, our child-safety teams that look at these cases reach out directly to law enforcement in the country, and there have been cases where child-abuse rings were busted this way. It is an ongoing effort, and I wouldn’t say it’s easy — AI has helped, but it still requires human investigation. The age-verification piece is interesting. As I mentioned, we are running some tests, and AI does help: one of the solutions being tested has the young person send a video of themselves for verification. You can rely on IDs to a certain extent, but that raises questions of data collection — how much private citizen data are you going to collect? There are also suggestions to link into government systems, but those raise surveillance concerns. So I don’t think there is a silver bullet, and no solution is going to be perfect. We are testing age verification on Instagram, as I said, and we’ll see what the results say. There will be some level of verification — nothing will be perfect — but we do need to figure out whether an adult is an adult and whether a child is a child, and we have other behaviours to detect as well, such as suspicious behaviour and fake accounts.

That is where AI is also definitely helping us quite a bit.

Babu Ram Aryal:
Thanks. A very brief follow-up question. You frequently mention that you report to NCMEC. NCMEC is a nonprofit organization based in the US, right? So what would be your response if other jurisdictions wanted to collaborate with you?

Sarim Aziz:
NCMEC collaborates with law enforcement around the world. So if you are a law enforcement agency in any country, you can reach out to them. There is another organization they partner with, called ICMEC, which is international and works with law enforcement to set up access to the CyberTipline. So that gives local law enforcement access to that information.

Babu Ram Aryal:
Jutta.

Jutta Croll:
Yes, I just wanted to add something, because I’m very grateful to you for bringing in the question of consent. The principle of the age of consent has come under considerable pressure in the digital environment, where the question is what was consensual and what wasn’t. The general comment that I referred to before says, in paragraph 118, that self-generated sexualized imagery shared consensually between young people should not be criminalized. I think that is a very good provision, because young people also need to explore their sexual orientation and learn about each other. But when it comes to these images, AI would not be able to tell whether the sharing was consensual or not. That makes it very difficult to apply such a rule — what is consensual and what is not? As I said before, we can rely on artificial intelligence in many respects, and I’m pretty sure it will get better and help us protect children better, but there are certain issues where artificial intelligence cannot help and we need human intervention. Thank you.

Babu Ram Aryal:
Thank you. Any questions from the audience? We have very little time left in the session. If you have any question, please.

Audience:
Just on the protection of the rights of the child: companies and social media groups should be responsible enough about registration, about content, and about using AI to identify local languages, so that words are not misused. Let me give my own example. About eight years ago, I got a friend request under the name of a beautiful child; the picture looked about 13 or 14 years old, and I didn’t accept. She frequently called me in the evening, and I ignored it. Then I thought I should note the calls and tell her parents what their child was doing. So I asked her for a private phone number, saying I didn’t like to talk on social media. It then turned out she was not a child at all. She was a woman who wanted some informal relationship with me or someone else. I asked her why she had put up the picture and name of a child, and she said that with a child’s picture, people would respond. This is a case I faced myself, and similar things may happen to others. So registration systems on social media should have some authentication mechanism; without that, similar cases might happen to others too. My request to the social media companies is to be accountable, responsible and intelligent enough that their platforms are not misused. That is my suggestion. Thank you very much.

Babu Ram Aryal:
Thank you. Asirbad.

Audience:
Thank you, Babu Ram. I’ll move a little to the human side while we’re talking about AI — very briefly, since time is short. Young people experience a lot of peer pressure, and in the digital era social media has amplified it: for example, increasing levels of anxiety, depression, body image concerns, eating disorders and suicidal thoughts. When we look at the root causes of peer pressure — the need to fit in, the fear of rejection, the search for a sense of belonging — those are human aspects, and because of them young people are very vulnerable to online exploitation. So what are social media companies doing about the human aspect as well, along with the technical one? Thank you.
Just a short one. My question is also for Sarim. I’m Binod Basnet from Nepal. Recently a message has been circulating on Facebook Messenger saying that you have infringed some child protection policies of Facebook, and that if you don’t follow certain instructions your account will be suspended, and so on. The message carries the Meta logo, and it is very alarming for young users, who tend to hand over their contact details, their ID and their password. In reality these are phishing sites seeking your passwords. So my question is: what is Facebook doing about those phishing attacks? What policy does Facebook take against those phishing sites? Thank you.

Babu Ram Aryal:
Thanks. Our scheduled time is almost over, but questions keep coming. So Sarim, directly to you.

Sarim Aziz:
The questions are all directed at me — I’m happy to continue the conversation after this session. For my concluding remarks, I’ll go back to my introduction: platforms alone cannot solve all of this. We have technology that can assist, but technology still requires human expertise, from the platforms but also from civil society, law enforcement, government, parents and families. Phishing is a longstanding issue. The smartest people in this room have been phished, whether with a Meta logo or some other logo they recognize. When you’re short of time and your attention span is short, you can be phished very easily. So that’s an issue where we need to increase digital literacy and education, and from a systems perspective the way you fix it is with one-time authentication, so that even if someone is phished, the stolen credentials alone will not give attackers access to the system. That is a systemic change that needs to happen. Absolutely, in terms of safety, Meta cannot solve these issues alone. In terms of the human aspect, we work with the 400 safety advisors we have, and we are members of the WeProtect Global Alliance and other organizations where, together with industry, we work to protect kids. I mentioned earlier how we are using our platforms and AI to detect potential grooming or unwanted interactions and to prevent kids from interacting with adults. So those are some efforts, but there’s a lot more we can do, and we’re open to ideas.
The gentleman who mentioned that he was perhaps phished, or that there was some other attempt to connect with him — of course, we also rely on the community. One of the challenges we have is that people don’t report. They think platforms have the manpower or the ability to just know when something is wrong. We won’t, until people — civil society and users — report things to us. That’s where we rely on our partners, and reporting is really key in these situations, to protect yourself but also your community.
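The one-time authentication described above is commonly implemented as a time-based one-time password (TOTP, RFC 6238), which the transcript does not specify; the following is a minimal illustrative sketch in Python using only the standard library, not Meta's actual implementation:

```python
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step                      # 30-second time window
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret, T=59s, 8 digits -> "94287082"
print(totp(b"12345678901234567890", for_time=59, digits=8))
```

Because each code is valid only for one short time window, a phished password alone is not enough to log in — which is the systemic protection Sarim alludes to (though real-time relay phishing can still defeat OTPs, which is why hardware-backed standards such as FIDO2 go further).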

Babu Ram Aryal:
For closing, a one-minute — less than one minute — response from Michael.

Michael Ilishebo:
So basically, we will not address any of the challenges we’ve discussed here without close collaboration between the private sector, governments, the public sector and the tech companies. Inasmuch as we can’t put our trust in AI alone to help us police cyberspace, the human factor and close collaboration will be the key to addressing most of these challenges. Thank you.

Babu Ram Aryal:
Thank you, Jutta.

Jutta Croll:
Yes, thank you for giving me the opportunity for one last statement. I would like to refer to Article 3 of the UN Convention on the Rights of the Child, which states the principle of the best interests of the child. In any decision we may take — that policy makers may take, that platform providers may take — just consider whether it is in the best interests of the child: the individual child, if we are talking about a case like those we have heard of, but also all children at large. Consider whether the decision to be taken, the development to be made, the technological invention to be made, is in the best interests of the child, and then I do think we will achieve more child protection.

Babu Ram Aryal:
Thank you. Closing.

Ghimire Gopal Krishna:
We are part of the child rights community. Nepal has already signed the child rights protection treaty, so as a responsible part of society, and as president of the Nepal Bar Association, I am committed to always being in favor of child rights protection acts and their positive amendments. I am very thankful to Babu Ram for giving me this opportunity.

Babu Ram Aryal:
Thank you very much. We are running out of time. I’d like to thank my panelists, my team of organizers and all of you who actively participated — this session is of course dedicated to the benefit of children. I would like to thank all of you, and I close this session. We hope to have a good report of this discussion, and we will share it with all of you through our channel. Thank you very much.

Speech statistics

- Babu Ram Aryal: 114 words per minute, 1678 words, 880 secs
- Sarim Aziz: 188 words per minute, 5241 words, 1670 secs
- Audience: 165 words per minute, 776 words, 283 secs
- Ghimire Gopal Krishna: 113 words per minute, 1016 words, 538 secs
- Jutta Croll: 140 words per minute, 2520 words, 1081 secs
- Michael Ilishebo: 156 words per minute, 2394 words, 921 secs

Internet Data Governance and Trust in Nigeria | IGF 2023 Open Forum #67



Full session report

Nnenna Nwakanma

The analysis of the speeches reveals several noteworthy points made by different speakers. One speaker argues that data governance should not stifle innovation, but rather motivate it. This perspective highlights the importance of fostering an environment that encourages innovation and allows for the creation and development of new ideas.

Another speaker emphasizes that the value of data lies in its effective use, rather than simply relying on population size. The speaker highlights that economic value can be derived from the creation and reuse of data, and that leveraging data effectively can contribute to wealth creation and poverty reduction. This argument sheds light on the potential for data to drive economic growth and social development, emphasizing the need for strategies that focus on maximizing the utility and impact of data.

On a more negative note, there is a sentiment of dissatisfaction with data governance in Nigeria, as it is perceived to reflect the overall governance standards of the country. This observation suggests that concerns about the state of data governance might be indicative of broader governance challenges, pointing to the need for comprehensive governance reforms to address these issues.

On a positive note, there is support for citizens having access to and actively creating data. This perspective highlights the importance of empowering individuals to participate in data generation and ensuring that they have the necessary tools and resources to contribute to the data ecosystem. It underlines the belief that data should not be exclusive or controlled by a select few, but rather open to all citizens to promote transparency, accountability, and participation.

Lastly, there is a consensus among the speakers that continued dialogue among stakeholders is crucial. This observation recognizes the need for ongoing conversations and collaboration to build trust and strengthen relationships between different actors involved in data governance. This highlights the importance of creating platforms and opportunities for stakeholders to come together, exchange ideas, and work towards common goals.

In conclusion, the analysis of the speeches sheds light on various aspects of data governance. It highlights the need to promote innovation, harness the effective use of data, address governance challenges, empower citizens in data creation, and foster dialogue among stakeholders. These insights provide valuable guidance for policymakers and stakeholders in the field of data governance, emphasizing the importance of taking a holistic approach to ensure equitable and effective data governance practices.

Kunle Olorundare

The analysis consists of several arguments surrounding internet and digital rights. The first argument suggests that the internet should be open, with the internet being referred to as the network of networks. It highlights that data is generated on the internet, emphasizing the importance of an open and accessible internet for everyone. This argument is supported by the positive sentiment towards an open internet.

The second argument focuses on individual privacy rights. It asserts that everyone should have the right to privacy when using the internet or phone. The Internet Society, an organisation mentioned in the analysis, is stated to believe in digital rights and supports the idea of individual privacy. This argument also has a positive sentiment towards the importance of privacy for individuals.

The third argument discusses the need for individuals to have control over their own data. The Internet Society is mentioned to believe that communication should be encrypted end-to-end and supports the concept of rights to be forgotten. This argument highlights the importance of data sovereignty and privacy, aligning with the positive sentiment towards individuals having control over their data.

The fourth argument revolves around how government, the international community, or the private sector can contribute to strategy challenges and solutions related to data and digital rights in the public sector. Although no concrete evidence is provided to support this argument, it remains neutral in sentiment.

The fifth argument explores the implementation and enforcement of digital rights in Nigeria. However, no supporting facts or evidence are mentioned, resulting in a neutral sentiment towards this argument.

The sixth and final argument discusses the practicality of implementing data law in Nigeria, specifically relating to data duplication. However, no supporting facts or evidence are provided, leading to a neutral sentiment towards this argument as well.

In conclusion, the analysis presents arguments advocating for an open internet, individual privacy rights, and individual control over data. It also raises questions about the involvement of government and the private sector in addressing data and digital rights challenges. However, the lack of supporting evidence weakens the arguments regarding digital rights in Nigeria.

Afolabi Salisu

During the discussion, the speakers addressed several crucial topics pertaining to database management, regulatory policies, and data privacy in the African context. One notable aspect emphasized was the substantial financial benefits that the private sector can derive from effectively exploring and capitalising on extensive databases. By leveraging the wealth of information stored within these databases, businesses have the opportunity to gain valuable insights and contribute to economic growth.

Another critical area of focus was the implementation and monitoring of frameworks. The speakers highlighted the importance of a regulatory perspective to ensure that frameworks are effectively implemented and monitored to achieve the desired outcomes. It was stressed that having frameworks in place alone is insufficient; efficient systems must also be in place to track and evaluate progress. This promotes the attainment of intended objectives and assists in identifying areas that require improvement or further action.

The discussions on data privacy and data governance emphasised the need for a unified approach across Africa. Afolabi Salisu underscored the importance of a cohesive strategy, treating data privacy and governance both as individual-country matters and as cross-cutting issues. The engagement of the African parliamentarians network on Internet governance with counterparts from the US and Europe highlights the importance of collaborative efforts on a global scale.

In addition, Afolabi Salisu expressed the belief that APNIC (Asia-Pacific Network Information Centre) could play a significant role in ensuring data governance and privacy across Africa. This endorsement underscores the potential for collaboration between regions and organisations to address the challenges and concerns surrounding data privacy and governance.

In conclusion, this discussion shed light on the financial opportunities stemming from large databases, the significance of effective framework implementation and monitoring, the necessity of a unified approach to data privacy and governance in Africa, and the potential role of APNIC. This comprehensive analysis offers valuable insights and perspectives for stakeholders and policymakers navigating the complex realm of database management, regulatory policies, and data privacy.

Bernard Ewah

The analysis explores the implications of data governance, commodification, and protection in light of the increasing value of data. It addresses the challenges arising from evolving data ownership, multiple data residences, and owners. The emergence of new data sources and the growing reuse of data by various parties further emphasize the need for robust data governance.

Efficient regulatory instruments are essential to strike a balance between data commodification and data subject protection. The complexities of handling structured and unstructured data add to the regulatory challenges. Regulatory authorities must carefully navigate these complexities to ensure data subjects’ privacy and rights are safeguarded while promoting innovation and market growth. Bernard Ewah supports market-facing regulatory instruments that foster innovation and data protection. Private sector investments in digital infrastructure are also crucial in supporting these regulatory measures.

The analysis demonstrates the positive impact of data in accelerating the achievement of sustainable development goals. Examples include the establishment of a dedicated data protection organization in Nigeria and the passage of a bill protecting data subjects’ privacy. Effective data utilization enables governments and organizations to develop strategies for sustainable development.

Capacity building across various actors is highlighted as a key aspect of the data ecosystem. Equipping practitioners and users with knowledge enables them to navigate the complexities of data governance, commodification, and protection. Engagement with partners and capacity building across different levels of government are critical for successful collaboration and coordination.

The economic value of various data types, such as social media, mobile phone, scanner, financial, automatic identification systems, and geospatial or satellite data, is recognized by governments in Africa, including Nigeria. This recognition positions Africa as a potential data hub, fostering economic growth and increased participation of its people.

Overall, the analysis emphasizes the need to enhance data protection and valuation in response to evolving data ownership and multiple data residences. It emphasizes the importance of regulatory instruments that promote innovation while safeguarding data protection. Furthermore, utilizing data effectively and building capacity among various actors contribute to accelerated sustainable development and Africa’s potential as a data hub. The analysis advocates for comprehensive approaches that balance data governance, commodification, and protection to unlock the full potential of data in today’s interconnected world.

Jimson Olufuye

The analysis reveals several key insights. Firstly, it highlights the significant value of data in economic growth. The use of World Bank open data and NCC data in research for an international organization has positively impacted Nigeria’s GDP. Nigeria’s re-basing of the economy has propelled it to become the number one economy in Africa, underscoring the importance of data in driving economic growth.

The analysis also emphasizes the importance of localizing data centers to stimulate their construction and develop robust data infrastructure. The inclusion of a company providing consultation for building data centers indicates acknowledgment of the benefits of localizing data.

Furthermore, the analysis commends the government for establishing a proactive policy framework for data governance and enacting necessary acts. This approach ensures responsible handling of data and compliance with regulations.

Clear governmental frameworks and guidelines are crucial for companies dealing with cross-border data. Examples such as Jumia, Conga, and eTransat demonstrate the benefits of these frameworks. Addressing issues like conflict resolution and prosecution-related matters between countries is essential for smooth cross-border data transactions.

The analysis suggests African banks should expand into Europe and other parts of Africa to support economic growth and strengthen global connections. Although no specific evidence is provided, this expansion is seen as a strategic move.

The endorsement of the African Union Convention on Cyber Security and Data Protection is considered beneficial. This international framework facilitates collaboration and establishes common standards for safeguarding data.

Signing the Country Code Top-Level Domain (CCTLD) agreement would enhance trust. Banks domiciling their country code top-level domain in Nigeria would reinforce Nigeria’s reputation as a trusted data host.

The analysis acknowledges Nigeria’s progress in data governance and commends the efforts of the event organizer and the National Assembly. This recognition indicates positive strides in data governance practices.

Efforts to enable digitalization and effective data monetization through APIs are mentioned, but no evidence is provided. It implies the Minister of Internal Affairs is actively involved in this initiative.

The importance of cyber security in protecting data and systems is highlighted.

Although no specific evidence or arguments are presented, enabling the private sector for compliance, execution, and management is deemed necessary.

The analysis highlights the potential of the African Continental Free Trade Zone to empower the private sector. With a market of over 1.3 billion people, leveraging this potential is crucial.

Finally, the analysis expresses support for signing and ratifying the Malabo Convention, without further details or evidence.

In summary, the analysis highlights the value of data in economic growth, the benefits of localizing data centers, the importance of proactive policy frameworks for data governance, the advantages for companies dealing with cross-border data, the potential expansion of African banks, the endorsement of the African Union Convention on Cyber Security and Data Protection, the need to sign CCTLD for trust cultivation, Nigeria’s progress in data governance, the focus on digitalization and data monetization, the importance of cyber security, enabling the private sector for compliance and management, the potential of the African Continental Free Trade Zone, and support for signing and ratifying the Malabo Convention.

Sam George

The data policy framework of the African Union (AU) serves as a guide for considering data governance, data sovereignty, and cross-border data flows within the continent. Different African regions, including ECOWAS, AALA, and SADC, actively participate in addressing these data-related issues across Africa.

However, there are concerns that western pressure may inadvertently lead to poorly implemented data protection legislations. For instance, Egypt has had a data protection law for several years, but the absence of an appropriate authority has hindered its effective enforcement. This highlights the importance of not treating legislation as a mere checkbox exercise, but instead ensuring robust implementation and enforcement mechanisms.

APNIC, the Asia-Pacific Network Information Centre, plays a crucial role in advocating for the allocation of resources from portfolio ministries to data protection agencies and commissions. This support strengthens data protection measures and ensures their effective operation.

Additionally, data protection should be prioritized not only in urban areas but also in rural locations. Ghana, for instance, has a significant gap in data protection standards between capital cities and other areas. This discrepancy underscores the need for comprehensive and inclusive data protection measures that extend beyond urban centers.

In conclusion, the AU’s data policy framework provides guidance for addressing data-related challenges in Africa. While various regions actively engage in this process, caution must be taken to avoid the unintended negative consequences of western pressure on data protection legislation implementation. APNIC’s role in advocating for resources is essential for the functioning of data protection institutions. Finally, it is crucial to prioritize data protection in both urban and rural areas to ensure comprehensive safeguards for individual privacy rights and effective data governance.

Adedeji Stanley Olajide

Data protection laws are vital for ensuring the usability, consistency, and security of data. The House Committee on ICT and Cybersecurity strongly supports implementing strict laws with clear rules and severe penalties for violations. These laws must address the challenges posed by the constant movement of data, particularly financial and health records, and should allow for regular updates and enhancements.

Effective data protection requires more than just lawmaking. It should encompass monitoring and evaluation of data controls to ensure compliance and effectiveness in safeguarding data. Monitoring efforts help identify potential gaps and weaknesses in the data protection framework, enabling timely improvements.

The versatility of data and its dependence on its source are important considerations for policymakers when crafting data protection laws. A comprehensive approach is needed that takes into account the different sources and applications of data.

Additionally, incorporating practices like scrubbing and data staging into the lawmaking process is essential. Scrubbing involves cleansing data by removing personally identifiable or sensitive information, while data staging involves preparing and organizing data for analysis or use. These practices enhance the responsible and secure handling of data.
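The scrubbing step described above can be illustrated with a minimal Python sketch. The field names, salt value, and record shown are hypothetical, chosen purely for illustration; a real pipeline would follow the applicable data protection law's definitions of personal data:

```python
import hashlib

# Hypothetical policy for illustration only.
SALT = b"rotate-me-per-dataset"                       # assumed per-dataset salt
DIRECT_IDENTIFIERS = {"name", "phone", "national_id"}  # dropped outright
PSEUDONYMIZE = {"email"}                               # replaced by a stable token

def scrub_record(record: dict) -> dict:
    """Remove direct identifiers and pseudonymize linkable fields."""
    clean = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            continue  # never let direct identifiers reach the staging area
        if key in PSEUDONYMIZE:
            token = hashlib.sha256(SALT + str(value).encode()).hexdigest()[:12]
            clean[key] = f"user-{token}"  # same input -> same token, so joins still work
        else:
            clean[key] = value
    return clean

raw = {"name": "Ada", "email": "ada@example.com", "phone": "0801", "age": 29}
print(scrub_record(raw))  # name/phone dropped, email pseudonymized, age kept
```

Scrubbed records like these would then flow into the staging area, where data is organized and validated before analysis — keeping identifiable data out of downstream systems entirely.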

In conclusion, data protection laws should prioritize usability, consistency, and security. The House Committee on ICT and Cybersecurity advocates for strict laws with clear rules and severe penalties. Regular monitoring and evaluation are necessary to ensure the effectiveness of these laws. Policymakers should also consider the versatility of data and its dependence on its source. Including practices like scrubbing and data staging strengthens the data protection framework. By addressing these aspects, policymakers can create a comprehensive and effective legal framework that safeguards data and promotes responsible use.

Chidi Diugwu

In the analysis, several speakers discussed the importance of data protection and regulation. One key point that was highlighted is the significance of metadata. Metadata refers to structured, descriptive information about data. It was mentioned that when using applications, the phone can gather various pieces of metadata such as phone numbers, location, geophysical data, and steps taken. This data can be used for profiling purposes in artificial intelligence.

Cross-border collaborations in data regulation and control were also discussed. It was noted that data travels internationally at the speed of light, raising concerns about the extent to which data travels and how the information is used. For example, it was mentioned that users’ data can travel as far as the United States of America or China. Understanding these aspects is crucial for effective data regulation.

The speakers also emphasised the importance of respecting the rights of data users. It was highlighted that consumers have the right to choose what information to assess, whether or not to share their data, the ability to stop participating, and the freedom to change their mind. Additionally, there was mention of a duty of care on the part of data controllers and processors.

The active regulatory role of the Nigeria Communication Commission (NCC) in data protection was also discussed. The NCC was described as having various regulations and interventions in place. They use tools such as lawful intercept and child online protection, and have a computer security incident response team that monitors and alerts the telecommunications sector accordingly. This demonstrates the NCC’s commitment to ensuring data protection.

Another important topic raised was digital rights, particularly in relation to access to data and cybersecurity. One speaker, Dr. Chidi, highlighted the need to upskill women in cybersecurity. This reflects the importance of inclusivity and gender equality in the digital world.

Lastly, it was suggested that systems governing digital rights should operate with principles of transparency and explicit privacy policies. This ensures that different sectors, such as human devices and automated systems, adhere to clear guidelines regarding the use of data. Transparency and explicit privacy policies help build trust and protect individuals’ rights.

In conclusion, the analysis provided insightful information on various aspects of data protection and regulation. It emphasized the significance of metadata, cross-border collaborations, respecting data users’ rights, the active regulatory role of the NCC, digital rights, and the importance of transparency and explicit privacy policies. These discussions shed light on the complexities and challenges in navigating the ever-evolving digital landscape.

Mary Uduma

The Nigeria Open Forum session on data governance and trust took place in the afternoon. It was chaired by Senator Afolabi Salisu, who is highly respected as the senior committee chair on ICT and cybersecurity. The participants introduced themselves and shared their background and expertise in the field.

During the session, there were active discussions on the importance of establishing robust frameworks and policies for managing and protecting data, especially with the growing reliance on digital technologies. The attendees offered diverse viewpoints, generating fruitful dialogue on the subject.

The valuable contributions made by the attendees were appreciated, highlighting the significance of collaboration and knowledge sharing to effectively address the challenges of data governance and trust.

Towards the end of the session, it was suggested that a future meeting be held in Abuja, Nigeria’s capital, to continue the discussions and further collaborative efforts. This idea was met with enthusiasm, demonstrating the participants’ desire to continue working towards enhancing data governance and trust within Nigeria.

Overall, the Nigeria Open Forum session provided a platform for experts and professionals to exchange ideas, share best practices, and develop strategies for ensuring data security and reliability. The session promoted collaboration and decision-making in the realm of ICT and cybersecurity, ultimately contributing to a more secure and trustworthy data environment in Nigeria.

Session transcript

Mary Uduma:
Hello. Good morning, good afternoon, good evening, depending on where you are connecting from. This afternoon, we have the Nigeria Open Forum session. It is on data governance and trust. I will hand this mic over to the chairperson. We have the privilege and honor of having the distinguished Senate Committee Chair on ICT and Cybersecurity, Senator Afolabi Salisu. Over to you, sir. You’ll introduce yourself and maybe allow the speakers also to introduce themselves. Thank you very much. Over to you.

Afolabi Salisu:
Thank you so much, Madame. I would like to join you as well in welcoming everybody, both physically and online. Today is the Nigerian Open Forum at IGF Kyoto 2023. As I’ve been introduced, my name is Senator Afolabi Salisu. I’m the chairman of the Nigerian Senate Committee on ICT and Cybersecurity. I’m not just chairing this committee; I’ve also been an IT practitioner for close to four decades. I’m here both as a legislator and also as a practitioner. I’m proud to be Nigerian, too. Today, we’ll be looking at the issue of data governance and trust. We have a panel that will do justice to this topic. We have an array of people from the private sector, from the regulatory organizations, as well as from the legislature. Let me just ask the lead discussant and the panelists to introduce themselves briefly, and then we’ll start the ball rolling. I’m going to start from Dr. Bernard Ewah, please.

Bernard Ewah:
Thank you, distinguished senator, and a very great welcome to everybody who has joined us today. My name is Bernard Ewah. I’m the acting director of e-government development and regulation at the National Information Technology Development Agency.

Chidi Diugwu:
All right. Thank you, the moderator, and good afternoon or good morning to everyone watching. My name is Dr. Chidi Diugwu. I’m from the Nigerian Communications Commission. Thank you.

Adedeji Stanley Olajide:
Good afternoon, everybody. My name is Honorable Adedeji Stanley Olajide, also a rep member for the Nigerian National Assembly, the chairman of the House Committee on ICT and Cybersecurity.

Jimson Olufuye:
Good afternoon. Thank you. And that is saying greetings to everyone listening in Japanese, and welcome. My name is Jimson Olufuye. I have the privilege of being the chair of the Advisory Council of the Africa ICT Alliance, which is made up of about 40 countries, private sector practitioner in the IT industry. I’m also the principal consultant at Contemporary Consulting, IT firm based in Abuja, Nigeria. Welcome, everybody.

Afolabi Salisu:
Thank you. I’m talking about being contemporary. We need to have a balanced mix. Therefore, I have online Nnenna Nwakanma, Satisfiable Gender, as well as online participation. We cannot be talking about the Internet without having a panelist on the Internet. So, Nnenna, please introduce yourself.

Nnenna Nwakanma:
Hello, everyone. I hope you can hear and see me. Bonjour from Abidjan. My name is Nnenna. I come from the Internet. Happy to be here.

Afolabi Salisu:
Thank you so much. Without much ado, we’re going to have the lead presenter, Dr. Bernard Ewah, who is going to do justice to the topic for some 10 minutes. And thereafter, we’re going to have our panelists, who will make their interventions in very few minutes so that we can have some time for the audience. So, at this point in time, I must also recognize, as part of the crew for this program, Kunle Olorundare, who is the online moderator. He’s also somewhere here. He’s the acting president of the Internet Society, Nigeria chapter. So, without much ado, Dr. Bernard Ewah, you have 10 minutes to lead us through the issue of data governance and trust. How far so far in Nigeria? Your time starts counting now.

Bernard Ewah:
Thank you very much. I hope I can live within the 10 minutes. In the last couple of days, a lot has been spoken about data governance and trust. For us now, we’re going to try to mirror that from the perspective of Nigeria, the journey we’ve taken and where we hope to navigate in the next few years. In the discussion on data governance and trust in Nigeria, we are approaching it from two dimensions. Of course, the commoditization of data has given rise to the need for countries to derive value from data, while at the same time protecting data subjects. However, in the last few years, this issue has continued to become even more complex, because we are seeing discoveries of new data sources. We are also seeing data increasingly residing in multiple locations, having multiple owners, and the subjects dwelling across multiple legal jurisdictions. For regulators, we have to be aware of these changing dynamics. More importantly, the fact that data has become a highly sought-after commodity means that there are more and more capabilities for its reuse by third parties. These are some of the challenges that regulatory authorities are dealing with. As I said at the beginning, the critical focus here is how to create that balance between extraction of value and protection of data subjects. Along the line of the challenges which I mentioned earlier on, of course, is the need for a broad set of data governance structures that recognize the increasing dynamism of data itself. A decade or so ago, it was common for us to focus on a particular organized, structured set of data, but that is not the case today. We are seeing data that is increasingly being integrated, so a mix of structured and unstructured data, while we continue to discover new data sources. That calls for a lot of new needs for infrastructure and storage. There’s also the challenge of filtering good data from bad data. 
In the last few years, we’ve also seen the occurrence of fake news and the question of how to deal with that. All of these are challenges for regulatory organizations. However, in the midst of this, there are lots of opportunities for all stakeholders. As I said earlier on, the key is having a healthy balance between the commodification of data and protection. That’s essentially the fine line that regulatory authorities in Nigeria have been treading, so that the regulations do not stifle development, while at the same time giving ample opportunities for stakeholders, particularly the private sector and other interest groups, to use data to grow the economy. What we’ve seen in terms of data governance is a move towards regulatory instruments that are market-facing: instruments that begin to recognize that, yes, innovation has to be on both sides, on the side of regulation just as we expect these regulations to instigate innovation in the market; to create new products, provide opportunities for private sector-led investments, or create provisions for new services. Importantly, in the mix of all of these is for regulations to create avenues for the private sector to invest in new digital infrastructures. Data itself is one of those potent infrastructures. That is the focus of the governance processes that regulatory organizations are focusing on. Lastly, the idea is to ensure that data can be used effectively to accelerate attainment of the Sustainable Development Goals. In Nigeria, we have taken very concrete and laudable steps, beginning with the creation of a dedicated organization for data protection, and the passage of a bill that promotes the protection of data subjects and their privacy and provides clear instruments for compliance and punishment of offenders. We’ve also passed the Code of Practice for Interactive Computer Services and Platforms. 
Just before this session started, we had an informal interaction with colleagues here from Kenya who spoke very favorably about Nigeria’s Code of Practice. That’s an example of how governance instruments can be used to enhance trust in society and allow participants and other stakeholders to play by the rules of the game. As I mentioned at the beginning, governance has to be driven in such a way that there are clear outcomes that lead to positive changes in the economy. That begins with building a strong knowledge base for practitioners as well as stakeholders as users of data; creating necessary and forward-looking policies that catalyze investments and development in the use of data as a resource for development; and providing digital platforms or public infrastructure, such as platforms that allow for the reuse of data, to extend the ideas of open government and open data, some of the areas that a data governance approach is seeking to achieve. Of course, I talked about infrastructure and the need to allow governance to enhance innovation, to allow the innovation ecosystem to tap the potential created by data to create the new products and services that strengthen the economy. Lastly, provide support for trade arising from innovations. So in conclusion, and I hope I’m within the 10 minutes, Mike, key to enhancing the use of data is the need for us to strengthen the capacity of various actors in the system, continue to create awareness among users, and promote engagement across levels of government and across other user communities who stand to benefit from data. An example is statistical organizations like the National Bureau of Statistics in Nigeria and the Population Commission. So the availability of new data sources, and the potential that they come with to strengthen existing practices, has to be followed up with effective engagement with those partners and building capacity across layers of government, from national to subnational. Thank you very much.
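[Editorial aside: the “filtering good data from bad data” step Dr. Ewah describes often comes down to simple validation rules applied before records enter a governed dataset. The sketch below is a minimal illustration with invented field names (`subject_id`, `consent`), not any scheme prescribed by the Nigerian framework.]

```python
# Split incoming records into usable and rejected sets against a minimal
# schema: the "filtering good data from bad data" step in plain code.
# Field names and rules here are invented for illustration only.

def is_valid(record):
    return (
        isinstance(record.get("subject_id"), str) and record["subject_id"]
        and isinstance(record.get("consent"), bool)
        and record["consent"]  # keep only records with recorded consent
    )

incoming = [
    {"subject_id": "NG-001", "consent": True},
    {"subject_id": "", "consent": True},        # missing identifier: bad
    {"subject_id": "NG-002", "consent": False},  # no consent: bad
]

good = [r for r in incoming if is_valid(r)]
bad = [r for r in incoming if not is_valid(r)]
print(len(good), len(bad))  # 1 2
```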

Afolabi Salisu:
Thank you so much, Dr. Ewah. You are free to clap, by the way. Thank you so much, Dr. Ewah. And I think the important thing you have said to us is that Nigeria has progressed in terms of data governance and trust. We used to have a data bureau. Now we have an act that has established a National Data Protection Commission, including the Code of Conduct, meaning that we now have a legal framework. But one thing is to have a legal framework. And then I’m coming to you now. One thing is to have a legal framework. From the civil society perspective, what more do you think we need to do to enhance transparency in terms of data acquisition, particularly of citizens’ data, and the right of consent? Do you think what we have is sufficient now? Or what are those new elements that you think we need to introduce to ensure that data collection and data acquisition, from the civil society perspective, are respected? Nnenna, your two minutes start from now. Nnenna, bonjour. Thank you. Nnenna? Hello. OK. Yes. Did you get my question? No. Can you repeat? It’s raining here. And the sound is low. But please go ahead. OK. Dr. Ewah has spoken extensively about some of the frameworks that have been put in place by the Nigerian government, including the establishment of the Nigerian Data Protection Commission and a number of related measures. But from the civil society perspective, now that we have this framework, what more do you think we need to do to enhance transparency and accountability in data acquisition, including the right of citizens to decline or consent to the acquisition of their data?

Nnenna Nwakanma:
OK. So this is a Nigerian forum. And were it not that this was an international forum, I would have switched to pidgin English. But I think I will speak proper English for accountability purposes. So the first thing is, I love the presentation because it took the angle of regulation that encourages innovation. Because in most cases, when we talk about regulation, we are talking about imposing the law. We are talking about stifling. We are talking about fining people and all of that, basically scaring them away. What our parents do at home. Don’t do this. Don’t do this. Don’t do that. But then what should we be doing? No one tells us. But I love this because we’re looking at regulation for innovation. That’s my first submission. My second submission is that, as Nigerians, we think that we are big, we are populous, and all of that. But I don’t think, in the case of data governance, that really matters a lot. It is in the creating, the reuse, the reusing of data that value is added, not in the population of a country, OK? So I really want to put that across. It’s not enough to say we are over 200-whatever million. That doesn’t make any economic sense. What makes economic sense is when the dollars come in, especially now that dollars are in Nigeria. So one is regulation for innovation, value in the data itself for poverty reduction, for development, and for wealth creation. That is what is of importance to me. Now, governance of data cannot happen in a silo. I see a lot of brothers and sisters here. We are Nigerians, and we know how governance is in Nigeria. I don’t think that the data governance in Nigeria will far exceed the general governance in Nigeria. And I’m saying this because we have senators. We have governors. We have people in authority in this room. So my feeling is that the data governance in Nigeria will not supersede the general governance that we have in terms of other things, the social, political situation in Nigeria. Now, flows follow flows. 
And I really want to put some energy on this. As citizens, which data do we have access to? Which data are we creating? Which data is being valued? I mean, I’m from Aba. I support Enyimba. But then much of the data I will create will be on the European Champions League or the English League, because that is where much of the talk is going. So earlier in the week, I was talking about data as in travel. There are hubs. There are important things. And the question is, are we creating the data? Are we availing the data that will be used and reused for value purposes? I know I only have two minutes. Let me end with data dialogues. We talked about trust. Having that continued discussion is key. I’m happy that at least Nigeria has a framework. Nigeria has an agency. We know the who, the what, the how, and the why. Congratulations to Nigeria. Now, the continued dialogue among partners, citizens, and the private sector is what will create that trust. We are not there yet, but we are heading in the right direction. And I’m happy that we’re having this conversation, though it took Kyoto to bring us together. But I hope that after this, we can continue the conversation at home so that we can operationalize what we have and make it better. And once again, data, for me as a citizen, is for me to feel safe, but it’s also for me to make money and be a better Nigerian. I’ll stop here for now.

Afolabi Salisu:
Thank you so much, Nnenna. You can give Nnenna a virtual clap if you like. One thing she said that came through very strongly for me was: how do we derive value from data? And when it comes to economic engagement, the private sector is best suited to pursue it. Dr. Jimson Olufuye, you have been in this space for quite some time, and you are in the private sector. We have a population of around 200 million people. Seventy percent of them are youth. They generate data now. What are the opportunities? How will the private sector tap into these opportunities to turn this data into naira and kobo, into yen and into dollars?

Jimson Olufuye:
Very outstanding question, because money is important. Creation of wealth, boosting income and GDP, is very, very key. I’ll be speaking, of course, from the private sector perspective, and I would like to do a little introduction to how this idea came about, for the many that are listening to us who may not know much about the IGF itself. The IGF is one of the two tracks of activities from the Tunis Agenda of the World Summit on the Information Society in 2005. The Internet Governance Forum is one track; enhanced cooperation is another track. And it was clearly stated that all stakeholders must be involved in this. All stakeholders: the government, the private sector, civil society, academia, and the technical community must all be involved. So as private sector, we have AfICTA for Africa, basically. And as I mentioned, for the Africa ICT Alliance, the vision is to fulfill the promise of the digital age for everybody in Africa through advocacy. And we’ve been relating with many governments, like Egypt and, of course, Nigeria, where NCC and NITDA are very proactive in that regard. So from the private sector perspective, there’s a lot of value to be derived from data. And when you talk about data, of course, you’re talking about the internet, and internet penetration is quite high. And we commend the government for putting a proactive policy framework in place, and of course the necessary acts for data governance, as Dr. Ewah mentioned. So with this framework in place, we could really achieve a lot. Not too long ago, I did research for an international organization that used the data available: open data, World Bank open data. We also used data from NCC available on their website. Nigeria belongs to the Open Government Partnership, so data needs to be released and made available. Without data, you can’t get more wealth. I saw something on social media. Somebody said that in 2023, Nigeria has become poorer, you know, than it was in 1980, and so we are at the same level. 
I looked at the data and I said, it’s not true. Because in 1980, our GDP per capita was just $800, and in 2023, our GDP per capita is three times that, about $2,500. And Nigeria is the number one economy in Africa. And that is because we rebased the economy. We can see the impact of data, the impact of the internet, in boosting the GDP. And we have not gotten there yet; we’ve not gotten to our optimal position, because our optimal position should be in the range of $1.5 trillion. So data has a lot of value to add. And again, I commend the government for the structure, you know, that is in place. Now, there is a measure, an item of that act, that talks to localization of data. That policy encourages the building of more data centers. For my company, we do that. We do consultation: if you want to build a data center, we do that. Cybersecurity, we do that, you know, as well. Then look at, on local…

Afolabi Salisu:
I’ll charge you for adverts.

Jimson Olufuye:
Okay. Well, those are the benefits. You said I should bring the value, so I’m talking about the value. Now, we know we have companies like Jumia, like Konga, like eTranzact. These are companies that deal with cross-border data. And cross-border data, it’s important we look at it. The government needs to look at that, and then put a framework in place to ensure that if there are issues with data, even in, say, Ghana or Kenya, we can resolve those issues, especially with regard to prosecution, okay, law enforcement. Okay. And then we have banks all over Africa. We need to encourage them to move their data from Europe to Africa. So we need to, therefore, be involved in endorsing, for example, the African Union Convention on Cyber Security and Personal Data Protection. With that, we have a framework for relating with other countries. So these are international values that we could also create. And, of course, we have our ccTLD, okay, with NiRA being our registry. The ccTLD needs to be signed, because once it’s signed, there’ll be more trust. Because we are talking about data governance and trust, there’ll be more trust. So the zone needs to be DNSSEC-signed. We are looking forward to that. And with that, we’ll see many of our banks domiciling under the ccTLD, the country code top-level domain. They’ll be domiciling it in Nigeria. So those are measures that will create trust. So we are making progress, no doubt about that, okay, and it needs to be sustained. And that’s why I appreciate the organizers of this event, this forum at the international level, for projecting the progress Nigeria has been making, and also for looking at the other things we need to do. I also thank the National Assembly for being proactive. So I appreciate you, the Senate Committee on ICT, for being here. We hardly have this category at this level here, because I’ve been involved in IGF for a long while as a private sector entity, and indeed, truly, this is the first time. 
So it’s a time to look forward to more breakthrough with regard to data governance and trust in Nigeria. Thank you very much.

Afolabi Salisu:
Thank you so much. The private sector can smell money. I saw that there’s money to be made, particularly because we have a large database now that we need to begin to explore and exploit for financial benefit and also to grow the economy. Dr. Chidi, I’m going to come to you now, from NCC. Right, we have the Data Protection Commission in place. We have a number of frameworks. And some people will say, our problem is not about initiatives; it’s about finishatives, finishing what we initiate. Now that we have all of these frameworks in place, what do we need to do to ensure, or rather, from the regulatory perspective, what will your organization and other organizations in this space do to ensure that the implementation and monitoring of all of these frameworks produce the desired outcome, so that we can begin to have finishatives?

Chidi Diugwu:
Right, so here, let me move this for you so that I can talk to you. Okay. I hope my two minutes hasn’t started. Thank you very much, Distinguished Senator of the Federal Republic of Nigeria. I was actually going to start on a different note, but I was a little bit interested in what Nnenna was talking about, and so, as a regulator, I think it’s important that I address it. But before then, I thank the lead presenter for his very beautiful job. There’s a known correlation between population size and market size. So, telecommunication success rate is defined by the size of the market. In Nigeria, as far back as six months ago, we had over 226 million subscribers. And then, talking about data, it’s basically what you get off the internet. We have about 156 million Nigerians using the internet, and out of these, about 92 million have broadband connectivity. So, in telecommunications, the compounded annual growth rate is, you know, ranging anywhere between 12 to 14% every year. And then, we’re also impacting other industries like manufacturing and financial services, growing, you know, the fintech market. You know, every day, you can stay at home now and generate enough revenue. And there are many data sources. But that’s not the subject of today. I think the subject is more or less to talk about data, and then what we should be aiming our big guns at. What is data? There must be clarity. Data in itself can mean nothing. But I think the question would have been metadata. You know, metadata is structured, you know, information; if you like, contextual information about data. And for us as regulators, that’s what we’re interested in. Now, let me define metadata further, using a lot of applications. If you are using a mobile phone device, what are you doing? Your phone is able to stream your phone number, your location, your geophysical position, whatever it is, the steps you take, and so on and so forth. 
And then, if you have many applications, this can even go as far as tracking your heartbeat and so on and so forth. And then, if you are using email, the email provider is able to profile how you stay on the email, how you respond to email, whom you are responding to, the time it took you to respond, the messages that you read and those that you do not read. These are metadata. And somebody somewhere is profiling this thing in the name of artificial intelligence. And these are major sources of revenue. So, if we begin to talk about people that are browsing, okay, fine. You are browsing about going to London, or wherever it is. But the moment you click on the internet, your information is being, you know, generated. And then, somehow, by some sort of compelling means, you could be told that if you do not click yes, you cannot have access to further information. So, metadata is, you know, very disruptive, and we must address it as regulators. Now, the NDPR, you know, which essentially is the Nigerian response to how data is being used, is very effective and very commendable. However, we need to know that the battlefield is not, you know, at the national level only. Metadata travels at the speed of light. And then, you may never fully track it; you know the source, because it comes from the data subject, and from there it could go, as far as you know, through data in transit to the destination. So, what, where, to what extent and at what speed does our data travel? These are very big questions that we must answer as regulators. Now, I’ll give you an instance. The NDPR is somewhat localized, talking about how you behave in the marketplace in Nigeria, to make Nigeria a corporately, you know, responsible trading environment. That is fine. But what do you say to users whose data has traveled as far as the United States of America or China, all in the name of cookies and stuff like that? So, these are the sources of revenue that never get to be repatriated to Nigeria. 
And then, we must begin to talk about cross-border collaborations, to understand to what extent data travels and to what extent our information is going to be used to shape the future of mankind. We are talking about algorithms, you know, deploying artificial intelligence, studying how human beings behave. They are very fine and good. But at the very convenient level, all we say is that the consumer has the right to choose what information they want to access. The consumer has got the right to know whether to give out their data or not to give out their data. The consumer has got the right to say, I do not want to participate any longer. The consumer has the right to change their mind. But there is a duty of care on the part of both data controllers and data processors. And unfortunately, in most cases, data controllers and data processors are outside of our jurisdictions. So, in a nutshell, as a regulator, what we do is that we have got a number of regulations, a number of interventions that we use from time to time, like lawful intercept, you know, and then child online protection, and then other regulatory instruments available on our website; you can go there and see. We have a robust CSIRT regime, that is, the computer security incident response team, which monitors, you know, threats as they develop and then alerts the telecommunications constituency accordingly. Thank you very much.
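[Editorial aside: Dr. Diugwu’s point about email metadata can be made concrete. The sketch below uses Python’s standard `email` module to show how much a profiler learns from headers alone, sender, recipient, timing, and subject, without ever reading the body; the addresses and timestamps are invented.]

```python
from email import message_from_string

# An invented message: only the last line is content; the rest is metadata.
raw = """From: ada@example.ng
To: chidi@example.ng
Date: Sun, 08 Oct 2023 14:05:00 +0900
Subject: Re: data governance session

See you in the session hall.
"""

msg = message_from_string(raw)

# Headers alone reveal who talked to whom, when, and about what:
# the "structured information about data" that gets profiled even
# when the message body is never read.
metadata = {h: msg[h] for h in ("From", "To", "Date", "Subject")}
print(metadata)
```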

Afolabi Salisu:
You better clap for the regulator, because non-clapping can be a violation of regulation. Thank you, Dr. Chidi, for that very beautiful submission. I’m going now to my brother and colleague in the House of Representatives, the Chairman of the House Committee on ICT and Cybersecurity. And your question is very simple. You made the laws; the National Assembly makes all the laws, including the one that governs data protection. What roles are there for the National Assembly, beyond making the laws, to ensure that these laws are obeyed, not just by individuals, but by corporate players in the space? So, I now have Honorable Stanley Adedeji to give us the legislative intervention.

Adedeji Stanley Olajide:
Thank you, Chairman. Thank you, everybody. I am Honorable Adedeji Stanley Olajide. Well, let’s just put this in perspective, because to enact laws, there are a few things that we must understand. What is the value? I mean, data is our jewel. That’s the new oil. In order for us to do what we do best, we have to promote laws that basically will make data usable and will also protect the integrity of the data. Also, in order for data to be useful, it must be consistent and secured. For example, let’s talk about the security aspect of data, which our laws must protect: you want to make sure that the chain of custody of data is protected in our laws. You don’t want data in the hands of the wrong person. You only give people rights to the data that they need, not indefinite access to the data. Then also, you have to make sure that segregation of duties is built into your data structures as well, with our laws. And there will be clear monitoring and evaluation of all these controls, because you build controls to help you achieve these things. And the laws must be very clear on how all these things are going to be achieved, and they will be very strict. The law is also going to provide very clear, strict rules: you break these rules, there will be sanctions, stiff sanctions, for breaking these rules. So all in all, we have to legislate with understanding, because data is a moving target. It’s not a static thing. When you’re dealing with historical data, it’s different; historical data is static. But when you’re dealing with financial data or health records or medical records, those are more like they’re on the move. So you always have to constantly revamp your laws to deal with the challenges of now. So if for some reason there are new rules that say you only give a certain part of your DNA structure to the users or to the government, you restrict it to only that. 
So in clear language, I don’t want to go into too many technicalities. In clear language, we have to protect our data to make it usable, flexible, and secured, and make sure that this is the jewel that we are going to use. For example, we just went through COVID-19. The amount of data that medical, I mean, researchers are going to need from this COVID-19 data: have we protected it? So we are going to have to make sure that we have laws in place to protect that data, so that we can uncover the values attached to it. So that’s what we are going to be doing, to make sure that our laws can guide those principles. Thank you.
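[Editorial aside: the controls Honorable Olajide lists, rights limited to the data a role needs and no indefinite access, can be sketched in a few lines. This is an illustration with invented roles and records, not anything the Nigerian data protection law itself specifies.]

```python
from datetime import date

# Invented role -> permitted-fields map: each role sees only what it needs.
ROLE_FIELDS = {
    "researcher": {"diagnosis", "year"},  # medical facts, but no identity
    "billing": {"name", "year"},          # identity, but no diagnosis
}

RECORD = {"name": "A. Bello", "diagnosis": "malaria", "year": 2023}

def view(record, role, expires):
    """Return only the fields this role needs, and only while the grant lasts."""
    if date.today() > expires:              # no indefinite access to the data
        raise PermissionError("access grant expired")
    allowed = ROLE_FIELDS.get(role, set())  # unknown roles get nothing
    return {k: v for k, v in record.items() if k in allowed}

print(view(RECORD, "researcher", date(2099, 1, 1)))
```

The expiry check and the per-role field filter correspond to “not indefinite access” and “only the rights they need” in the speech above.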

Afolabi Salisu:
Thank you. And I’d like to thank my brother for that intervention. Okay. And when Dr. Chidi was talking the other time, you mentioned Cross-Border Cooperation. And I’m also delighted to say that we have in this audience today the African Parliamentary Network on Internet Governance, APNIC, led by our Secretary General, Honorable Sam George, who is from Ghana technically, but he’s actually Nigerian. He told me off camera that he loves Nigeria and you’ll love more that he loves Ghana. So I’d like to welcome the members of APNIC who are here from different countries, from Ghana, from Gambia. Thank you so much for coming. I also would like to congratulate the House, the Deputy Chief Whip of the Nigerian House of Representatives, Honorable Adewumi Uriyomi Onanuga. I mean, thank you, Deputy Chief, for coming here. So I’m going to yield ground. We have some 90 minutes left. I just think we need to take interventions from the audience, both online and physically here. But we’re going to be talking about data without hearing the voice of the youth. I’m going to turn my moderator into an interventionist now. Kunle Olorundare, you’re the acting president of the Internet Society. Oh, Kole is online. Okay. Kole, if you’re online. Yes, I’m here. You’re here?

Kunle Olorundare:
Yes, I’m here.

Afolabi Salisu:
Okay, great. Now, the bulk of the data being generated actually comes largely from the youth, so we can’t be talking about data without hearing their voice. For a moment, you are going to transform from online moderator to panelist — that’s the power of the chairman. So can I have your perspective — the perspective of the youth, and of the Internet Society in Nigeria — on this issue of data governance and privacy in Nigeria?

Kunle Olorundare:
All right. Thank you very much, Mr. Moderator. I am very excited to be on this call. First of all, let me appreciate our honorables, who have made it a point of duty to be here despite their busy schedules. It’s a good thing that we are having this kind of discussion at an international forum; it shows that we have a lot to do together in terms of multistakeholderism, and that is the right way to go. Back to your question: in the Internet Society, we believe that the Internet — the network of networks, which is where most of this data is being generated — should be open. We also believe in digital rights, in the sense that everybody has a right to privacy. Whatever data I generate should be my data, and nobody should eavesdrop on me when I’m using the Internet or my phone. As a matter of fact, there is something we preach every year, and I think this is the month: encryption. Encryption is the end-to-end protection of your messages and your data, so that nobody else can access them or listen in. A good example: when I’m using WhatsApp, I want to ensure that what I’m transmitting is received only by the intended person, and nobody can intercept it. That is one of the things we believe in at the Internet Society, and I’m excited that we are discussing it. We also believe in what is known as the right to be forgotten: if I have done something online and I want it to be erased, it should be erased, and nobody should go back and try to trace it.
Once I have said that something should be forgotten, it should stay forgotten. So these are some of the values we hold in the Internet Society. We also believe that the Internet should be secured, and when it is secured, nobody can access my data. Privacy is personal, and we should treat it as such: nobody should have access to my data, because it is my data. So that is what we believe in the Internet Society: we believe in encryption, we believe in privacy, and we believe that communications should be encrypted end to end. Thank you, Mr. Moderator.

Afolabi Salisu:
Thank you so much, Kunle, for that very beautiful intervention. I can assure you that your conversation with us online was encrypted, and I hope you understand that mine to you is encrypted as well — so don’t give it up.

Kunle Olorundare:
Absolutely. Absolutely.

Afolabi Salisu:
Honorable Sam George, in the last few days of the African Parliamentary Network on Internet Governance, we have engaged a number of our counterparts from other continents. Earlier today we met with a delegation from the U.S.; yesterday it was European parliamentarians. We discussed a number of cross-cutting issues, particularly because, as Africa, we need to approach this issue of data privacy and data governance with one voice. Would you like to share your perspective on what APNIC can do as an organization to ensure data governance and privacy, not just in individual countries but across Africa, given the fact that some of our tech start-ups have… Honorable Sam George, please?

Sam George:
Thank you very much, and a very good afternoon to everyone. APNIC, as you rightly stated, is the African Parliamentary Network on Internet Governance. It’s about a year and a half old. A little bit of background: we met in Kenya last year, then in Abuja about three weeks ago, and now we’re in Kyoto. Basically, when we’re discussing the subjects of data governance, data sovereignty, and cross-border data flows, we need to look at them within the context of the AU Data Policy Framework for the African continent. If we use that framework, we then have to break it down to the regional levels — what West Africa is doing through ECOWAS, what EALA is doing, what SADC is doing — and then down to the individual countries. Now, we need to be careful, because there is a lot of pressure from the European Union, from America, from other Western powers for our governments to tick checkboxes. So you have data protection legislation that has been passed but not implemented, because there is a difference between passing legislation and the whole different ball game of implementing it. We have different levels of adoption, and we need to be careful about that. In Ghana, we passed our data protection law in — I think 2012, yes, 2012 — so we’ve had a data protection commission running for over ten years. However, if you ask me whether I’m comfortable with the implementation of data protection, I would say a lot of people are not. Nigeria, for its part, only created its authority this year.
Egypt has had a law for about four or five years, but no authority has been set up. So you realize that you can just tick the checkboxes and say at international conferences that you have legislation — but how is that legislation actually impacting data protection, and how is it empowering the data protection authorities? We need to be careful about that. Now, for us as parliamentarians — as members of APNIC — one of the key things we need to look at is ensuring that portfolio ministries make resources available to data protection agencies, commissions, and authorities, because that is extremely important. I’ll use the case study of Ghana, because we have done this a bit longer than Nigeria has. The data protection commission needs to get its resources from government. But what you then see is that when data protection commissions or commissioners begin to enforce their act against government agencies, the funding for data protection gets cut, because you are beginning to create a problem for government: the government has to make sure that funds are appropriated and properly accounted for, and all of this creates governance issues. So the way government handles it is either to cut down your appropriation, or to keep the same level of appropriation — but there is one thing having your funds appropriated, and another thing having your funds disbursed.
So you may see a huge appropriation for data protection on paper without the funds ever being disbursed. And you realize that in Ghana, for example, data protection is top-notch in our capital cities, but once you step outside of them, what is the level of data protection? So, for us as APNIC, members have to make sure that data protection is treated as what it has now become: a fundamental right, like freedom of movement and free speech. And we must enforce this digital right by putting our money where our mouth is. Thank you.

Afolabi Salisu:
Thank you so much. With this intervention, the contest about who has better data rights is still not settled — we will keep working towards better data rights and better data privacy. Distinguished panelists, we have had a very wonderful time. I have just five minutes left, and I’m constrained. Online questions? Okay, can we have you, very briefly?

Kunle Olorundare:
All right. Thank you very much, Mr. Moderator. I have a couple of questions. The first was asked by Mr. Musa Megiri — sorry, I’m reading almost verbatim, but I think we can pick out the context. He asked how government, the international community, or the private sector can contribute actively to addressing data governance challenges and solutions at both the local and national levels. Mr. Benjamin Ebi-Ikiba asked why the digital rights of Nigerians are not protected, and who carries out these protections — so he wants to know about the implementation and enforcement of digital rights in Nigeria. Then there was a comment about cascading high-level policy statements down to the level of citizens’ access — a comment, I think, on the practicality of these measures and on how we are implementing the data protection act in Nigeria. And there is one more from Timi Ambali — this is a question, not a comment: what is government doing to ensure that data is not being used for the purpose of duplication? It has been pending for a long time. I think it’s a very good question, and I’m sure all the participants will be interested in the answer. Thank you, Mr. Moderator. Over to you.

Afolabi Salisu:
Thank you so much. I have five minutes left, and because we’re talking about data protection, I’m going to ask Dr. Ewah to answer first, then I’ll go back to this side, and the honorable here will wrap up. Your 60 seconds start now.

Bernard Ewah:
Great. A lot has been said, but to summarize most of the comments: everybody is talking about how we enhance data protection. If you see no value in your data, you will not see the need to protect it. So this speaks to how we ensure that data subjects recognize the value of their data. The key question, then, is what kind of data we are talking about — what kind of data can people derive value from? Governments across Africa, including Nigeria, should note a few examples. The first is social media data: if properly harnessed, it can be a very useful platform. Number two is mobile phone data: a huge number of people use mobile phones, and that data has immense economic value. Number three is scanner data, which is very important. Number four is financial or transaction data: we all swipe credit and debit cards, and that is equally valuable for economic planners. Number five is automatic identification systems — data generated from sensors, unmanned systems, and aerial vehicles. And the last one is geospatial or satellite data. These new data sources can determine how we navigate towards Africa becoming a data hub, and how we use that to grow the participation of our young people.

Afolabi Salisu:
Thank you so much for letting us into these data sources. Dr. Chidi, your 60 seconds start now.

Chidi Diugwu:
Thank you. My name is Dr. Chidi. I’m a senior researcher in the digital network and society department, and I help in upskilling women in cybersecurity. On your digital rights, let me enumerate them very quickly: the right to access your data, the right to rectification of errors, the right to erasure — to have your data deleted — the right to object to automated decision-making, and the right to data portability. There are many sectors involved — connected devices, automated systems — but these systems are governed by principles: transparency, and an explicit privacy policy stating what the data should be used for. I’m going to leave the continental picture to Dr. Jimson Olufuye. On the regulatory side, we have the NCC’s code of practice regulations of 2007, the NCC registration of telephone subscribers regulations, the NCC lawful intercept guidelines, the guidelines on management of personal data, and a host of others.

Mary Uduma:
Thank you very much. I hope we’ll be able to meet you in Abuja.

Jimson Olufuye:
Thank you very much for inviting me to be a part of this. All of this means that we are working on digitalization, and that means we need to enable APIs for effective data monetization — we know the minister of internal affairs is doing something about this. Cybersecurity is very important. We need to enable the private sector to drive compliance, execution, and management. And as we do all this, we should keep the African Continental Free Trade Area in mind, because we need to enable the private sector to unleash its potential in that big African market of more than 1.3 billion people. And finally, we should sign and ratify the Malabo Convention — talking to our senator.

Afolabi Salisu:
I’m sorry, I’m running out of time — it’s almost one o’clock. Five minutes left, so you have 30 seconds.

Mary Uduma:
First of all, I want to thank everybody for their contribution.

Adedeji Stanley Olajide:
You can see that data has a lot of versatility depending on its source. So data scrubbing and data staging are also going to be part of our lawmaking, and we hope to make a lot of progress on that.

Afolabi Salisu:
I want to thank all of you for your contributions. I want to thank all the panelists, and I must also acknowledge our team from Nigeria, including our colleagues from the Nigerian Communications Commission, and recognize Madame Internet herself, Mrs. Mary Uduma, for the wonderful job she has done. Until we meet again, keep the flag flying. This is Nigeria. Have a wonderful evening. Thank you, and bye for now.

Adedeji Stanley Olajide

Speech speed

153 words per minute

Speech length

614 words

Speech time

241 secs

Afolabi Salisu

Speech speed

178 words per minute

Speech length

1962 words

Speech time

660 secs

Bernard Ewah

Speech speed

122 words per minute

Speech length

1390 words

Speech time

684 secs

Chidi Diugwu

Speech speed

189 words per minute

Speech length

1277 words

Speech time

406 secs

Jimson Olufuye

Speech speed

170 words per minute

Speech length

1234 words

Speech time

435 secs

Kunle Olorundare

Speech speed

204 words per minute

Speech length

937 words

Speech time

276 secs

Mary Uduma

Speech speed

150 words per minute

Speech length

123 words

Speech time

49 secs

Nnenna Nwakanma

Speech speed

157 words per minute

Speech length

730 words

Speech time

279 secs

Sam George

Speech speed

262 words per minute

Speech length

897 words

Speech time

206 secs

International multistakeholder cooperation for AI standards | IGF 2023 WS #465



Full session report

Matilda Road

The AI Standards Hub is a collaboration between the Alan Turing Institute, the British Standards Institution, the National Physical Laboratory, and the UK government’s Department for Science, Innovation, and Technology. It aims to promote the responsible use of artificial intelligence (AI) and engage stakeholders in international AI standardization.

One of the key missions of the AI Standards Hub is to advance the use of responsible AI by encouraging the development and adoption of international standards. This ensures that AI systems are developed, deployed, and used in a responsible and ethical manner, fostering public trust and mitigating potential risks.

The involvement of stakeholders is crucial in the international AI standardization landscape. The AI Standards Hub empowers stakeholders and encourages their active participation in the standardization process. This ensures that the resulting standards are comprehensive, inclusive, and representative of diverse interests.

Standards are voluntary codes of best practice that companies adhere to. They assure quality, safety, environmental targets, ethical development, and promote interoperability between products. Adhering to standards helps build trust between organizations and consumers.

Standards also facilitate market access and link to other government mechanisms. Aligning with standards allows companies to enter new markets and enhance competitiveness. Interoperability ensures seamless collaboration between different systems, promoting knowledge sharing and technology transfer.

The adoption of standards provides benefits such as quality assurance, safety, and interoperability. Compliance ensures that products and services meet defined norms and requirements, instilling confidence in their reliability and performance. Interoperability allows for the exchange of information and collaboration, fostering innovation and advancements.

In conclusion, the AI Standards Hub promotes responsible AI use and engages stakeholders in international AI standardization. It fosters the development and adoption of international standards to ensure ethical AI use. Standards offer benefits like quality assurance, safety, and interoperability, building trust between organizations and consumers, enhancing market access, and linking to government mechanisms. The adoption of standards is crucial for responsible consumption, sustainable production, and industry innovation.

Ashley Casovan

Standards play a crucial role in the field of artificial intelligence (AI), ensuring consistency, reliability, and safety. However, the lack of standardisation in this area can lead to confusion and hinder the advancement of AI technologies. The complexity of the topic itself adds to the challenge of developing universally accepted standards.

To address this issue, the Canadian government has taken proactive steps by establishing the Data and AI Standards Collaborative. Led by Ashley, representing civil society, this initiative aims to comprehensively understand the implications of AI systems. One of the primary goals of the collaborative is to identify specific use cases and develop context-specific standards throughout the entire value chain of AI systems. This proactive approach not only helps in ensuring the effectiveness and ethical use of AI but also supports SDG 9: Industry, Innovation, and Infrastructure.

Within the AI ecosystem, various types of standards are required at different levels. This includes certifications and standards for both evaluating the quality management systems and ensuring product-level standards. Furthermore, there is a growing interest in understanding the individual training requirements for AI. This multifaceted approach to standards highlights the complexity and diversity within the field.

The establishment of multi-stakeholder forums is recognised as a positive step towards developing AI standards. These forums play a vital role in establishing common definitions and understanding of AI system life cycles. North American markets have embraced such initiatives, including the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (AI RMF), demonstrating their effectiveness in shaping AI standards. This collaborative effort aligns with SDG 17: Partnerships for the Goals.

Inclusion of all relevant stakeholders is seen as crucial for effective AI standards. The inclusivity of diverse perspectives is paramount for ensuring that the standards address the needs and challenges of different communities. Effective data analysis and processing within the context of AI standards necessitate inclusivity. This aligns with SDG 10: Reduced Inequalities as it promotes fairness and equal representation in the development of AI standards.

Engaging Indigenous groups and considering their perspectives is critical in developing AI system standards. Efforts are being made in Canada to include the voices of the most impacted populations. By understanding the potential harms of AI systems to these groups, measures can be taken to mitigate them. This highlights the significance of reducing inequalities (SDG 10) and fostering inclusivity.

Given the global nature of AI, collaboration on an international scale is essential. An international exercise through organisations such as the Organisation for Economic Co-operation and Development (OECD) or the Internet Governance Forum (IGF) is proposed for mapping AI standards. Collaboration between countries and regions will help avoid duplication of efforts, foster harmonisation, and promote the implementation of effective AI standards globally.

It is important to recognise that AI is not a monolithic entity but rather varies in its types of uses and associated harms. Different AI systems have different applications and potential risks. Therefore, it is crucial to engage the right stakeholders to discuss and address these specific uses and potential harms. This aligns with the importance of SDG 3: Good Health and Well-being and SDG 16: Peace, Justice, and Strong Institutions.

In conclusion, the development of AI standards is a complex and vital undertaking. The Canadian government’s Data and AI Standards Collaborative, the involvement of multi-stakeholder forums, the importance of inclusivity and engagement with Indigenous groups, and the need for international collaboration are all prominent factors in shaping effective AI standards. Recognising the diversity and potential impact of AI systems, it is essential to have comprehensive discussions and involve all relevant stakeholders to ensure the development and implementation of robust and ethical AI standards.

Audience

The analysis reveals that the creation of AI standards involves various bodies, but their acceptance by governments is not consistent. In particular, standards institutions accepted by the government are more recognized than technical community-led standards, such as those from the IETF or IEEE, which are often excluded from government policies. This highlights a discrepancy between the standards created by technical communities and those embraced by governments.

Nevertheless, the analysis suggests reaching out to the technical community for AI standards. The technical community is seen as a valuable resource for developing and refining AI standards. Furthermore, the analysis encourages the creation of a declaration or main message from the AI track at the IGF (Internet Governance Forum). This indicates the importance of consolidating the efforts of the AI track at IGF to provide a unified message and promote collaboration in the field of AI standards.

Consumer organizations are recognized as playing a critical role in the design of ethical and responsible AI standards. They represent actual user interests and can provide valuable insights and data for evidence-based standards. Additionally, consumer organizations can drive the adoption of standards by advocating for consumer-friendly solutions. The analysis also identifies the AI Standards Hub as a valuable initiative from a consumer organization’s perspective. The Hub acknowledges and welcomes consumer organizations, breaking the norm of industry dominance in standardization spaces. It also helps bridge the capacity gap by enabling consumer organizations to understand and contribute effectively to complex AI discussions.

The analysis suggests that AI standardization processes should be made accessible to consumers. Traditionally, standardization spaces have been dominated by industry experts, but involving consumers early in the process can help ensure that standards are compliant and sustainable from the start. User-friendly tools and resources can aid consumers in understanding AI and AI standards, empowering them to participate effectively in the standardization process.

Furthermore, the involvement of consumer organizations can diversify the AI standardization process. They represent a diverse range of views and interests, bringing significant diversity into the standardization process. Consumer International, as a global organization, is specifically mentioned as having the potential to facilitate this diversity in the standardization process.

In conclusion, the analysis highlights the importance of collaboration and inclusivity in the development of AI standards. It underscores the need to bridge the gap between technical community-led standards and government policies. The involvement of consumer organizations is crucial in ensuring the ethical and responsible development of AI standards. Making AI standardization processes accessible to consumers and diversifying the standardization process are essential steps towards creating inclusive and effective AI standards.

Wansi Lee

International cooperation is crucial for the standardization of AI regulation, and Singapore actively participates in this process. The country closely collaborates with other nations and engages in multilateral processes to align its AI practices and contribute to global standards. Singapore has initiated a mapping project with the National Institute of Standards and Technology (NIST) to ensure the alignment of its AI practices.

In addition, multi-stakeholder engagement is considered essential for the technical development and sharing of AI knowledge. Singapore leads in this area by creating the AI Verify Testing Framework and Toolkit, which provides comprehensive tests for fairness, explainability, and robustness of AI systems. This initiative is open-source, allowing global community contribution and engagement. The AI Verify Toolkit supports responsible AI implementation.

Adherence to AI guidelines is important, and the Singapore government plays an active role in setting guidelines for organizations. Implementing these guidelines ensures responsible AI implementation. The government also utilizes the AI Verify Testing Framework and Toolkit to validate the implementation of responsible AI requirements.

Given Singapore’s limited resources, the country strategically focuses its efforts on specific areas where it can contribute to global AI conversations. Singapore adopts existing international efforts where possible and fills gaps to make a valuable contribution. Despite being a small country, Singapore recognizes the significance of its role in standard setting and strives to make a meaningful impact.

The Singapore government actively engages with industry members to incorporate a broad perspective in AI development. Input from these companies is valued to create a comprehensive and inclusive framework for responsible AI implementation.

The establishment of the AI Verify Foundation provides a platform for all interested organizations to contribute to AI standards. The open-source platform is not limited by organization size or location, welcoming diverse perspectives. Work done on the AI Verify Foundation platform is rationalized at the national level in Singapore and supported globally through various platforms, such as OECD, GPA, or ISO.

In conclusion, Singapore recognizes the importance of international cooperation, multi-stakeholder engagement, adherence to guidelines, strategic resource management, and industry partnerships in standardizing AI regulation. The country’s active involvement in initiatives such as the AI Verify Testing Framework and Toolkit and the AI Verify Foundation demonstrates its commitment to responsible AI development and global AI conversations. The emphasis on harmonized or aligned standards by Wansi Lee further highlights the need for a unified approach to AI regulation.

Florian Ostmann

During the session, the role of AI standards in the responsible use and development of AI was thoroughly explored. The focus was placed on the importance of multi-stakeholder participation and international cooperation in developing these standards. It was recognized that standards provide a specific governance tool for ensuring the responsible adoption and implementation of AI technology.

In line with this, the UK launched the AI Standards Hub, a collaborative initiative involving the Alan Turing Institute, the British Standards Institution, and the National Physical Laboratory. The aim of this initiative is to increase awareness and participation in AI standardization efforts. The partnership is working closely with the UK government to ensure a coordinated approach and effective implementation of AI standards.

Florian Ostmann, the head of AI Governance and Regulatory Innovation at the Alan Turing Institute, stressed the significance of international cooperation and multi-stakeholder participation in making AI standards a success. He emphasized the need for a collective effort involving various stakeholders to establish effective frameworks and guidelines for AI development and use. The discussion highlighted the recognition of AI standards as a key factor in ensuring responsible AI practices.

The UK government’s commitment to AI standards was reiterated as the National AI Strategy published in September 2021 highlighted the AI Standards Hub as a key deliverable. Additionally, the AI Regulation White Paper emphasized the role of standards in implementing a context-specific, risk-based, and decentralized regulatory approach. This further demonstrates the UK government’s understanding of the importance of AI standards in governing AI technology.

The AI Standards Hub actively contributes to the field of AI standardization. It undertakes research to provide strategic direction and analysis, offers e-learning materials and in-person training events to engage stakeholders, and organizes events to gather input on AI standards. By conducting these activities, the AI Standards Hub aims to ensure a comprehensive approach to addressing the needs and requirements of AI standardization.

The discussion also highlighted the significance of considering the wider landscape of AI standards. While the AI Standards Hub focuses on standards from formally recognized standards development organizations, it was acknowledged that other organizations, such as the ITF, also contribute to the development of AI standards. This wider perspective helps in gaining a holistic understanding of AI standards and their implications in various contexts.

Florian Ostmann expressed a desire to continue the discussion on standards and AI, indicating that the session had only scratched the surface of this vast topic. He welcomed ideas for collaboration from around the world, underscoring the importance of international cooperation in shaping AI standards and governance.

In conclusion, the session emphasized the role of AI standards in the responsible use and development of AI technology. It highlighted the significance of multi-stakeholder participation, international cooperation, and the need to consider a wider landscape of AI standards. The UK’s AI Standards Hub, in collaboration with the government, is actively working towards increasing awareness and participation in AI standardization. Florian Ostmann’s insights further emphasized the importance of international collaboration and the need for ongoing discussions on AI standards and governance.

Aurelie Jacquet

The analysis examines multiple viewpoints on the significance of AI standardisation in the context of international governance. Aurelie Jacquet asserts that AI standardisation can serve as an agile tool for effective international governance, highlighting its potential benefits. On the other hand, another viewpoint stresses the indispensability of standards in regulating and ensuring the reliability of AI systems for industry purposes. Australia is cited as an active participant in shaping international AI standards since 2018, with a 2020 roadmap focusing on the 42001 standard. The adoption of AI standards by the government aligns with the NSW AI Assurance Framework, strengthening the use of standards in AI systems.

Education and awareness regarding standards emerge as important factors in promoting the understanding and implementation of AI standards. Australia has taken steps to develop education programs on standards and build tools in collaboration with CSIRO and Data61, leveraging their expertise in the field. These initiatives aim to enhance knowledge and facilitate the adoption of standards across various sectors.

Despite having a small delegation, Australia has made significant contributions to standards development and has played an influential role in shaping international mechanisms. Through collaboration with other countries, Australia strives to tailor mechanisms to accommodate delegations of different sizes. However, it is noted that limited resources and time pose challenges to participation in standards development. In this regard, Australia has received support from nonprofit organisations and their own government, which enables experts to voluntarily participate and contribute to the development of standards.

Context is highlighted as a crucial element for effective engagement in standards development. Australia’s experts have been actively involved in developing white papers that provide the necessary background and context for standards documents. This ensures that stakeholders have a comprehensive understanding of the issues at hand, fostering informed discussions and decision-making processes.

The analysis also highlights the challenges faced by SMEs in the uptake of standards. Larger organisations tend to adopt standards more readily, leaving SMEs at a disadvantage. Efforts are underway to address these challenges and make standards more accessible and fit for purpose for SMEs. This ongoing discussion aims to create a more inclusive environment for all stakeholders, regardless of their size or resources.

The significance of stakeholder inclusion is emphasised throughout the analysis. Regardless of delegation size, stakeholder engagement is seen as critical in effective standards development. Australia has actively collaborated with other countries to ensure that mechanisms and processes are tailored to their respective sizes, highlighting the importance of inclusiveness in shaping international standards.

Standards are seen as enablers of interoperability, promoting harmonisation of varied perspectives in AI regulations. Different regulatory initiatives and practices in AI are deemed beneficial, and standards play a key role in facilitating interoperability and bridging gaps between different approaches.

Moreover, the adoption of AI standards is advocated as a means to learn from international best practices. Experts from diverse backgrounds can engage in discussions, enabling nations to develop policies and grow in a responsible manner. The focus lies on using AI responsibly and scaling its application through the use of interoperability standards.

In conclusion, the analysis underscores the importance of AI standardisation in international governance. It highlights various viewpoints on the subject, including the agile nature of AI standardisation, the need for industry-informed regulation, the significance of education and awareness, the role of context, the challenges faced by SMEs, the importance of stakeholder inclusion, and the benefits of interoperability and learning from international best practices. The analysis provides valuable insights for policymakers, industry professionals, and stakeholders involved in AI standardisation and governance.

Nikita Bhangu

The UK government recognizes the importance of AI standards in the regulatory framework for AI, as highlighted in the recent AI White Paper. They emphasize the significance of standards and other tools in AI governance. Digital standards are crucial for effectively implementing the government’s AI policy.

To ensure effective standardization, the UK government has consulted stakeholders to identify challenges in the UK. This aims to provide practical tools for stakeholders to engage in the standardization ecosystem, promoting participation, collaboration, and innovation in AI standards.

The establishment of the AI Standards Hub demonstrates the UK government’s commitment to reducing barriers to AI standards. The hub, established a year ago, has made significant contributions to the understanding of AI standards in the UK. Independent evaluation acknowledges the positive impact of the hub in overcoming obstacles and promoting AI standards adoption.

The UK government plans to expand the AI Standards Hub and foster international collaborations. This growth and increased collaboration will enhance global efforts towards achieving AI standards, benefiting industries and infrastructure. Collaboration with international partners aims to create synergies between AI governance and standards.

Representation of all stakeholder groups, including small to medium businesses and civil society, is crucial in standard development organizations. However, small to medium digital technology companies and civil society face challenges in participating effectively due to resource and expertise limitations. Even the government, as a key stakeholder, lacks technical expertise and resources.

The UK government is actively working to improve representation and diversity in standard development organizations. Initiatives include developing a talent pipeline to increase diversity and collaborating with international partners and organizations such as the Internet Governance Forum’s Multistakeholder Advisory Group. Existing organizations like BSI and IEC contribute to efforts for diverse and inclusive standards development organizations.

In conclusion, the UK government recognizes the importance of AI standards in the regulatory framework for AI and actively works towards their implementation. Consultation with stakeholders, establishment of the AI Standards Hub, and efforts to increase international collaborations reduce barriers and promote a thriving standardization ecosystem. Initiatives aim to ensure representation of all stakeholder groups, fostering diversity and inclusion. These actions contribute to advancements in the field of AI and promote sustainable development across sectors.

Sonny

The AI Act introduced by the European Union aims to govern and assess AI systems, particularly high-risk ones. It sets out five principles and establishes seven essential requirements for these systems. The act underscores the need for collaboration and global standards to ensure fair and consistent AI governance. By adhering to shared standards, stakeholders can operate on a level playing field.

The AI Standards Hub is a valuable resource that promotes global cooperation. It offers a comprehensive database of AI standards and policies, accessible worldwide. The hub facilitates collaboration among stakeholders, enabling them to align efforts and work towards common goals. Additionally, it provides e-learning materials to enhance understanding of AI standards.

Moreover, the AI Standards Hub strives to promote inclusive access to AI standards and policies. It encourages stakeholders from diverse backgrounds and industries to contribute and participate in standard development and implementation. This inclusive approach ensures comprehensive and effective AI governance.

The partnership between the AI Standards Hub and international organizations, such as the OECD, further demonstrates the significance of global cooperation in this field. By leveraging expertise and resources from like-minded institutions, the hub fosters a collective effort to tackle AI-related challenges and opportunities.

In summary, the EU AI Act and the AI Standards Hub emphasize the importance of collaboration, global standards, and inclusive access to AI standards and policies. By working together, stakeholders can establish a harmonized approach to AI governance, promoting ethical and responsible use of AI technologies across industries and regions.

Session transcript

Florian Ostmann:
Good morning, everyone. I think we’re going to start. I realize it’s an early start. And thank you very much for those of you who are in the room for making it so early to this session to start today with us. My name is Florian Ostmann. I’m the head of AI Governance and Regulatory Innovation at the Alan Turing Institute, which is the UK’s National Institute for Data Science and AI. And it’s a real pleasure to welcome you to this session today, which will be dedicated to thinking about AI standardization and the role that multi-stakeholder participation and international cooperation have to play to make AI standards a success. There’s been quite a lot of discussion, of course, across many different sessions around AI over the last few days, including on AI governance in many different ways. And standards has come up in quite a few different contexts. But I don’t believe there has been a full session dedicated to standards in the sense that we will be looking at today, which is standards developed by formally recognized standards development organizations. We’ll tell you a bit more about what we mean by that in a moment. And so we’re really excited about the opportunity to dive deeper into this particular topic, into the role that standards as a specific governance tool can play to ensure the responsible use and development of AI. And to do so in particular in relation to the principles that are at the core of IGF in terms of multi-stakeholder participation and international cooperation. I’ll say a few words about the structure of the session. We will begin with a presentation about an initiative that we launched in the UK just about a year ago. That initiative is called the AI Standards Hub. Some of you may have heard about it before. It’s a partnership between the Alan Turing Institute, the British Standards Institution, and the National Physical Laboratory in the UK, working very closely with. the UK government. 
And it’s an initiative dedicated to awareness raising, capacity building, and increasing participation around AI standardization. So we’ll tell you a bit about how we set up the initiative, what the mission is, and also our plans and interest to collaborate internationally with like-minded partners around these topics. And we’ll then move on to a panel discussion. We’ve got four terrific speakers with us today from different regions of the world to join us and reflect on these themes of multi-stakeholder participation and international cooperation in AI standards. And then we’ll make sure to reserve some time at the end for your participation, your thoughts, and questions that you may have. We will be using Mentimeter later on as an interactive exercise; we’ll share the link for that when we get to it. And please do feel free throughout the session to use the chat function or the Q&A function to share any questions. We will monitor the chat and we will try our best to work any questions into the session as we move along. So with all of that said, we will start with the presentation, and for that I’m joined by two colleagues: Matilda Rhode, who is the AI and cybersecurity sector lead at the British Standards Institution, which is the UK’s national standards body, and Sundeep Bhandari, head of digital innovation at the National Physical Laboratory, which is the UK’s National Measurement Institute, or Metrology Institute. So I’ll pass over to them and then I’ll come back later.

Matilda Rhode:
Matilda, over to you. Thank you, Florian. Good morning, everyone. It’s great to see so many of you here, and thank you to those of you who are joining us online as well. So the AI Standards Hub, as Florian has already introduced, has two key missions. The first is advancing the use of responsible AI by unlocking some of the particular benefits of standards as a governance mechanism. As Florian mentioned, this week we’ve heard a lot about regulation for AI, guidelines, and frameworks, but in this session we’re specifically focusing on standards, which are distinct from these other regulatory mechanisms in the sense that standards are voluntary codes of conduct representing best practice. And the second mission of the Standards Hub is to empower stakeholders to become actively involved in the international AI standardization landscape, including participation in the development of standards and the informed use of published standards. If you’ve attended any other sessions this week on how we can look at responsible AI practices, you might find the landscape slightly overwhelming, and the AI Standards Hub can hopefully be a tool to help navigate that space. Is anyone in the room involved in the development of standards in any way? Just out of interest? No, okay, great. So there are several organizations behind the AI Standards Hub. We’ve heard again this week, and I’m sure you’ve been to other sessions on this, that the use of responsible AI calls for tracing the data that’s used in models, finding weaknesses, and making sure that models are reliable and not giving us untrustworthy results. Many of these questions are actually still open research problems, and that’s one of the reasons that the Hub brings together several organizations with different strengths. So the three that we’re here representing today that make up the Hub are the Alan Turing Institute, which is the national institute for Data Science and AI, so it’s an academic research organization.
BSI, the British Standards Institution, which is the national standards body that represents the UK at ISO, the International Organization for Standardization. And the National Physical Laboratory, NPL, which is the National Metrology, or Measurement (not weather), Institute, which produces technical measurement standards. And these feed into the overall standards themselves. And this initiative has been supported by the UK government’s Department for Science, Innovation, and Technology. So international standards are governance tools which are developed by various standards organizations, some of which we’ve listed on the slide here. And if you aren’t aware of the standards development landscape in AI, which hopefully by the end of this session you will be more informed on, you might have come across some of the most famous ISO standards, such as the 27001 series on cybersecurity and 9001 on quality management. And there is now a rapidly growing landscape of standards for AI. So we’re anticipating the first ISO standard on AI to be published at the end of this year or perhaps the beginning of next year. And there are many others in development, including on sustainable AI, mitigation of bias in AI, and a very interesting standard to be published, hopefully next spring or summer, 42006 on audit practices for AI, which will also be very relevant for compliance with the EU AI Act. So why standardization for AI versus, for example, regulation or frameworks? Regulation is obviously something that’s supported by a legal framework, and organizations are required to comply, whereas standards are voluntary codes of best practice. But why would companies bother to adhere to these voluntary codes? Well, as you might have heard from some of the large organizations developing AI models this week, they’ve been developing their own internal codes of best practice, but each one of these is slightly different.
If we can develop a standardized way of doing this, we can provide quality and safety assurance, and build in other goals like environmental targets or the UN Sustainable Development Goals. Standards can be used for ethical development, knowledge and technology transfer, and to provide interoperability between products. Ultimately, standards can help build trust between organizations and their consumers, and also along the supply chain, both in the supply chain that an organization is feeding into and the organizations that are feeding into your own supply chain. They can also provide market access by helping organizations comply with certain trade requirements. They link into other government mechanisms, and can also be used as a kind of pathway towards regulation, as they are indeed in certain sectors, particularly for things like medical devices, for example. So, given the response in the room on standards development, I hope it is relevant information that standards are voluntary for organizations to comply with. They’re developed by committees, so they’re not developed by standards bodies, and unlike regulation, which is developed by regulators, they’re developed by experts in the area, who are volunteers on a standards committee, and they’re also developed by consensus, two-thirds consensus, in case you’re interested. There are also quite a lot of standards: roughly 3,000 standards are produced every year by BSI alone, and again, I hope that the AI Standards Hub will be a useful tool for those of you who are looking to navigate this space with regards to AI. So not just the horizontal AI standards, which are general standards related to AI, but also the ones that are sector-specific, because we have specific requirements in certain sectors. Because it’s early in the morning, I thought it would be fun to do a quiz. I wondered if these things on the board mean anything to anyone in the room. Don’t be shy. Okay, good.
Yeah. This again is a kind of indicator of the fact that there can be quite an overwhelming number of acronyms and numbers in the standards landscape, which, once you become familiar with using them, you find yourself using all the time, but which can make it quite impenetrable in the first place. So 42001 is the standard that we’re expecting to be published at the end of the year. It’s currently at FDIS stage, which is final draft international standard; it means it’s only out for editorial comments. And as long as there aren’t too many of them, we’re expecting it to be published in December or January. So this will be the first international standard published on AI, and it’s an AI management system standard. I already mentioned 42006 on audit, which we expect next spring. JTC1 is joint technical committee one, which is the parent committee of subcommittee 42, the committee that actually developed 42001. I can keep going on with these numbers. And then, just showing how this maps down to the national level, ART/1 is the relevant AI standards development committee within BSI. And in case you’re interested, on the ISO website there’s a lot of information about how many and which countries are involved in each committee in the development of the standards, so you can dig into that data. And with that, I’m going to hand back to Florian to tell you more about the hub.

Florian Ostmann:
Thank you, Matilda. So with that set out in terms of what kinds of standards we are focused on and why we think those standards are important, let me tell you a bit more about the relationship between the Standards Hub and the UK’s policy thinking on AI, and then go into more detail on how we developed our strategy and the kinds of challenges that we’re trying to address with the hub. So, in terms of the policy context, Nikita, who’s joining us on the panel discussion, will go into more detail later on. The main thing to mention is that the UK government has, over the last few years, gone through a process of thinking about the regulation of AI, but also, more broadly, the regulation of digital technologies in general. And throughout different pieces of policy work, policy papers, and policy statements, there’s been a recognition of the role of standards as a governance tool, for the reasons that Matilda mentioned: the way in which standards are developed, the fact that they are open to input from all relevant stakeholder groups, and the fact that they can be useful to support regulation in various ways, or also to fill regulatory gaps where regulation doesn’t exist. So Nikita will say more about this, but essentially the hub is a deliverable that was highlighted in the National AI Strategy that the UK government published in September 2021, and it also plays an important role in the context of the AI Regulation White Paper, which was published about half a year ago, in March this year. Now, the AI Regulation White Paper, at a very high level, sets out a context-specific, risk-based, and decentralized regulatory approach. What that means in practice is that it’s based on the view that existing regulatory bodies are best placed to think about the implications of AI in the relevant regulatory remits.
And in order to encourage and enable regulators to think about the implications of AI, the White Paper sets out five principles. The principles will be fairly familiar to anyone familiar with AI governance; they resonate very closely with the OECD AI principles, for example. The White Paper sets out these five principles and then essentially puts the task to regulators to think about the implementation of these principles in their remits. So, in a sense, there’s an important link between the objectives of the regulatory approach and the role of standards, in the sense that standards are seen as facilitating the implementation of the principles, providing the detail that is needed to make those principles meaningful in a given context, in a given regulatory remit. Let me now turn to the stakeholders that we’re trying to address with the hub. As Matilda mentioned, standards in the organizations that we are focused on are developed through a process that is open to all stakeholder groups, and we know that in the AI space there are lots of different stakeholder groups whose interests are affected or whose views are relevant to the development of standards. That includes, of course, different actors across industry, but also participants outside of industry, including, importantly, civil society and consumer perspectives, and it also includes regulators and academic researchers. And while the standards development process is open to all of those groups, we know historically that not all of these groups are equally strongly represented in those processes.
And so to give some examples, civil society voices we know are less strongly represented compared to other voices, compared to industry, for example. And within industry, SMEs and startups, for example, are less strongly represented compared to larger companies. So at the core of the mission of the AI Standards Hub and the reason for setting it up isn’t just the recognition of the importance of AI standards, but it’s also the recognition that in order for AI standards to be effective and fit for purpose, it’s really important that all of those stakeholder perspectives are included in the development of standards. And what we’re trying to do with the Hub is to help all of those stakeholder groups, and especially those who have less experience in the space, to develop the knowledge, develop the skills and the understanding, and also perhaps the coordination that’s needed to achieve that involvement. In terms of what sort of the key groups are, I think I mentioned them already. So in the private sector, it includes larger and smaller companies, includes civil society, consumer groups, regulatory bodies, academia, and then, of course, there are people who are already actively involved in standards committees. Those are also key because they can, of course, play an important role in guiding others and sharing information about that work. We did a fair amount of stakeholder engagement leading up to the launch of the Hub. So we were very mindful of making sure that we develop an initiative and develop a shape for the initiative that meets actual needs, rather than just developing something in the abstract for which there isn’t a need. And so we did several engagement roundtables and surveys. 
with each relevant stakeholder group. One of the things we tested at the outset, of course, was: is there a recognition of the importance of standards, and what is the current level of awareness and engagement in the space? As this slide shows (there is more detailed data, but just at a high level to give you a sense), across all groups there was a strong recognition that standards are going to be key for AI governance, and there is significant thinking in each group about AI standardization. But there is a clear gap, as you can see, between the perceived importance of the topic and the extent of current thinking, and that awareness gap, and to some extent also capability gap, in thinking about standards is what we’re trying to address. We then tried to dig a bit deeper and explored with stakeholder groups what the challenges are: what’s holding you back, what explains that gap in engaging with AI standardization? At a high level, there were four key areas that came out of that part of the engagement. The first one is a perceived lack of easily accessible information around AI standards. That includes keeping track of which standards are being developed and published, but also identifying those standards that are most relevant to a given user or stakeholder. Secondly, the skills needed to contribute to standards development or engage with standards once they are published. There’s a strong sense that the process of developing standards can be quite complex and you need knowledge and skills to navigate it, but then of course also knowledge about what best practice for AI looks like: what does a good standard look like, and what should I be contributing if I am on a committee drafting a standard? So skills are the second area.
Thirdly, securing organizational buy-in for engagement. So we know that engagement with standards development can be time-consuming. How do I convince my organization that that’s a worthwhile thing to do, given that there are competing resource priorities? And that’s relevant, of course, especially for those types of organizations who are historically less involved in this space. And then, fourthly, a need for analysis and strategic direction. Given the fact, and I’ll say more about this in a moment, that there is such a vast number of AI-related standards already being developed, which are the areas that are most important? Are there gaps that need to be addressed, standards that are missing? So there is a need for strategic direction in shaping AI standardization. Those were the challenges. And we then, in shaping the strategy, essentially translated those challenges into four different pillars of activity that the hub is pursuing. The first pillar is what we’re calling the observatory. That can be found on our website, and it consists of two databases: one is a database on AI standards, and the other is a database on AI-related policies from around the world. Community collaboration is around organizing events, virtual and in-person, to engage the community and bring stakeholders together around conversations, to gather input into standards that are under development, to identify priorities and needs, and so on. Knowledge and training is where we’ve developed a suite of e-learning materials that can be found on our website, but we’re also offering training events, virtual ones and in-person ones. And then, fourthly, research and analysis. That’s a more traditional research function, where the hub pursues research to develop insights to address these needs for strategic direction and analysis.
I would like to say just a bit more about each pillar, and in particular the observatory, and within the observatory the AI standards database, because that’s in a way the resource that took the most thinking in terms of how we developed it and how it should be designed. So the observatory for AI standards is a database on our website that tracks both standards under development and standards that have already been published for AI. You can see a breakdown on the slide of how these standards are distributed across different categories. The key thing here is that it’s really a large number already, over 300 relevant standards captured in the database, including a large number of standards that are already published, and what was key in designing the database is to make it easier to navigate that vast number of standards. So we’ve developed a range of different filter categories, a search function, and so on. We also have interactive features: it’s possible to follow a standard, in which case you get updates when the standard moves from one development phase to the next, for example. You can let other community members know if you have been involved in the development of standards, so they can reach out to you and try to find out more, and then there is a discussion thread and also the opportunity to leave reviews for a standard that you may have used or that you may have been involved with. In terms of the other pillars, I’ll keep this very brief, but you will find more information on all of this on our website. So on the community collaboration pillar, over the last 12 months we had a series of events. Those are to a large extent recorded, and you will be able to find recordings on our website.
There was targeted engagement with certification bodies, and then we also have a standing forum for UK regulators, where regulators have a space to come together among themselves as a single stakeholder group to exchange knowledge around the role that standards can play in AI regulation. For knowledge and training, as I mentioned, that includes various e-learning materials; there’s a snapshot of some examples on this slide. If you’re interested, we’d like to invite you to take a look at that on the website, and the same is the case for research and analysis. This is just a snapshot of some of the most recent pieces, but you’ll be able to find more of that, and more details, on the website. That concludes the summary of what we have been up to so far and why we set up the AI Standards Hub, and I’ll now pass on to Sonny to tell us more about our objectives and our interest in collaborating internationally.
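The filter-and-follow workflow Florian describes for the observatory can be sketched in a few lines. All names, the record shape, and the example entries below are hypothetical illustrations of the idea, not the observatory’s actual schema or data:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class StandardRecord:
    # Hypothetical record shape for an observatory entry.
    name: str
    category: str                 # e.g. "trustworthiness", "terminology"
    stage: str                    # e.g. "under development", "published"
    followers: list[str] = field(default_factory=list)

    def advance(self, new_stage: str) -> list[str]:
        """Move the standard to a new development phase and return the
        update messages that would go to everyone following it."""
        self.stage = new_stage
        return [f"{user}: '{self.name}' is now '{new_stage}'" for user in self.followers]

def filter_standards(db: list[StandardRecord], *, category: str | None = None,
                     stage: str | None = None) -> list[StandardRecord]:
    """Apply the kind of filter categories the observatory exposes."""
    return [s for s in db
            if (category is None or s.category == category)
            and (stage is None or s.stage == stage)]

# Illustrative entries only
db = [
    StandardRecord("Standard A", "trustworthiness", "published"),
    StandardRecord("Standard B", "terminology", "under development"),
]
db[1].followers.append("alice")
print([s.name for s in filter_standards(db, stage="published")])  # ['Standard A']
print(db[1].advance("published"))  # ["alice: 'Standard B' is now 'published'"]
```

The point of the sketch is the combination: free-form filtering over a large catalogue, plus a subscription hook that fires whenever a tracked standard changes development phase.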

Sonny:
Brilliant, thank you, Florian, and good morning to everybody in the room, and good afternoon and good evening to those online as well. My name is Sonny and I’m from the National Physical Laboratory, part of this collaboration that we’ve set up, and I’ll talk a bit more about what the collaboration is, what our aspirations are, and why we have those objectives and aspirations. We’ve heard a lot over the last few days about the growing need for standards to help with governance and with assessment, and heard about many different challenges. On the screen you can see several different initiatives and the development of policies and strategies, but just yesterday I heard about some work going on in Africa, where across the continent there are at least 25 initiatives and around 466 different policies in development. So we’ve got this environment out there in the world where lots of countries and lots of regional organizations are working to do all of this work, and we’ve really seen that the world recognizes the importance of standards. If we draw out just one of those examples, the recently published EU AI Act, we can see the support for the development and creation of, and conformity to, standards. To do that, most nations have something called a quality infrastructure, which tends to be built up of a few different organizations: organizations such as my own, which handles technical measurement standards; the national standards body, such as the British Standards Institution; and other organizations that check conformity and compliance, as well as accredit organizations. Our hub is an example of how bringing these together can be a valuable exercise in itself, because it helps with a diverse set of skills and capacity building, as well as looking at the entire ecosystem, the whole value chain, together at the same time. 
But how do we lift that from a national paradigm to an international one? These standards have to be worked on by consensus, we all have shared challenges, and globally we’re all at various stages of our domestic journeys. So how do we bring everyone around the world to the same level and work on our shared global challenges, to truly realize the benefits of AI, as well as give us as the public the confidence to trust this technology and really benefit from it? Here on the screen you see some of the role of standards within the EU AI Act, where they’ve taken five principles and set out seven essential requirements for high-risk systems. The European Commission has requested CEN-CENELEC to develop standards around 10 issues, to harmonize these standards, which then create a presumption of legal conformity. Now, as I said, these standards are generally voluntary, so how can we work on them together such that everyone is on that same level platform? We really are trying to do this, and on the screen you see three small examples of some of the things that we have in train. In addition, we have had much international interest, and it has been really pleasing to see the outreach we’ve had from north, south, east, and west. We’re partners with the OECD, and we cross-reference with their tools and metrics for trustworthy AI, and they also cross-reference with the hub. We’re also doing a lot of work with NIST and other like-minded organizations. To put that into a bit of context, NIST is the American equivalent of NPL, and there are around 100 such organizations around the world that are signatories to the 1875 Metre Convention. So there are already certain platforms for doing this work. Now, assessment, for example, is expressly a measurement activity. How can we all understand and make those measurements? 
How would you actually measure the trustworthiness of something quantitatively, also appreciating that AI is very context-specific, so there is now a new paradigm where we also have to think about qualitative assessment and measurement? Another example here is some of the work we’re doing with other national standards bodies around the world; in this case we’ve pulled out the bilateral work going on with Canada at the moment, and again, it’s not limited to just Canada, we’re working with many other countries. Next slide, please, Florian. So, broadly, these are the kinds of things that people are asking us to think about and do. How do we build these international stakeholder networks? There is a big challenge out there in the world in that every region is lacking the skills, the resources, the people, and the knowledge in these things, so how do we bring the right people together to share and to address these shared challenges? As talked about several times already, it’s about bringing the national standards bodies together with the national measurement institutes, bringing the right academic prowess into the room, and, most importantly, being clear about why we are doing this and who we are doing it for. We’ve been asked to help and work with others on clarity, on collaborative research, and also on developing shared resources, lifting up from the national paradigm to the international paradigm. Florian has already shown some screenshots of the platform, and what I’d like to finish this part with is that this is not just a UK resource. Anybody can access it, so please come have a look, and if there’s anything there of interest and you would like to work on shared challenges, then please get in touch. Thank you.

Florian Ostmann:
Great. Thank you, Sonny. That brings us to the end of the presentation part of our session. As I mentioned earlier, we have an interactive exercise that we’ve prepared and that we’d like to come back to towards the end. We’ll do that using Mentimeter, so before we move on to the panel discussion, I’d like to invite you all, both those of you in the room and those of you joining online, to take a moment to go to Mentimeter and, in your own time, complete the questions that come up on your screen. It’s not a big exercise, so don’t worry, and it’s completely anonymous, but I think there will be some interesting results that we can look at when we get to the discussion later on. To get to Mentimeter, you can either go to menti.com and enter this code, my colleague Anna will also put the Mentimeter link into the chat so you can just click there, or you can try to scan the QR code if that works for you. We’ll just take a moment; I’ll leave the slide on for a short while, and the link is now in the chat as well, before we move on. Great, I think I’ll stop sharing the slide, but the link for Mentimeter is in the chat, so I hope everyone will be able to access it there. Let me move on to introducing our panel. As I mentioned, we’re very excited to be joined today by a great panel of experts with a vast amount of experience in the AI standards space from across different regions of the world, and also Nikita Bhangu, our colleague from the UK government, who will tell us a bit more about the context in the UK policy field. I’ll stop sharing the slide, and I’d like to invite our panellists to turn on their cameras. Fantastic. Nikita is joining us here on the stage, so it’d be great if you could move the camera such that we are both visible; Nikita is sitting to the right of me. I’ll briefly introduce our panellists, starting with Nikita. 
Nikita Bhangu is the Head of Digital Standards Policy in the UK government’s Department for Science, Innovation and Technology. She works in the department’s Digital Standards team, which brings together the UK government’s global engagement with key internet governance and digital standards bodies, and she works on the Digital Standards Policy portfolio, which includes standards policy on new and emerging technologies such as AI, as well as other areas such as quantum technology. So welcome, Nikita, and thank you for joining us. Next on the panel is Ashley Casovan, the Executive Director of the Responsible AI Institute, a multi-stakeholder non-profit dedicated to mitigating harm and unintended consequences of AI systems. Ashley has been at the forefront of building tools and policy interventions to support the responsible use and adoption of AI and other technologies. She’ll tell us more about that, including her and the Institute’s important work on certification. Previously, Ashley led the development of the first major AI-related government policy instrument in Canada, the Directive on Automated Decision-Making. Welcome, Ashley, and thank you for taking the time to join us. Wansi Lee is the Director of Data-Driven Tech at Singapore’s Infocomm Media Development Authority. In the area of AI, Wansi’s responsibilities include driving Singapore’s approach to AI governance, growing the trustworthy AI ecosystem in Singapore, and collaborating with governments around the world to further the development of responsible AI. She is also responsible for encouraging greater use of emerging data technologies, such as privacy-enhancing tech, to enable more trusted data sharing in Singapore. Welcome, Wansi, and thank you for joining. And then, last but not least, we have Aurelie Jacquet, an independent consultant advising ASX 20 companies on the responsible implementation of AI. 
Aurelie also works as a principal research consultant at CSIRO’s Data61, which is part of Australia’s national science agency, and she leads global initiatives on the implementation of responsible AI in various areas. One piece that’s particularly worth highlighting, and which we’ll hear more about, is Aurelie’s role in chairing Australia’s national committee for AI standardization, which represents Australian views within ISO and the development of AI standards there. Welcome, Aurelie, and thank you. Great, so with those introductions done, let’s move on to the first round of questions, and I would like to start with you, Nikita, from a UK perspective. I mentioned earlier, at a very high level, what the relationship between the hub and the wider policy thinking in government has been and is, but it’d be great to hear from you a bit more about how policy thinking in DSIT relates to the hub. What are the ideas that led to the creation of the hub, and why does the UK government think that this is an important initiative?

Nikita Bhangu:
Sure, thank you, Florian, and good morning to all of those in the room, and good afternoon and evening to those online as well. As Florian mentioned, I’m Nikita Bhangu, and I’m the UK government representative on the panel today. Florian, Mathilde and Sonny provided a great overview of what the AI Standards Hub does. To provide a bit more context from the UK government perspective and our policy thinking, I’ll run you through our approach to standards and how we’ve embedded it into our AI policy and governance. To start with, the UK government sees many benefits in AI standards and in engaging in the standardization landscape. In our recent AI White Paper, which sets out our approach to regulating AI more broadly, we noted the important role that standards and other tools, such as assurance techniques, can play within the wider AI governance framework, and how they can help implement parts of the UK government’s approach to AI policy. The paper recognizes that digital standards are not an end in themselves; they are a means of making the technology work, and it is really important to consider the wider toolkit that we have within our regulation and governance approach to AI as well. Under the UK presidency of the G7, we also looked into digital standards with our G7 and like-minded colleagues, and set up the collaboration on digital technical standards, noting the importance of working together in this space and recognizing the benefits that standards have within the wider AI policy and regulatory framework. 
Having said that, in terms of the benefits of AI standards, we also recognize that it is a very complex space. From speaking to our stakeholders and through our research and collaboration with international partners, we recognize that there are many barriers to participating in the AI standardization ecosystem. So, as the UK government, we were really keen to work with our stakeholders and our international partners to reduce these barriers, so that standards can be for all: from knowing what standards are and how to adopt them into your business, to encouraging a multi-stakeholder, global approach to developing standards and providing all groups with the opportunities and toolkits they need to participate in this ecosystem. You will have heard a lot at the IGF today about the importance of collaboration and a multi-stakeholder approach to digital technologies; it’s exactly the same for standards, which is quite difficult to do, because, as I mentioned, it is quite complex, and many of the people who develop standards have been playing in that game for many years. So there is a need to support our stakeholders, to help get them into those organizations and really understand what standards are. Through consulting with our stakeholders, we identified the key challenges in the UK, which Florian went through in the presentation just now, and we thought about how we could intervene in that market to support our stakeholders in reducing those barriers, to enable the benefits of AI standards to seep through. 
Some of our key aims in setting up the AI Standards Hub were to increase the adoption and awareness of standards; to create clear synergies between AI governance and standards, hence our work with the AI White Paper and setting out the role of AI standards as a tool for trustworthy AI; and also to provide practical tools for stakeholders to understand and engage in the standards ecosystem. That really was our thinking behind setting up the AI Standards Hub and working with our key experts in the field, bringing together parts of the UK national quality infrastructure, the British Standards Institution, the National Physical Laboratory, and our national AI institute, to bring the minds together so that we can reach a wide user base in the UK and beyond and help facilitate the reduction of the barriers we’ve seen in this space. The AI Standards Hub has been running for a year now; I think next week is the first birthday of the AI Standards Hub, which is great, and we’ve seen lots of success in this space over the past year. We’re looking to increase our international collaboration with the AI Standards Hub in the coming years, and I’m really keen to follow up on this conference and participate with you more in that space as well. The last thing I would note is that the UK government commissioned an independent evaluation of the pilot phase of the hub, the first six months, to understand what has worked well and how we can continue growing. We will be publishing that evaluation on our gov.uk website, so it will be accessible for all to look at. But some early findings really indicate that the hub has helped support the UK community in understanding what AI standards are. 
We conducted a survey and found that 70% of respondents noted that the hub is really helping to close that knowledge gap, and inspiring and motivating them to get more involved in the standards development organizations, which is great to see. I’ll stop there.

Florian Ostmann:
Great, thank you very much, Nikita, for adding that context, and it’s exciting that we’re approaching the one-year anniversary; thanks for mentioning that. We’ll now move on to the international perspectives. I didn’t make it explicit earlier, but the great thing about the panel is that we’ll have perspectives from Canada and the US, which is Ashley’s focus, and then Wansi from Singapore, and Aurelie’s experience in Australia. It’ll be great to hear how some of the themes that we shared resonate with your experiences in those countries. As a first round, I’d essentially like to ask each of you roughly the same question. You have, of course, heard about the AI Standards Hub previously; how does what we’ve presented so far, the challenges we’re trying to address and the kind of initiative we’ve built, resonate with what you see in terms of AI standardization priorities and challenges in your countries? I know that in some cases there are initiatives that are quite similar, or at least comparable and overlapping in nature. Perhaps we can start with you, Ashley. One such initiative is the Data and AI Standards Collaborative that you are heavily involved in, so it’d be great to hear a bit more about that, and also, more generally, your reflections on this space.

Ashley Casovan:
Yes, thank you so much, and thanks for having us here to present about the work that we’re doing, and also for establishing this really important conversation on AI standards. As you’ve mentioned, it’s becoming a more important discussion, or at least one that more people are reflecting on, given the connection to different types of regulations. However, it still seems to be a very confusing topic, because standards can mean so many different things. Ironically, standards are not standard, and so there are a lot of different entry points into that conversation, and understanding why standards are being established, and for what purpose, is something that we’re trying to reflect on in the Canadian context. As Florian mentioned, I am heavily involved in an initiative established by the Canadian government called the Data and AI Standards Collaborative, and I am the co-chair of that, representing civil society. In this capacity, we’ve been trying to understand the implications of AI systems and the data that feeds into them, and to bring together civil society, academia, and government agencies to reflect on what types of standards are needed, really similar in nature to what you’ve already heard from the Standards Hub. One of the things that we’re quite interested in doing as part of this initiative is trying to identify different types of specific use cases, again aligned to the pillars that Florian presented on previously, and understanding the context-specific standards that are required within the whole value chain of an AI system. 
And what I’ll say in addition to that, since I’m here to represent the North American piece, though I do not speak on behalf of NIST, is that, because it was mentioned earlier, we’re starting to see a lot of uptake in the North American market of tools related to some of the work happening in these national government activities. Florian earlier spoke about NIST’s AI RMF, the risk management framework. And what we’re starting to see through this initiative, the OECD, et cetera, is work through these multi-stakeholder forums to establish good baseline initiatives for standards to be developed from. That could be things like even just: what does the life cycle of an AI system look like? What are the different types of definitions we should be using for these systems, and can we have some commonality amongst those? Because what we’d then like to get into more deeply, from the Canadian Data and AI Standards Collaborative perspective, is, as I said, those use cases: understanding what types of certifications, standards, and mechanisms are required for the evaluation of a quality management system, which Nikita spoke to earlier and which we’re seeing with ISO/IEC 42001; then what is needed at a product level, which is work that we’re doing at the Responsible AI Institute, which I’m sure I’ll speak to later; and then looking also to individual certification, which is something that Aurelie, I’m sure, will address, as it’s something she’s been quite interested in for a while, in terms of what individual training looks like. So when I mention these different types of standards that are needed, there’s really a breadth that we need to look at. I’ll leave it there, and I’m just really happy to be here and have this discussion at an international forum like this.

Florian Ostmann:
Great, thank you very much, Ashley, and we’ll come back to some of those points later. Moving on to you, Wansi, a similar question for you: how do the points around the importance of standards, but also the challenges, resonate with your work and your experience in Singapore? And I believe there’s an initiative that’s quite relevant from your perspective, the AI Verify initiative. It’d be great to hear a bit more about that, and then your views on standardization more broadly.

Wansi Lee:
Thanks, Florian. Hi everyone, I’m Wansi from Singapore; I’m from the Singapore government. Thanks for having me on the panel this morning. It’s really interesting to be able to talk about standards with like-minded folks from around the world. One of the things that we recognize as very important for us in Singapore is the need for international cooperation, so that really resonated when Sonny talked about it just now. International cooperation can be done in various ways. Of course, Singapore is an active member in the ISO process, so we participate, we contribute, we vote, and so on. But at the same time, we don’t limit ourselves to the ISO level of cooperation: we also work quite closely with individual countries, and we participate actively in multilateral processes. Maybe just as an example, since NIST was brought up in the earlier presentation: the NIST AI RMF, coming from the U.S., is something many organizations are looking at, and what’s important for us is how our own work in Singapore maps to, or works together with, what NIST has already published. So we very actively started a mapping project with NIST. We developed a crosswalk, where we looked at what we’ve done in Singapore in terms of our guidelines for AI Verify and the Model AI Governance Framework that we published a couple of years ago, and then we did a mapping exercise to see where we are aligned and where we’re different. We’ve gone to some level of detail, and even at that level of detail there are many similarities and quite a lot of alignment. 
We find that this work is very helpful for organizations or companies operating internationally, because they want to make sure that what they’re doing in terms of implementing the right practices for responsible AI is aligned both to Singapore’s requirements and to some of the standards work happening in the U.S. That’s why we started that process with NIST. And extending that, we’re looking at other standards being developed through ISO and CEN-CENELEC and so on, to see how we can align as well. So that’s one example of how we can cooperate internationally and how we can make sure that there’s at least some kind of alignment or interoperability amongst the guidelines and standards that have been developed. The other area that resonated is the need for multi-stakeholder engagement. Of course, there are platforms to do that: ISO is one platform, and our own Singapore Standards work is another. But I thought, as Florian mentioned, I’d highlight one of the things we’re doing that’s a little bit different, just to show that there are many alternatives out there. Besides the guidelines and requirements that the Singapore government sets out, we also wanted to make sure that organizations are able to demonstrate adherence or compliance to some of these guidelines. So we developed the AI Verify testing framework and toolkit: a set of detailed requirements for how to validate responsible AI practices, or the implementation of responsible AI requirements, when organizations implement AI systems. It includes quite a lot of detailed process checks, aligned again to international principles; we looked at requirements from around the world, we looked at principles from the OECD and so on, and then we turned that into a set of testing requirements. At the same time, we also identified how to test, right? 
It’s not just about process checks, but also how we actually test the system. So we developed a toolkit, drawing on some of the work already done by academics around the world, as well as some of the work done by companies, to test for fairness, explainability, and robustness, because those are things we think we can test at this stage. But we also recognize that testing capability continues to evolve and there are many gaps; people around the world are working on different aspects. That’s why we decided to open-source the AI Verify testing framework and toolkit, firstly so that people could contribute, but we also created an open-source foundation to support the contribution and engagement of organizations, developers, and individuals around the world in building up the AI Verify toolkit and framework. Even as we look at generative AI, for example, that’s something AI Verify will need to be extended to cover, and that’s why we feel it’s important to work with the global community. The open-source foundation is one way in which you can get multi-stakeholder involvement in technical development, as well as sharing of knowledge and experiences in the space of AI governance testing. So that’s one slightly different take on a multi-stakeholder engagement approach. Thanks, I’ll just pause here for now.
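To give a concrete sense of the kind of quantitative fairness check such a toolkit can run, here is a minimal demographic parity calculation. This is a generic, widely used fairness metric shown for illustration only; it is not AI Verify’s actual implementation, which is open source and far more extensive:

```python
def demographic_parity_gap(predictions, groups):
    """Largest gap in favourable-outcome rates across groups.
    predictions: 1 = favourable model outcome, 0 = unfavourable.
    groups: a group label per record (e.g. a protected attribute).
    A generic illustrative metric, not AI Verify's own code."""
    rates = {}
    for g in set(groups):
        group_preds = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(group_preds) / len(group_preds)
    # A perfectly "fair" model under this metric would score 0.0.
    return max(rates.values()) - min(rates.values())

# Usage: eight applicants, four per group
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.5 (A: 0.75 vs B: 0.25)
```

A real toolkit would report many such metrics side by side, since, as Wansi notes, AI is context-specific and no single number captures fairness on its own.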

Florian Ostmann:
Thanks, that’s great. Thank you, Wansi. It would be great to come back in the next round and go into a bit more detail, both on priorities for international collaboration and on multi-stakeholder involvement. But before going into more depth, let’s move on to you, Aurelie, for your general take on this topic. You, of course, have a lot of hands-on experience in relation to standards, probably the most hands-on experience on this panel, given your role as the Australian committee chair. So it would simply be great to hear, from your vantage point, your take on AI standardisation, in terms of its importance, the challenges, and the international cooperation and multi-stakeholder angles.

Aurelie Jacquet:
Thank you, Florian, and again, delighted to be here in this forum talking about standards and certification; this is my favourite topic. To your point, I’d like to go back in time and recall that back in 2017 there were already a few academic papers published on standards, explaining how they can be used as an agile tool for international governance. And now that the standards are mature and we see a lot more published, there’s increased interest. From my perspective, I actually led Australia’s active participation in the standards. My motivation was this: I come from global markets, financial services, and I saw the mini-crash that had happened and the onset of regulation that came after the GFC. At the time, from a compliance perspective, the thought was that we really need a set of best practices that we can provide to industry, in order to ensure that the onset of regulation is industry-informed. That was a strong motivation for us to make the submission for Australia to actively participate in and shape the international standards on AI at ISO. That was our entry into that world back in 2018. And as I said, the core business case is that Australia is a small country, and it really needs to actively participate in a topic, and in the development of best practice for AI, that effectively has an international remit. We already had a roadmap in 2020 for AI standards that focused on 42001; you’ll hear that number a lot from me, that’s the AI management system standard, and it’s what we described as the crown of the AI standards journey, because it provides for the certification of AI systems. So this was one key part of our roadmap. 
Obviously, as part of the work that I do with CSIRO’s Data61 and the National AI Centre in Australia, one challenge is that standards are embedded everywhere in our lives, but they’re not visible, and often organisations are not aware of them. So in Australia, through the National AI Centre and through the Responsible AI Network, which is a community of best practice with a community of experts and has seven pillars, including standards, we started to develop education programs that cover best practice, including AI standards. The initial courses we developed were on what standards are, how they’re part of our daily life, and how they’re relevant for AI, and of course on the AI management system standard, which is likely to become the standard that enables audits of AI systems. With CSIRO’s Data61, we’re also building a set of tools that leverage the standards work. And in terms of day-to-day adoption of standards in Australia, we’ve already got the NSW AI Assurance Framework, which leverages standards to provide assurance for AI systems used by government. This has been made mandatory for all public services in New South Wales: if they’re using AI, they have to go through that AI assurance framework. And from a business perspective, we see increased appeal in standards that now have over 60 countries involved in developing them, and in which CEN-CENELEC and the EU Commission have been interested from the beginning; we had the EU Commission coming to our ISO meetings from 2018 onwards. We see this in our governments already: even at the federal level, guidance about ChatGPT and generative AI was provided that referenced some of our standards on bias and others. 
So there’s been good uptake from that perspective in Australia. I’ll finish with the international initiatives we have started. With Standards Australia, we developed a workshop that we delivered at the APEC SOM, explaining how AI standards can help scale AI and what the benefits are for organisations in different states. But to your point, Florian, there’s still the challenge of making standards better known, so that they’re much more visible and participation increases. Really, though, standards have so far proven, as I said from the beginning, to be a very good agile tool for international governance.

Florian Ostmann:
Great. Thank you very much for that overview, Aurelie. I think that was a really good segue to the next round of questions. We ended on the challenges and the work that remains to be done, and I’d like to do two rounds, one on each of the topics for the session: first on multi-stakeholder engagement and then on international collaboration. Let’s start with multi-stakeholder involvement. Of course, compared to other governance mechanisms, standardization is already a very inclusive mechanism; it contrasts with regulatory rulemaking, for example, in that the process is open to all stakeholder groups in principle. But we are aware, as we mentioned at the beginning, that not all groups are equally represented. So it would be great to hear from all of you, and we’ll start with Nikita: what do you see as the main challenges and obstacles to achieving equitable involvement from all groups? And what are the most promising strategies for addressing them, including what can be done collaboratively at the international level to ensure and increase stakeholder inclusion?

Nikita Bhangu:
Sure, thanks, Florian. So I’ll start with some of the main challenges. We covered previously what the UK sees as some of the challenges, but just to point to a few key ones. For the UK in particular, it’s ensuring we have the right representation at the relevant standards development organizations. We’re seeing quite a few large companies representing industry at standards development organizations, which is of course great in terms of providing that view. However, most of the UK’s technology companies are small to medium enterprises, which often don’t have a large regulatory team or a standards expert with the skill set needed to engage effectively in a standards development organization. We recognize that as a key challenge for our small to medium enterprises. Another key stakeholder group is civil society, for which it has always been quite challenging to get the resourcing and expertise into standards development organizations. Florian just made the key point that standards are for everyone, and standards at the outset provide the building blocks for how technology will be developed, so it’s crucial that all stakeholder groups are kept in mind when developing them. Another key challenge, particularly for the UK government, is that government itself is a key stakeholder, and getting that expertise into standards development organizations is difficult. We have a very small technical team within our digital department back in London, whose resources can only stretch so far. So getting those viewpoints and that coordination across constrained resourcing is another challenge. One thing we’re doing in the UK government at the moment is thinking about the talent pipeline as well.
We’re trying to increase diversity now but also in the future, working with standards development organizations and other international partners to create what you could call the next generation of standards developers. There’s a lot of work going on in this space already: BSI, our national standards body, do quite a lot in that space, and the IEC has a young professionals programme to provide that career route and continuity of skill sets into standards organizations. One thing particularly relevant for the IGF is that we’re also working with the MAG, the Multistakeholder Advisory Group, to embed digital standards within its thinking, again using international fora to promote that multi-stakeholder view and the tools we can develop together to get different voices into standards development organizations.

Florian Ostmann:
Great, thank you Nikita. Ashley, over to you. How does that resonate with you in terms of, you know, your views on obstacles and also solutions for ensuring inclusion and participation of stakeholders?

Ashley Casovan:
Yeah, I think all of that resonates here as well. One of the challenges we’re actually having with the Data and AI Standards Collaborative is that we’re trying to be incredibly inclusive, and so, to some of the points Nikita was just making, the bandwidth of the teams within government that are trying to process and analyze all of that information does become constrained. That is why I spent so much time in my previous remarks talking about the need to really understand what types of standards we are talking about, because then we can identify who needs to be at the table for which conversations. Having broad-based discussions about all types of AI in all types of contexts makes it really difficult to get the right stakeholders there. One very significant effort we’re making is to ensure inclusion across all aspects of civil society. Something that has been missing from a lot of our conversations, in the Canadian context, is Indigenous groups, so we’re making a concerted effort to ensure that voices from the most impacted populations in Canada are not only brought in, but that we really understand the harms that can come from these AI systems, to find appropriate ways that standards can help mitigate them.

Florian Ostmann:
Great, thank you, Ashley. And over to you, Wansi, for your views on stakeholder participation.

Wansi Lee:
Yeah, it’s definitely a very complex space. Singapore is a small country, the smallest here, I think, amongst everybody on the panel, and we also have limited resources. One of the things we need to do is make sure we focus our resources in areas where we can contribute to the global conversation, because there’s a lot going on in the standards space, and we want to make sure that what we do makes sense in the grand scheme of things. That’s why we are very targeted in terms of where we want to develop and spend effort: a lot of what’s already happening internationally we can simply adopt, and where we think there are gaps, we want to help fill them. That’s why we put our emphasis on tooling and testing. That’s not to say other areas are not important; it’s just where we think there’s a gap and Singapore can help. And that’s how we started AI Verify. In terms of getting more involvement, we are definitely very active in making sure that what we do is not just a government perspective. We actively engage industry, large and small companies that operate globally and domestically in Singapore, to make sure their input can be incorporated. All organizations can participate in the AI Verify Foundation, which is open source anyway. We’re trying to make it a mechanism for any organization that’s interested, even if you’re very small or not from Singapore; it is a platform you can contribute on.
And then from there, we take some of the work being done at the AI Verify Foundation, rationalize it at the national level in Singapore, and then see how we can support it more globally on other platforms, whether it’s the OECD, GPAI, ISO, or other multilateral platforms. Thanks.

Florian Ostmann:
Great, thank you, Wansi. And over to you, Aurelie, for your take on stakeholder inclusion.

Aurelie Jacquet :
Thank you, Florian. So, Australia has a small yet powerful delegation, and having a small delegation should not stop you from being involved in standards. Most of our experts were new to standards when we got started back in 2018, so it took a bit of adaptation. One thing I’d like to highlight is that we worked with other small countries to ensure that the mechanisms in place are actually fitting for our size: when you have few experts, you cannot have them in all the different meetings at all times, so we’ve worked very closely with others to make the process manageable. Australia is also looking closely at the key elements of expertise we have at home and how we want to lead on them overseas. Of course, we have the resource challenge and the time challenge. From a resource perspective, we’re very lucky to have help for experts from not-for-profits or smaller businesses: the government supports us so that we can participate as volunteers and travel, including to the ISO plenary coming up in Vienna next week. Another challenge, which we’ve been working on very closely with Saira and the National AI Centre, is that if you have not participated in the development of these standards, it is sometimes hard to get the context around the documents as written. Our experts have worked really hard to start developing white papers giving the background behind 42001, some of the bias standards, and the sustainability standards we are developing, and how they build into practice. One challenge remains for SMEs: standards are often taken up by larger organisations, so how do we make them more fit for purpose and more easily accessible for SMEs? Those conversations are ongoing, and we are working closely on them.

Florian Ostmann:
Great. Thank you, Aurélie. Now, we’re already getting close to the hour, so we don’t have much time left. There’s lots more I’d like to ask, but I also want to make sure we get a chance to hear from the audience. So I think we’ll briefly pause the panel and see who might like to come in. There’s one contribution in the back and also Holly in the front, so if the two of you would like to come in; and if anyone online would like to contribute, you will be able to speak, so please do raise your hand. But, yeah, please go ahead.

Audience:
Thank you. My name is Wout de Natris. I’m the coordinator of the Internet Standards, Security and Safety Coalition, a Dynamic Coalition here at the IGF. I’d like to make two comments. What I notice is that what we’re talking about here are all more or less government-accepted standards institutions like ISO, CEN-CENELEC, et cetera. What I’ve noticed in the research we’ve been doing on internet standards is that in the technical community, all sorts of standards are made as well, and we found that they’re almost 100% not accepted in government policies. I don’t know if that is the case with AI as well, but if it is, then you have two separate bodies creating standards: one may become official at some point, while the others, who make the internet run, and AI run on the internet, are not addressed in any way. So my suggestion would be to reach out to the technical community and see what is being done in the IETF or IEEE, et cetera. My second comment is a little more strategic. I hear these fantastic initiatives you’re presenting, and we have probably had 19 other AI sessions here at the IGF. So what is going to come out of this session? Ideally, it would have been some sort of, we can’t call it a declaration in the IGF context, I know, but what you’re doing should be the main message coming out of the AI track here at the IGF, and probably now we all go home with a little report stuck somewhere on a fairly obscure website. So when you talk about the MAG, perhaps you could push for some sort of declaration on this next year. Because what you’re presenting here is the future, and it’s a shame if we go home without the world hearing about it. So thank you.

Florian Ostmann:
Thank you very much for that, and thanks for the encouraging words in your second comment. To the first comment, just to briefly say: the point you raised is a really important one. We focused in the presentation on the organizations that we mentioned, but we are very much aware of the wider landscape, including standards developed in the IETF and elsewhere. Making those connections and providing the full picture is really part of the mission of what we’re trying to achieve. So thanks for bringing that in.

Florian Ostmann:
Holly, please.

Audience:
Hi. I’m Holly Hamblett with Consumers International. We’re a membership organization of consumer groups around the world. I want to start by saying that I think this is a really great initiative. It’s going to be really helpful to have that multi-stakeholder approach, and it’s vital to get consumer organizations and wider civil society involved in these processes. But I wanted to comment briefly on the value of consumer organizations joining the AI Standards Hub, what we can bring, and then on what the AI Standards Hub will give to us and how it will be helpful. The value of consumer groups, and Consumers International especially, is that we can play a role in ensuring that AI is developed ethically and responsibly, because we represent the interests of consumers, who are the end users of the products and services. We bring a unique perspective: many consumer organizations run complaint mechanisms for consumers, so they have direct insight into how consumers are using products and services and how those are impacting them. They do a lot of product and service testing themselves, so they have information on whether products comply with existing consumer protection regulations and whether those need to be enhanced in some way with standards. What I’m saying is that consumer organizations have a lot of data that can help support standards with evidence and make sure they reflect consumer interests. As for what consumer organizations can bring to this space: we have the insights to make sure standards are grounded in ground-level realities and reflect how the technology will impact consumers. We can bring a global perspective, not just Consumers International but our whole membership base; we have around 200 consumer organizations in around 100 countries. This is very global, very diverse and representative, and bringing in these voices is absolutely vital.
We can help ensure that standards are designed to protect consumer interests from the outset. It’s a huge problem with regulation, standards, and policies that consumer interests are brought in at the end as an afterthought, which often leads to further harm for consumers. Bringing them into the discussions from the beginning is a really good way to make sure that not only is everything compliant with existing regulation, but that it is sustainable in the long run, because we can consider the impact on consumers, mitigate the risks, and ensure that everyone enjoys the benefits. We can provide feedback on draft standards to make sure they’re clear, concise, and easy to implement, not just for businesses, governments, and anyone else they apply to, but for consumers themselves. When consumers are aware of the standards, they’re able to exercise their consumer rights and engage with technology a lot better, so it’s really important to translate standards into consumer-friendly language, and that’s something consumer organizations can absolutely help with. The final way we can help is to promote the adoption of standards by consumer organizations, businesses, and governments. We are well connected, and it’s a big benefit of working with consumer organizations that we’re able to say: this is consumer-friendly, we support this. That can help push something forward as a standard. The AI Standards Hub, for us, is going to be incredibly helpful. Florian mentioned in the presentation two very sizable challenges that consumer organizations, and civil society generally, face. One is that we are not often welcome in these spaces; it’s very difficult to get into the standardization process, largely because the process is dominated by industry experts and technical representatives, and civil society isn’t generally there.
That then leads to consumer interests being an afterthought, which absolutely needs to be avoided. The second is capacity building. Some of our member organizations are wonderful in the digital space; they’re very clued up on it. Other members are experts in consumer protection and consumer protection only. It’s very difficult for consumer organizations, which are traditionally underfunded, not very well resourced, and not experts in everything, to try to cover the vast scope of all digital issues, particularly complex emerging technologies like AI. So the community and capacity building of the hub is going to be beyond helpful. This isn’t something we offer our members, so it will help us as an organization, and our members through us, to contribute not only to our work but to work globally and internationally, and to make sure the space and capacity are there to do that. I’ll end on one final note, because I know I’m taking up a lot of time here: it’s very important to consider that consumer organizations are not a monolithic group. We represent a diverse range of views and interests, and it’s important to ensure broad representation of all consumer voices in AI standardization. One way to make this easier is for consumers themselves to understand the process, contribute to it, and know what is going on and how they can be a part of it. So we need to develop user-friendly tools, and we need the resources to help consumers learn about AI and AI standards and provide their feedback consistently. Thanks.

Florian Ostmann:
Great, thank you very much for that. We’ll be very interested to explore with you how we can work together to address those challenges, and it’s particularly great to hear about your role as an international organization that brings together consumer organizations from around the world. Now, we’ve almost run out of time. I’d like to use the last couple of minutes for a short, very quick round across the panel and invite each of you to share your final reflections: perhaps your top priorities for international collaboration, to bring it back to that theme, and, going back to the earlier comment encouraging us to think about tangible outcomes.

Nikita Bhangu:
In terms of tangible outcomes following the discussions and collaborations in this space, I’d really emphasize research on standards and UK research, but also working with international partners to understand the broad issues that we’ve discussed today. Thanks.

Florian Ostmann:
Thank you, Nikita. Ashley, over to you.

Ashley Casovan:
Thanks. I’ll keep it short. Understanding what’s already happening in the space, so that we’re not reinventing things in any one country, is really important. So an international exercise, whether through the OECD or another forum like the IGF, to do a mapping of which standards work is taking place where, so that we understand not only what’s being done but what types of harmonization efforts are required, is something I’d really love to see. And, again, I can’t stress this enough: AI is not one monolithic thing. Really starting to break down the different types of uses, and therefore the harms attributed to these systems and those specific uses, and then getting the right stakeholders around the table for those dialogues, recognizing that AI crosses and transcends borders, is going to be important in the years to come.

Florian Ostmann:
Great. Thank you, Ashley. And, Wansi, over to you.

Wansi Lee:
Thanks. I’ll also keep it short. For us, it’s really important that there’s no fragmentation of AI standards and AI regulations. We have been working very hard over the last few years, and we continue to do so, to partner with countries and be active on multilateral platforms to try to drive towards, or at least work together towards, some kind of harmonized, aligned, or interoperable standards for AI. We’re now starting to see a lot of countries coming up with their own requirements. Singapore is doing this both within our region, where in ASEAN we support the development of a consistent ASEAN guide for responsible AI implementation, and at the same time, beyond ASEAN, we’re also active globally. Thanks.

Florian Ostmann:
Great, thank you. Aurelie.

Aurelie Jacquet :
Thank you. Following on from Wansi’s point, I think it’s important to note that it’s actually good to see different practices and different regulatory initiatives. What standards provide is interoperability; that’s why we are doing standards and why we are involved in them. It’s not about unification, it’s about harmonization. The key point we made in some of our workshops at the APEC SOM is that standards allow diverse views while actually making sense of each of those views: standards are a thread that brings all those views and perspectives together. From an Australian perspective, what we focus on is making sure we use AI responsibly and can scale it. To do that, we need interoperability, and that’s why we use standards, not only as a way to check international best practice but also to learn from it. When you have 100 experts from government, academia, and industry together in a room discussing best practice for responsible AI, that is a great resource to inform local policy, but also to develop our experts and grow the industry.

Florian Ostmann:
Thank you, great. In many ways we’ve only scratched the surface during the last 90 minutes; we could easily spend another hour or two discussing this, but I’m glad we got as far as we did. I do hope that what we were able to cover piqued the interest of those of you who might be entering the standards space without a background in it, and gave those of you who are already involved those different perspectives from around the world. And for all of you, going back to the motivation for the session and the discussion around international collaboration: if you have ideas on collaborating and joining up initiatives across the fields you’re working in, we’d be really interested and would love to hear from you, so please do reach out. I think that’s the main message to end on. Other than that, all that is left to do is to thank everyone: thank our esteemed panelists for joining online across different time zones, thank you, Nikita, for being in the room, and thank you to my colleagues Matilda and Sunny for being on the stage. So thank you, everyone, and let’s hope there’ll be a continuation of these discussions, and that we see many of you again in one way or another. Thank you.

Speech statistics

Ashley Casovan — speech speed: 168 words per minute; speech length: 1097 words; speech time: 392 secs
Audience — speech speed: 172 words per minute; speech length: 1396 words; speech time: 487 secs
Aurelie Jacquet — speech speed: 133 words per minute; speech length: 1389 words; speech time: 625 secs
Florian Ostmann — speech speed: 171 words per minute; speech length: 5342 words; speech time: 1874 secs
Matilda Road — speech speed: 160 words per minute; speech length: 1346 words; speech time: 506 secs
Nikita Bhangu — speech speed: 164 words per minute; speech length: 1599 words; speech time: 586 secs
Sonny — speech speed: 189 words per minute; speech length: 1057 words; speech time: 335 secs
Wansi Lee — speech speed: 173 words per minute; speech length: 1503 words; speech time: 523 secs

How to retain the cyber workforce in the public sector? | IGF 2023 Open Forum #85


Full session report

Martina Castiglioni

The European Cyber Security Competence Centre (ECCC), operational from this year, plays a key role in delivering the ambitious cyber security objectives of the Digital Europe and Horizon Europe programmes. Together with member states, industry, and the cyber security technology community, the ECCC aims to shield European Union society from cyber attacks. It is a positive development that demonstrates a proactive approach to cyber security in Europe.

However, despite numerous cyber security initiatives, the skills gap remains a significant challenge. While public and private investment initiatives aim to close this gap, the situation is still concerning. Simply having a large number of initiatives does not guarantee a reduction in the skills gap. This ongoing issue requires further attention and efforts to ensure a skilled workforce meets the demand for cyber security professionals.

On a positive note, the Cyber Security Skills Academy serves as a single entry point for cyber security education and training in Europe. Supported by €10 million in funding, the academy aims to develop a common framework for cyber security role profiles and associated skills, design specific education and training curricula, increase the visibility of funding opportunities for skills-related activities, and define indicators to monitor market progress. The establishment of and support for the Cyber Security Skills Academy are promising steps towards addressing the skills gap and providing comprehensive education and training opportunities for those interested in cyber security.

In conclusion, the European Cyber Security Competence Centre (ECCC) actively works towards achieving the cyber security goals of the Digital Europe Program and the Rise of Europe Programs. However, the persistent cyber security skills gap remains a challenge that needs attention. Efforts are being made through various investment initiatives, and the establishment of the Cyber Security Skills Academy shows promise in bridging this gap. By prioritising education, training, and skill development, Europe can strengthen its cyber security capabilities and effectively protect its society from cyber threats.

Audience

According to the information provided, Sri Lanka is currently facing challenges in implementing cybersecurity policies. Despite the development of a five-year policy for cybersecurity, the implementation process is proving to be difficult. This negative sentiment suggests that Sri Lanka is struggling to effectively address cybersecurity issues and protect its digital infrastructure.

In addition to the cybersecurity challenges, Sri Lanka is also experiencing a talent deficit in the IT sector. It has been highlighted that there are around 30,000 vacancies for graduates in the IT industry. This negative sentiment underscores the need for more qualified professionals in the field to meet the demands of the growing industry. It implies that the lack of skilled talent could potentially hinder the growth and development of the IT sector in Sri Lanka.

However, amidst these challenges, there is a glimmer of positivity in the form of strong collaboration. The speaker emphasises that building capacity within the government can only be achieved through collaborative efforts. This positive stance recognises that partnerships and cooperation between different stakeholders are crucial in improving the government’s ability to address various issues, including capacity building. It implies that by working together, the government can enhance its capabilities and effectively meet the demands of the ever-evolving digital landscape.

Furthermore, it is acknowledged that the digital world is inherently imperfect, and no system is completely safe from hacking. The speaker provides examples, such as the Pentagon and the White House, to support this argument. This negative sentiment highlights the notion that despite advancements in cybersecurity measures, there will always be weaknesses that can be exploited by hackers. It suggests that the focus should not solely be on finding a foolproof solution, but also on continuously improving and adapting cybersecurity measures to mitigate risks.

In conclusion, Sri Lanka is currently facing challenges in implementing cybersecurity policies and addressing the talent deficit in the IT sector. However, there is optimism for building capacity within the government through strong collaboration. It is also acknowledged that there is no foolproof solution for preventing hacking, as systems will always have vulnerabilities. These insights highlight the need for ongoing efforts to strengthen cybersecurity measures and foster collaboration to effectively address digital challenges in Sri Lanka.

Yasmine Idrissi Azzouzi

The global shortage of cyber security professionals is a pressing issue, with a current deficit of 3.4 million individuals. Unfortunately, the public sector faces difficulties in competing for talent due to a lack of funding. To bridge this workforce gap, it is crucial to raise awareness about the diverse range of roles within the cyber security field and its multidisciplinary nature. Contrary to popular belief, cyber security is not solely a technical domain but encompasses various disciplines.

Addressing the underrepresentation of certain communities, including women and youth, in the cyber workforce is essential. By promoting inclusivity and diversity within the field, we can encourage more individuals from these communities to pursue careers in cyber security. This aligns with the goals of SDG 5: Gender Equality and SDG 4: Quality Education.

Furthermore, there is a revolving door between the public and private sectors in cyber security. To attract and retain qualified professionals, it is imperative to invest in their development and well-being. Upper-level positions face a significant shortage, and professionals in the public sector often experience excessive workloads. This highlights the importance of investing in cyber security professionals to ensure an efficient and effective workforce.

To address these challenges, it is proposed to appeal to individuals’ sense of purpose and prestige. Promoting the opportunity to work for the government and contribute to national security can be enticing to potential candidates. By framing the cyber security field as challenging and impactful, it becomes more attractive to individuals seeking meaningful work.

In conclusion, the shortage of cyber security professionals is a global concern that requires immediate attention. Raising awareness about the diverse range of roles, addressing underrepresentation in certain communities, investing in professionals, and promoting the sense of purpose and prestige associated with working in the field are vital steps to bridge the workforce gap. By doing so, we can ensure a more secure digital landscape and contribute to the goals of SDG 8: Decent Work and Economic Growth.

Marie Ndé Sene Ahouantchede

The ECOWAS region, encompassing West African countries, is currently grappling with escalating cybersecurity challenges due to the rapid advancement of digital technology. This digital transformation brings about new opportunities for malicious cyber activities, resulting in a negative sentiment towards the region’s cybersecurity landscape.

One significant issue exacerbating the situation is the acute shortage of skilled cybersecurity professionals. The percentage of government and public sector organizations equipped with the appropriate cyber resources to meet their needs is alarmingly low, standing at just 29%. Furthermore, projections indicate that by 2030, an estimated 230 million people in Africa will require digital skills, highlighting the pressing need to address the inadequacy of skilled cybersecurity professionals to meet this demand. The limited supply of these professionals in the ECOWAS region is viewed as a negative contributing factor to the cybersecurity challenges.

However, it is encouraging to note that ECOWAS and West African governments are taking proactive steps to mitigate the situation through cybersecurity education and training initiatives. Under the umbrella of the Organization of Computer Emergency Response Teams (OCYC), the ECOWAS Commission launched the ECOWAS Regional Cybersecurity Hackathon, an event aimed at fostering innovation and collaboration to address cybersecurity challenges within the region. Additionally, in 2020, an advanced training programme was provided to member states, focusing on enhancing their capabilities in managing and responding to computer security incidents. These initiatives indicate a positive effort to strengthen cybersecurity education and training in the region.

A significant concern facing African countries is the brain drain in the field of digital professions. Despite endeavors to attract digital professionals, public sector salaries remain uncompetitive amid the global shortage of digital talent. This brain drain further exacerbates the shortage of skilled cybersecurity professionals in the ECOWAS region, compounding the challenges faced and reinforcing the negative sentiment.

As a recommended course of action, the inclusion of education and training initiatives, alongside public-private partnerships, within the national strategy is deemed crucial to addressing the talent shortage in the field. Noteworthy examples include Benin’s Ministry of Digital Affairs collaborating with the Smart Africa Digital Academy to develop cybersecurity education, and the signing of a Memorandum of Understanding between Togo and the United Nations Economic Commission for Africa (UNECA) to establish the African Center for Coordination and Research in Cybersecurity. These partnerships demonstrate the importance of collaboration and concerted efforts across various sectors to bridge the talent gap and bolster cybersecurity capabilities.

In conclusion, the ECOWAS region is facing significant cybersecurity challenges as a result of digital transformation, leading to a negative sentiment. The shortage of skilled cybersecurity professionals aggravates the situation, further compounding the negative sentiment. However, ECOWAS and West African governments are implementing positive cybersecurity education and training initiatives, countering the shortage to some extent. African countries are experiencing a brain drain in the digital professions, adding to the challenges faced. Education and training, in conjunction with public-private partnerships, are recommended as integral components of the national strategy to combat the talent shortage. These insights highlight the need for concerted efforts within the region to strengthen cybersecurity capabilities and address the evolving cybersecurity landscape.

Regine Grienberger

The discussion centres on the crucial requirement for cyber experts within the public sector to ensure digital sovereignty. The need for digital sovereignty is being deliberated in both Germany and the European Union. It is argued that governments must have control over their own networks to assert their sovereignty in the digital realm.

To address this issue, it is suggested that a portion of the digital or digitisation budget be allocated for cybersecurity measures. Specifically, the cybersecurity agency recommends setting aside 15% of the budget for this purpose. Additionally, pooling cybersecurity services for multiple public institutions and moving data to the cloud are seen as effective strategies to strengthen cybersecurity in the public sector.

Another important aspect highlighted in the discussion is the need to increase cyber literacy amongst the workforce. It is acknowledged that humans often form the weakest link in the cybersecurity chain. To mitigate this, there is an idea to conduct a cybersecurity month in October, during which colleagues can be informed about various cyber threats and receive training on how to handle them.

Furthermore, it is emphasised that the public sector requires not only technical experts but also individuals who possess the ability to effectively communicate with management. The importance of having employees with a dual skill set, generic knowledge combined with cyber expertise, is highlighted. It is suggested that such individuals can be hired and then upskilled or reskilled while on the job.

In an interesting proposition, one speaker advocates for job rotation instead of retaining trained experts solely in the public sector. This would involve training individuals within the public sector, releasing them to work in private companies, and subsequently gaining them back later in their careers. This proposal aims to provide a more comprehensive skill set for cyber experts and foster collaboration and knowledge exchange between the public and private sectors.

Overall, the discussion centres on the various strategies and recommendations to address the shortage of cyber experts in the public sector and enhance digital sovereignty. By implementing these measures, it is believed that the public sector can effectively tackle cyber threats and safeguard national interests in the digital domain.

Lara Pace

The analysis examines several aspects of cybersecurity in both the public and private sectors. It begins by discussing the potential benefits of job rotation from the public to the private sector in cybersecurity. Understanding the challenges faced by the public sector from within the private sector can lead to innovative solutions. Lara’s experience transitioning from the public to the private sector while focusing on global cybersecurity serves as evidence. This suggests that job rotation can positively enhance cybersecurity expertise and knowledge transfer between sectors.

The analysis then addresses the issue of retaining cybersecurity professionals in the public sector. Creating a clear and inclusive environment with well-defined career pathways is essential for keeping professionals. The report notes that professionals, including those in cybersecurity, have a natural desire to progress. By offering attractive career advancement opportunities and fostering an inclusive workplace culture, the public sector can improve retention. This argument is supported by the idea that a supportive work environment leads to higher job satisfaction and employee loyalty.

In terms of incentivization in cybersecurity, the analysis takes a neutral stance, suggesting that incentives do not have to be solely monetary. While specific evidence or arguments are not provided, the report proposes that recognition, career development opportunities, and job flexibility can be effective motivators for cybersecurity professionals. This implies that non-monetary incentives can attract and retain skilled individuals in the field.

The analysis also emphasizes the importance of effective human resource training in cybersecurity, paired with job creation initiatives. Currently, cybersecurity training often happens in isolation, leading to trained personnel leaving their geographic region. To address this, the analysis recommends a coordinated national effort that integrates comprehensive training programs with job creation strategies. This holistic approach can bridge the cybersecurity skills gap and provide more employment opportunities.

Lastly, the analysis acknowledges that cybersecurity is not always a top national priority. It suggests that when implementing initiatives, it is crucial to consider concurrent efforts that prioritize job creation. This ensures that cybersecurity professionals trained in the country remain in the field. It highlights the need for a balanced approach that aligns cybersecurity goals with other national priorities, such as industry and innovation.

In summary, this analysis provides insights into various aspects of cybersecurity in the public and private sectors. It discusses the benefits of job rotation, the importance of creating an inclusive environment for talent retention, and the value of non-monetary incentives. Additionally, it emphasizes the integration of training and job creation as a coordinated effort and advocates for balancing cybersecurity priorities with other national initiatives. These findings and recommendations contribute to a comprehensive understanding of cybersecurity and provide guidance for policymakers and organizations in navigating this evolving landscape.

Komitas Stepanyan

The analysis explores the urgent need to enhance the pipeline for cyber security professionals in Armenia. To address this issue, a range of initiatives has been implemented in the country. One initiative involves collaborating with renowned universities in Armenia to develop and nurture a skilled workforce in the field of cyber security. Furthermore, a campaign led by the deputy governor of the Central Bank of Armenia aims to raise awareness about the career opportunities and importance of pursuing a career in cyber security.

Specialized training is seen as vital in enabling professionals to effectively recognize and respond to cyber incidents. These training programs focus on incident response, forensic research, and compliance/audit of cyber security incidents. By equipping professionals with these specialized skills, they will be better prepared to handle and mitigate cyber threats and attacks.

In addition, the analysis highlights the unique appeal and satisfaction that can be derived from working in the public sector. While monetary motivation is important, the impact and sense of purpose associated with public sector work are highly valued. Public sector professionals have the opportunity to make a difference in the lives of thousands or even millions of people.

Efforts are underway to establish a nationally recognized Computer Emergency Response Team (CERT) in Armenia. This is essential for effectively responding to and managing cyber security incidents at a national level. Additionally, there are plans to apply for membership in FIRST, the Forum of Incident Response and Security Teams, an international organization focused on incident response. These efforts demonstrate a commitment to enhancing cyber security capabilities and collaboration with global counterparts.

In conclusion, the analysis underscores the need to expand the pipeline of cyber security professionals in Armenia. Collaborations with universities, specialized training programs, the appeal of public sector work, and the establishment of a national CERT and potential membership in FIRST are all key components in fortifying the country’s cyber security landscape. These initiatives are crucial for addressing cyber threats, safeguarding critical information systems and infrastructure, and ensuring a secure digital environment.

Laura Hartmann

According to the World Economic Forum’s Future of Jobs report, there is currently a global shortage of 3.4 million cybersecurity professionals. This shortage is largely due to the increasing digital economy and the rising threat of cyber-attacks. The speakers highlight the need for a growing number of skilled individuals in the field of cybersecurity to address these challenges.

One of the main issues discussed is the public sector’s struggle to retain cyber professionals. Due to the lack of funding, many public sector organisations are finding it difficult to compete with private sector companies in attracting and retaining talented individuals in the cybersecurity field. This poses a significant problem considering the increasing number of cyber-attacks that require effective cybersecurity measures.

To tackle this issue, the speakers suggest the implementation of cross-industry initiatives and cyber capacity-building initiatives. Cross-industry initiatives involve collaboration between different sectors to raise awareness and address the issues related to cybersecurity. This approach allows for a broader perspective and a more comprehensive response to the challenges faced in the digital world.

Furthermore, the speakers emphasise the importance of holistic approaches starting from education. They argue that raising awareness about cybersecurity and building a solid foundation of knowledge in this field is crucial for public safety. This holistic approach also involves management understanding the need for investment in cybersecurity.

The analysis also reveals a positive sentiment towards cyber capacity-building initiatives, especially for developing countries. The speakers mention initiatives implemented by GIZ, commissioned by the Federal Foreign Office of Germany, to improve cyber capacity in partner countries. This highlights the importance of addressing the shortage of skilled professionals in the cybersecurity field not only in developed nations but also in developing nations.

In conclusion, the analysis highlights the growing global shortage of skilled professionals in cybersecurity due to the increasing digital economy and the threat of cyber-attacks. The public sector faces difficulties in retaining cyber professionals, and cross-industry initiatives and cyber capacity-building initiatives are proposed as solutions. A holistic approach, starting from education and raising awareness, is crucial for public safety. Additionally, the importance of addressing the shortage of skilled professionals in the cybersecurity field in developing countries is emphasised.

Session transcript

Laura Hartmann:
Okay, hello. Yes, so welcome everyone today, the audience in the room and also joining virtually for our open forum on how to retain cyber professionals in the public sector. My name is Laura Hartmann. I work for the German Development Agency, GIZ, specifically on cyber capacity building, and I’ll be moderating the session today. We are very privileged to be joined by a distinguished panel with speakers on site and joining virtually from the public and private sector, who will bring in various perspectives and share national, regional, and global insights, and also present very concrete initiatives on how this issue could be addressed. For the audience on site and online, you’ll have the chance to actively join the discussion after our speakers’ inputs, and before we start the session and before I give it over to the speakers, I’ll give you a few framing points why we are here today to discuss this. So, I think we are all aware that our digital economies are expanding, technologies are getting more sophisticated, and the number of cyber attacks and incidents is rising. What is not rising to a required extent is the number of cyber professionals protecting our infrastructures. So, according to the Future of Jobs report published by the World Economic Forum this year, there’s a shortage of 3.4 million globally in cyber security, and to support our global economy, the number is only rising. Secondly, organizations are currently competing for talent, mainly by paying more and more to the same pool of people. So, one could argue that this exacerbates the staff shortage, and also the public sector cannot compete here because of a lack of funding. And thirdly, the whole topic is particularly important with an eye on developing cyber capacity building initiatives. 
So, as GIZ, for example, we are commissioned by the Federal Foreign Office of Germany to implement cyber capacity building initiatives with partner countries from different contexts, and since we started implementation, all partners have voiced the same issue: they fear that re-skilled, up-skilled people will just leave the public sector. So, our panel today aims to find answers to this challenge and to close this gap in cyber security professionals, especially for the public sector. I’m happy to introduce you to our first speaker of the session now, Yasmine Idrissi Azzouzi. Yasmine is a cyber program officer at ITU and works in the Bureau of Development. She’s been leading cyber capacity building projects, mainly for women, but also in the field of child online protection. She’s involved in national cyber policymaking, strategy development, and capacity building. So, Yasmine, over to you.

Yasmine Idrissi Azzouzi:
Thank you. Thank you very much, Laura, for this very timely and very important topic, and thank you for the invite to share. So, what I think is that it really boils down to making the field attractive, and the way to do that is twofold. Of course, quantitative measures are important to attract people. In fact, many communities are underrepresented in the cyber workforce, including women and youth, and people often just don’t envision themselves in cyber security jobs. They’re not aware of the many opportunities present in this important and growing field. There is a need to raise awareness of the types of roles that are needed. Cyber security is not just technical, it’s highly multidisciplinary, and we often forget that the public sector also means securing schools, securing hospitals, securing ministerial departments, and other key critical infrastructure for some countries. So, the need is really there to focus on attracting marginalized communities as well. One way of doing that, and the public sector can do this as well, sometimes better than the private sector, is offering benefits like gender-sensitive work arrangements, childcare, parental leave, etc., so that there’s also a better inclusion of women. So, that’s attracting people from a quantitative point of view, but it’s also important that once they’re there, they stay, and the idea is to be able to retain them, and the best way to retain them is through some qualitative measures. One of them is to offer opportunities for career progression and leadership roles, even for people with technical profiles, to be able to offer them the capacity to jump from a fully technical role to one more of leadership and maybe policy. And there needs to be this accommodation, in a way, for multidisciplinary roles.
What is often observed is that there is a shortage among upper-level positions as well, so we need to acknowledge that there is a revolving door between the public and the private sector, and really invest. So, the investment should be twofold. Obviously, investment in people is key, but also investing in technology. I mean, cybersecurity professionals in the public sector are often very overworked, sometimes doing the job of several people. So, investing in software that can automate some aspects can certainly be of help, but of course, I think the most important thing is really to invest in people. The idea is to, you know, have people in your institutions be encouraged to take part in capacity-building programs, and at the ITU, we do offer them for the public sector, for example, and I would take a little moment to explain a couple of them. So, one is the cyber drills, and these are comprehensive, holistic exercises that are cross-country and cross-region, for people in the public sector taking care of national policies and national incident response as well, and these usually also serve as an exchange between countries, trying to understand the lessons learned and the common challenges.
Another project and program that we have for people in the public sector, very specifically for women, is Her CyberTracks, in which we are collaborating with the Federal Foreign Office of Germany, with Regine here, and the GIZ as well. The idea of this is, of course, to allow women not only to participate in national cyber policy making and cyber diplomacy at the international level, but also to do it meaningfully through, obviously, training to better understand diplomacy and policy for the technical profiles, but also through a holistic approach of providing mentorship, role models, and networking, which is definitely key in this field, trying to understand that the challenges you are going through, other people have gone through, and that you’re not sort of alone in this. So, as introduced by Laura at the beginning, there is, of course, the salary motivator that often is an issue between the public and private sector, but what the public sector can do is offer what we call long-term motivators, as opposed to a higher salary, which is a short-term motivator. So, one thing that we can focus on is really to promote that this is an opportunity to work on something that is challenging, to try to appeal to people’s sense of purpose, which, of course, is not only the case in cyber security, but in the public sector in general, trying to see if you believe in a mission where you are contributing to something that is for the betterment of your overall country and society. And one thing that we can do, and that is often overlooked, since in the cyber security community we always sort of have this echo chamber, is to look at other fields. You know, there are a lot of studies on turnover rates and maintaining employee engagement in the public sector, and the public sector does have some strengths.
Obviously, job security, stability of income, some non-financial benefits, as I mentioned, you know, parental leave, paid leave, pensions. Also, of course, the prestige of working for one’s government. So, meaningfulness, definitely accomplishing something of real value, satisfaction and pride in the work that has been performed, and the possibility to progress as well. But one thing, and that’s last but not least, I will really conclude on that, is recognition, because with all the training in the world, I like this sort of metaphor of a plant. You know, you can give it all the water and nutrients it needs, in this case, obviously, education and training, but it will not grow without sunlight, without, you know, shedding light on it. So, shedding light on people’s accomplishments and giving them value as people, not just, of course, numbers, is definitely something that can be helpful in attracting more people and retaining them, and making them also proud to be working in the cyber security workforce for their government. Back to you, Laura.

Laura Hartmann:
Thank you, Yasmine, for the input. So, I think the key points that we heard from Yasmine were that more attention should be paid in the public sector to promotion strategies: really shifting the awareness of people that are maybe not so attracted to cyber security as a working field, so underrepresented groups, marginalized groups, women, so that we can give them the possibility to join the workforce through cyber capacity building initiatives. Also, I think the opportunity for technical staff to switch to leadership roles, managerial roles, and vice versa. I think these are some key points that we should keep in mind for the discussion later, and I would now give the floor to one of our speakers online, Martina Castiglioni. She’s a program officer working at the ECCC, which is the European Cyber Security Competence Center and Network. Before joining the ECCC, she was the head of training and advisory services at the Italian Cyber Security Competence Center, and was heavily involved in the development of cyber security training campaigns at national level. She worked in cyber security advisory services and supported operators of critical infrastructures, and her expertise includes cyber security risk management, governance, cyber security auditing and assessments, and crisis management.

Martina Castiglioni:
Good morning, everyone. Can you hear me? Yes, we hear you. Okay, thank you. Can you also see me? Not yet. We cannot see you at the moment. Okay. Maybe that still changes. Okay. So, by the way, good morning, everyone, or good afternoon. Firstly, thanks for inviting me here, and this is really the third time I will try to bring you a glimpse of what is going on in Europe regarding this topic, and in particular, what is going on under the competence of the European Cyber Security Competence Center, ECCC. As mentioned by Laura, I’m working for this new center focused on cyber security. I’m calling from Bucharest, where the European Cyber Security Competence Center is located, and let me introduce in a few words the role of this center. The center was established in 2021 but has been operational from this year, and the center, together with the member states, the industry, and the cyber security technology community, aims to shield European Union society from cyber attacks, maintain research excellence, reinforce the European Union industry in this field, and also boost the development and deployment of advanced cyber security solutions. The center, the ECCC, will play a key role in delivering on the ambitious cyber security objectives of the DEP, the Digital Europe Programme, and the Horizon Europe Programme, and these two programmes, of course, also aim to narrow the cyber security skills shortage in Europe, both in the public and in the private sector. I would like to explain that this center, the ECCC, is operating in a new European cyber security framework, because it’s working together with the so-called network of NCCs, the National Coordination Centres.
So, in this way, each member state will have a contact point for the European Cyber Security Competence Center; these receive funding from a European programme and will develop cyber security capacity building activities at national and regional level, and promote cyber security educational programs at national and regional level, under the big hat of the European Cyber Security Competence Center. In this way, we aim to facilitate collaboration and the sharing of expertise and capacities across Europe, in particular among research, academia, and public authorities, under the so-called Cyber Security Competence Community, which is the third player of this new European Union framework, after the Competence Center and the NCC network. So, of course, the initiatives that are going on in Europe to tackle the topic of this agenda are based on the fact that the security of the European Union cannot be guaranteed without our most valuable asset, our people. As shown by the latest reports at European level, mainly published by ENISA, a large number of cyber security incidents have also targeted public administrations and governments in member states and public bodies at European and national level. So we need professionals with the skills and competencies to prevent and defend the Union, including its most critical infrastructure, against cyber attacks and ensure its resilience. But we also need skilled people to implement the cyber security legislation and deliver on its legal and policy requirements; otherwise those pieces of legislation will not achieve their objectives. So, regarding the initiatives that are going on so far in Europe, of course we have many public and private investment initiatives focusing on closing the cyber security skills gap, also in the public sector.
Of course these initiatives already existed, but the situation shows that the cyber security skills gap still represents a huge issue, which may reflect the lack of synergies and coordinated action taken so far to close it. Indeed, Europe’s Digital Decade Policy Programme 2030 has set the target of increasing the number of ICT professionals to 20 million by 2030, and also of narrowing the cyber security skills gap that we have in the public sector, as well as the gender skills gap that we have in this field. I would like to mention a concrete initiative that has been established this year in particular, the so-called Cyber Security Skills Academy; I will refer to it simply as the academy. It will be our single point of entry and synergies for cyber security education and training, and will also offer funding opportunities and specific actions for supporting the development of cyber security skills. Of course, the main focus will be on skilling cyber security professionals in Europe, and the implementation of this academy will be supported by 10 million in funding from the so-called Digital Europe Programme; the European Cyber Security Competence Centre will indeed implement the strategic objective on cyber security under this programme. The academy so far has its concrete representation in a dedicated website that is publicly available, but strategically and operationally the academy rests on four pillars. The first one is fostering knowledge generation through education and training by working on a common framework for cyber security role profiles and associated skills.
And here I would like to mention that ENISA has defined a specific European Cybersecurity Skills Framework that defines the role profiles and competencies of cyber security professionals, and this will be the academy’s first basis to define and assess current skills and the skills that we need, monitor the evolution of the skills gap, and provide indications of specific new needs, also for the public sector. Another important pillar will of course be designing specific cyber security education and training curricula suitable for these specific roles, and here I would like to mention the project CyberSecPro, funded by the DEP, which brings together 17 higher education institutions and 13 security companies from 16 member states in order to collect the best experience and become the best practice for all cyber security training programs that will be developed under the academy. Then I would also like to mention that this is a responsibility of each member state. Indeed, the NCCs, the National Coordination Centres, are invited to explore how to set up a so-called cyber campus in each member state. The cyber campus would aim at providing pools of excellence at national level for the cyber security community, and the academy will help with their networking and the coordination of the activities of the different cyber campuses in each member state. This responsibility of each member state refers to a specific piece of European cyber security legislation, because according to European legislation, each member state should adopt, as part of its national cyber security strategy, specific measures in view of mitigating the cyber security skills shortage. Then I would like to mention briefly the last important pillar of this academy initiative, and then I will close. I know that the time is running out. Of course, we understood that we have to gain better visibility of the different initiatives that are going on in Europe.
So another important pillar of the academy will be ensuring better visibility of the available funding opportunities for skills-related activities in order to maximise their impact. Under this objective, I would like to mention that there is a specific working group, managed by the European Cybersecurity Competence Centre in collaboration with ENISA and the Commission, which aims to map all the cybersecurity training initiatives and all the cybersecurity funding opportunities related to narrowing the cybersecurity skills gap. With a better and more efficient overview of the current cybersecurity funds related to this specific topic, this will help to better define the funding priorities of the academy and of the Digital Europe Programme more broadly. Then I will close this overview of the academy. Another important pillar of the academy will be defining indicators to monitor the evolution of the market and to assess the effectiveness of the academy's actions. So under the academy, a specific methodology will be developed to measure progress in closing the cybersecurity skills gap. We will be defining specific cybersecurity indicators to monitor the evolution of the cybersecurity labour market, in order to be able to assess, re-elaborate and adjust the funding opportunities and the activities that are going on. There will be specific KPIs on cybersecurity skills by the end of 2023, elaborated by ENISA, and with these KPIs we will be able to collect data on the indicators and also report on them, with the first collection by 2025. Because another issue that has been revealed so far is that we did not have a specific report on the cybersecurity skills gap based on common indicators and common KPIs.
And of course this has been an obstacle to better identifying the priorities for tackling the cybersecurity skills gap, in particular in the public sector. So these new advancements in terms of elaborating KPIs and reports will be an important pillar for the overall achievement of the Cybersecurity Skills Academy. I will hand back to you, Laura, and sorry if I have taken more time.

Laura Hartmann:
Thank you, Martina. Just two highlights from what you said. First of all, thank you for giving us an overview of what the EU institutions and agencies do. I think it's really interesting what you said about the lack of synergies between them at regional level; this highlights the need for even more coordinated approaches to cyber capacity building, again to retain, re-skill, attract and up-skill people. And it's reassuring that the ECCC takes the initiative at EU level. What also became clear is that we need to really focus on the concrete roles, on what kind of professions we want to have in the job market. These have to be identified first, pointing to the need to conduct studies on that, because if you Google it, if you research it, it's always about the workforce gap in cybersecurity but never about concrete roles. So, coming back to the panel and the speakers, I'm happy to give the floor now to Komitas Stepanyan on my left. Komitas is the Technology and Cyber Security Director at the Central Bank of Armenia, and he is part of the team working on national-level digital transformation, including cybersecurity, in Armenia. He is a short-term consultant for the World Bank on digital transformation and GovTech activities and also works partly for the IMF on cybersecurity initiatives and programmes. Komitas, please have the floor.

Komitas Stepanyan:
Much better. Thank you very much for the opportunity to speak on this very important panel. Actually, you have already mentioned a couple of important things: that there is a huge shortage of cyber professionals, and the question of how to fill this gap. Another thing is that cybersecurity is quite wide. Everyone talks about the cybersecurity skills shortage, but what kind of specialists exactly do we need? For example, if you Google it, you can find that right now almost all organisations, including the public sector, need database admins, network admins, people who really understand how the entire technology and server infrastructure works. These are a little technical, but they are part of cybersecurity. Overall, let me explain what the Central Bank of Armenia, as a public institution, and my country more broadly are doing to try to fill this skills gap in the public sector. First of all, we need to increase the pipeline. To do that, there was an interesting initiative and campaign run by the leadership of the Central Bank of Armenia. Imagine: the deputy governor of the Central Bank of Armenia was leading the team, and we met the most recognised universities in Armenia, the top five universities, to talk with the management first of all and to see what kind of programmes, specific subjects or syllabuses could be developed for the different universities. Secondly, we met the students to promote this activity. When I was a student, I dreamed that somebody would come to my university to talk about such an initiative, so that, for example, as a young student in my second or third year, I could have a chance to join the public sector and work for the Central Bank of Armenia, the Ministry of Finance or another public institution. So it had a huge impact and we got a great response from the different universities. We continued this campaign, collected more than 300 CVs and conducted interviews to identify 30 talents, whom we "tortured" with training for six months in order to have a team working on cybersecurity.
Currently, it is very important for any public institution to have an incident response team; this is one of the main strategic objectives. Two years ago, we established an information systems agency which is responsible for three main pillars. The first is digital identification at national level. The second is the interoperability of public institutions, and not only public institutions. The third pillar is cybersecurity. This institution is responsible for working with academia and different universities to create specific subjects, based on our needs, to fill this gap. After that, we worked with internationally recognised organisations to provide specific training for 25 people, carefully selected from different public institutions, different ministries and agencies, including the Central Bank and a couple of commercial banks as well. We had special training for incident response, for forensic investigation, and for compliance and audit of cybersecurity incidents. Because, once again, if there is not enough capacity to recognise that there is an incident, then it is game over. According to the statistics, the average time to identify a cyber breach is over 200 days. So imagine: cyber criminals are hacking your environment, and only after 200 days are you able to identify it, and some institutions, unfortunately, are not able to identify that their systems have been breached at all. So this was the second initiative. Right now this process is ongoing and I am waiting for the results; the exam will be at the end of October. I hope a couple of my colleagues will become certified in cybersecurity incident response, which is really very important. And we would like to continue this training programme, and a couple of those who are already certified can become trainers or share their experience with others.
We are also cooperating closely with the private sector, because we all know that lots of good professionals work in the private sector. We have already heard that the public sector cannot be attractive if you are looking only at salary. The private sector pays more, but the public sector has its own beauty, because we are working for a mission. Mission and challenge are more important for many people. So during your career, mainly in the middle of your career, you may realise that money is a very short-lived motivator, as Yasmine already mentioned, and I fully agree with this. But the objective, what you are doing, matters. When you empower your young colleagues with the knowledge that whatever they do will have an impact on thousands or maybe millions of people, that can be the greatest motivator for them. So we are continuing this programme, and I hope that after another year we will have more certified professionals. We are also working on setting up a national CERT. This is also an ongoing process; after that we would like to have a recognised national CERT that works with other international CERTs, and then we will apply for membership of FIRST. I hope this helps as a concrete example.

Laura Hartmann:
Thank you very much for your points, Komitas. What became clearest is mission and purpose, right, which you mentioned can compensate for the lack of funding and for the fact that private companies pay more. It is the number one argument around the lack of cybersecurity skills in the public sector, and I'm happy to take this into the discussion later. So, to speed up a bit because we are running a little short on time, I will give the floor now to the second virtual speaker we have today, Marie Ndé Sene Ahouantchede. She is a digital specialist with 22 years of experience in the field of information systems and digital transformation. In August last year she joined the ECOWAS Commission, where she works as a programme officer for applications and e-government and is responsible overall for coordinating and driving the digital transformation efforts on behalf of the Commission in ECOWAS. The floor is yours, Marie.

Marie Ndé Sene Ahouantchede:
Thank you, Laura. Good morning, everyone. I am very delighted to have this opportunity to join the panel, so thank you. I am speaking on behalf of the ECOWAS Commission; ECOWAS is the Economic Community of West African States. I am not primarily a cybersecurity specialist, but I will try to share on this panel the ECOWAS approach and perspective on cybersecurity, especially on the workforce. So, ECOWAS… One second. Can you try to switch off your video so that we in the room also have the chance to see you? Thank you. Can you see me, please? Yes, we can. Perfectly. All right. So the ECOWAS region, and Africa more widely, faces growing cybersecurity challenges, as I am sure you are aware. This is a result of the digital transformation, which has given rise to new opportunities for malicious cyber activities. Many studies have pointed out the urgent need for a skilled workforce capable of effectively responding to the growing cyber threats the region is facing. On digital skills globally, it was announced in 2023 at the 12th Assises of African Digital Transformation in Madagascar that the need for digital skills in Africa will reach 230 million people by 2030, while only 5% to 10% of this need is covered, depending on the country. Specifically on cybersecurity, KPMG's 2023 cybersecurity outlook mentioned that the percentage of government and public sector organisations with appropriate cyber resources to meet their needs is only 29%. Given the persistence of the critical need for cybersecurity professionals, the coordinated regional approach of ECOWAS, the ECOWAS cybersecurity agenda, aims to increase cyber resilience in the region and to support member states in strengthening their capacity building, which certainly requires the availability of a skilled cybersecurity workforce. The supply of cybersecurity specialists in the ECOWAS region is, I can say, under capacity in the face of exploding demand.
To address this concern, the ECOWAS Commission and West African governments are multiplying efforts in cybersecurity education and training initiatives, identification of cybersecurity talent, capacity building, cooperation, partnership and awareness. For example, at the regional level, the OCWAR-C project, the West African Response on Cybersecurity and Fight against Cybercrime, was set up in collaboration with the EU. As a means of building a sustainable cyber workforce in the region, under OCWAR-C the ECOWAS Commission launched the ECOWAS Regional Cybersecurity Hackathon. This hackathon helped to build a regional pool of young cybersecurity talent, to assess the region's level of maturity in terms of skills in cybersecurity and the fight against cybercrime, and also to increase the interest of youth in digital security. To upskill professionals, still under OCWAR-C, the ECOWAS Commission and its partners are supporting the judicial authorities of the region in tackling the need through global capacity building initiatives. In the same dynamic, I can say that advanced training was provided in 2020 to member states' computer security incident response teams in order to enhance capabilities for handling cyber incidents and managing threats. As part of the cooperation on cybersecurity, a joint G7-ECOWAS platform for advancing cybersecurity was also launched in September 2022 to increase partnership for the continued implementation of the ECOWAS cybersecurity agenda for a resilient cyberspace in West Africa. At the national level, the countries of West Africa have adopted a series of cybersecurity measures, including the development of cybersecurity education. As an example, I can cite the National Digital Academy that was launched to offer advanced training to Beninese trainers and managers in ICT, artificial intelligence and cybersecurity; this is the outcome of the partnership between Benin's Ministry of Digital Affairs and the Smart Africa Digital Academy.
Despite the efforts I just mentioned, African countries are facing a real brain drain. To attract digital professionals, the public sector generally applies a specific salary policy, such as bonuses on top of the base salary. This remains insignificant in a global context of digital talent shortage, the major consequence of which is brain drain. Aware of the inadequacy of the public sector to compete with the private sector, and of the attractiveness of the market for cybersecurity talent, education and training are now recommended to be included in national strategies. The public sector is also exploring a multi-stakeholder approach involving the private sector and international partners. A good example of collaboration in Togo is the Memorandum of Understanding signed with UNECA, the United Nations Economic Commission for Africa, to collaborate on establishing the African Centre for Coordination and Research in Cybersecurity, and I know that efforts are ongoing to fast-track its realisation. I also wanted to share a best practice of public-private partnership: the specific case of the strategic partnership between the Togolese Republic and a company named Asseco Data Systems in Poland. This partnership gave birth in 2019 to Cyber Defense Africa, a joint venture company that offers cybersecurity services, mandated by the Togolese Republic to ensure the security of information systems in Togo and beyond its borders. The partnership combines a CERT with a national SOC.
Given the limits of the public sector in terms of its ability to keep its talent, and the global shortage of cybersecurity skills, the ways out that I see for our region are to elaborate national cyber workforce and education strategies; develop cybersecurity capacity building and education plans; introduce gender diversity by getting girls interested early; revamp public recruitment processes and conditions; expand the talent pool through collaboration; and invest in training and development programmes to promote cyber certification and develop talent inside the public sector. Also, to finish, the last way out I see is to follow the ITU guidelines when developing a national cybersecurity strategy, especially on cyber capability and capacity building and awareness raising. Thank you, Laura. This is what I wanted to share with you about what is going on in our region. Thank you very much.

Laura Hartmann:
Thank you very much, Marie. Your input is very well noted and appreciated. I think you have highlighted one point that is especially important: the brain drain. You are facing challenges that go well beyond the cybersecurity workforce gap, and I think that when working on cyber capacity building initiatives and partner initiatives with countries from the Global South, we should be aware that this is a multi-layered challenge. Now over to the fifth speaker of this panel, Ms. Regine Grienberger. She is the cyber ambassador at the German Federal Foreign Office and a career diplomat. Her professional path has focused on EU foreign relations, EU economic and financial issues and the common agricultural policy. Dear Regine, as the FFO has been increasingly involved in cyber capacity building for the past years, what would be your lines of thought on the topic?

Regine Grienberger:
Thank you, Laura. First of all, I want to say I appreciate very much what all the other speakers on the panel have said, because it really highlights the complexity of this problem of how to retain the cyber workforce in the public sector. I am speaking both as a civil servant working inside a public institution that experiences the shortage of experts, and as somebody who is engaged in cyber capacity building. Cyber capacity building tracks have been mentioned; Marie mentioned the ECOWAS action plan and other projects that we think, or hope, are helpful to establish a new or better base of cyber experts with our global partners. One thing I wanted to say before I move on to my notes concerns a discussion that I often have with my counterparts: does the public sector really need cyber experts, or can everything be outsourced to the private sector? We have a discussion going on both within Germany and within the European Union, and many of you will perhaps know this from home, about what digital sovereignty is, this ambition to have digital sovereignty as a government or as a state, and what it demands from governments. Certainly, having control over our own networks as governments is one important part of digital sovereignty. So I would answer the question of whether it is possible to outsource cybersecurity to the private sector this way: yes, we have seen very good examples where this works very well, for example in Ukraine, which relies heavily on the private sector to maintain government networks in times of war, but you also have to take care of covering your own needs with your own experts.
Our cybersecurity agency recommends setting aside 15% of your digital or digitisation budget for cybersecurity measures, including personnel and training, and I think this is a good benchmark if you are wondering how much this will cost you: 15% is a good rule of thumb for how much it will cost, at least. Martina described the demands and requests stemming from the new NIS 2 Directive, our updated European cybersecurity regulation. In Germany we assume that the number of cyber experts needed for critical infrastructure will be eight times what we have now, because eight times as many institutions will appear on the list of entities that have to comply with the standards of this new regulation, and you can imagine that this means about 10,000 cyber experts will be missing the moment this regulation enters into force. This can only be dealt with if we, as the public sector, take really seriously that we have to find and also build these experts. I have perhaps two immediate remedies to propose. One is pooling. We recommend this, for example, for the schools and universities that were mentioned, and I would say also for municipalities: perhaps they are too small to afford their own cybersecurity expert, but certainly not too small to join forces with other municipalities in a similar situation and pool cybersecurity services for several public institutions. The other one, and this is also a lesson learned from Ukraine, is that moving things to the cloud makes it much easier to take care of cybersecurity. I am not recommending a specific offering from a specific company, but we have seen that this helps, because data is then well protected by the most advanced and sophisticated tools. A third recommendation I would like to make is to make it a little easier for the few cyber experts you have by raising the digital and cyber literacy of your workforce in general.
So we are, for example, conducting a cybersecurity month in October, actually these days, to inform our colleagues about cyber threats and about how they are themselves high-value targets for cybercriminal organisations and state actors conducting espionage operations, so that they know a little better how to protect themselves with easy means. It is a cliche, but the weakest link in the cybersecurity chain is always the human. So my recommendation: make it easier by increasing the cyber literacy of your workforce. Okay, now, what kind of experts are actually needed? It has also been mentioned already, and I see it in contact with my colleagues from our IT department: you need technical experts, and you also need people who are able to speak the language of management and hierarchy, so that the higher levels of management of a ministry, for example, understand the needs and requests, and understand the need to invest and to bear these costs, because improving cybersecurity is a costly exercise. So I think that by hiring one person who can actually speak the language of management, you might be able to free up money to hire 10 more experts who then do the groundwork. In exchanges with my colleague from the IT department, I also learned that they are basically always hiring people who are not yet fully qualified for the job they are meant for. They are always hiring people who have more generic knowledge than needed, but who are then upskilled or reskilled on the job by their colleagues or through short-term cybersecurity reskilling or upskilling programmes that we buy from the market. Then there was this issue of competition with the private sector. Of course, in most cases the money and salary offerings will not be adequate to attract the workforce. But purpose is. Purpose is really an important thing.
And purpose is not only recognition by the higher levels, but also understanding which part of the machine you are actually working on. So a more holistic view of cybersecurity might also help to retain people in the sector, if they understand that in the public sector they are allowed to really contribute to a bigger picture that they also understand. Plus job security and flexible work arrangements, which I think matter especially for women; also, the particular protection that civil servants have in the public sector is something interesting. Of course this is not true for all workplaces, but for some, for example for our ministry, it is an argument for women to join the Foreign Office rather than a private sector company. Then there is also this idea that it is not worth training experts in the public sector because they will be stolen by private companies afterwards, so I do not invest. I would recommend: do not think in these terms. Think of it in terms of job rotation. You train the people as the public sector, you release them to private companies, and you gain them back at a later stage of their career. I think this is particularly true in the whole field of IT experts: they have so many opportunities and are usually also curious people, so you should let them look at other opportunities and perhaps gain them back. And my last point: we are in a very transformative period with regard to IT, digitisation and cybersecurity, so our job profiles and educational profiles, and somebody mentioned it, was it Marie or Laura, the curricula that we have in high schools, graduate schools, universities and business schools, are perhaps not up to date. So we should work with, in our case, the Ministry of Education and the Ministry of Labour to update the job profiles and educational profiles, so that the institutions are really able to produce the kind of knowledgeable people that we need.

Laura Hartmann:
Thank you very much for this comprehensive input, Regine. And I will directly hand over to our last speaker, joining virtually: Lara Pace. Lara has just under 15 years of international experience in building cybersecurity capacity across the world. She has worked for multilateral organisations, national governments and academia, and is now working for PGI in the private sector, where she is the head of the capacity building practice. Lara, it is my pleasure to hand it over to you.

Lara Pace:
Hi, Laura, good afternoon. It has been fascinating listening to my colleagues and learning about all the initiatives that are under way. I guess you are quite tight for time? Yes, that's correct. Can we do overtime by five minutes, please? Yes. OK, so I am going to leave you with a couple of points. Essentially, I have been working internationally for 15 years, and the beginning of my career really focused on developing governance structures and cybersecurity strategies and creating plans at the national level. Having done that for so many years, I am now focused on essentially doing the same, but helping governments build the human resources to implement those strategies. There were a couple of points that I picked up on. One is the ambassador's point about job rotation, which is fundamental. This might sound a little controversial, but if you have skills and expertise in the public sector that suddenly move into the private sector, that could fundamentally be seen as a positive. I am in no way encouraging brain drain here, but responding to cyber attacks and cyber incidents requires a whole-ecosystem approach. So suddenly you are sitting in the civil service and working with private sector individuals who have been trained by the public sector and also understand the challenges of the public sector; or you are sitting in the private sector thinking about the challenges from the national perspective and can offer interesting solutions. That is one point I wanted to make. And I really agreed with Yasmine's contribution on the retention of skills in the public sector: really creating an inclusive environment and having very clear career pathways, so people can understand how they can progress, because as human beings we all want to progress and better ourselves, both personally and as an organisation or a national institution.
And then there is incentivisation, which does not necessarily equate to more money. The last point I wanted to make, and sometimes I am guilty of this as a cybersecurity professional working internationally, is that we tend to think cyber is the ultimate priority at a national level. Actually, if we are going to make interventions in terms of skilling and training, we really need to ensure that there is also a similar initiative happening to create the jobs that will retain that talent, especially in emerging markets. We get a lot of requests and see a lot of RFPs for governments to run skilling programmes, but sometimes that happens in a silo: you have this very intense skilling-up programme, and then the expertise does not remain within that geography. So it is really important that it is a two-pronged approach, or maybe not two-pronged; I think of capacity building as a 1980s hair comb: each tooth has to move together, in a national coordination effort. I think I wanted to leave you with just those key points. We work from Latin America and the Caribbean all the way to the Pacific, helping governments scale up. Somebody mentioned academies, which is one of the things we do. But I thought I would just leave you with those comments because I know you are very tight for time. Thank you very much, Lara.

Laura Hartmann:
So, if we have some more minutes, can we allow for a question from the audience? Is there a question from the audience? Please come in. Yes.

Audience:
I am from the government of Sri Lanka, and we have received many benefits from the EU and some partner countries in capacity development, including the development of our cybersecurity strategy and policy. We developed the policy for five years, and now the challenge is to implement it. I have been a civil servant for the past 23 years. By the time the word ransomware came to the media, we just grabbed it, but hardly anybody in the public sector had heard of it, and we got private sector experts to explain it. So from time to time we collaborate with the private sector. We also have, in a separate track, the military and armed forces, who have their own kind of cyber defence, but we handle it in a very strategic way to bring their knowledge into normal civil service work and other work. Somebody, I think the remote speaker, mentioned capacity building and curriculum change; I think we have introduced this into most of the ICT- and digitalisation-related curricula in schools and academia. But as a country we face the challenge of losing talent from the market to overseas, and from the private sector itself. We are a small country of 20 million, and the IT industry currently has about 30,000 vacancies for graduates, so there is a challenge for the private sector itself. So without collaboration, capacity building within the government itself will not be a sustainable solution. That is the way I see it.

Laura Hartmann:
And yeah. Thank you very much. Would any of the speakers like to come in, or shall we leave it with that note of agreement? OK, so then I would like to thank you all for listening. Thank you very much to our speakers joining virtually at the very early morning in Europe and in Africa, and thank you very much to the speakers here on the panel. And you want to make a note? Yes, please. Hello. Hi, thank you. We are hearing lots of the things you have discussed, but we know that many of the time

Audience:
governments and the private sector get hacked. But I want to know: how can any government or any private sector organisation be fully protected from hacking? Can you tell me something? So the question was, if I understand correctly, how to fully secure government systems from hacking attacks? I can try, being very technical; I started my career at a very technical level, and now I am at the leadership level. There is nothing in the world that is impossible to hack, and there will not be, because the digital world is imperfect. There will always be weaknesses that can be used to hack different types of systems, maybe the Pentagon, I don't know, or the White House; we have seen such activities and we will see them in the future. Nothing can be 100 percent safe, and technology always has weaknesses. OK, thank you very much. I think we need to close the session now to give the floor to the other session

Laura Hartmann:
that is about to take place here. So let us all agree that a holistic approach is very, very important, beginning from education, and then an ecosystem approach, as our last speaker, Lara, voiced; I think that is fundamental. Cross-industry initiatives, really, so that cybersecurity goes from a nice-to-have to something where we really raise awareness that it is a public safety issue as well. And ultimately, yes, we need people who can talk to management, who understand that there is a need for investment, and who can translate this. So thank you very much, everyone, and happy IGF. Thank you.

Audience

Speech speed

177 words per minute

Speech length

498 words

Speech time

169 secs

Komitas Stepanyan

Speech speed

169 words per minute

Speech length

942 words

Speech time

334 secs

Lara Pace

Speech speed

172 words per minute

Speech length

668 words

Speech time

233 secs

Laura Hartmann

Speech speed

138 words per minute

Speech length

1716 words

Speech time

744 secs

Marie Ndé Sene Ahouantchede

Speech speed

113 words per minute

Speech length

1047 words

Speech time

558 secs

Martina Castiglioni

Speech speed

139 words per minute

Speech length

1716 words

Speech time

742 secs

Regine Grienberger

Speech speed

138 words per minute

Speech length

1433 words

Speech time

621 secs

Yasmine Idrissi Azzouzi

Speech speed

162 words per minute

Speech length

1110 words

Speech time

410 secs

Future-Ready Education: Enhancing Accessibility & Building | IGF 2023


Full session report

Audience

The analysis reveals several important points regarding the need for improvements in education systems and the impact of technology on learning. Here is a more detailed summary of the main findings:

1. Nepal requires more practical and skills-based education to enhance employability. Despite having years of formal education, Nepalese students struggle to find employment. However, short-term skills courses have been shown to lead to employment opportunities with higher wages in foreign countries. Therefore, there is a strong argument for incorporating practical and skills-based education to better prepare students for the job market and increase their employability.

2. It is crucial to incorporate digital literacy, digital skills, and re-skilling in the education system. Pedagogical changes are necessary to shift from traditional teaching techniques to modern, skills-based methods. Additionally, the proposition for ‘finishing school’ concepts in Nepal highlights the need for teaching relevant and practical skills that align with the demands of the digital era and enable students to succeed in the current job market. In summary, the integration of digital literacy and skills is urgently required in the education system.

3. The youth express concerns about AI readiness and the ethical use of AI tools in education. University students are interested in using AI tools such as ChatGPT to assist with homework. However, questions arise regarding ethical guidelines and best practices for the use of AI in education. It is necessary to address these concerns and ensure that the integration of AI tools in the learning process is responsible and beneficial.

4. The role of individuals and youth in promoting digital literacy is questioned. It is important to understand the actions that individuals can take to contribute to the development of digital literacy. Fostering a culture of continuous learning, digital skills development, and active engagement with technology among individuals and especially the youth is crucial for promoting digital literacy and bridging the digital divide.

5. Finding digital solutions for remote locations to implement AI and digital tools is of utmost importance. In the case of the Philippines, which comprises over 7,000 islands, many remote locations lack internet and utility services. It is essential to develop initiatives and tools that can bridge this digital divide and provide access to AI and digital technologies in under-served areas. This will help enhance education opportunities and equalize access to resources for students in remote locations.

6. Specific initiatives and tools are needed to help under-served, remote schools access AI and digital technologies. The Philippines has numerous remote and under-served schools that require dedicated efforts to provide them with access to educational technology resources. Such initiatives will ensure equal opportunities and bridge the digital gap between urban and rural areas.

7. While the internet and technology themselves are neutral, their usage can be potentially harmful. Educating individuals about responsible and safe technology use is crucial to mitigate potential negative impacts. Promoting digital literacy, online safety, and critical thinking skills will empower individuals to navigate the digital landscape responsibly and safely.

8. The multistakeholder model is critical for inclusive decision-making. Inclusive decision-making requires input from multiple stakeholders to ensure diverse perspectives are considered and social inclusivity is promoted. By involving various stakeholders, more comprehensive and effective solutions can be developed to address the challenges in education and technology.

9. Resilience in digital education requires inclusive design, acceptance of diversity, and empathy. To ensure that digital education is accessible and beneficial to all learners, inclusive design principles are essential. Considering a variety of user needs and creating learning environments that embrace diversity and foster empathy will enable all students to benefit from digital education resources.

10. Community involvement is crucial for a better-shared future. Learning from each other as a community can lead to progress and enrich the educational experience. Active involvement of communities in educational activities and decision-making processes nurtures a sense of ownership and shared responsibility, contributing to the overall improvement of education systems.

11. Promoting inclusive, equitable, and quality education through the internet is important. The Internet Society’s special interest group on education focuses on advocating for this cause. By leveraging the internet’s vast potential, opportunities can be created to provide quality education to all individuals, especially those who are marginalized or face barriers to accessing traditional education systems.

In conclusion, the analysis highlights the importance of practical and skills-based education, the incorporation of digital literacy, the ethical use of AI tools, and community involvement in enhancing the quality and accessibility of education. Furthermore, it emphasizes the significance of inclusive decision-making, resilience in digital education, and promoting digital literacy. Addressing these concerns and effectively leveraging technology will create more inclusive and equitable opportunities for learners worldwide.

Vallarie Wendy Yiega

In the analysis, the speakers highlight the importance of future education being skills-oriented to prepare students for emerging careers. They argue that the shift from regurgitation-based learning to critical thinking and creativity is essential. Furthermore, they discuss the impact of artificial intelligence (AI) and digital tools on education methods.

The speakers also emphasize the need for practical steps beyond policies and legislation to be taken by governments and organizations. They provide examples such as the Universal Service Fund in Kenya, which focuses on providing internet access, and stress the importance of accountability and monitoring in policy implementation.

The accessibility of low-cost devices and internet connectivity is deemed vital for education. The speakers mention telecom players in Kenya partnering with the government to provide low-cost devices and highlight the role of the internet in accessing education tools and platforms.

The analysis also underscores the importance of equipping educators with the necessary digital skills. The need for curriculum integration with digital subjects is identified, and the challenge of the digital skills gap among educators is acknowledged.

The establishment of digital libraries and cross-border collaboration in education is seen as necessary. However, further details or evidence supporting these arguments are not provided in the analysis.

Infrastructure is identified as essential for implementing digital education. It is noted that urban areas often have better access to digital tools, creating a divide with rural regions. The analysis also highlights how infrastructure issues can hinder efforts to understand digital tools. Collaborations with internet service providers and private companies are considered crucial for infrastructure development.

Data privacy and cybersecurity are raised as concerns. The speakers refer to a school that was fined for inappropriate use of students’ images for advertisement, and they note a lack of awareness among educators regarding data protection obligations. Firewalls and data protection measures are suggested as necessary in schools.

Continual professional development and reskilling of educators regarding new technological tools are emphasized. The analysis suggests the need for resources to be created for regular skilling and reskilling, and training on new technologies, such as generative AI, is recommended.

The potential positive and negative impacts of generative AI tools in education are discussed. The analysis highlights that AI can assist in tasks such as drafting emails while adding value without replacing humans. However, it also states that understanding how to use generative AI tools ethically and responsibly is essential.

The analysis includes a quote from a tech lawyer who is favorable toward the use of technology for positive impact, suggesting a pro-technology stance.

Self-education in the field of internet governance is seen as crucial. The analysis mentions that the Internet Society offers online courses to engage in the internet governance space.

Understanding the local context is considered necessary for successfully navigating internet governance and achieving change and impact.

Joining relevant youth organizations is recommended for enhancing skills in navigating the internet space. The analysis mentions an organization in Asia that has helped build communities, advocate for digital literacy, and provide opportunities.

Persistence and continuous engagement in the space are highlighted as factors that can lead to a better understanding of digital literacy and internet governance.

The analysis emphasizes the importance of carrying this generation of digitally skilled learners into the future. Each-one-teach-one is suggested as a mantra to ensure that everyone learns digital skills.

Lastly, the speakers advocate for contribution through policy-making, building innovative solutions, and raising voices for a future-ready digitally skilled education system.

Overall, the analysis discusses various aspects of future education, including the need for skills-oriented learning, digital access and infrastructure, educator training, data protection, AI tools, and internet governance. It highlights the potential positive impact of technology but also emphasizes the importance of responsible use and continual professional development. The analysis provides a comprehensive overview of the main points and arguments surrounding the future of education.

Ananda

The analysis explores several key aspects of the intersection between technology and education. One important point highlighted is the importance of reskilling educators and contextualising technology in the local context. This emphasises the need to equip educators with the necessary skills to effectively incorporate technology into their teaching methods and adapt it to suit the specific needs and challenges of their students and communities. The argument stresses the significance of this reskilling process, emphasising that it is vital for preparing educators to thrive in the era of Industry 4.0.

Another significant aspect highlighted is the role of multi-stakeholder engagement in the Internet Governance Forum (IGF) and the collaborative effort required to build a sustainable ecosystem. The analysis emphasises that effective policies and initiatives in the technology and education sectors require the active involvement and support of the government, civil society, and the private sector. It argues that the collective efforts of these stakeholders are essential for creating an enabling environment conducive to the successful integration of technology in education.

The potential of community networks and community learning centres in providing internet connectivity is also explored. The analysis points out that these networks, owned and managed by the respective communities, are particularly important in areas where there is a lack of connectivity. An example from Africa is highlighted to demonstrate how community networks can bridge the digital divide in underserved regions. This suggests that the establishment of such networks and learning centres can play a crucial role in expanding internet access and promoting knowledge-sharing in remote and marginalised communities.

Furthermore, the analysis emphasises the value of open courseware in rural technology and its role in improving access to quality education. It mentions initiatives like the Rachel Foundation and Khan Academy as examples of platforms that offer open educational resources. These repositories provide free and easily accessible educational materials, which can be particularly beneficial for individuals in rural areas who may face challenges in accessing traditional educational resources.

An important observation made in the analysis is the need to involve and empower youth in expanding internet access and making it more inclusive. The analysis asserts that young people are the most significant stakeholders in the internet and have a crucial role to play in improving its accessibility and inclusivity. By encouraging youth participation and giving them opportunities to contribute their perspectives and ideas, the analysis argues that the internet can become a more inclusive and empowering tool for all.

In addition to these key points, the analysis also mentions the existence of open source repositories such as Rachel and Colibri, which provide educational resources that can be broadcasted or transferred offline. It highlights the benefits of these repositories, including regular updates and the ability to share educational content without internet connectivity. The analysis concludes by emphasising the need to investigate and implement feasible technological solutions like Rachel and Colibri to meet the demand for education resources. It mentions the feasibility study conducted by Ananda and their team, who are seeking funds to upgrade the deployments of these resources.

Overall, the analysis provides a comprehensive overview of the different aspects of technology and education, highlighting the importance of reskilling educators, multi-stakeholder engagement, community networks, open courseware, youth involvement, and open source repositories. It offers valuable insights into the potential of technology to enhance education and emphasises the collaborative efforts required to ensure equitable and inclusive access to educational resources.

Binod Basnath

The analysis emphasises the need for robust digital education policies in Asia. It suggests that governments should have a wide vision and mission in order to develop these policies. It highlights the experience from the COVID-19 pandemic, which has had a significant impact on education, as evidence for the need for resilience in education systems. The analysis also stresses the importance of adequate infrastructure development. It points out that in Nepal, only a third of community schools have minimal digital resources. Additionally, post-COVID, only 36% of Nepal has broadband connectivity, falling significantly short of the 90% target.

Inclusion is identified as a vital aspect of ensuring no one is left behind in digital education. The analysis argues that inclusion should be embedded from the design to the implementation of learning practices. It points out that without inclusive educational design, vulnerable communities are at risk of being left behind.

Digital literacy and competence development are deemed indispensable in digital education. The analysis highlights the need for content in local languages to cater to local needs. It also highlights that without digital literacy, students, parents, and teachers will struggle to implement digital education programs.

The analysis concludes that a comprehensive approach is needed to build digital education resilience. It advocates for well-planned and inclusive policies, adequate infrastructural development, and competence development. It highlights the pivotal role of competent governance in foreseeing and preparing for the challenges of the digital education system. The analysis also points out a gap in infrastructural development and competence for ICT usage in the education sector in Nepal.

Another argument presented in the analysis is the disparity in employment value for formal education and technical skill training. It mentions a case where a student in Nepal found a high-paying job in Japan after three months of specialized training, but struggled to find a job in their home country after around 15-20 years of formal education. This highlights the need to produce a workforce that caters to the needs of the modern technology era, as currently, young people are not getting jobs due to a lack of required skills.

The analysis also discusses the importance of digital methods in the learning system. It suggests the need for a digital curriculum, digital pedagogy, and a digital means of assessment system to match the pace with Industry 4.0.

The analysis highlights youth participation in Internet Governance Forums as a means to advocate for necessary changes in the digital education landscape. It encourages youths to take their competency back to their communities to empower more youths with digital competency and literacy.

Noteworthy observations from the analysis include the implementation of ICT resource units in Nepal, which create an internal networking system for communities and enable sharing of information through voice calls, video calls, and messaging systems. The analysis also mentions the pilot project of a locally accessible cloud system in the Philippines, aimed at being used for education and health sectors for marginalized and backward communities in Nepal.

The analysis calls for more awareness among policymakers about the use of ICT in education. It suggests that if implemented correctly, ICT education can be more inclusive and accessible. It highlights the need for policymakers to be aware of an ICT education master plan, as this can be an effective tool to reach education goals. The analysis notes that Asian countries are moving towards a second ICT education master plan.

Ashirwa Chibatty

The analysis of digital education and equitable access to the internet reveals several important points. Firstly, it highlights that although the internet is meant to be accessible to everyone, access is not distributed equally. This raises concerns about the fairness and inclusivity of digital education.

One major challenge in the digital education ecosystem is the language barrier. Many digital content and resources are primarily available in English, which may not be the first language for a significant proportion of the global population. This language digital divide hinders individuals’ ability to fully engage and benefit from digital education.

Another challenge highlighted is the existence of skill gaps for digital teaching and learning, as well as industrial skill divides. These gaps limit individuals’ capacity to effectively utilise digital technologies for educational purposes. Bridging these gaps is essential to ensure that everyone has equal opportunities for quality education in the digital age.

Equitable access to digital education requires overcoming various challenges related to accessibility, literacy, assessment, and security. Citing the IEEE SA Industry Connections report on digital resilience, Ashirwa Chibatty outlines four pillars essential to addressing these challenges: accessibility, literacy, assessment, and security. Ensuring that digital education is accessible to individuals with disabilities, promoting digital literacy, implementing effective assessment methods, and ensuring cybersecurity are crucial components of equitable access.

The analysis also shows that gender disparities exist in accessing and utilising digital technologies. Women and non-binary individuals face more exclusion due to socio-cultural norms. As per GSMA’s State of Mobile Connectivity Report 2022, women are 20 percent less likely than men to use mobile internet. Addressing these gender inequalities and reducing the digital divide along gender lines is crucial to achieving equitable access to digital education.

The multistakeholder model is emphasised as being crucial when dealing with technology. The involvement of various stakeholders, including governments, educators, technology providers, and communities, is essential to ensure that the use of technology in education is equitable, inclusive, and aligned with the needs of all learners.

Inclusivity and diversity are also highlighted as important considerations in the design process of digital education. Recognising and valuing different perspectives and experiences can lead to the development of more inclusive and effective educational technologies and platforms. Ashirwa Chibatty advocates for learning from each other, being empathetic, and working as a community to drive progress in digital education.

Ultimately, the aim is to achieve a global internet that promotes inclusive, equitable, and quality education for all. Ashirwa encourages individuals to join Internet Society’s special interest group on education, highlighting the importance of collective efforts to advocate for an inclusive and equitable education via the internet.

In conclusion, the analysis underscores the need for equitable access to the internet to ensure inclusive and quality digital education. Language barriers, skill gaps, and gender inequalities are among the challenges that need to be addressed. The involvement of multiple stakeholders and the consideration of inclusivity and diversity in the design process are essential for achieving equitable access to digital education. Creating a global internet that supports inclusive and equitable education is a shared responsibility that requires collaboration and commitment from all sectors of society.

Umut Pajaro Velasquez

The COVID-19 pandemic has exacerbated the digital divide in Latin America’s education system, particularly in rural and marginalized communities. These communities face a lack of access to digital resources and tools for education, intensifying existing inequalities. Due to lockdowns and school closures, the reliance on digital education has significantly increased. However, many students in underserved areas lack the necessary devices and internet connectivity for effective online learning.

To address this issue, Latin American governments have taken steps to promote internet access in rural areas. Laws have been enacted in Mexico, Colombia, and Argentina to prioritize and support community-driven internet accessibility. These efforts aim to bridge the digital gap and provide equal educational opportunities for all students, regardless of their location.

Monitoring and accountability of resources is crucial to improving internet and device access. Misuse of resources intended for enhancing digital access is a challenge that needs to be addressed. Implementing programs to monitor and ensure proper utilization of these resources is essential for effective implementation and equitable outcomes.

Teacher training is vital in delivering quality education, especially in digital learning. However, many teachers were ill-prepared to use digital tools during the pandemic. Tailored training programs that address their specific needs and equip them with the skills to effectively use digital resources for teaching are essential.

Digital literacy is another key aspect of modern education. Developing after-school programs and online resources and incorporating digital literacy into the curriculum can help students acquire skills necessary for success in the digital era. Digital literacy programs should focus on competencies such as problem-solving, critical thinking, communication, and teamwork.

As reliance on digital education increases, cybersecurity infrastructure in schools and educational institutions becomes paramount. Educators and students need professional development opportunities to enhance their understanding of cybersecurity best practices. Implementing strong firewalls, intrusion detection systems, and other security measures is crucial for safeguarding sensitive data and ensuring online safety.

Ethical and legal implications of integrating artificial intelligence (AI) into education should also be considered. While youth are aware of AI’s potential, they may not fully understand its ethical and legal aspects. Educators should teach students about the ethical considerations and legal frameworks surrounding AI use to ensure responsible implementation and usage.

Building human capacities, such as critical thinking, in AI education is important. Emphasizing critical thinking and problem-solving skills can help students navigate the changing landscape of technology and utilize AI for positive outcomes.

Voice plays a crucial role in advocating for desired technologies and effective implementation. Through participation in policy-making processes, individuals can contribute their perspectives and shape the development of technology infrastructure in education.

In conclusion, education’s future entails constant digital transformation and adaptability. Addressing the digital divide and education inequality is crucial, particularly in the global south. Ensuring access to necessary resources, such as internet connectivity and devices, while developing the skills and capacities required for success in the digital era is essential. By doing so, an inclusive, equitable, and technologically proficient education system can be fostered, preparing students for the challenges and opportunities of the future.

Session transcript

Ashirwa Chibatty:
Thank you very much, and now I will turn it over to Mr. Ashirwa Chibatty. Good morning, everyone. I’m Ashirwa Chibatty, the chair of the Internet Society’s special interest group on Internet for Education, and today I will be moderating and organizing this workshop. This session is for all of us to move towards a global Internet that ensures inclusive and equitable quality education and promotes lifelong learning for all. So without further ado, let me introduce my speaker, Mr. Binod Basnath. Mr. Binod Basnath is co-founder and director of Educating Nepal and Empowering Asia. He is an MPhil graduate from Kathmandu University in development studies with a focus on education. He is a researcher in the field of digital and inclusive education. He was an APrIGF fellow in 2017 and has been an Australia Awards alumnus since 2019, upon completion of a course on inclusive education policies and practices from Queensland University of Technology, Australia. He is also an Australia Awards impact ambassador for Nepal for his efforts on digital education after the COVID-19 pandemic in Nepal. He is a member of the Internet Society’s accessibility standing group, and he is fluent in English, Nepali, and Hindi, and he also speaks a little bit of broken Japanese, I guess. Mr. Binod, please speak a little bit of Japanese. The next talented figure we have here is Ms. Vallarie Wendy Yiega. She’s an advocate of the High Court of Kenya, an Internet governance lawyer, and a tech policy analyst. She currently works as an associate in the intellectual property and technology, media and telecommunications team at Bowmans law firm. She was a youth ambassador at the United Nations Internet Governance Forum held in Poland, a youth volunteer at the IGF in Ethiopia, as well as a youth leader for the Declaration for the Future of the Internet under the European Union and the Czech Republic. She has also been a fellow with the Internet Society, ICANN, AFRINIC, and the Kenya School of Internet Governance. 
She was an ambassador for Digital Grassroots, a youth-led community in charge of building awareness around digital rights in Africa. Vallarie, too, is multilingual, and she fluently speaks English and Swahili, and believes in being a woman in the arena. She probably watches too many Korean movies, so she has a little bit of Asia in herself as well. So Binod is from Asia, and Vallarie is representing Africa at the moment. And joining us online, we have Umut Pajaro Velasquez. They have a BA in communications and an MA in cultural studies, and currently work as a researcher on issues related to digital rights, ethics, and governance of AI. They are focused on finding solutions to biases towards gender, race, and other forms of diversity that are often excluded or marginalized in the constitution of the data that feeds these technologies. They are the chair of the gender standing group of the Internet Society and the coordinator of Youth LACIGF and Youth IGF Colombia. They are fluent in English and Spanish; that’s why we often use them as a translator, and they provide their translating services for free. We also have Shraddha as our online moderator from the same SIG, Internet for Education. So, without further ado, I would like to move on to the next slide. We say that the Internet is for everyone. In the Internet Society, we believe that the Internet is for everyone, but here is some food for thought. There are some questions that we need to ask ourselves and within our community: does everybody have equitable access? We say the Internet is for everyone, but is access equitable? What is meaningful connectivity, and what is digital poverty? These are the things we need to ask ourselves when we talk about the Internet, and when we talk about education for all and Internet for all. 
So, when we talk about digital education, before I move into my slides: the flow of this session will be to briefly set the stage and then move to our speakers. There are a few questions that we need to address, and our speakers are from diverse regions, from Africa, from Asia, and from Latin America and the Caribbean, so we hope to have diverse voices here. There are certain challenges when it comes to the digital education ecosystem. What are those challenges? The first one is the language digital divide. A lot of the content available on the Internet is in the English language, which might not be the first language of everybody. Actually, it’s not the first language of most people, and there are some people in our region who are not that fluent in English, so that’s one of the challenges to quality education. The next challenge is the lack of skills for digital teaching and learning. Post-COVID, we all moved towards digital education. Everybody was focused on working from home and online classes, and during online classes, the teachers and administrators didn’t have adequate knowledge and skills for teaching and learning. And the third one is the industrial skill divide. We’re moving towards the fourth industrial revolution, as we say industry 4.0, education 2.0 for industry 4.0, so how do we cater to those needs? There is still a lot of divide around that, and what that is doing is furthering the digital divide, and that’s not what we want. Moving further, I would like to share the IEEE SA Industry Connections report on digital resilience. You can scan the QR code for the full report. When we look at the challenges, we have four pillars of challenges. One relates to accessibility, which connects to infrastructure, connectivity, and the language divide. The second one is literacy, which focuses on digital content and solutions, skills for teaching and learning, and the industrial skill divide. The third is assessment. 
How do we measure the quality of learning, and how do we engage a learner in an online space? And the fourth is the challenge of security: cybersecurity and building human digital resilience, which is most important, along with the future implications we might face when we shift the whole world towards a blended form of education. Again, there are people who do not have an Internet connection, so they cannot get an education. There are people who have an Internet connection but are not used to it, so they don't know how to use it. And third, those who do know how to use the Internet and are very active on it are very prone to cyber risks, and when we talk about education and bringing our young kids into this space, we have to be very careful about those risks. And yes, the socio-cultural norms that restrict the role of women and girls in society hinder their access to and use of digital technologies. As per GSMA's State of Mobile Connectivity Report 2022, worldwide, women are 20 percent less likely than men to use mobile Internet. And when we talk about gender, it's not binary; it's not zero and one. There is a whole spectrum, and if women are 20 percent less likely, non-binary people are even more affected. With that, I would like to move directly to the first question we would like to address. This will be a somewhat different session: we will be asking questions, and the speakers will share their experience and set the stage, but we also want more interaction from the audience here, so that together we can learn more and do something for the betterment of society. So with that, I move to my first question: how can governments organize and ensure equitable access to digital education infrastructure in the Asia Pacific, Africa, and Latin America and Caribbean regions?
First, to set the stage on this question, I would like to turn to Binod Basnet to share his experience from Asia's perspective.

Binod Basnet:
Thank you, Ashwath, for the question. Before I address it, I'd like to welcome all of you to this session, both those participating here at the Kyoto International Conference Center and those participating online. Thank you all for being here, and I hope for very proactive participation and engagement from everybody throughout this session. Coming back to the question: it does not have a rigid answer. The question in itself is very broad, and I cannot take much time elaborating on every aspect, so I'll try to be as precise as possible and sum it up in four points. Talking about having resilient digital education for each economy, especially for Asia, on my part it will be much more about Nepal, because that's where I'm from, and that is the context I'll be bringing in. But it won't be just Nepal; it will represent many least developed countries and developing nations as a whole. For the first point, I think it's very important for a nation and its government to have a clear vision and mission, and this also coincides with the research done by ISOC. Unless we have a good vision, we cannot bring in good policies for the nation. Nepal has only had a federal system of governance since 2015, so we're quite young with the federal system. We moved from a constitutional monarchy, and powers and responsibilities have been dispersed among three tiers of government: central, provincial, and local. Different policies and duties have been assigned to different tiers of government, and we still have to develop many more policies and programs that help each level of government understand what its roles are. That is one aspect we need to think about, especially after the COVID-19 pandemic, when we saw a huge impact of COVID-19 on education, especially for the LDCs.
It was a hard time for education, and it's safe to say that remote education is actually what prevented a complete meltdown of education during the lockdown periods of COVID-19. That said, for Nepal, rather than the internet, the use of radio and television was more effective for remote education, because we had not imagined this earlier and were not prepared for it. That was the same for other developing nations as well. So the policies that were devised before COVID-19 have to be reconsidered and re-evaluated. Similarly, when we had an earthquake in 2015, there was disruption in education, but then the government came up with different building codes and different modalities of learning. After COVID, though, I think we've forgotten a lot about disasters; we're going back to our normal lives, forgetting what we had to change for education. And it should be easy, because we have the Education 2030 agenda, SDG 4, and its targets; it's easier for governments to align themselves with those targets and meet them. So my first point is a proper vision and mission. The second point is infrastructural development. Of course, without proper infrastructure, we cannot imagine the new ways of learning: remote education, hybrid learning, or blended education. Talking about Nepal again, we have over 35,000 schools: 27,000 of them community schools, 6,000 institutional or private schools, and over 1,000 religious schools. The private schools by themselves are quite well off in comparison to the community schools. When we look at the data, barely one-third of those community schools have the minimum infrastructure for ICT. Now, having infrastructure for ICT is one thing, and adopting it for education and other uses is another.
Even having infrastructure may not be enough if we're not using it, because it is just a medium. And when we look at the internet penetration rate, we had a huge target of reaching 90% broadband connectivity by 2021, but post-COVID we've only reached around 36%. So without infrastructure and without its implementation, we cannot imagine the new modalities of learning for schools and children. My third point is one of the most important pillars, and that's inclusion. We need inclusion for everyone, because anyone can become a person with a disability if they are not provided with the right infrastructure or other forms of support. We have the target of leaving no one behind, so when we design any curriculum or learning practice, it should be inclusive from design to implementation and beyond. Inclusion of persons with disabilities, IDPs, women, marginalized communities, vulnerable communities, and all genders has to be considered from beginning to end. And the fourth aspect is, of course, connected with infrastructure again, but it's about content, competence, and skills. We need content that can be used for digital education, and we need it in different languages tailored to the local needs of the people. And talking about competence: without teachers, students, parents, and everyone having digital literacy, it's very difficult to implement these programs in schools and communities. I think I'll come back to this point in the other questions, but that sums up my answer in these four points. Thank you.

Ashirwa Chibatty:
Thank you so much, Binod. Of course, the policies that we make in Nepal, from my personal experience, are sometimes good, but we also need to be more realistic than idealistic when it comes to educating kids, because the Internet and digital technologies are just tools, and without human interaction, the basic education needs of young children cannot be fulfilled. That being said, I think a lot of this echoes with Africa as well. So, Valerie, to you: how are things in Africa, how do you think African governments and African Union organizations are doing, and what can they do for equitable access to education?

Vallarie Wendy Yiega:
Thank you so much for that question. A lot of what has been said by Binod is very similar to what happens in Africa as a continent, and also in my country, Kenya, where I come from. Because he's handled it very well, I'll just give you contextual examples of what happens and why we're talking about being future-ready in terms of education and skills. I think we're coming from an era where education was just given to students: you had to get the information, get the content, and regurgitate it for exams or tests. But now we're looking into a future that is very skills-oriented, a reimagined future with careers that were not there previously. So how can governments and organizations come in to ensure that we have a future-ready form of education? One thing I've seen is that it's a lot about policies and legislation, but even more about implementation and the practical steps to get us there, as opposed to just putting the law on paper without it being implemented. I'll give you an example. What we have in Kenya is what you call the Universal Service Fund. And I know it cuts across: I'm sure Uganda has something similar, so it cuts across many African countries and globally as well. A lot of the companies that work in technology or telecommunications contribute to this fund in partnership with government to ensure accessibility and access to the Internet. Over the years, it's been a fund that has been slow to be taken up because there's been no accountability and monitoring of how it is performing: is the money going into the fund, and is the fund being practically implemented across the regions that require Internet access?
But I think now what we are seeing, especially with our Ministry of ICT in particular, is that they've put systems in place to ensure accountability and monitoring of this fund, so that we get to that goal of Internet access, especially in rural areas. I'd like to connect to what my co-panelist said earlier about what COVID did. If we look at life generally, we may tend to forget our "why"; if we look at the impact that is made over time, then we better understand what that impact is. So I'll give you an example. During COVID, people in urban areas were able to continue their education because they were connected to the Internet, but those in rural areas, because of the lack of Internet as well as issues such as power connectivity, were not able to continue. What that meant, especially with the tagline of leaving no one behind, is that we potentially left behind a number of students who now have a gap to fill in order to get to where the students who were able to continue their education seamlessly are. Over time, this creates a form of global south within education itself: a literacy gap within countries that are already suffering from many developmental issues. So from a policy and legislation perspective, it is very important to monitor what that impact is. Because once you're able to monitor the impact and know where the gap was left, you're able to take steps towards ensuring that that gap is filled. I'll give you another example. I'm also a telecommunications lawyer, so I work in the telecom space.
And what we found is that a lot of the telecom players back in Kenya are now trying to partner with the government to provide low-cost devices that can access the internet. Because it's one thing to have access to the internet, but another to lack the device that actually helps you connect to it and get the skill or education you're looking for. One thing that's very clear is that the internet is very important for education. There's a lot happening on the internet in terms of education: you've seen the usual Google career certificates, and you've seen all these platforms that are offering skills and education. And if you followed the SDG conversation that was happening at UNGA, there was the example of the Khan Academy and what its impact has been like. However, we can't get there if we're not looking at, number one, connectivity and, number two, low-cost devices. And then, back to the point of becoming digitally ready in terms of future skills: are we also looking at curriculum integration? What digital subjects, skills, and learnings are being put into what we'd call the traditional, quote-unquote, curriculum? This is a full multi-stakeholder approach, because what you find as well is that even the educators do not have the capacity to offer some of these digital skills that are required to be future-ready for the future we are going into. I'll give you an example. We've been hearing all these stories about plagiarism and how generative AI tools work. And the question is this: we are now moving into an era where it's going to be more about critical analysis, because we are bringing in artificial intelligence and all these tools, which have an extremely positive impact with the right navigation.
Are we also equipping ourselves, our governments, our legislators, and our policymakers with the right information to ensure that we are building a digital-ready future for our education systems, where we move from heavy regurgitation of content to more critical analysis and more thinking, allowing students to think and to create in a world that is moving so much towards thinking and critical analysis? To my last point: we are now moving into a space where we are going to require a lot more digital libraries. Previously, it was about having books, but how can we access books with the kind of generation we are bringing in? We want a situation where there is cross-border collaboration even when it comes to education, so that skills can be exchanged across borders and there can be a lot of collaboration between those who've developed a bit more and those who are still looking to develop. So I think that's also one of the points that governments as well as organizations should be looking into, to ensure that we have a digital-ready future for education. Thank you.

Ashirwa Chibatty:
Thank you. Though we are from diverse regions, as humans our basic needs are the same, and for education, the right to education is one of those basic needs now, and so is access to the internet. So the challenges are the same, but obviously we have to apply a localized context to them. Moving to our next speaker, Umut, who is joining us online: can we get him on the screen, please?

Umut Pajaro Velasquez:
Hello.

Ashirwa Chibatty:
To you, Umut.

Umut Pajaro Velasquez:
Hello, how are you? Well, thank you for the question. I'm going to share some key points about the situation in Latin America when it comes to digital education. As some of the previous speakers already said, COVID-19 changed the situation here in Latin America when it comes to digital education, because we realized that we had actually created a bigger gap for rural areas and also for marginalized communities living inside the cities. To manage that in a better way, governments came up with solutions that I found to be largely common across several governments in Latin America. I'm based in Colombia, and some of the solutions starting to be implemented in Colombia are the same solutions starting to be implemented in countries like Brazil, Argentina, the Dominican Republic, Uruguay, and others. One of the main solutions is access: governments are trying to promote not only that the private sector reaches the rural areas, but also the creation of community networks, investing in organizations and people in rural areas so they can create their own networks and connect to the internet. This is a way to expand access not only for social uses of the internet, but also for schools and educational institutions, especially in rural and underserved areas of the country.
They have created laws, in most of these countries, that try to promote partnerships between the public and private sectors, with subsidized rates to encourage private companies to serve certain rural areas that are hard to reach, and community-driven internet access as a mechanism. For example, Mexico, Colombia, and Argentina have recently developed laws on community-driven internet access, with a special rate for this kind of connection when the use is mainly for education. The second solution is affordable devices and connectivity: governments and organizations are also working on providing affordable devices and connectivity to students and teachers through several governmental programs. And, as was already said, we are also trying to implement ways to monitor how the resources for those devices are being used, because we have had the problem that sometimes those resources are not actually used to get devices or internet to the schools or to the students. So it's not only about creating programs for school-based distribution, but also about incentives for monitoring those programs, so that we can provide internet access and devices to people, especially in rural areas and other parts of the country. Another aspect we are working on a lot right now is training teachers and administrators on digital tools and resources, because we understood after the COVID-19 pandemic that most teachers weren't ready to face digital spaces or to teach using different technological resources.
So we understand that we need to train them on how to use the tools and resources effectively in the classroom, and that this training should be tailored to the specific needs of the schools and communities, because teaching in a rural area, in indigenous communities, or in a marginalized part of a city is not the same as teaching in a private school or in other spaces. And finally, developing digital literacy programs for students. Right now, some countries are working on changing the curriculum to build more digital literacy skills, because we understand that these skills are needed given current technological developments, so students can actually be aware of how to use the tools, not only for good but also in their daily lives. Some countries are developing school-based programs, others are working on after-school programs, and others are developing online resources, so that we can build capacity for students of every age. Those are pretty much the four points we are working on the most here in Latin America. We have other problems too, but I think we share a lot in common with Asia Pacific and Africa, so I don't want to repeat what my colleagues already said.

Ashirwa Chibatty:
Thank you, Umut. I think that's a very good start to our session, and we can already find so many commonalities within our diversity, which is something we can always celebrate. Okay, so we all agree that equitable access to digital education is needed, but it does come with other implications, future implications as well. So my next question to the panel, as well as the audience here, is this: what policy measures can be implemented to enhance educators' capacity and address the cybersecurity risks in the digital education space across the regions? And how can digital education empower youth with the necessary skills for the evolving labor market? Because we know that the future workforce is going to be different. The third industrial revolution is already over; we're into the fourth industrial revolution. And the purpose of education would be to create a labor force that matches the requirements and needs of the future market. So I'll start in reverse order this time, with Umut first. What necessary skills are needed for the evolving labor market, and how can we enhance educators' capacity to deliver them, while also keeping them secure and safe in cyberspace? To you, Umut, again.

Umut Pajaro Velasquez:
Okay, well, I think one of the things we can actually do is provide professional development opportunities for teachers and also develop digital education standards for teachers and students. That means not only improving our curricula with content on cybersecurity and how to protect ourselves online, but also providing teachers with, for example, digital pedagogy on cybersecurity best practices. One of the things governments should do when it comes to policy is invest in cybersecurity infrastructure, because schools and other educational institutions can be targets for cyber attacks. Here in Colombia, a couple of weeks ago, we suffered a massive attack on the public sector, and a lot of public universities were affected by it. This showed the importance of implementing strong firewalls, intrusion detection systems, and other security measures to protect the information inside schools and educational institutions. And when it comes to the necessary skills for the evolving labor market, I think digital education programs should focus on developing transferable skills that can be applied to various new jobs. These include problem-solving, critical thinking, communication, and teamwork, which are probably the things we are going to use most in a technological landscape where AI is present, especially critical thinking, because we're going to rely a lot on that in our future working lives. We should also provide opportunities for experiential learning. This means educating students not only in a regular classroom, but also providing opportunities to gain real experience through internships, apprenticeships, and other programs, which would help them gain the skills and knowledge they need for the workforce.
And finally, I would say: engage more experienced people, or employers, who can actually teach the skills needed in the current labor market, as well as the future trends in the digital changes we are living through.

Ashirwa Chibatty:
Thank you. So Valerie, from Africa's perspective, what policy measures have been implemented, what is lacking, and what can be done to enhance educators' capacity, address cyber risks, and make sure that the knowledge we are providing to our future generations caters to the needs of the future economy and the future market?

Vallarie Wendy Yiega:
Thank you so much. I’ll definitely give you a Kenyan perspective as well, but also just recognizing that within this conversation that Kenya is quite ahead when it comes to legislation, when it comes to the technology space, which may not be the same case as most African countries. But the first thing is to map out and find out what the gaps are in the education system because as we start to talk about educator capacity, we need to understand what are the gaps in the educator capacity to begin with. Number one, we have two forms of workforce. So we have the educators who are much more senior in the profession, and then we have the educators who are coming in who are much more junior and who may find, quote unquote, more ease in understanding the technological tools. So it’s a question of how are we going to put in place an intergenerational co-creation capacity framework where you’re not only skilling the newcomers, but you’re also re-skilling those who are senior in the profession because you do not want a situation where you’re saying you want to create a future for the education, a future that we definitely are going to see more of technological tools being in play when it comes to education, and the more senior teachers or educators are not able to interact with these tools. So we are going to see a lot more of an intergenerational co-creation and being able to be okay with a mindset shift of where the future is taking us as opposed to holding on to the different educator roles that we have seen before. I also liked Umut’s point on infrastructure. Again, the situation in Kenya and largely in Africa as well is that the more urban areas have access to this digital infrastructure. 
Most likely, I’ll give an example of Kenya, you’ll find schools within Nairobi are already using the technology, already have computers, already set up in terms of internet connectivity as well, whereas in the rural areas, some of the schools, maybe there’s only a computer in a rural school, one computer that maybe is used by the teachers to illustrate. So again, it questions the issue of digital infrastructure because it’s very hard to understand a technological tool when you don’t have access to it regularly, using it and testing out what can be done in terms of digital scaling as well. So infrastructure is a big one, but we see more and more that governments are putting in resources and funds into creating an opportunity for more infrastructure to come into the country. But I think, again, it goes back to our role as the multi-stakeholder model in terms of what are the internet service providers doing? What are the private sector doing? What are the companies doing who are offering the services? Are the governments actively reaching out to these companies to partner with them to ensure that there’s digital infrastructure when it comes to what can be done differently to create a future-ready workforce? Again, the issue of cybersecurity, I like that as well because what we’ve seen, and recently our Office of Data Protection Commissioner rolled out a penalty notice with a fine, with one of the greatest fines sent out to a school. And this is because what had happened is that the school had used the picture of the children as a form of advertisement to advertise a new intake for a school. The question that lied therein, and I was asking myself when I saw this penalty notice is that, is the school aware of their obligations when it comes to data protection, when it comes to children, especially because children’s data is one of the most sensitive data that is classified out there. 
So the question is this: the same way we have privacy by design and by default, are we also putting in measures to ensure that we have cybersecurity by design and by default? And as I mentioned earlier on the issue of firewalls to prevent intrusion: are our educators aware that these are some of the technological tools being used to ensure cybersecurity? What you would not like to see is a data breach in a school, because that potentially means a lot of children's data can be exposed, and we've seen the discussions around trafficking and what that data can be used for. So that's something we also need to look into in terms of cybersecurity: not only awareness, but also understanding how some of these tools are used and how they can be presented to educators so that they can use them as well. The other thing is: what resources are being rolled out for educator capacity, and are we streamlining them so that they can be integrated into the curriculum for educators as well? Because we're not going to solve this with one or two workshops for educators on what the internet means or on the technological tools we are now facing. One thing that is very clear is that artificial intelligence and technology are moving very fast. Previously we didn't have generative AI; now we do. And now that we have generative AI, you've seen ChatGPT roll out, you've seen Bard roll out, you've seen Bing roll out. So clearly more innovation is coming through, and the question is how we are going to ensure, even as we move to skilling and re-skilling educators, that all of this is kept in mind so that we have a future-ready workforce as we move forward.

Ashirwa Chibatty:
Thank you, Valerie. Very interesting and valid points, especially about the senior and junior educators coming into the field. Young people are very adapted to technology, but the senior professors might not be, which might create some problems; that's why intergenerational solidarity is so important in this phase of human development. Also about the future, that's a very valid point: it's not that we are going to the future, it's that we take this society to the future, so the future we're going to see is what we are doing now. And cost has always been a barrier to cybersecurity by design, but if we invest in cybersecurity now, then in the long term it is very cost-effective; that's something governments in developing countries need to understand. Also, about the rural-urban divide you mentioned, I very much echo you, because when I started my work, the world had moved on to newer computers, but when I took an older computer to a village, that in itself drew people together to build up a school. On that note, I'm going to move to Binod before I go to the audience. So Binod, what are your views

Binod Basnet:
on this topic? Thank you, Ashwin. I think most of this has already been covered by the previous speakers, Umut and Valerie, but even so, I'll try to answer in two folds. First, the policies needed for cybersecurity. I won't go deep into policies as such, because for the least developed countries, I think digital literacy is the first priority. We have digital literacy of around 31%, and we aim to reach 70% in a couple of years as per the Digital Nepal Framework 2019, but that's a hard task. In terms of the general literacy rate we're moving forward very well, but the digital literacy part seems quite stagnant. To solve those issues, I think we need to learn from existing frameworks. As Umut said, we need to make our own frameworks tailored to our own needs. We can take ideas from the digital intelligence framework; we can use the ISTE (International Society for Technology in Education) framework, and there are various other teacher competency frameworks. My idea would be for least developed countries to design a diploma course for producing trainers on digital literacy. Those trainings could be taken by teachers, educators, and administrators, and once we create those trainers, they could be hired by CSOs, other organizations, or government bodies to take the training to different marginalized communities. And it's quite urgent now, because over 70% of households in Nepal already have a smartphone and are already using social media platforms very intensively, and with no knowledge of cybersecurity and cyber hygiene, this could be disastrous. My second point is especially about teachers, because if teachers are well equipped and empowered, they can teach the students, and students can go back home and empower their parents as well.
We need to devise standards and guidelines for digital pedagogy, online learning environments, learning resources, virtual assessment, digital citizenship, and educational management and information systems. We need our own local standards, but we can take inspiration from those that already exist in Western or developed countries. So that's the first part of my answer. Before I go to my second part, I'd like to share a small story, get some feedback from the audience, and then come back to my answer. In Nepal, I'm also the director of Empowering Asia, which gives students skills that prepare them for the future workforce. I was talking to one of these students, and he asked me a very serious question that kept me pondering for a while. He said: here in Nepal, I've been studying for around 15 to 20 years, 12 years of school and four years of university. Then I go out to the job market, but I struggle to get a job. I just cannot get one, and the one I do get pays so little. But I take a course of just three months, designed by the Japanese government, I get a certificate, and Japanese companies are willing to hire me at around $2,000 per month on a Specified Skilled Worker visa. Those three months are so little, yet they pay me so much. Back home in my own country I studied for 18, 19 years, and I barely get a $300-per-month job. Why is that? He asked me this question, and I had to give it some thought before I answered him. But before we come back to that answer, I'd like to ask the audience two questions. The first question is for everybody: what do you imagine the future workforce will be like?
The second question is especially for people from developing or least developed nations: why do you think we have an employment problem in our countries? If anyone from the audience would like to answer one of these questions, the floor is yours. Yes, sir, please.

Audience:
Hello, everyone. This is Narayan Timilshana from Nepal. It's wonderful to see Nepalese speakers here. Regarding your question, what I want to highlight is that we have a lot of problems in our teaching pedagogy. Many students graduate from the universities but are unable to find job placements quickly in start-ups or other industries. So what we are missing is not only the curriculum, but the way teachers bring their skills and new technologies into their teaching methodology. That's why we have been debating the concept of finishing schools in Nepal. But the main problem I want to share on this platform, hoping to hear experiences from the African continent and elsewhere, is this: when you talk about digital literacy, digital skills, and re-skilling, it's not a tangible thing, it's intangible, and it takes a lot of time. So governments and others are not willing to invest in these things right away; they prefer infrastructure investment and the like. So it's very challenging. What are your thoughts, and how can we cope with this? That's my question. Would you like to answer? Let's take one more question and then we'll come back. Hello to all the speakers. I'm Luke, and as a youth I'd just like to add my opinion on digital education for the future and pose a question as well, which is about AI readiness. Currently, as a university student, a lot of my friends are wondering: should I use applications like ChatGPT to help me with my homework, and if I do use them, what are the best practices? Because it's easy to say you should not do this, you should do that. But I feel that, as you said, there should be a diploma in teaching digital literacy.
So my question is: what are the measurable actions that we as youth can take right now to make sure we're using it ethically and not doing anything unethical like copying? Thank you. Could you repeat the question? So basically, what are the best practices that youth can adopt right now to implement AI into

Binod Basnath:
their education? Okay, so for the first part, let me finish mine, and then I'm sure the others can answer the questions as well. Thank you so much for the audience participation and the questions you posed. Within the ideas I'm going to share from here on, I'd like to answer the questions that have been raised, and if that's not sufficient, please help me out afterwards. How many of us have actually heard of Industry 4.0, the fourth industrial revolution? Could you raise your hand? I think Africa is actually very far ahead in this matter; there are programmes being launched to prepare people for the future workforce in terms of Industry 4.0. Let me come back to that. You know about the industrial revolutions, right? The first industrial revolution was driven by mechanical power, like steam engines. The second was more about electricity and electrical goods, such as televisions. The third was driven largely by technology, computers, and the internet. And now we're moving towards the fourth industrial revolution, which, as one member of our audience has already said, will be driven by artificial intelligence, big data, machine learning, blockchain technologies, robotics, and the like. So, is our economy ready for those kinds of activities, and are we preparing a workforce that matches them? It's a very difficult question that we need to answer. It's a harsh reality, but I think we need to talk about it: our least developed countries are still producing a workforce that caters to the needs of the second industrial revolution, so we are always behind, playing a catch-up game with the developed countries. We are like the rear wheel of the bicycle, which never catches up with the front wheel.
So that is one issue, but I think if we talk about this today and work on it when we go back home, we can prepare a workforce that is ready for the future economy. And that starts, of course, with school. We need a digital curriculum, digital pedagogy, and digital means of assessment, and the vocational and technical schools especially have to prepare the workforce we need for the future. Without that, I think we will be stranded again. That is the situation we face now in our nations: we don't get jobs because our skills don't match what the economy needs at the moment. Along with that, I'd just like to leave you with a small thought. Please hear me out, and give it some thought as well: being unemployed and being unemployable are two different things, and I think the latter is more severe. Thank you.

Ashirwa Chibatty:
You rightly said, Binod, also about AI and how Africa is moving ahead. I think the African Union is also looking at an AI centre of excellence, something that all African nations can benefit from. So Valerie, what are your thoughts on the questions from the audience?

Vallarie Wendy Yiega:
Thank you so much. And yes, just to agree with Binod, Africa is actually moving into a space where we're looking at how we can use AI to move the continent forward. Just for background, Africa is the continent with the largest number of young people; it is quite a youthful population, so it is definitely looking into this. We've also just seen what's happening with the AI labs in Ghana; there's a lot of work going on around that, and a lot of work being spearheaded by the African Union on artificial intelligence in Africa. And I like the question from the audience about re-skilling and upskilling and how we can move it from being intangible to tangible. Unfortunately, we cannot skimp on time and we cannot skimp on resources; we need to put in both to get there. I'll give an example. Recently in Kenya, our government launched what we're calling a housing levy. This levy should essentially enable the government to ensure affordable housing for low-income earners, and though we complain about the tax, we pay it. In the same way, you can't skimp on the time it takes to upskill and re-skill the workforce, and you can't skimp on the patience required for the mindset shift, so that our educators become skilled enough to build this educator capacity, and so that more resources move into the sector to streamline the whole education system. Because at the end of the day, technology is moving very fast. If you're not skilled or re-skilled in the technology space, in order to provide value to the learners, the students, or the workforce, then over time you find yourself becoming redundant or irrelevant to the workforce or to the education service that you provide.
Yeah, I like the question that Luke asked about whether artificial intelligence and generative AI tools can be used at an education level. I'm a tech lawyer, so I'm very pro-technology, very pro-innovation, very pro the use of tools that can have a positive impact. But I also understand and recognise that these tools can work to the detriment of learners: we've had a lot of stories about cheating and plagiarism. I think we now need a mindset shift and an educational shift in how tests are done as well. We can no longer test pure content, because you could easily put a question into a chatbot, get an answer, and go with that answer. As educators, we now need to change how this is done, because at the end of the day AI is here with us, it's here to stay, and it will create even more innovation as the days go by. So how can we change testing so that we can pick out critical analysis from a plain regurgitation of facts? Because these tools are also for good: even if you ask AI to help you draft an email to your boss, you can see that some value is created from that email, but that does not replace the human element. It does not replace the need for a person to apply themselves and give context to whatever the generative AI tool produces.
So definitely, a lot will be done in terms of critical analysis, especially for best practice when working with generative AI tools, but the truth is, they’re here to stay, and the best way forward would be to understand how to use them for good, how to use them in line with the ethical and responsible guidelines that are being formed around artificial intelligence, and ensuring there’s monitoring, there’s transparency and accountability when it comes to developers of artificial intelligence as well. Thank you.

Ashirwa Chibatty:
Thank you, Valerie. Also, about the question for Luke regarding ethics on AI, I think Umut is one of the experts on that, so Umut, your wise words, please.

Umut Pajaro Velasquez:
Okay. Well, the use of AI in education is something that is really close to me, because AI is one of the things I work on the most. When it comes to education, I think the youth mostly already know how to use these tools for good, because they already know the limits of this technology when it comes, for example, to completing different tasks in an education system. The problem is that they are not fully aware of the ethical or legal implications of using these technologies in certain contexts. So we also need to teach those things, so they can use the tools for good and improve their productivity and the work they have been doing. Another thing we should focus on is strengthening the more human capacities, such as critical thinking, because, as I said before, that will be essential in this context of artificial intelligence, especially in education. The future professionals, the future labour force, the youth using these technological tools need to understand how they work, how to use them for good, and how to use them to solve problems, not only to get a fast answer. The point is to use them to build from.

Ashirwa Chibatty:
Thank you so much. So, moving on to the last part of our session: if there is anything the audience would like to add or ask, please do. I think you're ready for some questions here.

Audience:
So, hello. My name is Ivy, and I’m representing the Chinese YMCA of Hong Kong, and this discussion has been really informative and insightful, and it’s mostly talked about how governments can put out policies. So, I would like to ask, is there anything as individuals or as youth like myself, what we can do to also help with digital literacy?

Ashirwa Chibatty:
Would you like to take the question?

Umut Pajaro Velasquez:
As someone who has been involved in the youth movement inside internet governance, one of the conclusions we have reached over the years is that we shouldn't be afraid to use our voice to say what kind of technologies we want to have. We know that some governments are not so open and not seeking to hear young voices, but when we have the space, we should address all the issues that we consider should be addressed in those spaces. Right now, for example, we have the Internet Governance Forum, which has opened the doors for many people from around the world to say exactly what they want about the internet they want. That is an opportunity we shouldn't take for granted; we should appreciate it and talk about exactly what we want. Another aspect I'd like to point to, on youth participation and advocacy in policy, is to try to find those spaces in your own countries, because even in the more closed ones there are spaces where you can do advocacy and participate in the construction of the policies being developed in your country. And I would say, don't be afraid to share your knowledge or what you've seen, because it is probably important for building those policies.

Binod Basnath:
Yes, I think Umut put that very well. As a youth, you are a very valuable part of your country, and coming here to the IGF is in itself already a good start. You could join more events: more IGF forums, regional forums, and your country's own forums, and you could advocate to your government on the areas you feel need to change. You can take the competency you have back to your home and your community, and you can also empower and invite more youths to join in. That way, I think we can create synergy and have more empowered youths with digital competency and literacy. Thank you.

Ashirwa Chibatty:
Valerie, you are the youth ambassador. You’ve done a lot since you were younger, so please, enlighten us.

Vallarie Wendy Yiega:
Thank you, yes. That’s a question that’s very close to my heart. Again, I coordinate the Kenya Youth IGF, so I understand your question and where you’re coming from. And just like Binod said, I think it’s very important that you’re in this forum right now. That really shows that you’re on the steps to understanding what happens in the internet governance ecosystem. But what I’d say is that there’s a lot of self-education when it comes to this space. So there’s a lot of you trying to actively learn how to engage in this space. And I know the Internet Society has excellent courses that it offers online on how you can engage. And I also know that you’re able to join the relevant youth organizations that you have. I know here in Asia, you have an organization that covers around the Asia region quite well. It’s something that Jenna’s team does. I forget the name, but I’ll get you the name soon after this meeting. But in that organization, you’ll find a lot of young people across the Asia region. And I’ve found that they are very powerful in that there’s a lot of digital literacy that happens within that organization. There’s a lot of advocacy as well. There’s a lot of community building that happens within that organization as well because I’ve seen, I’ve been following the organization for quite a number of years. And I know they’re very forward thinking when it comes to equipping young people with the skills to navigate the internet space and to navigate it effectively. Also just one thing that has also helped me being in this space, having been in this space for about five years now is understanding the specific challenges and opportunities that you have from your own home country. Because yes, we do have a lot of best practices in place, but what also helps is once you understand your context and you’re very clear on what you’re trying to achieve and what change stroke impact that you want to achieve as well. 
So, since there are already established organizations here, my two cents would be to join them and speak to them. I saw the entire Asia Pacific team within this Internet Governance Forum last night; I think they can help you navigate this. I think Ananda can also help you navigate this space. He's just laughing there, but he's also part of the youth IGF team, so he could be a good starting point. With the right persistence, and if you keep at it, you'll see your understanding of digital literacy and internet governance keep growing over the years. Yeah, thank you.

Ashirwa Chibatty:
Yeah, I think the team you're talking about, Jenna's team, is the NetMission Academy. It's based in Hong Kong, supported by .asia, so you can connect with Jenna or Jennifer for that. If you don't know them, I can help connect you, so do not hesitate. I would love to speak and hear more from you, but we're nearly at the end of our time; we have just 15 minutes left. So before we wrap up, let me also take this opportunity to thank Ananda, who is here helping us take notes and keep everything in place so that this discussion and conversation can continue. He is also very active in this space through the Nepal Youth IGF. Ananda, over to you to share what you've noted.

Ananda:
Thank you so much, and thank you for keeping me here. Hello, everyone. It was a nice, insightful discussion today on a very important issue. We went through different case studies from Asia to Africa to Latin America, and we talked about Industry 4.0 and the massive developments in AI and machine learning, which are hot topics of the whole IGF itself. But we also have to understand that there is a big digital gap. There are still people who are unconnected, who don't have access to the internet because of different barriers: it might be affordability, it might be accessibility. So in the age of Industry 4.0, how do we actually blend technology with education and provide these kinds of skills to students, so that they are industry-ready when they graduate? That is the most challenging issue of today. We also talked about the re-skilling of educators and the contextualization of technology in the local context, which is very important. And as I think Valerie mentioned about the universal service fund, there are service funds allocated for developing technology. In the case of Nepal, there is a Rural Telecommunication Development Fund which can, and should, be used to make the internet more affordable, inclusive, and secure. And as we talk about multi-stakeholder engagement at the IGF, there is a role for everybody in this process: governments make policies, civil society needs to support them with monitoring and accountability, and the private sector will support as well. Then we can build an ecosystem that produces students who are ready for industry and can enter the global job landscape. I think Umut also mentioned community networks.

If you are not aware of community networks, they are networks owned and managed by communities themselves, in places where there is no connectivity and last-mile connectivity cannot reach. By accessing various funds, people can build their own community network using affordable technologies. Africa has many examples of this, and organizations like ISOC and APC are working hard to build community networks across the world. In Nepal there were a few, but I think they are not very active these days. We have to work on these things for the people who don't have affordable devices or any device that can connect to the internet; for those people, we can create community learning centres where they can go and learn. We have also talked about open courseware, where content can be accessed online and offline. Khan Academy is one such example, and there are many more. There is a repository built by the Rachel project, an open courseware system where you can find huge volumes of information that can be used in rural areas with no access to the internet. It is a local-server-based content management system, and some community networks may already have used it. So, wrapping up my point: while policies are lagging behind in developing nations, we have a huge responsibility, and Valerie was asking me to share how youth can contribute to that. What I always say about youth is that we are the biggest stakeholders of the internet today, and with this role comes a bigger responsibility: how do we make the internet more inclusive, how do we help connect the people who are not connected today, and how do we create an ecosystem that allows everyone to access content on the internet?

Initiatives like civil society, the Internet Society, the IGF itself, and national, regional, and local IGFs should work hard so that we can close these gaps. I'll wrap up here. Thank you so much for being here. I'll give it back to Asif. Thank you.

Ashirwa Chibatty:
Thank you so much, Ananda. So this is the last call for any interaction from the floor. If there are any solutions or anything else to add, I'll take it now.

Audience:
Hi, good morning. I’m Dean Dell from the Philippines. I would just like to ask some initiatives that you’re presently, if any of you are doing right now. For example, in the Philippines, we have more than 7,000 islands. And most of these islands are in remote locations and still doesn’t have any internet connection and even utilities. Okay, I would just like to ask, apart from just what you have said a while ago, any existing initiative or tools that you can recommend that would answer those underserved schools who that soon they will still be able to maximize the use or the advantage of AI and other digital technologies and contents. So that’s it.

Ashirwa Chibatty:
Thank you. Would anybody like to take it? And please note that we have very limited time now, so please keep it short.

Binod Basnath:
I’ll try to be as quick as possible. Thank you for the question. During 2017, we did a pilot project with movable and deployable ICT resource unit. It was a network for the community for places where there were no internet connectivity. This device would create an internal networking system for the whole community. And it would be a community owned network. And with those network, people could share information through voice calls, through video calls, through message system, sharing of photos. But we used it for education. And it was very effective. And we reported that to entity, which was further reported to ITUD. One of the study groups has published the effectiveness of how community led network and devices can be effective in places where there are no internet facilities. And taking this one step forward, we are trying to pilot with locally accessible cloud system, LAC system, that I think has been implemented in Philippines as well. I think you’ve been using it mostly for disaster, but we want it to be used for education and health sector for marginalized and backward communities of Nepal. Thank you.

Ashirwa Chibatty:
Thank you, Binod.

Ananda:
So, talking about that: I think Binodji has covered a lot. When it comes to open source repositories, there are many. RACHEL (R-A-C-H-E-L is its spelling) is the one I have found most useful. In RACHEL you get Khan Academy integrated, along with a lot of open source learning resources, which are updated periodically and can be downloaded onto any computer. You can set up a local server and broadcast the content to a network that is accessible without the internet; it is not internet-based, and when you do have a connection, you can update the content. There is another initiative called Kolibri, which is also integrated into RACHEL, but Kolibri is more on the user end: you download content, and then you can transfer it to another person's phone without internet access. That is the power of Kolibri; it uses peer-to-peer networking technology, so if I have that repository here, I can transfer the content to another person without the internet. And inside RACHEL you can find Khan Academy and almost every kind of content you can imagine, updated regularly. So if you want more resources, maybe we can discuss or set up a call. I think Valerie knows more about this as well, because many community networks share the same principle. I deployed one back during COVID, and we are now trying to upgrade it; it is not operational at the moment, but we have done a feasibility study and are looking for funds. So those kinds of technologies are out there. We can discuss more offline because of the time limit; there's a red sign coming up, so I think we should wrap up. Thank you so much for being here.

Ashirwa Chibatty:
So we have the last five minutes, as this gentleman showed me. I'll give our speakers one minute each for closing remarks, and shorter is better. We'll start with Umut online, then Valerie, then Binod before I close.

Umut Pajaro Velasquez:
Okay, well, I would just remind everyone that digital education and digital literacy are still developing and in constant change, because technology is constantly changing and society is constantly changing. So we need to be aware that the future of education is work that is done day by day, especially in the global south, where our societies still need access, still need equipment, and still need infrastructure. That will be it.

Ashirwa Chibatty:
Thank you, Umut. Valerie?

Vallarie Wendy Yiega:
Thank you so much. For me, my mantra has always been: each one teach one. That means, just as the member of the audience said earlier, it's up to us to ensure that we carry this generation of digitally skilled learners together into the future. So how can we contribute? Is it through policymaking? Through building innovative solutions? Through lending our voices to ensure we have a future-ready, digitally skilled education system? For me, it's always about paying it forward and rolling out the information required by the stakeholders on the ground. Thank you.

Binod Basnath:
For my last words, I'd like to urge the policymakers of the Asian countries, especially the South Asian countries. We all had our ICT in Education Master Plan 1, and I think most countries have completed it and are moving towards the second master plan. But I don't think there is much awareness of the ICT in Education Master Plan among most stakeholders; we often don't even know what the master plan is and what we are trying to achieve. Post-COVID, we have learned that ICT-based learning can be more inclusive and accessible if it is implemented correctly. So I think we need to raise more awareness, map our resources, and make realistic plans, not just plans for the sake of having a plan. I think ICT in Education Master Plan 2 will be a very effective tool for us to reach the Education 2030 goals. Thank you very much.

Ashirwa Chibatty:
So thank you, Binod, Valerie, Ananda, and Umut for your insights and for sharing your expertise. When we talk about the internet or any technology, the technology itself is free from prejudice or harm; how we use it decides what it becomes. That's why the multistakeholder model is so important. And with that said, I'd like to come back to the human aspect of technology. How do we build digital resilience in education? There are technical aspects, but we have to be inclusive from the design stage, accept diversity, and practice empathy. We have to share each other's experiences rather than duplicate work, learn from each other, and work as a community for a better shared future. So let's think about our children, and their children's children, when we make any kind of decision. Thank you so much, everyone. I'd also like to take this opportunity to thank my SIG leadership team, Shraddha, Samuel, Maxwell, and everybody who is not here but has been supporting us for the past two years. So thank you, everybody. And please do join the Internet Society's Special Interest Group on Education; there's a QR code you can scan to connect with us, and let's move towards a global internet that ensures inclusive, equitable, and quality education, promoting lifelong learning for all. Thank you so much, everyone, for your presence, including those who are online. Suara, I see you there, so thank you for being there online. Thank you, everyone. We close the session.

Audience:
So I think a photo would be good, yeah. A photo is always good. For those who are present until the end, if we could just take a photo for our memory, that would be great. We could do it right outside the hall, because the next session might be in here.

Ananda
Speech speed: 164 words per minute; speech length: 1232 words; speech time: 450 secs

Ashirwa Chibatty
Speech speed: 167 words per minute; speech length: 2986 words; speech time: 1071 secs

Audience
Speech speed: 157 words per minute; speech length: 687 words; speech time: 263 secs

Binod Basnath
Speech speed: 147 words per minute; speech length: 2882 words; speech time: 1173 secs

Umut Pajaro Velasquez
Speech speed: 140 words per minute; speech length: 1910 words; speech time: 817 secs

Vallarie Wendy Yiega
Speech speed: 199 words per minute; speech length: 3902 words; speech time: 1177 secs

Green and digital transitions: towards a sustainable future | IGF 2023 WS #147

Full session report

Lazaros

During the discussion, the speakers emphasized the significance of supporting repositories in South Africa and collaborating with various institutions such as universities, research councils, national facilities, and museums to promote open access. They recognized the need for effective coordination and cooperation to ensure the success of this strategy.

One of the key points raised was the importance of training librarians to index and categorize content that falls within the criteria of the Sustainable Development Goals (SDGs). This would enable easy access and retrieval of valuable information and research related to these goals. The speakers also highlighted the need to link this content with existing repositories to maximize its visibility and impact.

Moreover, the speakers discussed the use of DSpace software by the majority of universities in South Africa. By adopting this software, universities can effectively manage their digital collections and make them accessible to a wider audience. They stressed the benefits of using a widely accepted and trusted platform for the efficient dissemination of knowledge.

Furthermore, the development of the South African SDG app was discussed as a means to gather collections within universities. This app serves as a convenient tool to gather and showcase research and information specifically aligned with the SDGs. It provides a platform for researchers and institutions to contribute towards achieving the goals set by the SDGs and promotes open access to this valuable knowledge.
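The keyword-based indexing the speakers describe for librarians could, in its simplest form, look like the following sketch. The SDG labels and keyword lists here are illustrative assumptions, not material from the session, and a real repository would use richer metadata and controlled vocabularies:

```python
# Hypothetical sketch: tag repository records with SDG labels by keyword match.
# The keyword lists are invented for illustration.
SDG_KEYWORDS = {
    "SDG 3: Good Health and Well-being": {"health", "disease", "vaccine"},
    "SDG 7: Affordable and Clean Energy": {"solar", "renewable", "energy"},
    "SDG 13: Climate Action": {"climate", "emissions", "carbon"},
}

def tag_record(title, abstract=""):
    """Return the SDG labels whose keywords appear in the record text."""
    text = f"{title} {abstract}".lower()
    words = set(text.replace(",", " ").replace(".", " ").split())
    return sorted(label for label, kws in SDG_KEYWORDS.items() if kws & words)

title = "Solar energy adoption and carbon emissions in South Africa"
print(tag_record(title))
# → ['SDG 13: Climate Action', 'SDG 7: Affordable and Clean Energy']
```

In practice a trained librarian would review such automatic tags before records are linked into existing repositories.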

Overall, the speakers had a positive outlook on leveraging library experts and adhering to international best practices for open access in South Africa. They recognized that by working collaboratively and adopting established practices, they can enhance the visibility and impact of research related to the SDGs. The emphasis on training librarians and the use of advanced software and technologies reflects a commitment to the efficient management and dissemination of knowledge.

Online moderator

The analysis reveals that Andrej Khrushchev has raised an intriguing question about the role of technology in supporting the green transition, particularly regarding security and energy efficiency. This inquiry suggests that technology has the potential to play a crucial role in achieving environmental sustainability goals.

It is noted that the global commodity value chain adds complexity to the task of implementing green technologies. This complex network involves the production, distribution, and consumption of commodities across the globe. Understanding and optimizing this intricate system is necessary to ensure that technology adoption does not inadvertently harm the environment or compromise security measures.

The sentiment of the argument is considered neutral, indicating an objective discussion that invites further exploration and analysis of the topic.

The related topics of the argument encompass the Green Transition, Technology, Security, and Energy Efficiency. These subjects are closely intertwined and interdependent, as advancements in technology can significantly impact the transition towards a more sustainable and secure future.

Furthermore, the argument aligns with Sustainable Development Goal 7 (Affordable and Clean Energy) and Sustainable Development Goal 13 (Climate Action). These global goals highlight the importance of renewable energy sources, energy efficiency improvements, and climate mitigation strategies. The question raised by Khrushchev emphasizes the role of technology in advancing these goals and promoting a sustainable future.

In conclusion, the analysis indicates that the question posed by Andrej Khrushchev emphasizes the potential of technology in supporting the green transition, especially regarding security and energy efficiency. Navigating the complexity of the global commodity value chain is crucial to ensure the responsible adoption of technology. The argument maintains a neutral stance, prompting further investigation and exploration. This topic aligns with Sustainable Development Goals 7 and 13, underscoring the significance of technology in achieving a more sustainable and secure future.

Audience

Tarek Hassan, the head of the Digital Transformation Centre Vietnam, is interested in understanding inter-ministerial collaboration in Japan, specifically regarding biodiversity. He wants to gain insights into how different ministries work together and the division of labour among them to effectively address green initiatives. Tarek believes that understanding the roles of these ministries will shed light on whether the digital experts lead the green initiatives or vice versa.

Collaboration between various ministries and levels of government is crucial for wildlife population control. The Ministry of Environment (MOE) has proposed and revised the Wildlife Protection Control and Hunting Management Act. Additionally, the Ministry of Agriculture, Forestry, and Fisheries (MAFF) is responsible for agriculture and the National Forest. Collaboration is necessary due to overlapping issues, ensuring successful outcomes.

In certain domains, like wildlife capture control, there is collaboration between the government and the prefectures. They work collaboratively on some aspects but independently on others. The country establishes basic guidelines and laws, while the prefectures handle practical program implementation. This two-tiered approach ensures shared responsibilities and effective governance.

Tarek is also interested in capacity building within ministries concerning digital transformation. He is curious whether the digital capacity is built within the ministries themselves or if it is outsourced. Additionally, he wonders about the role of the digital ministry within the governance structure.

The establishment of a digital ministry within the cabinet is a significant development. This agency primarily handles human number identification but is not heavily involved in the ICT techniques and technologies proposed by the private sector. Tarek is intrigued by the ICT techniques proposed by the private sector and their potential to contribute to achieving specific goals.

Tarek is curious about the quality of data used in the twin transition. However, no specific evidence or arguments were mentioned in the text. It remains unclear how the data quality could impact the twin transition, but it indicates Tarek’s interest in ensuring the use of reliable and accurate information.

Overall, Tarek’s pursuit of knowledge regarding inter-ministerial collaboration, division of labour, capacity building, and data quality reflects his commitment to understanding Japan’s approach to biodiversity and digital transformation. His goal is to gather insights that can inform his work at the Digital Transformation Centre Vietnam.

Daisy Selematsela

During the analysis, several key points were highlighted by the speakers. The first point emphasised the fact that African leaders have taken the initiative to set their own regional priorities in response to the Sustainable Development Goals (SDGs) agenda. This demonstrates the commitment of African countries to align the SDGs with their specific needs and challenges.

One specific example that was mentioned is South Africa, which invests 50% of its annual research and development budget in collaboration with international partners. This highlights the importance of international collaboration in achieving the SDGs, as South Africa recognises the value of leveraging external expertise and resources to drive progress.

Another interesting point discussed was the role of open access repositories in enhancing South Africa’s SDG hub. Open access repositories facilitate the sharing of information and make open source academic journals available to a wider audience. This is crucial in effectively addressing the SDGs, as it promotes knowledge sharing, collaboration, and innovation.

The analysis also highlighted the significance of knowledge management in relation to the SDGs, particularly in terms of availability, accessibility, acceptability, and adaptability. The effective management of knowledge plays a critical role in achieving the SDGs, as it ensures that the necessary information and resources are readily accessible to those working towards these goals. Furthermore, it was argued that relevant role players, including researchers, policymakers, and citizen scientists, are essential in solving global health problems. This highlights the need for multi-stakeholder involvement and collaboration to tackle complex challenges.

Additionally, the government’s strategies and international collaborations were recognized as crucial factors in supporting the SDGs in South Africa. With 50% of its research and development investment coming from international partners, South Africa acknowledges the importance of working together to achieve these goals. Furthermore, the existence of a Draft Open Science Policy in South Africa demonstrates the government’s commitment to fostering an environment conducive to open science and collaboration.

Overall, the analysis emphasised the importance of African leaders setting their own priorities within the SDGs agenda. It also highlighted the critical role of open access repositories, knowledge management, relevant role players, government strategies, and international collaborations in achieving the SDGs in South Africa. These findings provide valuable insights and recommendations for policymakers, researchers, and various stakeholders involved in driving sustainable development.

Horst Kremers

The analysis highlights the increasing complexity in managing data with the rise of urban digital twins. One of the key challenges identified is the lack of an international standard for the ontology of urban digital twins. This lack of standardisation makes it difficult to compare existing ontologies automatically. In order to ensure coherence and conformity to legal, financial, and ethical boundaries, challenges in coherence analysis need to be addressed.
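As a rough illustration of why automatic comparison of ontologies is non-trivial without a shared standard, a naive first pass might only measure the overlap of class names between two city models; the class names below are invented, and real coherence analysis would also have to compare relations, constraints, and semantics:

```python
# Crude first-pass comparison of two digital-twin ontologies by the
# Jaccard similarity of their (normalised) class names. Class names invented.
def jaccard(a, b):
    a, b = {s.lower() for s in a}, {s.lower() for s in b}
    return len(a & b) / len(a | b) if a | b else 1.0

city_a = {"Building", "Road", "Sensor", "EnergyGrid"}
city_b = {"building", "road", "sensor", "WaterNetwork"}
print(round(jaccard(city_a, city_b), 2))  # → 0.6
```

A score like this says nothing about whether "Sensor" means the same thing in both models, which is exactly the coherence problem the analysis raises.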

Furthermore, the analysis emphasises the need for novel mechanisms and models to handle the complexity associated with urban digital twins. The emergence of more sophisticated digital representations of the urban sphere, known as digital twins, has led to the generation of massive data and active data streams from various sensors across cities. This has posed new challenges in data management. The sentiment expressed in this regard is one of concern, as managing the increasing complexity of data becomes a daunting task.

Another aspect that requires urgent attention is the implementation of just-in-time demands in managing digital twin logistics. Prompt implementation is necessary to ensure efficient management of digital logistics, and it is suggested that staging emergency drills and recording action plans will aid in meeting these demands. The sentiment expressed here is one of urgency, highlighting the importance of timely and effective implementation.

Regarding the handling of big data and complex data, it is noted that administrators are not well equipped in this area. The lack of educational resources and training inhibits their ability to effectively handle such data. This is seen as a negative impact, as there is a clear need for administrators to adapt and acquire the necessary skills to navigate the complexities of big data.

In terms of governance, a framework is deemed essential to operationalise long-term systems for the service of citizens. There is a positive sentiment towards the establishment of governance structures that ensure the smooth operation and maintenance of these systems. Additionally, there is an emphasis on the need for participative governance, involving not only the government but also citizens. The involvement of multiple actors is seen as crucial in ensuring a democratic and inclusive decision-making process.

The complexity of the global commodity value chain is acknowledged, and it is argued that a holistic green transition is necessary to address this complexity. This transition should encompass various topics such as food security and energy efficiency. The sentiment expressed here is positive, as the analysis recognises the importance of different professions joining together to guide the green transition. However, joining these ontologies presents a challenge, as it requires careful consideration of the purposes and consequences of data application.

Overall, the analysis sheds light on the complex nature of managing data in the context of urban digital twins. It emphasises the need for standardisation, novel mechanisms, and effective governance frameworks. Additionally, it highlights the urgency of implementing just-in-time demands and the importance of equipping administrators with the necessary skills to handle big data. The analysis also emphasises the importance of a holistic approach to address the complexity of the global commodity value chain.

Ricardo Israel Robles Pelayo

The speakers in the analysis highlighted several key points regarding sustainable development and the role of technology and collaboration in achieving sustainability goals.

One of the main points emphasized was the potential of big data and artificial intelligence (AI) in enhancing the efficiency and reliability of renewable energy sources. Through the analysis of big data and the implementation of autonomous decision-making, AI can revolutionize the generation and management of renewable energy. This can contribute significantly to SDG 7: Affordable and Clean Energy and SDG 13: Climate Action. The speakers provided supporting facts that demonstrated how AI and big data can improve the efficiency of renewable energy sources such as solar and wind.
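The speakers did not present concrete methods, but the simplest data-driven building block behind such forecasting is a trend fit over sensor readings. The following toy sketch, with invented numbers, fits a line to hourly solar output by ordinary least squares; production systems would of course use far richer models:

```python
# Toy illustration: ordinary least squares fit of a linear trend to hourly
# solar output readings (pure Python). The numbers are invented.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx  # slope and intercept

hours = [0, 1, 2, 3, 4]
output_kw = [10.0, 12.0, 14.0, 16.0, 18.0]  # steadily rising morning output
slope, intercept = fit_line(hours, output_kw)
print(slope, intercept)  # → 2.0 10.0; naive forecast for hour 5: 20.0 kW
```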

Another important aspect raised in the analysis was the harmonisation of regulation and policies around digital technology and environmental sustainability. The speakers argued that this harmonisation is crucial and presents a significant challenge. They stated that it is important to consider specific technological aspects that have applications in the environment. By aligning regulations and policies, authorities can foster an environment that promotes the use of digital technology for sustainable development. This alignment may contribute to SDG 13: Climate Action and SDG 17: Partnerships for the Goals.

Collaboration between authorities and various stakeholders was emphasised as vital for achieving sustainability goals. The speakers stressed that authorities should work closely with private business corporations, civil society, and academia at both national and international levels. This collaboration is necessary to address the challenges and complexities associated with sustainable development. They argued that by involving multiple stakeholders, authorities can ensure more effective and comprehensive efforts towards achieving sustainability goals. This close collaboration aligns with SDG 17: Partnerships for the Goals.

The analysis also highlighted upcoming challenges in the pursuit of sustainability. These challenges include the reduction of greenhouse gas emissions, ensuring social justice, and promoting clean technologies. The speakers emphasised that more clean technologies and sustainable practices need to be adopted to combat climate change. Additionally, they highlighted the importance of ensuring social justice in the transition, particularly through training and skills development. By addressing these challenges, authorities can make significant progress towards achieving SDG 10: Reduced Inequalities and SDG 13: Climate Action.

Furthermore, the analysis suggests that authorities should actively participate in international forums such as the Internet Governance Forum (IGF). The speakers acknowledged that Mexican parliamentarians attended the IGF in Kyoto, highlighting the significance of active involvement in these forums. By participating in international forums, authorities can have a voice in shaping global policies and development strategies, aligning with SDG 17: Partnerships for the Goals.

Lastly, the creation and promotion of laws were emphasised as important for achieving digital and green transitions for sustainable development. The speakers argued that laws play a crucial role in driving the adoption and implementation of these transitions. They emphasised the need for laws to incentivise and regulate sustainable development practices. By creating and promoting such laws, authorities can facilitate the transition to more sustainable and environmentally friendly practices.

Overall, the analysis underscores the significance of big data, AI, collaboration, regulation, and laws in achieving sustainable development goals. The adoption of these technologies, collaboration between stakeholders, harmonisation of policies, and the creation of supportive laws are all essential for advancing sustainability efforts and addressing various challenges. By focusing on these aspects, authorities can pave the way for a more sustainable future.

Tomoko Doko

Wildlife management is crucial for the sustainable future of Japan, particularly due to the significant crop and forest damages caused by shika deer and wild boars. In 2015, the Wildlife Protection Act was revised to reflect the country’s commitment to preserving its flora and fauna. This demonstrates a positive sentiment towards wildlife management.

Furthermore, the implementation of ICT technologies has proven effective in monitoring wildlife. Drones are used to track habitats, while sensor systems help identify different animal species. These technological advancements provide accurate data and facilitate better management strategies.

The Japanese government has also introduced a certification system for wildlife capture programs. This initiative aims to counter the decline in hunters and reduce the population of shika deer and wild boars. The government’s goal is to reduce the population to half of 2011 levels, and progress has been made in achieving this target.
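As a back-of-envelope illustration of what a halving target implies, one can ask what annual capture fraction is needed to halve a growing population, assuming the simple model N_(k+1) = N_k (1 + g)(1 - c). The growth rate and time horizon below are illustrative assumptions, not figures from the session:

```python
# Hedged back-of-envelope: the capture fraction c needed each year so that a
# population growing at rate g is halved after t years, under the simple
# model N_(k+1) = N_k * (1 + g) * (1 - c). Solving ((1+g)(1-c))**t = 0.5
# gives c = 1 - 0.5**(1/t) / (1 + g). Parameters below are invented.
def required_capture_fraction(g, t):
    return 1 - 0.5 ** (1 / t) / (1 + g)

c = required_capture_fraction(g=0.15, t=10)  # 15% growth, 10-year horizon
print(round(c, 3))  # → 0.189, i.e. roughly 19% captured each year
```

Even this toy model shows why a declining number of hunters threatens the target: the required capture fraction rises quickly with the growth rate.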

However, collaboration between stakeholders in wildlife management is lacking. Government officials, scientists, and private sectors often fail to work together effectively, hampering progress. To address this, bridging individuals or organizations are necessary to encourage cooperation and align goals.

The collaboration between the Ministry of Environment (MOE) and the Ministry of Agriculture, Forestry, and Fisheries (MAFF) is vital for managing wildlife populations. The MOE and MAFF have proposed and revised the Wildlife Protection Control and Hunting Management Act, setting common goals and creating guidelines at the country level. Prefectures are responsible for implementing practical programs.

Collaboration between the government and private sectors is essential for effective wildlife management. While the MOE and MAFF provide high-level goals, private sectors play a key role in implementing programs in collaboration with prefectures.

The establishment of the digital ministry agency primarily focuses on human identification numbers rather than ICT implementation. This highlights the need for collaboration between the government and private sectors to effectively implement ICT systems.

In conclusion, wildlife management is vital for Japan’s sustainable future. The revision of the Wildlife Protection Act, the use of ICT technologies, and the certification system for wildlife capture programs all contribute to positive efforts. However, improving collaboration among stakeholders is crucial. Bridging individuals or organizations can facilitate cooperation, ensuring successful wildlife management and a sustainable future for Japan.

Liu Chuang

Less than half of the Sustainable Development Goals (SDGs) have been achieved, and progress has been hindered by natural disasters, climate change, and the COVID-19 pandemic, particularly in small islands, mountainous areas, and critical ecosystems. These challenges have greatly impacted the advancement of the SDGs, which aim to address issues such as climate action, good health and well-being, and life on land.

To accelerate progress towards the SDGs, it is proposed that open science be embraced. Open science involves the use of big data, the Internet of Things (IoT), and the inclusion of various fields such as engineering. By adopting these approaches, while ensuring systemic management and cultural diversity, it is believed that progress towards the SDGs can be accelerated. Different organizations have their own ways of handling the SDGs, and a wide-ranging, reciprocal cooperation is being proposed among all partners to drive advancements.

In an effort to ensure trackable and high-quality agricultural products, China has launched the Global Institute for Environmental Science (GIES) and the World Data Center. The GIES operates as a decadal programme from 2021 to 2030, and it has initiated 17 different cases in various regions of China in the past two years. This initiative aims to support the SDGs related to zero hunger and responsible consumption and production. By establishing infrastructure like the World Data Center, China is taking steps to ensure the traceability and quality of agricultural products.

The GIES project has yielded several benefits, including increased income for farmers, high-quality products for consumers, and credit for contributors. Over 600,000 local farmers have already benefited from the project, and the quality of the products can be traced to ensure consumer satisfaction. This demonstrates the positive impact that initiatives like GIES can have on achieving SDGs, particularly those related to poverty reduction and decent work and economic growth.

It is important to pay more attention to underprivileged individuals and developing nations, especially those in mountain areas, small islands, and rural villages. These demographics are highly vulnerable and in need of assistance. By utilising technology, science, and commercial sectors to provide aid, it becomes possible to empower and uplift these underprivileged communities. The role that these sectors can play in addressing SDGs related to poverty reduction and reduced inequalities is stressed.

Identifying trustable data for research and business purposes is a challenging task since data comes from various sources, including government, private sectors, and university research sectors. Different policies define how data is opened for use, making it essential to establish standards for data quality and reliability.

The World Data System, which consists of 86 world data centres, provides peer-reviewed data to address this challenge. This global collaboration, under the International Science Council, ensures that data undergoes checks for data security, data quality, and authorship. By providing peer-reviewed data, the World Data System supports the SDGs related to industry, innovation, and infrastructure, as well as partnerships for the goals.

To ensure the accuracy and reliability of data, it is recommended to adopt meticulous processes for data validation and curation. This involves reviewing data quality with the help of experts and capturing information about the data source and production method. By implementing such practices, it becomes possible to address challenges related to data quality and trustworthiness, thus advancing the goals of SDG 9: Industry, Innovation and Infrastructure.
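A minimal sketch of such a curation step, with invented field names, might reject records lacking provenance information and attach a content hash so later users can verify a record has not been altered; real data centres apply far more extensive review:

```python
import hashlib
import json

# Hypothetical curation step: reject records missing provenance fields and
# attach a SHA-256 content hash for later integrity checks. Field names
# are invented for illustration.
REQUIRED = ("source", "production_method", "author")

def curate(record):
    missing = [f for f in REQUIRED if not record.get(f)]
    if missing:
        raise ValueError(f"missing provenance fields: {missing}")
    payload = json.dumps(record, sort_keys=True).encode()
    return {**record, "sha256": hashlib.sha256(payload).hexdigest()}

rec = curate({"source": "field survey", "production_method": "GPS transect",
              "author": "example contributor", "value": 42})
print(len(rec["sha256"]))  # → 64 hex characters
```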

In the realm of data handling, protecting the original authors and ensuring data security are of paramount importance. Proper data handling protocols are observed to uphold privacy and security. Adhering to these protocols allows for the responsible use of data and preserves the rights of authors, aligning with SDG 16: Peace, Justice and Strong Institutions.

In conclusion, the achievement of the Sustainable Development Goals (SDGs) by 2030 requires significant progress, as less than half of the goals have been achieved to date. Natural disasters, climate change, and the COVID-19 pandemic have impeded progress, particularly in vulnerable regions. However, embracing open science, leveraging technology and collaboration, and ensuring the quality and reliability of data are potential pathways to accelerate progress towards the SDGs. Initiatives like the Global Institute for Environmental Science (GIES) and the World Data System in China demonstrate the commitment to ensuring high-quality agricultural products and traceable data. By prioritising underprivileged individuals and developing nations and utilising technology and scientific advancements, it is possible to provide aid and address inequality. The challenges of identifying trustworthy data can be met through meticulous processes of validation and curation, while upholding data protection and security protocols. Overall, a multi-faceted approach is needed to achieve the SDGs and create a sustainable and equitable future.

KE GONG

In this analysis, several key points are highlighted regarding the importance of sustainability and digitalization, the urgency to rescue the sustainable development goals (SDGs), the significance of using digital technology to implement and rescue the SDGs, and the importance of interdisciplinary, intersectoral, and international cooperation.

The first point emphasizes the dual transitions of sustainability and digitalization as crucial for the future of humankind. It is stated that these transitions are a historical process with great significance. Furthermore, it is asserted that digitalization serves as an essential tool in achieving sustainability.

The second point focuses on the urgency to rescue the SDGs. It is revealed that over 30% of the SDG targets have not made any progress or have regressed below the baseline established in 2015. This lack of progress is exemplified through the projection that 575 million people will still be in extreme poverty by 2030. These facts illustrate the need for immediate action to address and advance the SDGs.

The third point highlights the importance of using digital technology to implement and rescue the SDGs. It is highlighted that digital transformation is crucial in three specific areas: addressing hunger, transitioning to renewable energy, and leveraging digital transformation opportunities. Examples are provided to support this argument, such as the use of big data in smart manufacturing, urban planning, and climate action. These examples demonstrate the potential of digital technology in achieving the SDGs.

The fourth and final point underscores the significance of interdisciplinary, intersectoral, and international cooperation. It is emphasized that digital technology should work across disciplines without any borders. Platforms such as the China Association of Science and Technology (CAST) and The World Federation of Engineering Organizations are presented as facilitators of collaborations in this regard. The importance of such cooperation is highlighted as essential for successful digital transformations.

In conclusion, the expanded summary reiterates the key points outlined in the analysis. It emphasizes the importance of the dual transitions of sustainability and digitalization, the urgent need to rescue the SDGs, the significance of using digital technology to implement and rescue the SDGs, and the importance of interdisciplinary, intersectoral, and international cooperation. Through these points, it is evident that sustainability, digitalization, and collaboration are all crucial elements in advancing global goals and ensuring a better future for humankind.

Xiaofeng Tao

The workshop commenced with a series of presentations from six speakers, each focusing on different aspects of the green and digital transition. Professor Liu, the director of global change research, led the session by emphasising the significance of open science in driving sustainable development. She highlighted the need for transparent and collaborative research practices to address urgent environmental challenges.

Following Professor Liu’s presentation, Ms. Tomoko Doko, the President and CEO of Leisure and Science Consulting Limited Company, discussed wildlife management in Japan for a sustainable future. She showcased innovative approaches taken in Japan to protect and conserve biodiversity. Ms. Doko stressed the importance of a holistic and integrated approach involving all stakeholders to ensure effective wildlife management.

Mr. Kremers from Codata, Germany, then shared insights on the practical implementation of digital twins. He explored the role of digital twins in managing complexity, including process models and workflow standards. Mr. Kremers highlighted how digital twins enhance decision-making processes and optimise resource allocation in various sectors.

Next, Professor Ricardo from Mexico presented the challenges and commitments in digital technology and a sustainable environment as outlined in the United States-Mexico-Canada agreement. He emphasised the importance of aligning digital innovation with sustainable development goals and highlighted potential benefits and risks associated with the digital transition.

After the presentations, a discussion session provided an opportunity for participants to ask questions and provide comments to the speakers. The workshop facilitator posed three key questions focusing on government issues, stakeholder cooperation, and policy frameworks. Each speaker addressed one or more of these questions.

Professor Liu shared insights on the key challenges faced by governments in driving sustainable development, emphasising the role of political will and effective governance structures. Professor Ricardo stressed the need for enhanced collaboration and partnership among multiple stakeholders to address complex environmental issues. Ms. Tomoko focused on the role of policy frameworks in guiding wildlife management strategies and the importance of regulations for their effective implementation. Mr. Kremers discussed the potential of policy guidelines in promoting the adoption of digital twins and ensuring their compatibility across different sectors.

The workshop concluded with expressions of gratitude to the speakers, on-site and online participants, and the organisers. The facilitator acknowledged the thought-provoking presentations and insightful questions from the participants. Time constraints prevented a detailed discussion of all topics, highlighting the need for future collaborations and continued efforts to achieve a sustainable future.

Attendees were invited to gather for a group photo, fostering connections and setting the stage for potential future engagements. The facilitator expressed special appreciation to Professor Liu Chuang for their ongoing partnership and work in this field. Overall, the workshop provided a valuable platform for knowledge sharing and networking among experts, contributing to ongoing discussions on green and digital transition.

Session transcript

KE GONG:
Thank you. Thank you, Professor Hao. Now I share my screen with all of you. My title is The Three Musts for Accelerating the Sustainable and Digital Transformations, because the theme of our workshop is green and digital transformation towards a sustainable future. I’m very pleased to be part of this workshop because this is really important. Talking about the dual transformation, my understanding is that these dual transitions are a historical process which is crucial to the future of humankind. The goal of this dual transition, I think, is to achieve sustainable development for humankind and the planet: that is a value-pulling transition. And it is digitalization which is a very important tool for us to achieve sustainable development: that transition, I consider, is a technique-driven transition. These two transitions interact with each other; they are not just two parallel transitions. Digitalization is a very important tool for us to achieve sustainability. First, I would like to talk about the urgency of the dual transformation, especially the urgency of rescuing the Sustainable Development Goals. All of us know that eight years ago, the world leaders gathered in New York and adopted a sustainable development agenda called Transforming Our World: the 2030 Agenda for Sustainable Development. In this agenda, there are 17 Sustainable Development Goals defined jointly by all member countries of the United Nations, and under these 17 goals there are 169 targets. However, this year is the midpoint of the whole agenda: 2023 is exactly in the middle of the whole process. Last month, the world leaders gathered again in New York to review the progress of the sustainable development agenda. At this middle point of the 2030 Agenda, however, the world’s leaders and people were shocked by the current progress. The latest global-level data and assessments paint a concerning picture.
This is that concerning picture. The blue segments are on track or at the target rate; the yellow ones show fair progress but need acceleration; the red ones show stagnation or regression of those targets and goals. The picture shows that about half of the targets show moderate or severe deviations from the desired trajectory, and more than 30% of them have made no progress or have even regressed below the 2015 baseline of eight years ago. This assessment underscores the urgent need for intensified actions to ensure the SDGs stay on course and progress towards a sustainable future at all. In short, the SDGs need to be rescued. For example, just have a look at Goal 1: no poverty in all its forms everywhere. This figure shows that if we do not come back on track, 575 million people will still be living in extreme poverty by 2030. And here you see how much of the world’s vulnerable population remains uncovered by social protection: for children, only 8.5% receive social protection; for elderly people, only 23%. That is why the newest Global Sustainable Development Report, GSDR, which is published every four years, is titled Times of Crisis, Times of Change. We have to realize the urgency of this situation for sustainable development and take real actions to make changes. That brings me to my second point. The second must is to take action on using digital technology to rescue the United Nations SDGs. Indeed, when we talk about actions, people pay a lot of attention to digitalization. Let me quote what Secretary-General Guterres said at the UN Summit last month in New York: he emphasized the need to take action in three key areas, including addressing hunger, transitioning to renewable energy, and leveraging digital transformation opportunities.
Further, please allow me to quote some words from the Political Declaration of the United Nations Sustainable Development Summit last month in New York. It states: we acknowledge that important lessons were drawn from the COVID-19 pandemic on health, culture, education, science, technology and innovation, and digital transformation for sustainable development. It states: we will continue to take action to bridge the digital divides and spread the benefits of digitalization. We will expand participation of all countries, in particular developing countries, in the digital economy, including by enhancing their digital infrastructure connectivity, building their capacities and access to technological innovations through stronger partnerships, and improving digital literacy. It states: we will leverage digital technology to expand the foundations on which to strengthen social protection systems. We commit to building capacities for inclusive participation in the digital economy and strong partnerships to bring technological innovations to all countries. So, digital transformation is stressed again and again in the Political Declaration of the Summit last month in New York. That shows the importance of digitalization as a lever to achieve the Sustainable Development Goals. Here, I will show you some examples. For electrification, digital technology and the Internet of Things play a very important role in achieving further electrification with renewable energies; that is Goal 7. And for Goal 9, industry, innovation and infrastructure, I show you how big data has been used for smart manufacturing. And here is an example from a famous city, Hangzhou, in China. In the center of Hangzhou is a big, beautiful lake, which we call the West Lake, but it makes the traffic of this city very difficult, and the city used to be the fourth most traffic-jammed city in China.
But with the help of big data and the implementation of a so-called city brain, empowered by artificial intelligence and the fifth generation of mobile communications, 5G, this city has now dropped to 27th for traffic jams in China. So a smart city helps Goal 11, and also climate action. Here is another example from China of using big data to find leakage in water pipes and to help make the city more resilient to climate change. These few examples show how digital technology can support real actions towards the Sustainable Development Goals in different sectors, countries and regions. Finally, I would like to stress the importance of cooperation, because Goal 17 is partnership, and nobody can deny the importance of partnership. But here I would like to stress that the cooperation should happen in interdisciplinary, intersectoral and international ways. For example, building information modeling and geospatial engineering are now widely used in the construction area. However, these technologies are deeply rooted in different areas of engineering, such as internet and information communication technology, construction, the Internet of Things and big data, with applications in the management of resources and utilities, telecommunications, urban and regional planning, routing of vehicles, parcel shipping, and so on and so forth. They hold great potential for the support of sustainable smart cities. So all these fields should work together with digital technology, because digital technology has no disciplinary borders. And here I show you the China Association for Science and Technology, in short CAST, which is a platform for interdisciplinary collaboration within China. We have natural science; industrial technology and engineering; medical science, technology and engineering; agricultural science, technology and engineering; and interdisciplinary institutions.
In total, there are more than 200 discipline-based institutions representing more than 40 million scientific, technological and engineering professionals. So this platform is ideal for interdisciplinary collaboration, but also for international collaboration, because we have consultative status with the United Nations. We have worked closely together with the IGF, and we are trying to make our collaboration even closer. Another example is the World Federation of Engineering Organizations, where I now serve as the immediate past president. This federation consists of more than 100 so-called national member organizations; CAST, for example, is the Chinese national member of this federation. So our federation is a comprehensive engineering professional organization representing tens of millions of engineering professionals across the world. And we are keen to collaborate with the IGF more closely in the near future, to work with you all together on accelerating the dual transformations towards a sustainable future. I stop here. Thank you very much for your attention.
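The water-pipe example Professor Gong mentions can be sketched in code. The following is a minimal, hypothetical illustration of flow-balance leak detection, not the actual system deployed in China: a pipe district is flagged when its metered inflow exceeds the sum of customer outflows by more than a set tolerance. The function name, tolerance, and figures are all invented for illustration.

```python
def detect_leak(inflow, outflows, tolerance=0.05):
    """Flag a pipe district as leaking when the metered inflow exceeds
    the summed customer outflows by more than `tolerance` (a fraction
    of inflow). Units just need to be consistent, e.g. m^3/h."""
    delivered = sum(outflows)
    loss = inflow - delivered
    return loss > tolerance * inflow

# A district metering 1000 m^3/h in but delivering only 900 m^3/h
# to customers loses 10% of its water and is flagged.
print(detect_leak(1000.0, [450.0, 300.0, 150.0]))  # -> True
# A district losing only 2% stays below the 5% tolerance.
print(detect_leak(1000.0, [500.0, 480.0]))  # -> False
```

In practice such checks would run over time windows of sensor readings rather than single snapshots, but the balance principle is the same.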

Xiaofeng Tao:
Thank you, Professor Gong. I appreciate it. Our second speaker is Professor Liu. Professor Liu is the director of the Global Change Research Data Publishing and Repository, and a professor at the Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences. Her topic is open science for the green and digital transition. Professor Liu, you have the floor.

Liu Chuang :
Thank you, Doctor. I’m glad to be here and share this information with you. Just as Professor Gong said, we are at the mid-term of the SDGs, so we need to accelerate our actions. So what is the challenge? We are now at the mid-term towards 2030, and less than half of the SDGs in the world are actually on track. Climate change, natural disasters and COVID all impact the SDGs, especially in mountain areas, on small islands, and in critical ecosystem regions. So what is the objective of the next step? The one thing is to accelerate towards the SDGs. We need to focus our efforts, and we need to work together for this target. So what is the solution? Everybody has their own different organizations, with different approaches. The solution is that we need open science. Technology too, but the science needs a link to the technology, so big data and the Internet of Things, and a link to the engineers: not only science and technology, but also engineering, working with real cases, and not only talking, but starting even in a small village. We also need systematic management, together with cultural diversity. That is the solution, and we need all partners to cooperate together. With this idea, in China we started a new project we call Geographical Indications, Environment, and Sustainability, GIES for short, which is a decadal program from 2021 to 2030. As background, we have infrastructure: there is a World Data Center, the Global Change Research Data Publishing and Repository, and data are published in this World Data Center. And then open data, open knowledge, and open geographic sites, so people can visit and understand who you are and what you are doing.
This infrastructure received the WSIS Prize in 2018. Then the technology: we need to dig into big data and the Internet of Things to show how a product is sustainable. So we give identifiers: the DOI, the digital object identifier, science and technology identifiers, and the World Data Center IDs for global change data. Products also get a trademark and a quick-response (QR) code, so people can very quickly find where a product comes from. And then, with GIS, we build the network on the internet, wired and wireless. Data publications, articles and products, even local observation stations, and the product packages are all linked together. This is a network connecting everything, so you can trace where a product comes from and what its quality is. In China now, over the last two years, we have 17 cases in quite different regions of the country, covering many different kinds of high-quality agricultural products: rice, maize, apples and many others. As for the benefits: right now more than 600,000 local people, farmers, get benefits; they get more income. Customers benefit too. For example, I myself am one of the customers: we are happy when we get a high-quality product. We have a little bit more money to buy it, but little idea which one is good; through this project we can identify which one is good and worth spending money on, and many people like me do that. The contributors also benefit: the scientists and the government officers got credit. And then, how do we organize this? We have different partners: a scientific committee, a program committee, and company programs, and we work together.
And the key players are the Geographical Society of China and the Institute of Geographic Sciences and Natural Resources Research of the Chinese Academy of Sciences. At the very beginning, two years ago, we had 40 partners joining this program; now we have 101 partner organizations, and they are very happy. This work has also received strong support from FAO. FAO has started a new worldwide program, One Country One Priority Product, and we work with FAO on this: we have just started this program in Bangladesh and several other countries in the Asia-Pacific, and it has been very warmly welcomed by different countries. We also support not only developing countries but also industrialized countries. We work with the European Union on geographical indications cooperation, where China has an agreement: we exchange products, since they have good wine and China has good tea. And both sides need to know what the data are, what the quality is, where the product comes from, what the culture is like, what the socioeconomic development is like, and how to make this sustainable, so that both sets of information can be open and exchanged. In summary, GIES offers innovative methodologies; the keywords are open science and multi-stakeholder engagement. But open science here is not only science: it is linked to the original geolocation and its environment, linked to the product value chain, and combines open-science methodology, technology and engineering management within the geographical culture. Thank you very much.
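The traceability chain Professor Liu describes, where a QR label on a package resolves to a product's origin, data publication and observation station, can be sketched as a simple registry lookup. All names, identifiers and the record schema below are invented for illustration; they are not the actual GIES data model.

```python
from dataclasses import dataclass

@dataclass
class ProductRecord:
    product: str
    origin: str               # geographical-indication region (illustrative)
    dataset_doi: str          # DOI of the published dataset (made up)
    observation_station: str  # local station linked to the product

# Toy registry keyed by the code printed on the package's QR label.
# Every identifier here is fabricated for the example.
REGISTRY = {
    "GIES-0001": ProductRecord(
        product="rice",
        origin="Example County",
        dataset_doi="10.0000/example.2022.01",
        observation_station="station-17",
    ),
}

def trace(qr_code):
    """Resolve a scanned QR code to its provenance record, or None."""
    return REGISTRY.get(qr_code)

rec = trace("GIES-0001")
print(rec.product, "from", rec.origin)  # -> rice from Example County
```

A production system would resolve the DOI through a persistent-identifier service rather than an in-memory dictionary, but the lookup-by-identifier idea is the same.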

Xiaofeng Tao:
Appreciated, and thank you for your presentation. The third speaker is Ms. Tomoko Doko, the President and CEO of Nature and Science Consulting Company Limited. Her topic is wildlife management in Japan for a sustainable future. Ms. Doko, please.

Tomoko Doko:
Thank you, Chairman, for the introduction. My name is Tomoko Doko. I have a PhD degree, and after that I also got a hunting license in Japan. Today I’d like to talk about wildlife management in Japan for a sustainable future. Let’s start with the general background of Japan and its wildlife. As you can see in the picture on the right side, Japan’s mainland consists of four primary islands: Hokkaido, Honshu, Kyushu and Shikoku. There are 97 terrestrial mammals in Japan, including 38 endemic species; endemic species are species that exist only in Japan. In the picture on the left, the center one is the Japanese serow, one example of an endemic species in Japan. Now, why do we have to focus on wildlife management today? Wildlife management is a management process influencing interactions amongst and between wildlife, its habitat, and people to achieve predefined impacts. It attempts to balance the needs of wildlife with the needs of people using the best available science. Here I introduce the most important Japanese law related to this issue. The official name of the law is the Wildlife Protection, Control, and Hunting Management Act. Under this act, programs are implemented for protecting and controlling wildlife, and hunting is managed, in addition to preventing the risks related to the use of hunting equipment. There are three main components of this act. The first is control of populations; that is the main purpose of today’s topic, related to the need to reinforce capturing. The second component is management of wildlife habitat. The third and last is countermeasures for damage prevention. Today I introduce two species of Japanese mammals: one is the sika deer and the other is the wild boar. These two mammals cause trouble in Japan, for example crop damage and damage to forest areas.
Both sika deer and wild boar cause significant damage in these two domains. This picture is just an example of how sika deer damage cropland or forest. The geographic distribution of the two species has become a critical problem in Japan: as you can see in this graphic, since 1978 both species have tended to expand their geographic distribution. Therefore, the Japanese government decided to change the law. A revision of the law was made in 2015 and a new goal was set. The background is that the negative impact on ecosystems and the crop damage caused by sika deer and wild boar had become so severe that it could no longer be ignored, while the number of people who can carry out population control of sika deer and wild boar had declined because the hunter population is decreasing and aging. Therefore, a new system of certification for wildlife capture programs was implemented, so that the government can reinforce capturing and raise the next generation of hunters. The Ministry of the Environment and the Ministry of Agriculture, Forestry and Fisheries set a new goal: by the year 2023, the government of Japan aims to reduce the populations of sika deer and wild boar to half of their 2011 levels. This is the structure of the act, and due to time constraints I will focus on the second component, which is control by capturing wildlife. As you can see in the illustration, sika deer and wild boar are designated as wildlife species for the control capture program. Very briefly, I introduce the countermeasures we can take: either gun shooting or trap hunting. For trapping, as you can see in pictures A, B and C, there are three types. For example, in the wild boar’s case in picture A, the boar walks without noticing the location of the trap and its leg is then caught by a wire. In the second type, B, there is a box and we use bait to attract the wild boar or sika deer; when they touch the bait, the trap box closes. The third type is a larger-scale box trap.
The one in picture C is rather small, but such traps can be much larger, holding 10 or 15 sika deer or wild boar inside. Then, for the digital and green transitions, some ICT systems and technologies have been proposed and implemented in Japan. These are the examples. Basically, three main technologies are used. For example, drones are used to monitor habitats; sensing technologies and remote monitoring systems are used in the forest, so that when deer and wild boar pass in front of the sensor system, it reports directly to the user through the wireless network; and the last one is, for example, a system to count numbers or identify animal types, so it can differentiate sika deer from wild boar and tell how many wild boar are there. In that case, the operator can choose the timing of when to close the trap door using these ICT technologies. As for the current situation and the future: so far, we are doing well. The populations of the two species tend to be decreasing, but we only have data up to 2019, so we do not know the situation right now. We should continue to reinforce capturing. Thank you very much.
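The trap-timing idea Ms. Doko describes can be sketched as a simple decision rule. This is a hypothetical illustration, not any deployed Japanese system: the sensor pipeline is assumed to report a species label and a head count, and the door is closed only when the target species is present in sufficient numbers. The threshold of half the trap capacity is an invented assumption.

```python
def should_close_trap(detected_count, capacity, target_species, detected_species):
    """Decide whether to trigger the trap door.

    Close only when the camera/sensor pipeline reports the target
    species AND enough animals are inside to justify triggering
    (here, at least half the trap's capacity -- an assumed policy).
    """
    enough_animals = detected_count >= capacity // 2
    right_species = detected_species == target_species
    return right_species and enough_animals

# 8 wild boar in a 15-animal trap: worth closing the door.
print(should_close_trap(8, 15, "wild boar", "wild boar"))  # -> True
# Wrong species detected: leave the door open.
print(should_close_trap(2, 15, "wild boar", "sika deer"))  # -> False
```

A real deployment would add debouncing of noisy detections and a remote confirmation step before actuating the door, but the core trigger logic would look like this.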

Xiaofeng Tao:
Excellent presentations, thanks. Our fourth speaker is Mr. Kremers from CODATA, Germany. His topic is digital twins in action: complexity management, including process models and workflow standards. Mr. Kremers, you have the floor, please.

Horst Kremers:
Yeah, thank you very much for the introduction. Dear colleagues, best greetings from early-morning Berlin, Germany. I’m very sorry not to have the opportunity to be with you in Kyoto, because it’s a fantastic city; I have been there myself, and I hope you enjoy the time in the city beyond the IGF conference as well. My topic today is digital twins in action: complexity management, including process models and workflow standards. There are some words in that title where I personally hope we may find opportunity, even in the summing-up of the discussion, because there is a bit of a deficit between what we have been doing in recent years and what we need to do in the coming years, when the complexity of what we handle becomes much larger than we ever thought before. I’m working on the Sustainable Development Goals, on resilience topics in disaster prevention and disaster information management, and on urban information systems. My background is in disasters and hazards, that is, what can happen to our environment and to our fellow citizens. We have to try to do our best to be better in many things, as the introductory presentation by Professor Ke Gong very convincingly stated. We’ve seen this, and AI to support our society at large. …newspapers, then you see that our life is not very easy, because we have to deal with all these things. There are certain facets of dealing with complexity issues, certain facets of urban resilience, where we start not only from collecting data and seeing what we can do with the data; we have to take a strategic approach, see the whole problem, and then look into the details of how they fit together. We start from a holistic approach to information management for intelligent cities and smart cities, as we call them, which is characterized by societal demands and current problems and challenges in technology, financing, and so on.
Then there are the more advanced requirements of urban infrastructures: we have been working on urban infrastructure in 3D for at least 30 years. Twenty-five to thirty years ago we started with 3D urban models, and that is of course now much more advanced technologically, which also creates many more problems in information management. There is technology behind that: laser scans are becoming better and more sophisticated day by day. Then, on the topics of safety and security, we have internal security, which is the security of citizens in the city, and disaster prevention, that is, what happens regularly. We cannot avoid the disaster, but we can better prepare our citizens, so that there are not so many deaths and not such great losses of damaged infrastructure. There is the ecological and climate perspective, social and sociological issues in the city, and fractionalized production and supply chains. This is one of the major things that make a city live and active, with the perspective that everything can happen; it is not just facts, you see, not just static facts, but it also deals with what happens and what has to be done. Other facets of urban resilience are the great agglomerations, and you see the problems of the really big cities, which, not only here in Europe but in other countries of our world, are much larger. There are development problems and ecosystem problems, where we have to deal with health and related matters for our citizens in the city. And what I want to draw your attention to is urban ecosystem services: that is, the sustainable development principle of not only seeing what the ecosystems are, but what the ecosystem does for our citizens, making life enjoyable and providing services for urban development, and so on. This new topic, which came up only two or three years ago, involves very sophisticated systems of ecosystem services. I will come back to these things when I touch on the aspect of processes.
Without taking too much time to read, the last point is intelligent transport systems. That is where, at the moment, here in Europe and also in Germany, a great deal of work is being done on transport, railway transport, air transport, and so on, and much money and development is being put into intelligent transport systems, with detailed sensor systems everywhere; that means real streams, multiple streams of information arriving just in time. We have moved from the notion of 3D urban models of different stages of granularity to the term urban digital twins. Now, digital twins are not about robotics; they are about having different, more sophisticated digital representations of what we call the urban sphere. There are certain principles to start from: it is a common good, there is a value behind it, and we have to deal with quality, adaptability, openness, security and privacy, curation, standards, and federated models overall. In principle, these principles are not absolutely new, but they have to deal with what I call much more massive information becoming available. Granularity comes down to fractions of millimeters, and information exists not only on top of the landscape but in the landscape and in the ground below it. In urban terms, all the pipes and things below the surface of the urban infrastructure are absolutely important, together with all their functions: water pipes, sewage systems, metro systems, underground tunnels for transport and everything. To handle these things with at least the same high-level digital principles as what we did above the ground, with the typical 3D models, is a challenge for the future; but it is not only a challenge, it is of course also a prospect of doing much more to organize our common space in the city.
Herein, we have to organize these massive data and active data streams, which come from sensing certain aspects in all these systems. We have to organize this in data spaces, and I just give you an overview of the recent data spaces that the European Union is working on: from manufacturing to the Green Deal, mobility, health, finance, energy, and so on, and on the right of the slide you see smart communities, also mentioned for our action space. This is an open system where, of course, many more spaces can be joined, but there is a lot of activity here, especially in mobility, the third entry from the left; mobility is, as I said, of absolute priority at the moment, and you see all the kinds of industrial ecosystems that deal with it: construction, tourism, textiles, proximity, automotive, health, and so on. And that is what we don’t have, actually, we may discuss that later: really good means of dealing with that complexity of information. The ontology of urban digital twins is a kind of common conceptualization, and the digital semantic models and procedural models we are dealing with start with terms, properties, identity, status, annotations, roles, causalities and semantic networks, which are more or less known as a principle. Nevertheless, we have to do much more with procedural networks, because there is action in the digital twins. The most challenging difference is that we don’t have only static facts; as I said previously, all these things are on the fly, with sensors all around and information streams coming from every side, and the whole thing is not just for presentation, full stop, where you have the presentation and anyone can take it; now the whole system is for directly steering the city.
I’m not favoring a direct robot working behind that; nevertheless, for traffic management, traffic light management, traffic optimization and the like, there is a lot of interactive connection between the physical and the digital. Not in all cases will that be possible, but for handling this we need procedural networks. That doesn’t happen just by chance: we have to build models and discuss the models of this. And the ontologies that we set up should have, at the ontology level, capabilities of comparison across different ontologies, globally and across different cities, because at the moment many different proposals for an ontology of urban cities are on the table. There is not yet a real international standard for it. We have to compare these ontologies, and we have to do that automatically, because these are such complicated systems. Imagine a data management plan for a whole urban digital twin: that is rather complex, and for comparison we need that automatism. We need the function of a union of ontologies: we have different subdomains which we model first, and then we have to merge these to get a more complete, holistic view of the ontologies of the digital twins. We have to do generalization in the ontologies, because we have to deal with detailed technical structures and, at the same time, support upper management in the city, providing decision support on very different organizational levels. Coherence analysis is the question of whether the ontology and the details of the stored data are coherent with legal boundary conditions, financial boundary conditions, ethical boundary conditions, and so on. This list is long, and there is detailed discussion of it, but we don’t have the time at the moment. We have to homogenize the terminology, and we are working on the formats and meta-information.
Nevertheless, the most important thing for the future would be new standardized workflows for standard operating procedures in this big information flow. At the same time, for doing this logistics just in time, we need not just to decide just in time, but to implement just in time. This implementing just in time is something with which we also do not have very good experience at the moment. Behavior models are among the challenges; I come to the end of my presentation. Besides cloud computing, all these things are being discussed, but at the end you also see my hint at implementing just-in-time demands: absolutely new science is needed behind that. My recommendations for action are: we have to record, work with scenarios, and work on complexity management. We have absolute deficits in building models for complexity management; sorry, but it is really urgent that we do something about it. And for the full management cycle, you see also the entry on the right side, audits: independent control of plans and implementations, and of whether it works, whether it has the intended effect, whether it reaches the goal that was planned. Thank you for your attention. I’m looking forward to the discussion, and here you have the download link for the presentation. I have more material for those who are interested in digital twins, and I am very happy to have direct contact later. Thank you.
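The "union of ontologies" operation Mr. Kremers calls for, merging subdomain models and comparing them automatically, can be illustrated at toy scale with term graphs. This is a deliberately minimal sketch, not a real ontology formalism such as OWL: here an ontology is just a map from terms to related terms, and the "transport" and "utilities" subdomains are invented assumptions.

```python
# Two small subdomain "ontologies": term -> set of related terms.
# Both invented for illustration; real urban-twin ontologies are
# far richer (properties, roles, causalities, procedural links).
transport = {"road": {"sensor", "vehicle"}, "vehicle": {"sensor"}, "sensor": set()}
utilities = {"pipe": {"sensor"}, "sensor": set()}

def merge_ontologies(a, b):
    """Union of two ontologies: shared terms keep the union of
    their relations, so subdomain models merge into one whole."""
    merged = {}
    for onto in (a, b):
        for term, rels in onto.items():
            merged.setdefault(term, set()).update(rels)
    return merged

def shared_terms(a, b):
    """Terms the two ontologies have in common -- the starting
    point for any automatic comparison or coherence check."""
    return set(a) & set(b)

city = merge_ontologies(transport, utilities)
print(sorted(city))  # -> ['pipe', 'road', 'sensor', 'vehicle']
print(sorted(shared_terms(transport, utilities)))  # -> ['sensor']
```

Automating comparison at real scale would mean aligning terms that differ in name but not in meaning, which is exactly why a shared standard, as the speaker notes, is still missing.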

Xiaofeng Tao:
Thank you. And our fifth speaker is Professor Ricardo from Mexico. His topic is challenges and commitments in digital technology and a sustainable environment according to the United States-Mexico-Canada Agreement. Let’s welcome Professor Ricardo from Mexico. Thank you very much.

Ricardo Israel Robles Pelayo:
Thank you very much. Hello, everyone. I would like to thank Dr. Liu, Dr. Tao and Dr. Doko, and I would like to say hello to Professor Ke Gong and Dr. Horst Kremers. I would like to talk about the challenges and commitments in digital technology and a sustainable environment according to the United States-Mexico-Canada Agreement, known as the USMCA. First, I would like to show how the legal framework is formed in Mexico, since it is important for efficiently addressing the challenges of the green and digital transitions towards a sustainable future. In general terms, our Mexican Constitution stands as the highest legal norm, followed by international treaties, federal laws and local legislation, along with the official Mexican regulations. In human rights matters, both our Constitution and international treaties occupy a place of equal importance, ensuring the protection and promotion of human rights in accordance with the pro persona principle. Within our Constitution, Article 4 expressly recognizes that every person has a right to an environment adequate for their development and well-being. This underlines the relevance of environmental sustainability in our fundamental legislation. At the international level, Mexico has signed 62 international instruments on environmental matters, including notable events such as the United Nations Conference on the Human Environment in Stockholm and the United Nations Conference on Environment and Development in Rio de Janeiro, Brazil. Among the most important treaties, the USMCA seeks to establish a framework for economic and commercial cooperation between the three neighboring nations. Although the USMCA does not specifically address the use of digital technology in environmental sustainability, it is undeniable that these two areas are crucial to the future of our societies and economies. Chapter 24 of the USMCA focuses on the environment and establishes goals to promote the protection and sustainable management of natural resources.
This includes a commitment to create and effectively enforce environmental laws and to comply with the international environmental agreements to which we are a party. Although the agreement addresses issues concerning digital technology, e-commerce, and data protection, it is important to consider specific technological aspects with application to the environment. Harmonizing regulations and policies around these issues is a crucial challenge. Regarding the Mexican national legal framework, the environmental law, called in Spanish Ley General del Equilibrio Ecológico y la Protección al Ambiente, in Article 5 promotes the application of technologies, equipment, and processes that reduce pollution, and promotes scientific and technological research in favor of the environment. As I mentioned at IGF 2021, authorities and civil sectors should consider using big data to generate and use clean and renewable energy. And the question is: what can we say about the use of artificial intelligence, which is being discussed during the current IGF in Kyoto? Well, I think it is important to know its operation and applicability for protecting the environment. Nowadays, the demand for sustainable and efficient electrical energy is an unavoidable priority. In this context, AI has emerged as a transformative tool that can revolutionize the generation and management of renewable energy. AI, through big data analysis and autonomous decision-making, can improve the efficiency and reliability of renewable energy sources such as solar and wind. As we can see, we have a solid legal framework to address the green and digital transitions towards a sustainable future. However, we need more. In addition, it is essential that authorities join together and collaborate closely with private business corporations, civil society, and academia at the national and international levels.
Moreover, with the support and advice of world experts like my colleagues in this workshop, we can, on one hand, learn from their experience and, on the other hand, exchange ideas and strategies to build a green and sustainable world with the support of technology. Only by working together can we take full advantage of these political and legal instruments and build a sustainable and equitable future for all. In addition to the above, some of the challenges where technological strategies must be implemented are: first, reducing greenhouse gas emissions in all economic sectors, promoting the adoption of cleaner technologies and sustainable practices. Second, continuing regional and global cooperation and investing in digital infrastructure, thus facilitating the transition to a digital and sustainable economy. Third, ensuring equity and social justice in the transition through training and skills development, so that communities affected by changes in industry can fully participate in the digital and green economy. And finally, continuing work on the harmonization of standards and norms related to technology and the environment between the three countries, thus facilitating trade and cooperation in areas crucial to the sustainable future of North America. Thank you very much for the invitation again.

Xiaofeng Tao:
Thank you, Professor Ricardo. Maybe there is a technical problem, so our sixth presenter is offline for the time being. So we move to open discussion. First of all, I have three questions for our experts. After that, let's see whether any onsite or online participants have questions or comments. The first question focuses on key challenges in governance issues. The second focuses on strengthening cooperation among multiple stakeholders. And the third focuses on policy frameworks, policy guidelines, regulations, and things like that. I hope each of our experts will select one, two, or three of these. First, Professor Liu.

Liu Chuang :
Yeah, I think there is a challenge for the future, under the new transition. The challenge is how to help vulnerable people in developing countries, least developed countries especially, and in mountain areas, small islands, and countryside villages. These people are vulnerable. We need to call on all communities, governments, and international organizations to pay more attention to them. These people need help; they really need freedom from hunger, poverty reduction, and protection from disasters. So I think the challenge is how we can pay more attention to these people, not only cities, not only rich people. From my experience of working with these people, this is where I pay more attention. So the challenge is whether science, technology, ICTs, and commercial actors can all work together in the best way for them. This is my opinion, from my experience.

Xiaofeng Tao:
Yes. Thank you, Professor Liu. Professor Ricardo.

Ricardo Israel Robles Pelayo:
Well, I would like to answer the same question, and I am going to speak from the legal point of view. As I said, it is important that the authorities get actively involved in international forums such as this one, the IGF. In fact, I am especially happy because some Mexican parliamentarians who are interested in internet governance issues attended this IGF in Kyoto. Without any doubt, this is a great start. Now it is important that they do an excellent job in materializing the creation of laws and their due promotion, to achieve the goal of taking advantage of the digital and green transitions for sustainable development. Thank you.

Xiaofeng Tao:
Thank you. Miss Tomoko.

Tomoko Doko:
Okay, about question A, the challenges, I would like to give my opinion. The situations of developing countries and industrialized countries are probably completely different. But what I feel now is that, among government officers, scientists, and the private sector, there are many people who work on these issues very seriously. However, they tend to work independently; in a way, I feel they sometimes work separately, in isolation. In that case, what is lacking is people or organizations who can bond these actors together. So people or organizations who could function as a bridge or bond will be necessary in the future. That is the challenge, in my understanding. Thank you.

Xiaofeng Tao:
Thank you. And Mr. Kremers.

Horst Kremers:
Yeah, I think governance is an area where we in science lag behind. A lot of new methodology is needed, not only developing existing methodology, but new methodology for complexity and for processes. But science is not working alone. Like the other speakers, I am also interested to learn more from Ricardo's experiences in Mexico. We have to deal with the people in the administrations, and, as I know from Germany, administrations are not really well equipped for big data and complex data. Sometimes an educational setting is also needed. So how do we build the needed competencies? Because after science shows that something works, the whole thing normally goes into administration, into operational long-term systems running for the service of citizens. That is not only a scientific task. This kind of governance is needed to set these systems up in a participative mode; as Ricardo also said, we work not only with the government but also with citizens. And citizens are not generic citizens: there are engineers, there are doctors, there are health-specific agencies and so on, in the service of people. These kinds of actors need to discuss with us, and that is what we need to support. Thank you.

Xiaofeng Tao:
Thank you. Please close this presentation. So, to our participants: do we have any questions or comments for our four speakers today? Please.

Audience:
Yeah, hi, good afternoon. My name is Tarek Hassan. I am the head of the Digital Transformation Center Vietnam, working on behalf of the German Federal Ministry for Economic Cooperation and Development, with GIZ Vietnam. My question is to Tomoko-san. I was very inspired by the work you do, since we also focus on facilitating the green and digital twin transition. I was wondering about the inter-ministry collaboration, because I think you mentioned two ministries, the Ministry of Environment and a ministry of rural development, or some sort of ministry focused on biodiversity; sorry that I don't have the name at the top of my head. But I was wondering what the division of labor is, also with respect to the role of the Ministry of Internal Affairs and Communications. Is this more within the jurisdiction of the Ministry of Environment? Is the Ministry of Internal Affairs and Communications of Japan also working on biodiversity issues? I think this comes back to the question of: do the green folks work on digital, or do the digital folks work on green? And what are the collaboration mechanisms surrounding that? Thank you so much.

Tomoko Doko:
Thank you for your questions. Maybe I can show my PowerPoint again. Zuwei, could you show my PowerPoint? Around page nine, please. Okay, maybe I can control it. The two ministries you are talking about are, first, the Ministry of Environment, called MOE in English for short, and second, the Ministry of Agriculture, Forestry and Fisheries, which we call MAFF, M-A-F-F. What are the different functions of these two ministries? MOE, for example, is the authority under which the law I introduced, the Wildlife Protection, Control, and Hunting Management Act, was proposed and revised. MAFF, on the other hand, is in charge of agriculture, and where land belongs to the government, what we call National Forest, they are in charge of that too. So MOE and MAFF have a lot of overlapping issues; especially for sika deer and wild boar population control, they need to collaborate. Due to time constraints, I did not introduce this in much detail, but basically there are consultations between the two ministries. The new goal I introduced was set up together by the two ministries, so it is a common goal of the two ministries and a common goal of the Japanese government, too. Also, under the ministries there are the prefectures, and under the prefectures there are cities and villages in Japan. As for how they collaborate, in this figure the red color means the country's work and the blue color means the prefectures' work. They work together in some domains; for example, in the second component I mentioned, control of capturing wildlife, the government and the prefectures each do their part together, while some other work is divided and carried out independently. Basically, the country prepares the basic guidelines and the law, and under that, the prefectures run the practical programs; the implementation of programs is done by the prefectures. That is how they collaborate with each other. Did I answer your question?

Audience:
The question that for us is really interesting is how do you build up capacity within the ministries? Because you mentioned IoT devices that are being deployed. So is that technical capacity for digital transformation you build within the ministries? Is that something that you outsource? Do you work together with the digital ministry? Or does the digital ministry actually have a role in that?

Tomoko Doko:
In my understanding, a digital agency was newly established inside the Cabinet, but they are in charge of issues such as personal identification numbers, for example, and are not doing this kind of work very much. The ICT techniques and technologies I mentioned and introduced today were proposed by the private sector. So what happens is that, at the top level, the Ministry of Environment and MAFF collaborated to set up a goal as a government; where necessary, they sometimes need to revise the act, and based on such a revision, the private sector and the prefectures start to work on it. I belong to the private sector, and I received certification as an implementer of this program. So the prefecture develops a program and we implement it. That kind of collaboration is occurring in Japan.

Audience:
Yeah, we talk about the twin transition. So what kind of data could be accepted? For example, how do we know whether the data we are using is good data or poor data? This is to Professor Liu.

Liu Chuang :
Yeah, good question. There is big data in society now; a lot of data comes in. But how do we identify which data is trustable, which data is good and can be used for your research or for your business? This is a really challenging and good question, thank you. Data is divided by source: some data comes from governments, some from the private sector, and some from research, from university research sectors, and there are different policies on how to open each of them. For the research part, and I am from the Chinese Academy of Sciences, we have the World Data System, which is under the International Science Council. In total there are 86 world data centers in the system, and there is peer review for the datasets. Because all the data comes from research, from different scientists, there are different ideas, different methodologies, and different results, so we need several controls. First, whose data is it, who is the author? We need to protect the original authors and record where the data comes from and how the data was produced, with which model and which methodology; you need curation of the data. Second, we need to check data security under the different policies of different countries and organizations: personal security, business security, and so on. Then we need to check data quality. I am a geographer, and for geographers there are different kinds of data, tables, raster, geolocation, different resolutions; it is very complicated, so we need experts to review this kind of data.
Our data center publishes through Global Change Research Data Publishing and Repository: through this publishing methodology and peer review, the data can be made very trustable. So that is what we call trustable, and my suggestion is, if you have data, go to the World Data System; there is an international regulation there that makes the data trustable. Thank you.
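[Editorial note: the curation workflow Professor Liu describes — checking authorship, provenance, methodology, security, and quality before marking a dataset trustable — can be sketched as a simple checklist. The field names, threshold, and example record below are illustrative assumptions, not part of the World Data System's actual procedures.]

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """Metadata for a submitted dataset (fields are illustrative)."""
    title: str
    authors: list                 # original authors, for provenance protection
    source: str                   # e.g. "government", "private", "research"
    methodology: str              # how the data were produced (model, resolution)
    contains_personal_data: bool  # triggers a security/privacy check
    quality_reviewed_by: list = field(default_factory=list)  # expert reviewers

REQUIRED_REVIEWERS = 2  # assumed peer-review threshold for this sketch

def trust_issues(rec: DatasetRecord) -> list:
    """Return the reasons a dataset cannot yet be marked trustable."""
    issues = []
    if not rec.authors:
        issues.append("no attributed authors (provenance unclear)")
    if not rec.methodology:
        issues.append("production methodology undocumented")
    if rec.contains_personal_data:
        issues.append("personal data present; security/privacy check required")
    if len(rec.quality_reviewed_by) < REQUIRED_REVIEWERS:
        issues.append("insufficient expert peer review")
    return issues

record = DatasetRecord(
    title="Glacier extent 2000-2020",
    authors=["A. Researcher"],
    source="research",
    methodology="Landsat classification, 30 m resolution",
    contains_personal_data=False,
    quality_reviewed_by=["Reviewer 1", "Reviewer 2"],
)
print(trust_issues(record))  # [] -> no blocking issues; dataset is trustable
```

An empty list means every check Professor Liu lists has passed; any entry blocks publication until the corresponding curation step is done.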

Daisy Selematsela:
Yes, the internet is a challenge where I am. Okay, let me go ahead. Thank you, colleagues. What we want to highlight with you is how we look at open access repositories as an accelerator in enhancing our South African SDG hub, and these all use the same data. I want to move quickly, given the time. We are aware that African leaders have responded to the SDG agenda by setting their regional priorities based on their Common African Position, and following that, we also look at the African Union Agenda 2063, which is what highlights sustainability issues for us. The African Union Agenda places prominence on research and innovation for sustainable development. An important development is the formulation of the SDGs, with universal recognition of the importance of quality education, especially in the Global South, which is goal four. When we look at the goal four targets of particular relevance to those of us in knowledge management and knowledge production, we look at repositories, data stewards, libraries, and information specialists, which is also aligned to goal three: how to ensure the livelihood and well-being of our population. So how do we come in, from where we are, with what we want to do regarding sustainable development and sustainability? We look at goal three and then goal four. Goal four on education targets the issues around who the actual role players are, especially those involved in knowledge production, and who bears the responsibility for the complex and interrelated issues of accessibility and affordability of knowledge resources. I want to indicate that knowledge management and its impact on the SDGs is highlighted within four areas: availability, accessibility, acceptability, and adaptability.
Here we look at how we facilitate the sharing of information and accessibility, the roles that information literacy programs play, and acceptability, making open-access academic journals available, because if we want to address the SDGs, we need to look at all these things. The other aspect is adaptability: how do we consider the training of researchers, policymakers, citizen science, and public outreach support, to ensure the application of knowledge in solving key global disaster and health problems? I also want to indicate the indicators that are key to us in the Global South, and especially in Southern Africa where we are. We look at research and development spending as a share of gross domestic product; 50% of annual investment in research and development performed in South Africa comes from international partners. We also look at the indicator on qualitative measurement of use of and access to ICTs, especially now that we are looking at the fourth industrial revolution; the ability to produce high-technology exports; and the issue of higher education internationalization, because we know that our scientists, researchers, and postgraduate students are international, and they co-publish, like what we are doing today, co-presenting. We also look at indicators on the number of scientists the country produces and the number of patents filed in our country. What is also important is the number and impact of articles published in highly ranked journals. These are the indicators that are highlighted.
Quickly, the influences on our indicators, especially in Southern Africa, are governments' strategies: the national research and development strategy, the plan for higher education, and the plan for science, technology, engineering, and maths, which is the ten-year innovation plan from our Minister of Science and Innovation. We also have our South African Open Science Draft Policy, which is also assisting us. So I will now leave it to my colleague, Lazaros, to touch on how we support knowledge for sustainable development and how we capture the SDGs. Lazaros, you can come in.

Lazaros:
Morning, colleagues. Because of time, I am just going to touch quickly on some of the things we are doing. In South Africa, through the National Advisory Council, our strategy is to support repositories, as you can see here, working with universities, research councils, national facilities, institutions, museums, and others. Basically, we are trying to ensure that repositories fall within the institutional policies that institutions are prioritizing, so that the content they produce can be linked into all the repositories that we can push. There are two policy streams: the first is research outputs, which each and every university produces, and the second is creative outputs, which could be film arts, visual arts, music, theater, design, et cetera. What we have tried so far is to ensure that all the universities have a repository, both for publications and for data. Also, through the University of Pretoria, there has been a project where we developed the South African SDG app to harvest all the collections that are within the universities. This also raises the issue that we need to train our librarians to be able to index some of this content, to ensure it falls within the SDG criteria, through a national taxonomy that is also supported by the National Development Agenda of the country. The leverage for open access is through the library experts we are capacitating in the repository field, to ensure that the repositories also follow best practices from around the world. Institutions have a choice, but most of the universities are using DSpace as their software, and through DSpace we are able to collaborate to fix problems that can happen, et cetera.
These are some of the repositories in the institutions, and you can also see the data repositories created so far by some of the research-intensive universities, and also the OJS system. Thank you, Chair.
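[Editorial note: the workflow Lazaros describes — harvesting repository metadata and indexing it against an SDG taxonomy — can be sketched as a keyword classifier. The mini-taxonomy, field names, and sample records below are hypothetical illustrations, not the actual national taxonomy or the SDG app's implementation.]

```python
# Hypothetical mini-taxonomy mapping SDG numbers to indicator keywords.
SDG_TAXONOMY = {
    3: {"health", "well-being", "disease"},
    4: {"education", "literacy", "teaching"},
    13: {"climate", "emissions", "warming"},
}

def tag_record(metadata: dict) -> list:
    """Assign SDG tags to one harvested metadata record by keyword match."""
    # Concatenate the descriptive fields a repository typically exposes.
    text = " ".join(
        str(metadata.get(f, "")) for f in ("title", "abstract", "subjects")
    ).lower()
    return sorted(
        sdg for sdg, keywords in SDG_TAXONOMY.items()
        if any(kw in text for kw in keywords)
    )

# Two records as they might arrive from an institutional repository harvest.
harvested = [
    {"title": "Teacher literacy programmes in rural schools",
     "abstract": "Education interventions in under-resourced districts."},
    {"title": "Climate variability and crop yields",
     "abstract": "Rising emissions affect smallholder production."},
]
for rec in harvested:
    print(rec["title"], "->", tag_record(rec))
```

In practice a harvester would pull these records over OAI-PMH (the protocol DSpace exposes) and librarians would refine the machine-assigned tags, which is exactly the training need Lazaros raises.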

Xiaofeng Tao:
Thank you, Professor Daisy and Mr. Lazaros. Am I right? I'm sorry. So, Professor Zhou is the remote moderator; are there any questions or comments from online? Professor Zhou.

Online moderator:
Yes, Professor Tao. I had a few questions online, but due to the time restriction, I am not sure our speakers can respond to all of them. There is a question from Andrej Khrushchev from the Common Fund for Commodities. He indicates that global commodity value chains are very complex: could the speakers speculate on how technology could support the green transition for food security and energy efficiency? I think Horst also raised his hand; I don't know if he has a response or a question. I'll pass the floor back to the onsite chair, Professor Tao. Yes, thank you.

Horst Kremers:
Just a short remark, because such questions are unusual in our normal working groups. There are professions around, such as the one Andrej represents, that would be needed to join this whole effort, for all the consequences of what we are doing; we are not just collecting data, we are running these processes for certain purposes. And as Andrej said, for the green transition, for food security, energy efficiency, transport efficiency, and so on, all these data spaces that I mentioned in my viewgraphs come together, and we have to find out how to put them together. There are models in food security, there are models in energy efficiency. As I said in the other viewgraphs, we have to join these ontologies; this is a problem in itself, and I hope to stay in contact to do more in that direction.

Xiaofeng Tao:
Okay, thank you. I think there are no more questions right now, because the time is limited. Okay, thank you, Professor Zhou. Due to our limited time today, and since all of the speakers presented many excellent points of view, I might need another one or two hours to conclude. So this is the end of this workshop. We want to extend our most profound appreciation to all the experts for their exceptional presentations, to both onsite and online participants for their insightful questions, and of course to the organizers, whose dedication and tireless effort made this workshop a success. Thank you very much. I would like to call all of you to come up here; let's get together to take a picture. Okay? Very good. Come on. So we get to know each other, and next year we can meet again. Thank you. I'd like to take the opportunity to give my special regards to Liu Chuang, for we have worked together for more than 20 years on these topics, and I hope we can continue to do so in the future. Thank you. Thank you.

Speech statistics (speed, length, time)

Tomoko Doko: 140 words per minute, 1724 words, 738 secs
Audience: 170 words per minute, 336 words, 119 secs
Daisy Selematsela: 164 words per minute, 766 words, 281 secs
Horst Kremers: 123 words per minute, 2398 words, 1166 secs
KE GONG: 112 words per minute, 1594 words, 857 secs
Lazaros: 136 words per minute, 396 words, 175 secs
Liu Chuang: 138 words per minute, 1749 words, 763 secs
Online moderator: 101 words per minute, 108 words, 64 secs
Ricardo Israel Robles Pelayo: 124 words per minute, 990 words, 481 secs
Xiaofeng Tao: 134 words per minute, 636 words, 284 secs