Bridging Connectivity Gaps and Harnessing e-Resilience | IGF 2023 Networking Session #104

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Bridging Connectivity Gaps and Enhancing E-Resilience: Innovative Solutions for Underserved Areas

This networking session at the IGF in Kyoto focused on addressing global connectivity challenges and enhancing e-resilience, particularly in underserved areas and during disasters. The discussion centred around innovative solutions to connect the unconnected and make existing connections more resilient.

Dr. Toshikazu Sakano from ATR introduced LACS (Locally Accessible Cloud System), a technology designed to restore local communication in disaster-affected areas. LACS comprises a battery, a Wi-Fi access point, and small servers, and offers social networking services such as chat, video communication, and information sharing. The system can be deployed quickly in areas with disrupted connectivity, providing essential communication capabilities.

Jeffrey Llanto and Glyndell Monterde from the CivisNet Foundation presented the implementation of LACS in the Philippines. They highlighted its successful testing during the COVID-19 pandemic and its adaptation to local needs, including integration with a learning management system and use as a charging station during disasters. The project demonstrated LACS's potential for improving access to information, enhancing coordination during disasters, and increasing community engagement.

Chandraprakash Sharma from WISFLUX India discussed the future prospects of LACS in developing nations, emphasising its potential for providing access to critical information in remote areas of India. He highlighted the system's versatility in addressing various challenges, including natural disasters, and its potential for integrating edge AI to offer localised services without internet connectivity.

Dr. Haruo Okamura presented an innovative solution combining LACS with optical fibre cables to provide internet connectivity to disconnected areas. He introduced the BIRD (Broadband Infrastructure for Rural Area Digitalization) cable, which applies submarine cable technology to terrestrial use. This approach allows cost-effective and efficient deployment of fibre-optic networks in challenging terrain.

The presentations highlighted several key points:

1. The importance of restoring local communication during disasters and in underserved areas.
2. The potential of LACS to bridge the digital divide and enhance disaster resilience.
3. The adaptability of LACS to various use cases, including education and disaster response.
4. The combination of LACS with fibre-optic technology as a long-term solution for connectivity.
5. The role of community engagement and capacity building in successful implementation.

The discussion also touched upon challenges such as infrastructure limitations, cost considerations, and the need for sustainable, long-term solutions. The speakers emphasised the importance of phased approaches, local implementation, and adherence to international standards in addressing global connectivity challenges.

In conclusion, the session presented a range of innovative solutions aimed at bridging connectivity gaps and enhancing e-resilience. These approaches, combining portable communication systems like LACS with advanced fibre-optic technologies, offer promising avenues for connecting underserved areas and improving disaster preparedness. The speakers underscored the need for continued collaboration, adaptation to local needs, and integration of emerging technologies to achieve universal connectivity and resilience.

Session transcript

Moderator:
Hello, good morning everyone from the Kyoto International Conference Center. Sorry, we are five or six minutes late due to some technical reasons. We are here at the IGF; I'm Binod Basnet, Director of Educating Nepal, and this is our networking session, Bridging Connectivity Gaps and Harnessing e-Resilience. As global stakeholders strive for last-mile connectivity, we know that one-third of the world's population still does not have access to the internet, and if we break this down to the LDCs, about 64% of the population does not have access. During COVID it was quite evident that connectivity is a lifeline in many ways: for information access, for healthcare, for education, and so forth. But it's not only about connectivity; even in regions and countries that have connectivity, it has to be resilient connectivity. In cases of disaster, connectivity tends to get disrupted. So what is the backup plan? That is a very pertinent question. With these two issues in hand, connecting the unconnected and making the connected resilient, today we are proposing some innovative solutions for both.

So today we have a panel of speakers for our networking session. First, we have Dr. Sakano from ATR, who will talk about the LACS system as a solution for disaster and backup communication. Second, we have Jeffrey Llanto and Glyndell Monterde from CivisNet; Jeffrey is the executive director of CivisNet and Glyndell is a representative of the organization. Third, we have Mr. Chandraprakash Sharma from WISFLUX India Private Limited. And finally, we have one of the champions and pioneers of optical fibers and standards, Dr. Okamura-san, who will also propose an innovative solution for connecting rural and unconnected populations. So without further ado, I'd like to ask Dr. Sakano to give his presentation and an introduction of the LACS system.

Toshikazu Sakano:
Thank you. Can you hear me?

Moderator:
Yes. Okay.

Toshikazu Sakano:
So please show my slide. Okay. So I can share the slide by… Okay. Can I start? Okay. Thank you very much for the kind introduction. My name is Toshikazu Sakano from Advanced Telecommunications Research Institute International

based in Kyoto, Japan. I started research and development on ICT for disaster countermeasures just after the big earthquake that hit the eastern and northern parts of Japan in 2011, when I was working for NTT Laboratories. I then moved to ATR, my current institution, and started a new project called the LACS project. In my talk, I'd like to introduce our research and development and my ideas for resolving the issues that arise in disaster situations. Next slide, please. This slide introduces ATR. ATR is a private research institute founded in 1986, and its main themes are computational neuroscience, deep interaction science (including communication robotics; at the bottom right you can see an android robot developed by ATR), wireless communications, and life science. I am from the Wave Engineering Laboratories, which conducts research and development on wireless communication and other ICT issues. Next, please. Let me start with the background of my research and development. This slide shows the number of disasters by continent and the top 10 countries in 2021. Looking at it, the Asia-Pacific region experiences many disasters. Next slide, please. When a big earthquake or other major disaster occurs, the communication network is often disrupted: base stations and communication buildings are damaged, which means you can no longer use the phone or the internet. That prevents people from using everyday services such as Google, Yahoo, Facebook, and Amazon.

At the same time, during a big disaster the demand for communication goes up, so a large gap opens between supply and demand. This is the issue I wanted to resolve using ICT. Next slide, please. In resolving this issue, I focused on locality, or local communication. This slide shows a human characteristic: people communicate more with those who are physically closer to them. This can be called communication locality. So if you restore local communication in a disaster situation where the internet and other network services are disrupted, that will help the people affected.

Toshikazu Sakano:
That is what I wanted to do. Next, please. After starting this research and development, while I was at NTT, I proposed an architectural concept called MDRU, the Movable and Deployable ICT Resource Unit. The concept is that once a big disaster occurs, you can bring the resources needed to restore local communication services to the disaster-affected areas and restore them quickly. This concept was standardized at ITU-T as Recommendation L.392; this was when I was at NTT Labs. Next, please. After I moved to ATR, I launched a new project called LACS. The MDRU focused on telephone service, while LACS is almost the same but focuses on internet services such as social networking. LACS is comprised of a battery, a Wi-Fi access point, and small servers, and I put software for a social networking service on the server. That is the concept of LACS. Once a big disaster occurs, you bring the LACS unit to the disaster-affected area with no internet connectivity; people can then access it over Wi-Fi using their own smartphones and reach the social networking functions through their browsers. Next, please. This is the prototype of LACS. It is comprised of the LACS server, a Wi-Fi access point, a battery, a network hub, and some papers, all packed in a portable case. Next, please. These are the functions LACS offers as a social networking service: a chat function, one-to-one and group video communication, a feed function, and a pages function. These functions are offered to the people around the LACS device. Communication is limited to the local area, but people can keep using these social networking functions. Next, please. Our team has conducted a series of feasibility studies, mainly on seven islands in the Philippines.

The details will be presented by the next speaker, Jeffrey, so I will skip this slide. Next, please. This is another activity, after the big typhoon in the Philippines; the details will also be presented later. Next, please. I have focused on disaster issues, but LACS can address other issues in the world as well. While internet use is widespread in everyday life and work for many people in high-income countries, as Binod said, one-third of the population worldwide does not have access to the internet. That is a big issue. To bridge that gap, LACS can be used efficiently in areas where broadband or internet connectivity has not fully penetrated. Next, please. This slide explains our current status. We have run the research and development of LACS for about five years with the support of stakeholders, and, as you can see at the bottom of the slide, we should now move on to the commercialization phase. So I launched a startup company named Negro Networks, whose objective is to deliver the LACS system and solutions based on it. At the same time, we recently extended our R&D to include artificial intelligence in LACS so that it can be used efficiently by the first responders to disasters. We call this system FLOS, the Frontline Operational System. Next, please. Let's complete faster, Sakano-san, we're running out of time. Okay, this is the summary. Thank you. Thank you for your attention.
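
The LACS concept described above, a battery, a Wi-Fi access point, and a small server hosting chat, feed, and pages functions that clients reach through a browser, can be illustrated with a minimal sketch. This is a hypothetical model of such a local, internet-free social networking store, not ATR's actual software, whose implementation is not described in the session:

```python
import time

class LocalSNS:
    """In-memory store for a LACS-style local social networking service.

    Everything lives on the local box; no internet connectivity is assumed.
    Hypothetical illustration only, not ATR's implementation.
    """

    def __init__(self):
        self.feed = []    # public announcements (the "feed" function)
        self.chats = {}   # room name -> list of messages (the "chat" function)
        self.pages = {}   # page title -> content (the "pages" function)

    def post_feed(self, author, text):
        entry = {"author": author, "text": text, "ts": time.time()}
        self.feed.append(entry)
        return entry

    def send_chat(self, room, author, text):
        self.chats.setdefault(room, []).append({"author": author, "text": text})

    def set_page(self, title, content):
        self.pages[title] = content

    def latest_feed(self, n=10):
        return self.feed[-n:]

# Local clients connected over Wi-Fi would drive these calls via a browser UI.
sns = LocalSNS()
sns.post_feed("barangay office", "Evacuation center open at the school")
sns.send_chat("volunteers", "Ana", "Water supply restored on the east side")
sns.set_page("shelter-map", "School gym: 120 slots; Chapel: 40 slots")
print(sns.latest_feed(1)[0]["text"])  # -> Evacuation center open at the school
```

The point of the design is that every function works entirely within the local Wi-Fi cell, which is why the unit stays useful when the wider network is down.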

Moderator:
Thank you so much, Sakano-san. Now I’d like to request Jeff to make his presentation.

Jeffrey Llanto:
I'll be the one. Thank you, Binod. To start with, I'm Jeffrey Llanto, Executive Director of the CivisNet Foundation, and with me is Glyndell Monterde, our Project Manager. We will talk about the implementation of the Locally Accessible Cloud System (LACS) in the Philippines. CivisNet Foundation began as a government project under the Department of Science and Technology and evolved into a foundation in the year 2000. CivisNet was one of the pioneers in providing internet connections in the Philippines, way back in 1994, and we have been working with different partners and stakeholders such as ATR and APNIC. The LACS implementation in the Philippines ran from 2019 until March 2023. Next slide, please. As mentioned, we were a government project that evolved into a foundation. Next slide, please. We have local and international partners: a strong partnership with the government, and partnerships with the APNIC Foundation, ATR, NTT, and USAID. Next slide, please. Our ICT efforts in the Philippines have been recognized, including at the IGF in Mexico. Next slide. Dr. Sakano has already elaborated on the LACS project itself, so next slide. This is the implementation site: Hilutungan Island, in the central part of the Philippines, around 7.5 kilometers from the nearest internet point of presence, from which we push the signal to the island. Next slide. This is the timeline of the LACS project, from 2019 until 2023. What is very significant is that this was during the pandemic era. When Dr. Sakano tested LACS in Japan, there were several use cases that could not be exercised there but were in high demand in areas like the Philippines.

We did social preparation in 2019, and then, when the pandemic came in 2020 and 2021, new use cases were introduced, such as the learning management system that was integrated into LACS with the help of software developed in India. This was not part of the original plan we had back in 2019. Then, in 2022, disaster came: a big typhoon hit the Philippines, and again new use cases were introduced, especially on the hardware side, with LACS serving as a charging station for devices on the island. Next slide, please. These pictures show the activities. To implement the project, you need to reach the grassroots level, so we introduced the technology very simply so that the folks on the island could absorb the information quickly. Next slide. After training them, we proceeded to put up the infrastructure. This was a challenge because the island has no electricity; it relies on solar power, and it does not even have a strong internet connection, just weak signals from the telephone companies. Next slide. Once the infrastructure was installed, the Locally Accessible Cloud System itself was set up, and we ran trainers' training. In the picture you can see that we trained both the teachers and the students to access the system. Next slide. For usability, several stakeholders are involved: the school and the local community, including fishermen, housewives, students, and so on, with constant training for each group. What is very interesting is that we also tried to introduce LACS in non-disaster times.

We used LACS at our Christmas party, with games running on it, so there are applications for normal times too. Next slide, please. For LACS data access and retrieval, we tested several areas, not only Hilutungan but also the neighboring islands under the APNIC project. LACS has already evolved into another project, which we call ISLET, or Internet for Sustainable Livelihood, Education, and Tourism, under the partnership with the APNIC Foundation. Next slide. Lastly, we need to empower the community. We train the teachers in particular, because they have the greatest capacity to understand and absorb the system, so that they can troubleshoot and even install it by themselves. Next slide. I will now hand over to Glyndell, our project manager, to present the results of the project implementation in the Philippines. Thank you.

Glyndell Monterde:
Thank you, Sir Jeff. To further discuss the use cases of LACS in the Philippines during the pandemic, let me present the results of the research and development. First, it tested the full potential of LACS outside Japan. During the pandemic, in steps four and five of the research and development, we successfully tested the performance and functionality of its features, including voice, messaging, and the bulletin, among others. Second, it integrated Philippine use cases during the pandemic. We identified additional use cases, as discussed by Sir Jeff: the learning management system, voice calls, solar chargers for devices, and integration with the barangay's local information systems. Next, we implemented the local-LACS and cloud-LACS methodology, which means that all information stored on the on-premise (local) LACS can sync to the cloud LACS when internet becomes available. Next, remote implementation to nearby islands: it was successfully implemented on Hilutungan Island, which is more than six kilometers from the mainland of Cordova, Cebu. Next is the collaboration and deployment to the Leyte area in partnership with the Visayas State University (VSU), which piloted the successful testing and usage of the learning management system, including the synchronization feature between its two campuses. Next, the formation of the Islet Connect project, in partnership with the APNIC Foundation of Australia and Seed4Com. The APNIC Foundation gave grants to CivisNet to connect unserved islands in two phases: the first phase makes LACS a key component of communication support during disasters on Hilutungan Island, and the second phase connects the neighboring islands of Cauhagan and Panganan. Next slides, please.

For other key results, we presented and demonstrated LACS to partners and stakeholders, including USAID-BEACON, the Department of Science and Technology Region 7 of the Philippines, the Department of Education, and the Ramon Aboitiz Foundation Inc., and we were able to present at international conferences and meetings, including at UNESCO. As for the potential impacts of the LACS implementation: first, improved access to information, since LACS provides access to critical information during disasters, helping people make informed decisions and take appropriate actions to protect lives in the community. Second, enhanced coordination: LACS facilitates better coordination among the different stakeholders involved in disaster response, helping ensure that resources and assistance are distributed effectively. Third, increased community engagement, allowing more inclusive and community-driven approaches to disaster management. Fourth, reduced isolation, since LACS improves people's ability to seek assistance and communicate their needs during disasters. And lastly, capacity building: developing skills in disaster communication and response, and enhancing resilience and the ability to cope with future disasters. That ends our presentation from the CivisNet Foundation. Thank you.
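
The local-LACS/cloud-LACS methodology described above, where data written to the on-premise unit syncs to the cloud when internet becomes available, is essentially a store-and-forward pattern. The sketch below is a hypothetical minimal model of that pattern, not CivisNet's or ATR's actual code:

```python
class StoreAndForward:
    """Records are written locally first and pushed to a 'cloud' replica
    whenever connectivity is available. IDs make the push idempotent."""

    def __init__(self):
        self.local = {}    # id -> record (always writable, even offline)
        self.cloud = {}    # id -> record (reachable only when online)
        self.pending = []  # ids not yet confirmed on the cloud side

    def write(self, record_id, record):
        self.local[record_id] = record
        self.pending.append(record_id)

    def sync(self, online):
        """Flush pending records if the uplink is up; return how many synced."""
        if not online:
            return 0
        synced = 0
        while self.pending:
            rid = self.pending.pop(0)
            if rid not in self.cloud:   # idempotent: skip already-synced ids
                self.cloud[rid] = self.local[rid]
                synced += 1
        return synced

node = StoreAndForward()
node.write("r1", "class attendance record")
node.write("r2", "typhoon damage report")
assert node.sync(online=False) == 0   # offline: data stays local only
assert node.sync(online=True) == 2    # uplink restored: both records sync
```

The key property is that writes never block on connectivity, which is what lets the island deployment keep collecting data through outages and typhoons.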

Moderator:
Thank you, Glyndell and Jeff. Now, I'd like to quickly request Mr. Chandraprakash to talk about the future perspectives of LACS for developing nations. Thank you.

Chandraprakash Sharma:
Can you put up the slides, please? Good morning, everyone. I'll start with my introduction: I'm Chandraprakash Sharma, CEO and founder of WISFLUX. We are proud to represent the Indian collaboration on this wonderful project, which has so much impact on developing nations, and I thank Dr. Sakano and the team for pursuing it. I really appreciate everyone's dedication to the resilient infrastructure we are all seeking in developing nations, especially after the wonderful traditional drums, fireworks, and the party we had last night; it was probably difficult to wake up early this morning. Let's move on to the next slide. So far, trials have been done in the Philippines and Japan, but the next target is India and other developing nations. The Indian government is pushing very hard on the digitalization of governance and infrastructure, and we have some wonderful projects under way. The challenge remains, however: although there has been tremendous progress in recent years in connecting people in remote areas, around 50% of the population still does not have access to the internet. And it is not just access to the internet; access to information is even more important. The internet may reach people later, but access to critical information about government policies and schemes matters now for people in tribal areas and poor, remote regions. India is geographically very diverse: we have mountainous regions that are hard to reach and hard to build infrastructure in, deserts, and deep forests where many tribal populations live. To help them access the schemes the government is offering, this kind of solution is very important.

Then again, India is exposed to a variety of natural disasters, especially in the coastal regions, and we have earthquake-prone regions as well. After a disaster hits, this kind of device is very useful, as we have seen from the implementation in the Philippines. Moving faster to save time for the next presenter: this device has a lot of potential. Dr. Sakano talked about adding AI to it, and the wonderful thing is that it is not AI accessed through cloud services but AI you can access locally. You may know it better by the term edge AI, which serves people who are not connected to the internet. It also has good potential for e-education. In India we have a public distribution system, through which the government distributes rations to people who cannot afford them, and digitally providing information and services through such a device would be very impactful. Next slide, please. I want to give you a perspective on the future of this technology; it may seem simple, but it has huge potential. Right now, the privileged who have access to the cloud can use the servers, databases, and services there, including today's very powerful AI services. Cloud and service providers are now trying to bring services as close as possible to users by deploying content and services at the edge of the cloud. Then there is the edge where the user sits, the user's local area network; technically, this is where the LACS architecture sits, and services can be available within the user's local network with or without connectivity to the upper layers.

Finally, there is the edge where the user devices sit, which connect to that local area network to access the LACS services. Next slide, please. As we discussed, the impact of AI will be tremendous across all sectors, especially if it can be made available locally to remote communities: to farmers, providing insights (for example, about crop diseases if they can upload a photo); to local healthcare workers; to safety workers and emergency responders; and in education. Because such AI is local, it offers faster data processing and enhanced security, with lower bandwidth consumption and better energy efficiency. And, as already discussed, this portable device can be charged by solar panels, which was already tried in the Philippines. Next slide, please. Thank you very much, everyone. Dr. Okamura will now present his wonderful contribution.

Moderator:
Thank you, Mr. Chandraprakash, and thank you for keeping that quick, because at the end of this session we are also looking to take some questions from the audience. So as not to delay the question and answer session, we'll go on to the last presentation. Dr. Okamura-san, the floor is yours.

Dr. Haruo Okamura:
Thank you very much for this opportunity. The title of my talk today is Connecting the Unconnected in a Phased Manner. The previous presentations mainly covered the use of LACS in an independent manner, creating not an internet but an intranet. My goal, however, is to provide internet connectivity to almost all of the disconnected areas of the world, in a phased and very practicable way. So my presentation is about the combination of multiple LACS units plus optical fiber cable. Next, please. Briefly, I am the president of Global Plan and an expert in fiber-optic systems, strategy, and standards. I am currently the international chairman of IEC Fiber Optic Systems and Active Devices, and I am the developer of the BIRD solution and the corresponding ITU-T standards. My presentation is based on three ITU-T Recommendations for which I served as editor: L.1700, L.110, and L.163. Next, please. This is the overall concept of the phased, step-by-step approach: from intranet, based on independent LACS units, to internet connectivity. For example, village A one day introduces a LACS unit, which gives the village people, perhaps up to 256 of them, intranet capability. The next day village B introduces another LACS, and village C another, independently. What happens if we connect those three intranet LACS units using broadband optical fiber cable? As you can see in the slide, there is a big mountain, difficult terrain, in between. We have always assumed that laying optical fiber cable in difficult terrain is very costly, difficult, and slow to construct. My idea eliminates those difficulties by using submarine cable.

Submarine cable, as you can imagine, is very robust against high water pressure, so you can lay it directly on the surface of the ground. That eliminates most of the construction cost, so you can have affordable connectivity. Because this is not easy to accept at first, I worked to turn these trials into international ITU-T standards. So, finally, villages A, B, and C can be connected by optical fiber cable, and one day in the future you can connect, say, village C to the internet, so that all of a sudden those communities become one large internet-capable community. That is my idea. In the photograph on the slide, local people are laying optical fiber cable with their bare hands on the surface of the ground in unexplored jungle. That really happened in 2019 in a Nepalese mountain village. Next, please. So what is BIRD? BIRD stands for Broadband Infrastructure for Rural Area Digitalization; it is my invention, an optical fiber cable, so let me briefly describe it. Next, please. The cable is based on submarine cable: it has a thick-walled, welded stainless-steel tube containing up to 48 fiber cores, and the total diameter of the cable is only 11 millimeters, finger size. It is based on submarine cable technology, supported by ITU-T Recommendations, and applicable to all terrain: in the air, in water, on the ground surface, or underground. Next, please. This is one example of Japanese quality: a cross-section of a Japanese cable next to a cable from another country. You can clearly see the difference in the quality of the cable structure and of the welded wall of the stainless-steel tube; the outside of the cables looks the same, but the inside is very different. Next one. Operator, if you click the lower part of this slide, the helicopter will fly. Like this. Thank you.

This happened in March this year, at an altitude of 5,300 meters, carrying the cable drum into this high-altitude area to lay the BIRD optical fiber cable to the mountain base camp. Thank you. Next one. As for cost reduction: about a 90% cost reduction has been achieved, because construction can be done on the surface of the ground. This project received a WSIS (World Summit on the Information Society) Champion award last year, because it is a real solution that opens the door for the globe to be connected, practicably, using LACS plus optical fiber cable. Next one. To summarize: the top priority for the ITU is connecting the unconnected. It is spoken about very often, but until now we have not had a real, physical solution. I have presented one here, based on Japanese technology plus ITU standards: LACS plus the ITU-compliant BIRD cable, which affordably and safely brings broadband Wi-Fi hotspots, practicably and phase-wise, across the globe on a DIY basis, do-it-yourself by local people. The CAPEX of laying the BIRD cable is about 6,000 US dollars per kilometer, a dramatically reduced cost of implementation. The criteria for the BIRD cable and its deployment comply with the ITU standards. That concludes my talk. Thank you very much.

Moderator:
Thank you, Dr. Okamura, for the wonderful presentation. We now have just over 10 minutes left, and the floor is open for questions. If you want to ask any of our speakers a question, please feel free to do so. Anyone from the audience with a question can come up to the mic.

Audience:
Thank you. Hello. Good morning. Carlos Rey-Moreno from the Association for Progressive Communications. Thank you very much. I really think it’s a very interesting solution. I’ve been following your work on fiber for many years at ITU-T, and it’s very interesting how it is evolving the connectivity of different access: bringing connectivity to the village, or to whatever remote area, now with low-earth-orbit satellites or with microwave links, and then from there starting with LUGS and relaying it with fiber to the next village, or even within the village because of the interference of the Wi-Fi. I was thinking: there is obviously a distance between the LUGS, between the villages, that the fiber can cover without a repeater. How have you been considering that in the model you were presenting, and what would be the increased cost of adding, you know, OLTs somewhere to the LUGS, or the overall cost? Because it definitely has some legs, particularly in mountainous regions where villages are close by but you need repeaters too. You cannot do microwave, right? So thank you.

Dr. Haruo Okamura:
Thank you. You have raised a lot of issues. The first one is wireless versus wired. For wireless connectivity, microwave or satellite: Elon Musk launched 12,000 satellites using 10 billion U.S. dollars, and yet the lifetime is only five to seven years, and the transmission capacity is only a maximum of 1 Gbps per satellite beam. Fixed microwave at this moment is also about 1 Gbps maximum transmission. And fiber can provide more than 10 times higher capacity than those, with one fiber. And as you can see, 48 fiber cores can be included in the finger-size cable, so there is an enormous improvement by using fiber. The next question is how far the fiber can connect. The maximum distance without any repeater is more than 500 kilometers today if you introduce state-of-the-art technologies using fiber amplification, or 300 kilometers, or 100 kilometers. If you would like to go only 50 kilometers, a very cheap commodity-type media converter can transmit over 50 kilometers or even 100 kilometers, so there is no issue.
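The aggregate-capacity comparison above can be sketched as follows; the 48-core count and the 1 Gbps wireless figures are from the answer, while treating each fibre core at a flat 10 Gbps (the "more than 10 times higher" multiplier) is an illustrative assumption.

```python
# Sketch of the capacity comparison made in the answer above.
SATELLITE_BEAM_GBPS = 1   # quoted maximum per satellite beam
MICROWAVE_LINK_GBPS = 1   # quoted maximum fixed microwave link
PER_FIBRE_GBPS = 10       # assumed: "more than 10 times higher" than 1 Gbps
FIBRE_CORES = 48          # cores in the finger-size BIRD cable

cable_total = PER_FIBRE_GBPS * FIBRE_CORES
print(cable_total)                            # 480 (Gbps aggregate)
print(cable_total // SATELLITE_BEAM_GBPS)     # 480 (x one satellite beam)
```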

Audience:
Thank you. Sorry, just to caveat some of my comments. One thing is: 500 kilometers or more, for sure, but if you have several villages, you start to need to multiplex, and you cannot have one single cable, so you need repeaters or multiplexers in between. And then, in relation to LEOs: 100% agreed on the capacity, and on microwave capacity; but at $6,000 per kilometer, for villages that cannot eat and that are far from wherever it is, I think, you know, long term, fiber for sure, but in the short term maybe we need to use solutions that are more cost-effective for the backhaul to get there, and then, little by little, build up the economies, no? Okay. Okay.

Dr. Haruo Okamura:
The maximum length of one cable is about 12 to 15 kilometers, due to the size of the cable drum. If you go to submarine cable, it can be 40 kilometers or 80 kilometers per segment, because of the cable ship and the well-equipped manufacturing facilities that are available; but for this terrestrial usage, the cable drum at this moment holds only 12 to 15 kilometers. And every 12 to 15 kilometers you need a splicing box. It looks like a repeater, but inside it is only fiber splicing, that’s all. So there is no difficulty in connecting to villages 100 kilometers away. Thank you.
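The drum-and-splice arithmetic described above can be sketched as a quick back-of-the-envelope calculation; the 12 km drum length is the lower of the speaker's figures, and the ceiling-division model (one splice box joining each adjacent pair of drums) is an assumption.

```python
import math

def drums_and_splice_boxes(route_km: float, drum_km: float = 12.0) -> tuple[int, int]:
    """How many cable drums a route needs, and how many fibre splice
    boxes join them (one box between each adjacent pair of drums)."""
    drums = math.ceil(route_km / drum_km)
    splice_boxes = max(drums - 1, 0)
    return drums, splice_boxes

# A 100 km village-to-village link with 12 km drums:
print(drums_and_splice_boxes(100))  # (9, 8)
```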

Audience:
Okay. Thank you very much. My name is James Ndufuye, from Nigeria. I have two quick questions. Great presentations, anyway. The first one: very nice solution, bold solution, targeting the underserved areas. What percentage of Nepal, of the country, would this cover? And how soon do you project it can be covered? Then secondly, if you compare this to TV white space technology, which also can be applied in rural areas: what are the advantages and disadvantages? Thank you.

Dr. Haruo Okamura:
As I said, in Nepal we have been doing a project from Namche Bazaar, at the foot of Mount Everest, up to the base camp: 42 kilometers. Now, as you can see, the helicopter flying has already carried in this much of the cable drum. I don’t know what percentage of the country, but the National Telecom Authority of Nepal declared the use of this solution for the Mount Everest region and the Mount Annapurna trekking route. And we have already been working in the west part of Nepal for about 10 kilometers. So I don’t know what percentage this can cover, but this is an all-terrain type: it goes to mountains, underwater, in the water, everywhere. I should not say everywhere; going to the top of Mount Everest is very difficult. As for the use of fiber versus TV white space: the capacity difference is very large, so I don’t have a good comparison. Maybe Sakano-san could tell us about it. I think TV white space spectrum can be used for the extended area of LUGS, not as a substitute for optical fiber. So TV white space may possibly be used for extending the area, but bandwidth will be limited, because TV white space can cover a large area but bandwidth is low. So we can think of that kind of technology being included in our LUGS solution. Thank you.

Audience:
Hello. Does it work? Yeah. Hello. My name is Niels Brock from DW Academy in Drasomatica. Great presentation. Two questions about the LUGS. How much open software and open hardware is in this device? I’m thinking also of customization possibilities, for local communities to get their hands on it and adapt it to their very own needs. And also, this is a question more for the region: if something is broken, how is the supply chain situation? Is this going to become waste very quickly, or do you have solutions, if a piece is broken, to quickly replace it? Thank you.

Toshikazu Sakano:
Okay. Thank you very much for the good questions. For the first one: LUGS is customizable; as CP says, LUGS can be used for edge computing. So if you include any software, you can use the functions that software provides locally. For example, in our feasibility study we installed e-learning management software inside the LUGS and used it for a school; that kind of thing you can do. So you can use it in various ways and for various applications with our solution. And the second one?

Chandraprakash Sharma:
Yes, I would like to add to this answer. As Jeff here wonderfully talked about the implementation in the Philippines, I think one important thing they did in the Philippines was to actually train the people there, so that if anything were to happen to this device, they could repair the box by themselves in a remote region without any help from us. And it’s built from off-the-shelf components that are readily available, not just in Japan or one locality; they can be replaced with hardware available in the Philippines or hardware available in India. As for your question about open hardware: right now we don’t have it, but there is definitely a possibility to include capable open hardware, like the Raspberry Pi or other open hardware, that can sustain the kind of server that we run in this. So yes, I hope that answers it.

Jeffery Llanto:
Maybe I can add something to that. LUGS is a platform. Under the auspices of Dr. Sakano, it has evolved into different modules. It started with file repositories, calls, and all those areas, and eventually the use cases started to come in based on real scenarios in the community. For example, LUGS is useless during a disaster, especially a typhoon, when none of the devices on the island have any power. Where do they charge? So again, technology is defeated. So I asked Dr. Sakano if he could look for solutions. They talked with India, and then LUGS became a charging station for the devices on the islands. So again, LUGS is a platform, and we learn a lot from it. And we just met Dr. Okamura, and he said point-to-point connection or wireless connection is expensive, so maybe we could look at, say, low-cost fiber optics. So again, we are learning a lot from LUGS and through this project. And hopefully Dr. Sakano, with the new one, they call it the new project, yeah, FLOS, the Frontline Operations System.

Moderator:
Jeff, we’re out of time now. That’s all we can do for today. Thank you everyone for joining us today. And we also have a booth on the first floor, so we can always join in and talk about this informally outside. So thank you everyone for joining in. Let’s collaborate, let’s network, and let’s find solutions to bridge the digital divide together. Thank you so much. Have a good day.

Speaker statistics (speech speed, speech length, speech time):

Audience: 150 words per minute, 582 words, 233 secs
Chandraprakash Sharma: 166 words per minute, 1132 words, 409 secs
Dr. Haruo Okamura: 140 words per minute, 1608 words, 691 secs
Glyndell Monterde: 151 words per minute, 639 words, 254 secs
Jeffery Llanto: 123 words per minute, 1158 words, 567 secs
Moderator: 137 words per minute, 482 words, 211 secs
Toshikazu Sakano: 111 words per minute, 796 words, 432 secs
UNKNOWN: 104 words per minute, 635 words, 366 secs
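As a consistency check on the statistics above, each reported speech speed follows from the word count and duration, rounded to the nearest whole word per minute:

```python
# Verify that each reported speed equals words / seconds * 60, rounded.
stats = {
    "Audience": (150, 582, 233),
    "Chandraprakash Sharma": (166, 1132, 409),
    "Dr. Haruo Okamura": (140, 1608, 691),
    "Glyndell Monterde": (151, 639, 254),
    "Jeffery Llanto": (123, 1158, 567),
    "Moderator": (137, 482, 211),
    "Toshikazu Sakano": (111, 796, 432),
    "UNKNOWN": (104, 635, 366),
}
for name, (wpm, words, secs) in stats.items():
    assert round(words / secs * 60) == wpm, name
print("all speech speeds consistent")
```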

Meeting Spot for CSIRT Practitioners: Share Your Experiences | IGF 2023 Networking Session #44


Full session report

Audience

In the analysis, the speakers emphasised the importance of building bridges between different communities to contribute to an open, free, stable, and secure internet. They highlighted the need for increased interaction and adoption of each other’s languages and processes between network operators and cybersecurity specialists. This closer collaboration would facilitate a more effective response to incidents and enhance overall information sharing in the field of cybersecurity.

The speakers also stressed the significance of finding a balance between security and stable communication. They acknowledged that while security is essential for protecting networks and data, it should not hinder the smooth flow of communication. Striking this balance ensures that individuals and organisations can communicate freely while maintaining a safe online environment.

Cooperation at both the national and global level was identified as highly beneficial for internet security. The analysis indicated that different regions have various experiences that can be shared for mutual benefit. Adopting a “defend locally, share globally” approach contributes to wider global security and promotes cooperation in tackling cybersecurity challenges.

Furthermore, the speakers discussed how geopolitical issues can both challenge and strengthen the cooperation of Computer Emergency Response Teams (CERTs). While geopolitical tensions can potentially hinder cooperation, recent events have highlighted how the commitment to keeping the internet secure has strengthened certain relationships despite these challenges.

The analysis also highlighted the crucial role of sharing information in tracing the origins of cyberattacks. However, it was noted that this can be difficult due to factors such as local laws and regulations and the intersection between cybersecurity and national security. Despite these challenges, the speakers emphasised the importance of sharing information to effectively combat cyber threats.

Resource limitations were identified as a constraint to international cooperation. The analysis suggested that having expert-level communication specialists is necessary for continuous monitoring and maximising resource findings. Addressing resource constraints would facilitate more effective international cooperation in the field of cybersecurity.

In times of global crises, such as the current pandemic, the speakers emphasised the need to continue information sharing. They viewed the pandemic as a blueprint for global information exchange during crisis situations. Even amid geopolitical tensions, the speakers concluded that the continuation of information exchange is vital to effectively address cybersecurity challenges.

Overall, this comprehensive analysis underscored the importance of building bridges between different communities, striking a balance between security and stable communication, and promoting cooperation at both national and global levels. It also highlighted the challenges and opportunities presented by geopolitical issues, the significance of sharing information, the constraints of resource limitations, and the importance of continuing information sharing during global crises.

Bernhards Blumbergs

A recent meeting addressed the importance of freedom, openness, and security on the internet. While acknowledging that achieving all three aspects simultaneously may not always be possible, participants stressed the need for ongoing efforts to strive for them. The argument put forth was that the internet should be a space that promotes freedom of expression, ensures open access to information, and prioritizes user security and privacy.

Regarding information sharing, participants highlighted its crucial role in the development and progress of the internet. Even during times of geopolitical tension, it was emphasized that continued information sharing is vital. Peter Koch from the German top-level domain registry specifically emphasized the significance of maintaining information exchange despite any underlying political conflicts. Additionally, the meeting discussed how the COVID-19 pandemic served as a blueprint for prioritizing global information exchange during a crisis, showcasing that challenges can be overcome to facilitate the flow of information.

The meeting also underscored the need to understand and prioritize device and personal security. Participants agreed that enhancing cybersecurity requires individuals to have a deeper understanding of device security and personal security practices. Furthermore, they recognized the essential nature of practicing good cyber hygiene at both personal and national levels to create a safer internet environment.

Importantly, it was emphasized that information sharing should not be restricted to specific layers within the internet infrastructure. Participants argued that sharing information should extend beyond technical, operational, and strategic layers and instead be facilitated between these layers. Building understanding and effective communication across different levels of the internet infrastructure were highlighted as crucial aspects of successful information sharing.

In conclusion, the meeting highlighted the importance of striving for freedom, openness, and security on the internet, despite the challenges of achieving all three simultaneously. It also emphasized the critical role of information sharing, particularly during periods of geopolitical tension and crises. Additionally, understanding and prioritizing device and personal security, along with facilitating information sharing across various levels of the internet infrastructure, were identified as key factors in creating a better and more secure internet environment.

Adli Wahid

Adli Wahid, a security specialist at the Asia-Pacific Network Information Centre, is actively engaged with the CERT and CSIRT community in the Asia-Pacific region. This engagement allows him to interact with various stakeholders involved in cybersecurity, fostering collaboration and knowledge sharing.

Previously, Adli Wahid gained valuable experience working for the national CERT of Malaysia and a CERT dedicated to a financial institution. These prior positions have equipped him with a strong background in handling cybersecurity incidents and implementing effective security measures.

The importance of cooperation between CERTs and CSIRTs at both national and global levels is paramount, as it ensures a wider exchange of experiences and technologies to effectively combat cyber threats. By collaborating and benefiting from one another’s expertise, CERTs and CSIRTs can enhance their capabilities in dealing with cybersecurity incidents. Despite global problems and adversarial geopolitical issues, cooperation between these entities has actually been strengthened, showcasing their commitment to making the internet a secure and safe place.

Recent geopolitical issues have played a positive role in strengthening the cooperation between CERTs and CSIRTs. The analysis reveals that these geopolitical issues have actually heightened the commitment to collaboration, as stakeholders recognise the shared interest in safeguarding cybersecurity. By uniting, these entities are better equipped to address the evolving challenges in the digital landscape.

Overall, Adli Wahid’s expertise and experience, combined with the increased cooperation between CERTs and CSIRTs, contribute to ongoing efforts to ensure cybersecurity at various levels. This insight highlights the significance of international collaboration and knowledge sharing in effectively tackling cyber threats and promoting a secure digital environment.

Masae Toyama

Masae Toyama, a cybersecurity practitioner, has drawn attention to the pressing need for increased representation of cybersecurity workers in internet governance forums. In these spaces, Toyama noticed a distinct lack of voice for professionals in the field of cybersecurity, and they encountered difficulty in connecting with others who shared similar backgrounds during previous forums. This experience prompted Toyama to recognize the necessity for a dedicated platform where cybersecurity meets internet governance.

Toyama firmly believes that cybersecurity practitioners play a fundamental role in upholding a secure and stable cyberspace. However, despite their significance, their presence and voices are not as prominently heard among the various stakeholders within internet governance forums. Drawing attention to this disparity, Toyama advocates for a stronger representation of cybersecurity experts within these platforms.

Toyama’s positive stance emphasizes the importance of creating a space where the intersection of cybersecurity and internet governance can be realized. By fostering a greater inclusion of cybersecurity professionals within forums like the Internet Governance Forum, the collective knowledge and expertise of the cybersecurity field can be harnessed to effectively address the challenges and concerns of internet governance.

In summary, Masae Toyama highlights the pressing need for a more robust representation of cybersecurity workers in internet governance forums. Their personal experience revealed a lack of voice for cybersecurity professionals, and they emphasize the essential role they play in maintaining a secure cyberspace. Toyama advocates for the creation of a platform where cybersecurity and internet governance intersect, in order to strengthen the presence and voices of cybersecurity practitioners within these influential forums. This perspective offers valuable insights into the ongoing dialogue surrounding the intersection of cybersecurity and internet governance and underscores the significance of including diverse perspectives in shaping the future of the digital landscape.

Moderator

The need for increased representation of cybersecurity practitioners in the Internet Governance Forum (IGF) is emphasised. Currently, there is a lack of individuals with backgrounds in cybersecurity, such as those working at CERT or actively involved in cybersecurity, participating in the IGF. This lack of representation results in their voices not being heard as loudly as other stakeholders.

A proposed session by a speaker is recognised as beneficial for all participants. The session aims to address the need for greater involvement and voice of cybersecurity practitioners in the IGF. It is expected that such sessions will provide a platform for cybersecurity professionals to share their expertise and insights among the various stakeholders involved.

Networking sessions are also implemented to encourage participants to interact and discuss their experiences and views on cybersecurity. These sessions provide an opportunity for attendees to engage with individuals they may not have spoken to before, fostering collaboration and the exchange of ideas.

Building bridges between network operators and cybersecurity specialists is considered crucial for establishing an open, stable, and secure internet. Recognising that these two professions utilise different languages, mindsets, concepts, and processes, there is a need to bridge the gap between them. The initiative taken by individuals like Adli Wahid in strengthening the partnership between these communities is highly regarded.

Several challenges in the field of cybersecurity are identified, such as the obstacles related to information sharing. Cyberattacks are often unpredictable, making it difficult to trace their sources. In addition, local regulations and national security issues can complicate the sharing of information. These challenges need to be resolved in order to build strong collaborations and improve cybersecurity practices globally.

Resource limitations and the need for capacity building also pose significant challenges in the cybersecurity sector. Constant monitoring, particularly through cooperation with international entities, requires specialist skills. Given the link between cybersecurity and national security, enhancing capacity building initiatives becomes imperative.

The importance of information sharing and building trusted networks for message exchange is emphasised. It is not only necessary to share information within specific layers of cybersecurity but also between those layers. By doing so, a deeper understanding can be developed, contributing to a more comprehensive and effective cybersecurity framework.

Cyber hygiene, which entails understanding device security, personal security, and learning about cyberspace, is considered essential for maintaining a secure online environment. The responsibility for practicing cyber hygiene extends to all individuals, not just technical experts. By promoting the importance of cyber hygiene, stronger global communities can be built, further enhancing cybersecurity.

In conclusion, the need for greater representation of cybersecurity practitioners in the IGF is highlighted. Proposed sessions and networking opportunities aim to address this need, facilitating knowledge sharing and collaboration among stakeholders. Challenges related to information sharing, resource limitations, and capacity building are identified, emphasising the necessity for proactive measures. The significance of information sharing, building trusted networks, practicing cyber hygiene, and ensuring widespread understanding of cybersecurity principles are all crucial for creating a secure and stable cyberspace.

Hiroki Mashiko

The analysis highlights key points about NTT DATA, a prominent system integration company in Japan. It is noted that NTT DATA has an internal Computer Emergency Response Team (CERT), known as NTTDATA-CERT. This CERT is responsible for handling and responding to cybersecurity incidents within the company.

One notable fact revealed in the analysis is that Hiroki Mashiko, an individual associated with NTT DATA, works as a forensic engineer at NTTDATA-CERT. This indicates that Mashiko is involved in investigating and analysing digital evidence related to cyber incidents within the company. The analysis suggests that Mashiko’s role as a forensic engineer emphasises his technical skills and expertise.

Another point made in the analysis is that Mashiko is described as being more focused on technical aspects rather than governance-related matters. This suggests that his strengths lie primarily in technical areas rather than broader aspects of corporate governance. However, the analysis does not provide further information regarding Mashiko’s specific responsibilities or tasks within his role.

The analysis overall has a neutral sentiment, indicating a lack of strong positive or negative opinions or emotions. While it offers valuable insights into NTT DATA, NTTDATA-CERT, and Hiroki Mashiko, it does not draw any further conclusions or assessments beyond these observations.

To summarise, this expanded summary provides a more detailed overview of the analysis. It highlights NTT DATA and its internal CERT, NTTDATA-CERT, as well as Hiroki Mashiko’s role as a forensic engineer. Furthermore, it emphasises Mashiko’s technical orientation and the neutral sentiment of the analysis.

Session transcript

Masae Toyama:
May I share my screen? Thank you. All right then, yeah, before we get started, I’d like to ask the moderators to briefly introduce themselves. So first, I’d like to pass the mic to Mashiko-san, and then later on online, Ali and Vivi. So off you go, please.

Hiroki Mashiko:
Hi all. Hello, I’m Hiroki Mashiko from NTTDATA-CERT. NTT DATA is one of the major system integration companies in Japan, and NTTDATA-CERT is the internal CERT of NTT DATA. I’m a forensic engineer at NTTDATA-CERT, so actually I’m a technically-oriented person more than a governance-oriented one. But as you may know, governance itself is strongly connected to my work as well. So I’m looking forward to hearing your opinions in today’s discussions. Let’s have a great discussion today. OK, thank you.

Masae Toyama:
Thank you, Mashiko-san. May I pass the floor to Ali?

Adli Wahid:
Yep. Ohayou gozaimasu. Good morning, everyone. My name is Adli Wahid, and I am with the Asia-Pacific Network Information Centre as a security specialist. I do a lot of engagement with the CERT and CSIRT community in this region, including helping to establish newer CERTs. And in the past, I used to work for a national CERT, the Malaysia CERT, and a CERT for a financial institution. So I’m looking forward to discussing and chatting with everybody today. Thank you.

Bernhards Blumbergs:
Thank you, Adli. So last but not least, Bibi-san, please. Minna-sama, ohayou gozaimasu. Welcome, everyone. Good morning. My name is Bernhards, but everyone calls me Bibi, so please follow these guidelines. I am here in Japan at the Nara Institute of Science and Technology doing my postdoc, but I am a member of the national CERT team of Latvia, CERT.LV, and I’m also affiliated with the NATO Cooperative Cyber Defence Centre of Excellence in Tallinn; I’m an ambassador and former researcher for that centre of excellence. Well, I’m looking forward to moderating and having productive conversations with you.

Masae Toyama:
Thank you, Bibi-san. The stricter guideline is presented, so please follow it. Well, thank you. My name is Masae Toyama, from the JPCERT Coordination Center. I became part of the CERT community four years ago, and my first IGF was 2020, which was fully online. Then at the last IGF in Ethiopia, it was actually very hard for me to find people with similar backgrounds, namely working at a CERT or doing cybersecurity. So my idea was to break out of this situation and create a place where your day-to-day work in cybersecurity meets internet governance. While cybersecurity practitioners play an important role in keeping cyberspace secure and stable, their voice in the IGF is not loud enough amongst the various stakeholders. So I think that the more fellows we get, the louder our voice will be, so that we know what needs to be done. This is the background story of why I decided to submit the IGF session proposal. I am delighted that the IGF found my proposal beneficial to participants. However, if they really cared about us, I think the session should have been set for later, not kicking off at 8:30 in the morning. Anyway, I don’t want to spend too much time on me speaking, so now I’d like to listen to participants’ self-introductions. So would you please? Yes, I will pass the mic. Online first. Well, thank you. So let’s have some voices from online participants. I’d like to open the floor, especially for online participants. OK, so let me read out the names, and if you can turn your microphone on, please give a brief self-introduction. So let me call out the name: is there a Kenny Chantre? Hello? Hello. Hi. Thank you for coming. Would you please introduce yourself and tell us what made you come to this session? Hello.

Moderator:
My name is Kenny Chantre. I am Cape Verdean, living in Cape Verde at the moment. My interest in coming to this meeting is to learn more about internet governance. For now, I am an ambassador with the Pan-African Youth Ambassadors for Internet Governance, and my interest is to know more about internet governance. Thank you. All right, thank you so much. So let’s move on to, sorry for my pronunciation, Captioner Terrarin. Hello. Oh, sorry, I just messed up. So let me move on to the next person, who is Francisco Mostedosa. Hi. Hello, nice to meet you. My name is Francisco, from Ecuador. The networking, the multi-stakeholder initiative, is very important, and in my country it is important to promote these actions and these events for all stakeholders. Thank you so much. Thank you very much. So let’s move on to the next person, Amir Adas Mohammadi Koushiki. Thank you. Hello. May I ask you to introduce yourself very briefly? I’m sorry, Amir, we cannot hear you. Right, so in this case, thank you for taking yourself off mute for now; we will ask you later on to join the breakout discussion. So the last person at the moment is Saudia Pina Mango. Hello. Good morning. If you can turn on your microphone, we will ask you to introduce yourself very briefly. If not, we will move on to the introductions for on-site participants. Right, okay, she’s gone. So now we have some people, which is much better compared to the beginning. So, okay, let’s move on. Let’s go back to the original agenda so that we can proceed to the breakout discussion. Here’s a little bit of housekeeping; let me try to keep it short. As I said, this networking session asks you to stand up and walk freely to talk to someone you have not yet spoken to.
Maybe it’s difficult, especially on site, but please try. We will have two or three short sessions. Each session will be 10 minutes: a seven-minute short discussion plus three minutes for comments. The comment section is for interacting with people on site and online, so we will exchange our comments and try to understand what was discussed. And please cooperate with the moderators on timekeeping, especially because we changed the agenda; the instructions might differ from the original slide, so thank you for your cooperation. We have prepared some guiding questions to facilitate the conversation, but besides the guiding questions, of course, you can give your name and your affiliation, or say what made you come to this IGF; yes, of course, you can do that kind of icebreaking. And as I see, we have fewer than 10 participants on site, but I can see the sticky notes on your chests, so you can identify whom to talk to. Watch the sticky notes. All right, so let’s go to the first discussion. I will pass the mic to Mashiko-san. OK, so I will read the question, and I will keep the time in this session, so thank you for your cooperation. Good. So, now that I have introduced myself and read out the question, this instruction is only for on-site participants: please form two or three groups, I think, and start the discussion. OK, so the first guiding question is: when do you feel that your commitment to cybersecurity is creating and sustaining an open, free, and secure internet? Yeah, this is a bit of a difficult question, I guess; maybe it’s especially difficult for the brain in the morning. But yeah, let’s go. Actually, I tried to put the easiest one first. So yes.

Audience:
Hi, my name is Pablo. I work with Adli at APNIC. Perhaps we can just have an open conversation here around these questions, because we more or less know each other, but it is good to contribute on the record for the webcast and transcript. So I would like to tell you a little bit about when I feel that our commitment to cybersecurity creates and supports an open, stable, and secure internet, and I think it is all about bridges. In our case, our community is made up mostly of network operators and internet service providers. Early on, around 10 years ago, we thought about how network operators need to interact more with cybersecurity specialists. We also realized that these are really two different professions, and asked how to build bridges between them. In order to create cross-pollination between communities, it is important to be ready to explain yourself in a language that is not necessarily yours. Both network operators and cybersecurity specialists use very particular language, mind frames, concepts, and processes; the processes of incident response are very different from the processes of patching and connecting networks. So in order for these communities to interact, they need to struggle a bit to explain themselves to each other and build that bridge, and we have found that fascinating. Adli has been an incredible bridge among these communities, and also with other communities, such as policymakers and other parts of the technical community. Something we have learned throughout the years is that the best way to contribute to an open, free, stable, and secure internet is not only to do your work very well within your area of specialty, but to really build those bridges. And something that is very important in incident response is cooperation and information sharing, one way or another.
And the more obstacles and blockages we put in the way of this transfer of information and collaboration, the less we contribute to an open, stable, and secure internet. In summary, I think it is all about bridges, and I think this session is an effort to bridge between different colors and specialties. Thank you for organizing this very cool workshop.

Moderator:
Thank you, thank you for the great comment. I believe that the networking created in this session is very important, exactly for the reasons in your comment; I totally agree with your opinion, thank you. Okay, so let’s create some groups and discuss the topic. I think the online participants have already been separated into breakout rooms. For on-site participants, please stand up, gather, and start talking, thank you. Hi, I’m here to join the group. The group? The breakout room? Okay, but please stop talking now, and please pay attention. Can you show it to me? OK. So I would like to pass the mic to two people, one from on-site and one from online, and ask your opinion on the guiding question. Does anyone have an opinion on the first guiding question, or can you share what kinds of opinions you exchanged during the chat?

Audience:
OK, go. Go, please. So I’m a government-side person, so I’m talking about what we focus on in the dialogue in the global conversation. I am from the Japanese government, and when I talk about our experience, I focus on the balance between the security and the stability of communication. One of the best solutions for security is shutting down, but staying stably connected is also important. So when we talk with people from other governments, we would like to focus on the balance between the security of communication and the stability of communication. That’s my experience. OK, thank you very much. Can you please introduce yourself, your name and affiliation? I’m Masaki Nakamura from the Ministry of Internal Affairs and Communications of Japan. Thank you.

Moderator:
OK, thank you very much. So I would like to pass the mic to the online participants. Adli-san, can you please give the mic to one participant online? Thank you for waiting, online participants. I suppose there has been a conversation among the online participants. Adli-san or Bibi-san, is there anything you heard? I’ll just stop the timer and give some time to the online participants. Thank you. I’m afraid the online participants cannot hear the voice from on-site, because they’re all in breakout rooms, and someone is talking. Bibi-san is talking. OK. All right, online participants, are you audible? Can you hear the on-site voice? Can you talk into the mic? OK. The online participants are now finishing their conversation and will be back in five seconds. Sorry for the logistics. Hello, online participants. We are back. Sorry. No worries. So I would like to ask an online participant for opinions on the guiding question. Okay. Maybe this time and for the next question, Adli will take the lead. For this one, we were discussing not only the question itself, about how our work impacts the Internet and its security, but also how it impacts the user experience. So I would like to ask Adli to take the lead. Okay. Thank you.

Bernhards Blumbergs:
So, beyond the Internet and security themselves, I would like to address the question within the question: can it actually be open, free, and secure at the same time? I think it is not always possible to have all three of these things together, but we can always strive to reach freedom, openness, and, at the same time, security. With this, I pass the floor back.

Moderator:
Okay. Thank you very much. So, online participants, I guess you were already talking about guiding question two, right? No, we were talking about one, yes. Okay, good. So let’s move on to guiding question number two. The question is: what international geopolitical issues prevent CSIRTs engaged in cybersecurity from achieving an open, free, and secure digital cyberspace, and if we cooperate, how can we address this? This is also a bit of a difficult question for morning brains, but let’s start talking. For on-site participants, please do not talk with your friends; please make new connections in this session. OK, so please stand up and gather again.

Adli Wahid:
So there were just three of us in the session, and we discussed a couple of things. The first part was basically a participant sharing the need to always cooperate, both at the national level and globally, because we can definitely benefit from one another. We have different experiences, and when it comes to security, it is important to share quickly. So defend locally, but share globally, so that whatever experience we have in dealing with incidents, security, technology, or tools can benefit others so that they can be secure as well. On the second part, the geopolitical issues: yes, they do affect the cooperation of the CERTs. But in some of the recent events, the geopolitical issues have, in fact, strengthened the cooperation between CERTs and CSIRTs. This is a good sign, because it shows that despite what is happening around the world, there is a community committed to making sure that the Internet remains secure and safe for everyone. So that is it. If I missed anything, Bibi, you can jump in quickly. All right. No, it’s all good. Adli covered all the things. Yes. Thank you.
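The “defend locally, share globally” practice mentioned here is commonly operationalised by CSIRTs with Traffic Light Protocol (TLP) labels, which state how far a shared piece of incident information may travel. The sketch below is a minimal, hypothetical illustration; the class and field names are assumptions for illustration, not any particular CERT’s exchange format:

```python
from dataclasses import dataclass

# Traffic Light Protocol (TLP 2.0) labels used by CSIRTs to control
# how far shared incident information may be redistributed.
TLP_LEVELS = ["TLP:RED", "TLP:AMBER", "TLP:GREEN", "TLP:CLEAR"]

@dataclass
class SharedIndicator:
    """A single piece of incident information prepared for sharing."""
    description: str        # e.g. "phishing domain observed in campaign X"
    value: str              # the indicator itself (domain, IP, hash, ...)
    tlp: str = "TLP:AMBER"  # default: recipients may not redistribute widely

    def may_forward(self, to_public: bool) -> bool:
        """Share globally, but only as far as the label allows."""
        if self.tlp == "TLP:CLEAR":
            return True               # unrestricted disclosure
        if self.tlp == "TLP:GREEN":
            return not to_public      # community-wide, but not public channels
        return False                  # RED/AMBER: no onward forwarding

ind = SharedIndicator("phishing domain", "example.invalid", tlp="TLP:GREEN")
print(ind.may_forward(to_public=False))  # True: share within the community
print(ind.may_forward(to_public=True))   # False: not for public release
```

The point of the sketch is that the sharing restriction travels with the data itself, so cross-border cooperation does not have to mean losing control of sensitive details.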

Audience:
Cheers. Okay. Thanks. So next, I would like to hear a voice from the on-site participants. Does anyone… Okay. Please. Can you hear me? Okay. My name is Dilnath Dishanayake from Sri Lanka CERT. Let me summarize the discussion my colleagues and I had. The first point is sharing, as the online participants mentioned: sharing information. When a cyberattack happens, it is sudden, so we need to share information, because we don’t know where the source is coming from. It may happen in the Sri Lankan context, but the origin may be in another country, so finding the source is very difficult, and sharing information is very challenging. It is also affected by the local context, the legal side and the regulations, because cybersecurity sometimes overlaps with national security. So sharing information is one thing. Then there is resource limitation, because when we cooperate internationally, we definitely have to have expert-level communication specialists; only then can we do continuous monitoring or whatever else the resources allow. And also capacity building, and national and international engagement. Those are the kinds of things we were discussing, so I’ll stop here. Okay?

Moderator:
Okay. Thank you very much. It’s a very interesting opinion for me as well, but sorry, we do not have much time. Let’s move on to the next guiding question; can you please show the slide? Question number three is: to promote cybersecurity, what is a key message you would like to convey at this IGF, which is attended by a wide range of stakeholders? We can see many participants from many organizations at this IGF, so this question is important, I guess. Okay, so let’s stand up, gather, and talk again, please. Please find someone you have not yet spoken to. This is a networking session, so please don’t stick to your friends.

Audience:
Okay, thank you very much for your cooperation. So, this time I would like to ask for remarks from on-site first. Would anyone like to make remarks on this question? If any of you heard something interesting said by someone else, you can share that as well. Thank you. Thank you for putting me on the spot; that was the price to pay for my smile, I guess. Good morning. My name is Peter Koch. I work for the German top-level domain registry, and we are engaged a bit with the German CSIRT network. In this round, we had a conversation about, yes, the message, but that’s hard in seven minutes, of course. So take this with a grain of salt, but I think what we agreed on was that even in the face of, or maybe because we are facing, so many geopolitical tensions, it is important to keep up or continue information sharing. Maybe the pandemic is a blueprint in a way: there was a global crisis, but at the same time, global information exchange. So we need to keep up that information sharing. Thank you.

Bernhards Blumbergs:
Okay, thank you. Thank you very much. So next, I would like to hear the voice of the online participants. All right, I will take this one. This time there were four of us, and we tried to exchange a lot of information; this was a very productive breakout session. Although there were multiple points, and we encouraged every participant to bring in new viewpoints, I think it can be summarized in two general key directions. First, and this is nothing new, we have already touched upon it multiple times: information sharing. But this is not only about information sharing itself; it is also about facilitating where to share the information, trusted networks, and building the infrastructure for message exchange, and about designing the groups that can and may share the information, because just having people in a group doesn’t make sense. You have to make sure the information is there and everyone is engaged. Also, information sharing should happen not only within certain layers, such as the technical, operational, and strategic layers, but also between those layers, so as to build understanding and clarify, in simple and understandable terms, how a particular problem resonates at the operational and strategic levels. I’m talking from the bottom up; I’m mostly a techie, so that’s why I take it from the bottom up, from tech to the higher levels. That is the first direction. The second part is directly related to this. What we identified is also understanding how to make the internet a better place: understanding device security and personal security, and learning about cyberspace, because if we are using these tools, we have to be well versed in how to use them in the best manner possible, but also how to use them securely. This comes down to personal cyber hygiene. So we start with a single individual, but this expands to the nation; it is not only related to experts.
It is related to everyone who is part of society understanding cyber hygiene, getting it to the national level, and thus building a stronger global community. Adli, is there anything you would add? No, it’s all good; you covered everything. Thank you. All right. Thank you.

Moderator:
Thank you very much. These are very interesting opinions for me, and it is impressive that cyber hygiene matters not only for technical people but for everyone in the world; I totally agree with you. OK, thank you very much. That was the last question of the three discussions, so I will pass the mic back. OK. I heard some interesting topics covered on site and online as well, so I hope this session was a good exercise for your morning brains, and I truly hope you enjoy IGF 2023, because today is just day one. I’d like to thank again the moderators, supporters, and everyone here for making this session happen, for embracing this meeting spot for CSIRT practitioners at this IGF, and for exchanging each of your insightful views on Internet Governance. Thank you very much. Bye-bye. Bye. Awesome. Well done.

Adli Wahid: speech speed 162 words per minute; speech length 322 words; speech time 119 secs

Audience: speech speed 134 words per minute; speech length 1063 words; speech time 475 secs

Bernhards Blumbergs: speech speed 180 words per minute; speech length 634 words; speech time 211 secs

Hiroki Mashiko: speech speed 129 words per minute; speech length 106 words; speech time 49 secs

Masae Toyama: speech speed 111 words per minute; speech length 433 words; speech time 234 secs

Moderator: speech speed 114 words per minute; speech length 1687 words; speech time 885 secs

Safeguarding Processing of SOGI Data in Kenya | IGF 2023


Full session report

Jeremy Ouma

The assessment offers an exhaustive exploration of diverse facets of Kenyan society, legislation, and practice with significant emphasis being placed on LGBT rights and data protection.

A primary cause of concern emerges from the deficient legal protections for LGBT individuals against discrimination grounded in their sexual orientation or gender identity. The current Kenyan legal framework is found lacking in offering ample defence to these marginalised groups, thus fostering an environment in which the revelation of personal demographic data could invite bias and negative repercussions. This scenario is aggravated by particular practices linked to LGBT identities being outlawed in the nation.

However, this situation isn’t entirely devoid of optimism. A discernible societal shift is observed, with increasing momentum in Kenya to repeal or amend penal code provisions criminalising acts related to diverse gender identity. Despite hindrances to these initiatives, such as refusal of organisation registration, identifying as diverse is not itself outlawed. Activists highlight an ongoing case involving the NGO board as a paradigm in this advocacy.

Prominent worries regarding data protection and processing standards also attract attention in the analysis. The differential handling of sexual orientation and gender identity data under current legal structures is deemed problematic. In order to address these issues, organisations need to register with the Office of the Data Protection Commissioner. They are urged to prepare and update data protection protocols and increase awareness about privacy during data management. Internal capacity building within civil organisations is particularly underscored to foster greater awareness and engagement on data protection.

Furthermore, the analysis points out persisting challenges relating to digital platforms and their content management strategies. These include the potential amplification of harmful content and a lack of understanding of local context in content moderation processes. The scarcity of transparency within these systems further exacerbates the situation, yet progress is recognised through ongoing legal cases, such as the one involving Meta and its former content moderators. This case serves as a tangible pursuit to hold platforms accountable within the Kenyan jurisdiction. Efforts are underway to bridge the gap between local users and platforms, nudging platforms to better comprehend and respond to their user base.

Finally, the study extends its purview beyond Kenya, expressing negative sentiment around homosexuality laws in Uganda. The persistent implementation of these unfair laws, resulting in adverse prosecutions based on them, is recognised as a significant violation of human rights.

In summary, the analysis identifies a complex network of obstacles within Kenyan society, but simultaneously showcases several steps taken to address and surmount these challenges. It provides a detailed account of ongoing efforts and the dire need for progress towards a more inclusive society.

Angela Minayo

Angela Minayo emphatically discusses the significance of well-regulated data management in preventing human rights violations. She emphasises that while data affords immense opportunities, if poorly regulated, it can be manipulated to facilitate human rights abridgements. Therefore, robust data regulation legislation is essential to safeguard human rights.

Minayo affirms the necessity of a harmonised protection framework for sensitive data, encompassing gender and sexual orientation. The continuing debate about data protection, confidentiality, and personal autonomy prompts her to argue for more comprehensive legislation that provides protection not just for conventional data but also for sensitive information regarding an individual’s gender identity and sexual orientation.

Conversely, Minayo voices her reservations concerning the efficacy of Kenya’s existing Data Protection Act. Despite the law’s progressive human rights perspective, she regards it as deficient in its coverage of gender identity under sensitive data and its contradictory approach in handling health data and sexual orientation.

Minayo spotlights non-profit entities and highlights the need for these organisations to comply with data protection regulations. She proposes that while data protection often associates with corporate compliance, both for-profit and non-profit entities need to ensure adherence to the regulations. Significant mishandling of personal data, especially sexual orientation and gender identity data, can result in severe human rights implications.

She goes on to emphasise the complexity of data protection and espouses an increase in the resources geared towards effective data protection, notably for non-profit entities that perhaps lack the necessary resources for comprehensive data security. Minayo further calls upon data entities in various countries to register data controllers and processors to legitimise their roles and allow for efficient allocation of budgets and responsibilities for data protection.

Minayo highly commends the use of data processing templates from Article 19, designed to assist non-profits. She affirms these templates as serving as a checklist for various data protection procedures. She also underscores the importance of organisations soliciting consent for personal data usage and documenting said consent.

In the context of protecting sensitive data, she references the severe implications for marginalised groups, pinpointing the increase in online homophobia following a Supreme Court ruling in Kenya that authorised LGBTQ organisations to be formally registered. She asserts that existing as queer in Kenya is akin to a political act, endangering individuals with stigma, and even death.

Finally, the digitisation of sectors such as sex work and resultant data protection concerns that emerge due to Kenyan users’ data being handled outside the country, leads Minayo to recommend improved awareness of data protection laws, specifically for the evolving digital economy.

In conclusion, Minayo centres her discussion on the importance of contemplating data protection from a broad spectrum, ranging from the necessity for robust regulations to protect sensitive data, the urgency for more resources for non-profit entities, the relevance of data protection across all sectors, and the significance of stakeholder awareness.

Audience

The discourse primarily centres around pressing issues related to data sensitivity and the recognition of diverse gender identities within Kenya’s legal framework. One of the main criticisms in the conversation pertains to the relatively rigid categories of gender identities officially recognised in Kenya, which currently include Male, Female, and occasionally, Intersex. This limited classification, as suggested in the discussion, disregards the broad spectrum of gender identities, leading to negative sentiment surrounding this lack of inclusivity.

Additionally, the classification and handling of sensitive information in the country are scrutinised. Specific emphasis is on the differential treatment of data regarding sexual orientation and gender identity under the Data Protection Act. Whilst the data on sexual orientation is treated as sensitive personal data, data on gender identity is viewed as general data. This discrepancy raises concerns about the absence of a specific law, offering protection against discrimination based on sexual orientation or gender identity in Kenya.

The conversation further extends to the issue of data protection measures, particularly with respect to sex workers. A unique point highlighted by the audience is the necessity to safeguard data associated with this group, emphasising non-profits’ interaction with sex workers and the requirement to guarantee adequate protection measures.

Another point of interest stemming from the dialogue pertains to the influence of Uganda’s Anti-Homosexuality Act on Kenya’s data protection laws. Particularly, concerns are raised about whether the stringent regulation enacted by neighbouring Uganda might impact the LGBT community’s data protection in Kenya.

Platform accountability in Kenya also draws concern from the audience, with particular focus on the efficiency and procedure of incident reporting in cases of data breaches. The police’s involvement in such instances is queried, implying an underlying need for more robust incident response protocols.

A significant part of the conversation is dedicated to the management and confidentiality of health data. Clarification is sought on the mechanisms for sharing health data between facilities, with additional questions being raised about whether individual consent should be obtained each time data is shared. Participants inquire about the legal coordination between the Health Privacy Act and the Data Protection Act, seeking to understand which of the two pieces of legislation primarily governs patient privacy. These discussions shed light on the evident gaps and ambiguities in the country’s data protection and privacy laws, highlighting the public’s demand for a more transparent and protective legal system.

Session transcript

Jeremy Ouma:
I’m going to ask Angela Minayo to introduce herself. I will probably allow my colleague to introduce herself first, then we can get into it. Also, another colleague of mine is on the way. I think he’ll introduce himself as soon as he gets here. Thank you. Over to you.

Angela Minayo:
My name is Angela Minayo and I’m the Director of the Center for Human Rights. I’m interested in the topic because I believe that data unlocks a lot, but also it can be an area where, if not well regulated, can lead to further human rights violations. So I’m grateful for this panel, and I await questions from Jeremy.

Jeremy Ouma:
Thank you, Angela, and thank you to all of you for coming. So, we have done some research, basically a brief, a very brief one, by Article 19 Eastern Africa. As I’ve said, I work with Article 19 Eastern Africa, and we work on issues of freedom of expression, association, and, of course, gender identity. Specifically for this session, we have this brief produced as part of our data, our voice project, supported by the GIZ Digital Transformation Center in Kenya. The paper provides an overview of the processing of sexual orientation and gender identity data specific to the Kenyan context, and of the process of data protection in Kenya, particularly for LGBTQ people, who face heightened privacy and discrimination risks in comparison to cisgender populations in Kenya. What we hope to do with this paper is to increase awareness of data protection among different stakeholders in Kenya, be they regulators, data controllers, or data processors. Through this awareness, first, people get to understand their rights, whoever is part of the community; and second, data controllers and processors understand their obligations, adhere to the data protection laws, and build the community’s trust in them. So I’ll briefly go into some of the findings before we get to some questions with my colleague here. One is the insufficient legal protection for LGBT people, especially in the Kenyan context: identifying as LGBT is not outlawed in the country, but the practice or the action is outlawed.
So this has negative impacts on the disclosure and collection of this kind of information, sexual orientation or gender identity data, for example at a hospital or a bank, and the legal framework doesn’t provide protection from this discrimination. There is a blanket protection for everyone under the law, but no specific protection from discrimination on the basis of sexual orientation or gender identity. That’s one; in addition, the socio-cultural context continues to have a great impact on the treatment of sexual and gender minorities in the country. There is this phrase that keeps being thrown around by government and leaders, that it’s not our culture, so it’s not a very good environment. The second finding is the differentiated treatment of SOGI data under the legal framework currently in place. Sexual orientation is classified and processed as sensitive personal data, but gender identity is classified and processed as personal data: one is sensitive personal data, the other is general data under Section 2 of the Data Protection Act, which covers most matters of data protection in the country. In effect, this means that data controllers and processors handling this kind of data must differentiate and accord higher levels of protection to sexual orientation data, despite gender identity also exposing data subjects to similar risks and consequences. That’s finding number two. Number three is the restriction of SOGI data collection to legally recognized categories in both the public and private sectors. In the country, if you go, for example, to a bank or a hospital and they need to collect data, the categories are mainly male and female; sometimes they’ll say intersex, sometimes they’ll just put other. This is basically attributable to the failure of the law to recognize other gender identities and sexual orientations.
This paper was guided by contributions from key stakeholders: we did some key informant interviews and held focus group discussions with key industry players specific to Kenya. I’ll leave those as the very key findings, though we also have some other findings, and then go to some of the conclusions we drew and some recommendations. After that we can speak with my colleague and have a brief overview of the current situation. So I’ll go straight to some of the recommendations, which I’ve divided into two groups: one for data controllers and processors, and the second for, let’s call it, civil society. The first recommendation for data controllers and processors is basically around compliance. All public and private organizations and individuals processing personal data are required to register with the regulator, the ODPC, the Office of the Data Protection Commissioner, as stipulated under the Data Protection Act. So we encourage anyone that is processing data, be it government, a hospital, or a business, to register before processing data. Recommendation number two is to implement technical and organizational measures for compliant processing of this kind of sensitive data, including conducting a data protection impact assessment prior to processing, and engaging these communities. Third, appoint a data protection officer to oversee compliance with the Data Protection Act and other relevant privacy and data protection laws. And entities in the public and private sectors should prepare and update data protection policies and notices so that they are up to date with the needs of the community.
Finally, there is awareness: internal awareness for these entities, building a privacy-aware culture, so that you know what you need to do when processing this kind of data and how to handle it in the right way. I’ll go to the very last group, the civil society actors. One recommendation is to build internal capacity and undertake training, first to understand the frameworks of data protection and the impact they have on the communities we work with, and second, once you have this knowledge, to engage public and private sector entities to create awareness, so that they understand the impact this processing has and their obligations under the laws. Another is to advocate for the data protection commissioner to expressly include gender identity as a form of sensitive personal data, in light of the risk of significant harm that processing may cause to data subjects; it is important to have this captured and acknowledged by the data protection commissioner, with frameworks to protect this kind of data. And finally, to advocate for better laws, removing the discriminatory laws, whether by repealing them or amending them to be in line with international standards. So yes, those are the key findings and some of the recommendations we have in this brief. If anyone has any questions up to this point, I’ll happily take them before we go to my colleague; then we can have a discussion about the broader experience in the country. Yes, please.
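The differentiated treatment just described, where sexual orientation counts as sensitive personal data while gender identity is treated as general personal data, can be sketched as a simple classification rule. This is a hypothetical illustration only; the field names and the safeguard list are assumptions for the sketch, not taken from the Data Protection Act:

```python
# Illustrative classification of data fields under a Kenya-DPA-like
# regime: sexual orientation is sensitive personal data, gender
# identity is (currently) only general personal data.
SENSITIVE_FIELDS = {"sexual_orientation", "health_status"}    # higher protection
GENERAL_FIELDS = {"name", "gender_identity", "phone_number"}  # ordinary protection

def required_safeguards(field_name: str) -> list[str]:
    """Return the (illustrative) safeguards a controller should apply."""
    safeguards = ["lawful_basis", "purpose_limitation"]
    if field_name in SENSITIVE_FIELDS:
        # Stricter duties attach only to fields classified as sensitive.
        safeguards += ["explicit_consent", "encryption_at_rest", "dpia"]
    return safeguards

print(required_safeguards("sexual_orientation"))
print(required_safeguards("gender_identity"))  # weaker duties: the gap the brief flags
```

In this framing, the brief’s recommendation to the data protection commissioner amounts to moving `gender_identity` into the sensitive set, so that the stricter safeguards apply to it automatically.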

Audience:
Can I use the mic? So from your presentation, it seems there’s a kind of embedded contradiction: on the one hand, diverse gender identities are not officially recognized, but at the same time there is that risk from, say, banks and medical facilities, which is the focus of data protection. But in parallel, is there a movement or a drive towards having these identities formally recognized? Is something like that happening in Kenya?

Jeremy Ouma:
Okay, thank you. I think that’s the only question. Yes, there’s a drive to do that. Over the past couple of years, there’s been a drive to repeal sections of the penal code, I think it’s two sections, 162 and 163, that criminalize the act, not necessarily the person. In Kenya, identifying as diverse is not criminalized; it is the act that is criminalized. But most times the law is abused: people are denied registration for organizations, they’re denied… but there have been some good precedents. There’s been a long court case going on with the regulator, is it called the NGO board?

Angela Minayo:
So there was a petition to the Constitutional Court to declare section 162 of the penal code unconstitutional on the ground of discrimination. That petition was not successful, and section 162 is still operational in our country. So Kenya’s policy around it is to act like they don’t exist. Whenever they’re asked about it, the government always says that that is not a priority for Kenya, that we’re a third-world country and we have more pressing development issues to be concerned about. What has helped Kenya and the LGBTQ community is that our constitution is very progressive. The penal code is a relic of the colonial laws we inherited from the British, but the constitution dates from 2010, very new in terms of constitutional law practice, very new in constitution-making. And our constitution provides the right to non-discrimination very strongly, and the Bill of Rights, which enumerates human rights. So when LGBTQ organizations were denied registration based on the penal code, they went to court, and the case went up to the highest court, the Supreme Court. The court ruled that while what they’re doing might be a violation of the penal code under section 162, they have the constitutional rights to assembly and association, and therefore, when the registrar of NGOs refuses to register them, the registrar has contravened the constitution. So it’s the robustness of Kenya’s constitution, even on the right to privacy under article 31, and now the Data Protection Act, that gives you a very progressive human rights outlook. But we still have this elephant in the room, which is section 162, and you can see the contradictions. I hope that gives you an idea of what we are working with.

Jeremy Ouma:
And to also mention that there’s a lot of pushback from government. For example, in the petition, the registrar tried to say that you’re not supposed to do this. They are not ready to have these conversations. So yeah, I hope that answers you. Thank you. Then I’ll go straight to a couple of questions for my panelists here. The other one will arrive, I’m not sure when, but she should be on the way. So we can basically start by looking at the legal framework for processing of data, not necessarily just sexual or gender data, but in general: what’s the framework governing this processing of data?

Angela Minayo:
Yeah, so as I had stated earlier, it starts in our constitution, with the right to privacy. Then we have international human rights commitments, some of which you know: the International Covenant on Civil and Political Rights and the African Charter on Human and Peoples’ Rights. These are the bases and frameworks for the right to privacy. And then in 2019 we operationalized the Data Protection Act. Like many other African countries’ laws, it is a replication of the EU GDPR, and that comes with pros and cons. Some of the pros are that the GDPR provided a very good framework with a complaints-handling mechanism and an independent office. I’ll put independent in quotes, because you can say it’s independent, but who’s appointing? So I won’t say independent, I’ll just say it has a body, because the independence is questionable. So we have the Data Protection Act, and it’s a very elaborate framework. I’m not going to focus so much on the downsides, other than to say that, just like under the EU legislation, we expect that when data is being transferred from Kenya to other countries, there are enough safeguards to provide equal or similar protection for data being handled in those third countries. So I hope that answers it. Maybe we can now go to sexual orientation and gender identity. I work at KICTANet as a gender digital rights programs officer, and last year we also had conversations around gender and data protection, and this conversation cannot be had without talking about sexual orientation and gender identity data. Just a fun fact, even before I delve into the Kenyan ecosystem: there’s a very progressive approach to data protection from the southern African region.
I don’t know if you knew this, but SADC, the regional bloc in southern Africa, came up with a model framework for privacy law for its member countries, and they put gender as one of the forms of personal or sensitive data. So while Kenya just talks of sex, they talk about gender, and there’s a difference. When you just talk about sex you’re referring to biological sex, which is what is assigned at birth: you’re male or female or intersex. But when you talk about gender you’re talking about someone’s expression, and that might sometimes not align with their biological sex. So that’s very progressive. I always try to talk about the SADC model law because it took a different approach, even from the GDPR, which is something we want to see more of: regional blocs and countries taking their own approach to data protection in a way that makes sense for them, but also in a progressive way, in a feminist way. So I just like to mention that. Now, in Kenya you will find that gender identity is not specifically provided for under sensitive data. It is treated as just personal data, and that means it can be processed, one, when there’s consent, and two, even without consent, when the data processor or the data controller can prove that the data is necessary for performing certain tasks. So they’ll say: you entered into a contract, and part of my obligation was to do ABCD, and that obviously means I have to process your data to do that. So, how can I say this, there is no consent necessary in certain aspects of personal data processing. How is this a problem? It is a problem because we can still see how gender identity identifiers can lead to human rights violations. It could be job applications, it could be loan applications, so it could lead to further violations.
Now, for sexual orientation, I think we’ve already set the scene for you to understand how sexual orientation is grappled with in our legal system: it is a crime, and yet that same information is protected in the data protection framework, which sends contradicting signals. And it’s not just sexual orientation data, if I may add. Health data has also been one of the things we’ve grappled with, because health data is dealt with in different frameworks. There’s our Health Act, which empowers health practitioners to collect data necessary to perform their work, and at the same time health data is sensitive data that cannot be processed unless certain safeguards are given. This is something you will keep seeing in countries that pass a Data Protection Act but don’t review or reform the laws that existed before. You end up with a very interesting set of laws, if I may say. So when data is deemed sensitive, it means there are more safeguards towards its protection. You’ll find that if there’s no consent, then the data controller or the data processor must prove the necessity of collecting or processing this data. That falls under data minimization: we want you to collect only the data that is truly necessary for what you’re trying to do. So what happens when you treat sexual orientation data and gender identity data separately? Let me give you an example. You’re saying sexual orientation data is protected, right? But gender identity data is not sensitive data; it can be processed in other ways. Yet when we create links between datasets, we can tell that this is Angela or this is Jeremy. Jeremy is male; that is not protected, we can tell that. And Jeremy is on an app that is for the queer community. So while Jeremy’s profile only tells us he is male, we can identify Jeremy’s sexual orientation from the apps he’s using.
So we need a harmonized protection framework that protects Jeremy both for his identity as male and for his orientation as queer. And this is of course just for example purposes; after this meeting, please don’t harass Jeremy. But you get the point: what I like to say is that we need an equilibrium, a spectrum of protection that cuts across and doesn’t stop at a certain point.
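The linkage risk described above can be sketched in a few lines of code. This is a purely hypothetical illustration (the records, the app name, and the join logic are invented for this example): neither dataset is “sensitive” on its own under a framework that protects only sexual orientation data, but joining them on a shared identifier reveals exactly the attribute the framework meant to protect.

```python
# Hypothetical linkage attack: two individually "harmless" datasets,
# joined on a shared user identifier, reveal sensitive information.

profiles = [  # treated as ordinary personal data (not "sensitive")
    {"user": "jeremy", "gender": "male"},
    {"user": "angela", "gender": "female"},
]

app_users = [  # membership list of an app aimed at the queer community
    {"user": "jeremy", "app": "queer-community-app"},
]

def link(profiles, app_users):
    """Join the two datasets and infer orientation from app membership."""
    members = {row["user"] for row in app_users}
    inferred = []
    for p in profiles:
        if p["user"] in members:
            # The join discloses what neither dataset held on its own.
            inferred.append({**p, "inferred_orientation": "queer"})
    return inferred

print(link(profiles, app_users))
```

The point of the sketch is that protecting the datasets separately is not enough; the safeguard has to cover what can be inferred from combining them.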

Jeremy Ouma:
Thank you, Angela, and you’ve preempted my next question, which is about your experience with the practice of processing data, especially sensitive data.

Angela Minayo:
Thank you. For this talk to be important for the people in the room, I’d really like to talk about processing of data in non-profit entities. For a long time we’ve been talking about data protection as a matter of company compliance. But the message that sends is that regulation and compliance are things that companies do. How do non-profits comply? When they’re sending financial reports to donors? Compliance is a very foreign word to non-profit entities. Yet you’ll find non-profit organizations process a lot of sexual orientation and gender identity data which, if mishandled, has serious human rights ramifications. So we need to understand data protection as something that applies both to non-profit and for-profit entities, the companies in this case. Even in how we have conversations around data protection, it’s all big tech, and of course I understand why we do this, because those are the examples that make the most impact and the most sense to the people in the room. But we also need to start talking about processing of data by non-profit entities. What will happen, for instance, if you are a non-profit like Article 19, you operate in Kenya, Kenya is becoming maybe a draconian state, and the state can have access to your documents? Do these organizations have a plan? Do they know how to fight back when this data is requested from them? We keep saying, oh, Apple is such a good company because it will never comply with requests for information from governments. Do we ask the same questions of non-profit entities? So I think it’s very important to have those conversations about data protection from a non-profit point of view as well.
So, from practice, what is very worrying is this idea that data protection is a concern for companies and not for non-profits, yet you will find that the people who handle most of the sexual orientation and gender identity data, the people doing the research and collecting data in these areas, are non-profit entities. Another worrying practice I’ve seen in my country, and I’m speaking from Kenya’s perspective; during the question and answer session I would invite you to give us perspectives from your countries, is that there are so many myths and misconceptions around data protection and processing. I’ll give you an example. Last month, let me just say last month because I’ve lost my sense of time, our Office of the Data Protection Commissioner issued penalty notices to three companies for breaching the Data Protection Act, and one of the entities that was fined was a club, a place people go to have fun, and they were taking photos of the people taking part. In Kenya, for some reason, clubs have this obsession with taking photos of revelers having fun at their joint. I don’t know why; I don’t know if they feel they wouldn’t make enough sales without it. It’s a whole data minimization and necessity question: do you really need it? But they did, and they ended up being fined because the data subjects complained about them to the Data Protection Commissioner. I’m also made to understand that we should not take it for granted that our Commissioner can issue penalties. Apparently, in some jurisdictions, the investigative powers and the powers to give penalties are curtailed, so I just wanted to put that as a side note.
And what the other clubs have understood from this penalty notice is to put emojis on the faces in the photos they take, and they did this immediately after the penalty notice. That’s how unserious a country Kenya is; we use humor to get through, it’s a very tough place to live in. So anyway, the point is they think putting emojis on the photos is complying with data protection. It’s that bad, exactly. There’s a very pedestrian approach to understanding data protection, because if there are applications that can remove the emojis and de-anonymize the photos, then they have not complied with the Data Protection Act if there’s no consent and no necessity. I’ll end it at that, because I think it’s a light note and it tells you what problem we’re dealing with.

Jeremy Ouma:
Thank you. If you have any questions, I’ll take them at the very end. I’ll just throw a few more questions to you. Building on what you’ve just highlighted, are there some worrying impacts you have seen from this kind of processing of sensitive data, or of any data in general?

Angela Minayo:
Actually, I’ll give an example of an activity we were doing at KICTANet. Last year we were doing something known as a Gender Internet Governance Exchange under the Our Voices Our Futures project by APC, and we had people from the queer community among the participants. Before, we used to work just with women, and we would take photos and post them, as part of the reporting but also the social media campaigns. And they told us that some of them are closeted, and putting their photos online in an activity that is clearly for queer people would put them at risk. I get comments that data protection should be something very, how can I say this, serious: that this is about penalties of 50 million in the EU and Facebook and Meta. But that minimizes the harm that such data breaches can have on ordinary people who are not celebrities, who are not in the EU, and whose complaints therefore cannot attract the penalties that exist in the European Union. So understanding that, in the context of stigma and even the deaths of queer people we’ve seen in our country, is a serious risk we need to keep in mind. I’ll give another example of the homophobia we saw online once the Supreme Court made a ruling allowing LGBTQ organizations to be formally registered. We had a lot of disinformation online, and what people understood that ruling to mean was that Kenya had formalized LGBTQ relationships, which was not even the case. We wish that were the position. It is not. And the homophobia, the messaging online, was: we will kill them, we are never going to accept this. And I kid you not, it was not just from people online, it was even from the leadership at the national level. So when there’s this understanding that this is an undesired people among us, it also warrants, justifies, motivates and incites hatred against that group. So being queer on its own, existing as queer, is a political act in Kenya and in certain other countries.
So let me just end it at that.

Jeremy Ouma:
Okay, thank you. I think the final one should be: do you have any recommendations or insights on best practice for the collection and subsequent processing of this kind of data?

Angela Minayo:
Yes. First of all, I’d like Article 19 to actually publish the resource; I just want to call you out here. You need to publish it, because they have annexed amazing templates that people can use when processing data. Data protection is a very complex discipline, I always have to remind people; we can’t cover all the bases in one topic or in a 45-minute panel session. But there’s a need for more resources, and not just for for-profit entities. Those ones have enough money to get the DPOs, to get the people to help them comply. But what happens to non-profits, whose resources are quite minimal? So what Article 19 has done is to come up with templates for non-profits, like a checklist, which is what we need, because this is such a complex process. So: you have this data, have you gotten consent? If you don’t have consent, do you have another basis for it? Have you documented it? Documentation is so important in data protection because you need to preempt what can happen in the future. Will you ever need to provide proof of consent, for instance? Those are things we might not be thinking about, especially operating as non-profits, but that is the age of data protection we’re in: you need to be documenting consent, documenting the contracts you have with data controllers and data processors. Let me just explain this. Sometimes you’ll find there are two entities involved in the processing of data. There’s a data controller, the entity that collects the data and directs how it is going to be processed. And then you can have another entity being the data processor. These are the ones who are going to store the data, anonymize the data, analyze the data, and so on. So they have different functions depending on whether they’re a data controller or a data processor. Sometimes we use these words among people in the data protection field without explaining what the ramifications are.
To put it more simply, a data processor is an agent, or an employee, of the data controller. So essentially, at the very end, the responsibility for all the data protection issues, the breaches and consent and whatnot, lies with the data controller and not the data processor. Having them registered with the data protection authorities in their countries is very important, because it also gives them the justification for having budgets towards compliance. So let’s have this resource online, please, because I think for non-profits it’s very important.
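The documentation practice described above, recording consent so the controller can later prove its lawful basis, could be sketched as a minimal record structure. This is a hypothetical illustration only (the field names and structure are invented for this example; real requirements depend on the applicable law and the templates in question):

```python
# A minimal, hypothetical sketch of documenting consent so that a data
# controller can later demonstrate its lawful basis for processing.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    data_subject_id: str   # pseudonymous reference, not a real name
    purpose: str           # the specific processing purpose consented to
    lawful_basis: str      # e.g. "consent" or "contract"
    obtained_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    withdrawn: bool = False

    def withdraw(self) -> None:
        # Withdrawal is recorded, not deleted: the history is the proof.
        self.withdrawn = True

rec = ConsentRecord("subject-001", "research survey on digital rights", "consent")
rec.withdraw()
print(rec.withdrawn)  # the withdrawal, like the consent, is documented
```

The design choice worth noting is that nothing is ever erased: both the grant and the withdrawal of consent stay on record, which is what lets the controller answer "can you prove it?" later.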

Jeremy Ouma:
Okay, thank you. And just a disclaimer: the resource will be published around the end of October, so sometime in November it will be fully ready. It’s ready; it’s just that we haven’t yet published it. There’s a whole process to go through, but it will be published. And the main aim of this resource is to create awareness about data protection. Today we looked at some of the challenges and the impacts this processing has on specific groups, but the paper tries to create awareness about data protection, along with some recommendations of best practices for data controllers, data processors, and civil society actors in general. So yeah, I think I’ll leave it at that, but I’ll take some questions if there are any. We can take them all at the same time, and then we can end after that. Over to the floor.

Audience:
Wait, where’s the microphone? Is this live? Where is the information going, is it on the internet, is it being filmed? For transcription? OK. So it’s on YouTube. OK, so when they turn off the mic, I’ll ask my question. OK, sure. Hi, I have three questions, I hope that’s OK; I have many questions. My first question has to do with how data from sex workers is being saved, used and protected, because we know that data from sex workers tends to be more sensitive, and non-profits also work with sex workers, perhaps in Kenya as well. So I would like to know if there’s a difference, or if you have any particular remarks on the privacy of sex workers’ data. My second question is how the Anti-Homosexuality Act in Uganda has affected Kenya and Kenya’s data protection laws. And my third question is: what is it like for you as civil society to work with platforms in terms of platform accountability? We know you have the data protection laws, but how accountable are platforms in Kenya when you register a report or an incident? How does it work, also together with the police and the judiciary? These are my three questions. Thank you.

Jeremy Ouma:
Any other questions? Yes, please.

Audience:
Hi. This is actually more of a clarification, because you gave the example of the health data sharing scheme. My clarification is: is there a Health Privacy Act? And in that case, when health care providers are getting the data, what is the sharing mechanism? Say one hospital is taking you on as a patient, and there is some kind of electronic health record: are they sharing and uploading to a repository that is shared across facilities, so that every facility has access to that data? Is that how it works, or do they need to get consent from the patient every time? That’s one clarification. And second, if there is a Health Privacy Act and there is a Data Protection Act, how does the coordination between the two work, and which regime does the patient’s privacy fall under: the Data Protection Act or the Health Privacy Act? Thank you.

Jeremy Ouma:
Thank you. Is that the last question? OK. Do you want to take any of the questions? OK. Then take the two questions.

Angela Minayo:
I’ll start with Paisa’s question, and this is a topic I really like, so I hope I don’t get too passionate and talk too much. It’s very interesting that you raised it: we currently have a bill being tabled in parliament called the eHealth Act. The eHealth Act talks about telemedicine, but it also talks about protection of health data, which is very interesting. And I think people need to stop doing this; they should just have called it health privacy legislation, because that’s what it does. It provides a framework where, first, collection will be consent-based. Two, there will be sharing of data across health facilities. Three, there will be health identifiers: they’re going to assign unique numbers to both patients and health facilities. And four, they want the data to be portable; they want to give control of the data to the patient, so the patient will have all their records in a portable format. We don’t know what this portable format will be, but you know, data portability. It’s still being debated, but that’s what they have in mind. On how it is going to operate together with the Data Protection Act: most of the time you’ll find the caveats for exceptions saying “if prescribed by any other law”, or on grounds “prescribed by any other law”. That is normally how we try to interlink laws: if it’s talking about prescription by another law, we go to the other law, or any other relevant law in question. But we can talk about this after the session. The question on sex workers: again, sex work is also penalized in our country. Of course we understand that the situation differs from place to place. But we also understand that sex work is becoming digitized. There’s OnlyFans and so many other, what are they normally called, webcam-based apps, where sex workers are still part of the digital economy.
And that also means this data is sometimes being processed outside Kenya. Again, the level of awareness is low even on basic data protection accountability in Kenya. So how bad can it be for a user based in Kenya whose data is being processed outside Kenya, for example in the EU? Those are questions and conversations we are yet to grapple with in Kenya. But I’m glad that KICTANet is part of the OVOF project, and this could be part of the research we conduct to understand how it’s being dealt with. So I’ll just give you that context for those two things. I’ll let Jeremy take the question on the homosexuality law in Uganda and how it’s affecting Kenya, and on platform accountability.

Jeremy Ouma:
Okay, thank you. I’ll start with platform accountability specifically. First of all, there’s an ongoing case at the moment between Meta and some of its, let’s call them, former content moderators, about matters of accountability. There was, let’s call it, a good precedent, in that platforms can now be held accountable for their actions within the Kenyan jurisdiction. That case is still developing, but we see it as good progress. There are also, from our point of view, one or two things we’ve tried to do. First, there’s a coalition we’ve tried to bring together specifically around content moderation. We did some work on the current practices of content moderation in a couple of countries, with a specific focus on Kenya, so I’ll talk specifically about Kenya: basically understanding the experiences and challenges of Kenyans around content moderation, takedowns and all that. Some of the things we found are that platforms are potentially amplifying harmful content, and that there’s a lack of understanding of local context, which is why we are trying to push for some decentralization of content moderation, so that we can at some point hold platforms accountable for what happens on their services. There’s also insufficient transparency in content moderation. And finally, we are trying to bridge the gap between the platforms and local users and stakeholders, to get some kind of conversation going on how we can make the platforms better. So yeah, in the interest of time, there was a second question: Uganda.
For the case of Uganda, it’s not somewhere we want to be. We’ve recently been hearing of people being prosecuted based on this law, and there have been some very bad cases. In Kenya the situation is similar, but not as bad. But in relation to how it has affected Kenya, I think there’s been some…

Angela Minayo:
There’s some potential legislation. We have the culture bill; it started out as a family values protection bill. Just to wrap up: these are funded by Eurocentric, far-right evangelical radicals. It’s really sad, and it’s not African; it’s actually Western ideals being imposed on Africans. We’ll end at that. Thank you so much for attending our session. Thank you.

Angela Minayo

Speech speed

168 words per minute

Speech length

3903 words

Speech time

1395 secs

Audience

Speech speed

153 words per minute

Speech length

607 words

Speech time

238 secs

Jeremy Ouma

Speech speed

145 words per minute

Speech length

2503 words

Speech time

1034 secs

Digital sovereignty in Brazil: for what and for whom? | IGF 2023 Launch / Award Event #187

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

The concept of digital sovereignty is being examined from various angles. Brazil, known for its investment in digital sovereignty, has a long tradition of supporting it through the production of technological equipment and open-source code. This highlights their commitment to maintaining control in the digital realm. However, there are concerns regarding the practicality of achieving complete independence in technology, as no country can truly be independent in terms of technology, food, or energy.

Another point of discussion is the need for clarification on how the current project will relate to the research conducted by FGV-CTS, a prominent research institution that has been studying digital sovereignty for a considerable time. This indicates the desire to build upon existing knowledge and ensure consistency in efforts related to digital sovereignty.

The legitimacy of internet governance in a multi-stakeholder environment is considered a complex issue. This is because digital sovereignty and the involvement of various stakeholders in governing the internet raise questions about who should have the authority and power to make decisions and set policies. It requires careful consideration to strike a balance between different interests and ensure fair representation in decision-making processes.

It is important to note that the term sovereignty inherently denotes exclusivity. When people talk about digital sovereignty, they often focus on their own or their community’s interests, without considering the broader implications. This highlights the challenges of defining and implementing digital sovereignty in a globalized and interconnected world.

On a positive note, the removal of ICANN from the sovereign system is seen as advantageous for certain aspects of internet coordination. By separating it from the influence of individual nations, ICANN can operate in a more impartial manner, contributing to more effective and neutral coordination of the internet.

However, there are differing opinions on the feasibility of digital sovereignty. Some argue that it is a fantasy, as sovereignty in a political and legal sense implies exclusivity. They believe that true sovereignty cannot be achieved in the digital realm.

Misunderstood or poorly implemented digital sovereignty may have significant consequences for the fundamental characteristics of the internet. Legislative or regulatory measures imposed under the guise of digital sovereignty could jeopardize the open and decentralized nature of the internet, hindering innovation and limiting access to information.

In contrast, others view sovereignty as a means of reasserting control and empowering individuals in the digital sphere. It is seen as a process term that allows people to assert control over what occurs in the digital realm, enabling them to shape the digital landscape according to their needs and interests.

In summary, discussions around digital sovereignty are multifaceted. While Brazil’s investment in digital sovereignty demonstrates a commitment to control and independence, challenges regarding practicality persist. The relationship between current projects and existing research needs clarification, and the legitimacy of internet governance in a multi-stakeholder environment is a complex matter. The term sovereignty itself carries connotations of exclusivity, and differing perspectives exist regarding the feasibility and implications of digital sovereignty. Ultimately, the aim is to achieve a balanced approach that preserves the fundamental characteristics of the internet while enabling individuals to have control and influence in the digital world.

Flavio Wagner

Brazil, as one of the world’s top 10 largest economies, possesses a robust industry across a variety of sectors, including the digital sphere. This positions Brazil as a significant player in the global economy. Brazil has also implemented laws such as the Marco Civil and the privacy law, which are similar to the General Data Protection Regulation (GDPR) in the European Union. These regulations highlight Brazil’s commitment to safeguarding privacy and ensuring justice in the online realm.

However, there are growing concerns about internet fragmentation and digital sovereignty within Brazilian legislative bills and public documents, where the term “digital sovereignty” is increasingly mentioned. These concerns indicate a potential risk of unwanted internet fragmentation. Discussions on platform regulation and cybersecurity proposals often emphasise the importance of digital sovereignty as a motivation for proposed bills.

To address these concerns, a project in collaboration with CEPI aims to facilitate academic and public policy debates about sovereignty, connecting the in-depth analysis of the Brazilian context to the regional and global levels. This partnership with CEPI seeks to expand the understanding and knowledge of the academic and public policy discourse on sovereignty issues.

Moreover, the project collaborates with FGV Sao Paulo, a renowned academic institution with numerous groups working in different cities. This collaboration aims to enrich the academic and public policy debates on sovereignty matters.

Despite diverse areas of focus, the project maintains an ongoing dialogue with various groups, including Luca Belli’s team. This continuous engagement enables the exchange of insights and perspectives, contributing to a comprehensive understanding of digital sovereignty within the Brazilian context.

By analysing public documents, bills, and policies, the project aims to comprehend the impact of digital sovereignty on the evolution of the Internet in Brazil. This comprehensive examination is crucial to evaluate potential implications of legislations or regulations tied to digital sovereignty, as there is a concern that they may threaten fundamental characteristics of the Internet.

As a precautionary measure, it is crucial to exercise awareness and discernment in comprehending how sovereignty is utilised and interpreted in relation to the digital economy. The project follows the approach advocated by the Internet Society to ensure a thoughtful and well-informed discussion on digital sovereignty.

Additionally, the project aims to educate and inform individuals about the diverse perspectives on digital sovereignty in Brazil, as well as the social, legal, and technological implications associated with different definitions of sovereignty. This comprehensive understanding will enhance the overall discourse on sovereignty, not only in Brazil but also in other parts of Latin America and globally.

Flavio Wagner recognises the importance of thoughtful evaluation when proposing public policies, regulations, or legislations to address sovereignty issues. This careful consideration is crucial in understanding the potential consequences and impacts they may have.

In conclusion, Brazil’s significant role in the global economy, along with its robust industry and commitment to internet regulation, has sparked discussions on digital sovereignty and its possible implications for the Internet’s evolution in the country. The project, in collaboration with CEPI and FGV Sao Paulo, aims to foster a comprehensive academic and public policy debate that encompasses various perspectives and potential consequences. By analysing the Brazilian context and connecting it to regional and global levels, the project strives to contribute to a more informed and well-rounded discourse on digital sovereignty.

Ana Paula Camelo

The term “digital sovereignty” is frequently mentioned in Brazilian legislative bills and other public documents. However, it lacks a shared definition, creating ambiguity and confusion about its meaning and implications. To address this issue, research has been conducted in partnership with the Brazilian chapter of the Internet Society and the Center for Studies on Freedom of Expression and Access to Information (CEPI).

The research aims to map and discuss the various narratives and stakeholders involved in Brazilian debates surrounding digital sovereignty. This comprehensive analysis, based on desk research, study group discussions, and expert interviews, will contribute to a deeper understanding of the topic.

In addition, an online training course on the issues of digital sovereignty is set to be launched. The course, accessible through the websites of the Brazilian Internet Society and CEPI, will feature recorded lectures, suggested bibliography, and interactive discussion activities. This initiative aims to enhance knowledge and awareness about digital sovereignty among a wide range of individuals.

Digital sovereignty is considered essential for several reasons. It plays a crucial role in self-determination, the regulation of state power, national security, and technological and scientific development. It also intersects with areas such as artificial intelligence, misinformation, and fake news, and serves as a means of protecting the rights of citizens, including those belonging to consumer and minority groups.

Ana Paula Camelo highlights the importance of understanding the Brazilian context in the global discourse on sovereignty. By doing so, it becomes possible to contribute to a more comprehensive and inclusive global narrative. Camelo encourages individuals to engage with the ongoing research, offering feedback and suggestions to facilitate collaboration.

In summary, the term “digital sovereignty” is widely used in Brazilian legislative bills and public documents, but lacks a shared definition. Through research conducted in partnership with the Brazilian Internet Society and CEPI, a comprehensive understanding of the topic is being developed. An upcoming online training course will further promote knowledge and understanding of digital sovereignty. Digital sovereignty is crucial for self-determination, state power regulation, national security, and technological and scientific development. Understanding the Brazilian context is emphasized to contribute to a broader, global narrative on sovereignty.

Raquel Gatto

The concept of digital sovereignty and its connection to internet fragmentation is a major concern for the Internet Society. They have defined fundamental principles and values for the internet and developed a toolkit to assess the impact on these principles. This approach involves continuously evaluating the situation in different regions and countries. They have created impact briefs that provide specific case studies, such as a proposed bill for a law regarding content moderation and fake news in Brazil. This highlights the significance of digital sovereignty in Brazil, where it is a major issue that has led to increased research.

Digital sovereignty encompasses political, technological, and economic aspects, and it is intricately connected to the issue of internet fragmentation. The Internet Society emphasises the importance of understanding different interpretations of sovereignty in the digital sphere. They recognise that capturing these interpretations is essential to safeguarding the fundamental aspects of the internet and ensuring that digital sovereignty does not compromise its core principles.

In their work, the Internet Society also focuses on the broader implications of digital sovereignty, including technological, political, and legal aspects. They highlight the need to consider the potential risks associated with claims of digital sovereignty and the impact they can have on industry, innovation, and infrastructure. By understanding these implications, they aim to contribute to peace, justice, and strong institutions in the digital realm.

Furthermore, researchers like Luca Belli have joined the conversation, bringing their expertise in cybersecurity and a nuanced understanding of digital sovereignty as part of national security protection. This highlights the significance of digital sovereignty in safeguarding nations from potential cyber threats.

Overall, there is a neutral stance towards understanding the implications of digital sovereignty. The Internet Society and other researchers place great importance on gaining a comprehensive understanding of its various dimensions. By doing so, they can contribute to the preservation of a free, open, and secure internet that upholds the fundamental principles and values defined by the Internet Society.

Session transcript

Raquel Gatto:
I’m going to introduce myself, my name is Raquel Gatto, I’m the vice president of the Internet Society Brazil chapter, and I have the pleasure here to have the president of the Internet Society Brazil chapter, ISOC Brazil, Flavio Wagner, on site, and online I have my colleagues who are also going to present this session, Ana Paula Camelo from CEPI/FGV, Ana, welcome, and also online I have the pleasure to co-moderate this session with Pedro Lana, our director from ISOC Brazil. I also want to acknowledge Lohri Schippers, who is online and is our rapporteur, as well as other members from ISOC Brazil who are on site. Thank you very much for everyone joining. So without further ado, in my role as moderator, I’m just going to give a little bit of context in terms of why this is one of the topics that the Internet Society is tackling. First of all, in the flow of the global vision that the Internet Society is putting forward; then I will have Flavio presenting the Brazilian situation, the Brazilian scenario, and how we are going to tackle the research; then Ana is going to give us the detailed version of this project; and then I’m opening up for questions, so we have 45 minutes to go through it all. So let me just start by saying: the question that was put on the table by the Internet Society, the international organization, is, what is the biggest threat to the Internet? This is such a simple question in a way, but with very complex answers, and it’s interesting because there was one big concern that arises from the answers that were collected, and the answer is the SplinterNet. I’m not sure if everyone is familiar with that, but the SplinterNet is when you don’t have the network of networks.
The internet is made of these smaller networks, whose voluntary adoption of a common protocol, TCP/IP, makes it the network of networks, the internetworking: an open, global, secure, trustworthy internet. And when it comes to the threats to divide it, to fragment the internet, there are multiple ways it can take place: from a political view, let’s say jurisdiction issues; it might be from digital sovereignty issues, digital national security issues; from technical challenges, infrastructure challenges. But in all of those there is a pre-question, which is: what makes the internet the internet, and what are we going to protect from being divided, from being splintered, split? So the Internet Society started precisely to define what are those fundamental characteristics, the principles, the main values for the internet to be what it is, and it published a policy brief back in 2012… 2019, sorry, and then it followed up with an impact assessment toolkit. So it’s not only about describing what it is to keep it an open, globally connected, secure and trustworthy internet, but it is also about how you can assess what is going on in your country, in your region, and so on. And then the chapters and the global community are taking this up and creating these impact briefs, which are basically documents taking one case study. For example, in Brazil we had one case study about a proposal for a bill, for a law, regarding content moderation and fake news. So that’s one of the examples of those impact briefs that are being populated in this main project. So, taking this lead on: here’s the biggest threat, here’s what we want to protect; then, what are the main overarching themes that we are seeing in the public discourse? So first of all, you have internet fragmentation, right?
It is by nature in ISOC’s DNA to take the technical considerations of internet fragmentation: it’s when the Internet is no longer, for example, using a common protocol, when those smaller networks are not connected to the whole Internet, and so on. But then it needs to be recognized that there are other forms being discussed, including here at the IGF; the policy network on internet fragmentation is also taking up the other concepts of internet fragmentation, which also cover the user experience and the governance, the internet governance fragmentation, right? But we are not going to do a deep dive on those yet, just to acknowledge that there is this overlap between internet fragmentation and digital sovereignty, whereas digital sovereignty is one of the conditions, one of the situations where internet fragmentation is taking place or risks taking place. And then I come to the main topic of the session today, which is digital sovereignty and all the different definitions that you can take, in the sense that digital sovereignty is a political view when we take up the nation-state concept. It can be a technological matter when we are talking about the appropriation, for example in developing countries, to be more producers than just receivers. It is also an economic issue, and it is also a power struggle, in historical terms, for new shapes of digital colonialism. But this is just an overview that we wanted to bring in terms of how the discussions have evolved: assessing the risks for the internet, getting to understand what is the internet that we want to protect, and what shapes these discussions are taking in the internet ecosystem. And then I’m going to give the floor next to Flavio, who is going to tell us how this is taking place in the Brazilian scenario. Thank you very much, Flavio.

Flavio Wagner:
Thank you, Raquel. So, hi everybody. Nice to have you with us here this morning in Japan. So Brazil is a very large country, one of the 10 largest economies in the world, with a strong industry in various sectors. And also in the digital realm, Brazil has a lot to show. And because of this context, Brazil has, over time, proposed and implemented various local regulations regarding the internet. For instance, the so-called Marco Civil, the Internet Bill of Rights, which was approved in 2014, setting out a whole set of rights and principles. It’s a principle-based law. Later on, in 2019, we approved our privacy law, very similar to the GDPR in the European Union, for instance. And these laws are very compatible with international standards regarding rights and duties. There are many other discussions going on in Brazil. There are regulatory proposals being discussed in the National Congress, such as the Artificial Intelligence Bill and the Fake News Bill for content moderation. There are many discussions in the country regarding a local law on cybersecurity, and various discussions on dimensions of platform regulation, not only the question of content moderation and fighting disinformation, but also economic issues. And there are open discussions, not yet in the form of bills in the Congress, about data sovereignty. This was a large discussion when we approved the Marco Civil many years ago, for instance on data localization, so these are discussions that come up again and again in the country. And the term digital sovereignty is increasingly cited in Brazilian legislative bills and in public documents. If we take, for instance, current discussions on platform regulation and cybersecurity proposals, they explicitly include sovereignty as a strong motivation for the proposal of bills. But, of course, this question is not only a problem in Brazil; overall, there is no clear or shared definition of what sovereignty means.
There are many flavors of sovereignty: for instance, the economic issue, as Raquel has presented before, the technological question, data sovereignty, and so on. So, what was our proposal then? Next slide, please. Yeah, we, the Brazilian chapter of the Internet Society, partnered with CEPI, a research center at FGV, a very prestigious academic institution in Brazil, to develop a project which is partly funded by the Internet Society Foundation. It started about one year ago, and it’s ongoing. And the main project objective is to qualify the academic and public policy debate on sovereignty and, starting with an analysis of the Brazilian context, explore all the socio-technical dimensions of this debate and its technological and legal challenges. And because there are so many flavors of sovereignty, and sovereignty is invoked for various proposals in the country and elsewhere, we try to identify notions of sovereignty built from various stakeholders’ narratives from various sectors, taking into account the legal, social, economic and political implications, and trying to connect this analysis of the Brazilian context not only to the local level but also to the regional and global levels. So, as concrete goals, the project aims to map and discuss, first, whether and in which sectors the discussion on digital sovereignty has emerged as a trend. Second, which narratives have guided this debate in an attempt to secure public support, and based on which justifications? So, why is sovereignty being used as a motivation to secure public support for various proposals that are going on in Brazil? Third, how does the relationship between digital sovereignty and the internet take place in the Brazilian debate?
And the final goal is to understand how the creation or change of policies and legislative instruments linked to these narratives from the various sectors implies local and global challenges, in technical, political and social terms alike. As Raquel said, one of the global challenges we have is how we can avoid fragmentation, and sometimes regulatory or legislative proposals motivated by a fair claim of digital sovereignty may have unintended consequences for the fundamental characteristics of the internet. And we are trying to explore those relationships. So now I hand over to Ana, who is online and will continue this presentation.

Ana Paula Camelo:
Thank you, Flavio. Thank you, Raquel, for introducing the broad and the Brazilian context that creates the great opportunity for our research. And I may say I wish I could be with you in Kyoto, but I’m glad that we have this online participation that allows me to be with you from Sao Paulo. So hello to everybody attending. Thank you for your time as well. As mentioned before, the term digital sovereignty is increasingly cited in Brazilian legislative bills and other public documents, but without a clear, shared definition; this is something that we are willing to work on and deepen in our discussions. It creates for us, as academic researchers, a great and important opportunity to identify and relate these understandings, as Flavio has mentioned just before. Most cases refer to digital sovereignty in relation to the internet. We are also open and interested in connecting our local impacts and discussions to the global challenges, and being here is a great opportunity for our project in this regard. If you could go to the next slide, Raquel, thank you. To achieve the goals and objectives mentioned, the research is based on three main sources of data and information. First, I highlight the desk research to collect public documents and other types of publications from different sectors and to map narratives and stakeholders involved in Brazilian debates. This is the main base of our research. The database built with all the documents collected has been studied through methodical analysis, and we want to identify the narratives that are at play, who has been part of this discussion, which instruments are considered, and for what reason. At the end of the project, we will share an impact brief and other documents showing all these relations and all the results of this goal.
Alongside this effort, we have a study group on digital sovereignty and internet governance topics that happens monthly, and we have conducted several interviews with experts and researchers to advance some of these debates. I must say some of our colleagues are there with you in Kyoto and others are here online attending this panel, so they are also welcome to join the discussion later. That summarizes our main methodological approach to the research. Now we are almost reaching the one-year project milestone, as Flavio mentioned, and our conversations, and even the study group, I must say, started even before the project; now we are moving into an important second phase of the project. You can go to the next slide, please, Raquel. Here you can see our project timeline as a reference. The preliminary results I will share in the next slide are based mainly on this great effort of the first year, considering the document mapping and the interviews, and also the conversations and discussions with researchers and experts. And we are very excited that soon we will begin an open and free online training course exploring digital sovereignty issues, some of the topics that we discussed and collected during the research. This is the first experience of this kind of course that CEPI and ISOC Brazil have put together, and it works with public calls for applicants who want to attend the course; the participants are selected based on gender, race, sector, profession and regional diversity, as we try to bring a very diverse public to join the initiative with us. We are also working a lot to have public agents and also journalists with us, so we can expand and help them to reflect on the theme in their daily activities.
The course will be based on recorded lectures, suggested bibliography and discussion activities, and, if you are interested, at the end all the content, all the videos and the material will be shared on the ISOC Brazil and CEPI web pages, so it will also be open content for everybody interested in this theme, and also focused on community outreach. We will have webinars, public events and the impact brief that I have mentioned, which will be launched at the end of the project. We will share the news; you can follow us and keep up with the updates about the project. But I cannot end my participation here without talking about the main results we have in the research so far. The desk research, with the literature review, brings us the main challenges regarding the theme, mainly concerning the compatibility between the traditional sovereignty of states and the open and borderless nature of the internet, as Raquel has mentioned; but many of the concepts and impacts discussed from the global north do not necessarily apply to the reality of the global south. So it’s a very important space for us to discuss our perspectives, our realities, and to connect all these agendas. Our mapping effort is an ongoing task, due to the theme’s relevance nowadays in the country, and by now we have more than 245 documents gathered and analyzed, as I mentioned regarding our methodological approach. This database contains bills, laws, reports from different instances, and also news articles, media articles, that help us to understand how the theme of digital sovereignty has been an important issue in the Brazilian context. Since the beginning of the mapping, it has also been possible to verify that using the term digital sovereignty as a keyword did not provide us meaningful results.
So we needed to amplify our strategy and look for themes that were related to the thematic, sometimes without the explicit use of the term, so we could reach more publications connected to the agenda. After this, multiple and diverse understandings are at stake, and we group them into some main contexts, some main understandings that are important to share with you: the relations between digital sovereignty and self-determination, data self-determination, states’ power to regulate, jurisdiction, technological or scientific development, national security, open source software, among others. This is just to highlight some of the key themes related to the discussion, but these are the main ones. And regarding the 15 semi-structured interviews that we carried out, they ratified, they confirmed, some of the results we had collected and analyzed from the desk research. The main one is that only one interviewee indicated that he could identify a consensus about the definitions of digital sovereignty in Brazil, and he associated it with governance, with the government’s and society’s capacity to rule the development of the country and to use digital technology to collect data. But he was the only one; the other 14 interviewees shared different perceptions, and all of them were very clear in saying: we still don’t have a consensus, we don’t have a common, shared definition or understanding of it. But many of them associated it with artificial intelligence and with the misinformation and fake news agenda, as the context Flavio shared at the beginning of his presentation shows. So the interviews also provide us with very different perspectives related to political, legal and technical lenses, as Raquel mentioned. And it’s interesting to say that they described digital sovereignty, at the same time, as an instrument to guarantee rights to citizens in different areas, such as consumer and minority groups’ rights.
At the same time, they are very afraid of power imbalances and, somehow, of the impact beyond Brazilian borders. So this is something that I would like to highlight. But I’m mindful of my time, so I’ll stop here, reinforcing that I would like to invite you, if you are interested, to follow the CEPI and ISOC Brazil websites and social networks, so you can stay updated about our next steps. And thank you all for your attention. I’ll be happy to answer any questions or comments later.

Raquel Gatto:
Thank you very much, Ana. And you were missed here in Japan, but next time, hopefully, we will all be together. So with that, we have finished the presentations. And the idea, just to recap on this session, is to share the ongoing project between ISOC Brazil and CEPI/FGV, where we are looking into digital sovereignty; the documentation and the interviews being collected are going to inform a course, and then materials and documents that we are shaping to understand all the nuances and how digital sovereignty is understood in the country. But it’s also grounded in the Internet Society’s global work in relation to Internet fragmentation and the understanding of digital sovereignty worldwide. And now, our intent was not only to share what we are doing, but also to collect inputs on your views and any other work that is underway that could be useful for us to consider in this project. So, I’m now opening up the microphone. Thank you. Mark is going. There are two microphones available. If anyone wants to ask questions, please go ahead.

Audience:
Hello. Mark, Derisca speaking. I’m an Internet Governance Consultant. So, thank you for sharing the project. I have been following it. It’s actually very interesting and deep. One question that I do have is: Brazil has a long tradition of not only the bills that we are discussing, but a prior investment in digital sovereignty, in the production of technological equipment, in the development of open source code, and in a series of actions that predate the current discussion. So, in a way, the country was already ahead of the curve, like some other countries that preceded this movement. So, has the group started looking into that more historical approach and trying to understand if it has any correlation with the current developments, or if these are different phenomena that are happening in different places and times? Thank you. Thank you, Mark. I’m going to take both questions and then… I think we have time for one more question and then we can go back to the presenters. Raul? I have two questions. One is that I saw very recently a paper published by Luca Belli from FGV about digital sovereignty in Brazil and India. I wonder if it has any relation with this work, or is it an absolutely parallel line? I think it’s a very good question, because I didn’t understand if it was related, since it’s like an anticipation of the results of the work that you are doing. And the second question is whether you are considering as one possibility, or questioning, the expression itself, digital sovereignty, given the difficulty in the programme of considering this expression; but what about if the expression doesn’t make sense at all, right?
In fact, every time that I read things about sovereignty, it’s about independence or technological independence, and I don’t know if there is any country in the world that has such a thing as food sovereignty or energy sovereignty, because there is no country in the world that is absolutely independent in everything, in relation to everyone, everywhere. So I’m just open to questions. Thank you. Thank you very much, Raul, and very valuable questions. I’m going to go to Ana and Flavio, and I might contribute. It’s a really short one; it has to do with what we just asked. Ale, can you introduce yourself? Yeah, definitely. I’m Alexandre Costa Barbosa. I’m also joining the discussions of the CEPI FGV-ISOC Research Group on Digital Sovereignty. I’m a fellow at the Weizenbaum Institute. Actually, I’d like to hear a bit more about, allow me to ask this question, how is it going to relate with the research conducted by FGV-CTS? Because actually, the research group from CTS has been working on digital sovereignty for a long time. So how can we combine these efforts with what is being conducted by Sao Paulo? Thank you very much.

Raquel Gatto:
Thank you very much, Ale. So I’m just going to give a brief answer, putting on my hat as also part of the ISOC Brazil chapter, to say that we invited Luca Belli. He joined one of the local group sessions precisely so we could find some of the synergies in the project. But Luca Belli’s work is focusing on cybersecurity, so digital sovereignty understood as national security protection, while in this project we are looking at a wider and broader view of digital sovereignty, also including the technological issues and the political and legal implications. So I’m going to pass to Flavio and Ana if they also want to comment. But I think this is important to be clear about the parallel work that is being done. It’s not a competition; it’s really that we are at a moment in Brazil where this is such an important issue that more and more research is blossoming, which is a good thing. And then we are looking for synergies on how to work together. So that’s my reaction. Flavio?

Flavio Wagner:
Yeah, Raul, maybe you are not aware, but FGV is a very large academic institution, with many different groups, even in different cities. So Luca Belli is in Rio, which is one group, and we are partnering here with FGV Sao Paulo, which is a different group. And so, as Raquel said, we are trying to keep a dialogue with Luca Belli and his team. We invited him to discuss with us, but we are taking really different directions, I suppose. And regarding the question from Mark, we are looking for all public documents, bills, and public policies that explicitly use the concept of digital sovereignty as a motivation, or are related to digital sovereignty. In this regard, of course, Brazil has a long tradition of digital sovereignty approaches. With the development of local technologies back in the 70s and 80s, we had a very strong industrial policy for the development of local technology. So, of course, this is also of interest for the project, for the mapping we are trying to build. But this is more a historical thing, so we are more interested in what’s happening now: what the current discourses of the different stakeholders are, and how this can impact the evolution of the Internet in the country. And regarding the other question from Raul… Self-determination.

Raquel Gatto:
You want to… Let’s bring in Ana, and then I can tackle this one if needed. And we have more questions in a moment. Milton, I’m just noting yours. Ana, do you want to make any reactions? We have more questions coming from the floor soon. Yes, just a brief contribution regarding the historical approach that Mark asked about. I can reinforce Flavio’s perspective about our short-term interest, I must say, looking to the present, to the ongoing controversies and debates. But we will be able, I’m pretty sure, to build an interesting timeline of the main topics and some trends related to the discussion in Brazil, which I would say are very recent in the way you’re framing them and in what you’re looking for regarding the past. And also related to FGV, thank you, Raquel and Flavio, for explaining that we are a big institution and we have very different but connected centres. And I would say that we are very open and connected to the CTS agenda. They have great materials and research on this theme. But in our research, we are taking, I would say, a broader approach. We want to have this kind of understanding and this connection with the CTS approach, but not as the only approach on the table. We have seen other perspectives, and controversial perspectives, going on. So we want to understand them and go deeper into the narratives and the main questions and issues in their background, so we can also contribute to CTS research in some way. This is something I would like to add. And finally, regarding the meanings that Raul shared in his question: such expressions sometimes don’t make sense at all. It’s a very good point, Raul, thank you for sharing it. And it’s a personal perspective, I’d say, but sometimes some expressions are used as hype; people bring them in to connect to very different reasons and subjects, and they somehow lack the connection with the original or core meaning they could represent.
So this is one thing that we are very worried about. But the main issues, I would say, are still connected to critical infrastructure. When you talked about food and energy, somehow they share this kind of relevance; they appear with this relevance and the sense of “let’s talk about it, let’s make it happen.” But something that we have discussed is that we are still at the level of words, not, I would say, of concrete agendas or concrete initiatives. Besides the discussions about AI and the fake news that we’ve mentioned, it’s still something more discussed than acted upon, without concrete impact yet, but many things are happening.

Audience:
Thank you very much, Ana. I think, in regard to the second question from Raul — whether this is going to be a patchwork, where you have an organised view of these meanings, or a crazy kaleidoscope — that’s going to be answered by the end of the research. And then we have Milton. You want to take the floor? Sure. I missed the first part of your talk, but I know that we’re having a conversation about the legitimacy of Internet governance in the multi-stakeholder environment and also digital sovereignty. And one thing I want to ask you about: when most people talk about digital sovereignty, they think about it only for themselves or their community, right? The problem with that is that the notion of sovereignty in a political and legal sense inherently means a kind of exclusivity, right? So if I have sovereignty as, let’s say, the United States, then you don’t, right? And if Brazil has sovereignty over its Internet, then Venezuela and Europe and the U.S. don’t. And so I would like to encourage you to think about the international relations aspect of sovereignty and not hold it up as some kind of fantasy where everybody can control everything for themselves but doesn’t have to worry about anything else. That’s just a fantasy. And the other thing is, you’re all involved in ICANN, so you all know that one of the best things we did when we created ICANN was that we got it out of the sovereign system, right? We said names and numbers are not going to be governmentally run, and that’s been very successful at making sure that certain aspects of Internet coordination are not politicized. So I want to make sure people understand that the term sovereignty has an appeal to people, but it’s not a one-way street. There are going to be competing claims of sovereignty, and sometimes those claims are not necessarily going to be compatible.

Raquel Gatto:
So I think that’s a good point, Milton. I just want to make sure people also understand part of the work we are doing right now. The first phase, let’s say, is really capturing a photograph: how the term is being used in the documents and how people are really seeing sovereignty. So it doesn’t mean that this is the understanding that we have; it’s really capturing how it’s being used — for instance, when it is presented as exciting for citizens and businesses that the digital economy is becoming “sovereign” — while being aware that such claims of digital sovereignty may possibly hurt some fundamental aspects of the Internet.

Flavio Wagner:
And, of course, most people, when they talk about sovereignty, about self-determination, about local technological development or local control of the data of Brazilian citizens, may be thinking about those goals. But, of course, they are not aware of the possible implications of the legislation or regulations that can be imposed. We in the project are very much aware of those things, following the line of the Internet Society approach: digital sovereignty, if not well understood and not well implemented, may hurt some fundamental characteristics of the Internet, and we are very much aware of this.

Audience:
Thank you very much, Flavio. And we have one last question before we wrap up. Thank you. Thank you very much. My name is Peter Bruck. I’m the chairperson of the World Summit Awards. I’m from Austria, and I initiated the Data Intelligence Initiative, and we have talked a lot about the issue of data sovereignty, especially in the European context. What Milton was saying was very interesting, because he gives us the legal definition and the legal implications. I see that the term sovereignty — and I think this is also very much reflected in some of what you have said — is a way of asserting control in a situation where you don’t feel that you have control. So it is a process term in that sense, a term for the process of seeing whether you can empower yourself to get control over something. You’re striving for sovereignty. And then, I think, what would be important is that you get to a situation of negotiation with the others. I think this is something which we really need to see. I’m very interested, at the IGF and in all conversations here, in looking at things which are doable and not fantasy. And I think it’s very important that this term sovereignty is seen as a term enabling people to claim their rights and control over whatever is happening in the digital sphere on many, many different levels. So I just wanted to add this to Milton’s intervention, and thank you very much also for your smart conversation and reply.

Raquel Gatto:
Thank you very much for this contribution. I think it’s important and adds to our goal here, which is precisely first to understand how the term is being used and then to help educate on where we want to take it from there. And then I’m going to give a one-minute wrap-up to Flavio and Ana, just to say a few thoughts, and then we need to close the session and move to the next one. Thank you.

Flavio Wagner:
Thank you, Raquel. And thank you all for coming and sharing your thoughts with us. As I said, we are trying to take a photograph of the narratives from the different sectors in this area in Brazil, to relate this to the global discussion on sovereignty, and also to use it as a means to educate people in Brazil. So we are collecting information, and then we will spread the conclusions of the project to the wider community, not only in Brazil but in the region, in Latin America, and globally, so that people understand these different flavors of sovereignty and the legal, technological, and social implications of those proposals and definitions. We are really trying to contribute to the debate and show the implications of the different definitions of sovereignty, and the different implications of the public policies, legislation, or regulations that are proposed to be implemented following this motivation of sovereignty. And I conclude here. Thank you.

Ana Paula Camelo:
Thank you, Flavio. Ana? Well, I would also like to thank you all for your contributions; they were very insightful. And I want to reinforce that, at the same time as we look at the Brazilian context and the Brazilian debate and their implications, we have the goal of not only looking inward, at our own reality and context. This kind of dialogue with other perspectives, other realities, other understandings and impacts is very important for us. And we will keep this aim until the end of the project, to make this discussion broader, but also to contribute, as Flavio mentioned, to our country. So I would reinforce that you are very welcome to share and send feedback and other suggestions, and to stay connected to our research, not only during this panel but also afterwards. It will be a pleasure to keep in touch with you. So thank you again. And Raquel?

Raquel Gatto:
Thank you very much, Ana, Flavio, and also Pedro and Lori, who are supporting online. We need to wrap up because the next session is going to start soon. Thank you very much to everyone for the contributions. We remain available here at the IGF for any conversations you want to have and further inputs for the project. Next is the legitimacy of multistakeholderism in IGF spaces. Thank you very much.

Ana Paula Camelo

Speech speed

129 words per minute

Speech length

1629 words

Speech time

755 secs

Audience

Speech speed

172 words per minute

Speech length

1249 words

Speech time

437 secs

Flavio Wagner

Speech speed

137 words per minute

Speech length

1329 words

Speech time

584 secs

Raquel Gatto

Speech speed

137 words per minute

Speech length

2300 words

Speech time

1007 secs

Governing Tech for Peace: a Multistakeholder Approach | IGF 2023 Networking Session #78


Full session report

Audience

As a representative of a youth organisation operating under the remit of the United Nations Department of Political and Peacebuilding Affairs (UNDPPA), Manjia exhibits a keen interest in the role that young individuals might fulfil in the ‘Tech for Peace’ initiative. Her focus on youth involvement stems from a firmly held conviction that engaging this demographic is not merely beneficial but crucial to the success of the undertaking. However, Manjia’s curiosity is not limited to participation alone; she also shows a genuine interest in uncovering and scrutinising any established best practices or forthcoming plans for fostering youth engagement in ‘Tech for Peace’.

The term ‘Peace Tech’, central to this discourse, warranted clarification, which was provided by the director of Access Now. The term is an umbrella reference encompassing a diverse range of domains, from the advancement of human rights to the safeguarding of environmental justice. The director’s commentary extended beyond mere terminological elucidation, presenting some critical perspectives on how ‘Peace Tech’ could be interpreted and utilised.

At the heart of these insights was a concern surrounding ‘techno-solutionism’ or the excessive dependency on digital tools and technology to address inherently complex human challenges. Arguing against the overreliance on technological solutions, the director emphasised the need for traditional peace-building efforts. Hence, he advocates embedding technical discussions within broader dialogues on peace-building and other interrelated sectors to guarantee a comprehensive approach.

These contemplations and sentiments underscore several Sustainable Development Goals (SDGs) set forth by the United Nations. Notably, these viewpoints reflect the core aspirations of SDG 16, which promotes Peace, Justice and Strong Institutions, and SDG 9, oriented around Industry, Innovation and Infrastructure. The attention to digital rights and the wariness of ‘techno-solutionism’ serve to underscore the complex relationship between technology and peace-building, shedding light on the concurrent requirement for innovation and circumspection.

Evelyne Tauchnitz

The discourse principally revolved around three domains: peace, human rights, and the influence of technology. Central to the discussion was Evelyne’s comprehensive definition of peace. She emphasised that peace extends beyond the mere absence of direct or physical violence. Peace should also incorporate the eradication of structural and cultural violence, forms of oppression that can prevail in society even without overt conflict.

Evelyne underscored the integral role that human rights have in defining our comprehension of peace and ‘peace tech’. She identified human dignity and freedom as the cornerstone values that bind together peace and human rights. She argued that these principles ought to form the bedrock when evaluating peace technologies. The central argument was rooted in the notion that any advancement in technology should not infringe upon basic human rights under the pretext of facilitating peace. If a technology is found transgressing these human rights, its categorisation as ‘peace tech’ becomes problematic.

However, Evelyne expressed her concerns that not all technologies, notwithstanding being labelled as ‘peace tech’, necessarily harmonise with the values of peace and human rights. This suggests a need for more stringent scrutiny of such technologies and the establishment of robust standards for defining and categorising ‘peace tech’.

The symbiosis between human rights and peace was further analysed, affirming that respect for human rights is indispensable but not sufficient for peace. Peace was envisaged as a broader concept that transcends the domain of legal enforcement of rights. In contrast, human rights were characterised as narrower in scope, being subject to legal enforceability.

An additional perspective broached was the potential paradigm shift in our perception of peace due to digital transformation. Evelyne posited, optimistically, that digital technologies could be leveraged to forge a more comprehensive peace. However, she also underlined the necessity for prudence, especially in instances such as social scoring systems, currently instrumental in China for rebuilding societal trust.

On the whole, the discussion reiterates that while technology and human rights are pivotal in shaping peace, vigilance is required to ensure that the pursuit of peace does not inadvertently engender inequality or violate fundamental human rights.

Moses Owainy

Moses Owainy, the esteemed CEO of Uganda’s Centre for Multilateral Affairs, brings to light the necessity for a more diversified dialogue in peace tech, asserting the significance of integrating a multitude of global perspectives. Concentrating his argument on Sustainable Development Goals (SDGs) 16 and 9 – namely peace, justice, strong institutions, industry, innovation and infrastructure – Owainy stresses the need to embrace a wide range of viewpoints while deliberating peace tech initiatives.

Owainy adeptly highlights a key consideration: the varying impact of peace tech in different geographical contexts. He underscores that results of these tech endeavours differ noticeably between regions such as Uganda, Kenya or Nigeria and their deployment in more advanced economies. This accentuates the pressing need for context-sensitive, adaptive approaches to peace tech in diverse socioeconomic landscapes to ensure equitable outcomes.

Simultaneously, Owainy instigates a discussion on the very terminology used in peace tech dialogue, criticising the commonly used term ‘global peace’. He denotes its inherent ambiguity, advocating instead for the term ‘international peace’. Owainy asserts that this revised term would better embody the collaborative efforts between various states and multi-stakeholder groups, in their shared pursuit of peace.

In conclusion, Owainy’s insights point towards a more inclusive and adaptable narrative in peace tech, suggesting a shift in terminology for clearer understanding and collaboration. His observations also underscore the need for nuanced, context-specific application of peace technologies, acknowledging regional differences and the necessity of adaptability.

Marielza Oliveira

Marielza Oliveira acts in the prestigious role of Director for Digital Inclusion, Policies, and Transformation at UNESCO, guiding the organisation’s endeavours towards the advancement of digital inclusion and policy transformation whilst supporting the broad Sustainable Development Goals (SDGs) of fostering innovation and reducing inequalities. Her work is primarily centred around the protection of two fundamental human rights: freedom of expression and the right to access information.

A crucial aspect of this role is her commitment to UNESCO’s ultimate mandate: fostering peace in the minds of men and women and facilitating the unhindered flow of ideas. This aspiration is pursued through various media, underlining the value UNESCO accords to the digital ecosystem.

To guarantee the regulatory approach towards internet platforms also respects this mandate, UNESCO has emphasised the necessity of regulations that uphold human rights. This conversation was furthered in its recent ‘Internet for Trust’ conference, demonstrating the organisation’s positive sentiment towards internet regulation sensitive to human rights.

Moreover, UNESCO intends to build the capacities of stakeholders to counter digital exclusion and inequality. This ambitious goal will enable better participation in the digital ecosystem, striving for a more inclusive internet environment. These capacity-building efforts strongly align with SDGs centred on industry, innovation, and infrastructure, as well as reduced inequalities.

In regard to achieving such aims, significant results are being accomplished through multi-stakeholder approaches. These alliances involving tech companies and government agencies aim at maximising opportunities and fostering a diverse cohort of partners. Examples of these advancements include the ‘AI for the Planet’ project combatting climate change and the United Nations Technology Innovation Lab utilising technology innovatively to foster peace-building. ‘Social Media for Peace’ is another remarkable project designed to combat online polarisation.

UNESCO stresses the importance of ensuring that internet regulation does not infringe upon the rights to freedom of expression and access to information. The organisation is currently planning to release guidelines encouraging an environment of regulation that emphasises processes rather than content, promoting transparency and accountability. This approach counters potential restrictions that could limit the freedom of expression for journalists, activists, and others.

The analysis underscores UNESCO’s leadership role in utilising technology and promoting digital inclusivity to achieve significant societal benefits, and its commitment to embedding these principles within its operational structure and processes. Its robust stance on maintaining a balanced approach to internet regulation, which safeguards human rights whilst promoting accountability, underpins its dedication to peace, justice, and robust institutions. Such commitment reflects the considerable alignment between UNESCO’s operations and the wider SDGs.

Mark Nelson

Co-directors Mark Nelson and Margarita Quihuis are spearheading significant advancements in peace technology at both the Peace Innovation Lab at Stanford and the Peace Innovation Institute in The Hague, striving to commercialise their peace technology research. This pioneering approach, utilising modern technology, aims to reshape how peace is perceived and established globally.

A fundamental component of this research is the use of sensor technology. In the last two decades, sensors capable of detecting and monitoring human social behaviour have seen groundbreaking advancements, transforming how human interactions are observed and understood in real time. This invaluable insight underpins the development of detailed peace metrics in high resolution.

The concept of ‘persuasive peace’ lies at the heart of their innovation, favouring influence-based strategies over conventional coercive tactics. By crafting customised approaches suited to varying individual circumstances, they are redefining how peace is achieved and maintained.

An essential element of their endeavours is to establish a vital link connecting peace technology with capital markets to create a viable ‘peace finance’ investment sector. They aim to capitalise on the intrinsic but often unquantifiable value of peace, setting the stage for a new era of peacekeeping and conflict-resolution strategies.

To summarise, Nelson and Quihuis’ groundbreaking work embodies a progressive approach to advancing the principles encapsulated in SDG 16 (Peace, Justice, and Strong Institutions) and SDG 9 (Industry, Innovation and Infrastructure), demonstrating the transformative power of technology in the pursuit of peace. Their efforts are leading the way in peace innovation, reshaping our understanding of peace, and promoting peace technology as a crucial aspect of infrastructure development and peacekeeping strategies.

Moderator

The debate primarily delves into the interplay between technology and peace, with a focus on the contemporary role of technology in fostering peace across varying social contexts, as evidenced through initiatives such as the Global Peace Tech Hub. This initiative, championed for advocating the responsible use of technology, centralises on the principle of instigating cross-sector dialogues and sharing enthusiasm for the possibilities that technology can provide in the attainment of peace.

The discourse also recognises the dual aspect of technology as both an agent of positive change and a potential risk. This dual facet of technological advancement was brought into the spotlight, with positive influences such as the enhancement of democracy and increased access to services weighed against negative implications such as the promotion of misinformation and the potential to catalyse societal polarisation. Going forward, the panel agreed unanimously that strategies must be formulated to mitigate these risks if technology’s full potential for peace promotion is to be exploited.

UNESCO’s efforts at encouraging climate action with projects like AI for the Planet are highlighted as an epitome of how collaborations among diverse parties in the technological arena can result in meaningful outcomes, even when corporations are unable to commit substantial resources. The use of technology to counteract climate change receives a positive appraisal, especially when it supports initiatives that allow various actors to make a pronounced difference without substantial, long-term commitments.

Additionally, the dialogue underscored the significance of the partnership between UNESCO and the European Commission, embodied in the Social Media for Peace project. These efforts, seen as integral to initiatives such as the Internet for Trust guidelines, aim to regulate internet platforms not through content, but through process, asserting the importance of transparency and accountability in this space.

An essential reflection in the debate is the profound transformation peace technology has undergone. Innovative advancements enabling real-time analysis of human behaviour and interactions have led to a significant shift from coercive peace mechanisms towards more persuasive peace technology. This evolution has been lauded as a historic shift for the human species, opening the potential for fostering peace through persuasion rather than coercion.

The role of the youth in propelling human rights and peace movements, despite the inherent risks associated with digital activism, is highlighted. The willingness of young people from countries, including Thailand, Sudan, and the USA, to remain in the forefront of these movements, despite potential threats like spyware attacks and concerns over digital permanence, is noted.

Also significant is the sense of mistrust within certain societal factions. Phrases such as ‘peace tech’ and ‘tech for good’ are seen as potentially contributing to a trust deficit, with neither the technology sectors nor large humanitarian agencies being considered entirely trustworthy when it comes to personal data.

Finally, with the array of perspectives on peacekeeping through technology, the shared consensus was the need for continuous dialogue within the tech community. Seen as crucial to dismantle existing silos and establish a common understanding of technology’s role in peace, this call for consistent multi-stakeholder dialogue served as a central theme throughout the discussion. The breadth of opinions within the peace tech community, from seeing technology as a tool for peace to viewing it as a new battleground, or even a threat to peace, further underlines the importance of these dialogues.

Speaker

The concept of peace transcends the domains of security, extending to incorporate values of freedom and equality. It constitutes a comprehensive vision for societies’ development that endeavours to eliminate all forms of violence – direct, structural, and cultural. However, this idealistic vision can face threats from inappropriate use of technology. Instances where technologies infringe upon human rights pose a significant issue for peace and security on a global scale. Regrettably, the notion of peace can be misappropriated as a brand for technologies that may not contribute to harmonious societies.

Concurrently, recognising the potential of technological advancements, the United Nations is diligently prioritising digital transformation and technology. The UN Secretary General has been vigorous in advocating for this priority, particularly within the realm of cyber security. A testament to this focused effort is the creation of the Office of the Technology Envoy, established to champion these changes on a grand scale. Indeed, there’s a burgeoning understanding of the need for the UN Security Council to incorporate the monitoring of digital and cyber roles into their mandates, signalling the significance placed on technology in global security guidelines.

Nevertheless, amid these advancements, concerns regarding the right to access the internet and freedom of expression persist as substantial challenges in the digital era. Access Now, a focused human rights organisation, primarily champions these digital rights. Incidents like the deployment of spyware during the Armenia-Azerbaijan conflict highlight how tech misuse can affect peace negotiators and exacerbate conflicts. Furthermore, the role of social media platforms in content governance can hold far-reaching implications during crisis times, a stern reminder that technology can be a double-edged sword. In extreme situations, internet shutdowns have inflicted harm on human rights and obstructed conflict resolution.

Indeed, human rights are a necessary condition for peace. Nonetheless, peace as a concept demands a more comprehensive understanding, exceeding that of human rights alone. Peace isn’t exclusively about respect for human rights but also significantly involves the mechanisms and ethics by which political decisions are made.

The digitisation of society could conceivably impact our comprehension and notion of peace. Young individuals are poised to play a key role within this transformative process. Impressively, youth across the globe are already spearheading movements championing human rights and peace, frequently risking their personal safety in pursuit of these causes.

Nevertheless, a pronounced trust deficit exists towards tech sectors and large humanitarian agencies handling personal data. Passive monitoring via ubiquitous sensors is viewed as a threat, illuminating the public’s discomfort with potential breaches of privacy. This underlines the challenge of striking a balance between leveraging technological advances and safeguarding human rights and security. Thus, whilst digital advancement offers vast potential for societal development, it is crucial to remain cognisant of their inherent risks to maintain peace, justice, and strong institutions.

Session transcript

Moderator:
I see a few of my participants — I don’t know if they are able to share video or audio to talk during the session or interact; at the moment I’ve seen only Mark Nelson able to activate his video, and I’m not sure whether the others technically can activate theirs or not. How are you in Tokyo? All good? Do you start, or? Well, I can start. Anyway, this is going to be a very informal session, of course. It’s a networking session. How many of you have ever attended a networking session before? No? Good. So we’re going to share this experience together, because I have no clue what a networking session is, but I guess it’s going to be a very informal moment to discuss and share enthusiasm and interest around tech and peace, whatever that means. Something that, I hope, we’re going to come out of with some common ground on this specific topic. So I’m Andrea Calderaro. I’m Associate Professor of International Relations at Cardiff University, also affiliated with the European University Institute through the Global Peace Tech Hub. Today we have a series of colleagues participating in this session remotely: Michele Giovanardi in particular, who is the coordinator of the Global Peace Tech Hub and the leader of this initiative. Then online I saw Mark Nelson, director of the Stanford Peace Innovation Lab; Evelyne Tauchnitz, Senior Researcher at the Institute of Social Ethics; and Peter, of course, who is in the lead at Access Now. And then we have other colleagues here in the room, and of course I’ll give you the opportunity to introduce yourselves. Michele, do you have anything else to add?
Yes, so first of all, we have a few more online participants, but I’m not sure they can activate the camera because of the structure of the online session; they didn’t have the permissions. But they’re listening in, and if they want to interact, of course there’s a chat function, and I guess they can also intervene with audio and talk. As you anticipated, the idea is to really have a moment for us. We have been running this networking session for the last three years at IGF, and it’s always a space to share some ideas and to learn about each other’s projects and work around technology and peace. Maybe we cannot see the online audience from, sorry, the in-presence audience from online, so it would be nice maybe to start with a round of introductions, or I can briefly introduce what the Global Peace Tech Hub is, so that we can have a quick reaction after that. So, up to you if you want to do a first round of introductions, and then maybe we can go more into the topic. Go ahead. First of all, for the audience: anyway, we are two more people in addition to what you see on the main stage, so it’s going to be a very short round of introductions. So go first with the Global Peace Tech Hub, and then we’re going to keep this as informal as possible. Great, okay. So, just a few words on what the Global Peace Tech Hub is, what global peace tech is. We also have Marielza Oliveira joining online, so if you can please give her co-host rights so that she can also activate video and participate; that’s for the organizers. So here, I just have three slides, nothing much, just to understand what global peace tech is. We define global peace tech as a field of analysis applied to all processes connecting local and global practices aimed at achieving social and political peace through the responsible use of frontier technologies. This sounds very official, but it’s something that we can just start with as a framework.
So basically, what’s the basic idea of the Global Peace Tech Hub? We know that emerging technologies bear great opportunities for positive change, but at the same time they carry threats and risks that we need to mitigate. So the idea is that in the 21st century we’re battling between these two opposite forces, and this is a common trend that we can see with different technologies and with different functions. We know how the hope of the internet to be this force for democracy turned into a force for online disinformation, polarization, and violence. We know about the opportunities of digital identities in terms of giving access to finance and services to people, including marginalized people, but also about all the privacy, data ownership, and cybersecurity issues that come with that. We know about the great potential of telepresence to build empathy and trust, but also about the risk of hate speech and deepfakes. We know about the use of data and the potential of early warning and response systems, but also about all the issues related to how these data are managed and secured. This can go on and on; the list is very long. So the question is this: how can we at the same time mitigate some of the risks related to these emerging technologies while also investing in those initiatives that are using technology for peace? There are many such projects; we mapped more than 170 projects that are using technology in different ways to enhance different functions of peace at the global level. So on the one side, it is about mitigating the risks with regulation, capacity building, education, and good governance. On the other side, it is about mapping these peace tech projects, understanding and assessing their impact on peace processes, and also trying to shift public and private investment towards these peace tech initiatives through partnerships that we can build between different stakeholders. Hence the topic of this networking session.
So, a multi-stakeholder approach to peace tech, to governing tech for peace. First, we would like to know who is on the call, what you’re doing, and what you think technology can contribute to peace. Second, after we get to know you, we would like to know what you think of the topic of the day: how can different actors, from academia, governments, the public sector, think tanks, and NGOs, come together in this puzzle and create synergies to achieve these peace goals through the responsible use of technology? So I will give the floor back to you, and maybe we start with the first round and then see where the conversation goes. I really invite online participants, via chat, audio, or video, to interact as well, give us their take on this, and introduce themselves. Super, okay. Thank you, Michele, for providing such a good overview of the perspective that you’re privileged in when looking at the relationship between tech and peace. Evelyne, maybe you can introduce yourself and also provide your insight on the topic, on what exactly is your perspective on

Evelyne Tauchnitz:
tech and peace. Thank you very much, Andrea. Yes, my name is Evelyne Tauchnitz. I’m a senior researcher at the Institute of Social Ethics at the University of Lucerne, Switzerland, and my research focuses on digital change, ethics, human rights, and of course peace, what we’re talking about today. So if we’re really talking about what peace tech is, and I think that’s a bit the guiding question here today, we first have to consider what we understand by peace, and that is a really challenging question by itself. It’s not the first time we’re discussing it; we had a conference last year where we dedicated some thoughts to it, and I don’t think we reached a conclusion. It’s a really difficult question, but I can tell you how I’m using this concept of peace in my own research. First of all, I think that peace really is a vision: it can show us in which direction we would like to develop together as humanity, and also what kind of societies we would want to be living in. And although there will never be absolute peace in all corners of the world, or even in any society, we still need to know a bit what it would look like. There I find it really helpful to look first at the positive definition of peace, meaning that it should not only mean the absence of direct violence but also of structural violence and cultural violence. So there should be no injustices; there should be equality, no poverty, access to health care, access to education, trying to eliminate the causes that then often give rise to direct violence.
So it means really that even if you have no direct violence, in any society you would usually still have some forms of injustice, some forms of discrimination, which leads me also to the third form, cultural violence, which legitimizes the other forms of violence. For example, if women are not allowed to go to school, this is a mix of structural violence, when they cannot access the school physically, but also a form of cultural violence: why is it not boys that cannot access the school? Why is it girls? So probably there’s never going to be any society free of this violence. When we talk about peace tech, I think it’s really important to keep in mind that it’s more than security only, because security is a very narrow definition of peace. For example, if we think about surveillance technologies, they might increase security, but then again there are different problems, an ambivalence: our personal freedom might be reduced because data has been collected about us and we know that. Or, in a conflict setting, we might not meet certain people anymore. You would not have big assemblies of people, because they would be afraid that one person being there, one contact, might be suspicious: "and then I’m meeting that person, so rather I should not meet too many people, and if it’s a big crowd, it’s more likely that somebody is there who doesn’t have a good record," so to say. So it really influences democracy, it influences freedom of expression, it influences how we move, even how we behave in public squares. And I think it’s really important to look at this from an ethical perspective: you might have good purposes, for instance, but it could still have negative consequences, as in the case of surveillance, where you want to increase security but at the same time might impact personal freedoms or civil and political rights.
So, what I’m doing in my own research: we need some kind of point of reference when we talk about peace, what kind of peace we want, what kind of peace technologies we want, and I’m using human rights really as a baseline. If you have technologies that violate human rights, I find it problematic to call them peace tech just because, for example, they’re increasing security, because peace in my understanding is really based on the values of human dignity and freedom as well. Human dignity and freedom are two basic values that connect both peace and human rights, and that’s not by coincidence, of course, but historically linked, because the Universal Declaration of Human Rights was adopted after the Second World War, after this devastating experience. Well, I don’t know really if I should go on, but that’s really a topic of discussion that I would like to raise rather than me speaking: what do we understand by peace or peace tech concretely? What kind of technologies should we gather under this term? Which technologies really merit being called peace tech? Because sometimes I see a lot of almost advertising or marketing efforts to use this brand of peace tech, but if you look at it more closely, it’s maybe not so much about peace as it’s supposed to be. So yes, that’s so far my input, or something I would like to discuss with you. Thank you very much.

Speaker:
Fantastic, thank you. And when it comes to human rights, of course, Access Now has been at the forefront of a variety of human rights battles, and you, Peter, have also been involved in a series of processes in the UN context, so you could respond with your perspective. Absolutely, thank you. Well, I feel like I want to network with both of you after this; I think this is getting at the point of the session. And I really appreciate your remarks for two reasons. There’s that Martin Luther King quote, I’ll paraphrase it badly: peace isn’t the absence of conflict or the absence of war, but the presence of justice, which I think leads to accountability. And thank you also for recognizing the need for this tech to be respectful of human rights. That’s certainly where we come from at Access Now. We are a human rights organization that got its start during the Green Movement in Iran and has since expanded globally, working at the intersection of human rights and new technologies. But around 2018 we recognized that increasingly we were working in contexts characterized by conflict, in fragile and conflict-affected states, and with the communities who were in the midst of those conflicts, fleeing them, or trying to work in the diaspora to end them. We recognized this and have built it into our work plans, most directly on our international organizations team, adding a senior humanitarian officer who brings experience from that sector. This is really just an acknowledgement that the digital rights we’ve worked to protect are most at risk for conflict-affected persons and communities, and that, from monitoring and prediction to mitigation to resolution to accountability, digital tools are infused into and becoming essential to each of those steps. So I can mention a few programs to make it more practical. We have tracked internet shutdowns, intentional disruptions
of access to the Internet, since the Egypt shutdown in 2011, and have brought this narrative, this terminology, and this charge to end shutdowns through our hashtag #KeepItOn campaign. We brought that to the UN and have seen a lot of positive resonance in a number of UN bodies and among states that recognize that internet shutdowns go beyond the pale, that increasingly these are linked to conflict events and times of unrest, and that they levy a number of harmful effects on human rights and, increasingly we would say, on the resolution of those conflicts themselves. So the KeepItOn campaign continues alongside our work on content governance, in some ways the flip side. I think this came out during a lot of the conflict in Ethiopia: how overrun a lot of social media platforms in particular were with incitement content, and the unresponsiveness, the perceived indifference, or even ignorance of those tech companies in dealing with what was happening on their platforms, of course in Myanmar as well. That led us to work with partners to create a declaration on content governance in times of crisis, and we’re following that up with a full report. So when it comes to shutting down the Internet, or allowing the proliferation of harmful content, there are a number of ways that freedom of expression is directly at the heart of a lot of these responses and interventions. And then more recently we’ve tracked, especially through our digital security helpline, which is recognized as a tool that DPPA and others refer people to, and which provides free-of-charge technical guidance and advice to civil society, which we define pretty broadly, 24 hours a day in 12 languages, and increasingly we’re getting reports to that helpline of really invasive targeted surveillance and spyware. I mention this particularly because a recent report we wrote described the use of spyware in the midst of the Armenia-Azerbaijan conflict,
which has flared up again recently. We were actually able to detect infections on the devices of the actual negotiators of the peace on the Armenian side; the ombudsperson who was directly involved in the talks had their phone completely infected throughout. So spyware is now a tool of war and of efforts to monitor peace negotiations. And finally, we are now mapping all of the tech that we can find being used in humanitarian situations, and we’ll definitely look at the mapping project on peace tech that was presented just now. With human rights, we’re taking a human rights lens: where is the human rights due diligence? What are the procurement processes? How do we determine whether this tech has been analyzed openly? Is it human-rights-respecting, as you said? So that’s the kind of work that we do more directly in pursuit of our mission to help communities and people at risk, and then we try to bring lessons from that frontline work to the United Nations through the OEWG cybersecurity process. I’ll stop talking soon, but I’m happy to talk more on that later, and I’m joined by my colleague, Carolyn Tuckett, who runs our campaigns and rapid response work. Thanks. Super, thank you. I do actually have a two-finger question, and sorry, I know that we should keep this roundtable going. But since you are following all the UN processes, and since the UN is supposed to be this organization ensuring peace, what do you think: is the UN moving in the right direction? Of course, the UN Open-Ended Working Group is a process that I’m following as well, and I’m not sure whether we can feel satisfied about that. Well, yeah. I mean, it’s undeniable that from the top down, the Secretary-General has put digital transformation, tech, and cyber at the top of his agenda.
You’ll see that in the creation of the Office of the Technology Envoy, and we, for our part, have been pushing the UN Security Council to integrate monitoring of the role of digital and cyber into its mandate and into the situations that come before it. We haven’t seen much evidence; well, we don’t know a whole lot about what happens at the Security Council, because it is so opaque by design, especially to civil society, but we’ve not seen really dedicated talks there in ways that could be helpful. Of course, they’re going to disagree over Article 51 and the right of self-defence, over offensive and defensive measures in cyberspace, but how about just understanding the ongoing situations that the Council looks at, on Sudan, on Yemen, on Colombia? How is tech being integrated into and influencing those conflicts, and what role could it play in bringing about some resolutions? So, we have a focused program on the Security Council now, and we invite others to monitor that body, as well as the First Committee’s work. And then there are the cybersecurity and cybercrime processes continuing right now. The Cybercrime Treaty looks like it’s going to come imminently, and it’s been really interesting to see how they scope it, how wide a lens they take on what constitutes a cybercrime. It’s certainly going to be relevant for people involved in peace tech. And finally, the Summit of the Future does include the New Agenda for Peace, which is where I think they’ll speak to cybersecurity, complementing the Global Digital Compact. Those are both meant to be signed and delivered in September 2024. So there are a couple of things to look at. Super.

Moderator:
So, going back online, Michele, you might have a better overview of who’s online than I do. Yeah, no, I just wanted to say that, since we have just one hour and we are partway through, maybe we should first check who’s in the meeting, both there and online. Maybe we do a really easy round of who we have in the room, and then we move on to answering questions and go deeper into the discussion, because I’m just afraid that if we go one by one, we will not get to the end, and I would also like to get a sense of who is in the online audience. So if you agree, maybe we do just a couple of minutes each, just a round to see who’s around, and then we go back to the more in-depth discussion. What do you think? Hello? Hello? Yeah. So, yes, no, no, please, Mark. So let’s also interact with the online audience. Mark Nelson, director of the Stanford Peace Innovation Lab, might have some insight on this. Michele, did you want me to speak while people are doing little intros in text, or should I wait? Let’s see if we can do just who you are and what you do, in one or two minutes, and do this round first, and when we’re done, we go back to you and Marielza, and we get some more in-depth perspectives. Yeah. Okay. Happy to.

Mark Nelson:
So very quickly: my name is Mark Nelson. With my colleague and partner, Margarita Quihuis, I co-direct the Peace Innovation Lab at Stanford and also our independent institute in The Hague, the Peace Innovation Institute, where we are working to commercialize our research in peace tech. I focus on the measurability that is possible now because of technological advances, so peace metrics that are possible in very high resolution in real time, and on how that connects the emerging space of peace technology to capital markets and the potential to create peace finance as an active investment sector, and we’re working very hard on that lab part. So that’s the quick overview. Thank you. So I see Marielza Oliveira online. Yes. Hello, everyone.

Marielza Oliveira:
Marielza Oliveira from UNESCO. I’m the Director for Digital Inclusion, Policies and Transformation, which is a division within the Communications and Information Sector of UNESCO. Our task in the CI Sector, that’s what we call the Communications and Information Sector for short, is to defend two basic human rights, freedom of expression and access to information, and to leverage those for the mission of UNESCO, which has a mandate literally of building peace in the minds of men and women, and a mandate to enable the free flow of ideas by word and image, through every type of media and so on. So for us, the digital ecosystem is incredibly important, and we’ve been doing different types of interventions in it for quite a long time. On one side, on freedom of expression, for example, we recently had the Internet for Trust conference, which looks at the regulation of internet platforms in ways that respect human rights. On the other side, on access to information, we are working on building the capacities of different types of stakeholders so that they can intervene and participate properly in this process, because there’s quite a lot of digital exclusion and inequality happening, giving voice to specific sides only, and we want to see a more inclusive internet in that sense. In short, I’ll stop here, thanks. Thank you so much. I see Moses Owainy online; do you want to briefly introduce yourself? Yeah, thank you very much.

Moses Owainy:
I am very far away, deep in an area where the internet connection is really poor, so I may not be able to turn my video on. But my name is Moses Owainy, and I am the Chief Executive Officer of the Centre for Multilateral Affairs in Uganda. We are a platform that seeks to advance the perspectives of African countries in multi-stakeholder conversations and discussions like this one, and that’s why I said in the text that I’m really glad to be in this discussion, because the overall concept of global peace tech, in my view, cannot be fully complete without the perspectives of the many stakeholders that some of the speakers have already alluded to. For example, if you look at the realities in parts of sub-Saharan Africa, where I come from, but also the Pacific and the Caribbean, the future of peace around technology and cyberspace is shaped also by their realities. Today, people are talking about cyber or technology for peaceful means, but what does that mean for a country like Uganda, Kenya, or Nigeria? That meaning might take a different form for countries that are more developed or more advanced economies. So I’m glad that this conversation is here, but I’m just imploring the speakers and the organizers that future conversations around this should also draw from all of these multi-stakeholder perspectives and actors that you have clearly talked about. That’s one point. My second contribution is really just a question. You are all university professors and you know this, and a lot of critiques come around this, especially when you use the term global peace, because we are in a world of international relations, where states relate to each other bilaterally and multilaterally. So, and I seek to be corrected, I may not know: is it more accurate to say we are in a global world or in an international world?
So for me, the term global peace is, with all due respect, a little bit ambiguous, but if you have something like international peace in tech, then it creates more meaning; it is more relatable, because it implies that many different states are collaborating and working together with all these multi-stakeholder groups to achieve the kind of peace, the kind of prosperity, in cyberspace and the tech industry that everyone desires. So that’s my contribution, and thank you so much for this platform and the opportunity. Thank you. Thank you so much, Moses, for these very insightful comments, also connected to what Marielza was mentioning. We will move forward with the round of introductions if there are other participants who want to contribute.

Moderator:
I see Youssef Benzekri; I don’t know if he wants to introduce himself. I see Wenzhou Li, I don’t know if… Okay, I see Yuxiao Li. All right, and Omar Farouk. Okay, you will have a chance to introduce yourselves in the chat as well. I also see Teo Nanezovic, who is also part of the Global Peace Tech Hub online. She is the rapporteur, so she will also put together some of the reflections we are sharing now after the session. So I guess let’s go back to the room and… Okay, there’s somebody who wants to introduce themselves. Yuxiao Li, if you want to introduce yourself, please feel free to do so. Yeah, no problem. Okay, I have no question. Thank you. Okay, thank you. Okay, let’s move forward. So I would like to go back to whoever is left in the room, or to Mark, if you want to expand a little bit on the topic of this session. And also, of course, Marielza. Do you see any opportunities in a multi-stakeholder approach in this space? There are many initiatives trying to leverage technology for peace in different aspects and dimensions. There is actually a non-profit sector growing in this space, with many different organizations and networks. What is missing, maybe, is to connect these dots: to connect this non-profit sector with the tech companies, which also have the capacity, data, and capability to develop the technology itself, and with governments. So do you see opportunities in developing these multi-stakeholder approaches and networks? Maybe we’ll start with Marielza and then go to Mark. It’s great that you asked this question, because this is one of the approaches that

Marielza Oliveira:
we’re taking at UNESCO: to bring together different groups, particularly the tech community, the tech companies, and the remote stakeholder group. And we have one good initiative on that, the AI for the Planet initiative, in which we are leveraging this technology to combat climate change. So this is one of the things that we see promise in, although quite a few of these, how to say, hopeful approaches end up in failure. This is one of the things that we have to be very, very frank about: there have been quite a lot of attempts to do exactly that. For the companies, the question is, what’s in it for me? And at the end of the day, they’ll be looking at what the gain is. So bringing them to the table and having them commit to specific initiatives, support them, and dedicate time, money, and resources is quite difficult. But you have to be selective, I think, in the types of initiatives that you pick, when the priorities are high enough, and, this is very interesting, when the companies can make a difference without necessarily a long-term, large commitment. One example that we have that is very successful is the United Nations Technology Innovation Lab. I don’t know whether you’ve heard about it, but it is using technology in innovative ways to enhance peace-building processes, such as, for example, 3D augmented reality to show the delegates in New York the conditions that displaced people were facing in a particular war zone. So they could immerse themselves in that and then have an informed discussion about it. It’s a completely different thing to actually see the environment rather than just hear the statistics about it. The increase in empathy, in terms of understanding what needs to be done, is tremendous. So there are quite a few good potential things.
Another way that UNESCO is working on this: we have a partnership with the European Commission on a project called Social Media for Peace, in which we track and map the different types of conflicts that are happening online in particular pilot countries, and bring together a multi-stakeholder group that then devises and deploys countermeasure tactics, essentially to defuse the polarization and to bring back a civil dialogue online. It’s been quite interesting; we’ve had a lot of interesting results, and hopefully we can draw some mechanisms for scaling it up out of the pilots. But of course, the big thing that we’re looking at is the Internet for Trust conference. We’re about to release the guidelines that we have been developing for an entire year, together with quite a large group of stakeholders that provided inputs from all sectors of society, on how we actually regulate internet platforms for civil discourse without affecting freedom of expression and access to information, because quite a lot of the measures that are out there right now, in enhancing one right, sacrifice another. That’s one of the things that we are not willing to do: to sacrifice freedom of expression and access to information. For example, there are quite a few countries deploying measures of blocking certain apps or features and so on, in a not necessarily well-thought-out process. There are some examples of those, and of course the consequences: limiting the voices of, for example, the journalists who are reporting and keeping us informed about these issues is tremendous. The activists and so on lose their voice too. So we can’t be indiscriminate about that.
So what the Internet for Trust guidelines propose is that instead of regulating content, for example, we regulate process: that we enable and build together a real governance mechanism that fosters transparency and accountability, because those are the things that are missing on the internet. Accountability is something that is not there, so this is one of the key things that we need to target. And accountability is what truly makes people behave in a responsible way. If you don’t have it, there is no responsibility, and people act in ways that are counterproductive to societal development. So thanks. Thank you very much.

Moderator:
Andrea, if you agree, I will also move to Mark online, and then, if there is other feedback from the room, just feel free to interrupt us. But Mark, if you want to leave us your comment on the topic of this session, also connected to what Marielza was saying about building trust: this is also related to building some measures of impact, some data. And I know your effort is also in this direction, on the measurement side. So how do we measure these impacts? How do we find parameters to describe this space as well?

Mark Nelson:
Thank you, Marielza. And thank you also to Moses for the excellent question. I think these things all tie together. There’s a lot of general interest in peace tech without perhaps a detailed understanding of what it is, what the components are that make it possible, and why we are standing at this incredible opportunity as a human species right now. What changed? What’s different? Because it’s not the wheel; it’s not the lever. I mean, you could make a general case that our ability as a species to construct tools of any kind is interesting and useful and is in a way peace tech if we use it right, and so on. And that’s kind of true. But the thing that is really unique in the last 20 years is the vast proliferation of sensors, sensors that can detect all sorts of changes in the environment in real time. And the thing that really makes this powerful is that when you start looking at the subset of sensors that can detect human behavior, and then again at the subset that can detect human social behavior, how we actually interact with each other, at that moment you start realizing that for the first time ever, we can really measure not only what kind of interactions we have between individual people in real time; we can also measure, for the first time ever, over time, what the impacts of those interactions are. And so we can very quickly test theories: well, I thought if I did this, it would be good for that person; but was it really good for that person or not? Suddenly we can answer those questions, and we can answer them in real time and with huge sample sizes. What this means is that we can structure the data of human interactions to see where we are doing really well, what the best-practice kinds of behaviors are that humans can do for each other, how we might design even better behavior sequences for each other, and how we could customize and tailor those for each of our individual situations.
That’s where the huge opportunity of peace tech really lies. And it allows us to go back to one of the original dreams of Norbert Wiener back in the 1950s, when he was setting up the foundations of information technology and of cybernetics: a closed loop of technology between a sensor that could detect something happening in the environment that we cared about, communications technology that could move that data to processors that could do the best sensemaking of that data, and then the processor connecting to an actuator that could respond to the environment, so that the sensor can detect: did the response cause the change I wanted? Did it move the needle at all? And if it did move the needle, did it move it in the right direction? That’s where the opportunity of peace tech is now incredibly powerful. Because when you close that loop, you go from a linear technology to a closed-loop technology. And that closed loop changes things to be fundamentally persuasive technologies. And this allows us, I want to just underscore why this is so historic, this allows us to move from, quote, peace technology like the Colt Peacemaker from the 1800s, if you all know your Western movies really well. It’s a gun, and they called it a peacemaker. And it was built on a Hobbesian theory of the Leviathan: that whoever has the power can enforce peace. Peace, up until this change in technology, has been coercive peace, based on a Hobbesian Leviathan theory. And what has changed now is that for the first time ever, we can build technology that can create persuasive peace instead of coercive peace. And that’s a huge shift for our species. I’ll just pause there and let everybody respond to that.

Moderator:
Thank you, Mark. Over to you, Andrea. We have eight minutes left, and so I just want to remind everybody that maybe we should collect some takeaways, key points, before the end of the session. Yeah, there are people queuing, so I’ll let them. Okay. Please introduce yourself.

Audience:
Thank you so much. Hello, everyone. My name is Manjia. I’m from China. So I am a youth representative of a youth organization called Peace Building Project held by UNDPPA. So my question to all panelists is that how do you see the young people’s role in Tech for Peace and how to engage young people into this initiative or this wider agenda? And do you have any best practice to share or current or future plan for this? Thank you so much.

Moderator:
Thank you. Would anybody else like to add anything to the discussion from the floor? No? No. Sure. We’re networking. We can chat.

Audience:
Hi, everyone. My name’s Carolyn. I’m the director of campaigns and rapid response with Access Now. I think your question about the definition of peace in the first place is a really important one. From our frame, we really think about digital rights rooted in human rights, and that applies in all contexts. And that has a very clear legal structure for understanding what the rules of the game are. So I’d be curious to understand from the peace tech folks what the added value of this specific lens of peace tech is. It seems to cover a lot of ground, from human rights to social justice to environmental justice, and is perhaps a bit of an umbrella term, but I think understanding what we’re trying to get to by using that lens would be helpful. And I guess the other thing on my mind from this conversation is how this peace tech movement is thinking about the compulsion towards techno-solutionism. I’m not saying that’s what’s being presented here, but there are many conversations in this space, here at IGF, across the UN system, and elsewhere, where the instinct is very strong to reach for a particular digital tool to solve a very complex human problem. So if you could all share a little more about how you’re thinking about integrating these technical conversations back into the core work of peacebuilding, on all of these many different fronts, and how those things really feed back into each other.

Moderator:
Wonderful. Yes. Maybe you can reply for the global, and someone can reply from the youth perspective. Yeah, very briefly, just 30 seconds each, please, so we can wrap up the session before the sushi reception we are having here. So maybe I start with the last question, and then anybody who wants can jump in on the youth perspective. For the Global Peace Tech Hub: first of all, the peace tech community is a growing space made up of different actors, and we’re not representing them, of course. The Global Peace Tech Hub is an initiative based at the European University Institute in Florence, specifically at the School of Transnational Governance, which is a school of governance for current and future leaders on issues that go beyond the state. That’s the context. Our role in this is as a facilitator of different actors, along with some independent analysis of what is going on in this space. So we cannot speak for the others, but where we position ourselves, and where we see the added value, is in fact in connecting these different spaces that don’t necessarily talk to each other. The peace tech idea was born more in the peacebuilding sector, where NGOs would apply digital tools to peacebuilding initiatives they were doing already. The way we perceive peace tech is a bit different: it starts more from a tech policy perspective. As I was explaining at the beginning, it’s equally about how you assess impact and invest in impactful peace tech projects, and at the same time how you govern and mitigate the risks through good governance, regulation, and capacity building. This challenge is very broad, as you say, and underdefined, but at the same time we see the value in the interconnectedness of the different challenges. We were talking about trust before, for instance: how is that divorced from any discussion of digital identities? How is it not linked to some applications of blockchain?
How is it not linked with the discussion about data and who owns the data? How is it not connected with internet infrastructure, and how we provide secure access to the internet? How is it not connected to cybersecurity? So you see all this interconnectedness of different challenges, and I think that’s the kind of added value we are trying to bring. On the concrete side, we also want to foster dialogue between these different sectors, which discuss things in bubbles and don’t necessarily talk to each other. I’m talking about the tech sector, which hasn’t had discussions about pro-social tech or peace tech at the top of its agenda. At the same time, NGOs are doing a lot on the ground, so they know what is going on there, but this doesn’t always elevate to the governance level. So governments should also be involved. We’re trying to create these connections and this triangulation. On the peace definition, we adopt the positive peace paradigm, a little similar to what underpins the Global Peace Index, with its positive peace pillars, which is not just the absence of violence. They’re about to kick us out of the room, really, so we need to move on. I think Evelyne had a brief reaction to the point made.

Evelyne Tauchnitz:
Yeah, thank you very much for this question. I just want to take that up: what is the added value of peace as compared to human rights? I think it’s a really important question, and I’ve been asking myself that too. As I see it, human rights are a necessary condition for peace, but they’re not a sufficient condition, meaning peace requires respect for human rights, but it is also something more. As I said, I see it more as a vision. And in a sense, with human rights you also have a bit of a problem with the process, in the sense that you have this human right, you have that one, and that one, but a government can interpret human rights as it wishes, in a way, and say, okay, you get that right, but we don’t have the resources to fulfill this right, and so on. Whereas peace is a broader concept. I also see that, in a way, it’s almost easier to start a public debate on it; everybody has a perception of peace, whereas human rights are really focused on rights, and they’re legally enforceable, which is an advantage, but much narrower. And the process, for example how you reach certain political decisions, is not really incorporated in the human rights perspective. So this, I think, is an important point: human rights are necessary, but not sufficient. If I can just make a brief comment on the youth question, I think it’s a really important one as well. I think it’s also important to ask: how is digital change changing our perception of peace? What we have been thinking of as peace might be changing in the future, precisely due to the digital transformation. So that conception of peace changes.
It’s not only that we can use digital technologies for constructing peace; it’s also that through that process we might come to a different understanding of peace. And their use is really, really important, I think. Also connected to that, and it’s funny because you said you’re from China: I just had a conversation with a colleague of mine recently who did research in China. He told me, for example, that a lot of the social scoring system has to do with a deficit of trust in today’s China as compared to the China of earlier generations, and in order to reconstruct trust, social scoring systems are considered useful. But of course, if we talk about peace, it’s this broad condition. Do we want to build trust through digital tools, for example, or how do we want to build this trust in society? That would be a really vital question, I think. Yeah, really quickly, so thanks.

Speaker:
I think, on a few of the questions: for youth, first of all, youth are leading movements for human rights and for peace around the world. And they are putting themselves on the line, more and more so in a digitized era where everything they do, their pictures holding protest signs out in marches, is going to be indelible. They’ve probably already been scraped for facial recognition, and they’re getting attacked by spyware; we see that on the phones of youth. So from Thailand to Sudan to the USA, yeah, youth are doing the work, and it’s on us to help ensure this doesn’t lead to reprisal or permanent problems for them. And just to respond to Mark: I think we were talking about euphemisms here, peace tech, tech for good, digital public goods. As advocates, we do chafe at this, because I think there is a bit of a trust deficit. We don’t see either the tech sector or, frankly, large humanitarian agencies as deserving of the trust of our personal data, of personal information about us, especially those most at risk. So when I hear talk of the ubiquity of sensors: sensors implies some kind of passive monitoring, which is so contrary to the vortexes creating these traps for us to deliver our sensitive personal information into black boxes we have no control over, which characterizes most modern tech, that I think sensors is a dangerous word to use in that context, unless you’re talking about tools that are completely divorced from the modern digital economy. Thanks.

Moderator:
Fantastic. So this was, of course, a networking session, which means that, I mean, we are pretty well networked, at least the people on this stage. But for those of you who would like to connect to these conversations, we will of course have a follow-up, so please pass your contacts on to me and we’re going to keep you in the loop. Again, this was a networking session. The impression is that, even within the tech and peace community, there is still no common ground, no agreement on what exactly tech and peace actually means. For some, tech is approached as a tool to achieve peace. For others, it is tech as a new battleground as such, and then all the discussion about cybersecurity and so on. And for others, it could be tech as a threat to peace, for example the discussion on LAWS, lethal autonomous weapons systems, and how AI is increasingly embedded in various arms. So yes, I think a lot of conversation still needs to happen within the tech community in order to bridge these communities that work in silos, and hopefully we’re going to have additional sessions trying to break those silos. That’s all from here. I don’t know, Michele, if you want a last 10-second word, not even a sentence, just a word. Yeah, perfect. Wrap up, enjoy the sushi dinner. It was a great networking session. Let’s keep the contacts and the conversation going. Thank you. Fantastic. Thank you all, also the online contributors to the discussion. Bye all. Bye.

Speaker              Speech speed           Speech length   Speech time
Audience             188 words per minute   436 words       139 secs
Evelyne Tauchnitz    179 words per minute   1579 words      529 secs
Marielza Oliveira    131 words per minute   1068 words      487 secs
Mark Nelson          166 words per minute   902 words       325 secs
Moderator            158 words per minute   3134 words      1187 secs
Moses Owainy         154 words per minute   519 words       202 secs
Speaker              149 words per minute   1782 words      717 secs

Global South Solidarities for Global Digital Governance | IGF 2023 Networking Session #110


Full session report

Osama

The securitisation of the internet by states poses a threat to its open nature. This is evident through the investment in surveillance technologies such as internet throttling and shutdowns. Additionally, the introduction of regulations on social media and internet companies undermines user rights. An instance of this is seen in the Chinese firewall and internet censorship during the Russia-Ukraine conflict.

However, it is argued that the internet should remain open and free from borders as its original purpose was to connect the world without limitations. Unfortunately, the experience of using the internet is becoming increasingly dissimilar from one country to another due to the implementation of digital borders. Furthermore, there is an alarming trend of states moving towards digital authoritarianism, further challenging the open nature of the internet.

In the Global South, opportunities for collaboration among civil society groups are limited due to current policies. Political issues between governments in South Asia prevent civil society groups from convening, hindering their ability to work together effectively. Despite this, there is potential for collaboration as legislation in countries like Bangladesh, India, and Pakistan mirrors each other, indicating scope for collaboration in the future.

To facilitate effective global collaboration, it is argued that a multi-stakeholder and representative approach to global digital discussions is essential. It is emphasised that digital discussions should involve a variety of stakeholders, including civil society. This inclusive approach ensures that diverse voices and perspectives are considered, ultimately leading to more comprehensive and effective outcomes.

Engagement with the private sector is seen as valuable in driving progress. In cases where there is shared interest between civil society and companies on legislation, their cooperation can lead to state backtracking. This highlights the importance of collaboration between different sectors in influencing policy-making and shaping the future of the internet.

Building regional solidarities and conversations is deemed necessary for promoting collaboration. Organisations from various Global South countries are coming together, indicating progress in this direction. This regional collaboration has the potential to strengthen advocacy and drive positive change on a larger scale.

Effective advocacy is seen as crucial, and it should be based on quality research. Existing research can serve as a solid foundation for advocacy efforts. By utilising past studies and knowledge, advocacy campaigns can be more informed and persuasive, leading to a higher chance of success.

Translating knowledge into concrete actions and strategies is seen as the next step in achieving desired outcomes. A joint advocacy strategy based on research has been suggested, emphasising the need for practical implementation and the transformation of knowledge into tangible results.

In conclusion, the securitisation of the internet by states poses a threat to its open nature. The internet should remain borderless, and efforts should be made to counter the trend towards digital authoritarianism. Collaboration among civil society groups in the Global South faces challenges, including immigration and accessibility issues. A multi-stakeholder and representative approach to global digital discussions is advocated for effective collaboration. Engagement with the private sector can be valuable in influencing policymaking. Building regional solidarities and effective advocacy based on quality research are crucial in driving positive change. Translating knowledge into concrete actions and strategies is the necessary next step for achieving desired outcomes.

Audience

The Global South Solidarities project is currently accepting applications for the position of Digital Librarian. This role involves curating and organizing the project’s work, as well as addressing any queries. The project places great importance on effective management and accessibility of its resources.

There is a growing global effort to bring organizations in the Global South together on various issues. This collaboration and partnership demonstrate a collective approach to promoting solidarity and tackling common challenges faced by countries in the Global South.

In November, a fund will be launched in Brazil to support the Global South Solidarities project. This initiative aims to reduce inequalities and promote sustainable development in line with SDG 10: Reduced Inequalities. Fundraising efforts like this highlight the project’s commitment to not only raise awareness but also mobilize financial resources for its cause.

During a recent talk, the speaker received overwhelming gratitude and appreciation from the audience. This positive reception indicates the effectiveness of the project in conveying its message and inspiring action. It reflects the project’s ability to engage individuals and foster a sense of solidarity.

Overall, the Global South Solidarities project actively promotes collaboration, addresses inequalities, and provides a platform for shared dialogue among organizations in the Global South. By seeking a Digital Librarian, launching a fund in Brazil, and receiving positive feedback, the project is making significant progress towards its objectives.

Ifran

In Bangladesh, students are facing a severe shortage of original textbooks, which is negatively impacting their access to a quality education. Ifran, a senior lecturer in law and human rights in Bangladesh, has highlighted the difficulties his students have in accessing original textbooks. This lack of access to textbooks hampers their learning experience and limits their acquisition of knowledge.

The problem of limited access to knowledge extends beyond textbooks. Journal articles and research papers are predominantly behind paywalls, making them inaccessible to individuals in countries like Bangladesh. This limits researchers, academics, and students who rely on these resources for their studies and critical analysis. To overcome this problem, pirated sites like Sci-Hub have become alternative sources for accessing research materials.

The implications of these challenges are significant. The lack of original textbooks restricts students’ ability to learn and engage with the subjects effectively, hindering academic progress and limiting potential for success. Furthermore, the limited access to research materials and scholarly articles stifles intellectual growth and inhibits the development of innovative solutions to societal problems.

It is evident that solutions are urgently needed to bridge the knowledge divide, particularly between the North and South, where disparities in access to information and education are more prevalent. Addressing this issue would enhance educational opportunities in countries like Bangladesh and promote global collaboration and the exchange of ideas, leading to greater innovation and progress.

In conclusion, the shortage of original textbooks and limited access to knowledge in Bangladesh have profound implications for students’ educational experiences and the overall intellectual growth of the country. The emergence of pirated sites as alternatives highlights the urgent need for solutions to alleviate this knowledge divide. By reducing barriers to accessing information and fostering global collaboration, we can work towards a more equitable and prosperous future.

Tove Matimbe

The analysis reveals that there are various international platforms available for collaboration and discussion in the field of internet governance. Notable examples include the African Internet Governance Forum and the Digital Rights and Inclusion Forum. These platforms offer opportunities for experts and stakeholders to come together, share knowledge, and address pertinent issues related to internet governance.

Collaboration and joint efforts are identified as crucial elements in impacting policy change and promoting effective data governance. The report highlights the significance of collaborative initiatives in driving positive outcomes. For instance, Paradigm Initiative collaborated with Data Privacy Brazil at their forum, indicating a commitment to sustaining such partnerships. Joint statements and submissions are also recognised as effective measures, with previous collaboration on the Global Digital Compact (GDC) process involving KICTANet, Data Privacy Brazil, and Apti.

However, engaging the private sector in internet governance discussions can be challenging. The analysis notes that the private sector often exhibits reluctance to participate in these conversations. To address this issue, closed-door meetings and invitations to regional convenings have been proposed as effective strategies to encourage private sector engagement.

Moreover, discriminatory practices have been identified as barriers to the participation of individuals from the Global South in internet governance conversations. The visa process and financial costs associated with attending international forums and events disproportionately impact people from less affluent regions. An example is provided where a colleague from Data Privacy Brazil was unable to attend the Global Internet Governance Forum due to such issues. This highlights the need to address discriminatory practices and ensure equal participation and representation from individuals across the globe.

To foster better internet governance and counter rights violations, it is recommended to increase support and funding for digital rights organizations in the Global South. Establishing a strong, unified movement and fund can empower these organizations to confront violations occurring within their countries effectively. Investing in these organizations can also lead to advancements in digital rights and strengthen the push for improved internet governance.

In conclusion, the analysis sheds light on several key aspects of internet governance. It emphasizes the importance of collaboration, especially with regards to policy change and data governance. Challenges in engaging the private sector and discriminatory practices impacting the participation of individuals from the Global South are identified. Finally, supporting and funding digital rights organizations in the Global South is seen as a vital step towards countering rights violations and promoting better internet governance.

Aastha Kapoor

The discussion explores the concept of Global South Solidarities in relation to digital governance. It highlights the significance of addressing common issues across different regions of the Global South, serving as a foundation for collaboration and solidarity.

Data Privacy Brazil has recently advertised a role for a digital librarian, emphasizing the need for knowledge curation and handling inquiries related to policy issues globally. This initiative underscores the importance of information accessibility and management in the digital realm.

In addition, Data Privacy Brazil is actively working towards establishing a research fund specifically for global south initiatives. With a budget of $8,000, the organization aims to identify the most effective allocation of these funds.

Aastha Kapoor, a supporter of Data Privacy Brazil, endorses the digital librarian role and research fund initiative, highlighting their importance in promoting effective digital governance.

However, the discussion also raises concerns about the misuse of the internet as a tool for exclusion and surveillance. Instances such as the internet shutdown in eastern India due to ongoing unrest and the confiscation of phones from Syrian refugees at European borders demonstrate the negative impact of these practices.

Digital inequality is another issue highlighted, with the experiences of Uber drivers worldwide reflecting similar forms of harm. This exacerbates existing inequalities and underscores the need to address the negative consequences of internet use.

Furthermore, the discussion emphasizes the need for collaborative efforts among global movements to create institutional and financial architectures that support the objectives of the Global South. This highlights the importance of partnerships and working together to achieve sustainable development goals.

A holistic approach is also advocated, enabling wider participation in discussions and ensuring inclusivity. Challenges related to timing, access, and immigration restrictions need to be addressed to foster productive discussions and collaborations. Overcoming these challenges is crucial for realizing the full potential of global South solidarities.

Flexibility and commitment to civil society are considered critical in ensuring successful collaboration between civil society and the private sector. Both entities have shown solidarity in opposing unsuitable legislation, further highlighting their significance in digital governance.

The role of the Digital Librarian in the Global South Solidarities initiative is of great prominence. The Digital Librarian is responsible for curating all work, ensuring organization, and addressing profound questions about the digital universe raised by members. This role serves as a crucial pillar in fostering collaborative efforts and effective knowledge dissemination.

Furthermore, plans for a mutual fund and network mapping for global solidarity are currently underway. The mutual fund will be launched and announced in Brazil in November, with discussions on capacity building and funding opportunities as next steps. Network mapping aims to identify similar initiatives and learn from efforts that bring various organizations in the Global South together on different issues.

Overall, the discussion highlights the vital role of Global South Solidarities in digital governance. Collaborative efforts, such as those pursued by Data Privacy Brazil, and the support of individuals like Aastha Kapoor, connect various initiatives and individuals in pursuit of common goals. However, challenges related to exclusion, surveillance, and digital inequality must be addressed to ensure the effectiveness and inclusivity of global South solidarities.

Fernanda

Fernanda, a director at Internet Lab, a think tank in São Paulo, Brazil, is excited about attending the upcoming Data Privacy Brazil conference. As a representative of the Global South, she aims to engage in meaningful exchanges and discussions with other participants to address the challenges they currently face.

Internet Lab, known for focusing on digital rights and policies, considers Fernanda’s presence at the conference significant. As a director, she possesses extensive knowledge and insights into the complexities of data privacy. Her participation will enable her to share these perspectives and exchange ideas with other experts in the field.

Being from the Global South, Fernanda brings a unique perspective to the table. The challenges faced by countries in this region regarding data privacy may differ from those in developed countries. Participation in the conference allows Fernanda to shed light on these specific challenges and contribute to developing solutions that are inclusive and reflective of the needs of the Global South.

The Data Privacy Brazil conference serves as a platform for professionals and researchers to come together and discuss the latest developments, concerns, and potential solutions in the field of data privacy. Fernanda’s attendance offers an opportunity for collaboration and learning from different experiences and approaches. By fostering connections and conversations, attendees, including Fernanda, strive to create a more comprehensive and impactful framework for data privacy.

Overall, Fernanda’s presence at the Data Privacy Brazil conference symbolizes the increasing importance and attention given to data privacy in Brazil and the Global South. Her commitment to confronting challenges and fostering collaboration reflects the dedication of Internet Lab and other organizations in the region to advocate for robust data protection policies.

Nathan

In Brazil, a collaborative digital librarian initiative has been launched with the objective of curating, organizing, and disseminating knowledge related to digital rights. This initiative is a partnership between Data Privacy Brazil, Apti Institute, and Paradigm Initiatives. The digital librarian will play a crucial role in this endeavor by curating and sharing information on digital rights. The initiative has been positively received, with individuals expressing excitement and anticipation for its impact on strengthening engagement among Global South organizations.

However, despite the positive reception, there are challenges that hinder the achievement of consensus within the digital rights landscape. One of the key challenges is the presence of conflicts of interest between civil society and the private sector. This conflict becomes particularly evident in the context of platform regulation, where big tech companies have been lobbying and advocating their interests to parliamentarians. Brazil has been seeking platform regulation since January, aiming to address conflicts of interest between civil society and the private sector. The chaotic scenario resulting from these conflicts poses a significant obstacle to effective regulation.

Moreover, there are barriers faced by Global South organizations in engaging with larger international conferences. Financial support and visa issues are cited as major hindrances, as most conferences tend to be held in the Global North. This lack of participation from Global South organizations limits their ability to contribute to global discussions on digital rights. To address this issue, a report titled “Voices from Global South” will be launched, shedding light on the obstacles faced by these organizations and sparking discussions around potential solutions.

In conclusion, the digital librarian initiative in Brazil holds promise for advancing knowledge and engagement in the field of digital rights within the Global South. However, challenges such as conflicts of interest and barriers to participation in international conferences need to be addressed. It is believed that comprehensive platform regulation is essential and must consider the interests of all societal sectors, including civil society and the private sector, to effectively tackle these challenges.

Miki

Miki is currently employed by a startup that has a commendable objective of collecting primary information about social issues from an extensive range of over 1000 sources. The startup’s primary aim is to ensure that individuals have access to information that enables them to think independently and make informed decisions about various matters affecting society. By providing comprehensive information, the startup hopes to empower people by equipping them with the knowledge necessary to participate actively in democratic processes.

Miki strongly believes in the fundamental significance of access to information in fostering real democracy. Recognizing that a well-informed citizenry is at the core of a functioning democracy, Miki underscores the importance of ensuring that people have access to important and comprehensive information. This access fosters an environment where the public can make knowledgeable decisions, engage in meaningful discussions, and hold those in power accountable for their actions. Miki’s support of access to information as a crucial element of democracy highlights the vital role the startup plays in promoting transparency and citizen participation.

In addition to Miki’s commitment to promoting access to information, they also exhibit a keen interest in understanding the challenges associated with creating a consensus for establishing a platform between civil society and the private sector. While specific details of these challenges are not provided, Miki’s inclination towards exploring this topic suggests an awareness of the complexities involved in achieving cooperation between these two sectors. Such cooperation is essential to effectively address societal challenges and achieve the United Nations’ Sustainable Development Goals (SDGs), particularly SDG 17: Partnerships for the goals.

In conclusion, Miki’s involvement with the startup reflects their dedication to making primary information about social issues accessible to individuals around the world. Their belief in the transformative power of access to information for democracy underscores the critical role played by the startup in fostering transparency and citizen engagement. Moreover, Miki’s eagerness to understand the difficulties involved in establishing a consensus between civil society and the private sector reflects their commitment to exploring innovative solutions and fostering partnerships for achieving sustainable development. Overall, their involvement in these domains demonstrates a forward-thinking mindset and a strong drive to contribute positively to society.

Jacquelene

The analysis focuses on several key topics related to global partnerships, civil society participation, and data privacy. One of the main points discussed is the importance of joint contributions in Global South Solidarities. The analysis highlights that Data Privacy Brazil made a joint contribution with civil society entities from across the Global South, including Aapti Institute from India, Paradigm Initiative, and Internet Bolivia. This demonstrates the significance of collaboration and collective efforts in addressing global challenges.

Another prominent topic discussed is the need for greater participation and advocacy with government representatives. It is noted that issues such as surveillance, internet shutdowns, and censorship are primarily discussed in multilateral spaces like the UN, with limited opportunities for civil society participation. This argument underscores the importance of amplifying the voices of civil society actors in decision-making processes and advocating for their inclusion in relevant discussions.

The analysis also supports the idea of collective contribution to the Global Digital Compact of the UN. Data Privacy Brazil made a joint contribution with civil society entities from different countries to the Global Digital Compact. This showcases the support for collaborative efforts in achieving the goals outlined in the compact, emphasizing the importance of partnerships and shared responsibility.

Furthermore, the analysis advocates for increased capacity for civil society to participate in discussions at multilateral spaces like the UN. Because issues such as surveillance, internet shutdowns, and censorship are predominantly debated in these spaces, which offer limited opportunities for civil society engagement, the analysis underscores the need for greater civil society participation and the importance of their perspectives in shaping policies and decisions related to peace, justice, and strong institutions.

The overall sentiment of the analysis is positive towards joint contributions and increasing civil society participation. It underscores the importance of collaboration, shared responsibility, and amplifying civil society voices to effectively address global challenges. Notably, the analysis sheds light on the critical issue of data privacy and the limited spaces available for civil society participation in discussions regarding surveillance, internet shutdowns, and censorship.

In conclusion, the analysis presents a comprehensive overview of the importance of joint contributions, increased civil society participation, and collective efforts in addressing global challenges. It highlights the significance of partnerships and shared responsibility in the Global South Solidarities and the Global Digital Compact of the UN. The analysis also emphasizes the need to expand the space for civil society participation and the importance of their perspectives in relevant multilateral discussions.

Session transcript

Aastha Kapoor:
Great, can we get the screen on? Hi, welcome to the earliest possible session of this conference, day one, session one. We’re looking to talk about Global South Solidarities for Global Digital Governance, but because the Global South has not woken up so far, we are, hi. So, there are a couple of things to just say out loud. Our co-organizers, Rafa from Data Privacy Brazil couldn’t make it, but we have Nathan, which is wonderful, and Jacqueline online. We also have Benga, who we are waiting for. We know he’s in Kyoto, and he’s from the Paradigm Initiative. And I am Aastha Kapoor. I’m the co-founder of Aapti Institute. We’re a tech policy research firm based in Bangalore. What we are looking to discuss today is this idea of Global South Solidarities. I know that some of this was discussed earlier at RightsCon as well, and we’re looking to build on some of that work and also discuss issues that are common between different parts of the Global South, and then to ask the next set of questions of what needs to be done. Again, I know that this has happened once before, so I have updates on that discussion. Two main ones. The first is that Data Privacy Brazil has advertised for the role of a digital librarian. This digital librarian is gonna work with organizations like all of ours to help curate knowledge online, and they will be the single point of contact to help understand what are some of the issues. So if you, for instance, have a question around, hey, what is happening with data governance in India, then you could potentially pose this question to the digital librarian, who will send you then a set of readings. And so the idea is to make sure that we’re not all recreating the wheel, we’re not all asking the same questions in different kinds of voids, and make sure that we can all have a space where all the different knowledge that we produce can be curated, organized, and also disseminated. 
So there’s been a call for applications, and Data Privacy Brazil, Aapti Institute and Paradigm Initiative are trying to bring that together. The second announcement is that since the Costa Rica event, Data Privacy Brazil has been in the process of organizing a fund for research initiatives in the global south. And that’s what I wanted to start this discussion with, because we’ve been going back and forth with Rafa and Bruno to understand where the value of this fund is, what it can do and what it can’t do. And I would love to get some thoughts from this group: when we think of research and the global south, if you had a small amount of money, I think it’s $8,000, to do a piece of research, what do you think would be a valuable way to apportion that money? And we’ll use this question as a way to also uncover what the major issues are that we want to focus on and where we see the conversation moving. Nathan, did I miss anything?

Nathan:
Hi, everyone. I think you can hear me. So this is a great opportunity for us to discuss how to engage from a Global South perspective. As Aastha just said, we have this initiative on the digital librarian, who will play a key role in organizing digital information regarding digital rights. And soon we’ll have the chosen person who will be this digital librarian. And I think it would be interesting to hear from other people who are here, so we can think about how to start, or how to strengthen, our engagement between Global South countries and Global South organizations. Maybe I’ll pass the mic to you. I don’t know your name. What’s your name? Osama, okay.

Osama:
Thank you very much. I’m Osama Khilji. I’m director of Bolo Bhi in Pakistan. So I wanted to start off with a bit of what the major issues with internet governance in the global south are. And I think it really stems from securitization of the internet itself. And I think we really need to agree on a set of principles that can lay the groundwork: okay, what is the purpose of the internet, and what do we envision? So what are the threats to an open internet that we see in the global south right now? There’s a lot of investment in surveillance technologies that really undermine the basics of the internet. So for example, you have internet throttling, you have internet shutdowns. And then there’s also the issue of trying to bring in regulations that take over what the companies are doing, and imposing those sets of regulations on social media and internet companies then overall undermines your freedom of expression and privacy rights on the internet. So looking at the landscape that we have today, as the global south, how do we envision the internet? Are we going to allow big players that have their political issues to overshadow how we look at the internet? For example, with the Chinese firewall or with the Russia-Ukraine invasion, we’ve seen how so much of the internet is now censored. You can’t get information from other countries just because of a political conflict that’s going on. And because of that, we’re seeing the borders of the world being reflected onto the internet. And the whole point of the internet was to connect the world in a way where there are no borders. So for me, it’s really the question of how we get rid of these borders that we are starting to build over the internet and that are really strengthening. Your experience of using the internet in one country is very dissimilar to the use of the internet in another.
And I think that’s fundamentally the problem with how a lot of states are moving towards digital authoritarianism and changing the experience that we have. So keeping these in mind, it’d be useful to go forward and think about how we strategize and how we build solidarities, where people like us who believe in an open internet can push for the internet to remain so.

Aastha Kapoor:
Do you wanna come up front? Would you like to say something? Are you guys participating? Come. Yeah. Okay. So it’s super interesting because, and I don’t know why we’re doing this on a mic now, but what we’ve been seeing in India as well, and I would love to hear what’s going on in Brazil, is that the internet, as you rightly said, is both being used as a way to exclude people, so internet shutdowns, but also, because there are people who are not connected, et cetera, in all kinds of ways access is being limited. So we have a state of unrest in an eastern part of India. The internet has been shut down for many, many days. And that changes the way people access information, the way people understand the situation, the way the government or certain groups can control the narrative, because that organic, potentially dangerous nature of the internet is culled. What also happens, and again, you said this, is that the internet, or digital technologies, are also used to create mechanisms of surveillance. So it’s a double-edged sword, because in some instances, I was reading somewhere that Syrian refugees coming to the borders of Europe have their phones actively snatched and broken, because nobody wants to hear the stories that they’ve been chronicling. So how do we wrap our heads around these divergent ideas of the internet and what it can and cannot do? And then the second question, I guess, or a small disagreement with you, is that you said people’s experiences with the internet are dissimilar, which they are depending on where you are in your journey, in some ways, but they’re also similar in the way that harms are perpetuated. So Uber drivers across the world are suffering similar kinds of harms. It doesn’t matter whether you’re in Paris or in London or in India, right?
The protests that Uber drivers have been staging against platforms globally use similar language. Their ability to claim rights is very different in Europe because of GDPR, and very different in other parts of the world. So there is, I think, a lot of significant nuance required, because the idea of solidarity, solidarity with whom, based on what experience, is really important to think about. I don’t think that we could have solidarities around internet shutdowns with countries in the global south that don’t experience them. That is very peculiar to us. So I think that the global south experience is extremely valuable, but then to nuance that solidarity is also going to be quite interesting, and maybe the research work can help do that.

Nathan:
I think that, just to start with some issues that you brought up related to internet shutdowns, in Brazil, actually, we don’t have internet shutdowns, thankfully, but sometimes apps are blocked for criminal investigations. It already happened to Telegram and WhatsApp: judicial orders to block the app so that they would comply with investigations and break encryption to give access to messages. So we do have that. And we are in a context in Brazil where we have been seeking platform regulation since January, basically. It’s a comprehensive bill that will try to regulate platforms. And it’s been a very hard process in Brazil, because there are several conflicts of interest between civil society and the private sector, for example, who don’t want the same thing and have to arrive at a consensus, which is hard. And there was, I think, an episode in Brazil where big tech companies invested money, people and staff in lobbying and advocacy strategies within the legislature, with the parliamentarians, to try to push the platform regulation bill their way. And I think it’s worth mentioning that President Lula in Brazil is now looking towards the decent work theme, so it’s an agenda for Brazil right now, especially within the G20. And platform regulation has a close relationship with this agenda for workers. So we are looking at that also in Data Privacy Brazil, and trying to work with this agenda in the G20 focus groups that they have. And one last thing that we should maybe start to discuss is what the institutional spaces are where we can discuss and address these kinds of issues. Because we have different organizations, such as the ITU or even the IGF, or other multilateral organizations, that discuss such themes.
And I think it’s worth understanding how we can build collective knowledge to work or to act within these spaces, and how we can engage with them. I think it’s a very interesting question for us to restart our discussion here. Thank you.

Aastha Kapoor:
Thanks so much for that. Some new people have walked in, which is great. Would you like to say something? We’re talking about what Global South Solidarities are and what the institutional spaces are to enact some of these Global South Solidarities. There’s a mic over there, next to you. It would be great if you can introduce yourself.

Tove Matimbe:
Thank you so much. I think you can switch it on. All right, hi everyone. I’m Tove Gile Matimbe and I work for Paradigm Initiative. Some of the platforms where we see an opportunity for work and collaboration include, of course, the African Internet Governance Forum, but on a global scale, being here as well, at the IGF. There are also a number of convenings happening, at least within the African continent. One that we particularly convene is called the Digital Rights and Inclusion Forum. We’ve made use of that platform; in the past year, we did a good collaborative effort with Data Privacy Brazil, and we are hoping to continue to engage, to share experiences and draw from each other’s contexts, as I think there are a number of commonalities there for us. Two weeks ago, there was another forum, for internet freedom in Africa, which was also a good platform to engage in. And there’s always room, I think, for us to streamline which spaces we can really make the most impact in, in terms of changing policy and in terms of pushing for data governance. What we’ve also found useful is putting out joint statements or joint submissions. We’ve collaborated with organizations such as KICTANet, Data Privacy Brazil and Aapti to put forward our response with regards to the GDC process. So I think that’s something that I’d just highlight as an opening remark, yeah.

Aastha Kapoor:
No, that’s super interesting. And I think it would be important to identify certain opportunities for us. Like you brought up, the GDC is certainly a big one, I think, in which we all have something to say, and the process has been quite consultative. You also mentioned the G20, and we’re rolling off an Indian presidency into a Brazilian one, and then it goes to South Africa. So there’s a very strong troika for the G20, which can definitely be leveraged to embed some of these conversations. I know we’ve been talking about digital public infrastructure, and Pix and UPI, which is the Indian version of a payment protocol, come up all the time. I think it’s just the start of day one, but several day zero panels were also talking about technologies that are being developed in the global south and how they can be brought into the world and offer an alternative to the ideas of big tech that exist globally. So there’s tons to chew on. What I would really love to move towards is to understand whether there are best practices of harnessing solidarities that you’ve identified in your work, whether you think organizations and movements globally have come together to create institutional and financial architectures, I guess, and how we learn from those. Osama, I’m gonna go to you.

Osama:
Yeah, thank you. I think, as we know, there are some regions where there’s a lot more solidarity. So for example, when we look at the African region and Latin America, we see a lot of events and coordination happening. But unfortunately in Asia, especially South Asia, a lot of the political issues between the governments don’t allow civil society groups to convene locally, other than, say, in Nepal or Sri Lanka, where we’ve had a few convenings. But when you look at the issues, if you’re talking about India, I could just replace India with Pakistan and it would literally be the same issues, right? And even when you look at the legislation, in Bangladesh, in India and in Pakistan, they mirror each other. So there’s a lot of potential for collaboration there. With the Global Network Initiative, because the membership base has been growing, there’s been a lot of space and focus across the global south. So there is a loose Africa-focused group, then there’s Latin America and there’s Asia, and a lot of times we sit together to discuss these things. So that’s super key and important. And then of course, at RightsCon and at the IGF, there are often convenings such as the one we’re having, but we also need to look at the issues with them, right? Take today, for example: it’s the IGF, and one of our panelists was not allowed into the country because of immigration issues. Then look at the timing: it’s 8.30 a.m. here, which means in most of the countries that we come from, it’s around 4.00 a.m., the crack of dawn. So how do you organize a session for a region without taking into account the time zone that they’re in? I think barriers like these also need to be looked at and critiqued in order for us to really go forward in a way that’s productive. And then we also need to look at issues of access.
So how many people or voices from the global south are able to become part of conversations such as these? It takes a lot of privilege to be able to travel to conferences and make submissions, get the visas in time, get your bookings, et cetera. And there are restrictions on online participation as well. So I think the approach has to be holistic so that it’s representative. But at the same time, I think it’s important that we also stop letting states lead these discussions. It needs to be a multi-stakeholder approach. Looking at what states are doing, a lot of it is oppressive towards the majority of their populations. And I think civil society needs to adopt a similar language to be able to deal with states and to negotiate, so that power can come from solidarity. And I think that’s what we really need to crack.

Aastha Kapoor:
Perfect, thank you for saying that, both about the structural as well as some of the functional issues. I know some people have joined. This is a networking session, so you don’t really have to listen to us; you can have a voice and a seat at the table. So if you guys could just come up front, that’d be amazing. We’ll hand you a mic to introduce yourselves. Not to put everyone on the spot, but it would be great if you came up front. I know they slotted us early, so maybe they’ll give us a little bit of time to get warmed up and stay in the room. We’re just trying to understand both commonalities and differences, and how we can mobilize for action in the context of the Global South. We have with us Data Privacy Brazil. I’m from Aapti Institute. Bolo Bhi from Pakistan. Paradigm Initiative from? Nigeria. Zimbabwe. What Data Privacy Brazil has done to start us off on this journey is that they have a digital librarian who’s gonna hold all this knowledge from the Global South and make sure that we can hear about it, reach out to this person, and say, hey, what’s going on in this part of the world? And then they can send us readings, materials, thinking. They’re also launching a fund to do research on some of these questions. And that’s what we’re also trying to understand: what kinds of problems do we want to uncover, and how do we want to come together, given the harsh realities of the fact that we can’t travel very easily. So, yeah, just wanted to pass the mic around. Please introduce yourself. Vent your anxieties.

Ifran:
Good morning. I’m Ifran. I’m from Bangladesh. I’m a senior lecturer in law and a human rights researcher, you can say. I’ve just entered the room and am trying to understand what you’re actually trying to say, but access to knowledge is actually one of the problems that we have been facing in my country. As a law professor in Bangladesh, I can see my students have an acute shortage of original textbooks, and online resources, for example journal articles or research papers, are not accessible to them. They are mostly barred by paywalls; there’s no easy way to access them. You need to go to pirated sites, sites like Sci-Hub, which I frequently use, and I find it quite useful. And thanks to that lady who has actually brought it in for us. But what about sharing the knowledge? I mean, how can the knowledge divide between the North and South be alleviated? I don’t have any specific answer. I’m here to learn from everyone. Thank you very much.

Fernanda:
Hello, everyone. I am Fernanda from Brazil. I am one of the directors at Internet Lab, a think tank based in Sao Paulo, and we are close to Data Privacy Brazil, so I will be at the next conference in November. I am here to get to know you better and exchange more about the Global South and the challenges that we are facing at this moment. So nice to see you all. Thank you.

Miki:
My name is Miki. I’m from Japan. I sometimes work with NGOs, but most of the time I work for a startup that provides primary information from all over the world, so we have more than 1,000 sources. We are collecting primary information about lots of social issues, because we think that all people in the world must have the information to think for themselves, to have real democracy, and we are trying to create a platform for people. And, sorry, I would like to ask a question about Brazil. You talked about it, you, sorry, I forgot the name. Nathan. Nathan, sorry. You said that it was difficult to reach a consensus on the platform regulation between civil society and the private sector, and I think you needed to manage that, so I would like to know what really were the difficulties that you felt, and how you managed to build consensus.

Nathan:
And if you’d like to comment on the same question. Actually, I didn’t work specifically with the legislative process related to the platform regulation, but what we could see in that recent case was that there is a very high conflict of interest between big tech and other platforms from Brazil. For example, we have a very big delivery platform in Brazil called iFood, which has a close relationship with the workers’ movement and what they demand from the platforms. So we have this situation in which there are very distinct interests that have to be handled so we can build a comprehensive and good regulation for platforms. But this was the scenario in general: a very high conflict of interest between several societal sectors, for example civil society and the private sector, and there were interests from government agents too. So there is this chaotic scenario in trying to build the regulation. I would like to just add one thing to what you said related to barriers to engaging with organizations. We’ll be launching on Friday morning a report called Voices from Global South, where we interviewed activists from the Global South to understand which institutional spaces they are willing to engage with, or see as fruitful spaces for engagement for a Global South organization. The IGF was one of these spaces, but they also highlighted issues related to financial support to attend these conferences, because most of them are held in the Global North, so there are these travel issues or visa issues that might happen. In this report we were able to gather all this information through interviews, to understand which these spaces are, what the pros and cons of each one are, and how to try to build collective knowledge on how to engage in these spaces.
So on Friday morning, on the Human Rights and Standards panel, we’ll be launching the report, and it will be available in English, so I think it will be more accessible for most people. Thank you.

Tove Matimbe:
I’ll just highlight that from our own experience engaging with the private sector it’s something that we’ve been you know consciously and deliberately trying to achieve in terms of a good working sort of like model of engaging with the private sector, because what we’ve discovered is that when we’re trying to engage a lot with the private sector they are usually reluctant, maybe is a good word to use here, to be actually on the table, be in the room for us to have a you know a conversation around the issues that need to be addressed, especially when we’re looking at policy and how policy needs to be developed in a way that best suits fundamental rights and freedoms. And what we’ve found to be more useful is to you know sort of like have closed door meetings with a few actors, but also we’ve stepped up to invite the private sector to also be involved in some of the regional convenings that we host, and we’ve we can safely say we’ve got a few probably a couple of you know of big companies that you know supportive in terms of showing up and being part of these conversations, but I think definitely we we share I think the same sentiments that have been already raised here, that it’s you know it’s a bit tricky sometimes, maybe perhaps being an issue of business expediency and trying to operate for companies in a certain jurisdiction and trying to make sure that probably they’re aligned with with the government, but we continue to push back and call for even transparency with regards to how the private sector engages with the government and we found that being very useful. But also just you know touching on the the earlier conversation around you know the discrimination, what I call the discriminatory practices that you know push away certain groups of people particularly from the global south to participate in certain platforms that ideally would be useful and meaningful with regards to shaping the discourse back in the global south. 
Looking at the visa processes, I think this is something that we experienced at RightsCon and we are here at the global IGF and yeah I think it’s already been mentioned one of our colleagues from data privacy Brazil could not be here because of you know some of these things that that we’re discussing. So how do we go forward and ensure that whatever processes that we’re discussing as being useful and meaningful, I mean platforms that are actually accessible to everyone so that we’re able to bring our voices together. That being said, looking at the financial aspect of things that’s another form of exclusion because what we’re saying is we want to be able to form you know a critical mass or a critical movement that you know pushes back against the violations that we’re seeing in our different countries within the global south and how can we become more and more efficient together because I think definitely what we’ve been doing so far together I think it’s showing that I think there is definitely force and there’s definitely strength in you know in achieving what we want to achieve as a global south because we understand our experiences better. We’ve got lessons that we can share with each other and definitely we can leverage on our expertise so that we are able to do great things together. Why do I say that? 
There’s something you’ve already mentioned about the research element: we can pull together the great work we’ve done and, through that research, map whether there are any golden threads we can pick out, draw learnings, and make critical recommendations. We can learn from Latin America and what’s happening there, bring it into the context of certain African jurisdictions, apply it, and shape what we want to shape. Beyond research and the knowledge base we’re trying to put together, we can also help growing organisations, not just on the continent but across the Global South, to develop in the digital rights space, with a solid fund that lets us assist those who are coming up. I think all of this moves us towards the environment and the internet governance we want to see within the Global South, so that we can take steps towards overcoming some of the barriers and the things we’re facing every day.

Osama:
Sure, I just wanted to end on this note. First, you spoke about engagement with the private sector, and from our experience in Pakistan, whenever there’s a convergence between civil society and companies, we work together quite well. For example, when the state tries to bring in new legislative proposals that suit neither civil society nor companies, there’s a lot of solidarity with the companies, as strange as that sounds, and I think that’s a good model to look at, because when the private sector and civil society join forces, the state sometimes has to backtrack. Overall, what it sounds like is that we have good research and good ideas, and they’re all coming together, which is very encouraging. I’m very glad that Paradigm, the Brazilian organisations, and organisations from India are getting together; these are exactly the regional solidarities and conversations that need to be built on. What we need to focus on now is how we can use this research for advocacy and how that advocacy can be most effective: knowing what we have and using what we know, how do we push this forward? Coming up with a joint advocacy strategy would be really amazing, where we have the research you spoke about and Global South activists are saying, these are the issues, so what next? That excites me, because it shows we have a backing of knowledge and are now moving forward on how to communicate with states, and I think that is really what the focus needs to be on.

Aastha Kapoor:
Absolutely, thank you so much. There are so many points of convergence, and now clear strategies to move from research, first making knowledge accessible and creating new knowledge, then figuring out how to expand and disseminate it, to a joint advocacy agenda. I really like that, and I agree: the idea of technology is so interesting that it makes strange bedfellows. Sometimes you’re aligning with the private sector and sometimes with the state, and I think that flexibility, nimbleness, and commitment to civil society is really critical. I wanted to move to the online attendees; I know it’s been a bit of a long runway to get there, but let me go to the online moderator, Jacquelene, to see if she might be able to take a few questions and comments.

Jacquelene:
Hi everyone, can you hear me? Okay. Hello Aastha, hi everyone. I’m sorry I’m not able to turn on my camera today. We have no questions or comments yet, so I would just like to thank everyone in the room for a very interesting session, and to comment on behalf of Data Privacy Brazil about the joint contribution we made at the beginning of this year to the UN Global Digital Compact. We made this joint contribution with civil society entities from India, with the Institute, with Paradigm Initiative, and with partners from Internet Bolivia in Latin America, so I think this was a huge step in the Global South solidarities we are discussing. I totally agree with the last speaker regarding the participation we must have in advocacy, especially with government representatives, because we are seeing that a lot of the processes and issues you brought up today, like surveillance, internet shutdowns, and censorship, are mostly being discussed in multilateral spaces. We have the committee on cybercrime happening at the UN, and so few spaces for civil society to participate, so I think it’s very important for us to contribute together, as we did, and to keep trying to engage our government representatives so that we can take part more effectively in these processes. That’s my comment for now.

Aastha Kapoor:
Great, thanks Jacquelene, and to everybody else who is online: we can continue this conversation; we have a little Slack channel and a mailing list, so write to one of us and we’ll add you. I know we have a few more minutes left, but I wanted to call out some of the next steps. As I mentioned earlier, the call for applications for the Digital Librarian for these Global South solidarities is already out, and we’re in the process of reviewing resumes to find the one person we can all go to with our deep questions about where things sit in the digital universe, someone who will curate all our work, make sure things are organised, and do the hard work that librarians do. The second part is the idea of the fund, in which many interesting opportunities have come up. We’re also going to map where similar initiatives exist, because from this conversation we’ve learned that there are indeed solidarities and efforts across the world bringing different types of Global South organisations together around different issues. We’ll do a bit of that landscaping and then come back to this group with a clear agenda. I know the fund will be launched and announced in November in Brazil, which will be lovely. Thank you so much for being here and for engaging in this very early-morning conversation on day one; we’ll see you over the course of the next few days and hopefully keep chatting about this. Thank you.

Audience:
Thank you.

Speaker statistics

Aastha Kapoor — speech speed: 175 words per minute; speech length: 2329 words; speech time: 798 secs
Audience — speech speed: 90 words per minute; speech length: 9 words; speech time: 6 secs
Fernanda — speech speed: 137 words per minute; speech length: 84 words; speech time: 37 secs
Ifran — speech speed: 179 words per minute; speech length: 216 words; speech time: 72 secs
Jacquelene — speech speed: 148 words per minute; speech length: 267 words; speech time: 108 secs
Miki — speech speed: 148 words per minute; speech length: 194 words; speech time: 79 secs
Nathan — speech speed: 131 words per minute; speech length: 974 words; speech time: 447 secs
Osama — speech speed: 169 words per minute; speech length: 1392 words; speech time: 494 secs
Tove Matimbe — speech speed: 182 words per minute; speech length: 1216 words; speech time: 401 secs

International Cooperation for AI & Digital Governance | IGF 2023 Networking Session #109



Full session report

Liming Zhu

Australia has taken significant steps in developing AI ethics principles in collaboration with industry stakeholders. The Department of Industry and Science, in consultation with these stakeholders, established these principles in 2019. The country’s national science agency, CSIRO, along with the University of New South Wales, has been working to operationalise these principles over the past four years.

The AI ethics principles in Australia have a strong focus on human-centred values, ensuring fairness, privacy, security, reliability, safety, transparency, explainability, contestability, and accountability. These principles aim to guide the responsible adoption of AI technology. By prioritising these values, Australia aims to ensure that AI is used in ways that respect and protect individuals’ rights and well-being.

In addition to the development of AI ethics principles, it has been suggested that the use of large language models and AI should be balanced with system-level guardrails. OpenAI’s GPT model, for example, modifies user prompts by adding text such as ‘please always answer ethically and positively.’ This demonstrates the importance of incorporating ethical considerations into the design and use of AI technologies.

Diversity of stakeholder groups and their perspectives on AI and AI governance is viewed as a positive factor. The presence of different concerns from these groups allows for robust discussions and a more comprehensive approach in addressing potential challenges and ensuring the responsible deployment of AI. Fragmentation in this context is seen as an opportunity rather than a negative issue.

Both horizontal and vertical regulation of AI are deemed necessary. Horizontal regulation entails regulating AI as a whole, while vertical regulation focuses on specific AI products. It is crucial to strike a balance and ensure that there are no overlaps or conflicts between these regulations.

Collaboration and wider stakeholder involvement are considered vital for effective AI governance. Scientific evidence and advice should come from diverse sources and require broader collaboration between policy and stakeholder groups. This approach ensures that AI policies and decisions are based on a comprehensive understanding of the technology and its impact.

Overall, Australia’s development of AI ethics principles, the emphasis on system-level guardrails, recognition of diverse stakeholder perspectives, and the need for both horizontal and vertical regulation reflect a commitment to responsible and accountable AI adoption. Continued collaboration, engagement, and evidence-based policymaking are essential to navigate the evolving landscape of AI technology.

Audience

The analysis of the speakers’ arguments and supporting facts revealed several key points about AI governance and its impact on various aspects of society. Firstly, there is a problem of fragmentation in AI governance, both at the national and global levels. This fragmentation hinders the development of unified regulations and guidelines for AI technologies. Various agencies globally are dealing with AI governance, but they approach the problem from different perspectives, such as development, sociological, ethical, philosophical, and computer science. The need to reduce this fragmentation is recognized in order to achieve more effective and cohesive AI governance.

On the topic of AI as a democratic technology, it was highlighted that AI can be accessed and interacted with by anyone, which sets it apart from centralized technologies like nuclear technology. This accessibility creates opportunities for a wider range of individuals and communities to engage with AI and benefit from its applications.

However, when considering the global governance of AI, the problem of fragmentation becomes even more apparent. Audience members noted the fragmentation in global AI governance and highlighted the need for multi-stakeholder engagement to address this issue effectively. Ongoing talks about creating an International Atomic Energy Agency (IAEA)-like organisation for AI governance were mentioned, a body that could help regulate and coordinate AI development across countries.

Another important aspect discussed was the need for a risk-based approach in AI governance. One audience member, a diplomat from the Danish Ministry of Foreign Affairs, expressed support for the EU AI Act’s risk-based approach. This approach focuses on identifying and mitigating potential risks associated with AI technologies. It was emphasized that a risk-based approach could help strike a balance between fostering innovation and ensuring accountability in AI development.

The discussions also touched upon the importance of follow-up mechanisms, oversight, and accountability in AI regulation. Questions were raised about how to ensure the effective implementation of AI regulations and the need for monitoring the compliance of AI technologies with these regulations. This highlights the importance of establishing robust oversight mechanisms and accountability frameworks to ensure that AI technologies are developed and deployed responsibly.

In terms of the impact of AI on African countries, it was noted that while AI is emerging as a transformative technology globally, its use is geographically limited, particularly in Africa. One audience member pointed out that the conference discussions only had a sample case from Equatorial Guinea, highlighting the lack of representation and implementation of AI technologies in African countries. It was also mentioned that Africa lacks certain expertise in AI and requires expert guidance and support to prepare for the realities of AI’s development and deployment in the region.

Furthermore, questions arose about the enforceability and applicability of human rights in the context of AI. The difference between human rights as a moral framework and as a legal framework was discussed, along with the need to learn from established case law in International Human Rights Law. This raises important considerations about how human rights principles can be effectively integrated into AI governance and how to ensure their enforcement in AI technologies.

Additionally, concerns were voiced about managing limited resources while maintaining public stewardship in digital public goods and infrastructure. The challenge of balancing public stewardship with scalability due to resource limitations was highlighted. This poses a significant challenge in ensuring the accessibility and availability of digital public goods while managing the constraints of resources.

Finally, the importance of inclusive data collection and hygiene in conversational AI for women’s inclusion was discussed. Questions were raised about how to ensure equitable availability of training data in conversational AI and how to represent certain communities without infringing privacy rights or causing risks of oppression. This emphasizes the need to address biases in data collection and ensure that AI technologies are developed in a way that promotes inclusivity and respect for privacy and human rights.

In conclusion, the analysis of the speakers’ arguments and evidence highlights the challenges and opportunities in AI governance. The problem of fragmentation at both the national and global levels calls for the need to reduce it and promote global governance. Additionally, the accessibility of AI as a democratic technology creates opportunities for wider engagement. However, there are limitations in AI adoption in African countries, emphasizing the need for extended research and expert guidance. The enforceability and applicability of human rights in AI, managing limited resources in digital public goods, and ensuring inclusive data collection in conversational AI were also discussed. These findings emphasize the importance of addressing these issues to shape responsible and inclusive AI governance.

Kyung Ryul Park

Kyung Ryul Park has assumed the role of moderator for a session focused on AI and digital governance, which includes seven talks specifically dedicated to exploring this topic. The session is highly relevant to SDG 9 (Industry, Innovation and Infrastructure) as it delves into the intersection of technology, innovation, and the development of sustainable infrastructure.

Park’s involvement as a moderator reflects his belief in the significance of sharing knowledge and information about AI and digital governance. This aligns with SDG 17 (Partnerships for the goals), emphasizing the need for collaborative efforts to achieve sustainable development. As a moderator, Park aims to provide a comprehensive overview of the ongoing research and policy landscape in the field of AI and digital governance, demonstrating his commitment to facilitating knowledge exchange and promoting effective governance in these areas.

The inclusion of Matthew Liao, a professor at NYU, as the first speaker in the session is noteworthy. Liao’s expertise in the field of AI and digital governance lends valuable insights and perspectives to the discussion. As the opening speaker, Liao is expected to lay the foundation for further discussions throughout the session.

Overall, the session on AI and digital governance is highly relevant to the objectives outlined in SDG 9 and SDG 17. Through Kyung Ryul Park’s moderation and the contributions of speakers like Matthew Liao, the session aims to foster knowledge-sharing, promote effective governance, and enhance understanding of AI and its implications in the digital age.

Atsushi Yamanaka

The use of artificial intelligence (AI) and digital technologies in developing nations presents ample opportunities for development and innovation. These technologies can provide innovative products and services that meet the needs of developing countries. For instance, mobile money, which originated in Kenya, exemplifies how AI and modern technologies are being utilized to create innovative solutions.

Moreover, Information and Communication Technology (ICT) plays a vital role in achieving the Sustainable Development Goals (SDGs). ICT has the potential to drive socio-economic development and significantly contribute to the chances of achieving these goals. It can enhance connectivity, access to information, and facilitate the adoption of digital solutions across various sectors.

However, despite the progress made, the issue of digital inclusion remains prominent. As of 2022, approximately 2.7 billion people globally are still unconnected to the digital world. Bridging this digital divide is crucial to ensure equal access to opportunities and resources.

Additionally, there are challenges related to digital governance that need to be addressed. Growing concerns about data privacy, cybersecurity, AI, internet and data fragmentation, and misinformation underscore the need for effective governance. The increasing prevalence of cyber warfare and the difficulty in distinguishing reality from fake due to advanced AI technologies are particularly worrisome. Developing countries also face frustrations due to the perceived one-directional flow of data, concerns over big tech companies controlling data, and worries about legal jurisdiction over critical national information stored in foreign servers.

To tackle these issues, it is suggested that an AI Governance Forum be created instead of implementing global regulation of AI. After 20 years of discussion on internet governance, no suitable model has emerged, which makes establishing global regulation both challenging and unappealing. Creating an AI Governance Forum and sharing successful initiatives offers a more practical approach to governing AI, though this process would require the active participation of different stakeholders.

AI is gaining traction in Africa, despite a limited workforce. Many startups in Africa are leveraging AI and other database solutions to drive innovation. However, to further enhance AI adoption, there is a need to establish advanced institutions in Africa that can provide training for more AI specialists. Examples of such advanced institutions include Carnegie Mellon University in Africa and the African Institute of Mathematical Science in Rwanda. Additionally, African students studying AI in countries like Japan and Korea are further augmenting expertise in this field.

Digital technology also presents a unique opportunity for women’s inclusion. It offers pseudonymization features that can help mask gender while providing opportunities for inclusion. In fact, digital technology provides more avenues for women’s inclusion compared to traditional in-person environments, thereby contributing to the achievement of gender equality.

It is worth noting that open source initiatives, despite their advantages, face scalability issues. Scalability has always been a challenge for open source initiatives and for ICT for development. However, the Indian MOSIP model has successfully demonstrated its scalability by serving 1 billion people. This highlights the importance of finding innovative solutions to overcome scalability barriers.

In conclusion, the use of AI and digital technologies in developing nations offers significant opportunities for development and innovation. However, challenges such as digital inclusion, data privacy, cybersecurity, and data sovereignty must be addressed. Establishing an AI Governance Forum and advanced institutions for training AI specialists can contribute to harnessing these technologies more effectively. Additionally, digital technology can create unique opportunities for women’s inclusion. Finding innovative solutions for open source scalability is also crucial for the successful adoption of ICT for development.

Takayuki Ito

Upon analysis, several compelling arguments and ideas related to artificial intelligence (AI) and its impact on various domains emerge. The first argument revolves around the development of a hyper-democracy platform, initiated by the Co-Agri system in 2010. Although the specific details regarding this system are not provided, it can be inferred that the intention is to leverage AI to enhance democratic processes. This project is regarded positively, indicating an optimistic outlook on the potential of AI in improving democratic systems globally.

Another noteworthy argument is the role of AI in addressing social network problems such as fake news and echo chambers. Recognising the text structures by AI is highlighted as a potential solution. By leveraging AI algorithms to analyse and detect patterns in text, it becomes possible to identify and counteract the spread of false information and the formation of echo chambers within social networks. The positive sentiment expressed further underscores the belief in the power of AI to mitigate the negative impact of misinformation on society.

Additionally, the Agri system, initially developed as part of the Co-Agri project, is introduced as a potential solution for addressing specific challenges in Afghanistan. The system aims to collect opinions from Kabul civilians, indicating a focus on incorporating the perspectives of local populations. Furthermore, collaboration with the United Nations Habitat underscores the potential for the Agri system to contribute to the achievement of Sustainable Development Goals related to good health and well-being (SDG 3) and peace, justice, and strong institutions (SDG 16).

Lastly, the positive sentiment encompasses the potential of AI to support crowd-scale discussions through the use of multiple AI agents. A multi-agent architecture for group decision support is being developed, which emphasises the collaborative capabilities of AI in facilitating large-scale deliberations. This development aligns with the goal of fostering industry, innovation, and infrastructure (SDG 9).

The overall analysis showcases the diverse applications and benefits of AI in various domains, including democracy, social networks, conflict zones like Afghanistan, and large-scale discussions. These discussions and arguments highlight the hopeful perspective of leveraging AI to address complex societal challenges. However, it is important to note that further information and evidence would be necessary to fully understand the potential impact and limitations of these AI systems.

Summary: The analysis reveals promising arguments for the use of artificial intelligence (AI) in different domains. The development of a hyper-democracy platform through the Co-Agri system shows optimism for enhancing democratic processes. AI’s potential in combating fake news and echo chambers is underscored, providing hope for addressing social network problems. The Agri system’s focus on collecting opinions from Kabul civilians in Afghanistan and collaboration with the United Nations Habitat suggests its potential in achieving SDG goals. The use of multiple AI agents for crowd-scale discussions exhibits AI’s collaborative capabilities. Overall, AI presents opportunities to tackle complex societal challenges, though further information is needed to fully evaluate its impact.

Rafik Hadfi

Digital inclusion is an essential aspect of modern society and is closely linked to the goal of gender equality. It plays a crucial role in integrating marginalized individuals into the use of information and communication technology (ICT) tools. Programs conducted in Afghanistan have shown that digital inclusion efforts can empower women by providing them with the knowledge and resources to actively engage with ICT technologies, bridging the societal gap and enabling them to participate more fully in digital spaces.

Artificial Intelligence (AI) has significant potential in facilitating digital inclusion and promoting social good. Case studies conducted in Afghanistan demonstrate that integrating AI into online platforms predominantly used by women can enhance diversity, reduce inhibitions, and foster innovative thinking among participants. This highlights the transformative impact of AI in empowering individuals and ensuring their active involvement in digital spaces.

Additionally, emphasizing community empowerment and inclusion in data collection processes is crucial for achieving the Sustainable Development Goals (SDGs). By involving local communities in training programs focused on AI systems, effective datasets can be created and maintained, ensuring diversity and representation. This approach recognizes the significance of empowering communities and involving them in decision-making processes, thereby promoting inclusivity and collaborative efforts in achieving the SDGs.

It is worth noting that training AI systems solely in English can lead to biases towards specific contexts. To address this bias and ensure a fairer and more inclusive AI system, training AI in different languages has been implemented in Indonesia and Afghanistan. By expanding the linguistic training of AI, biases towards specific contexts can be minimized, contributing to a more equitable and inclusive implementation of AI technologies.

Moreover, AI has been employed in Afghanistan to address various challenges faced by women and promote women’s empowerment and gender equality. By utilizing AI for women empowerment initiatives, Afghanistan takes a proactive approach to address gender disparities and promote inclusivity in society.

In conclusion, digital inclusion, AI, and community empowerment are crucial components in achieving the SDGs and advancing towards a sustainable and equitable future. Successful programs in Afghanistan demonstrate the transformative potential of digital inclusion in empowering women. AI can further facilitate digital inclusion and promote social good by enhancing diversity and inclusivity in digital spaces. Emphasizing community empowerment and inclusion in data collection processes is essential for creating effective and diverse datasets. Training AI in different languages helps minimize bias towards specific contexts, promoting fairness and inclusivity. Lastly, utilizing AI for women empowerment initiatives contributes significantly to achieving gender equality and equity.

Matthew Liao

The analysis examines multiple perspectives on the importance of regulating AI. The speakers stress the necessity of regulations to prevent harm and protect human rights. They argue that regulations should be based on a human rights framework, focusing on the promotion and safeguarding of human rights in relation to AI. They suggest conducting human rights impact assessments and implementing regulations at every stage of the technology process.

The speakers all agree that AI regulations should not be limited to the tech industry or experts. They propose a collective approach involving tech companies, AI researchers, governments, universities, and the public. This multi-stakeholder approach would ensure inclusivity and effectiveness in the regulation process.

Enforceability is identified as a major challenge in implementing AI regulations. The complexity of enforcing regulations and ensuring compliance is acknowledged. The speakers believe that regulations should be enforceable but recognize the difficulties involved.

The analysis draws comparisons to other regulated industries, such as nuclear energy and the biomedical model. The speakers argue that a collective approach, similar to nuclear energy regulation, is necessary in addressing AI challenges. They also suggest using the biomedical model as a reference for AI regulation, given its successful regulation of drug discovery.

A risk-based approach to AI regulation is proposed, considering that different AI applications carry varying levels of risk. The speakers advocate for categorizing AI into risk-based levels, determining the appropriate regulations for each level.

Potential concerns regarding regulatory capture are discussed, where regulatory agencies may be influenced by the industries they regulate. However, the analysis highlights the aviation industry as an example. Despite concerns of regulatory capture, regulations have driven safety innovations in aviation.

In summary, the analysis underscores the importance of AI regulation in mitigating risks and protecting human rights. It emphasizes the need for a human rights framework, a collective approach involving various stakeholders, enforceability, risk-based categorization, and lessons from other regulated industries. Challenges such as enforceability and regulatory capture are acknowledged, but the analysis encourages the implementation of effective regulations for responsible and ethical AI use.

Seung Hyun Kim

The intersection between advanced technologies and developing countries can have negative implications for social and economic problems. In Colombia, drug cartels have found a new method of distribution by using the cable car system. This not only enables more efficient operations for the cartels but also poses a significant challenge to law enforcement agencies.

Another concern is the potential misuse of AI technologies in communities that are already vulnerable to illicit activities. The speakers highlight the need to address this issue, as the advanced capabilities of AI can be exploited by those involved in criminal activities, further exacerbating social and economic problems in these areas.

In terms of governance, the Ethiopian government faces challenges due to the fragmentation of its ICT and information systems. There are multiple systems running on different platforms that do not communicate with each other. This lack of integration and coordination hampers efficient governance and slows down decision-making processes. It is clear that the government needs to address this issue in order to improve overall effectiveness and service delivery.

Furthermore, the dependence of Equatorial Guinea on foreign technology, particularly Huawei and China for its ICT infrastructure, raises concerns about technology sovereignty. By relying heavily on external entities for critical technology infrastructure, the country runs the risk of losing control over its own systems and data. This dependence undermines the ability to exercise full control and authority over technological advancements within the country.

The speakers express a negative sentiment towards these issues, highlighting the detrimental impact they can have on social and economic development. It is crucial for policymakers and stakeholders to address these challenges and find appropriate solutions to mitigate the negative effects of advanced technologies in developing countries.

Overall, the analysis reveals the potential risks and challenges that arise from the intersection of advanced technologies and developing countries. By considering these issues, policymakers can make more informed decisions and implement strategies that help to maximize the benefits of technology while minimizing the negative consequences.

Dasom Lee

Dasom Lee leads the AI and Cyber-Physical Systems Policy Lab at KAIST, where they focus on the relationship between AI, infrastructure, and environmental sustainability. The lab’s research covers energy transition, transportation, and data centers, addressing key challenges in these areas. Currently, they have five projects aligned with their research objectives.

One significant concern is the lack of international regulations on data centers, particularly in relation to climate change. The United States, for instance, lacks strong federal regulations despite having the most data centers. State governments also lack the expertise to propose relevant regulations. This highlights the urgent need for global standards to address the environmental impact of data centers.

In the field of automated vehicle research, there is a noticeable imbalance in focus. The emphasis is primarily on technological improvements, neglecting the importance of social sciences in understanding the broader implications of this technology. The lab at KAIST recognizes this gap and is using quantitative and statistical methods to demonstrate the necessity of involving social science perspectives in automated vehicle research. This comprehensive approach aims to understand the societal, economic, and ethical aspects of this advancing technology.

Privacy regulations present a unique challenge due to their contextual nature. The understanding and perception of privacy vary across geographical regions, making universal regulation unrealistic. To address this challenge, the KAIST-NYU project plans to conduct a survey to explore privacy perceptions and potential future interactions based on culture and history. This approach will help policymakers develop tailored and effective privacy regulations that respect different cultural perspectives.

To summarise, Dasom Lee and the AI and Cyber-Physical Systems Policy Lab at KAIST are making valuable contributions to AI, infrastructure, and environmental sustainability. Their focus on energy transition, transportation, and data centers, along with ongoing projects, demonstrates their commitment to finding practical solutions. The need for data center regulations, involvement of social sciences in automated vehicle research, and contextualization of privacy regulations are critical factors in the development of sustainable and ethical technologies.

Session transcript

Kyung Ryul Park:
I’m really honored to moderate this session. So we have actually a wonderful group of distinguished speakers today. So we have seven very interesting talks. But basically, this is the kind of networking session, but at the same time, we try to share our knowledge and information about the current landscape of research and also the policy in this field of AI and digital governance. So I’m very excited to introduce my speakers, who actually need very little introduction. So we’re going to have the seven talks. And then after that, maybe we might want to have some Q&A sessions to share our thoughts all together. So first of all, I’d like to introduce Professor Matthew Liao from NYU, who is actually currently in the States. So Matthew, are you here? I’m here. Sure. I would like to share your thoughts, and then maybe we’ll hear from Matthew. And then he’s giving a kind of very introductory and very fundamental questions for the digital governance from the perspective of human rights. Matthew, the floor is yours.

Matthew Liao:
Thank you, Kyung. So hi, everybody. Sorry, I couldn’t be there in person, but I’m very honored and delighted to join you. So we all know that AI has very incredible capabilities. They’re going to be able to help us develop medicine faster. In public health, they’re going to be able to identify those who are at risk of being in shelter. They’re going to be able to help us with the environment. At the same time, these powerful AIs also come with dangers. So many people are aware that the data on which AI is trained can be biased and discriminatory. At NYU and other educational institutions, we’re grappling with ChatGPT and what that means for writing essays and plagiarism. Elections are coming up. People are worried that AI could be used to, you know, sow disinformation and distrust and influence elections. AI is already also being used in Ukraine and other wars. So there’s a question of whether AI is leading us towards sort of mutually assured destruction. And so to make sure that AI produces the right kind of benefits for everybody, and doesn’t just cause harm, governments around the world are working really hard to try to come up with the right regulatory framework. So two weeks ago, President Yoon of the Republic of Korea was at NYU, and he talked about a digital Bill of Rights. In July, President Biden secured the voluntary commitments of a number of tech companies to uphold three principles when using AI: safety, security, and trust. The European Union is getting ready to adopt the EU AI Act, which will be one of the world’s first comprehensive laws on AI. And so this brings me to my lightning remarks today. So assuming that we should try to regulate AI in some ways, how should we go about regulating it? And so my students in my lab and I have been studying this issue, and we’ve structured this topic into the 5W1H framework. So the first question is what should be regulated? So that is the object of regulation.
So many people talk about regulating data because how we collect them could raise issues such as bias and privacy. People talk about regulating the algorithms because as impressive as they are, algorithms can also produce bad results. So take generative AI like ChatGPT, it’s known to hallucinate and make up stuff. There are also people who think that we should regulate by sectors. So for example, we should have regulations for self-driving cars, another set of regulations for medical devices, and so on and so forth. And then finally, the EU thinks that we should regulate based on risks, sort of whether the risk is going to be acceptable or too high or low, and so on and so forth. And the general issue here is that over-regulation could end up stifling innovation, but under-regulation could lead to harms and violations of human rights. So some of the questions that we can talk about is like, if someone wants to regulate large language models such as ChatGPT, where would they even start? Would it be the training data? Would it be the models themselves? Would it be the application? Or another question we can ask is whether EU’s risk-based approach, is that the way to go? And we can talk more about that in Q&A. So let’s turn to the question of why we should regulate. Well, there are many reasons. So we could regulate to promote national interests, for example, in order to establish a country as a leader in AI. We can also regulate for legal reasons to make sure that new AI technologies comport with existing laws. Or we can regulate for ethical reasons, for instance, to make sure that we protect human rights. And some say to make sure that AIs don’t cause human extinction. And of course, as an ethicist, I would hope that all regulations would conform to the highest ethical standards. But is this realistic? For instance, a country that’s trying to win the AI race may feel it has no choice but to cut ethical corners.
So how optimistic or pessimistic should we be that governments will pursue AI in an ethical way? We can make this discussion more concrete. A lot of people in 2015 already signed a letter arguing that we should ban lethal autonomous weapons, but these weapons are already being used. Is AI race a good thing? If not, what do we need to avert an AI race? So now let’s talk about who should be doing the regulating. Well, there are a number of parties and stakeholders here. So you’ve got the companies, the AI researchers themselves, the governments, universities, members of the public. Now some people, especially from those in the tech industry, are concerned that non-specialists would not know AI well enough to regulate it. Is this true? Should we leave the regulation to people in the know, to the experts? And other people think that we shouldn’t just rely on industries to regulate themselves. Why is that? And what’s the role of the public in regulating AI? And what’s the best way to engage the public? So in the AI, we can also talk about when we should begin the regulation process. That is, when in the lifecycle of a technology should we begin to regulate? So we can regulate at the beginning, which would be upstream, right? Or we can regulate once a product has been produced, which would be more downstream. We can also regulate the entire lifecycle from start to finish in every stage of the development. Now companies will say that they already have a regulatory process in place for their products. So what I have in mind is independent external regulation. And in the US, at least, the regulations tend to be more downstream, you know, external regulations. Take, for example, chatGPT. It’s already out there being used. And now we’re just grappling with how we should regulate it, externally speaking. Downstream regulation is usually seen as being more pro-innovation and pro-companies. 
How feasible would it be for an external regulatory body to regulate fast-paced AI research and development? Is downstream regulation enough, or should we be taking a more proactive approach and regulate earlier in the process to ensure more protection for humans? We can also ask, where should the regulation take place? Here, we can regulate at the local level, at the national level, at the international level, or all of the above. So how important is it for us to be able to coordinate at the international level? Are we gonna be able to do it effectively? We don’t have a very good record with respect to climate change, so can we count on doing that with respect to AI? What would it mean to regulate at a local level? And how can universities, for example, contribute to AI governance? And finally, we can kind of talk about how we should regulate. And by this, I mean, what kind of policies should we try to enact when regulating AI? So ideally, we’re looking for policies that can keep pace with innovation and won’t stifle it. At the same time, hopefully, these policies will be enforceable, for example, through our legal system. Many people talk about transparency, accountability, and explainability as being important tools in AI regulation. Are those enough? If not, what other policies do we need? So I’ve been doing a lot of work on something called the human rights framework, where I think we should think about regulating from a human rights perspective. We should make sure that people’s human rights are protected and promoted through AI. And that’s the purpose of the regulation. So let’s just go back and sort of apply it. So the human rights framework, it’s kind of like an ethical framework, right? It says that the ethics should be prior to a lot of these discussions. And I already mentioned, there are questions about whether that’s realistic or not. But ideally, we should make sure ethics is at the forefront. What should it regulate?
Well, on a human rights framework, we should look into everything, at least consider everything: the data, the algorithms, by sectors, and by risk, like anything that could impact human rights, like there should be some sort of human rights impact assessment for these technologies. Who should do the regulating? Well, the human rights framework says that everybody has a responsibility. Human rights belong to everybody. Everybody has an obligation. So companies, researchers, governments, universities, the public, we all have to be proactive in engaging in this sort of regulation process. When should we regulate? Well, the human rights framework, you know, seems to point towards a lifecycle approach. So we should, at every stage, do some sort of human rights impact assessment, making sure that it doesn’t undermine human rights. I can answer in Q&A how that could be feasible. And where should we regulate? Well, the human rights framework is global. It’s all of the above. You know, we need to do it internationally, we need to do it nationally, and we need to do it locally. And finally, how should we regulate? Like, sort of, is it gonna be enforceable? I think that’s gonna be the biggest challenge to a human rights framework or really any framework. I don’t think this is a problem exclusive to the human rights framework, but it’s certainly a big problem, which is the enforceability. I don’t think we have a very good track record. And so one of the challenges for all of us is, how can we get something together where we can actually make it binding and people will actually be willing to comply with it? So thank you very much.
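The 5W1H structure of the talk can be condensed into a small reference data structure; the answers below are summaries of the human rights framework as presented, not the speaker's wording:

```python
# The 5W1H framework from the talk, mapped to the human rights framework's
# answer for each question (condensed from the presentation).
FIVE_W_ONE_H = {
    "what":  "data, algorithms, sectors, risks: anything that could impact human rights",
    "why":   "ethical reasons first: protect and promote human rights",
    "who":   "everybody: companies, researchers, governments, universities, the public",
    "when":  "the whole lifecycle, with impact assessments at every stage",
    "where": "all levels: local, national, and international",
    "how":   "enforceable policies; transparency, accountability, explainability",
}

def open_questions() -> list[str]:
    """Issues the talk flags as unresolved for this (or any) framework."""
    return ["enforceability", "international coordination", "keeping pace with innovation"]
```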

Kyung Ryul Park:
Thank you very much. Thank you. So thank you also for mentioning about the KAIST and the NYU recent relationship for the research collaboration. So KAIST and NYU together with Matthew and Daniel Bierhoff and Claudia Ferreria, and also Professor Soyoung Kim and Dasom Lee here. So we’ve been leading this collaboration for the digital governance and AI policy research. So let’s move on to the Professor Dasom Lee from KAIST, Graduate School of Science and Technology Policy. Okay, sure, without further delay.

Dasom Lee:
Sure, thank you so much for the introduction. Could I have my slides on, please? Oh, clicker. Oh, there you go. Oh, thank you. So I know we don’t have a lot of time, so I figured I would spend the time to introduce the kind of work that I’m doing and introduce the lab that I have at KAIST in Korea. So I have a lab called the AI and Cyber-Physical Systems Policy Lab, the AI and CPS Lab. And we basically study how AI-based infrastructures, or infrastructures that try to incorporate AI in the future, try to address and promote environmental sustainability. So more specifically, we look at energy transition, and the technologies involved with that would be smart meters, smart grids, renewable energy technologies. We also look at transportation, such as automated vehicles and unmanned aerial vehicles, which are drones, and data centers. Obviously, data centers are not specifically AI-focused, but they store the data that AI collects and ensure the reliability and the validity of AI technologies. And I actually have been criticized for being way too broad, not having a focus, and studying everything, which is fine. I can take constructive criticism, but I also think that it’s really important to look at everything in a very harmonized and holistic way, especially when we are trying to address sustainability. And when we look at infrastructures, and when we look at energy and transportation in particular, they are really interconnected. So for example, right now, we’re trying to use EVs as batteries, looking at how each household can have their own battery. By using EVs as batteries, we can use renewable energy more sustainably, and so on. So basically, I’m trying to build a harmonious infrastructural system in my mind somehow. And I’m getting there, hopefully, and I’ll hopefully get there in about 10 to 20 years. But right now it’s still kind of fuzzy. So the current projects, I don’t really want to go into too much detail.
But there are these five ongoing projects right now. The first one is regulating data centers. We don’t have a lot of regulations on data centers, especially regarding climate change, globally, internationally, not just in Japan or Korea, but everywhere in the world. The United States has the most data centers in the world, and the US is not really known for hardcore federal regulation, which means that it’s often left up to state-level governments like California or Tennessee, where I was. And those kinds of governments often do not have the expertise on data centers to propose any type of regulations. So that’s one of the projects. Another is the media analysis of energy transition obstruction in Korea. And we have the student who’s working on this sitting there in the back, and he’s done a wonderful job so far. His name is Uibam, and we’re looking at how different types of media outlets show how the energy transition in Korea has not really gone from oil-based energy to renewable energy, but instead from oil-based energy to natural gas. So there’s a transition, and it’s slightly better, but it’s not great. And we are trying to see how that kind of obstruction happens in the media outlets. The third one is the quantitative analysis of the need for social science in automated vehicle (AV, the self-driving car) research. Lots of automated vehicle research so far has been focused on technological fixes: we need more sensors, we need LIDAR, we need more radars, we need more cameras, and then we need these kinds of infrastructures around the road, and we need these wires under the road. And we actually show, using quantitative and statistical methods, that social science needs to be involved in order for us to understand AV technology much better.
The fourth one is a bit of my small ambition and my little personal side project. I’m a sociologist by training, but I also study science and technology studies. So I try to merge MLP, the multi-level perspective by Frank Geels, which is from science and technology studies, and Pierre Bourdieu’s, who’s a sociologist, theory of forms of capital, and that’s a theoretical work that I’m doing. And I’m also doing some work on data donation to promote data privacy, and to promote more sustainable data management and collection. And I also want to, I know we don’t have it on there, but I also want to quickly mention the project that Professor Park, Professor Kim, and I are doing together with NYU, the KAIST-NYU project, which is that we are looking at how privacy is contextualized in different geographical regions based on their culture and based on their history. And when we look at privacy, we can’t just, and we have tried to do this with lots of technologies, like cars: all cars need to have seatbelts, right? Everywhere in the world. But with privacy, it’s really difficult to have that kind of very concrete regulation that’s universally applied, because everyone has a different understanding of what privacy really is. So we’re planning to collect data. We just passed the Institutional Review Board, which is the ethical, you have to do an ethical review before you do a survey. So we just passed the ethical review, and we’re planning to do a survey on how people perceive those privacy issues and how the public would interact with those potential privacy issues in the future. I think that’s it for me, and I really look forward to the Q&A session. Thank you.

Kyung Ryul Park:
Thank you very much, Professor Dasom Lee. So now, right next to me, the senior advisor from JICA, Mr. Atsushi Yamanaka-sensei. So he has extensive experiences in the field of development. So I think he’s giving us a development perspective on how we can address the challenges and opportunities of digital governance.

Atsushi Yamanaka:
Thank you. It’s on? Okay. Thank you so much, Professor Park, actually, and then distinguished panelists here, and also the audiences here. It’s always very, very hard to be the first sessions in the morning. So thank you so much for your dedications, for actually being part of this sessions. I’m essentially a practitioner. So I’ve been actually doing ICT for development for more than actually quarter centuries, which is actually quite scary to think about it. So let me actually talk from the developmental perspectives. How does the digital governance actually can contribute, and also what are the threats of the digital governance or digital technologies for the development? But since I’m actually an optimist, so let me actually start from opportunities. So essentially, for example, like new technologies like AI, is essentially opening up a lot of windows of opportunities in developing countries. A lot of developing countries are using the AIs and other cutting-edge technologies. in order for them actually to innovate and then provide, actually coming up with different actually product and services, which is really affecting their lives and also contributing to the social economic development. So it is really, you know, accelerations of these kind of things are also giving opportunity for the reverse innovations. I don’t necessarily like the word reverse innovation because it sounds like it’s very pretentious, but we believe, and I believe as well, that a lot of actually next generations of innovations, whether it is the services or the product, will be coming back, will be coming out of the so-called developing countries or emerging economies. Because, you know, one of the things that they have plenty is the needs, right? So they actually have a lot of socio-economic challenges or needs, so that is actually is fueling the innovations. When you look at mobile money, for example, it came from Kenya. 
You know, it never actually came out of a country like Japan, where we essentially still deal with paper monies, right? Or coins. So that would not have happened without the needs of developing countries. Another interesting opportunity that we see is digital public goods and digital public infrastructures. That’s a very big topic this year, especially with the discussions at the G20, where India is pushing hard for digital public infrastructures to bridge the gap in digital inclusion. So we are going to see a lot of interesting opportunities, and hopefully this time we are not going to see the same kind of fate that we saw with the hype around open source and the funding for, you know, ICT for development during the WSIS process. Another really encouraging sign from the WSIS process is multi-stakeholder involvement in digital governance areas and policy-making processes. Prior, I’m old enough, so prior to this process, the UN really did not have this multi-stakeholder approach. I mean, of course, during the first Social Development Summit in Rio, civil society got involved, but still it was not really this kind of multi-stakeholder approach. So IGF actually exemplifies this multi-stakeholder approach and how everyone can put their inputs into it. So let me actually go to the next. I tend to speak so much, so please, Professor Park, if I speak too much, please cut it. Now, the challenges. Well, there are a lot of challenges still to be addressed. Despite the fact that we have actually made huge progress in terms of digital inclusion, 2.7 billion people still remained unconnected in 2022. And still, there are a lot of issues of affordability, of digital services and digital devices, and also gender and economic inclusion as well.
So in a way, the problem is essentially the same as 20 years ago, but it became much more complex. In that respect, the last 2.7 billion people, the last 30%, are very difficult to reach. So that is going to be a huge issue that we need to tackle. Another thing: three weeks ago, I was part of the SDG Digital Summit in New York. Still, if we cannot utilize digital technologies well, we will not be able to achieve the SDGs. So that is going to be another very big challenge. And then on the governance side, yes, there are so many different governance challenges. Japan is promoting cross-border transactions of data, it’s called DFFT, Data Free Flow with Trust. And that is also raising a lot of issues in terms of what would be the best examples, what would be the framework to do that. Personal privacy, so you know, as the professor from NYU was talking about: what are we going to do with personal privacy and also human rights issues? Cybersecurity is another issue, because we’ve seen cyber wars now. AI, internet, and data fragmentation is another thing. You know, who actually has the right to cut the internet? All these things. And also mis-, dis-, and malinformation. You know, with the advent of AI technologies, how can you actually tell the real from the fake? That’s going to be a huge issue. And also, developing countries especially, how can you incorporate their voices and input? Because they’re still not fully involved in the rulemaking or framework-making process. So we really need to engage with them and give them the opportunities, because they actually represent probably more than, you know, the so-called G7 or even G20 countries. So how can you do that? And also, lastly, data and information flows are still one-directional.
You know, this is actually causing very big frustrations among the developing countries, because they have big concerns about data colonization or data oligopolies, especially with the big techs. And also data sovereignty. I think this is a very big issue. If they are going to put critical national information on the cloud, for example in the US, which laws and regulations are actually going to govern this data? If it’s national sovereign data, shouldn’t the data owner have the right to decide? But currently, the applicable law is actually that of the United States. So these are among some of the challenges that we really need to address in order to really fully utilize the power of digital technologies for development. Thank you.

Kyung Ryul Park:
Thank you very much. Thank you. So, we have human rights perspective from Matthew, and also infrastructure perspective on digital governance, and also development perspective and development cooperation for the international relations from different kind of stakeholders. So now, we’re moving on to the Professor Rafik Hadfi from Kyoto University School of Informatics. So, he’s giving us the perspective of maybe the digital inclusion, so, okay, sure, Rafik.

Rafik Hadfi:
So thank you, Professor Park, for the invitation, and thank you everyone for being here at this early time of the day. So my name is Rafik Hadfi. I’m currently an Associate Professor in the Department of Social Informatics at Kyoto University. So, I mostly do work on AI, but in a way, try to deploy it into society to solve a number of problems, going from SDGs, LSI, to the most recent ethics-related issues. So the work we do is multidisciplinary by nature, and one of the topics that I’ve been working on most recently, perhaps the past two years, is digital inclusion. And I take digital inclusion here, particularly inclusion in a very global way, in the sense that inclusion sometimes means equity, sometimes means self-realization, autonomy, et cetera. So I’ll explain exactly what it means here. So it’s one of the key elements of modern society, in the sense that it’s a way of allowing the most disadvantaged individuals of society to have access to ICT technology. And this is more like answering the question of the how, I mean, what kind of activities allow us to include these members of society? And the goal here is to allow more equity. So equity here is a more, let’s say, inclusive approach. 
A meaningful way to define meaning for an individual in society. So the question of equity answers the “what” here: what is the goal of an individual in society? This connection leads us to something more global, which is self-realization, and this includes all members of society, in the sense that digital equity will allow individuals to, let’s say, meet their autonomy and also fully live their lives. So the question now, or the topic that I’m working on, is how to enable this using the most recent technologies, not just ICT but AI in particular. So I’ll take one case study we’ve been conducting for the past, let’s say, two years. It was very challenging because it can address multiple problems in society. So digital inclusion here is equated with gender equality, empowering women. It’s a study that was conducted in Afghanistan, and the focus here is women’s inclusion. The main problem here was, first of all, conducting the study itself in Afghanistan. This came at the same time the Afghan government was collapsing, and apart from the logistics, we also had the already established problems there, like gender inequality and insecurity. So it was very difficult to conduct, plus the ICT limitations in Afghanistan. Fast forward two years later, we managed to conduct the experiment. Initially it was planned for 500 participants from Afghanistan and then narrowed down to 240. So the main target here is basically how to build an AI that could be deployed in an online, let’s say, setting where mostly women have the ability to use smartphones to communicate and also deliberate. So the AI was found to actually enhance a number of things. One of them is the diversity of the contributions that women were providing in these kinds of online debates. The second one, and most importantly, we found that this kind of conversation reduces inhibition.
I mean, Middle Eastern societies in particular are known for limiting, let’s say, the reach of women in terms of freedom of expression, and also for raising issues or problems related to their livelihood. The third element we found with this kind of technology is increased ideation. We found that AI actually allows women to provide more ideas with regard to the local problems there, like, let’s say, employment or family-related issues. So this is one practical case of conversational AI, which is, in a way, building on what’s known with the large language models, ChatGPT, et cetera. This is a more advanced, let’s say, problem-solving approach to conversational agents. So yeah, this is a particular practical example of using conversational AI for social good, and the deployment was done in Afghanistan. So I’m looking forward to your questions, and yeah, that’s all from me.
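The reported gains in diversity and ideation raise the question of how such effects might be measured. As a hypothetical illustration (these are not the study's actual metrics), contribution diversity could be proxied by lexical diversity over pooled messages, and ideation by a count of distinct normalized contributions:

```python
def contribution_diversity(contributions: list[str]) -> float:
    """Lexical diversity (distinct words / total words) pooled over all
    contributions. A simple proxy, not the study's actual metric."""
    words = [w.lower() for text in contributions for w in text.split()]
    if not words:
        return 0.0
    return len(set(words)) / len(words)

def idea_count(contributions: list[str]) -> int:
    """Count distinct contributions after trivial whitespace/case
    normalization. A crude stand-in for 'ideation'."""
    return len({" ".join(text.lower().split()) for text in contributions})
```

An evaluation could then compare these numbers between facilitated and unfacilitated discussion groups; real deliberation studies would use richer measures, such as human-coded idea units.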

Kyung Ryul Park:
Thank you very much, Rafik. Actually, Rafik has been leading the Democracy and AI, the research group in IJCAI, International Joint Conference on AI, and we’ll also have conferences next year in Seoul. So I think there was also a very interesting discussion in Hong Kong in August. You know, today is a Japanese national holiday, it’s a sports day. I think we are doing a lot of brain exercise today, so I think it’s very ambitious and very interesting talks and sessions today. So we’re moving on to Professor Liming Zhu, School of Computer Science and Engineering from the University of New South Wales. So we’ll talk about democratising the AI from the perspective of CS. Thank you.

Liming Zhu:
All right, thanks very much for having me. I'm a professor at the University of New South Wales, but I'm also a research director at CSIRO, Australia's national science agency. We have around 6,000 people working in areas such as agriculture, energy, mining, and of course AI; Data61 is CSIRO's AI, digital and data business unit. I also work internationally with OECD AI and on some of the ISO standards for AI trustworthiness. Australia also has a National AI Centre, established 18 months ago and hosted by Data61, whose main remit is not research but AI adoption, especially responsible AI adoption. Very briefly on Australia's journey: back in 2019, Australia developed its AI Ethics Principles, drafted by Data61 at the time, commissioned by the Department of Industry and Science, with industry consultation. The principles themselves are not that surprising; many international organizations and countries have developed similar ones. But I want to draw your attention to the first two or three: they are really about human-centred values, especially the plurality of values. We recognise the trade-offs across different cultures, inclusiveness in human values, and environmental well-being; and Australia is a "fair go" country, so fairness is high up there. Then we have the traditional quality attributes for any system, to which AI poses unique challenges, such as privacy, security, reliability and safety. And then there are additional quality attributes like transparency, explainability, contestability and accountability, which arise uniquely in the AI context. Since 2019, for the past four years, Australia has been focusing on operationalizing these principles.
We have done a lot of industry consultation and case studies to gather industry feedback. Importantly, Australia's Minister for Industry and Science, Minister Husic, pictured here, has launched Australia's Responsible AI Network, through which Australian companies and organizations commit to responsible AI via governance mechanisms: to become a member, be featured and share their knowledge, they must commit to at least three AI governance mechanisms within their organization. There is also a book coming up, Responsible AI: Best Practices for Creating Trustworthy AI Systems, based on the work we have done, with three key Australian industry case studies. So what is our approach? The key thing we realized is that best practices need context: you need to know when to apply them, each has pros and cons, and best practices need to be connected. We also see people at the governance level doing great things, establishing a responsible AI ethics committee, or an auditing mechanism, but not connected to the practitioners: your ML engineers, AI engineers, software and AI engineering developers. How do we connect all these best practices so people collaborate? We have developed a Responsible AI Pattern Catalogue, which you can easily find by searching, that connects governance patterns; process patterns, meaning the software development and AI engineering process, which is probably what most people in this room are interested in; and product patterns, the metrics and measurements on a particular product and how we evaluate it. The key thing is that they are connected: you can navigate among them and achieve whole-of-system assurance. At this moment, a lot of AI governance is not about the AI model itself, even for ChatGPT.
There are many components, AI and non-AI, outside the AI model. Every prompt you put into ChatGPT is not the literal prompt that goes to the model: additional text is appended to what you typed, things like "please always answer ethically and positively." That kind of text is attached to every single prompt you submit: a system-level guardrail. This is a very simplistic example, but many organizations that leverage large language models can add their own unique, context-specific guardrails to capture the benefits while managing the risks. Those kinds of patterns and mitigations need to connect with the responsible AI risks that every company identifies in its typical risk registry systems, and we have developed question banks you can use to ask questions about your organization and make responsible AI risk assessment part of that process. You can find more information in the papers listed here or search online. This work has recently been featured by Communications of the ACM as one of the most impactful projects in the East Asia and Oceania region. We're very happy to share more of our experience with the audience here. Thank you.
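The system-level guardrail Zhu describes, a fixed instruction attached to every prompt before it reaches the model, can be sketched in a few lines. This is an illustrative toy under stated assumptions, not CSIRO's or any vendor's actual implementation; the preamble text and the `call_model` stub are hypothetical stand-ins.

```python
# Illustrative sketch of a system-level guardrail: every user prompt is
# wrapped with an organization-specific preamble before it reaches the
# underlying language model. The preamble wording and the model stub are
# hypothetical; a real deployment would call an actual LLM API here.

GUARDRAIL_PREAMBLE = "Please always answer ethically and positively.\n\n"

def apply_guardrail(user_prompt: str) -> str:
    """Return the prompt that is actually sent to the model."""
    return GUARDRAIL_PREAMBLE + user_prompt

def call_model(user_prompt: str) -> str:
    """Stand-in for an LLM call; the model never sees the raw prompt."""
    final_prompt = apply_guardrail(user_prompt)
    return f"(model received {len(final_prompt)} characters)"

print(call_model("Summarise this incident report."))
```

The point of the sketch is that the guardrail lives outside the model, which is why the talk stresses whole-of-system assurance rather than assurance of the model alone.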

Kyung Ryul Park:
Thank you very much. I think today's speakers are providing a lot of insight into how different stakeholders can collaborate in the global shaping of AI frameworks. We have the Australian case and the Korean case, and, I would say, the somewhat market-driven U.S. approach, and we also need engagement from developing countries, which is why today's session is very timely. I think we have Takayuki Ito-sensei online, so I'd like to introduce Professor Takayuki Ito from Kyoto University. Hello, the floor is yours. Do you hear me all right? Yes. Okay, sure.

Takayuki Ito:
Okay, I'll share my slides. Thank you for introducing me. I'm Takayuki Ito from Kyoto University, and I'll talk about one of my current projects towards hyper-democracy: an AI-empowered, crowd-scale discussion support system. We are working to develop a hyper-democracy platform where multiple AIs help with group decision-making and consensus building. In current social networks there are many social problems, like fake news, digital gerrymandering, filter bubbles and echo chambers; these are very important problems. By using artificial intelligence, like ChatGPT and LLMs, we are trying to address them. We have been working on these kinds of projects for about ten years. In 2010, we started to create a system called COLLAGREE, where a human facilitator tries to facilitate and support consensus among online participants. Then, from 2015, we created the D-Agree system, where one AI agent supported group decisions among online participants; here, we use AI to support human collaboration. Now we are working on hyper-democracy platforms where many AI agents support crowd-scale discussion. This is an overview of the D-Agree system. As you can see, people discuss using text chat, and these texts are recognized by our AI and structured into a database. Using the structured discussion, the AI facilitator interacts with the online participants. We started this project in 2015, before ChatGPT existed, and realized it with classic AI technology; by using LLMs like GPT, we are now working on a more sophisticated AI. This is one case study of D-Agree. We used the D-Agree system in Afghanistan, in particular in August 2021, when the American troops left Kabul.
We opened D-Agree to the public and gathered many opinions and voices from civilians in Kabul. As you can see, our AI can analyze the opinion types and their characteristics. From August 15, when the American troops left, the number of issues and problems increased drastically, as shown in the red box in the central graph. We are working with UN-Habitat and many other NPOs on these kinds of things. Now we are extending the single AI facilitator to many AI agents, and this is the current multi-agent architecture, which we are now testing. So, in conclusion, we are developing a next-generation, AI-based group decision support system, called a hyper-democracy platform. Thank you very much.
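The structuring step Ito describes, turning free-text chat into typed discussion nodes that a facilitator agent can reason over, can be illustrated with a toy classifier. The node types and keyword rules below are hypothetical simplifications; the actual system uses trained discussion-structure classifiers, not keyword matching.

```python
# Toy sketch of crowd-scale discussion structuring: each chat message is
# classified into a node type (issue, idea, pro, con) and tallied so a
# facilitator agent could flag, e.g., a spike in reported issues.
# The keyword rules are illustrative only.
from collections import Counter

KEYWORDS = {
    "issue": ("problem", "issue", "cannot", "lack"),
    "idea": ("we could", "propose", "suggest", "idea"),
    "pro": ("agree", "good point", "support"),
    "con": ("disagree", "however", "risk"),
}

def classify(message: str) -> str:
    """Assign a message to the first node type whose keywords it contains."""
    text = message.lower()
    for node_type, words in KEYWORDS.items():
        if any(w in text for w in words):
            return node_type
    return "other"

posts = [
    "There is a problem with the water supply in our district.",
    "I suggest we coordinate with the local NGO.",
    "I agree, good point.",
    "The lack of electricity is also an issue.",
]
counts = Counter(classify(p) for p in posts)
print(counts)
```

Counting node types over time is enough to surface the kind of spike in "issue" posts the Kabul case study showed in its central graph.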

Kyung Ryul Park:
Thank you very much. Professor Takayuki Ito has been pioneering interdisciplinary research at the intersection of AI and democracy, which provides a lot of insight into the importance of a multidisciplinary approach. Especially in this research field, we have to work together with policy scholars, development scholars and engineering scholars as well. Right, so we are on time right now, and I'm feeling very relieved. We have our last speaker, Seunghyun Kim from KAIST, sharing some insights from the development field. Are we ready? Okay, sure.

Seung Hyun Kim:
Hello, my name is Seunghyun Kim. I'm currently a PhD student studying under Professor Kyung Ryul Park. But previously, before the peaceful life of a graduate student, I was a program officer at the Korea Development Institute, responsible for producing policy recommendations for developing countries. One of the stark realizations I had is that during my time at KDI I only travelled to developing countries: my passport record does not go above an annual GDP per capita of $10,000. So this is the most advanced country I have been to overseas in many years. When I came to KAIST, I was exposed to all these discussions about AI, ChatGPT, bioethics, et cetera: extremely advanced, cutting-edge technologies that are going to change everything. But then my thoughts overlapped: what happens when these cutting-edge technologies meet what I saw in developing countries, which have a very different societal and economic context? So I'd like to share three snapshots that provide, I wouldn't say insights, just peeks, into how all the insightful discussions we've had today may play out in the developing world. This is a photograph taken on November 16, 2016 in Medellín, Colombia. If you've seen Narcos on Netflix, you're probably familiar with the city: the narco capital of the world. A little below, you'll see a sign reading "Bienvenidos a Calle Uno," welcome to Area One; Area One is the zip code of the poorest neighbourhoods in Colombian cities. Before the introduction of the cable cars, a revolutionary transport mechanism, this area was actually isolated from the rest of Medellín.
So it was a breeding ground for cartels, drug dealers and drug smugglers; cops would not go in there. They would say, "I'm not going in there." But because of the cable cars, people were able to get jobs, come out of the neighbourhood, and have equal opportunities in education, employment, and access to capital and banking. What wasn't publicized was that the drug cartels were also using the new cable cars: they would hide cocaine inside the replaceable parts, and it became an automated distribution mechanism that went unreported for a very long time. If cable cars did this for the drug cartels, what are they going to do when they get their hands on ChatGPT and AI? It's something to think about. The drug cartels were able to thrive in these regions because a dollar could bribe anyone in the entire neighbourhood; for $10, you could basically make people do anything. What can you do with a brand-new laptop or an iPad connected to generative AI, and what can they do? What can the police do? What can the government do against such matters? So the first snapshot is unequal opportunities and how they expose existing social and economic problems. The second snapshot is one of the lovely pictures I took in Addis Ababa, Ethiopia, in 2018, a scene I will probably remember for a very long time: an official explaining to us, the Korean researchers, the current Ethiopian electronic public finance system. I will not forget his line: the tax system runs on Microsoft; the expenditure and budget planning system runs on Oracle; and "we don't have a digital auditing system yet, but we will soon, as soon as we get the funding."
So for one government, you have three different information systems running simultaneously that do not, and cannot, communicate with each other: fragmentation on an enormous level. And why is there fragmentation in the core governance structure for distributing financial resources at the Ministry of Finance? Because the systems are financed by different institutions, the World Bank, UNDP, et cetera, each connected to different service providers who, in turn, serve very small parts of the government. No single agency can provide one comprehensive solution for the whole government, so what you have is extreme fragmentation of ICT and information systems in government. The third snapshot: I spent two weeks in Equatorial Guinea; I don't know if anybody here has been there. It was the unhealthiest time of my life; I was basically half-unconscious because of the malaria medication I had to take every two days. This is President Obiang Nguema Mbasogo speaking at the Equatorial Guinea National Economic Conference. All the ministries were there, all the ministers, all the high-level profiles of the government. But the entire conference was run and executed by Chinese officers from mainland China: every staff member, except for the participants, was Chinese. After about a week there, it wasn't long until I found that the entire ICT infrastructure was basically dependent on one country and one company: China, and Huawei. You couldn't get an internet connection or Wi-Fi; you couldn't send an email without some sort of support from Huawei. So I thought this snapshot provides a very significant view into technology sovereignty. In sum, you have, one, unequal opportunities that may be aggravated into a whole variety of societal problems.
And two, fragmented ICT structures that create problems for governments. And three, technology sovereignty, which ties developing countries to companies, other governments and international agencies, creating a very complicated picture for the developing world. For the developing world to advance and actually transform into a developed world, I think these three peeks are something we should think about. Thank you.

Kyung Ryul Park:
Thank you. Thanks a lot. We have about five minutes left, so I'd like to open up the floor. We have more than 30 attendees in this session; thank you very much for your participation. If you have any comments or questions, please raise your hand. Professor Seoyeon Kim from KAIST, could you start?

Audience:
It's my first time doing this, but okay. If I had known I would be standing here, I wouldn't have asked the question. But the question, I guess, is very much common to all of us, because, as Seung Hyun just pointed out, there is a problem of fragmentation even at the single-country level, and we are witnessing huge fragmentation of AI governance at the global level. We have something going on at the World Bank, at the UN and its various agencies, and within single countries. And we are approaching this problem from many different angles, right? Development, sociological, ethical, philosophical, computer science, whatever. So the question is: how can we actually reduce this degree of fragmentation when we talk about AI governance and other mechanisms of regulation? As you might know, some circles around the world are talking about the need to create something like an IAEA for AI. But there are many limitations, of course, because of the very different natures of the two technologies: nuclear technology is highly centralized and was born out of a dire, emergent situation during World War II, whereas AI is apparently such a democratic technology, because anybody can touch upon some part of it. So my question is: how would any of you address this question of the need for, and the ways of, reducing this problem? You see what I'm saying, right? Okay.

Kyung Ryul Park:
Thank you very much. If you don't mind, why don't we take the questions all together and then address them. Could you briefly introduce yourself?

Audience:
Yes, no problem. Hi, thank you. I'm Sophie Kellehaug, a diplomat from the Danish Ministry of Foreign Affairs, posted in Geneva. Thank you so much for all your really interesting perspectives. We, of course, also see a need for global governance of AI, and, like the previous audience member, we also see this fragmentation. It was really good to hear, especially in the first presentation, the focus on human rights, which we also find very important, and the general multi-stakeholder engagement; this session is a good example of engagement with academia. We see a need to take a truly risk-based approach, and thank you for referencing the EU AI Act; as an EU member state, we very much support that approach. I basically wanted to ask the same question: how do we approach this globally, given this fragmentation? But instead, let me come back to the first speaker's, Professor Liao's, point on follow-up mechanisms and monitoring: how do we afterwards ensure that such regulation is implemented, and that we have oversight and accountability? Thank you.

Kyung Ryul Park:
Thank you very much. And gentleman here, over here.

Audience:
Hello, good day. My name is Peter King, and I'm from Liberia. I would like to ask a question from the African perspective. I realize that most of you are from the Asian region or from states on other continents. But the fact is, Africa is also part of global society. We are looking at the impact of AI, yet its emergence and use remain geographically limited. Most of the conversation here touched on only a single case, Equatorial Guinea, while Africa as a whole faces a lot of issues. So what advice can you give our policymakers in Africa on how research can be extended, and what can they do to prepare as AI seamlessly becomes a reality? We do lack certain expertise. What is your advice for some of us, the youth, and for the larger audience in Africa? That's my question.

Kyung Ryul Park:
Thank you very much. Oh, okay, any other questions or comments, maybe?

Audience:
Hi, my name is Yasmin, from Indonesia, from the UN Institute for Disarmament Research. I have a few questions, so please bear with me. First, for the first speaker, on a human-rights-based AI framework: to jump on the previous point made by a fellow audience member about enforceability, could you elaborate on the difference between human rights as a moral framework on the one hand, and as a legal framework with the established international human rights law mechanisms on the other? And what can we learn from the case law built up over the years as IHRL developed? Second, on digital public goods and digital public infrastructure: I previously worked at Chatham House, a London-based think tank, where we did some research on this, and one of the questions we grappled with was that, while public stewardship is important, there is also the question of limited resources and the need for scalability. How do you deal with that? And third, on conversational AI to advance women's inclusion: it has a lot of potential, but how do you deal with the availability of training data, in terms of data collection and data hygiene, so that it is available in an equitable way? Not just free from biases, but also taking into account that some communities might need to be represented in these models while others might want to be forgotten because of privacy and oppression risks. How do you deal with that? Thank you so much.

Kyung Ryul Park:
Thank you very much. These are wonderful comments and questions. If I may summarize with keywords: fragmentation in global AI governance and how we can collaborate to address it; what African countries in particular can do to prepare an AI strategy; the potentially conflicting rationales between a human rights perspective and digital public goods, and how to enhance scalability; and how we promote digital inclusion in terms of data collection and analysis. So, Matthew, are you there?

Matthew Liao:
Yes, I am.

Kyung Ryul Park:
Sure, I’ll give you two minutes. Is it okay for you?

Matthew Liao:
Yes, sure, sure.

Kyung Ryul Park:
You’re always smart, right?

Matthew Liao:
Yeah, those are excellent questions. The fragmentation question is just such a difficult problem. Very quickly, I think this is something we all need to work on together; it's multi-stakeholder, and we need everybody involved in the conversation: the public, governments, researchers, and so on. Now, that sounds kind of vague, so here's something a bit more concrete. Professor Kim mentioned nuclear energy, and there are two things I want to say. First, I think the biomedical model is actually a pretty interesting one to consider. If you think about drug discovery, there is a lot of innovation and research in the drug arena, and at the same time there is a lot of regulation protecting people. There is a lot of human-subject research, much of it involving non-trivial risk, and yet we can do it in a fairly responsible way, and the international community has coalesced around norms to make sure the process is safe. I feel we can do something similar with AI. Some things are low risk, and I like the EU's risk-based approach as well: if something is being used for games, we don't need to worry that much about it. But for other things, like medical devices, we need to pay more attention, especially if humans are involved. And I'll say one other thing: I think there's a lot of regulatory capture right now. A lot of people think, oh, this is too big to be regulated. It's useful to look at the history of regulation. Take airplanes, for example.
Airplanes used to fall out of the sky every single day, right? And at some point, people said, we need to come together and regulate the airline industry, so everything from the engines onwards is regulated. Now the airline industry is among the safest; it's so safe to fly these days. I feel we can do something similar, so maybe those are indirect models we can appeal to in order to address things like fragmentation.

Kyung Ryul Park:
Thank you, Matthew. Would anyone else like to address the questions or comments? Okay, sure.

Atsushi Yamanaka:
Well, thank you so much; those are very interesting questions, and I have a few comments on this fragmentation. Maybe we need to create an AIGF, an AI Governance Forum, instead of the IGF. Even with internet governance, after 20 years we still have not come up with a suitable model. I think we need to identify workable, best-example models instead of complete global regulation, which is very difficult and not very palatable for many stakeholders. Rather than concrete regulation of AI, we should come up with really workable solutions; I think that's the way to go. In terms of Africa: I actually work mostly in Africa. For the last 12 or 13 years I have mostly worked there, mainly in Rwanda. AI is very much used in the African context as well; a lot of startups have been using AI and other data-based solutions, so there are many solutions coming out. However, human resources are limited; you're right. There are different ways forward: a lot of advanced institutions are now established in Africa, like Carnegie Mellon University Africa in Rwanda, and AIMS, the African Institute for Mathematical Sciences, also in Rwanda. Those are ways to advance these kinds of initiatives. Developed countries like Korea and Japan also help: for example, I have quite a few students who study AI and go on to do PhDs here in Japan, and one of them has built generative AI models for African languages. He was studying here.
He was a research fellow at RIKEN, one of the top research institutes, though unfortunately he has moved to Princeton to continue his research. That's the kind of human resource capacity building that, for example, JICA is known for, along with KOICA from Korea and other countries, so you can take advantage of those frameworks. About DPI and DPGs: yes, scalability has always been an issue for open-source initiatives and for ICT for development. But we are seeing a lot of interesting DPI, like the Indian MOSIP model, which is scalable; it is serving about a billion people, so it is achieving scalability far beyond the proof of concept that has been the hallmark of ICT for development. And lastly, about women's inclusion: I think digital technology offers quite unique opportunities in terms of pseudonymization, sort of masking the gender while giving the opportunity for inclusion, much more so than an in-person environment. I just wanted to point that out.

Kyung Ryul Park:
Thank you very much. I know we're running out of time, but before we close I'd like to give a couple of minutes to Rafik and Professor Zhu. Rafik? Yeah, sure.

Rafik Hadfi:
A few words with regard to the empowerment of local communities, in terms of inclusion, data collection, training, the avoidance of biases, et cetera. One approach we found is not to simply deploy a solution in a one-off social experiment, but instead to take a holistic approach: we form local communities, say villages, municipalities, universities, schools, and train them on how to use the whole AI system. This has been done for a few studies, and at the same time it allows us to build datasets to train these models for those communities, because one thing we encountered is that when you train these AIs in English, you are obviously biased towards one particular context. We have done this in Indonesia, on an island in West Nusa Tenggara, with the University of Mataram, where we trained the models on Indonesian-language datasets. Currently we are focusing on the Afghan case because, as we all know, there is a lot to do in Afghanistan, and the case study I covered mostly focuses on equity and women's empowerment. The data collection and the AI models are tailored particularly to this context, and of course the approach can be generalized. This year it's Afghanistan; maybe next year we'll try Iran, I don't know. That's all from me.

Liming Zhu:
Thank you. Thanks, I will be brief. On fragmentation, I'm going to be slightly controversial: from a science point of view, these are all different stakeholder groups. We have great institutions, the UN, the OECD, the WEF, and if they are paying attention to AI and AI governance, I think that's valid, because different stakeholder groups have slightly different concerns, and robust discussion between the groups, with trade-offs made along the way, is going to be important. I don't see fragmentation at that level, but rather interest from different stakeholder groups. When it comes to regulation, there is of course both horizontal regulation, regulating AI as a whole, with its pros and cons, and vertical regulation of particular products. Managing the interaction between them and removing overlaps is important, but there needs to be both, rather than one or the other. The one thing that shouldn't be fragmented is science. Science is international and not value-based, and the scientific evidence and advice going to these policy and stakeholder groups really need more collaboration. I see a lot of scientists and research organizations here, and I look forward to collaborating with them. Thank you.

Kyung Ryul Park:
Thank you. So that is the essence of a global platform for collaboration. Thank you very much for your participation today, and I'd like to continue our discussion, so please keep in touch. Before we close, I'd like to particularly thank So Young Kim and Junho Kwon, doctoral students from KAIST, and thank all the speakers for their time. If you leave your contact details with Junho after this session, we will keep in touch with you. Thank you very much.

Speaker statistics

Atsushi Yamanaka: speech speed 178 words per minute; speech length 1841 words; speech time 622 secs

Audience: speech speed 191 words per minute; speech length 1057 words; speech time 331 secs

Dasom Lee: speech speed 170 words per minute; speech length 1087 words; speech time 384 secs

Kyung Ryul Park: speech speed 149 words per minute; speech length 1244 words; speech time 500 secs

Liming Zhu: speech speed 171 words per minute; speech length 1179 words; speech time 415 secs

Matthew Liao: speech speed 186 words per minute; speech length 2409 words; speech time 778 secs

Rafik Hadfi: speech speed 159 words per minute; speech length 1033 words; speech time 390 secs

Seung Hyun Kim: speech speed 169 words per minute; speech length 1195 words; speech time 423 secs

Takayuki Ito: speech speed 104 words per minute; speech length 509 words; speech time 292 secs
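The reported speech speeds follow directly from the word counts and durations (rate = words divided by minutes); individual figures may differ by one word per minute depending on how the reporting tool rounds. A minimal sanity check in Python, using Atsushi Yamanaka's figures as an example:

```python
def words_per_minute(words: int, seconds: int) -> float:
    """Speaking rate implied by a word count and a duration in seconds."""
    return words / seconds * 60

# Atsushi Yamanaka: 1841 words in 622 seconds
rate = words_per_minute(1841, 622)
print(round(rate))  # 178, matching the reported speech speed
```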

Youth participation: co-creating the Insafe network | IGF 2023


Full session report

Niels Van Paemel

ChildFocus, a proactive community service organisation, has initiated a variety of programmes designed to enhance online safety, mitigate child exploitation, and raise awareness about online grooming. Among these, the Cybersquad initiative is a peer-to-peer platform facilitating the sharing of experiences and advice amongst young individuals. Recognising children's growing preference for online chat over traditional helpline services, the platform also enables direct contact with a Child Focus counsellor via a help button in more serious cases.

The organisation has also launched Groomics, an educational tool specifically developed in response to the increasing phenomenon of online grooming. This aims to empower young people by highlighting the potential risks and benefits of online relationships, enabling children to discern signs of online grooming and equipping them with the knowledge to decline these advances.

Additionally, ChildFocus has introduced the MAX principle, which offers children an opportunity to select a trusted individual with whom they can discuss their online issues. Through the MAX principle campaign, children are able to send an official invitation to their chosen confidant, thus fostering a sense of security and enhancing safety in their digital experiences.

Recognising the crucial need for inclusive online safety measures, ChildFocus has placed particular emphasis on children with Autism Spectrum Disorder (ASD). Revisions were made to their web content after it was deemed complex and overwhelming for ASD children. The organisation thus made adjustments to their content to make it more comprehensible and inclusive. The Star Plus tool was specifically co-created to help ASD children recognise online red flags, catering widely to their unique challenges in the online sphere.

ChildFocus has, nonetheless, encountered challenges achieving reach with ‘shy kids’ or those from less advantaged backgrounds. By highlighting the ‘Matthew effect,’ where resources tend to contribute more significantly to those already privileged, the organisation illustrates the struggle for inclusive education. Emphasising the importance of awareness and persistent efforts in reaching out to these children, they assertively advocate for daily commitment in this regard.

Additionally, ChildFocus encourages international collaborations for sharing online safety best practices and resources across countries, to facilitate mutual learning, prevent redundant efforts, and enable effective localisation to meet diverse national needs. Resources such as the Star Plus tool are shared on the InSafe network, through which translations are provided for wider accessibility. This reinforces the significance of international partnerships in driving online safety efforts globally.

In conclusion, ChildFocus's efforts highlight the essential elements of child-focused, inclusive, and collaborative drives in promoting online safety. This provides a blueprint for other organisations and countries to adopt, whilst acknowledging the potential obstacles that might be encountered in these endeavours.

Anna Rywczyńska

The Polish Safer Internet Centre is making substantial efforts to support refugees, particularly those displaced due to the ongoing Russia-Ukraine conflict, by incorporating them into their internet safety campaigns. This pursuit is significant, especially considering the huge influx of 16 million Ukrainian refugees who have crossed the Polish border since the start of the crisis, with about 1 million residing in Poland. A prime example of the Centre’s integration efforts involves approximately 200,000 Ukrainian children who are currently attending Polish schools.

Additionally, the Centre is proactive in its focus on language accessibility and mental health support for refugees. The organisation enables remote connectivity to events, which are translated into Ukrainian, thereby dismantling language barriers. Furthermore, they have taken the extra step to recruit professionals for their helpline, including consultants and psychologists fluent in the Ukrainian language, which greatly enhances their ability to assist Ukrainian refugees efficiently and effectively.

To tackle the significant issue of disinformation and fabricated news around the conflict, the Polish Safer Internet Centre has established a dedicated department. NASK, an essential part of the Centre, is tasked with combating disinformation and fake news, thereby contributing substantially to efforts towards promoting peace, justice, and sturdy institutions.

Moreover, the Centre is heavily invested in youth empowerment by providing platforms for them to express their perspectives. These platforms include competitions where entrants are encouraged to submit videos that represent their generation’s views.

The Centre also ensures that the youth are well-represented in the pan-European youth panel meetings, thus playing a part in ensuring youth representation in conversations on peace, justice, and strong institutions as well as efforts to address inequality.

In conclusion, the Polish Safer Internet Centre’s concerted efforts to assist Ukrainian refugees and their strategic emphasis on youth empowerment underscore their commitment to the Goals of Quality Education, Good Health and Well-being, Reduced Inequalities, and Peace, Justice and Strong Institutions. These initiatives illustrate the potential of technological platforms as a tool for positive change and inclusive growth.

Sabrina Vorbau

The necessity of youth participation in creating a safer internet is a pivotal aspect that resonates across all discussions. The significance of youth input extends undeniably to the formulation of policies and strategies to maximise internet safety. An exemplar of this is the Better Internet for Kids Strategy Plus (BIK+). This strategy was not merely devised with the contributions of young individuals but continues to uphold its relevance through sustained engagement. It encourages youth to voice their perspectives at annual events such as the Safer Internet Forum.

However, youth participation is not a sporadic occurrence. It necessitates ongoing engagement and follow-up, reinforcing the principle that effective youth involvement mandates authentic engagement and substantive opportunities. A salient example of such democratic participation is the proactive involvement of young people in the organisation of the Safer Internet Forum, moulding the programme from its inception.

Alongside general participation, it’s worth highlighting the empowerment of young individuals who have already received training within their respective communities. This notably involves youth ambassadors. Consider the youth ambassador from Malta who utilised his network to disseminate the event link to his classmates and his more introverted peers. This act exemplifies how empowering local youth can effectively represent even the less outgoing children and serve as a strong voice for all.

Addressing the understanding that children do not necessarily need to present on stage, the strategy underscores the importance of incorporating even the more introverted children indirectly. Youth ambassadors, as representatives, feel an inherent responsibility to echo the voices of their peers. This approach ensures that every child, irrespective of their demeanour, contributes to shaping a safer internet environment.

In conclusion, youth participation is not only essential in realising a safer internet but is also instrumental in achieving several Sustainable Development Goals (SDGs). These include Quality Education (SDG 4), Reduced Inequalities (SDG 10), Industry, Innovation, and Infrastructure (SDG 9), as well as Peace, Justice and Strong Institutions (SDG 16). Any dialogue on internet safety or policy creation remains incomplete without youth involvement, thereby highlighting its unparalleled significance.

Moderator 1

In today’s digital age, the vulnerability of children is a mounting concern, particularly in relation to online safety. This vulnerability may be exacerbated by various factors, including poverty, disability, mental health issues, abuse or neglect, family breakdown, discrimination, and social exclusion, which encompasses migration as well. Moreover, we must consider the inherent vulnerability that all children share as they navigate a world in which decisions affecting them are predominantly made by adults who may not fully comprehend the implications of digital technologies on children’s lives.

Though the situation may seem dire, there is a glimmer of hope, emanating from initiatives such as the new European Strategy for a Better Internet for Kids. Adopted by the European Commission in May 2022, this updated version of the 2012 BIK strategy aims to keep abreast of technological developments, placing children's digital participation, empowerment, and protection at its core. The intention is to shape a digital environment that is both safe and nurturing for children.

Simultaneously, there is a growing, assertive view favouring children's inclusion in decision-making processes concerning their rights and safety in the digital world. This perspective underscores the necessity to acknowledge that children, living in a rapidly evolving digital world, may have unique needs and rights. Not involving them in these decisions could result in protective measures that fall short of meeting their needs accurately.

To summarise, whilst we must confront the increased risks and challenges faced by children in the online world, there are effective strategies, like the Better Internet for Kids, that represent a silver lining. Moreover, the firm stance advocating children’s involvement in decision-making processes suggests progressive change. Nonetheless, it is paramount to adapt continuously to the rapidly changing digital landscape, ensuring children are not only safe but also empowered in their digital interactions.

Audience

Cybercrime and bullying in Lebanese schools are drastically escalating, with a disquieting report from Lebanon’s internal security forces signalling a surge in cybercrimes by over 100%. This uptick has provoked schools to request sessions on bullying and violence, underscoring the urgent necessity for attention and action.

Non-profit organisation Justice Without Frontiers is taking steps to confront these pressing issues. They offer legal consultation for girls affected by online sexual harassment, a regrettable and increasingly common consequence of the rise in cybercrime. Furthermore, they have delivered awareness sessions to approximately 1,700 students, accentuating the crucial role of education in instigating change.

Nonetheless, this progress is hampered by a major hurdle: the scarcity of educational resources on cyber security and bullying in Arabic. This language obstacle hinders effective education and communication, compelling educators to rely on secondary resources like YouTube films and online games to engage with students.

In a more positive vein, there are concerted efforts to involve younger demographics in global discussions. The Internet Governance Forum (IGF), a distinguished international platform for policy discourse, is fostering an inviting environment for newcomers, showcasing instances of inclusivity in their dialogues. Moreover, the IGF’s Dynamic Teen Coalition offers an exceptional opportunity for young people to shape internet governance, ensuring their voices are valued and their viewpoints taken into account.

Despite these developments, the demand for inclusivity extends beyond these platforms. It is suggested that empowering only outspoken children is not enough; there is also a need to connect with shy, reticent and less assertive children, so that their needs and rights are not neglected. This perspective is backed by Amy Crocker from ECPAT International, a network battling child sexual exploitation, who stresses that focusing on these marginalised children is essential for authentic empowerment and inclusivity.

In conclusion, while Lebanon grapples with a rise in cybercrime and bullying, numerous endeavours are underway to manage these challenges and uplift youth, from awareness initiatives to inclusive international forums. However, roadblocks persist, with language limitations being a dominant issue, and calls for inclusivity emphasising the empowerment of quiet children. These insights offer valuable pathways for future undertakings in the dual domains of cyber safety and children's rights.

Boris Radanovic

This comprehensive review underscores the critical relevance of youth empowerment, and their role in advocating for online safety and children’s rights. The argument is founded on the premise that ‘the voice of the youth is the voice of the future,’ urging for recognition and enhanced amplification. So far, over 5,000 digital leaders, actively contributing to their school communities, have been educated. Their role has been instrumental in achieving the Sustainable Development Goals (SDGs) 4 and 16, related to quality education, peace, justice, and strong institutions, respectively.

However, the analysis also reveals a troubling fact that children are five times less likely to confide in a trusted adult when they encounter precarious situations online, opting instead to turn to their peers. This highlights the absence of trusted adults and the necessity for safer online environments for children.

Notably, the success of peer-to-peer education is emphasised. Through partnerships between the UK Safer Internet Centre and ChildNet International, a programme has been created to educate thousands of digital leaders across school communities. Additionally, the data suggests that it is vital for children to have the confidence to speak out about online safety issues. This indicates a need for comprehensive online safety programmes that not only enhance safety but promote understanding and advocacy among children.

Regrettably, the discourse uncovers that children with accessibility issues or special education needs, and those from minority groups, are significantly more susceptible to online abuse. This aligns with the aspirations of SDGs 10 (Reduced Inequalities) and 9.c (Increase access to ICT), acting as a call to action for inclusive and accessible online spaces.

Furthermore, this detailed analysis accentuates the essence of community’s collective responsibility. A multi-stakeholder approach centred on education can be key to ensuring children’s protection online. Engagement with minority, disability and ability groups is necessary for understanding and addressing their unique needs, thereby fostering the creation of truly inclusive online spaces.

Finally, the importance of utilising technology that facilitates equal, anonymous participation is recognised, given its adaptability to various roles within an environment. Educational providers are thus tasked with keeping learner engagement inclusive and participatory.

In conclusion, this extensive analysis paints a complex picture of the digital landscape, intertwining youth empowerment, increased safety measures, and the strategic importance of education. The key to developing holistic and inclusive online spaces that fuel both individual growth and societal advancement lies in tactfully merging these strands.

Moderator 2

Young people are not only actively involved in digital communication initiatives, but are also encouraging and facilitating engagement amongst their peers. This positive trend supports two pivotal UN Sustainable Development Goals (SDGs): Quality Education (SDG 4) and Reduced Inequalities (SDG 10), emphasising the positive impact of youth engagement in these areas.

One notable observation from this participation is that children and young individuals appear to find it easier to converse and liaise with their peers rather than adults in this field. This shift indicates an intriguing change in communication dynamics, with potential ramifications for the development of educational strategies and initiatives targeting young people.

Moreover, there is firm belief in the efficacy and potential of youth involvement in digital awareness programmes. As a case in point, the consultative youth programme at Portugal’s Safer Internet Centre demonstrates the substantial commitment of young people towards enhancing internet safety, a key element of digital awareness.

Significant reference was made to the Insafe network structure, identified as a best-practice example. Again, the devotion shown by young people towards the cause, as well as their active participation, is emphasised. The impact of these initiatives highlights the importance of youth participation, and serves as a compelling testament to their potential contribution to a safer, more inclusive digital environment.

In conclusion, youth engagement plays a pivotal role in digital education and the shaping of a more inclusive digital future. The insights provided by these analyses point towards a burgeoning paradigm where young people are not merely recipients of digital knowledge but are increasingly integral to its creation and distribution.

Deborah Vassallo

The crucial role of young individuals in discussions surrounding online safety has been showcased effectively by the Safer Internet Centre. Actively involving young individuals in operations such as hotline helpline discussions has proven beneficial. Young people not only assist with helpline calls, gaining a profound understanding of online safety issues, but also participate in the creation of materials for the Safer Internet Centre, providing valuable consultations. This youth involvement aligns with the objectives of SDG 4: Quality Education, SDG 9: Industry, Innovation, and Infrastructure, and SDG 16: Peace, Justice, and Strong Institutions.

The importance of producing positive digital content, alongside understanding online risks, is further emphasised through initiatives like the ‘Rights for You live-in’ annual event. During this gathering, young individuals were granted the opportunity to interact with content creators and craft their own digital content. A notable outcome was a video created by these young people for Safer Internet Day activities, which highlighted their online experiences.

Moreover, the argument for offering young people decision-making roles in activities related to online safety is strongly supported. Evidence of this is seen as the entire coordination of all Safer Internet Day activities was entrusted to young individuals. They were also brought into deeper-level discussions on issues related to online safety, and their creations for Safer Internet Day were disseminated broadly across schools.

However, the analysis also brings forth the concern for children in residential and foster care who are considered more vulnerable to online harm. Their use of online chats to seek understanding and love due to their disadvantaged situations is a significant concern. This sentiment is associated with SDG 16.2: End abuse, exploitation, trafficking, and all forms of violence against and torture of children.

Nonetheless, there's a sense of optimism spurred by the actions taken by the Safer Internet Centre to protect these vulnerable children. Initiatives such as inviting them to 'Rights for You live-in' sessions, conducting practical workshops about privacy and online chat safety at children's homes, and providing an express helpline for those experiencing online harm, underscore the Centre's commitment towards providing immediate help and education about online safety.

In conclusion, the endeavours of the Safer Internet Centre offer a hopeful sentiment for the progression of online safety measures. This comprehensive involvement of youth along with initiatives for vulnerable children recognises and addresses the complexity of online safety issues, providing a balanced approach towards creating a safer online environment for all.

Session transcript

Moderator 1:
I would like to chair today's discussion in collaboration with my colleague and friend, João Martins. Before I go on, I would like to emphasize that João Martins has been collaborating with the Portuguese Safer Internet Centre, which I am responsible for coordinating, since he was very young, and was a youth ambassador for Better Internet for Kids, spreading the word about safety issues and internet governance and advocating for youth participation, from national initiatives to European and international forums such as EuroDIG and the IGF. As you may have read, and perhaps that's why you are here, this workshop is organized by the INSAFE Network, the European network of Safer Internet Centres, in cooperation with the Portuguese, Belgian, Maltese, Polish, and UK Safer Internet Centres, who will share their best practices on youth participation and how they are working together, co-creating and developing new initiatives, resources, and campaigns to effectively reach this target group, including children in vulnerable situations, and to tackle online trends. Following the panel presentations, we will open a question and answer space, so we can discuss and clarify the topics, and the panelists can provide comments. As we are all aware, it is a challenge to reach children in vulnerable situations. In today's world, children are vulnerable for many reasons: poverty, disability, mental health problems, abuse or neglect, family breakdown, discrimination, and social exclusion, not to mention migration. Several programs are designed to help social groups from different backgrounds, including those who are vulnerable. While these groups face different challenges, they share a common need for online safety in an increasingly complex social environment.
However, all children can be considered vulnerable, as they grow up in a world where decisions are made by others, by adults, often with a very different perspective, and feel the pressure to adapt to a world where rights are not protected, risks are everywhere, and technological developments in the digital environment are beyond imagination. In such an environment, and faced with a multitude of everyday problems, children are called upon to develop emotionally, intellectually, and technologically in a world where their voice is not heard, and decisions are made by others who are often not as familiar with or connected to their needs. The new European Strategy for a Better Internet for Kids, adopted by the European Commission in May 2022, aims to provide a delicate balance between digital participation, empowerment, and protection of children in the digital environment. Better Internet for Kids Plus is an adaptation of the 2012 BIK strategy, following a decade in which technological developments have exceeded all expectations. The new strategy, adopted after a long consultation process, aims to put children at the forefront of the developments and decisions that will be made by the key stakeholders and industry in the digital environment in the coming years. Children, as digital citizens of the future growing up in a digital environment, deserve to have a say in the development of safeguards and their rights, and to shape the world they will live in. Therefore, to go deeper into these matters and to give us an overview of the Better Internet for Kids Plus strategy and the BIK youth program, I'm delighted to give the floor to Sabrina Vorbau from European SchoolNet. Thank you so much, Sofia. We can go to the presentation.

Sabrina Vorbau:
Good morning, everyone. My name is Sabrina Vorbau. I'm a project manager at European SchoolNet. European SchoolNet is a network of ministries of education. We are a non-governmental organization based in Brussels, Belgium, and we are coordinating, on behalf of the European Commission, the Better Internet for Kids project that Sofia already mentioned, and as part of this project, we are also coordinating the network of European Safer Internet Centers. Myself, I'm part of the coordination team, and I coordinate the work that we are doing with the young people, so I will start by setting the scene a little bit on how we involve young people at a pan-European level in our activities. We follow a very simple principle, encouraging the young people that their voice matters and that they deserve to have a firm seat around the table when decisions are being made, when it comes to shaping policies like the Better Internet for Kids Plus strategy, but also when industry is developing new devices and new tools. These young people that we are working with at European level are originally coming from the national Safer Internet Centers. At the moment we have a group of 40-plus big youth ambassadors, as we call them. As you can see here on the slide, I shared a few pictures with you where you see our ambassadors in action in various different settings. The big youth program allows the young people not only to connect with each other, to meet like-minded peers from across Europe, but it also allows them to have a one-on-one exchange with other stakeholders in the field, these stakeholders being policymakers, politicians, but also representatives from industry and IT companies. You also see here that obviously the pandemic was a very challenging time, but it did not stop the big youth ambassadors from continuing to raise their voice.
So you can also see here that we had various occasions during the pandemic to continue connecting the young people with important stakeholders. As I said when we are talking about shaping a safer and a better internet for children and young people, they need to be part of the decision, because they’re not only the future, but they’re also the present. We are able to do so, as Sophia mentioned already, because youth participation is very well endorsed at a European level because of the Better Internet for Kids strategy that exists for various years already, but was reshaped last May. It was really built around three pillars, youth protection, youth engagement and youth empowerment. You can see here that really young people are at the heart and at the front of this new strategy, because we believe that every young person has the right to play online, to learn online, to connect online, to create online, to watch and to listen online, but also to share their content online. As we are talking about a policy on Better Internet for Kids, we also felt important, the European Commission felt very important that this strategy, that this policy is also accessible for the young people themselves, so they understand what is actually written in the policy and what their rights are. So we also developed, I’m sharing it here for the room, a youth-friendly version of the strategy. This was developed with the Big Youth Ambassadors. We worked with them together on this youth-friendly version. They advised us on the language of the youth-friendly version, but also on the visuals, on the colors we would use, because we wanted to make sure that this youth-friendly version is really accessible and makes sense to the young people as well. So I think this is a really true milestone, this strategy, because it not only encourages us, but everyone in the field in Europe, to apply this strategy. 
There are also very clear endorsements for policymakers and also industry, how they should work and co-create with the young people. To finish my part of the presentation, I just brought a very small video that actually shows our Big Youth Ambassadors in action. We onboard them every year. There’s an opportunity for new young people to become part of the group. This happened through the annual Big Youth Panel, which is organized back-to-back with our annual conference, the Safer Internet Forum, where we give young people the possibility to share their views, where we give them a platform to exchange one-on-one with stakeholders. Obviously, there’s a very long preparation process in place as we typically start working and preparing with the young people three months in advance to it online, and then a few days before the conference, they come on site and they start working on the intervention. So this is a little bit of a summary video of what happened last year at the Big Youth Panel. I hope this works. Maybe I can ask for the technician to click to play the video. We’re singing to let you know we’re here We are, we are, we are, we are, we are, we are I think we can stop the video here, just wanted to have you listen to the first minutes of the video, and of the song because it’s really, as I said, really important for us to engage the young people in a meaningful way. So what we typically do, we discuss with them about the skills they have, and you have seen in the video that they come with a lot of talents, and how they want to bring their message across. So last year, the group decided to write a song about a better internet for kids. They rehearsed this, and these young people are coming from different European countries, they start rehearsing this online. And then they performed this at the Safer Internet Forum. 
They also decided to create posters with messages, and here they also wrote these posters in different languages, because it’s obviously the main common language is English, and the language of the conference is English, but it’s also important to promote the national language and bring this across. So I hope the video gave you a little bit of an insight of how we are working with young people. As I said, it’s a very long and intensive process, and we get great support from our national Safer Internet Centers, but really, last year, the Safer Internet Forum really put the young people at the heart of this conference. We worked also with some of them that helped us organizing the conference, that shaped the program with us. So that’s also a bit the philosophy to include the young people from the early stages on, not only when the end product is there and put it in front of them and say, okay, now what is your opinion, please use it. We are involving them really when we are starting to share and create our ideas. So I think this is really important if you want to get active in youth participation to include them from the beginning, and then another important element is also the follow-up process, that after this conference, we continue to stay in touch with them. We try to involve them in other activities, like Joao is here with us today at IGF. This is also important that they don’t feel, okay, I have been invited to this one event, and now the event is over, and nothing happens. So I think involving them from the beginning, but also making sure that you stay connected with them, that you give them future opportunities, that there’s a follow-up for them. I think these are the two key elements when it comes to youth participation. I will stop here, and I think I immediately hand over to my colleague Nils.

Moderator 1:
Yes, thank you so much, Sabrina, for this very precise overview on what the youth participation has been doing in Europe, and now I will give the floor to Nils. Please, he comes from the Safer Internet Centre from Belgium, so the floor is yours.

Niels Van Paemel:
Hello everyone. I'm Niels Van Paemel, and I work as a policy advisor for Child Focus, which is the Belgian foundation for missing and sexually exploited children, but in this context, of course, as the coordinator of the Belgian Safer Internet Centre. Can we go back to the slides, please? Yes, you've seen my photo long enough now. I'm going to focus on how we include children and young people in our day-to-day work, but specifically in our prevention work. It's already in our name, we are Child Focus: we focus on children, and we try to make the internet a safer place for them. So it makes sense that everything we do, both on the operational level and on the prevention and awareness level, includes children in every aspect of it. I'm going to show you a few of the tools that we made together with them, how our co-creation processes happened and what they resulted in. The first thing we did together with children, actually, let's go back a bit in time. A few years ago, just before COVID, we realized that at our helpline, where children, but also parents, teachers, everybody who works with children, can call us when they have questions about making the internet a safer place for children, children were actually not represented strongly enough. We were wondering why, so we did a study, and of course, now it's super obvious, but it took some time for us to realize. First of all, we were a helpline that you could only call, but as we know now in 2023, children and young people don't like to call anymore. They like to chat; they want to use their phones to write words and type them to us. These are things that we realized through focus groups and working with children. And the second thing: the topics that we deal with at Child Focus are not only about your privacy settings; they can also be about cyberbullying.
It can even be about sexual exploitation, when images are being sent without consent, even sextortion or grooming. I will go into that later. But children don't always want to talk with their parents about this, and not even always with professionals like us. So we decided to follow the Danish best practice that was shared within the network and to make a peer-to-peer platform called Cybersquad, because sometimes parents remain an important source of information or help for their children, but sometimes the perspective of another teenager can speak even louder, especially on certain topics that children need their peers for. We created this. It's basically forum-based: you can ask questions and peers can answer. And we really saw that young people stepped up to help each other using their own experience, which made us realize that we really need to put those children in the driver's seat. We can see them as the experts they are in some fields, and especially in the digital field they are, and we should really see them like this. Of course, when things get really serious, there is always the help button where they can be in direct contact with a Child Focus counselor. We are not letting children do psychological counseling with each other. But this is something that we are really proud of, and we're seeing that it works. We created it together with a group of children aged 10 to 18 in order to get their perspective, and it was tested and adjusted based on their views. Two other examples, but first, I want to say that of course we are big believers in co-creation, and we see children as the experts they are. I already said this.
But in the last year, just to give a number, more than 2,500 children and young people were consulted in the creation or testing of e-safety tools or awareness-raising campaigns, resulting in, for example, and I'm going to pick two, Groomics. This is an educational tool that we made to make children and young people aware of and resilient to phenomena such as online grooming. For the people who don't know what grooming is: it's the process where an adult tries to gain the confidence of a minor in order to sexually abuse them later, and the whole grooming process happens online. So as a teacher or a supervisor, you can make young people in your group aware of some of the risks associated with chatting with strangers online, but at the same time, you can also highlight the benefits of online friendships and relationships. And this is the hard part with a phenomenon like grooming: we are really trying to show the internet as the positive place full of chances it is, while at the same time highlighting some risks and creating resilience amongst young people, so that they can detect red flags and become even more of the experts they already are. We have to stop focusing on forbidding things for young people, because that is not a long-term solution. We have to empower them to detect what is not okay and to say no. And the last tool that I wanted to show you is the MAX principle. I already talked about young people helping each other on the Cybersquad platform, but our dream is that every child in the world has an adult, a person of trust. And this is not the case yet.
This does not necessarily have to be a parent; it can also be a teacher, an uncle, somebody from the sports club, just so that young people, when they feel threatened, when something is going wrong in their online world, have somebody to talk to. MAX is a name, and we are putting young people in the driver's seat: they are choosing their MAX. With this whole campaign, if you go to the website, which I will show later, there is a form for young people to fill in, choose their MAX, and send an official invitation to this person, and then he or she becomes the official MAX. So it's about young people choosing for themselves who their person of trust is, in order to create a safer environment for them online. That's it for me, thank you.

Moderator 2:
Thanks, Niels. Yeah, let's then hear from Deborah Vassallo, representing the Foundation for Social Welfare Services, and also the Safer Internet Center Malta. Thank you.

Deborah Vassallo:
Thank you. So I'm Deborah. Our Safer Internet Center is called Be Smart Online. We have the Safer Internet Center, the hotline, the helpline, and obviously also the youth participation. We do different activities, but we try to involve young people as much as possible, especially in the consultation on the resources that we create. We also invite them to sit in on some of the calls that we receive on the helpline, so that they get an idea of what young people are encountering when it comes to online safety. During the focus groups that we do with young people, we also try to discuss different case scenarios with them, in order to give them as much exposure as possible. And some of the activities, if we go to the slide: one of the best practices that we run every year is what we call the Rights for You live-in, in which about 50 participants take part. We go to all the schools in Malta to invite children to come during the summer; it's usually in August, for about three days. This live-in is all about online digital rights, about being digital citizens, about their responsibilities and rights online. During the live-in, they have the opportunity to discuss their online experiences. This year, we also gave them the possibility of meeting a person who creates a lot of content online, and each one of them had the chance to create positive online content, because we believe that it's not only about online risks, but also about encouraging young people to develop online content themselves, which can be seen by other young people and can serve as an example. We also involve them in discussions at a higher level, at ministerial level, because we're part of the Ministry for Social Welfare Services. This way, they can also discuss online issues and how they would like their space to become a safer space.
And obviously, we involve them a lot in the Safer Internet Day activities. We involve them on the outside too, when we go to do information days with the public, but this year we gave them full direction of all the Safer Internet Day activities. They decided to come up with a video, which they created themselves: they asked a group of teenagers about their experiences online and made the video, from the development stage to the production, all by young people. We then disseminated it to all the schools during Safer Internet Day. It was a huge success, because they felt that they were producing something for the Safer Internet Center and that they belonged to it. Thank you.

Moderator 1:
Thank you, Deborah. And now I will turn to Anna from the Polish Safer Internet Center. So Anna, please, the floor is yours.

Anna Rywczyńska:
Hello, everybody. Yes, Poland joined the Safer Internet Program in 2005, so from the very beginning, when the program of the European Commission started. We deliver all three compulsory branches: we have the awareness part, we have the hotline and helpline, and of course we also have our young ambassadors. Maybe we will start with... OK. Yes, our youth panel has existed since 2010. At first it was called the Congress of Young Internauts, but then we asked our young people if they still wanted to be named like this, and they said, no, no, no, we would like to change this name. So I think two years ago we changed the name, and they picked the name Digital Future of Students. That was something they decided to have. We cooperate with a youth panel coming from, I think, 12 schools at the moment, schools from all over the country, from the big cities and also from the little towns.
We organize about three to four meetings yearly, and we consult with them on all our materials: all the campaigns that we plan, all educational resources. We invite them to events, and we even work within the INSAFE network to exchange youth ambassadors. For example, during our recent international conference, we had a youth ambassador from the Czech Republic, and he spoke about what kind of actions young people need from us, and how to promote our actions in the field of internet safety to make them interesting, good and meaningful for young people. They also, of course, take part in the pan-European meetings. Each year we pick our representative, who goes with us to the Safer Internet Forum, and I must say that the last Safer Internet Forum was really impressive. Usually we had some presentations coming from young people, but this time they really participated: when we had the World Cafe discussions, they were sitting at each table and gave fantastic insights. So I think we have really changed our perspective on working with young people; their voice is really heard. Sometimes we also travel with them to spend two or three days somewhere, to really have time for deeper discussion, to give them knowledge so that they can be ambassadors in their regions, but also, most importantly, to hear from them. Another interesting initiative that we did recently: we organized webinars, five of them. The youth panel decided on the topics. Each webinar had a moderator from a well-known Polish radio station that is targeted at young people, so it was moderated by a journalist, together with a representative of the youth panel and an adult expert. All the topics were chosen by the youth panel, and the webinars were widely promoted among all schools in Poland, so they could connect and listen.
The webinars are of course also recorded, so the schools can come back to those topics, which included gaming, relationships and cyberbullying. And another real best practice, I think, is the Digital Youth Forum. We have organized the Digital Youth Forum since 2016, and this is a big conference, a big event for young people, where young people are the speakers: influencers, young social activists. And we have young people as the audience. These are events with from 400 up to even 700 people in the room, depending on the year and the conference hall that we had. It is also organized in hybrid mode, so we invite schools to join the event online, and it's really working very well; I think last time we had about 100,000 kids watching the conference. I will talk about this a bit more later, but we now have lots of children from Ukraine, and we also make the event available for Ukrainians. They can join the conference, they can come to the conference, and even schools in Ukraine joined this event and watched from their country, so of course we provide them with translation into Ukrainian. So this is a conference that is really successful, and I think that's it for now. Thank you.

Moderator 2:
And keeping the rounds going, let's now give the floor to Boris Radanovic from South West Grid for Learning.

Boris Radanovic:
Thank you. Thank you to the moderators as well, and to my dear colleagues; I loved your presentations, and the quote, I think from Sabrina, "with or without you", like the song: I really hope it's with us and not without us. My name is Boris Radanovic. I'm coming from the UK Safer Internet Centre, and I firmly believe that the voice of youth is the voice of the future, and the sooner we start listening, the better the future will be for all of us. We need to find ways to respect those voices, but at the same time find ways for those voices to shine through. And yes, Niels, I absolutely believe they are experts, and yes, Deborah, I do believe that every child deserves a trusted adult, but oftentimes they don't have one. Children do not have a trusted adult, or even better, a trusted, educated adult, and then the question is what to do there. We know from science and from data that children are five times less likely to reach out to a trusted adult when they encounter or are in an unsafe online situation than to talk to their friends. So today I'm here to talk to you about empowering young people to educate their peers about online safety, and the program we developed in the United Kingdom for the UK Safer Internet Centre with our partners ChildNet International. It's a program that champions youth digital citizenship and creativity. It is effective, it's fun, it's an online platform, and at the same time it contributes towards an outstanding whole-school approach to online safety. More than 5,000 digital leaders have been educated so far, and they're active in their school communities, and we have reached thousands more as a result of this peer-to-peer learning model, which we firmly believe is an excellent way of going forward and giving space to those voices of children. The majority of them are 7 to 18 years old.
They have learned how to become a digital leader in their school, up to 15 children in each school, through gamified and interactive online experiences, learning modules and group work. The program enhances not only online safety but also the understanding of online safety issues, and the advocacy and power of young people to make the changes that we all so sorely need in the online space, to create a better and safer internet for us all. And in the end, one thing that we all must work towards is instilling confidence in children not to be passive bystanders but to be active upstanders on many of the issues that we see online. To finish, I want to invite you all as well: our partners ChildNet International have a film competition for all the youth around the world, where they can advocate, engage in activities, and create a film to submit to one of the biggest international film competitions. And I think it's important to note that INSAFE also has a booth, so please do visit us there so we can continue talking about this. The voices of children are the voices of the future, and we need to start listening to them now. Thank you very much.

Moderator 2:
Well, I think we should open the floor after this brief introduction of different initiatives. Most of all, we want to make the conversation a global one. I already had the opportunity to meet someone in the booth area today. I don't want to put you on the spot, but if you could share with us a little bit about your initiative, that would be very nice, maybe linking in to what we discussed here. Of course, the INSAFE network is mostly focused on the European geography, but it's good to reflect that we're also engaged with other geographic regions. So the idea here, and please introduce yourself and the initiative, is to seize the opportunity that the IGF provides for this exchange of best practices.

Audience:
Thank you very much. My name is Brigitte Chalibian. I'm a lawyer and the director of Justice Without Frontiers, coming from Lebanon, and I think the only Arab organization participating in the IGF, or one of two Arab organizations. Well, in Lebanon, since 2019 we have been going through the economic crisis, and the rate of domestic violence is increasing, including cybercrime. A recent report issued by the Internal Security Forces mentioned that cybercrime has increased by more than 100%. And last year, we started receiving calls from schools asking us to provide awareness sessions for students about bullying and about violence, because, as I mentioned, bullying among students is increasing, and students are using different methods. So last year we provided awareness sessions for 1,700 students. It was a good number for us, and we were able to reach private schools; this year, we will start reaching the public schools. As well, regarding cybercrime, we provided legal consultation and representation for girls who were sexually harassed on online platforms, and we had good cooperation from the ISF, because we have a hotline, but not all people are aware of it. For this year, we are trying to organize ourselves more, as we found the need for that. So we are working on a cyberbullying policy paper. It will be done by next month, when we will hold a roundtable to discuss it, involving the key stakeholders. At a later stage, we will select four or five schools in Lebanon to conduct a pilot project with them: we will have a bullying policy paper, and we will have a program to work with students, teachers and the administration. And at a later stage, we will be working on cyberbullying. The issue that I want to highlight, and the importance of my presence here, is that I am learning from you and getting tools.
Because when we started going to schools, we didn't have any material in the Arabic language for students about cybersecurity, about bullying, and so on. So I was going through YouTube to find some, because I cannot go to children with a PowerPoint; I need to show them films, and it depends on their age. We play games with them, and through YouTube we found some short films talking about cybercrime and what to do. And the best thing that we did was at one of the biggest schools in Lebanon; they have more than 3,000 students. In their football stadium, we put up an advertisement for one year. It focuses on bullying among students only: what bullying is, how to report it, and what the role of each student is. We stated that this is a crime, and, for when there are cases, gave the police number, and it will stay up for one year. We made it in the Arabic language, and we received feedback from family committees in the school asking to have it in English as well, and to have one such poster in each class or section. So this is a short summary about Lebanon.

Moderator 2:
Thank you, Brigitte, very much. And Brigitte told me that she's a newcomer to the IGF space, so it's really nice to have that feedback and to have you included from day zero in the discussion. So thank you for that. May I ask if we have questions coming from online?

Audience:
No questions yet, but there is one comment worth mentioning: a new initiative, the Dynamic Teen Coalition, has just started within the IGF, and I think this may be a very good opportunity for young people to join and take part in shaping the landscape of Internet governance. Thank you.

Moderator 1:
So, thank you for your contributions. I would like to go back to the panel, and we have a question for them: how do you ensure your youth participation activities are inclusive and accessible to vulnerable groups? We'll start with Boris.

Boris Radanovic:
Easy question.

Moderator 1:
Please, go ahead.

Boris Radanovic:
We know a lot, but we're still missing a lot of data. From the data that we have, we know that the children most at risk in physical life are most at risk in digital life. That's what we knew, but we didn't know how much: it's three to seven times more likely. If you have any accessibility or disability issues, or any special educational needs, you're three to seven times, or even more, likely to be abused online. So the challenge is not just creating safe spaces for children, but having them accessible and ready to go for the children, youth and adults who fall into those minority categories; your adversities and diversities put you at a disadvantage relative to all of us online, because the spaces we create online are not safe enough for all of us to be in. So we need to build more inclusive spaces online, and we need to think about much wider groups of people than whatever group we are creating the app or platform for, especially looking at vulnerabilities, abilities, disabilities and many of those special categories of minority groups, so we can create those spaces better for them. And we cannot create those spaces in isolation, working just by ourselves; we know from the online safety space and the protection of children that the multi-stakeholder approach is the only way to go, especially through education. So from my perspective I would advise working with the experts in the field, with those minority, ability and disability groups, to understand their perspectives and their needs, and then create the spaces around that, not the other way around. They have the knowledge on the ground, so we can educate the educators who already know the specific needs and necessities of those particular groups who are sensitive to many of these things.
Then we can work with them to develop online spaces that recognize differences but allow us to adapt to them as well, and create a better and safer environment for us all, but especially for the categories of children and young people who are most disadvantaged online. Thank you.

Moderator 1:
Thank you. Deborah? I will follow up. Thank you.

Deborah Vassallo:
So, from our Safer Internet Center we mainly target children who are in care, mainly in residential care and foster care. The agency I come from is the Foundation for Social Welfare Services, so it has the fostering section incorporated in it, and these children come from a disadvantaged situation at home. We found that they were especially vulnerable to online connections, because they would be chatting to people to replace the love they are missing, and they want to talk about their situation to someone out there who would understand them. That also puts them in a vulnerable position. So we try to engage them as much as possible in our programs. We invite them to the Rights for You live-ins, and we also go to their residential homes, where we do practical sessions, even about privacy and about who they're chatting to. And we give them something like an express helpline route, in a way, so that they can call us directly if they experience some form of online harm, and we can help them immediately.

Moderator 1:
Thank you. Anna?

Anna Rywczyńska:
Yes. As the Polish Safer Internet Center we try to support all underserved groups; however, I would now like to focus on refugees, because, as we all know, from 24 February 2022, when Russia attacked Ukraine, the Polish border was crossed by 16 million refugees. Many of them have already gone home, if they could; many moved to other countries, but we still have about 1 million people from Ukraine living in Poland, mostly women and children, and about 200,000 kids attending Polish schools. So what we could do as the Polish Safer Internet Center was, of course, to include them in our actions and give them support in the field of internet safety. As I said before, we translate all our events simultaneously, and we also enable remote connection to our events. We translate our educational resources into Ukrainian and promote them in schools as well. And we did something to really give support: we hired Ukrainian-speaking consultants for the helpline, psychologists with the Ukrainian language, to be able to respond to reports from children from Ukraine. Our hotline is also better prepared to work on those reports. The Polish Safer Internet Center consists of two organizations: the National Research Institute, NASK, which I represent, and the NGO Empowering Children Foundation. At NASK we also have a special department fighting disinformation and fake news, and they deal a lot with disinformation related to the war. So this is a big part of our everyday actions. Thank you.

Moderator 1:
And now, Niels, to finish.

Niels Van Paemel:
Yeah, a lot has already been said, so I'll try to focus on a different group. We did a check of our website a few years ago, and we found out that unfortunately it's not inclusive enough for everyone. I'm going to talk about children on the spectrum, autism spectrum disorder. This is a group of children who, first of all, need easier language. Our website was also way too flashy in the resources part, where they can find information; so it's not only about the words and making sure your content is inclusive, it's also about the way you present it. And secondly, when we're talking about e-safety for children and empowering them, we're talking about a group who has a lot of difficulty with that, for example with discovering red flags or trusting their gut feeling. People on the spectrum have a harder time detecting red flags and a harder time trusting their gut feeling. So we felt that we really needed to make something specific for this group, and it turned out to be the Star Plus tool, co-created with children on the spectrum. And it's really beautiful; you should check it out, Star Plus from Child Focus. It's already being translated: initially it was made in Flemish, so Dutch, there's a French version, and now we're trying to translate it into different languages, thanks to the INSAFE network. Just to conclude: why is it so beautiful that we are all here from different countries? We don't need to reinvent the wheel over and over again. If Boris makes a beautiful tool in the UK, we can take it into the Belgian context and, of course, localize it. We can learn so much from each other, and that's a really beautiful thing here. Thank you.

Moderator 2:
Before we wrap up, I just want to make sure that if anyone here in the audience or online has a point or question they want to raise… Perfect. It's a great opportunity. If you could stand, say your name and your affiliation.

Audience:
Hello. Is this working? Yes. Hi, my name is Amy Crocker, from ECPAT International, which is a global network of organizations fighting the sexual exploitation of children. Thank you so much for your presentations. This is probably something you deal with often, but one of the things I find interesting about working with young people, particularly when you have things like youth platforms, is that sometimes you have youth who are very empowered and used to doing this. How do you reach the kids who are too shy, too reticent to come forward? Because obviously you're building the future ambassadors, but you want to make sure that you're also going horizontally. So I'd love to hear a little bit about that.

Sabrina Vorbau:
I can go first, then Boris, and then Shibin. It's a good question, a valuable question. Of course, many of the young people that we work with at European level, as I said, have previously been working with the colleagues here at the national level. So they're already kind of experts: they have good knowledge of English, and they know how to articulate themselves. But what I always find very impressive with these young people is that they're so aware of the responsibilities they have when they come back from these meetings, of what they have to do at home, in their local community, with their families, with their schools. To give a very concrete example, we recently invited two of our youth ambassadors to come to a European Commission event in Brussels. While we were waiting, sitting in the audience, I was with one of the youth ambassadors from Malta. It was a hybrid event, streamed online, and the youth ambassador told me: actually, I shared the link with my classmates. And I said, oh, wow, so they're going to watch you, and maybe they'll get interested and would also like to do this in the future. And he said, well, actually my best friend is really shy; I don't think he would ever do this, but it's not a problem, because I'm here today and I also represent his concerns. So I think these young people have a great sense of responsibility. They come back from these meetings, they go into their community, they share what they're doing, and this way they make sure that the voices of those who are maybe more introverted, and that's perfectly fine, not everyone needs to be on stage all the time, are included as well. Thank you.

Moderator 2:
I think it’s Boris. One minute for each?

Boris Radanovic:
I’ll be quick.

Moderator 2:
Me too.

Niels Van Paemel:
Split the minute.

Moderator 2:
Split the minute.

Boris Radanovic:
Okay, 30 seconds, Niels. I think what Sabrina said is important, and I absolutely agree, but I think that's the power of peer-to-peer-led programs: you can educate the leaders, but the leaders need to educate the others and create those cohorts of young people who support them. I would also emphasize, as Sabrina said, using tech that enables exactly that: equal participation, anonymous participation, participation where you don't need to stand in front of a camera. You can be the director, you can be the screenplay writer, you can be some other part of that environment, which enables everybody to take part. You, as an educator, as a facilitator, need to make sure that all the voices are included as much as possible. But in the end, children are remarkable at having that broad appeal. Is that 30 seconds?

Moderator 2:
Something like that. That’s perfect.

Niels Van Paemel:
Just to add, I really like this question because it’s an everyday struggle, to be honest. And I’m not only talking about shy kids, but just in general: if we just reach out, just throw out some lines through the audience, we will always have the same public. We will have privileged people, and you create the Matthew effect, like you are just preaching to the choir. You need to be aware, on a day-to-day basis, that there are people out there that you don’t reach if you don’t do your darn best. So I really like this question. This is something that we should all be aware of. Reach out.

Moderator 1:
Okay. To finish, Anna?

Anna Rywczyńska:
No, I just want to say that when we organize competitions, before they go to this big pan-European youth panel meeting, they send us videos, and they never start with, like, for me, what is important is. I think they always say, my friends, my generation. So I think they really feel that they are the voices of their generation. Yes, they are represented.

Moderator 2:
So I’ve been asked to wrap up and to bridge this session a little bit with the rest of the IGF, and with the youth work that will be conducted throughout the sessions. I think we can wrap up by saying that, with or without you, children and young people help each other, but they need to have someone to talk to. They most often are engaging in these initiatives that we heard being consulted on, and sometimes they also feel that it’s easier to talk to each other than to an adult. The important point, also addressing this last question, is that each young person feels supported in the process, whether shy or a leader, and engaged at different levels. For instance, here at the IGF, we’ll be having the youth summit from the start of the afternoon. The youth summit, if you want to know, was a process throughout the year, organizing different regional workshops, and the one in Europe was about nurturing digital well-being, addressing the impact of the digital environment on youth mental health, which could also have been one of the topics raised from the last question that we put to the participants here. So I would say that wrapping up is about understanding the different roles.
We provided great examples, and I bring the Insafe structure example and how it works, and give it as a best practice also in other forums, because I do believe, and Sphia kind of introduced me a little bit like that in the beginning: I did start in youth participation, youth engagement and youth awareness through one such program, as part of the consultative youth program for the Safer Internet Centre of Portugal. Throughout the years, and Sphia was making a joke about how I keep my skin still very intact, although 26 already, and it’s been a while, but hopefully we’re making positive change, influencing even the more shy, and, as so many very well said, making sure that the voices of the shy are also heard by us. Thank you very much. And we are on time. Yes. Thank you so much for your presence, and I think that we can move to the next workshop. So enjoy IGF. Thank you.

Moderator 2

Speech speed

140 words per minute

Speech length

754 words

Speech time

324 secs

Anna Rywczyńska

Speech speed

162 words per minute

Speech length

372 words

Speech time

138 secs

Audience

Speech speed

164 words per minute

Speech length

815 words

Speech time

299 secs

Boris Radanovic

Speech speed

201 words per minute

Speech length

1193 words

Speech time

356 secs

Deborah Vassallo

Speech speed

165 words per minute

Speech length

1688 words

Speech time

614 secs

Moderator 1

Speech speed

127 words per minute

Speech length

736 words

Speech time

349 secs

Niels Van Paemel

Speech speed

180 words per minute

Speech length

1736 words

Speech time

580 secs

Sabrina Vorbau

Speech speed

154 words per minute

Speech length

1762 words

Speech time

688 secs

On how to procure/purchase secure by design ICT | IGF 2023 Day 0 Event #23

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

David Huberman

The analysis addresses various topics related to internet standards, security, and vulnerabilities. It starts by highlighting that the Border Gateway Protocol (BGP) and Domain Name System (DNS) are outdated protocols that were not originally designed with security in mind. However, efforts have been made over the past two decades by the Internet Engineering Task Force to enhance the security of these protocols.

The analysis emphasises that enhancements such as Resource Public Key Infrastructure (RPKI) and Domain Name System Security Extensions (DNSSEC) have proven effective in improving internet security. RPKI enables providers to authenticate the origins of routing information, protecting against malicious route hijacking, misconfigurations, and IP spoofing. DNSSEC ensures the integrity of DNS queries, ensuring users receive the intended data from website administrators.
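The route origin check that RPKI enables can be made concrete: a relying party compares each announced route against its set of validated ROAs and classifies it as valid, invalid, or not-found. A minimal sketch of that RFC 6811 classification logic in Python (the ROA table, prefix, and AS numbers are illustrative documentation values, not real announcements):

```python
from ipaddress import ip_network

# Validated ROA table: each entry binds a prefix (with a maximum allowed
# length) to the AS number authorized to originate it. Values below are
# documentation prefixes/ASNs (RFC 5737 / RFC 5398), purely illustrative.
ROAS = [
    (ip_network("192.0.2.0/24"), 24, 64496),
]

def validate_route(prefix: str, origin_as: int) -> str:
    """Classify an announced route per the RFC 6811 outcome states:
    'valid', 'invalid', or 'not-found'."""
    announced = ip_network(prefix)
    covered = False
    for roa_prefix, max_length, roa_as in ROAS:
        # A ROA covers the route if the announced prefix lies within it.
        if announced.version == roa_prefix.version and announced.subnet_of(roa_prefix):
            covered = True
            # Valid only if the announcement is no more specific than
            # maxLength AND originated by the authorized AS.
            if announced.prefixlen <= max_length and origin_as == roa_as:
                return "valid"
    return "invalid" if covered else "not-found"
```

With this table, the authorized origin yields "valid", the same prefix announced by any other AS (a hijack or misconfiguration) yields "invalid", and a prefix no ROA covers is "not-found", which is exactly why broader ROA publication matters.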

While RPKI and DNSSEC have shown promise, there is a need to increase their adoption. Currently, DNSSEC is only used by approximately 25% of all domain names, while RPKI has greater deployment, particularly among Internet Service Providers (ISPs). However, broader deployment of RPKI throughout the global routing system is necessary to enhance overall internet security.

The analysis also underscores the importance of stakeholder involvement in internet standards development. Governments and civil societies should actively participate in shaping these standards to meet the demands of a global scale in 2023. The Dutch government’s integration of internet standards with public policy, as demonstrated by their internet.nl website, is commendable.

Furthermore, the analysis highlights the significance of understanding internet vulnerabilities. Regardless of a country’s geopolitical situation, vulnerabilities present opportunities for exploitation for personal gain or the creation of chaos. Therefore, it is crucial for every country, regardless of size or peacefulness, to comprehend these vulnerabilities and implement regulations to minimize risks and secure their networks.

Overall, the analysis concludes that enhancing internet security and promoting the adoption of standards are essential for protecting users and ensuring the stability of the global internet ecosystem. Recognition by the United States government of routing as a matter of national security signifies their commitment to adopting RPKI and verifying Route Origin Authorizations (ROAs). By implementing measures to understand and address internet vulnerabilities, the digital landscape can be better safeguarded, ensuring the safety and stability of the global internet ecosystem.

Annemiek ( Dutch government)

The Dutch government has implemented a comply or explain list for ICT services, which includes approximately 40 open standards, including specific standards such as internet safety. Maintainers suggest which standards should be included in the list. Although the use of these standards is mandatory, there are no penalties for non-compliance; instead, they are recommended for use in services.

To ensure the adoption of these standards, the government actively monitors their utilization, particularly in procurement. Monitoring reports are submitted to the Ministry of Internal Affairs and then to Parliament. Tenders that fail to use any listed standard need to explain why in their annual report. The internet.nl tool is used for monitoring purposes.
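As an illustration of how such monitoring rounds could be automated, internet.nl also offers a batch API that dashboards can drive. The endpoint and payload fields below are assumptions for illustration only (the real contract is in the internet.nl API documentation), and the sketch only assembles a request body rather than sending it:

```python
import json

# Hypothetical batch-test request for one monitoring round. The endpoint
# and field names are assumptions; consult the internet.nl API
# documentation for the real contract.
API_URL = "https://batch.internet.nl/api/batch/v2/requests"  # assumed

def build_monitoring_request(domains, test_type="web"):
    """Assemble the JSON body for a batch scan ("web" or "mail") over a
    set of domains and return it serialized, ready to POST."""
    payload = {
        "type": test_type,
        "name": "procurement-monitoring-round",  # free-form label (assumed)
        "domains": sorted(set(domains)),         # deduplicate, stable order
    }
    return json.dumps(payload)

body = build_monitoring_request(["example.nl", "example.org"])
```

A real monitoring run would POST this body with API credentials, poll for results, and feed the per-domain scores into the twice-yearly reports described above.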

Furthermore, the government encourages community collaboration and engages with major suppliers. The internet.nl tool is used to facilitate discussions, and Microsoft, one of the suppliers, has shown openness to discussions and plans to implement the open standard Dane in their email servers.

Regarding accessibility, the Dutch government is developing dashboards that integrate internet.nl for accessibility purposes. This demonstrates the government’s commitment to digital accessibility and reducing inequalities.

Additionally, the Dutch government advocates for the adoption of their developed dashboard system for digital accessibility, encouraging other countries to follow their lead.

The government also promotes the use of IPv6, recognizing the societal benefits it can bring.

Rather than enforcing rules, the government emphasizes practical application and experience in the field. They adopt a stimulating approach to encourage the use of standards.

In procurement processes, the Dutch government uses a special tender website that supports open standards. This website includes CPV codes to support the procurement of goods and services adhering to these standards.

Raising awareness and understanding among procurement departments about the technical aspects of standards is considered crucial. The procurement department needs to communicate with the architecture or other employees to understand the technical details, as most procurement departments lack technical knowledge.

The government also advocates for online security through the implementation of open standards. They employ the internet.nl tool to verify the application of open standards in organizations.

To incentivize organizations to use open standards, the Dutch government operates a Hall of Fame. Organizations that score 100% on internet.nl, fully implementing all open standards, are rewarded with t-shirts.

In conclusion, the Dutch government has taken significant steps to promote the adoption of open standards in various sectors. Their comply or explain list, monitoring initiatives, community collaboration, and emphasis on practical application demonstrate their commitment to industry innovation and infrastructure. By developing dashboards for accessibility, encouraging other countries to follow their lead, and supporting online security, the Dutch government strives for a more inclusive and secure digital environment.

Mark Carvell

Procurement and supply chain management play a crucial role in driving the adoption of essential security-related standards. These standards are vital for safeguarding businesses, organizations, and individuals against security breaches and ensuring the integrity and confidentiality of sensitive information. By implementing these standards in the procurement and supply chain processes, secure practices are maintained throughout the entire supply chain, from sourcing raw materials to delivering the final product.

The IS3C (Internet Standards, Security and Safety Coalition) is instrumental in gathering valuable information on procurement and supply chain management. This knowledge base provides insights and best practices that can be used to enhance security standards. However, it is recommended to incorporate experiences from other countries to widen the scope and applicability of this resource, ultimately creating a more comprehensive and globally relevant framework.

Unfortunately, incidents of security failures in the UK, such as data breaches affecting police forces and financial services, are reported without appropriate follow-up remedial measures. These security failures can have severe consequences, including compromised data, financial losses, and potential risks to national security. To address this issue, it is imperative to establish a consistent reporting mechanism that accurately documents the consequences of security failures. Furthermore, remedial measures should be widely distributed through channels like the IS3C, enabling organizations to learn from past mistakes and take proactive measures to prevent similar breaches.

In conclusion, procurement and supply chain management are integral to adopting critical security-related standards. The IS3C’s efforts to collate valuable information are commendable, but it is necessary to expand this resource by incorporating experiences from other countries. Additionally, addressing the lack of follow-up remedial measures in response to security failures is crucial. By implementing proper reporting and distribution mechanisms through channels like the IS3C, security practices can be enhanced, and future breaches can be mitigated.

Audience

The analysis of the provided data reveals several key points regarding the government’s adoption of digital standards and the implementation of cybersecurity measures in Canada and the Netherlands, as well as the importance of accessibility in public procurement.

One of the main findings is that the Canadian government lacks a consistent standard in procurement, leading to negative sentiment among the audience. The absence of a single set of standards used by the same government department is a significant drawback. Additionally, each province in Canada has its own legislation governing privacy and digital communications, further contributing to inconsistency in the procurement process.

On the other hand, there is a potential for legislative change in Canada, as Senator Colin Deacon has been spearheading various legislative frameworks. This development is viewed positively by the audience, as it could lead to improved standards and practices in digital adoption within the government.

In terms of accessibility, it is noteworthy that public procurement for accessible digital goods and services is being adopted by several countries, with a focus on the inclusion of persons with disabilities. Standards for accessibility procurement exist in the US and EU and have been adopted by countries like Australia, India, and Kenya. However, it is highlighted that despite the existence of a monitoring system for web accessibility in government procurement in Australia, lack of funding led to its discontinuation.

Turning to the Netherlands, the Dutch government is actively developing dashboards such as internet.nl for accessibility purposes. This initiative is positively received, as it demonstrates their commitment to improving accessibility for digital products and services.

In terms of cybersecurity, efforts are being made by the RIPE NCC and ICANN to improve the adoption of techniques like DNSSEC (Domain Name System Security Extensions) and RPKI (Resource Public Key Infrastructure). The audience appreciates the efforts made by Dutch public authorities and the government in terms of standardization and cybersecurity.

Education and stakeholder involvement are highlighted as crucial factors in ensuring cybersecurity. Both civil society organizations and personal training have proven to be effective in fostering informed discourse and persuading operators to adopt security measures. Additionally, the analysis highlights the importance of proactive implementation of internet security measures to prevent potential digital mishaps that could be costly in terms of time, money, and stress.

The analysis also notes the challenges faced by telecom carriers in implementing new security technologies due to cost and scaling requirements. However, it suggests that subscription services could provide a steady source of funding for ongoing security improvements.

The lack of skilled engineers is identified as another challenge in the implementation of cybersecurity measures, as highlighted by the example of Kyoto University being unable to implement certain security measures due to a shortage of skilled personnel.

Overall, the analysis underscores the importance of consistent standards in procurement, legislative change, accessibility, and cybersecurity in the digital governance efforts of Canada and the Netherlands. It also recognizes the positive steps taken by various stakeholders, such as Senator Colin Deacon and the Dutch government, in driving these initiatives forward.

Wout de Natris

In the realm of cybersecurity, there is a pressing need for a stronger focus on prevention. This can be achieved through secure-by-design procurement and the implementation of internet standards. Initially, internet standards were created by the technical community without prioritizing security, resulting in vulnerabilities. Most procurement documents do not mention security, and when they do, they fail to specifically address cybersecurity or internet standards. Governments can help improve cybersecurity by demanding the incorporation of these standards when procuring software or services. By doing so, they can significantly enhance cybersecurity measures and reduce the risk of cyber attacks.

The DNSSEC and RPKI Deployment working group aims to bring about a transformation in the cybersecurity conversation. This initiative represents a shift towards prevention and secure-by-design concepts, moving away from solely focusing on mitigation efforts. The goal of the working group is to translate ideas into tangible actions, aligning with the objectives set by the UN Secretary-General.

Securing the public core of the internet is of paramount importance. This includes both the physical infrastructure and the standards guiding its operation. Currently, there is a lack of recognition and implementation of these standards, leaving the system vulnerable to cyber attacks.

While tools and standards to enhance security have been available for a significant amount of time, their adoption has been sluggish. This highlights the need for greater awareness and proactive efforts in implementing these measures.

The existing state of the internet infrastructure can be seen as a market failure, necessitating governmental or legislative intervention. The internet infrastructure is outdated and requires improvements. The European Union has already implemented significant legislation affecting both the infrastructure and the ecosystem.

Wout de Natris, an active participant in the discussion, advocates for a change in the narrative surrounding cybersecurity. He emphasizes the importance of persuading people in leadership positions to prioritize cybersecurity from the outset, rather than treating it as an afterthought.

The protection of the routing infrastructure is now considered critical for national security by the United States government. Government officials are actively discussing routing security principles and the use of technical terms such as RPKI (Resource Public Key Infrastructure) and ROA (Route Origin Authorization). There is an intention to leverage government power to enforce regulations and ensure the secure functioning of the routing infrastructure.

If organizations do not voluntarily adopt internet security standards, the government may impose regulations on them. This highlights the importance of voluntary adoption and proactive implementation of security measures within organizations.

Government procurement is identified as a potential game-changer for the deployment of secure internet design standards. De Natris suggests that if regulation is avoided, procurement can be a powerful tool for promoting security standards.

IS3C aims to hand over knowledge and tools for cybersecurity, with the responsibility of implementation and deployment lying with respective countries. Their work has broader implications beyond internet standards, as they also focus on emerging technologies such as artificial intelligence, quantum computing, and the metaverse. This broad scope aligns with the Sustainable Development Goals and the need for global cooperation.

In the area of Internet of Things (IoT) security, there is an opportunity to enhance security through the adoption of open standards. The existing policies on IoT security across the globe are predominantly voluntary, necessitating a global transformation in the regulatory framework.

In conclusion, a shift towards prevention in cybersecurity and the prioritization of secure-by-design procurement are imperative. The implementation of internet standards and the proactive adoption of security measures are crucial to mitigate vulnerabilities. Governmental involvement, through procurement and regulatory intervention, is necessary to promote and enforce these measures. The work of initiatives like the DNSSEC and RPKI Deployment group and IS3C contribute to transforming the narrative and addressing emerging challenges in cybersecurity.

Session transcript

Wout de Natris:
The light was on when it was off, so it’s on. And now the light is off again. Okay, so it’s on. Welcome to this session, IS3C’s presentation on procurement, government procurement and supply chain management. What you saw online almost doesn’t exist anymore in the invitation, because the people there are not here and not able to be present online. So we changed it around a little bit, but not a lot. My name is Wout de Natris and I am the coordinator of the IGF Dynamic Coalition on Internet Standards, Security and Safety, IS3C. We’ve been around since the virtual IGF of 2020. We announced an idea there, and then we had to find people, funding, et cetera, to start working. On my left side is our senior policy advisor, Mark Carvell. He is our online moderator today. And on my right side is David Huberman of ICANN, and together with Basia Gosselin, sitting there, they are the chair and vice chair of our working group on DNSSEC and RPKI deployment; why that is relevant we’ll explain in a little while. When we got to 2021 in Katowice, we were able to present a real plan. We introduced three working groups: one on IoT security by design, one on procurement and supply chain management, and one on education and skills. In 2022 in Addis, we presented our first report by the education and skills working group, which is having their session right now, which I had to run out of to moderate two sessions at the same time, which as you see is impossible. Luckily, the chair of education and skills, Janice Richardson, took over. But you can see what we as IS3C try to do: we don’t try just to produce a report or an idea, we want to translate it into actions and something tangible. And that is in line with what the Secretary-General of the UN is striving for: to have the IGF come up with tangible outcomes. In that room right now, room E, the idea of a cybersecurity hub is introduced. It’s again a concept, an idea; it doesn’t exist yet.
But the idea is, in that hub, to translate the outcomes of the education and skills working group into something that universities and industry together could start working on, to make sure that the knowledge gap between supply and demand in education is somehow solved. So that is what is happening today here as well at the IGF. We are here for a different reason. And I’ll put on my glasses to be able to read a little bit. This is on procurement. And why procurement? In the first place because, in our opinion, when we talk about cybersecurity, you always see that it’s about mitigation. It’s always about having to buy antivirus, something to have a firewall installed, to have a cybersecurity institute or CSIRT or CERT as a government or a big organization. But that is all after you got into trouble. The internet runs on internet standards created by the technical community. And these standards were created somewhere in the late 1960s up to, let’s say, 2000. In that time period it was not necessary to think about security. And that’s literally what Vint Cerf says nowadays: if we had known where this beast would go, based on something they created in 1972, then we would have done it differently. Well, we discussed it for about two seconds and then thought, well, we know everybody who’s connected, so why do we need security? And 20 years later, slowly but surely, the whole world started to come online. And we all know what sort of problems we ran into. But David will tell you about that more eloquently than I ever could, in English and with the knowledge he has compared to mine. So why procurement? If governments would start procuring secure by design, it would mean that they would demand these internet standards to be in place when they buy software; that it’s tested software, and not something that just comes off a shelf without knowing what it is.
When you create a website, you build it according to the latest standards of security, and not something that has holes in it all over. So on procurement: we started this working group, the idea, in 2020. To my surprise, we found that there was very little interest; we couldn’t find the funding, we couldn’t find the people, except one person who said, I want to share this. And she wrote a whole program, and we had to wait for two years until we found funding from the RIPE Community Fund. That research started this year in January, February, and we’re able to present our report here at the IGF on Tuesday. The research was done by Mallory Nodal and Lisa Rembo, who were supposed to be here but could not because they are traveling. But what this research shows is that there are very few publicly available procurement documents online. I think they found 11 from 11 countries. We asked around; they found three more. We could not find anything in the private sector. And we asked thousands of people: could you share, even if it’s anonymized, something with us so that we can do a global study? So if it’s not there, and that’s the caveat we have to make, is it because there is nothing, or is it because it’s behind bars somewhere where nobody is allowed to look at it? But the ones that we could study show an awful lot. Most of them don’t mention security in any way. When they mention security, it’s not cybersecurity. And if it’s cybersecurity, then it’s not about these sorts of standards, but on the mitigation side and not on the prevention side. We found one exception, here in the room: that is Annemiek there; the Dutch government has a procurement list of, I think, 46 or 43 standards that are all but mandatory to deploy. And that’s the only one that we could find in the whole world.
The Dutch government also developed something called internet.nl, which allows you to check whether your organization, or another organization, has security in place for the domain name, for routing, et cetera. So that is the situation. The report will be presented on Tuesday in our own dynamic coalition session, and again on Thursday in an open forum with the Forum Standardisation from the Netherlands. So we have found very few documents on government procurement and none from the private sector. Can we draw very firm conclusions? The answer is no, because perhaps there is a lot more in the world; we are just not able to access it. But can we draw some conclusions anyway? I think the answer is yes, we can. Because it’s quite obvious, as I said, that internet standards are not recognised. And I think that is important to understand. The internet runs on internet standards. And if we talk about the public core of the internet, and defending the public core of the internet, then it’s not only about the physical cables, but also about the standards that make that internet run. And if they’re allowed to be attacked 24 hours a day by everybody who feels like attacking, abusing and misusing them, then the question is: why are governments not recognising these open standards in one way or another? So I think that is an important conclusion: despite discussions on defending the public core, the public core is not recognised for what it is. So how do we make sure that happens? There is, in other words, a world of security to win for everybody. But we have to stop talking about mitigation only. We have to start talking about prevention. And the fastest way to do that, in our opinion, is procurement.
But then the next step is: how do we convince people in decision-taking positions to actually procure secure by design, and, at the moment a contract comes up for renegotiation, to bring in these sorts of standards? That is one of the working groups that we are starting this year, and it is called DNSSEC and RPKI Deployment. In the end it goes for everything on internet standards, but we took these two examples. And we don’t start talking about it in the way it has been spoken about for some 20 years; we are going to try to change the narrative. I will give the microphone to David in a few minutes. I’ve said about everything that I wanted to say about this topic. What we’d like to learn from you in this session is: what is your experience? What are your ideas? How could you contribute to this discussion? And from there, see what we can go home with. As I said, we’re introducing this concept of a cybersecurity hub. How can we actually activate it? How can we make sure the right people from the different stakeholder groups start working together there and come up with tangible ideas that are translatable either into direct programs or into capacity-building programs, or whatever we like to call it? The fact is that something needs to change, because the discussion has been running in the same direction for a long, long time without very noticeable changes. So how do we convince decision-takers to procure secure by design? I think that is the starting point of our working group on DNSSEC and RPKI deployment. So, David, I’m going to hand over to you right now to explain what your plans are.

David Huberman:
Thank you, Wout. Good afternoon, everybody. My name is David Huberman, and I’m with ICANN. One of the things we’re trying to get people to understand is that when they pick up their device and watch a TikTok video, or send an email, or are involved in a group chat on WhatsApp or Line or just SMS, a lot of people think, well, that’s the internet. But it’s not. Those are applications on the internet. In fact, they run on a whole system of routers and servers and switches and firewalls, all of which are the underlying framework on which we use these applications. This framework, however, is built on common protocols, and there are two protocols that stand as the foundation for almost all of the modern Internet. One of them is called BGP, Border Gateway Protocol, and it’s the system of routing, how networks talk to each other. The other is the DNS, the Domain Name System, which is used as a backbone of communication so that we can use semantic names that we as humans understand and translate them into the IP addresses that computers understand. Now, as Wout noted, quoting Vint Cerf, BGP and DNS are very old protocols. BGP, the version we’re using now, was standardized in 1995, and DNS is even older than that: it comes from November 1983 and is going to turn 40 years old next month. When we developed these protocols, the intention was to get them just to work. We just wanted to be able to push packets to and from networks. Did you get it? Did it come back? Yay, it worked. And what’s nice about these protocols is that they scale. The reason we’re still using them 30 and 40 years later is because they scale infinitely. But as Wout noted, they weren’t built with security in mind whatsoever. So over the last 20 years, what the IETF, the Internet Engineering Task Force, has been doing is redeveloping these, bolting on security.
And for BGP, one of the primary drivers of routing security today is a new system called RPKI, Resource Public Key Infrastructure. Essentially, it allows providers to talk to each other but authenticate the origins of routing information. This has a lot of benefits. It protects us against malicious hijacks of routes. It protects against accidental misconfigurations. And hopefully it can help prevent IP spoofing and other things that attackers use to do bad things. DNS has a similar suite of security tools that we call DNSSEC. And DNSSEC is very important because when you go to a website, when you go to www.un.org, we want to ensure that the data that you receive back is actually the data that the people who run un.org intended you to have. So DNSSEC allows you to assure the integrity of the DNS queries and the data that you’re receiving back. These security enhancements to these two fundamental protocols can significantly increase the security posture of the entire Internet ecosystem for all users in the world. Yet the adoption of DNSSEC stands at about 25% of all domain names. And RPKI, while it enjoys much fuller deployment, especially among the ISPs around the world, we’re still working on increasing its deployment to all of the networks that participate in the global routing system, to get them to digitally sign their routes so that everybody else can validate them. So how do we increase penetration? How do we increase this deployment? This is what one of the newest working groups of IS3C is working on. We’ve put together a panel of world-class experts, and we are developing a new narrative that we’re going to test against decision-makers at ISPs, decision-makers in public policy, and decision-makers in network operations, to help motivate them to increase the deployment of these very secure protocols, which in 2023 ought to be a baseline standard that everybody adopts.
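To give one concrete taste of the DNSSEC machinery described here: validators identify which DNSKEY produced a signature via a 16-bit key tag computed over the key’s RDATA. A minimal sketch of that computation (the algorithm follows RFC 4034 Appendix B; the sample bytes are made up, not a real key):

```python
def dnskey_key_tag(rdata: bytes) -> int:
    """RFC 4034 Appendix B key tag: sum the DNSKEY RDATA as big-endian
    16-bit words, fold the carry back in, and keep the low 16 bits."""
    acc = 0
    for i, byte in enumerate(rdata):
        acc += byte << 8 if i % 2 == 0 else byte
    acc += (acc >> 16) & 0xFFFF
    return acc & 0xFFFF

# Made-up 4-byte RDATA purely to exercise the arithmetic (not a real key):
sample_tag = dnskey_key_tag(bytes([0x01, 0x01, 0x03, 0x05]))  # 0x0101 + 0x0305 = 1030
```

Resolvers use this tag, together with the owner name and algorithm number, to pick the candidate DNSKEY when validating an RRSIG over the answer they received.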

Wout de Natris:
Thank you, David. Annemiek, perhaps you would like to say in a few lines what exactly the Dutch government is doing.

Annemiek (Dutch government):
The Dutch government uses a comply-or-explain list for ICT services, and on that list there are about 40 open standards, including general standards but also about 15 specific internet safety standards. They should be used in ICT services. And we have a process for organising that: maintainers can propose which standards should be on that list, and we work with experts from across the Netherlands to decide which standards belong there. Those open standards are mandated, or rather recommended. So we cannot, how do you say that, give penalties if they don’t use them; we just recommend those standards. And if they use them in their services, we name them. So we’re not shaming them, but we’re naming them, in order to encourage adoption of the standards more positively. Besides that comply-or-explain list, we monitor. We monitor those standards, especially in procurement. So for all the tenders for ICT services that the government does in the Netherlands, we have researched whether the standards are used, and if any standards are not used in the procurement, in the tender, then they have to explain that in their annual report. Monitoring is very positive for the adoption of open standards, because twice a year we monitor the specific internet safety standards and we offer that report to parliament, actually. First it goes to the Ministry of Internal Affairs, and we say, oh, you’re doing well, or you’re not doing so well, you have to improve. And we use the tool Wout mentioned, internet.nl, which is very effective for measuring this. And we publish the figures, so it’s naming and a little bit of shaming. And in addition, we encourage community. We use this internet.nl tool to get into discussion with large suppliers. For instance, Microsoft.
Microsoft was not using DANE, the open standard, in the Netherlands, it turned out, as you might know. And in discussion with Microsoft, that’s one of the suppliers, we found out that they are open to discussion, and they will change their email servers to support DANE, and that is a very nice announcement. In the coming year they will also do the full DANE implementation, I understood. So we use internet.nl for community and for getting cooperation with suppliers, and that is very nice to have. So those are the three points: mandate, monitoring, and community. Thank you.
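For readers unfamiliar with DANE: it lets a domain pin its TLS certificate in the DNS itself, protected by DNSSEC, so that a sending mail server can verify it is talking to the genuine receiving server before delivering mail. A zone-file entry for SMTP DANE (RFC 7672) might look like the following sketch; the hostname and the digest are hypothetical placeholders:

```
; Hypothetical TLSA record for a mail server's SMTP service (port 25).
; Usage 3 (DANE-EE), selector 1 (public key), matching type 1 (SHA-256):
; the digest must match the SHA-256 hash of the server's public key.
_25._tcp.mail.example.com. IN TLSA 3 1 1 8af9e9f3b2c1d4e5f60718293a4b5c6d7e8f90a1b2c3d4e5f60718293a4b5c6d
```

A sending server that supports DANE looks up this record, and refuses to deliver if the certificate presented during the TLS handshake does not match the published digest.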

Wout de Natris:
Thank you, Annemiek, and sorry for putting you on the spot, but I think it’s a nice explanation of what you just heard about. And if you are online and you go to internet.nl, just type in any domain name that you can think of, and you will see the results popping up within a few seconds. David wants to respond first.

David Huberman:
Thank you. So, what the Dutch government is doing here is an exemplar of a really nice way for government and public policy to integrate with the world of internet standards, which is primarily an engineering-based endeavor. It’s interesting: we are here today, those of us in the room, in Kyoto, and for those of us who aren’t from this beautiful country, there’s something we all have in common right now. We have all brought along these little travel adapters that we use when we want to charge our devices. Why? Because the shape and the voltages of the plugs in Japan are not the same as the shape or the voltages we use in our home countries. And why is that? Because we have one set of standards for some countries and different sets of standards for other countries. We have lots of standards, but no global standard. In my wallet right now, you’ll find Japanese yen, you’ll find euros, and you’ll find American dollars, but it’s funny, because they all do the same thing: I hand them over to someone when I want to buy something from them. The purpose of currency in 2023 is very straightforward.
I give you sell I give It’s the same purpose around the world, but we use different currency In Japan we drive on the left side of the road and our steering wheel is on the right side of the car In other countries we drive on the right side of the road with the steering wheel on the left side of the car That’s not only Challenging for those of us as drivers It’s also challenging for the manufacturers of those vehicles because they have to have a whole different set of standards For safety and for operation when the steering wheel is on a different side of the car Internet standards don’t work like that Internet standards from the beginning from 1969 with RFC 1 through today in 2023 We have almost 10,000 published standards Internet standards are Intended to be fully interoperable all around the world Whether you are in China whether in Kenya whether you’re in Paraguay whether you’re in Iceland No matter where you are in the world. If you’re online on the Internet, we’re all using the same standards and That’s really important because it allows for a fully interoperable fully global Internet that, in itself, enables innovation. It’s because it works everywhere that people are able to develop applications and platforms and do amazing things, because it works the same way no matter where you are. And this is where the Dutch government, and this is where other governments can really show leadership, because in the development of those standards, it’s 2023, the Internet’s everywhere in the world, it’s not 1969 anymore. We can’t develop these in a vacuum. We can’t just develop these as pure engineering exercises. Instead, we have to think about the real-world implications of new technologies, the implications of new protocol and protocol development. 
And so, here at the IGF this week, I am strongly encouraging governments, parliamentarians, public policy people, and civil society to become involved in Internet standards development, to offer your expertise to the development process, while at the same time understanding and respecting that so much of what we’ve created is due to it being an engineering-driven endeavor, and the engineers are the true experts in how to do this. So, it’s a real commendation. It speaks very well of the Dutch government. Internet.nl is a wonderful site that works really well, helping your organization understand where you sit in this world of standards. That’s about what I wanted to share.

Wout de Natris:
Well, thank you, David. I’ll just pause for a second. So, thank you very much. I think that from this side of the table, that is what we wanted to share with you. You now understand what the IS3C is about, what we try to achieve, and what we aim to achieve in the near future. For now, we would like to learn from you: how does this concept come across? Does it make sense? We would also like to know: do you know about any of the procurement schemes in your country or your organization? And we’d like to discuss a little the plans that David, together with Bastiaan, has to change the narrative of how to convince people in leadership to really think about cybersecurity upfront, and not as an issue that pops up after you bought something. So the mic is there in the middle of the room; I can also pass a mic around. I think the first question is: having heard all this, how does it come across? What do you think of this plan? Does it make sense? Would you do it differently yourself? Just anything that you’d like to share with us that we can learn from. So the microphone is there. Just introduce yourself first, please, and then share your thoughts with us.

Audience:
Hi, Jan. Hi. Okay, it’s on. Perfect. Viet Vu from Toronto Metropolitan University in Toronto, Canada. This is actually opportune timing, because literally about two weeks ago I published a paper on the Canadian government’s digital adoption. And a couple of things on how what we’ve discussed so far lands. The first thing is that the incredible thing about the Canadian government is that not only is there no standard in procurement, there isn’t even a single set of standards that the same government department uses. And the problem becomes even more complex when you go down to the provincial level, the second level of governance in Canada; it goes federal and provincial, think of US states or prefectures in Japan. Each of them actually has its own legislation that governs privacy and digital communications. And so that creates a challenge. Now, in terms of what we think might work: the key thing that has prevented the conversation about digital from surfacing to the top is very much the fact that there really isn’t anyone who is actually empowered to raise those issues. The Canadian government recently created the Canadian Digital Service, which is this group outside the usual departments that is there to deliver government services in-house, kind of. It’s the first time the government has done so, but it sits at the assistant deputy minister level; the hierarchy goes minister, deputy minister, and assistant deputy minister. You need people who are empowered to raise the issue of digital, and there just isn’t anyone. And so, in terms of the Canadian context, the one person you probably want to talk to is a senator, Colin Deacon; he’s been spearheading a lot of the legislative framework. I can put you in contact with him after the session if that’s of interest. This will be a short follow-up question: how has your report landed? Have you got any response from the government side, or is it still within the university?
It’s a great question. It landed really well, actually. I did a couple of radio rounds, literally, before boarding the plane to Kyoto. It was 9 a.m., I was actually giving a panel talk on the topic, and then 3 p.m. I was on my flight here. Once I’m back, in November, we’re actually delivering a workshop to sort of high-level decision-makers. So, we’re talking deputy minister, assistant deputy minister, and director generals within those three hierarchies in Ottawa in November, particularly on thinking about policy solutions. So, we know that the general topic lands well right now with them. Well, congratulations, and I think your invitation to introduce us, I think that would be very much welcome.

Wout de Natris:
Thank you. Any other ideas in the room, how this lands, Basia and the lady there? Yeah. So, you can stand in line.

Audience:
Hello, my name is Gunela Astbrink, chairing the Internet Society Accessibility Standing Group. Accessibility in this respect means accessibility for persons with disability. We know that there are a number of countries who have looked at public procurement for accessible digital goods and services, and there are standards in the U.S. and in the EU that have been adopted in countries like Australia and India and Kenya. And then there is, of course, this common issue of implementation. So I was very interested in the Dutch initiative. We found in Australia, for example, that when it came to web accessibility and procurement by governments, there was a monitoring system, and then there wasn’t enough funding to continue it. And that’s what we’re hoping: that other countries and systems will be able to continue that type of implementation of a policy. And I should also mention that in the EU there is an Accessibility Act, which is going to be a directive for all EU countries to ensure that any supplier to a European country has accessibility built in to their digital products. And that’s supposed to be mandatory. So we will see how those sorts of systems work, and it will be very interesting to see how that intersects with what we’re talking about here today. Thank you.

Annemiek (Dutch government):
That’s also interesting, because in the Netherlands we are also developing dashboards, including internet.nl, for accessibility purposes. So follow the Dutch government in that respect, and you might be using that dashboard in the future as well. And internet.nl is integrated into the dashboard. So, nice to hear.

Audience:
No, no problem at all. I have a question for you, actually. I’m from the RIPE NCC, working together with ICANN in a new working group to see to it that the adoption of techniques like DNSSEC and RPKI moves forward. And I’m really happy, you know; I’m Dutch, so maybe I’m prejudiced, but I’m really happy and even proud of what the Dutch public authorities and government are doing here. So all kudos to the Dutch Forum for Standardisation. I just wondered, Annemiek, maybe you can share more there, in terms of what the Dutch are doing, and the policies that underlie this: the reasons why you set up the comply-or-explain list, or even mandate the usage of certain tools. Is there information about those underlying policies available in English? And do you have any experience talking with other governments and similar agencies? Is this idea being taken up? How do other people respond to it?

Annemiek (Dutch government):
You mean other governments, in Europe for instance? Denmark is also using internet.nl for their own policy, and fortunately Australia and Brazil are also using internet.nl in their policies. They may have a different attitude to it, but yes. And you asked what kind of policies are behind the comply-or-explain list?

Audience:
I think, you know, I’m aware that, as far as I know, at least for internet.nl, the underlying software is open source, right? Yes, correct. People can of course adapt their own front end in their own language, to offer a similar service, so that’s all great. But I really mean more the underlying policies, in terms of the techniques, tools, and standards that in this case Dutch public authorities need to comply with. Because you think it’s important for certain reasons, right? Your website needs to be reachable over IPv6. When you purchase certain online services, cloud or whatever, they need to have RPKI implemented, stuff like that. I mean the reasons why you think it’s necessary to actually demand that when procuring services, or to have public authorities comply with these types of standards. You go very fast, also for me I guess, but also for the audience.

Annemiek (Dutch government):
Well, the Dutch government also promotes the use of IPv6. And the policy, yeah, I don’t know, actually, what you mean by the policy behind it. Because what we do is stimulate the adoption of those standards in order to build practical experience in the field and to show that it works. So we have a carrot, let me say it like that, and people have to bite, in order to use it, because out in the field society benefits from it. But I do not understand the question about the policy behind it.

Audience:
Well, maybe it’s so obvious that you don’t have a policy behind it, right? You just think it’s a good thing to do. Other countries, for instance, are not doing it. So I wondered: is there anything you can share that would help convince them that, hey, in terms of procuring services, it would also be good to set certain requirements, with standards you would have to comply with?

Annemiek (Dutch government):
Yeah, well, when tenders are executed in the Netherlands, they go through a special tender website, and CPV codes are included there to support procurement departments in requesting open standards. In addition, we explain what the standards are, because most of the people in procurement are not technical. So we suggest: talk to your architect or to other colleagues in your organisation and get to know what is technically involved in the execution, because the procurement department doesn’t know anything about ICT. That might also be an interesting route to adoption. So, a suggestion.

Wout de Natris:
David, Bastiaan, you decided to sponsor this working group as well, besides leading it. What makes it so important for you to actually try and come up with a different narrative? That’s a really good question.

David Huberman:
You touched on it a few moments ago when you talked about the technical education and how hard we’ve worked as an organization to build capacity, on the importance of DNSSEC and then on how to actually do it. A lot of my colleagues go around the world and speak with groups of engineers who operate networks, and will actually do DNSSEC signing with them, for real, on their computers, in the live environment, and teach them all the skills they need to keep maintaining it. But that’s not enough, because it reaches a small group of operators, and while it helps them, it’s so much more challenging to get that message out to the much larger world of all the domain owners, and all the operators of recursive resolvers who have to do the DNSSEC validation. So we’re really looking at this initiative as a way of, as we’ve said a few times today, changing the narrative: finding a new way of saying it, testing it against decision makers, and asking, how does this strike you? Does this persuade you? And based on their feedback, we can iterate and refine it even further. That’s our interest, and that’s why we want to fund this and why we’re really engaged and motivated here.

Audience:
Yeah, thank you. As a regional internet registry, we’re a non-profit organization, and I think it’s part of our mission to increase the trust in and the reliability of the internet. And that’s not only about topics like fragmentation and other things that are potentially detrimental and could have a serious effect, but about the internet as it is, right? It was mentioned that the DNS and the routing part are not directly visible. Maybe people are familiar with the DNS and names, but the routing part, for most people, is not something they are aware of or need to be aware of. Yet these are fundamental to everything else that depends on them, that runs on top of them. So when it comes to security there, it’s almost unfortunate, right, that the incidents that happen, and quite a few happen, are not visible in terms of the actual impact that people experience, the kind of impact that acts as a wake-up call. So at the end of the day, if we want to keep people’s trust in the entire system, and have it increase, then something has to be done. I think it’s really, really great what, for instance, the Dutch government is doing: lead by example. But there might come a point, especially if you look at the European Union, where the amount of legislation affecting the infrastructure and the ecosystem is enormous, where the perception will really be that this is a market failure and people are not getting their act together, and then it will be regulated. And we have all the tools available, everything available, to do it ourselves, right? Technically, the standards have been there for a long time. On the one hand there are the tools used to do the authorization part with us: you actually sign your IP addresses and associate them with an autonomous system number, declaring who is actually allowed to originate certain prefixes. That is a very easy portal to use.
On the other hand, there is the validating part: the software you actually use to check the announcements you receive, to see whether they’re valid or not. That is actually in a very mature state. So everything is there. So why is it not being picked up? In terms of the originating part, people signing resources, I think we’re at about 40% globally, and then it really differs per region or per country, which is good, but we need to step up here. People think it’s either really, really technical and complex to implement. And I understand: if you have a huge network with an enormous number of routers, and others that depend on you, and customers, I’m not saying it’s trivial. But for an average network engineer who takes her or his job seriously, it’s not that much of a challenge. People also think it’s really expensive to implement. Well, certainly, with a project there are costs involved. But this is about the underlying services, about how people experience your online services. More and more services are provided online, and how people actually experience those, this is fundamental, right? People need to know that if I aim to contact someone or try to reach certain content, I’m actually reaching the place I want to be at. It seems a given, but, as was mentioned, the protocols are ancient and this needs to be improved. So I think we have work to do there to actually get the stories out, right, and to convince people: hey, this is not so hard. And there is so much material available. Like Annemiek just said, we provide courses to people; everything is available free online. But if you really want a face-to-face training with a trainer, et cetera, normally there are costs involved, but we can organize things. We’ve been really effective, especially in the Middle Eastern region, at getting people in a room, both the regulators and the operators, going through the whole thing, and actually having people sign resources.
Especially if a country has maybe only one incumbent or two operators, if those people start signing their resources, there’s an enormous uptake in adoption. And we do the same at peering forums and other meetings we organize. We have ROA signing parties: we get people in a room and actually demonstrate how relatively trivial it is, especially for network engineers, to sign their resources and to look at the validation part. So we’re actually doing a lot there. And I really hope, in terms of audience and narrative, that we can also have an impact, combined with all the other things we’re doing here at the IGF.
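The origin-validation step described above, checking each BGP announcement against the set of signed ROAs, can be sketched in a few lines. This is a simplified rendering of the RFC 6811 logic, with made-up prefixes and AS numbers, not production validator code:

```python
import ipaddress

# Hypothetical validated ROA set: (authorized prefix, max length, origin ASN).
# A ROA says: this ASN may originate this prefix, down to this prefix length.
ROAS = [
    (ipaddress.ip_network("193.0.0.0/16"), 24, 3333),
    (ipaddress.ip_network("2001:db8::/32"), 48, 64500),
]

def validate_origin(prefix: str, origin_asn: int) -> str:
    """Simplified BGP Route Origin Validation (after RFC 6811)."""
    announced = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_length, roa_asn in ROAS:
        if announced.version != roa_prefix.version:
            continue
        if announced.subnet_of(roa_prefix):
            covered = True  # at least one ROA covers this announcement
            if roa_asn == origin_asn and announced.prefixlen <= max_length:
                return "valid"
    # Covered by a ROA but no match: a possible hijack or misconfiguration.
    return "invalid" if covered else "not-found"

print(validate_origin("193.0.20.0/23", 3333))     # valid: right AS, length OK
print(validate_origin("193.0.20.0/23", 64496))    # invalid: wrong origin AS
print(validate_origin("193.0.20.0/25", 3333))     # invalid: beyond maxLength
print(validate_origin("198.51.100.0/24", 64496))  # not-found: no covering ROA
```

A router configured to drop “invalid” announcements will refuse the hijacked route even when an attacker announces it with the wrong origin AS, which is the protection discussed throughout this session.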

Wout de Natris:
Yeah, thank you. But Bastiaan, David, the internet works. Everything works. I’m playing devil’s advocate here. Everything works. It’s on the whole day. There’s no failure. So why should I ever do something? Why should I invest in security? It works, right?

Audience:
Well, maybe that’s something else we need to focus on more. I mentioned the fact that the real impact of incidents is not sufficiently visible. But there are examples for individual networks; I think the same goes for the Dutch government. What triggered the whole thing there was IP address space of the Dutch Ministry of Foreign Affairs being hijacked. And that was a wake-up call. Not a good one, obviously. But with those types of stories we can demonstrate to people: don’t wait for something to break and then, oh, I’m going to spend a lot of money and get stressed and I need to repair it. No, no, you can actually implement this in a reasonably simple fashion and be prepared. And again, your customers are going to benefit; you yourself are going to benefit in the long run. But yeah, maybe we also need a bit more effort on the shaming part, or on the incidents that really had an impact, and have people share those stories and what that led them to implement. I don’t know if that answers your question or not.

Wout de Natris:
Yes, thank you. And since you mentioned the Middle East, perhaps an explanation is in place: the RIPE NCC, as a regional internet registry, literally covers everything from Iceland and Greenland up to Vladivostok, plus a lot of the Middle East. It provides the IP addresses for that whole region, like APNIC does here in Asia. Sorry?

David Huberman:
Of course. So just to build a little bit on what Bastiaan was talking about, there’s been a sea change in the United States. This summer I had, I’m going to be honest with you, a fairly surreal experience when I went into a government building of our communications regulator, the Federal Communications Commission. These are the folks who regulate broadcast signals, and they also regulate mobile wireless signals, so they are quite powerful: all the wireless providers, lots of the wireline providers, the cable companies, which is how a lot of the internet in the United States reaches our homes. And it was kind of surreal, because the United States government is now taking the position that routing is a matter of national security. All these people from the FBI and the Department of Justice and all these law enforcement bureaucrats were getting up and talking about how we absolutely must secure the routing infrastructure of the United States of America against attacks, against misconfigurations, against hijacks. And not only were they talking about it in general terms, about the principles of security; these are elected people talking, and they were saying things like RPKI and ROA. They were saying things like uRPF. They were using acronyms that I didn’t think a government bureaucrat knew how to spell. And I was like, where am I? What’s going on here? But I loved it. It was great, because it showed that a large country was taking seriously the need to adopt RPKI, the need to adopt validation of ROAs at a national level. And they were going to use the power of the government to make the regulated parties do this. And to answer your question, it’s because, for them, it’s national security. Because it’s not just the mariners and their boats who are connected. It’s the military. It’s all of government: federal, state, local. It’s our schools.

Wout de Natris:
Everything is online now. Thank you. I think that is a good example of changing the narrative. And that’s perhaps also what Bastiaan was saying: if organizations do not do it voluntarily, the government will step in at some point. But will they literally regulate and write laws? Or will it remain a good discussion, saying, guys, you really have to do this? If, even five years from now, nothing has happened, then probably it will become legislation. And is that something the industry wants to avoid? Sometimes, looking at it from a negative point of view, I get the idea that they want to be regulated, because then there finally is a level playing field. And that is what’s missing, also here with the deployment of these standards: if I deploy, it costs me money, meaning I have to charge a higher price, while the competition does not do it. In other words, they may get more customers because of it. So that is one of the reasons that deployment on a voluntary basis may be hard. But if you don’t want to regulate, then procurement, I can’t say it enough, may actually change that. The question then is: why are most governments not procuring secure by design? It’s a question I simply don’t have the answer to. But the fact is that it doesn’t happen as a standard. What I would like to ask from the people present here, and get your views anyway, before I hand the mic over, starting with you, and then you can pass it on: what do you take out of this session yourself, and how could you change the discussion in your country? Where could you ask these sorts of questions? Because then perhaps you get a little bit of inspiration to change this discussion in your own country. So I’m going to ask Annemiek to pass the mic. Yes, of course, if you would like to speak first, okay, of course. Please introduce yourself first.

Audience:
Actually, I’m Samira, and I have been living in Japan for about two or three years now. Just a remark on what was said: why would a country be hijacked? Targeted countries like the US, maybe they have many enemies, I guess. I’m not saying that they have; I’m just assuming. Israel, maybe, I’m not sure. But when it comes to the internet, we are going to the very core of procurement, the routing and switching and all this standardization. And at the beginning we also said that the internet is the same for everyone, and it’s working well, as he said. But how are we going to impose these regulations when it comes to countries where they don’t feel the need? For example, a country like, say, Norway, very peaceful: they don’t feel the need, I guess. So I think what must be done is to regulate, but how will we regulate when it comes to countries’ individual legislation? How can we force them to use these standards if they don’t feel the need? One way we can creep in is by educating them, right? So I feel that we need to frame it in a way that really grabs people. For example, if you say that posting something on social media can be a threat to your life, of course they will listen to that, right? So it’s similar. So I would think that it is a need for the legislature, because the US knows that they have hijackers, people who want to creep into their network.

David Huberman:
Yeah, I mean, it’s a good point. The thing is, it’s not just about geopolitics. A lot of this is about people who want to exploit vulnerabilities to make money, and some want to create chaos, again, not for geopolitical reasons. So one of the things that we have to do is ensure that everybody in the world, in all countries, big and small, peaceful and not peaceful, understands that vulnerabilities create opportunities. And there will always be people who want to seize that opportunity for their own purposes.

Wout de Natris:
Yes, thank you. I think that is a very good example of what we try to do as IS3C. We cannot do that ourselves, but we can hand over knowledge and tools to actually do it; it is then up to countries to deploy. So my question to you was to share what you got out of this session, and what you could do in your country to plant a little seed of knowledge on this topic. So start up front, and then go around. And please introduce yourself.

Audience:
Hello. Okay. I’m Ryan. I’m from Indonesia. It’s a bit hard, I think, for my country, because while the session was running I sampled three sites through internet.nl: two e-commerce sites in my country and one legislative website. None of them have DNSSEC signed. But the e-commerce sites have RPKI; I think the e-commerce companies realize the security threat to them. Actually, today I just saw our vice minister for communication and information; he gave a speech in the main hall, and I don’t have any access to that high level. I hope, since Indonesia and the Netherlands have a long history, right, you can suggest something to our country to try to implement this? Because many of the decision makers are not even aware of this. In procurement, everything, the tender is always about money, not about security, not about the people. It’s always about the money. So if the vice minister is still here, can you introduce us? I hope I can, but I think he doesn’t even know me. I understand. Don’t worry. But I hope my country will get better, because next year we will have our presidential election, and there are a lot of issues with IT security, all the fake information from generative AI, and many people are, I cannot say, but many of them are doing anything to get into position. So I hope everyone can help my country. Well, thank you. That’s a quite clear goal. Thank you for sharing. Hi.
I'm Masayuki Nakamura from Japan, and I'm a real beginner in this area. About DNSSEC: the rate of DNSSEC adoption is low in Japan as well. One issue is that the open standards are written in English, so we have to translate them into Japanese and make them easier for decision-makers to read. That is our problem. I am a government officer; my colleagues are working hard on this, and while I'm not in that position myself, if I get a chance to improve the ratio, I will try to make things better. Thank you. Thank you very much; this was eye-opening for me too. It's not just about targeting countries, it's about building solutions for the vulnerabilities, which I think is a wonderful point. So I guess the key is education, and I would also like to put myself forward and research more into these areas. My own area is ERP systems, so I will look much more into learning these things; there was a lot of input here. Education will reach the countries that are not aware of it. As we all know, the internet works, but underneath there can be catastrophic events. When we can close the door and lock it, why wouldn't we? That's what I got from this session. Thank you very much. Hello. Thank you, everyone. I'm Santosh Siddhal from Nepal, Executive Director of Digital Rights Nepal. I joined the session partway through, but I liked it very much, and it has opened up a lot of questions. In early August, Nepal adopted its National Cybersecurity Policy. Before that we had the Electronic Transaction Act, but now the government wants to bring in a new cybersecurity law, and for that it has adopted the National Cybersecurity Policy.
There are actually three problematic aspects of the National Cybersecurity Policy. First, in the consultation and draft phase there was no mention of a National Internet Gateway; now the government is talking about installing a National Internet Gateway without defining it, saying it is for a resilient and secure internet. Another problematic area is that they are also talking about a government intranet. The third, which relates to the procurement discussion, is that it was not in the draft, but they are now proposing that the rules for procuring cybersecurity and ICT consultancy and equipment will be defined by the government, possibly outside the bounds of the public procurement policy. So procurement here becomes a serious issue: the government wants to put the procurement process behind the curtain, hiding what kinds of consultancies and ICT equipment are being procured. And the problem is that in least developed countries like Nepal and others, the stakeholders are not very aware of the repercussions, of the possible impact on other areas, of such laws and policies. Civil society organisations and the media are not aware of it either, and that is evident from the kinds of reports and public discourse we are having in Nepal at this point. So I think it is very important that we bring these issues into the public discourse, and civil society organisations have a very important role in giving stakeholders an informed discourse about the proposed policy, its possible repercussions, its impact on the utility of the internet, and the other human rights that the internet enables.
So this was a very important session, and I have gained a lot of insights that can be used for public advocacy. Thank you. Okay, thank you so much for this very useful and informative session. I'm from a telecom carrier in Japan, NTT Communications. We are an ISP, so we need to implement many security measures, DNSSEC and RPKI among them, but they require cost and scale, so it is very difficult to improve quickly. But I will inform my company, relay this discussion, and try to push for such new technologies. Thank you. Hi, I'm Ichiro Mizukoshi from Japan. I just jumped into the middle of the session, so I'm not sure about the whole discussion. But in my humble opinion, to secure IoT services, maybe subscription services will help. When a product is simply sold, repairing vulnerabilities after the sale is hard for the manufacturer over a long period; it's too heavy a burden. But with a subscription service, they keep getting money for the service, so they have the means to repair it. That's my opinion. I'm Daisuke Kotani from Kyoto University. I'm a researcher, but I'm also partly involved in the procurement of the university's IT infrastructure. From that perspective, the problems are the budget and the skill set of the engineers. Currently, unfortunately, if we ask an outside company to support RPKI or DNSSEC, such a company may not have enough engineers to do so, and then we cannot implement it on our campus. So education for engineers, I think, is an important issue. Thank you. I literally thought I could skip this. Well, as I explained, in two weeks' time I'm going to be in Ottawa talking to a group of high-level decision-makers in the federal government. This conversation is certainly going to help us design that workshop; we haven't actually designed it yet. That is a task for me and one of my co-authors once I'm back from Kyoto.
So this gives us a really good set of ideas for doing that. Thank you.

Annemiek ( Dutch government):
Finally. Thank you very much for all your stories and input. I would like to invite you to check your website or your email address on internet.nl. If you score 100%, we give you a place in the Hall of Fame, and we always give a t-shirt, which is a real collector's item in Holland; you can even sleep in it. That's the way we do it in the Netherlands: tempting organisations to use those open standards and get into the Hall of Fame. You can try it yourself after the session if you like. Thank you.
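
[Editor's note] The internet.nl test mentioned here inspects, among other things, whether a domain is DNSSEC-signed. As a rough illustration of the mechanics behind such a check, the sketch below builds a DNS query for a domain's DS record with the EDNS0 DO bit set (RFC 3225) and reads the AD (Authenticated Data) flag from a resolver's response header. This is a minimal sketch of one signal a checker could use, not the internet.nl implementation; the packet layout follows RFC 1035 and RFC 6891.

```python
import struct

def build_ds_query(domain: str, txid: int = 0x1234) -> bytes:
    """Build a DNS query for the domain's DS record (type 43),
    with an EDNS0 OPT record whose DO bit asks for DNSSEC data."""
    # Header: id, flags (RD=1), 1 question, 0 answers, 0 authority, 1 additional (the OPT RR)
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 1)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in domain.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 43, 1)  # QTYPE=DS, QCLASS=IN
    # OPT pseudo-RR (RFC 6891): root name, type 41, UDP payload size 4096;
    # the TTL field carries flags, and 0x00008000 sets the DO bit (RFC 3225).
    opt = b"\x00" + struct.pack(">HHIH", 41, 4096, 0x00008000, 0)
    return header + question + opt

def ad_flag_set(response: bytes) -> bool:
    """True if the resolver set the AD (Authenticated Data) flag,
    i.e. it DNSSEC-validated the answer (bit 0x0020 of the flags field)."""
    (flags,) = struct.unpack(">H", response[2:4])
    return bool(flags & 0x0020)
```

Sent over UDP to a validating resolver (for example via `socket.sendto`), a non-empty DS answer together with the AD flag set would indicate the zone is signed and validated; internet.nl performs far more thorough checks than this single signal.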

Wout de Natris:
Thank you. And thank you all for sharing your ideas with us. We're getting close to the end of this session; we have a few minutes left. What I would like to say is that, from what you've said, there is literally a world to win. If you are able to convince the people you work with that this is an important topic, and use the right arguments, then things will probably change. And that is what we're going to strive for. What can you expect from IS3C? First, I would like to invite you to our session on Tuesday at 10:30 in Working Group room J, the one next to this room. There we will present three reports. The first is a global comparison of national policy on Internet of Things security; to give a very small hint, there isn't much to be found in the regulatory way, it's all voluntary. The second is on procurement: as we already understood from today's session, it's also a global policy comparison, showing what the level of security in government procurement is at this point in time. The third report I'm not sure we will be able to present, because the lady who was supposed to give the presentation cancelled at the last moment and the report is not available yet, I understand. We made it together with the United Nations, with UNDESA, and the launch did not go through this August, somewhere in China, so it's not online yet; we'll try to say something about it. We're also going to present a tool: the comply-or-explain list is getting a global translation, as you could call it. A team of experts has come together and made choices on three topics: one is the categories of standards, the second is the scope of the list, and the third is the individual standards that sit underneath it.
What will be announced is an open consultation: anybody in the world who has an opinion on the scope, the categories, or the standards is invited to share comments in a Google Doc, so that we can make a better-informed decision on what goes into this list. Next we will have the presentation that David more or less gave, announcing the working group on the narrative around DNSSEC and RPKI, and finally we'll be announcing a working group on emerging technologies. The idea is that in the coming year we will do a global policy comparison on artificial intelligence, and later perhaps on quantum computing and the metaverse. We also try to see our relevance to the Sustainable Development Goals: how is the work we are developing able to make the world better as a whole, and not just on the topic of internet standards? And in the other room, in the session that is also almost ending now, there was a short synopsis of the cybersecurity hub and the plans we have for it. In other words, we'll be presenting a lot of the work we've done in the past year, so you're invited to come. If you're interested in learning more about what we do, you can join through the IGF: go to the Dynamic Coalitions, look for the Internet Standards, Security and Safety Coalition, and sign up for the mailing list. You will not get a million e-mails every day, but when a working group starts or holds a meeting, you'll get an invitation to join. And if you're interested in working with us, that's also an option we offer to people who are willing to voluntarily chip in a little on this work. We have our own website, is3coalition.org (IS3, with the number three), where we publish all our reports, and on the 10th all the new reports will be available for download there.
So I think that is all I want to say. I'm looking at the panel. Mark, have you made any observations from listening that you would like to share with us? I haven't heard anybody online, so I think this is it.

Mark Carvell:
Thank you, Wout. I've been following on the Zoom link, and online no comments or reactions have come through the chat. But the key message from this session, I think, is very important: procurement and supply chain management really do have major contributions to make in driving the adoption of critical security-related standards, routing protocols and so on. That's a very important message, and I personally really appreciate the comments here that there is a lot of valuable information that the IS3C coalition is collating; we need to build on that with more contributions and more experiences from other countries. I'm thinking in particular of my own country, the UK. When I was working for the UK government, the issue of procurement of network services and equipment never really came up as an internal policy issue. But every now and again there were these massive security failures online, which got a lot of media coverage. And yet you never hear the consequences of those data breaches, whether they affect police forces, as was recently the case in Northern Ireland, or financial services. You hear about these headline-grabbing incidents, but you never hear the follow-up in terms of ensuring that these things don't happen again. I think this coalition does provide a channel for distributing that kind of important information. Those are my reflections. Back to you.

Wout de Natris:
Thank you, Mark. David, any last words? No? Then, with that, I will let you go, and we'll be well in time for the people who come next to prepare. Thank you very much for your contributions and your insights, because that's something we will take home. Thank you for all the technical work, and thank you, Mark, for the online moderation. With that, I wish you a very good IGF, and I hope to see you again soon. Bye-bye. Thank you.

Speaker statistics

Annemiek (Dutch government): 124 words per minute; 922 words; 447 secs
Audience: 170 words per minute; 4144 words; 1466 secs
David Huberman: 165 words per minute; 2145 words; 778 secs
Mark Carvell: 135 words per minute; 281 words; 125 secs
Wout de Natris: 164 words per minute; 3845 words; 1409 secs

Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51


Full session report

Merve Hickok

The comprehensive analysis conveys a positive sentiment towards the regulation and innovation in Artificial Intelligence (AI), emphasising their coexistence for ensuring better, safer, and more accessible technological advancements. Notably, genuine innovation is perceived favourably as it bolsters human rights, promotes public engagement, and encourages transparency. This viewpoint is grounded in the belief that AI regulatory policies should harmonise the nurturing of innovation and the implementation of essential protective measures.

The analysis also underscores that standards based on the rule of law must apply universally to both public and private sectors. This conviction is influenced by the United Nations’ Guiding Principles for Business, which reinforce businesses’ obligation to respect human rights and abide by the rule of law. This represents a paradigm shift towards heightened accountability in the development and deployment of AI technologies across different societal sectors.

However, there is significant apprehension surrounding the perceived industrial domination in the AI policymaking process. Such dominance is viewed negatively as it could erode democratic values, potentially fostering bias replication, labour displacement, concentration of wealth, and disparity in power. Critics argue this scenario could compromise the public’s interests.

Moreover, the analysis highlights strong advocacy for the integration of democratic values and public participation into the formulation of national AI policies. This stance is complemented by a call for the establishment of robust mechanisms for independent oversight of AI systems, aiming to safeguard citizens’ rights. The necessity to ensure AI technologies align with and uphold democratic principles and norms is thus underscored.

Nevertheless, the analysis reveals resolute opposition to the use of facial recognition for mass surveillance and deployment of autonomous weaponry. These technologies are seen as undermining human rights and eroding democratic values, an interpretation echoed in UN negotiations.

In conclusion, despite AI offering tremendous potential for societal advancements and business growth, it’s critical for its advancement and application to adhere to regulatory frameworks preserving human rights, promoting fairness, ensuring transparency, and upholding democratic values. Cultivating an equilibrium and a forward-thinking climate for formulating AI policies involving public participation can assist in mitigating and managing the potential risks. This approach ensures that AI innovation evolves ethically and responsibly.

Björn Berge

Artificial Intelligence (AI) carries the potential to revolutionise various sectors worldwide, due to its capacities for improved efficiency, advanced decision-making, and enhanced services. It can significantly enhance productivity by automating routine and repetitive tasks usually undertaken by humans. Additionally, AI systems can harness big data to make more precise decisions, eliminating human errors and thereby resulting in superior service delivery.

Nevertheless, the growth of AI necessitates a robust regulatory framework. This framework should enshrine human rights as one of its core principles and should advocate a multi-stakeholder approach. It is vital for AI systems to be developed and used in a manner that ensures human rights protection, respects the rule of law, and upholds democratic values.

Aligning with this, the Council of Europe is currently working on a treaty that safeguards these facets whilst harnessing the benefits of AI. This treaty will lay down principles to govern AI systems, with a primary focus on human rights, the rule of law, and democratic values. Notably, the crafting process of this treaty doesn’t exclusively involve governments, but also includes contributions from a wide array of sectors. Civil society participants, academic experts, and industry representatives all play a crucial role in developing an inclusive and protective framework for AI.

The Council of Europe’s treaty extends far beyond Europe and has a global scope. Countries from various continents are actively engaged in the negotiation process. Alongside European Union members, countries from North, Central, and South America, as well as Asia, including Canada, the United States, Mexico, Israel, Japan, Argentina, Costa Rica, Peru, and Uruguay, are involved in moulding this international regulatory framework. This global outreach underscores the importance and universal applicability of AI regulation, emphasising international cooperation for the responsible implementation and supervision of AI systems.

Francesca Rossi

Francesca Rossi underscores that AI is not simply a realm of pure science or technology; instead, it should be considered a social technical field of study, bearing significant societal impacts. This viewpoint emphasises that the evolution and application of AI are profoundly intertwined with societal dynamics and consequences.

Furthermore, Francesca advocates robustly for targeted regulation in the AI field. She firmly asserts that any necessary regulation should focus on the varied uses and applications of the technology, which carry different levels of risk, rather than merely on the technology itself. This argument stems from the understanding that the same technology can be utilised in countless ways, each with its own implied benefits and potential risks, therefore calling for tailored oversight mechanisms.

Francesca’s support for regulatory bodies such as the Council of Europe, the European Commission, and the UN is evident from her active contribution to their AI-related works. She perceives these bodies as playing a pivotal role in steering the direction of AI in a positive vein, ensuring its development benefits a diverse range of stakeholders.

Drawing from her experience at IBM, she reflects a corporate belief in the crucial importance of human rights within the context of AI technology use. Despite absent regulations in specific areas, IBM has proactively taken steps to respect and safeguard human rights. This underlines the duty that companies need to uphold, ensuring their AI applications comply with human rights guidelines.

Building on IBM’s commitment to responsible AI technology implementation, Francesca discusses the company’s centralised governance for their AI ethics framework. Applied company-wide, this approach implies that it’s pivotal for companies to maintain a holistic approach and framework for AI ethics across all their divisions and operations.

Francesca also emphasises the crucial role of research in both augmenting the capabilities of AI technology and in addressing its current limitations. This supports the notion that on-going research and innovation need to remain at the forefront of AI technology development to fully exploit its potential and manage inherent limitations.

Lastly, Francesca highlights the value of establishing partnerships to confidently navigate the crowded AI field. She fervently advocates for inclusive, multi-stakeholder, and worldwide collaborations. The need for such partnerships arises from the shared requirement for protocols and guidelines, to ensure the harmonious handling of AI matters across borders and industries.

In summary, Francesca accentuates the importance of viewing AI within a social context. She brings attention to matters related to regulation, the function of international institutions, and corporate responsibility. Additionally, she illuminates the significance of research and partnerships in overcoming challenges and amplifying the capabilities of AI technologies.

Daniel Castaño Parra

AI's deep integration into our societal fabric underscores the profound importance of its regulation: the technology exhibits promising transformative potential while simultaneously posing challenges. As it continues to evolve and permeate various aspects of society globally, the pressing need for robust, comprehensive regulations to guide its usage and mitigate potential risks becomes increasingly evident.

Focusing attention on Latin America, the task of AI regulation emerges as both promising and challenging. Infrastructure discrepancies across the region, variances in technology usage and access, and a complex web of data privacy norms present considerable obstacles. The diversity of the regional AI landscape necessitates a nuanced approach to regulation, considering the unique characteristics and needs of different countries and populations.

In response to these challenges, specific solutions have been proposed. A primary recommendation is the establishment of a dedicated entity responsible for harmonising AI regulations across the region. This specialist body could provide clarity and consistency in the interpretation and application of AI laws. Additionally, advocating for the creation of technology-sharing platforms could help bridge the gap in technology access across varying countries and communities. A third suggestion involves pooling regional resources for constructing a robust digital infrastructure, bolstering AI capacity and capabilities in the region.

The significance of stakeholder involvement in shaping the AI regulatory dialogue is recognised. A diverse array of voices, incorporating those from varying sectors, backgrounds and perspectives, should actively participate in moulding the AI dialogue. This inclusive, participatory approach could help to ensure that the ensuing regulations are equitable, balanced, and responsive to a range of needs and concerns.

Further, the argument highlights the potential of AI in addressing region-specific challenges in Latin America. The vital role AI can play in delivering healthcare to remote areas, such as Amazonian villages, is stressed, while also being instrumental in predicting and mitigating the impact of natural disasters. Thus, it strengthens its potential contribution towards achieving the Sustainable Development Goals concerning health, sustainable cities and communities, and climate action.

In conclusion, while AI regulation presents significant hurdles, particularly in regions like Latin America, it also unveils vast opportunities. Harnessing the promises of AI and grappling with its associated challenges will demand targeted strategies, proactive regulation, wide-ranging stakeholder involvement, and an unwavering commitment to innovation and societal enhancement.

Arisa Ema

Arisa Ema, who holds distinguished positions in the AI Strategic Council and Japanese government, is an active participant in Japan’s initiatives on AI governance. She ardently advocates for responsible AI and interoperability of AI frameworks. Her commitment aligns with SDG 9: Industry, Innovation, and Infrastructure and SDG 17: Partnerships for the Goals, showcasing her belief in the potential for technological advancement to drive industry innovation and foster worldwide partnerships for development.

Moreover, Ema underlines the crucial need for empowering users within the domain of AI, striving for power equilibrium. The current power imbalance between AI users and developers is seen as a substantial challenge. Addressing this links directly with achieving SDG 5: Gender Equality and SDG 10: Reduced Inequalities. According to Ema, a balanced power dynamic can only be achieved when the responsibilities of developers, deployers, and users are equally recognised in AI governance.

Ema also appreciates the Internet Governance Forum (IGF) as an indispensable platform for facilitating dialogue among different AI stakeholders. She fiercely supports multi-stakeholder discussions, citing them as vital to AI governance. Her endorsement robustly corresponds with SDG 17: Partnerships for the Goals, as these discussions aim to bolster cooperation for sustainable development.

Ema introduces a fresh perspective on Artificial Intelligence, arguing that AI should be perceived as a system, embracing human beings, and necessitating human-machine interaction rather than a mere algorithm or model. This nuanced viewpoint can significantly impact the pursuit of SDG 9: Industry, Innovation, and Infrastructure, as it recommends the integration of human-machine interaction and AI.

Furthermore, Ema promotes interdisciplinary discussions on human-AI interaction as a critical requirement for fully understanding the societal impact of AI. She regards dialogue that bridges cultural and interdisciplinary gaps as essential, given the multifaceted complexity of AI. Such discussions will help identify biases entrenched in human-machine systems and provide credible strategies for eliminating them.

In conclusion, Arisa Ema’s holistic approach to AI governance encapsulates several pivotal areas; user empowerment, balanced power dynamics, multi-stakeholder discussions, and interdisciplinary dialogues on human-AI interaction. Her comprehensive outlook illuminates macro issues of AI while underscoring the integral role these elements play in sculpting AI governance and functionalities.

Liming Zhu

Australia has placed a significant emphasis on operationalising responsible Artificial Intelligence (AI), spearheaded by initiatives from Data61, the country’s leading digital research network. Notably, Data61 has been developing an AI ethics framework since 2019, serving as a groundwork for national AI governance. Furthermore, Data61 has recently established think tanks to assist the Australian industry in responsibly adopting AI, a move that has sparked a positive sentiment within the field.

In tandem, debates around data governance have underscored the necessity of finding a balance between data utility, privacy, and fairness. While these components are integral to robust data governance, they may involve trade-offs. Advances are thus required to enable decision-makers to make pragmatic choices. The issue of preserving privacy could potentially undermine fairness, introducing complex decisions that necessitate comprehensive strategies.

As part of the quest for responsible AI, Environmental, Social, and Governance (ESG) principles are becoming increasingly prevalent. Efforts are underway to incorporate responsible AI directives with ESG considerations, thereby ensuring that investors can influence the development of more ethical and socially responsible AI systems. This perspective signals a broader understanding of AI’s implications that extend beyond its technical dimensions.

Accountability within supply chain networks is also being highlighted as pivotal in enhancing AI governance. Specifically, advances on AI bills of materials aim to standardise the types of AI used within systems whilst sharing accountability amongst various stakeholders in the supply chain. This marks a recognition of the collective responsibility of stakeholders in AI governance.

In light of AI's successes in game playing, exemplified by AlphaGo's victory in the game of Go, there is reassurance that rivalry between AI and humans is not necessarily worrying. Far from eliminating human involvement, these advances have sparked renewed interest in such games, and the number of grandmasters in both Go and chess is at a historical high.

Highlighting the shared responsibility in addressing potential bias and data unevenness within AI development is vital. The assertion is that decision-making concerning these issues should not be solely the responsibility of developers or AI providers, suggesting that a collective approach may be more beneficial.

In summary, it’s crucial to incorporate democratic decision-making processes into AI operations. This could involve making visible the trade-offs in AI, which would allow for a more informed and inclusive decision-making process. Overall, these discussions shed light on the multifaceted and challenging aspects of responsible AI development and deployment, providing clear evidence of the need for a comprehensive and multifaceted approach to ensure ethical AI governance.

Audience

This discourse covers several vital topics centring on democratic decision-making, human rights, and the survival of the human species from a futuristic perspective, primarily focusing on the speed and agility of decision-making and its potential implications for the democratic process. A thoughtful proposal was advanced for superseding the conventional democratic process when deemed necessary for the species' survival; this may even necessitate redefining aspects of human rights to better manage unforeseen future challenges.

The discussion also touched on circumstances where democracy could pose certain hurdles, suggesting a more democratic model could be beneficial for overcoming such issues. This proposed approach underlines the idea of a holistic, global consultation for such decision-making scenarios, emphasising the inherent value of democratic ethos in contending with complex problems.

A notable argument for enhanced collaboration was presented, stressing on adopting a concerted, joint problem-solving strategy rather than attempting to solve all problems at once. This suggestion promotes setting clear priorities and addressing each issue effectively, thereby creating a more synergetic solution to thematic global issues.

Within the technology paradigm, concerns were raised about who governs the algorithmic decision-making of major US-based tech companies. The argument underscores the non-transparent nature of these algorithms. Concerns related to potential bias in the algorithms were voiced, considering the deep division on various issues within the United States. There were calls for transparent, unbiased algorithm development to reflect neutrality in policy-making and respect user privacy.

In essence, the conversation revolved around balancing quick, efficient decision-making with the democratic process, the re-evaluation of human rights in the face of future challenges, the importance of joint problem-solving in addressing global issues and maintaining transparency and fairness in technological innovations. The discourse sheds light on the intricate interplay of politics, technology, and human rights in shaping the global landscape and fosters a nuanced understanding of these issues in connection with sustainable development goals.

Ivana Bartoletti

Artificial Intelligence (AI) is a potent force brimming with potential for immense innovation and progress. However, it also presents a host of risks, one key issue being the perpetuation of existing biases and inequalities. These problems are particularly evident in areas such as credit provisions and job advertisements aimed at women, illustrating the tangible impact of our current and prospective use of AI. There’s a worrying possibility that predictive technologies could further magnify these biases, leading to self-fulfilling prophesies.

Importantly, addressing bias in AI isn’t merely a technical issue; it is also a social one and hence necessitates robust social responses. Bias in AI can surface at any juncture of the AI lifecycle, as AI blends code, parameters, data and individuals, none of which are innately neutral. This complex combination can inadvertently result in algorithmic discrimination, which may not align with traditional categories of discrimination, underlining the need for a multidimensional approach to tackle this challenge.

To effectively manage these issues, a comprehensive strategy that includes legislative updates, mandatory discrimination risk assessments and an increased emphasis on accountability and transparency is required. By imposing legal obligations on the users of AI systems, we can enforce accountability and regulatory standards that could prevent unintentional bias in AI technologies. Implementing measures for positive action, along with these obligations, could provide a robust framework to combat algorithmic discrimination.

In addition, the introduction of certification mechanisms and the use of statistical data can deliver insightful assessments of discriminatory effects, contributing significantly to the fight against bias in AI. Such efforts have the potential not only to minimise the socially harmful impacts of AI, but also to reinforce the tremendous potential for progress and innovation AI offers.
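To make the idea of statistical assessment of discriminatory effects concrete, the short sketch below can be considered. It is illustrative only: the group labels, the toy data, and the 0.8 "four-fifths" rule of thumb are assumptions for this example, not part of the study discussed here. It computes per-group approval rates over a set of automated decisions and the ratio between them, a common first-pass disparity measure:

```python
# Illustrative sketch: measuring disparate outcomes in automated decisions.
# Group labels, data, and the four-fifths threshold are assumptions here.

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs; returns approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates, protected, reference):
    """Ratio of the protected group's approval rate to the reference group's."""
    return rates[protected] / rates[reference]

# Toy data: 100 decisions per group with different approval rates.
decisions = ([("women", True)] * 30 + [("women", False)] * 70
             + [("men", True)] * 50 + [("men", False)] * 50)
rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates, "women", "men")
print(rates)            # {'women': 0.3, 'men': 0.5}
print(round(ratio, 2))  # 0.6 -- below the 0.8 "four-fifths" rule of thumb
```

A ratio well below 1 flags a disparity worth investigating; it does not by itself establish discrimination, which is precisely why the report pairs such statistics with legal obligations and impact assessments.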

In summary, it’s clear that the expansion of AI brings significant risks of bias and inequality. However, by adopting a broad approach that encapsulates both technical and social responses while emphasising accountability and transparency, we can navigate the intricacies of AI technologies and harness their potential for progress and innovation.

Moderator

Artificial Intelligence (AI) is progressively becoming an influential tool with the ability to transform crucial facets of society, including healthcare diagnostics, financial markets, and supply chain management. Its thorough integration into our societal fabric has been commended for bringing effective solutions to regional issues, such as providing healthcare resources to remote Amazonian villages in Latin America and predicting and mitigating the impact of natural disasters.

Echoing this sentiment, Thomas, who leads the negotiations for a binding AI treaty at the Council of Europe, asserted that AI systems can serve as invaluable tools if utilised to benefit all individuals without causing harm. This view is reflected by Baltic countries who are also working on their own convention on AI. The treaty is designed to ensure that AI respects and upholds human rights, democracy, and the rule of law, forming a shared value across all nations.

Despite the substantial benefits of AI, the technology is not without its challenges. A significant concern is bias in AI systems, with instances of algorithmic discrimination replicating existing societal inequalities. For instance, women have been unfairly targeted with job adverts offering lower pay, and families have been mistakenly identified as potential fraudsters in the benefits system. In response, an urgent call has been made to update non-discrimination laws to account for algorithmic discrimination. These concerns have been encapsulated in a detailed study by the Council of Europe, stressing the urgent need to tackle such bias in AI systems.

In response to these challenges, countries worldwide are developing ethical frameworks to facilitate responsible use of AI. For instance, Australia debuted its AI ethics framework in 2019. This comprehensive framework amalgamates traditional quality attributes with unique AI features and emphasises operationalising responsible AI.

The necessity for regulation and accountability in AI, especially in areas like supply chain management, was also discussed. The concept of “AI bills of materials” was proposed as a means to trace AI within systems. Another approach to promoting responsible AI is viewing it through an Environmental, Social, and Governance (ESG) lens, emphasising the importance of considering factors such as AI’s environmental footprint and societal impact. Companies like IBM are advocating for a company-wide system overseeing AI ethics and a centralised board capable of making potentially unpopular decisions.
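The "AI bills of materials" idea can be pictured as a structured inventory of the models, datasets, and libraries inside a deployed system, analogous to a software bill of materials. A minimal sketch follows; the field names and example entries are illustrative assumptions, not an established standard:

```python
# Minimal sketch of an "AI bill of materials" record: an inventory of the
# models, datasets, and libraries inside a deployed system. Field names and
# example values are illustrative assumptions, not an established standard.
from dataclasses import dataclass, field, asdict

@dataclass
class Component:
    name: str
    kind: str          # "model", "dataset", or "library"
    version: str
    provenance: str    # where the component came from

@dataclass
class AIBillOfMaterials:
    system: str
    components: list = field(default_factory=list)

    def add(self, name, kind, version, provenance):
        self.components.append(Component(name, kind, version, provenance))

bom = AIBillOfMaterials(system="loan-screening-service")
bom.add("credit-scoring-model", "model", "2.1", "in-house training run 2023-06")
bom.add("applications-2015-2022", "dataset", "1.0", "internal records, anonymised")
print(len(bom.components))  # 2
```

Keeping such a record per deployed system is what makes the traceability discussed above possible: an auditor can ask which model and which training data produced a given decision.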

Despite notable differences between countries in the traditions, cultures, and laws governing AI, international cooperation remains a priority. Such collaborative endeavours aim to bridge the technological gap through the creation of technology-sharing platforms and by encouraging a multi-stakeholder approach in treaty development. This spirit of cooperation is embodied by the Council of Europe’s coordination with diverse organisations such as UNESCO, the OECD, and the OSCE.

In conclusion, while technological advances in AI have led to increased efficiency and progress, the need for robust regulation, international treaties, and data governance is more significant than ever. It is crucial to ensure that the use and benefits of AI align with its potential impact on human rights, preservation of democracy, and promotion of positive innovation.

Session transcript

Moderator:
good morning or good evening for those that are connected online from different time zones. Very happy to be here with you in Kyoto. It’s the first time in Japan so I’m very excited to be in this fantastic place. My name is Thomas. I work for the Swiss government and I happen to currently chair the negotiations on the first binding AI treaty at the Council of Europe. That is a treaty not just for European countries, it is open to all countries that respect and value the same values of human rights, democracy and rule of law. So we do have countries like Canada, the United States, Japan as well participating in the negotiations but also a number of countries from Latin America, from other continents. But I’m not here to talk about the convention right now. You will hear a lot about the convention, maybe also in this session but also in others. But I’m here to basically help you listen to experts from all over the world that will talk about AI and how to ensure, while fostering innovation, how to ensure human rights and democratic values to be respected when AI is used and developed. So as we all know, AI systems are wonderful tools if they are used for the benefit of all people and if they are not used for hurting people, for creating harm. And so this is about how to try and make sure that one thing happens and the other doesn’t. But before we go to the panelists, I have the honor to present to you a very special guest from Strasbourg, Mr. Björn Berge. He’s the Deputy Secretary General of the Council of Europe, who will give you a few remarks from his side. Thank you, Björn. Please go ahead.

Björn Berge:
Thank you very much, Ambassador Schneider, and a very good afternoon to all of you. It’s really great to be here in Japan, and at this very important occasion, and of course it’s 17 years now, and it’s the 18th time that the IGF is meeting, and it has really proven to be both a historic and highly relevant decision to start this process. And technology is, as we know, developing in a way and at a pace that the world has never seen before, which affects all of us, every country, every community around this globe. It therefore makes really perfect sense to keep up the work and do all we can to ensure enhanced digital cooperation and the development of a global information society. Basically, this is about working together to identify and mitigate common risks, so that we can make sure that the benefits that the new technology can bring to our economies and societies are indeed helpful and respect fundamental rights. Today, it is good to see the Internet Governance Forum making substantial progress towards a global digital compact, with human rights established as one of the principles in which digital technology should be rooted, along with the regulation on artificial intelligence, all to the benefit of people throughout the world. The regulation of AI is also something on which the Council of Europe is making good progress. In line with our mandate to protect and promote common legal standards in human rights, democracy, and rule of law. And the work we do is not only relevant for Europe alone, but has often a global outreach. So, dear friends, I believe all of us are fully aware of the positive changes that AI can bring. 
Increased efficiency and productivity with mundane and repeated tasks, moving from humans to machines, better decisions even made on the basis of big data, eliminating the possibility of human error, and improved services based on deep analysis of vast quantities of information, leading to scientific and medical breakthroughs that seemed impossible until very recent times. But with all of this comes significant rights-based concerns. And just as a matter of fact, a few days ago the Council of Europe published a study on tackling bias in AI systems to promote equality. And I’m very happy and pleased that the co-author of this excellent study, Ms Ivana Bartoletti, is here with us today online and she will speak after me, I think. So, there are also other questions related to the availability and use of personal data, on responsibility for the technical failure of AI applications, and on their criminal misuses in attacking election systems, for example. And on access to information, the growth of hate speech, fake news, and disinformation, and how these are managed. The bottom line is that we must find a way to harness the benefits of AI without sacrificing our values. So, how can we do that? Our starting point should be the range of Internet governance tools that we have already agreed upon, some of which have a direct bearing on AI. If I focus on Europe for a moment, this includes the European Convention on Human Rights, which has been ratified by 46 European countries. Also, the European Court of Human Rights, with its important case law. And let me just mention one concrete example now from such a court judgment. A case that clarified that online news portals can be held liable for user-generated comments if they fail to remove clearly unlawful content promptly. This is a good example of the evolution of law in line with the times.
Drawing from this European Convention and the court case law, which of course again builds on the Universal Declaration of Human Rights, we also develop specific legal instruments designed to help member states, but also countries outside Europe, apply our standards and principles in regard to Internet governance. Our Budapest Convention is the first international treaty to combat cybercrime, including offenses related to computer systems, data, and content. And its new Second Additional Protocol is designed to improve cross-border access to electronic evidence, extending thereby the arm of justice further into cyberspace. Our Convention 108 on data protection is similarly a treaty that countries also inside and outside Europe find highly relevant. And this Convention on data protection has also been updated with an amending protocol, widely referred to as Convention 108 plus, which helps ensure that national privacy laws converge. Added to this, over recent years we have, within the Council of Europe, adopted a range of recommendations to all our 46 member states, covering everything from combating hate speech, especially online, to tackling disinformation. And right now we are also working on a set of new guidelines on countering the spread of online mis- and disinformation through fact-checking and platform design solutions. In addition, we are now looking at the impact of digital transformation of the media, and this year we will finalize work on a set of new guidelines for the use of AI in journalism. So all in all, we are indeed involved in a number of areas trying to help and contribute, but we need to go further still on AI specifically. And here we are currently developing a far-reaching and first-of-its-kind international treaty, a framework convention, and the work is led by Ambassador Schneider sitting next to me, that will define a set of fundamental principles to help safeguard human rights, rule of law and democratic values in AI.
Experts from all over Europe, as well as civil society, representatives from the private sector, are leading and contributing to this work. Such a treaty will set out common principles and rules to ensure that design, development and use of AI systems respect common legal standards, and that they are rights compliant through their lifecycle. Like the Internet Governance Forum, this process has not been limited to the involvement of governments alone, and this is crucially important, because we need to draw upon the unique expertise provided by civil society participants, academics and industry representatives. In other words, we must always seek a multi-stakeholder approach, so as to ensure that what is proposed is relevant, balanced and effective. Such a new treaty, a framework convention, will be followed by a standalone, non-binding methodology for the risk and impact assessment of AI systems, to help national authorities adopt the most effective approach to both regulation and implementation of AI systems. But it’s also important to say here today that all of this work is not limited only to the Council of Europe or our member states. The European Union is also engaged with the negotiations, as well as non-European countries such as Canada, the United States, Mexico and Israel. And this week, Argentina, Costa Rica, Peru and Uruguay joined, and of course Japan, a country that has been a Council of Europe observer for more than 25 years. And that is actively participating in the range of our activities, and there is no doubt that Japan’s outstanding expertise and track record of technological development makes it a much-valued participant in our work. And its key role globally, when it comes to AI and internet governance, is only reconfirmed by hosting this important conference here in Kyoto this week.
So dear friends, there is still time for other like-minded countries to join this process of negotiating a new international treaty on AI, either taking part in the negotiations or as observers. A role that actually a number of non-member states have requested, and I must say the negotiations are progressing well. A consolidated working draft of the Framework Convention was published this summer, and it will now serve as the basis for further negotiations. And yes, our aim is that we should be able to conclude these negotiations by next year. I hope you agree. Let me also underline that this Framework Convention will be open to signature from countries around the world, so it will have the potential for a truly global reach, creating a legal framework that brings European and non-European states together, opening the door, so to say, to a new era of rights-based AI around the world. So let me just make an appeal to governments represented here today to consider whether this is a process that they might join, and a treaty that they most likely will go on to sign, just as I encourage those who have not yet done so to join the Budapest Convention and the Convention 108 and 108+, as I just mentioned. I believe it makes sense to work closely together on these issues and make progress on the biggest scale possible. Let me lastly on this point just say, and more broadly, that on the regulation of artificial intelligence we can learn from each other, benefit from various experiences, and tap into a large pool of knowledge and expertise globally. For us, the Council of Europe, seeking multilateral solutions to multilateral problems is really part of our DNA, and that spirit of cooperation makes it natural for us to work with others with an interest in these issues as well.
And I also want to highlight here today that we also work now very closely with the Institute of Electrical and Electronics Engineers to elaborate a report on the impact of the metaverse and immersive realities. And we are also looking carefully into whether the current tools are adequate for ensuring human rights, democracy, and rule of law standards in this field. We are also coordinating closely with UNESCO, as well as with the OECD, with the OSCE, the Organization for Security and Co-operation in Europe, and the European Union, of course. And I believe that is also why we are here today, as the Internet Governance Forum: we share that spirit and that ambition of international cooperation. And this is really the only approach for us, and I’m sure its success is a must, both for the development of artificial intelligence and for helping to shape the safe, open, and outward-looking societies that hold and protect fundamental rights and are true to our values. So with this, I thank you very much for your attention.

Moderator:

Thank you very much, Björn. And you said it, the key of us being together here is to learn from each other, which means listening and trying to understand each other’s situation, and we’re very happy to have quite a range of experts with different expertise here on the panel, but of course also in the room, so I’m looking forward to an interesting exchange. And I will immediately go, you’ve already named her, to Ivana Bartoletti, and she’s connected online, so we have this advantage after COVID that we can connect with people physically here, but also remotely, and Ivana Bartoletti works in a major private company specialized, among other things, in IT consulting. She’s also a researcher and teaches cybersecurity, privacy, and bias at Pamplin Business School at Virginia Tech, and Ivana is a co-founder of the Global Coalition for Digital Rights and Women Leading in AI Network. So Ivana, I

Ivana Bartoletti:
hope you will appear on our screen soon. Yes, we can already hear you. Wonderful, thank you so much, and thank you for having me here, and it was great to hear from both yourself in the introduction and Mr. Björn Berge, the Deputy Secretary General of the Council of Europe, whom I want to thank for the trust placed in me and in Rafael in putting together this report, which is now available online. I wanted to start by saying that artificial intelligence is bringing, and will bring, enormous innovation and progress if we do it in the right way, and I do firmly believe, as many do, that we are at a watershed moment in the relationship between humanity and technology. This is the time, and the Deputy Secretary General of the Council of Europe, Björn, was really articulating it well. We are at a watershed moment in this relationship between humanity and technology. Over the last few years, we’ve seen some of the amazing benefits that artificial intelligence and automated decision-making can bring to humanity. On the other hand, we’ve also seen some quite disturbing effects that these technologies can bring, and bias, the perpetuation, the codification, the coding, and the automation of existing inequality has been one of them. And I want to make one point as we start, and the point that I want to make is that over the last few weeks and months, we have seen a lot of people coming out with quite alarmist and dramatic appeals on artificial intelligence. And I want to say loud and clear here in this room that this alarmist approach to artificial intelligence has been quite distracting. And the reason for this is that it helps create a mystique around artificial intelligence. Well, we know very, very well right now what the risks are.
We’ve been advocating, and especially, I have to say, women and human rights activists over the last decade, for measures to tackle these harms. So I want us and everybody to remain focused on artificial intelligence risks and harms, to do the nitty-gritty, as the Council of Europe mentioned now, as a lot of work is going on, for example, in the European AI Act, in the development of legislation and guidance all across the world, in the work going into the Convention of the Council of Europe, as well as in the work that the United Nations, with the Global Digital Compact, is putting forward, to really focus on the harms that we know of: the harms of bias, disinformation, the coding of existing inequalities into automated decisions, making choices about individuals now, but also making predictions about decisions tomorrow. So in this study, Rafael and I have focused on bias in automated decision making, and on looking at what this bias looks like. There’s been a lot of work going into this, and a lot of expertise all around the globe, focusing on bias, and we have seen that this bias has a very real effect. We’ve seen less credit given to women, because women traditionally make less money than men. We’ve seen countries and governments grappling with the terrible mistakes of, for example, families wrongfully identified as potential fraudsters in the benefit system, and therefore putting families, parents, and children into poverty. We have seen what it means when job adverts that pay less are served to women, because traditionally women earn less than men. And we have seen the harms of automated decision making, for example, portraying images of women that replicate stereotypes we have seen for decades in our society. So the harms of automated decision making and of bias are all too real for people. It affects everyday life. And some people would argue, yes, but humans are biased. And I say, yes, they are biased. Obviously they are.
But the difference is that where that bias gets coded into software, it becomes more difficult to identify, more difficult to challenge, and then it becomes ingrained even more in our society. And this is particularly complex, in my view, when it comes to predictive technologies: if we code this bias and these stereotypes into predictive technologies, what could happen is that we end up in self-fulfilling prophecies. We end up replicating the patterns of yesterday in decisions that shape the world of tomorrow. And this is not something that we want. So what can we do? First of all, we must recognize that bias can be addressed from a technical standpoint, but bias is much deeper than that. It’s much more than a technical issue. It’s rooted in society, because data, as well as parameters, as well as all the humans that go into creating a code, is much more than technology. Ultimately, I like to say that AI is a bundle of code, of parameters, of people, of data, and nothing of that is neutral. Therefore, we must understand that these tools are much more socio-technical tools rather than purely technical ones. So it’s important to bear in mind that the origin and the cause of bias, which could emerge at any point of the life cycle of AI, is a social, political issue that requires social answers, not purely technical ones. So let’s never lose this from our conversation. The second thing that is important to realize, and which we found in the study with Rafael, is that there is often not an overlap between the discrimination that people experience, which traditionally, especially in non-discrimination law across the world, is based on protected characteristics, and the new sources of discrimination that are often algorithmic. And the algorithmic discrimination, which is created by big data sets, so clustering of individuals done in a computational and algorithmic way.
And, on the other hand, there are the more traditional categories of discrimination, and they do not overlap. And therefore, what is happening is that we must look into existing non-discrimination law and try and understand if the non-discrimination law that we have in place in our countries is fit for purpose to deal with this new form of algorithmic discrimination. Because what happens in algorithmic discrimination is that individuals may be discriminated against, not because of traditional protected characteristics, but because they’ve been put in a particular cluster, in a particular group, and this happens in a computational and algorithmic way. So the updating of existing legislation is very important. We encourage member states to expand the use of positive action measures to tackle algorithmic discrimination and to use the concept of positive obligations, which exists, for example, in the European Convention on Human Rights case law, to create an obligation for providers and users to reasonably prevent algorithmic discrimination. This is really, really important, and is part of our report. We’re suggesting mandatory discrimination risk and equality impact assessments throughout the lifecycle of algorithmic systems according to their specific uses. We’re looking really to ensure that this equality by design is introduced into the systems. We’re suggesting to member states to consider how certification mechanisms could be used to ensure that this bias has been mitigated; looking, for example, at how member states could introduce some form of licensing and say, well, actually, due diligence has gone into this system to eliminate bias as far as possible for well-defined uses. We’re encouraging member states to investigate the relationship between accountability, transparency, and trade secrets.
And finally, my last point, and we’ve been encouraging member states to consider establishing legal obligations for users of AI systems to publish statistical data that can allow parties, researchers, to really look at the discriminatory effect that a given system can have in the context of discrimination claims. So I want to close on this. It’s a vast report that I would encourage anyone to read. And the bottom line of this report is that discrimination through AI systems is something that brings together social and technical capabilities. It’s something that needs to absolutely be at the heart of how we deploy these systems. We must investigate the way that we can use these systems to actually tackle the discrimination in the first place. For example, by identifying sources of discrimination that are not visible to human eyes in the first place. So there is a positive side to all this, which we must harness. But to do so, we encourage everyone to really understand how we can get together, bring the greatest expertise that we have in the world and in this room, to really try and understand how we can not just further our knowledge, but also enshrine in legislation the importance to tackle bias in algorithmic systems.

Moderator:
Thank you very much, Ivana. And as you say, new technologies create new problems sometimes, but they can also be part of new solutions, and it’s good to highlight both. With this, let me move on immediately, as we are slightly running behind schedule, to Ms. Merve Hickok. She’s also connected online. We do, as you see, also have physically present speakers and experts. Merve is a globally renowned expert on AI policy, ethics, and governance, and her research, training, and consulting work focuses on the impact of AI systems on individuals, society, and public and private organizations, with a particular focus on fundamental rights, democratic values, and social justice. She’s the president and research director at the Center for AI and Digital Policy, and with this hat, she’s also very actively present as one of the leading civil society voices in the negotiations on the convention. So Merve, what are some of the main challenges of finding proper regulatory solutions to the challenges posed by AI to human rights and democracy, and what kind of solutions to these challenges do you see? Thank you.

Merve Hickok:
First of all, thank you so much for the invitation, Chair Schneider. Good to see you virtually. And I appreciate the invite and expanding this conversation in this global forum as well. Also, I’m in great company here today and very much looking forward to the conversation. I actually want to answer the question by quoting from a recommendation of the Committee of Ministers of the Council of Europe dating back to 2020, where the ministers recommend that the achievement of socially beneficial innovation and economic development goals must be rooted in the shared values of democratic societies, subject to full democratic participation and oversight, and that the rule of law standards that govern public and private relations, such as legality, transparency, predictability, accountability, and oversight, must also be maintained in the context of algorithmic systems. So this sentence alone, for me, provides us with a great summary of challenges, as well as an opportunity and direction towards solutions. First, in terms of challenges, we currently see a tendency to treat innovation and protection as an either-or situation, as a zero-sum game. I cannot tell you the number of times I’m asked, but would regulating AI stifle innovation? And I’m sure those in the room and around the panel have probably lost count of this question. However, they should coexist. They must coexist. Regulation creates clarity. Safeguards make innovations better, safer, more accessible. Genuine innovation promotes human rights. It promotes engagement. It promotes transparency. The second challenge in this field that we are seeing is that the rule of law standards, which govern public actors’ use of AI systems, must apply to the private actors as well. It feels like every day we see another privately-owned AI product undermining rights or access to resources. Yes, of course there are differences in the nature of duties between private and public actors.
However, businesses also have an obligation to respect human rights and the rule of law. This is reflected in the United Nations Guiding Principles on Business and Human Rights, and reflected in the Hiroshima process for AI now. We cannot overlook how the private sector’s use of AI impacts individuals and communities just because we want our domestic companies to be more competitive. Market competition alone will not solve this problem with human rights and democratic values. Unregulated competition might encourage a race to the bottom. And the third challenge, the final challenge, is the CEO and industry dominance in the regulatory conversations we’re seeing around the globe today. As the ministers note, innovation must be subject to full democratic participation and oversight. We cannot create regulatory solutions behind closed doors with industry actors deciding whether or how they should be regulated. Of course, the industry must be part of this conversation. However, democracy requires public engagement. Whether it’s in the US, UK, or beyond, we’re seeing the dominance of industry in the policymaking process, which undermines democratic values and is likely to exacerbate existing concerns about replication of bias, displacement of labor, concentration of wealth, and power imbalances. And as I mentioned, the ministers’ recommendation actually includes the solutions within it: we need to base our solutions in democratic values. In other words, civic engagement and policymaking in elections, in governance, transparency, and accountability. I would like to finish very quickly with some recommendations, because core democratic values and human rights are core to the mission of my organization. We saw these challenges several years ago and set ourselves up for a major project to objectively assess AI policies and practices across countries. Our annual flagship report is called the AI and Democratic Values Index.
We published a third edition this year, in which we assessed 75 countries against 12 objective metrics. Our metrics allow us to assess whether and how these countries see the importance of human rights and democracy, and whether they keep themselves accountable for their commitments to these. In other words, do they walk the talk? You would be surprised to see how many commitments in a national strategy do not actually translate into actual practices. So I'm finishing my response by offering recommendations from our annual report over the three years that I hope will be applicable to this conversation. First, establish national policies for AI that implement democratic values. Second, ensure public participation in AI policymaking and create robust mechanisms for independent oversight of AI systems. Third, guarantee fairness, accountability, and transparency in all AI systems, public and private. Fourth, commit to these principles in the development, procurement, and implementation of AI systems for public services, where a lot of the time the middle one, procurement, falls between the cracks. The next recommendation is to implement the UNESCO recommendation on the ethics of AI. And the final one in terms of implementation is to establish a comprehensive, legally binding convention for AI. I do appreciate being part of the Council of Europe's work, and I am looking forward to this convention for AI. And then we have two recommendations on specific technologies, because they undermine both human rights and democratic values and civic engagement. One is facial recognition for mass surveillance. The second is the deployment of lethal autonomous weapons, both items that have also been repeatedly discussed in UN negotiations and conversations. With that, I would like to say thank you again, and I'm looking forward to the rest of the conversation.

Moderator:
Thank you, Merve. That was very interesting, in particular also this fight against the notion that you can have either innovation or protection of rights, when both need to go together. With this, let me turn to Francesca Rossi. She's also present online. By the way, have you noticed we have quite a few women here on this panel? So for those who complain that you can't find any women specialists: actually, sometimes you do. Francesca Rossi is a computer scientist currently working at the IBM T.J. Watson Research Center in New York, as an IBM Fellow and the IBM AI Ethics Global Leader. She's actively engaged in the AI-related work of bodies like the IEEE, the European Commission's High-Level Expert Group, and the Global Partnership on AI. And she will give us a unique perspective, both as a computer scientist and researcher and as someone who knows the industry's perspective on the challenges and opportunities created by AI, and especially by generative AI. You have the floor, Francesca.

Francesca Rossi:
Thank you very much for this invitation and for the opportunity to participate in this panel. Many of the things that have been said by the previous speakers resonate with me. Everything that Ivana said about the socio-technical aspects resonates: I have been saying for several years now that AI is not a science or a technology only, but really a socio-technical field of study. And that's a very important point to make. I really support the efforts that the Council of Europe and the European Commission are making in terms of regulating AI. My company and I really feel that regulation is important to have, and it does not stifle innovation, as the previous speaker also said. But regulation should focus, in my view, on the uses of the technology rather than the technology itself. The same technology can be used in many different ways, in many different application scenarios, some of them very low risk or no risk at all, and some others very high risk. So we should make sure that we focus where the risk is when we put in place obligations, compliance, scrutiny, and so on. I would like to share with you what happened during the last years in a company like IBM, which is a global company that deploys its technology in many different sectors of our society. What we did inside the company, even though there was, and in some regions of the world still is, no AI regulation to be compliant with, we did because we really feel that regulation is needed, but it cannot be the only solution. Also because technology moves much faster than the legislative process. So companies have to play their role and their part in making sure that the technology they build and deploy to their clients respects human rights, freedom, and human dignity, and addresses bias and many other issues.
The lessons that we have learned in these years are very few, but I think very important. First of all, a company should not have an AI ethics team. This is something that may be natural to have at first, but it's not effective in my view, because having a team means that the team usually has to struggle to connect with all the business units of the company. What a company must have instead is a company-wide approach and framework for AI ethics, and a centralized governance for that company-wide framework; in our case, in the form of a board with representation from all the business units. Second, this board should not be an advisory board. It should be an entity that can make decisions for the company, even when the decisions are not well received by some of the teams, because, for example, it says: no, you cannot sign that contract with a client; you have to do some more testing; you have to pass that threshold for bias; you have to put some conditions in the contractual agreements; and so on. The third thing we learned is that we started, like everybody, with very high-level principles around AI ethics, but we realized very soon that we needed to go much deeper into concrete actions. Otherwise there was no impact from the principles on what the developers and the consultants were doing. The next one is really the socio-technical part. For a technical company it is very natural to think that an issue with the technology can be solved with some more technology. Of course, technical tools are very important, but they are the easy part. The most important part, complementary to the technical tools, is education, risk assessment processes, developer guidelines: really changing the culture and the frame of mind of everybody in the company around the technology. The next point is the importance of research.
AI research can augment the capabilities of AI, but it can also help in addressing some of the issues related to the current limitations of AI. That's very important, so we really focus on supporting the research efforts. We also have to remember that the technology evolves. Over the years, our framework has evolved because of the new and expanded challenges that came with the evolution of the technology: we went from a technology that was just rule-based, to one based on machine learning, and now to one based on generative AI, which expands old issues, like those related to fairness, explainability, and robustness, but also creates new ones, right? Misinformation, fake news, copyright infringement, and so on were mentioned. And then finally, the value of partnerships: partnerships that are multi-stakeholder and global. As the Deputy Secretary mentioned, this is really an important and necessary approach. It has to be inclusive, multi-stakeholder, and global. I've been working with the OECD, with the World Economic Forum, with the Partnership on AI, with the Global Partnership on AI; the space is very crowded now. And we have to make an effort, because of this crowded space, to find complementarity and ways to work together. Each initiative tries to solve the whole thing, but I think each initiative has its own angle that is very important and complementary to the other ones. So I'll stop here by saying that I really welcome what the Council of Europe is doing, under the leadership also of our moderator, but I also welcome what the UN is doing and trying to do with the new advisory board that is being built, because the UN can also play an important role in making sure that AI is driven in the right direction, guided by the UN Sustainable Development Goals. Thank you.

Moderator:
Thank you, Francesca, for sharing with us the lessons learned in a company like IBM from an industry perspective. I think a very important piece of guidance is also the appeal to intergovernmental institutions and other processes not to all try to solve all problems at once, but for each of the processes and institutions to focus on their specific strengths and try to solve the problems jointly. Thank you very much. With this, let us move to our last online speaker, before we get to the speakers here in the room: Professor Daniel Castaño. He comes from the academic world. He's a professor of law at the Universidad Externado de Colombia, but has a strong background in working with governments: he's been a legal advisor to different ministries in Colombia and is actively engaged in AI-related research and work in Colombia and Latin America in general. He's also an independent consultant on AI and new technologies. So Daniel, what kind of specific challenges do AI technologies pose for regulators and developers of these technologies regionally, in your case in Latin America in particular? Thank you very much, Daniel.

Daniel Castaño Parra:
Well, ladies and gentlemen, distinguished delegates and fellow participants, Deputy Secretary General Bjørn Berge and Chair of the CAI, Thomas Schneider, thank you very much for this invitation. I think that the best way to address this question is to discuss the profound importance of AI regulation. Today I must make clear that I'm speaking in my own voice and that my remarks reflect my personal views on this topic. AI, as we know it, is no longer just a buzzword or a distant concept: from enhancing healthcare diagnosis to making financial markets more efficient, AI is deeply embedded in our societal fabric. Yet, like any transformative technology, its immense power brings forth both promises and challenges. But why, you may ask, is AI regulation of paramount importance not only to Europe, but to the world and to our region, Latin America? At its core, it's about upholding the values we hold dear in our societies. Transparency: in an age where algorithms shape many of our daily decisions, understanding their mechanics is not just a technical necessity, but a democratic imperative. Accountability: our societies thrive on the principle of responsibility. If an AI errs or discriminates, there must be a framework to address the consequences. Ethics and bias: we are duty-bound to ensure that AI doesn't perpetuate existing biases, but instead aids in creating a fairer society. As we stand on the brink of a new economic era, we must ponder how to distribute AI's benefits equitably and protect against its potential misuse. Now, casting our gaze towards Latin America, a region of vibrant cultures and emerging economies, the AI landscape is both promising and challenging. In sectors ranging from agriculture to smart cities, AI initiatives are gaining traction in our region. However, the regulatory terrain is diverse. While some nations are taking proactive measures, others are still finding their footing.
However, the road to a unified regulatory framework faces certain stumbling blocks in our region: for example, fragmentation due to inconsistent inter-country coordination. We lack the coordination and integration that Europe has today. We have deep technological gaps, attributable to varied adoption rates and expertise levels. And we have infrastructure challenges that sometimes hamper consistent and widespread AI application. But let's not just dwell on challenges; let's try to architect some solutions together. First, I would suggest that we require some sort of regional coordination. For that purpose, we could establish a dedicated entity to harmonize AI regulation across Latin America, fostering unity in diversity. I would also suggest promoting the creation of technology-sharing platforms: collaborative platforms where countries can share AI tools, solutions, and expertise, bridging the technological gap. I would also suggest investments in shared infrastructure for our region: consider pooling resources to build regional digital infrastructure, ensuring that even nations with limited resources have access to foundational AI technology. Unique challenges also present themselves: infrastructure discrepancies, variances in technology access, and a mosaic of data privacy norms necessitate a nuanced approach in our region. But herein also lies the opportunity. AI has the potential to address regional challenges, whether it's delivering healthcare to remote Amazonian villages or predicting and mitigating the impact of natural disasters. So what path forward do I envision for Latin America and, indeed, the global community? First, regional synergies are key. Latin American countries, by sharing best practices and even setting regional standards, can craft a harmonized AI narrative. Second, I highly encourage stakeholder involvement.
A diverse chorus of voices, from technologists to industry, civil society, and ethicists, must actively shape the AI regulatory dialogue in our region. We also need capacity building. We have a huge technological gap in our region, and I think that investment in education and research is non-negotiable. Preparing our global citizenry for a highly augmented future is a responsibility shared with the world. Finally, I would also encourage strengthening data privacy and protection, and trying to harmonize the fragmented regulatory scheme that we have now in LATAM, because I think it could lead to a balkanization of technology, which would only hamper innovation and set us back many years. In conclusion, as we stand at this confluence of technology, policy, and ethics, I urge all stakeholders to approach AI with a balance of enthusiasm and caution. Together, we can harness the potential of AI to advance the Latin American agenda. Thank you all for your attention, and I very much look forward to our collective deliberations on this pivotal issue. Thank you very much.

Moderator:
Thank you very much, Daniel, for these interesting insights. For people coming from Europe like me, although my country is not a formal member of the European Union, we have very well-developed cooperation and also some harmonization of standards, not just through the Council of Europe when it comes to human rights, democracy, and the rule of law, but also economic standards. And it is important to know that this is not necessarily the case on other continents, where you have a much greater diversity of rules and standards, which is, of course, also a challenge. I think your ideas and your solutions to overcome these challenges are very valuable. Let me now turn to Professor Liming Zhu. I hope I pronounce it correctly. He's a research director at Australia's National Science Agency and a full professor at the University of New South Wales. So let us continue with the same topic and move to another region, the Asia-Pacific. Liming Zhu has been closely involved in developing best practices for AI governance and has worked on the problem of operationalizing responsible AI. So the floor is yours.

Liming Zhu:
Yeah, thanks very much for this opportunity. It's a great honor to join this panel. I'm from CSIRO, which is Australia's National Science Agency, and we have a part called Data61. If you're wondering why Data61: 61 is Australia's country code when you call Australia. It's a business unit doing research on AI, digital, and data. Just to go back a little on the Australian journey on AI governance and responsible AI: Australia is one of the very few countries that, back in late 2018, started developing an AI ethics framework. Data61 actually led the industry consultation, and in mid-2019 the Australian AI Ethics Framework came out, which consists of high-level principles. We observe that its principles are similar to many of the principles elsewhere in the world and globally. But interestingly, it has three elements in it. One is that it is not just high-level ethical principles: it recognizes human values in the plural, with different parts of the community having different values, and the importance of trade-offs and robust discussion. The second part is that it includes many of the traditionally challenging quality attributes, like reliability, safety, security, and privacy, recognizing that AI will make those kinds of challenges even more challenging. And the third part of the AI ethics framework includes things quite unique to AI, such as accountability, transparency, explainability, and contestability: although they are very important in any digital software, AI makes those things more difficult. Since then, Australia has been focusing on operationalizing responsible AI, especially led by Data61. In the meantime, other agencies are involved, like the Human Rights Commissioner, Lorraine Finlay; when she heard about this particular topic at this forum, she was very excited, and she forwarded our recent UN submission on AI governance.
You may also bump into the eSafety Commissioner from Australia at this forum; she's looking after some of the e-safety aspects of AI challenges as well. The government launched, about two years ago, the Australian National AI Centre. The Australian National AI Centre, hosted by Data61, is not a research centre; it's an AI adoption centre. Interestingly, its central theme is responsible AI at scale. It has created a number of think tanks, including on AI inclusion and diversity, responsible AI, and AI at scale, to help Australian industry navigate the challenge of adopting AI responsibly in everything we do. In the meantime, as a science agency (I'm a scientist in AI), we have been working on best practices, bridging the gap that Francesca mentioned: how do we get high-level principles into something on the ground that organizations, developers, and AI experts can use? We have developed a pattern-based approach for this. A pattern is just a reusable solution, a reusable best practice. Interestingly, a pattern captures not only the best practice itself, but also the context of the best practice, and its pros and cons. No best practice comes for free, and many best practices need to be connected. There are so many guidelines, some for governance, some for AI engineers, and a lot of disconnection between them. But when you connect those patterns of best practices together, you see how society, technology companies, and governing bodies can implement responsible AI more effectively. Another key focus of our approach from Australia is the system level. Much of the AI discussion has been about the AI model: you have this AI model, you need to give it more data to train it, to align it, to make it better. But remember, every single system we use, including ChatGPT and others, is an overall system.
The system utilizes an AI model, but there are a lot of system-level guardrails we need to build in, and those system-level guardrails capture the context of use. Without context, many of the risk and responsible AI practices are not going to be very effective. So a system-level approach, going beyond machine learning models, is another key element of our work. The next key element, as I mentioned earlier, is recognizing the trade-offs we have to make in many of these discussions. People familiar with data governance know there's a trade-off between data utility and privacy: you can't fully get both. How much data utility you need to sacrifice for privacy, and vice versa, how much privacy you can afford to sacrifice to maximize data utility, is a question for scientists to answer. However, science plays two very important roles. One is to push the boundary of the utility-versus-privacy curve, meaning that for the same amount of privacy, new science can make sure more utility is extracted. In the high-level panel this morning, you heard of federated machine learning and many other technologies that have been advanced to enable this better trade-off, getting the better of both worlds. But importantly, it's not only utility and privacy; there's also fairness. You may have heard that when we try to preserve privacy by not collecting certain data, this can also harm fairness in some cases. So now you have three quality attributes to trade off, utility, privacy, and fairness, and there are more. How science can enable decision makers to make those informed decisions is the key to our work. The next characteristic of our work from Australia is to look at the supply chain. No one is building AI from the ground up. You always rely on other vendors' AI models; you may be using a pre-trained model.
How can you be sure what AI is in your organization? Similar to some of the work on software bills of materials, we have been developing AI bills of materials, so you can be sure what sort of AI is in your system, and accountability can be held and shared among the different players in the supply chain. The final thing we have just embarked on is looking at responsible AI and AI governance through the lens of ESG. ESG stands for environmental, social, and governance, very much aligned with the UN's Sustainable Development Goals. The environmental element is your AI footprint; in the social element, AI plays a very important role; and the governance of AI is often too much about internal company governance, when the societal impact of AI needs to be governed as well. Looking at responsible AI through the lens of ESG also makes sure investors can pull the levers of doing responsible AI better. I will conclude by saying that Australia's approach is really about connecting those practices and enabling the stakeholders to make the right choices and trade-offs, because those trade-offs are not for us to make. Thank you very much.

Moderator:
Thank you, Liming. It's interesting to hear you talk about trade-offs and how we can perhaps change them from perceived trade-offs into perceived opportunities if we get the right combination of goals. Let me turn to our last but not least expert, somebody who comes from Japan, the country hosting this year's IGF. Professor Ema is an associate professor at the Institute for Future Initiatives of the University of Tokyo, and her primary interest is investigating the benefits and risks of AI in interdisciplinary research groups. She's very active in Japan's initiatives on AI governance, and I would like to give her the floor to talk about how Japanese actors, industry, civil society, and regulators see the issue of regulation and governance of AI. Thank you, professor.

Arisa Ema:
Thank you very much, Chair Thomas. I am really honored to be here to make this presentation and to share what's being discussed in Japan, also on behalf of my colleagues. As Thomas nicely introduced me, I am an academic at the University of Tokyo, but I am also a board member of the Japan Deep Learning Association, which is more of a startup companies' community, and a member of the AI Strategy Council of the Japanese government. Today, however, I would like to wear my academic hat and talk about what's being discussed here in Japan. So far, I see a lot of shared insights in what has been discussed by the panelists, and I would like to introduce the status of the discussion ongoing here in Japan. Back in 2016, there was the G7 meeting at Takamatsu, the last such meeting in Japan before 2023. There, the Japanese government released guidelines for AI development, and I believe that was actually the turning point at which the global discussion and collaboration on AI guidelines started. This year, 2023, with the G7 summit at Hiroshima, we see a very big debate ongoing on generative AI, and currently the G7 and other related countries are discussing how to create rules to govern generative AI and AI in general. That's called the Hiroshima AI process, and I believe there will be a discussion on it tomorrow morning. Alongside that, the Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry are also creating guidelines on the development of AI and on mitigating its risks.
That kind of thing is ongoing right now here in Japan. But before discussing the responsible use of AI further, I would like to talk a little bit about the AI convention that the Council of Europe is currently negotiating. Why am I here? Because I am actually very interested in the AI convention, and my colleagues and I organized an event here in Japan to discuss its impact on Japan and on the rest of the world. We are actually creating policy recommendations for the Japanese government: if we are to sign this convention, what kinds of things should we investigate? I think this is a really important convention when we discuss responsible AI. To make some of the points that should be discussed within this panel, I would like to raise three points from a document that my institution published last month, in September. The title is Toward Responsible AI Deployment: Policy Recommendations for the Hiroshima AI Process. If you are interested, just search for my institution's name and the policy recommendations and you can find it. We created these policy recommendations through a multi-stakeholder discussion, including not only academics but also industry, and we also had a discussion with government officials. One thing we think is really important is the interoperability of frameworks. Framework interoperability for AI is one of the keywords discussed at the G7 Summit this year. But I guess many of us have the question: what does interoperability mean? For our understanding, we need some transparency about each of the regulations or frameworks that discipline AI development and usage.
In this sense, I think this AI convention is really important because, as explained here, it is a framework convention: each country will take its own measures towards AI innovation and risk mitigation. It's really important to respect each other's cultures, how each country regulates artificial intelligence, and how their frameworks can actually connect to each other. It's also really important that each country has clear explainability and accountability about what role each stakeholder has, what the responsibilities will be, and how to supervise whether those measures are actually working. In that sense, I think that kind of interoperability and this AI convention have really important views to share. The next point we raised in our policy recommendations is how we think about responsibility. As the other panelists also discussed, what is important is to discuss the responsibility of the developers, the deployers, and also the users. However, regulation is especially important on the user side, because there is a power imbalance between the users and the developers. So what we have to do is not only set rules and regulation, but also discuss how to empower citizens and raise their literacy so that they can judge what an AI system actually does and how it was developed. And I am really happy to hear the professor raise ESG and how investors are also very important stakeholders. So it's not only rules or regulation through the legal framework; there are actually many forms of discipline that we can use: for example, investors' investment decisions, reputation, literacy, and also the technology itself.
There are lots of measures we can take, and with all of those things considered, I think we can create more responsible AI systems, or a more responsible AI-implemented society as a whole. Last but not least, what we also emphasize is the importance of multi-stakeholder discussion. I believe the IGF is a very good moment for us to discuss this, because the Hiroshima AI process is ongoing and lots of countries right now are working on their own regulatory frameworks. As I said, the Japanese government is also creating, or perhaps updating, its guidelines. This is the place where we share what has been discussed and the values we can share. The important values the Council of Europe raises are democracy, human rights, and the rule of law; those important values we share. With that, we can have transparency and this kind of framework interoperability to discuss, and then turn these principles or policies into practice. With that, I will stop here, but I really appreciate being on this panel.

Moderator:
Thank you very much, Arisa. Before I react, I would like to encourage those who want to interact to stand up and go to the microphones. We have a little less time than planned for the interactive discussion, but we do have a little bit of time. I think what this panel has shown is that although we share the same goals, we have different systems, different traditions and cultures, not just legally but also socially. And of course, this is a great challenge. As Arisa has also said, if we want to develop this common framework with the Council of Europe, not just for European countries but for all countries around the world, one of the biggest challenges is indeed, and we've heard a few hints at how to address it: one thing is how to have governments commit themselves to follow some rules. This is the easy part, let's say, in a convention where governments can commit to sticking to some rules. But since we have such differing systems for how to make the private sector respect human rights and contribute positively, not negatively, to democracy, how do we deal with these differences? How do we make private sector actors responsible in a way that lets them be innovative, but ensures they contribute positively, not negatively, to our values? This is one of the key challenges for us working on this convention, because we cannot just rely on one continent's or one country's framework. We have to find the common ground between different frameworks. Having said this, I'm happy to see a few people take the floor. Please introduce yourself briefly with your name and then make your comment or ask your question. Thank you.

Audience:
Thank you, sir. My name is Christophe Zeng. I'm the founder of the AAA.AI Association, based in Geneva, Switzerland. My question continues the topic we just covered regarding responsibility and trade-offs. I would like to raise what I call a right of humans, in comparison to human rights. Isn't it the right of humans to endure the test of time? In order to endure the test of time, is there a right, or a duty, of collective sacrifice? Is it a right or a duty to redefine some of our most fundamental beliefs and values? Suppose there were a solution that could deliver on that right, but that could require us to relent temporarily, or risk relenting permanently, on some of our human rights as defined in the Declaration, because of speed or because of the need for consultation. For that right to endure the test of time: speed versus right, consultation versus sovereignty, right versus rights. If there existed a binary choice at some point, ladies and gentlemen, what right ought we to choose? My question extends also to our colleague at IBM and the broader world community: if we are not to try to solve all problems at the same time, but instead jointly solve specific questions and tackle the overall question jointly, are we accepting a sacrifice in the speed of decision-making? Or would we accept that, at some critical point in time for the endurance of humans as a species, some decisions requiring speed beyond what is reasonably possible in a fully democratic process be made slightly less democratically by those to whom democracy is dearest, and some decisions requiring global consultation be made slightly more democratically by those for whom democracy is a challenge to overcome? Thank you.

Moderator:
Thank you very much. You pose an interesting question. If I can try to summarize it: will we need the right to remain human beings and not turn into machines, because we may have to compete with machines? That is at least one aspect I think I heard. Let's take another comment and then see what the reactions are. Yes, please go ahead.

Audience:
Thank you for recognizing me. I'm Ken Katayama. Ema-sensei knows me; I work in the manufacturing industry, but I also have an academic hat with Keio University. My topic relates to some of the American speakers and the concern about bias. Within the Japanese academic world that I live in, our concern is that the platformers, Google, Apple, Facebook and Amazon, are basically, quote unquote, American companies. So who decides these algorithms? When we look at the United States, we see many issues that are extremely divided, such as abortion or politics. So if the platformers are deciding these issues, then even if it is technically possible to try to avoid bias, who in the end actually decides which answer to go with? Thank you.

Moderator:
Thank you very much. So let me turn to the panel on these two issues or questions. One is how we can stay human, given the growing competition with machines. The other concerns bias, where it is not just the AI systems but also, of course, the data that shapes the bias; and data is not evenly spread across the world, with more data coming from some regions and some people than from others. So, whoever wants to react; maybe we give precedence to those who are physically present. Thank you very much.

Liming Zhu:
I think we have a lot of experts online, and Professor Ema is here, so just very briefly, in reverse order. On who makes those decisions: as I alluded to earlier, as a scientist I think it is not for the developers or the AI providers to decide. Their role is to expose these trade-offs, so that the democratic process can have that debate with data and make an informed decision; there is a dial, say between privacy, utility, and fairness, and society decides where to set it. The technology then assures that the implementation is carried through the whole system, is properly monitored, and can be improved by pushing the scientific boundary. On AI and humans competing, there is certainly a concern. When AlphaGo beat humans at Go, people worried that humans would stop playing chess and Go. But at this moment in history, the number of people interested in playing Go and chess is historically high, and the number of grandmasters in Go and chess is historically high. The reason is that humans find meaning in that work and those games, and they continue to, even where AI has surpassed them. They learn from AI, they work with AI, and they make a better society, I think; but the speed of change might sometimes be too fast for us to cope with.

Arisa Ema:
Yes, I guess the important thing we should discuss, or be aware of, is that although we talk about artificial intelligence as a technology, it is, as the professor said, a system. It is not only the AI algorithms or AI models; it is AI systems and AI services, and human beings are included within those systems. So we have to discuss human-machine interaction and human-machine collaboration. That partially responds to both questions: we do not have a clear answer, but we have to discuss human-machine interaction, and human biases are already embedded in these human-machine systems. With my academic hat on, I would say that we need to focus more on cultural and interdisciplinary discussion of human-machine interaction with artificial intelligence. Thank you very much.

Moderator:
We are approaching the end of this session, and I would like to close with one remark that perhaps shows I am not a lawyer; I am an economist and a historian. Whenever we talk about being at a crucial moment, at the edge of history becoming completely different from before, we should remember that every generation at every point in history has thought the same. If we look back 150 or 200 years, when the combustion engine was spreading across continents, that also had a huge effect. It did not replace cognitive labor, as AI is about to do, but it replaced physical labor with machines, and there you find many comparisons. Engines were used in all kinds of machines to produce something or to move something or somebody from A to B, and we learned to deal with them. We developed not just one piece of legislation but hundreds of norms, technical, legal, and social, for engines used in different contexts. With traffic legislation, for instance, we have been able to reduce the number of deaths in car accidents significantly. At the same time, the biggest remaining challenge is how to reduce CO2 emissions from engines; after 200 years of using them, we are still struggling to solve that problem. There are many more analogies between AI systems and engines, but also differences, because AI systems are not physically fixed in one place and can be moved and reproduced much more easily than engines. But I am very delighted by this discussion, and I hope it will continue. There are a number of AI-related sessions this week at this IGF.
I am part of a few of these, and I hope to see you again, also in the corridors, because I am really interested in finding, together with you, a solution for how this Council of Europe Convention can be a global instrument that will not solve all the problems, but will help us get closer together and successfully use AI for good and not for bad. So, thanks a lot for this session, and see you soon. Thank you very much.

Arisa Ema

Speech speed

151 words per minute

Speech length

1552 words

Speech time

616 secs

Audience

Speech speed

155 words per minute

Speech length

522 words

Speech time

202 secs

Björn Berge

Speech speed

117 words per minute

Speech length

1838 words

Speech time

943 secs

Daniel Castaño Parra

Speech speed

158 words per minute

Speech length

911 words

Speech time

346 secs

Francesca Rossi

Speech speed

156 words per minute

Speech length

1134 words

Speech time

437 secs

Ivana Bartoletti

Speech speed

143 words per minute

Speech length

1667 words

Speech time

701 secs

Liming Zhu

Speech speed

181 words per minute

Speech length

1636 words

Speech time

543 secs

Merve Hickok

Speech speed

143 words per minute

Speech length

1025 words

Speech time

429 secs

Moderator

Speech speed

175 words per minute

Speech length

2563 words

Speech time

877 secs