DC-SIG & DC-IUI: Schools of IG and the Internet Universality Indicators

Session at a Glance

Summary

This discussion focused on integrating UNESCO’s Internet Universality Indicators (IUIs) into Schools of Internet Governance (SIGs) curricula. The session brought together representatives from UNESCO, various SIGs, and other stakeholders to explore collaboration opportunities.

Participants discussed the importance of multi-stakeholder involvement in both IUI assessments and SIGs. They highlighted the value of SIGs as platforms for diverse stakeholder engagement and capacity building in internet governance. The potential of using IUIs as a framework for SIG curricula was explored, with suggestions to incorporate IUI concepts into existing modules or create dedicated sessions.

Several challenges were addressed, including the time-consuming nature of IUI assessments and the need for specialized knowledge to facilitate IUI-related content. Participants proposed solutions such as creating simplified simulations or using imaginary countries for educational purposes. The importance of balancing technical and social aspects of internet governance in SIG curricula was emphasized.

The discussion also touched on the role of civil society organizations in promoting digital inclusion and rights-based approaches. Participants shared experiences from different regions, highlighting the adaptability of both IUIs and SIGs to various contexts.

Key outcomes included suggestions for developing guidelines on integrating IUIs into SIG curricula through the Dynamic Coalition on Schools of Internet Governance. Participants agreed on the need for continued collaboration between UNESCO and SIGs to enhance internet governance education and promote the use of IUIs globally.

Key points

Major discussion points:

– Integrating Internet Universality Indicators (IUIs) into Schools of Internet Governance (SIGs) curricula

– Challenges and opportunities of conducting IUI assessments at national levels

– The importance of multi-stakeholder participation in both SIGs and IUI processes

– Ideas for simulating IUI assessments as learning exercises in SIGs

– Potential for collaboration between the IUI and SIG communities

Overall purpose/goal:

The discussion aimed to explore ways to incorporate UNESCO’s Internet Universality Indicators framework into the curricula and activities of Schools of Internet Governance, in order to enhance understanding of internet governance issues and promote multi-stakeholder approaches.

Tone:

The tone was collaborative and constructive throughout. Participants were enthusiastic about the potential for cooperation between the IUI and SIG communities. There was a sense of excitement about new ideas being proposed, balanced with pragmatic considerations about implementation challenges. The tone remained positive and solution-oriented as participants worked to identify concrete next steps.

Speakers

– Anriette Esterhuysen: Convener of the African School of Internet Governance, collaborator with UNESCO on Internet Universality Indicators revision

– Olga Cavalli: Involved with European Summer School on Internet Governance and South School on Internet Governance

– James Kunle Olorundare: President of Internet Society Nigeria, involved with Nigerian School on Internet Governance

– Tatevik Grigoryan: UNESCO, leads activities on ROAM-X IUI and coordinates Dynamic Coalition on Internet Universality Indicators

– Avri Doria: Dynamic Coalition on Schools of Internet Governance

– Ariunzul Liijuu-Ochir: Led IUI assessment in Mongolia, works with ADINA Equal Opportunity NGO

– Sandra Hoferichter: European Summer School on Internet Governance

– Fabio Senne: Brazil, NIC.br

– Ileleji Poncelet: Lead researcher for the IUI assessment in The Gambia

– Abdelaziz Hilali: From Morocco, involved with North African School of Internet Governance

Additional speakers:

– Luis Martinez

– Dr. Jose Fisata

Full session report

Integrating Internet Universality Indicators into Schools of Internet Governance

Introduction

This discussion brought together representatives from UNESCO, various Schools of Internet Governance (SIGs), and other stakeholders to explore opportunities for integrating UNESCO’s Internet Universality Indicators (IUIs) into SIG curricula. The session aimed to enhance understanding of internet governance issues and promote multi-stakeholder approaches through collaboration between the IUI and SIG communities.

1. Overview of Internet Universality Indicators (IUIs)

Tatevik Grigoryan from UNESCO explained that the IUI framework is based on the ROAM-X principles: Rights, Openness, Accessibility, Multi-stakeholder participation, and Cross-cutting issues. The framework has been implemented in over 40 countries and provides a comprehensive tool for assessing internet development at the national level. Grigoryan also mentioned recently launched enhanced indicators and her coordination of the Dynamic Coalition on Internet Universality Indicators.

2. Potential Integration of IUIs into Schools of Internet Governance (SIGs)

Participants broadly agreed on the benefits of incorporating IUIs into SIG curricula. Anriette Esterhuysen, convener of the African School of Internet Governance, shared their experience with integrating IUIs, emphasizing that SIGs are an excellent platform for promoting and implementing the framework. Olga Cavalli, involved with the European and South Schools on Internet Governance, suggested that integrating IUIs could enhance content and student learning.

James Kunle Olorundare described the Nigerian School on Internet Governance’s approach, which includes virtual sessions and colloquiums. He proposed making IUIs a specific module in SIG curricula. Sandra Hoferichter from the European Summer School on Internet Governance suggested using simulations of IUI assessments as learning exercises, proposing the use of imaginary countries to avoid potential political sensitivities. This idea received positive responses from other participants.

3. Multi-stakeholder Collaboration in Internet Governance

The importance of multi-stakeholder collaboration was a recurring theme. Olga Cavalli noted that SIGs help create networks between diverse stakeholders. Anriette Esterhuysen acknowledged that involving government representatives in SIGs is important but takes time. James Kunle Olorundare suggested that National Internet Governance Forums can facilitate multi-stakeholder collaboration. Tatevik Grigoryan emphasized that the IUI framework itself fosters multi-stakeholder cooperation and discussions.

4. Challenges and Opportunities in Implementing IUIs

Participants discussed various challenges and opportunities associated with implementing IUIs. Fabio Senne from Brazil noted that while a country's first IUI assessment can be time-consuming, subsequent ones become easier, highlighting the importance of follow-up assessments. Poncelet Ileleji, lead researcher for the IUI assessment in The Gambia, pointed to the difficulty of obtaining data from government agencies.

James Kunle Olorundare emphasized the crucial role of multi-stakeholder advisory boards for successful IUI implementation. Tatevik Grigoryan explained that UNESCO provides technical support and capacity building for IUI assessments, addressing concerns about expertise and capacity. Abdelaziz Hilali from the North African School of Internet Governance pointed out that IUIs can help address basic connectivity issues in underserved regions. Ariunzul Liijuu-Ochir, who led the IUI assessment in Mongolia, highlighted the important role of NGOs in implementing IUI recommendations.

5. Future Considerations and Action Items

Several key takeaways and action items emerged from the discussion:

1. Develop guidelines for integrating IUIs into SIG curricula, as suggested by Olga Cavalli

2. Explore the creation of simulations using imaginary countries for IUI assessments as learning exercises in SIGs

3. Use the Dynamic Coalition on Schools of Internet Governance to facilitate collaboration between SIGs and the IUI framework

4. Consider including IUIs as a specific module in SIG programmes

5. Address challenges in streamlining the IUI assessment process and obtaining data from government agencies

6. Explore the development of specialized thematic SIGs for specific internet governance topics

7. Balance core SIG curriculum with regional focus areas and emerging topics

Conclusion

The discussion demonstrated strong interest in collaboration between the IUI and SIG communities to enhance internet governance education and promote the use of IUIs globally. Participants were enthusiastic about the potential for cooperation and proposed several concrete steps for moving forward. The overall tone was collaborative and constructive, with a focus on finding practical solutions to integrate IUIs into SIG curricula and activities, ultimately aiming to improve internet governance education and assessment processes worldwide.

Session Transcript

Anriette Esterhuysen: Okay, let's see. Hi everyone, we'll just open the event and then I'll hand over. My name is Anriette Esterhuysen, can you all hear me? And I guess I'm kind of, I'm from the Association for Progressive Communications, but I'm actually the convener of the African School of Internet Governance, and I have had the privilege of collaborating with UNESCO and CETIC and NIC.br in the revision of the Internet Universality Indicators, so I should be sitting in the middle. So, as you probably all know, this Dynamic Coalition session is being co-organized by two Dynamic Coalitions. You know what has happened here, why I have a weird expression on my face is that my channel switched to another workshop. Let me take this off. So the background to this session is that two Dynamic Coalitions are co-organizing it. The Internet Universality Dynamic Coalition, which was established in 2021, I think, launched at the Poland IGF, which is made up of a community of stakeholders from countries that have applied or would like to apply the UNESCO Internet Universality Indicators, which is a self-organized bottom-up framework for assessing the state of Internet universality at a national level that was launched by UNESCO in 2018. And then the other Dynamic Coalition that is organizing this session together is the Dynamic Coalition… on Schools of Internet Governance, and on this note I'm actually going to hand over to Olga and James, and James is the co-facilitator with Tatevik, and they will tell you more about the session and what to expect. So over to the SIG side, the SIG DC.

Olga Cavalli: Thank you very much Anriette, and thank you very much for allowing us to do this co-hosted workshop of these two dynamic coalitions. I think they perfectly match. I started with the EuroSIG, led by Professor Wolfgang Kleinwächter and Sandra Hoferichter. That was the first one, almost 20 years ago. Yes, and then we started in Latin America with the South School on Internet Governance. We will organize our 17th edition, but after that there are many other initiatives. The African School of Internet Governance, that has been organized for 10, 11 years, and then many other national initiatives like the Brazilian School of Internet Governance. I saw our colleague from Brazil over there, and some others. I will hand over to James to speak more, especially about Africa, where there are several national initiatives. I was invited to speak at the Afghanistan School on Internet Governance two weeks ago. I woke up at 2 a.m. at my home in Buenos Aires to participate. I was really very honored to be invited to share our experiences. The purpose of the schools is to train on all aspects related to Internet governance and, fundamentally, to open the door to people who are not so much included in this community: to explain which are the policy issues and the technical concepts that they must have, and which are the spaces where they can participate, so they can bring home, in relation to their own interests, what is important to have in mind at the national or regional level. And also, what we have learned, and I think colleagues from other SIGs should know, is the fantastic network that is created between the fellows, and between fellows and experts. And I will hand over to James, maybe he can share with us some comments.

James Kunle Olorundare: All right thank you very much Olga and I think it’s a pleasure to to be here on this session and I’m so much excited about the school on internet governance. One thing that I’m so much happy about the school is the fact that we are able to do capacity you know development and that has been one of the pedestals through which we’re able to reach out to other stakeholders especially when it comes to issue of internet governance and in Africa a lot of school has sprung up a very good example is of course the Nigerian school on internet governance. We had a fifth session you know this year and of course we’ve been we’ve been consistent with that and apart from that I know that other schools you know abounds in Africa another one is the Ghanaian school on internet governance. I know about the Kenya school on internet governance that is from the east eastern parts of Africa and yesterday I think my colleague from Niger was also in the annual meeting and he shared his experience that is Nigerian school on internet governance. As a matter of fact I see a link between you know by the way I’m from internet society I’m the president of internet society in Nigeria so I see a link between you know internet society to a large extent and the school because we observe that that is a very good initiative for us to push you know capacity building and that has been working. I’ll give you an example take for example last year after the school in Nigeria what we normally do is that we tell the fellows look it is not just about acquiring the knowledge what do you do with the knowledge you need to start to engage within the ecosystem and I think that is very important that should be one of the takeaways for each and every one of us after you know the school just let the fellows lose let them start to engage within the ecosystem that is one of the things we normally in fact we I’m thinking now that probably we’re going to make it like a model in our curriculum to see how we can showcase some of those niches, some of those ecosystems where they can start to engage within the bigger internet space. And I think that’s helped us last year, because the feedback I’m getting. And as a matter of fact, I now observe that that may be a very good indicator for us to have feedback. OK, how much engagement have you had even within the last one year after you’ve been a fellow, after you’ve come out of the School on Internet Governance? So last year, I observed that some of our fellows that we advised that, look, you need to engage. It’s also a matter of just acquiring certificate. No. It’s about using that knowledge within the ecosystem. I observed that some of them have started engaging within the ecosystem, and we’re getting a very good report from their performances. And even this year, I think this year we had a session in October. And after that, this December, October, November, December, after that, we’ve started getting information about the kind of engagement they’ve been involved in. And some of them have been organizing events on cybersecurity, talking about child online safety, human rights issues, and so on and so forth. How I got to know is based on the fact that in most of those events, they do send invitations to me, oh, please, can you come and attend that event? Although I may not be able to attend all the events, but I think that feedback mechanism has shown me that, OK, look, I think the school is working. 
And I believe that this is one of the things that probably we need to take away from this session, that, OK, even after the school, the fellows, well, we call them fellows in NSIG. I wouldn’t know if there’s any other parlance that is being used in other jurisdictions. The fellows should engage in the industry, within the ecosystem. And even in the IGF space, I believe that some of them should be around here to have conversations on some of those hot topics, which I think is going to be a way of advancing the same conversation all about internet governance. So without wasting time, I think this afternoon, we’re going to be having conversations about the school. And of course, we want to see how we can now synergize between the school coalition and the IUI. And for me, I think we should start to think about… how we can integrate some of the IUIs, you know, as part of the modules that can be, you know, taught in the school. As a matter of fact, when we were having our discussion yesterday, you remember that I made mention of the fact that, okay, we should be looking at, okay, IUI being a module as part of the school. However, since we are trying to look at, okay, there are national schools, there are regional schools, you know, we are thinking there should be some form of handshake, right? So maybe we have, like, primary modules, right? So but what I’m not too sure of now is this IUI module that I’m proposing, I don’t know probably if it should be part of the primary module, or probably maybe something that we need to take a second look at, okay, this should be around it. So I think that’s some of the things we need to, you know, have conversations on. So let me open the floor, right?

Anriette Esterhuysen: I think Tatevik is next.

Avri Doria: If I may, if I may jump in, this is Avri. Yes, just very glad that you're, that the two groups came together. As far as the agenda goes, just to let people know, having done these quick introductions, the first thing was to get a quick introduction to the IUI, which Tatevik will go through. And then there were basically two discussions planned on the agenda. The first one is to talk about existing teaching experience. So this is something, for example, Anriette has done, Ariunzul, if I've pronounced it correctly, has also done. And then a roundtable of those of you that are there and online, of anyone that has been teaching it, or has been looking at it. And then go into looking at curricula and the IUI, and how we fit that in. And then Olga will lead off that discussion, with Anriette leading off the one before. And then again, going to a roundtable. So at this point, I'd really like to pass it off to Tatevik and thank you all for the introductions. Thank you all for having come to this joint session. I'm really quite excited that it was able to come together. Thank you.

Tatevik Grigoryan: Thank you very much Avri and James and Olga for the introduction, and Avri for the clarification. As Avri mentioned, my name is Tatevik Grigoryan and I work for UNESCO. I am leading UNESCO's activities on the ROAM-X IUI, as well as coordinating the Dynamic Coalition on the Internet Universality Indicators. I'm very happy that we could come, these two Dynamic Coalitions could come together to discuss what I see as a further cooperation, or we are already cooperating with, working with some schools, including the African School on Internet Governance. But I wanted to give a quick introduction on the ROAM-X, very quick, because our Dynamic Coalition is basically based on the Internet Universality ROAM-X Indicators. We basically provide, including through this Dynamic Coalition, provide this tool for analysis of Internet development, as well as we use it as a method sort of to foster multi-stakeholder cooperation and discussions and contributions. Very quickly, why we're using, why do we have this tool? It followed UNESCO's governing body decision endorsing Internet Universality and its four principles, which state that the Internet should be based on rights, should be open, accessible to all, and nurtured and governed by multi-stakeholder participation. Following the endorsement of these principles, we then created this framework. But then we thought that in addition to these principles, an important issue is the cross-cutting issues that we should consider, such as gender equality and safety and security online, and so we did that and added the X category, which stands for cross-cutting issues. So this framework, what is this framework? This framework is a set of indicators based on these principles and thematic areas within each principle, which help the stakeholders that use the framework to assess their internet development at the national level, without doing any ranking or any comparison, to see where the gaps are and where the country is at the national level. And all of this is happening together with the multi-stakeholder advisory board, which is set up at the initial stage of the assessment, that brings together diverse stakeholder groups, government, private sector, civil society organizations, academia, to contribute to the research and then at a later stage to validate it. So this is how the final product looks following the completion of the assessment. I must say that the IUIs have been conducted in over 40 countries, and you can see the distribution per region and country. I think the leading region is Africa. So far, 17 African countries have done the assessment with support from UNESCO, with Kenya being one of the first countries to do a follow-up second assessment to see and monitor the progress. So we've had this framework since 2018. And when we reached the five-year mark, as was already envisaged, UNESCO started the revision process to make sure that the indicators remain relevant, to incorporate the lessons learned from all these countries, to make sure of both the relevance and a faster implementation of the IUIs and then publication, and to see and incorporate new thematic areas. So we initiated the work and we worked together with CETIC.br and NIC.br, and we have Fabio here, who played a critical role in leading the revision process together with UNESCO. We have Anriette, who has been essential.
She was the member, she was managing the project when we initially were creating the IUIs, and also she's been on the steering committee for the revision. And of course she's been doing lots of other things to support and promote both the revision and the framework. So after wide consultations, public consultations and also targeted interviews, I'm very happy to say that last year, not last year, sorry, on Sunday, we launched the enhanced indicators, and you can see, unfortunately or fortunately, the indicators were really popular, so we ran out of copies, but I'm more than happy to share the link with you. I am cautious of time, but just to give you a look at some key figures here: we maintain the ROAM-X principles and indicators as the basis because they remain relevant, they remain essential, but we reduced the number of indicators and the questions. Here you can see, and I share with you on this slide, I'm just demonstrating the creation of two new topics, advanced digital technologies, mainly focusing on AI, and also… And of course, my contact number, I'm happy to liaise and give more information. I think my presentation of the IUIs ends here, but I continue with the conversation. I'd like to give the floor now to Anriette. I gave a very brief introduction to Anriette, but in addition to what I've mentioned already, we've been also working with Anriette. She's the convener of the African School on Internet Governance. And I think it's been the second year, Anriette, no, no, since you talk about, since we have the IUI, no? At least it's the second year for me. Just ask me the question. Okay. I'll answer that question, but ask me the question in the agenda. I think you have the question. I just basically wanted to speak. We've been, we had the IUIs in the, I'm just trying to stop my presentation, stop sharing. Yes.

Anriette Esterhuysen: So the question is, well, I'll answer the question. So the question is how, you know, what have we learned from this, in the case of the African context, and I think we'll have the same question going to Brazil, from having this kind of collaboration between the IUIs and the school. And I think, Tatevik, something that didn't occur to me until right this minute is that, in fact, the collaboration goes back to before the IUIs existed. And I think in that sense, at AfriSIG, we actually are fairly privileged in that way, because when the ROAM-X, so UNESCO, as Tatevik explained, the indicators are based on these principles of internet universality, which actually emerged from the IGF. And I think that's something UNESCO brought to the Internet Governance Forum in 2013. Initially, that was the first public exposure of the idea of rights, openness, accessibility, and multi-stakeholder being core principles for internet universality. The internet governance community liked the idea, and UNESCO then worked with it, and they convened this huge big event in Paris in 2015, where people from all over the world could participate, a little bit like NetMundial in 2014, they could submit textual inputs, and they could also participate in the event to take these principles and work with them as a framework for actually collaborative assessment of where are we going with the internet? Are we moving towards more internet universality or less, but at a national level? And because the African school, I think it was in 2015 that UNESCO first used the African school as a platform for sharing. Schools of IG are a useful platform for UNESCO in this case, but I think it applies to other people as well. It's a platform where you get a cross-section, as James was saying, it's a cross-section of people from different stakeholder groups. They are intergenerational. Some are professionals. I see James there, we have a senior government official from Zimbabwe, who's an alumni of AfriSIG, and we have people that are starting their careers as well. It gives you that opportunity, and because it's smaller than a whole IGF, you're able to have, I think, more focused discussion. I think one of the things that then emerged when we developed the indicators was that rights, openness, accessibility, and multi-stakeholder are not enough. X, the cross-cutting issues. There are so many and they're so important. Gender equality, children's rights, security and safety and stability. I think now, I'm looking there at Fabio, we're looking at artificial intelligence and emerging technologies, climate change. The number and the range of cross-cutting issues also are growing. There again, I think, the schools of Internet governance give UNESCO an opportunity to get feedback on the way you're thinking about this. Then I think the big learning really is that, I think we've learned in the schools of Internet governance that convening a multi-stakeholder event is not easy. You need to do it with care, with thought. We use devices such as, some of us use the idea of the practicum, where you have role-play or negotiated output from the different stakeholders. Some schools of Internet governance use small group discussions. But I believe they are extremely powerful in breaking through the gloss, the veneer, the surface, the kumbaya surface of the wonderful multi-stakeholder process. I think in schools of Internet governance, where people are in a safe, small, more intimate space, you get a much better understanding.
of the tensions that are between stakeholder groups, the different interests, the different understandings. And because the indicators are so deeply committed to strengthening the multi-stakeholder process, I think that’s also very useful for the multi-stakeholder process. For the indicators, sorry.

Tatevik Grigoryan: I think, yes, you made references, but you talked about how SIGs are an excellent platform for the IUIs, which we acknowledge and appreciate a lot. How about you reflect on the other way around?

Anriette Esterhuysen: The schools for the IUIs?

Tatevik Grigoryan: The IUIs for schools.

Anriette Esterhuysen: The IUIs have been useful for schools, and I think maybe that responds quite a bit to the second question, which is how can we use the Internet Universality Indicators as we think and plan and evolve our curriculum? I think one of the challenges, I can speak for AfriSIG, a little bit for EuroSIG, because I sometimes am faculty, sometimes at the North African school as well, but I think in the schools, we have to respond to a change in context. The Internet governance environment isn't static, and the challenges that we face and that we try to address with Internet governance evolve as well. I'm not sure how many schools of IG are dealing with climate change. In AfriSIG, we've started for the last few years adding a module on Internet governance and environmental sustainability, for example. I think what the ROAM-X principles give us is a framework to look at how we balance that fairly technical baseline curriculum that we need to keep doing well, how the Internet works, how Internet governance takes place, where and by whom, what kind of decisions and how to participate, with more of the social implications, the social, environmental challenges that digitalization of the Internet addresses. I think for us in AfriSIG, we've always had a very strong focus on digital inclusion and rights, but I think that rights, openness, accessibility, multi-stakeholder, it's a useful checklist. Now, for example, if you look at openness, openness involves competition. Competition is trade. I mean, for this year at AfriSIG, we focused on the African Union's African Continental Free Trade Area's digital protocol, supposed to make it much easier for African businesses to trade across borders, deal with data flows across borders, but in ways that do not violate national sovereignty or security or personal data protection. And I think it's that sort of broad overview that invites openness, accessibility, and multi-stakeholder, plus the cross-cutting, that for us has been quite a useful frame of reference. Even if we don't use it always consciously, I think it creates a useful frame of reference for assessing whether your school's curriculum is succeeding in balancing and combining both the more traditional IG topics and the emerging IG topics, but also allows you to localize them, to approach your curriculum in a way that is not imported from somewhere else, but that is relevant to your region and to the people in your school. So for us, that I think has been quite useful. And the partnership with UNESCO has been useful in that respect. So just having the UNESCO staff in the room is useful as well.

Tatevik Grigoryan: Thank you very much, Anriette. And it's very delightful to be in the same room with all these wonderful people from diverse backgrounds. And actually now, thank you for those comprehensive remarks. I wanted to, we talked about multi-stakeholderism, I wanted to turn to Ariunzul, who not only has led, who is joining us online, she not… only led the IUI assessment in Mongolia, but I wanted her to bring the viewpoint from the NGO. She's working at ADINA, Equal Opportunity NGO, and I just wanted to ask you, Ariunzul, to give your experience in NGO-led efforts to promote digital inclusion and rights-based approaches, and I wanted you to reflect, please, on why you think the promotion of the IUIs through the schools of Internet governance could be important for NGOs. Ariunzul, can you hear us? Is she unmuted? I see that she is online, but I see that her microphone is still muted, as is her camera.
Ariunzul, can you unmute your camera to speak, please? Okay, I think perhaps, in the interest of time, we could move to the next speaker while she would try, perhaps, to solve the issue, yes? Last call, Ariunzul.

Ariunzul Liijuu-Ochir: Oh, oh, sorry, I was unmuted, and now I think it’s working.

Anriette Esterhuysen: Did you, did you hear?

Ariunzul Liijuu-Ochir: Yeah, yeah, yeah. Yeah, yeah, I can hear you.

Anriette Esterhuysen: Over to you. Can you hear me? Yes, yes.

Ariunzul Liijuu-Ochir: Okay, so, hello all. I'm very happy to be a part of this event, and again, congratulations to the UNESCO team, and also the steering committee, on launching the updated IUIs. And as I led the National Assessment Team that conducted the IUI assessment in Mongolia for the first time in 2021, I realized that the role of NGOs, and especially also the civil society organizations, in implementing and enforcing the recommendations from the IUI findings is very crucial. Because NGOs and civil society organizations have a better understanding about their community needs, what works in the field and what doesn't work at the grassroots level, compared to the government, right? So in the Mongolian case, the IUI assessment was led and conducted by an NGO. Therefore, I believe we included more voices from the diverse groups, such as communities who work with human rights, communities who work with the blind community, NGOs working for the deaf community, NGOs and civil societies who work also and collaborate with the older communities, as well as we also included the Kazakh community whose first language is not Mongolian. And the findings of the Mongolian national IUI assessment suggested that there was much room to improve in all ROAM-X pillars, of course. So we have been promoting the findings throughout our work. For example, together with our team, we are currently creating a website where secondary school teachers can learn about how to work with children with disabilities, including children with visual impairment, children with hearing impairment, children with speech impairment, children with autism or children with ADHD and so on. We have also assumed that among the teachers who use our website, they may have a certain impairment which can also restrict their access to use our website. So we co-designed our website with the IT guys and also teachers, teachers of an older age and teachers with low digital literacy, as well as persons with disability. And then, because the Mongolian IUI report displayed that government websites were not really accessible for everyone, especially for persons with disability and the senior citizens, I would say, we want to make our website a good example for the government agencies and public school principals and teachers. Because back in time, back in 2021, we couldn't even find a single website which is accessible for everyone. So with this, our current initiative, we really want to show the best example and also the model for the government officials, especially for the secondary schools, especially during this time, when they were using a lot of, you know, digital training modules, which are not yet accessible for everyone. We changed their mindset and we changed their attitude and we changed the way to work on it. So the website will be launched in, we hope it will be, we will launch it in mid-January 2025. And then that website will be also used nationwide for all of the secondary school teachers who work with children with disability and also school principals themselves. I hope they will learn from our initiative and the practices to produce and create their own, you know, online learning platform to be more accessible for everyone. Because if the school, you know, the resource material is not accessible and not user-friendly for its users, then what's the meaning, right? So that's what we want to promote in Mongolia at the moment. And I'm aware of that. There is very…

Ariunzul Liijuu-Ochir: …less time. So that's it from my side. If there is any question, I'm happy to answer it. Over to you, Tatevik.

Tatevik Grigoryan: Thank you very much, Ariunzul. I forgot to mention that in addition to her role and her work on the IUIs, she also contributed greatly to the revision of the IUIs. She was a member of the steering committee for the revision. And thank you, Ariunzul, for your great contributions to the revision and the new enhanced indicators. I think I will not ask you the follow-up question, Ariunzul. Cautious of time, I will now hand over. Thank you so much. I will hand over. I'm not sure that we're that short on time. Correct me, but this is a 90-minute session, I believe. And so there is some time. Everybody keeps worrying about time, but I just want to make sure that we're okay on time, and we're only about halfway through at this point. We were supposed to finish at 1:45. I think everyone is desperate for the IGF to end because it's the last day and we're all exhausted. Okay, fine. If it's self-generated, I just wanted to make sure we're not feeling limited by the schedule. I'll hand over. Okay. There's a question, please.

Sandra Hoferichter: Thank you, Sandra Hoferichter from the European Summer School on Internet Governance. Is it the right time to ask questions or was there a plan to do it at the very end of the session? It's always the right time. Thank you very much. I think it comes very timely because Tatevik just described the revision of the indicators. And I have two questions and they are… fairly related but go in the same direction. I've heard from several governments, even those who did the first assessment of the ROAM indicators, that this is such a lengthy process and that it takes so much time, that for some governments or for some nations it will be hard to do it for the first time, and those who did it already will possibly not have the resources to do it for a second time. But I understood, on the other hand, that the benefit of those indicators only develops, or is only seen, when you are doing it several times, so that you see how your country is developing. So my question here, and that's the first, would be: in the revision or later on, is this on your radar, that this process should be streamlined a little bit more, so that it is not so exhaustive and that it can be done by several governments that at the moment struggle with resources? And a little bit related to that, at the moment it's not existing, and we had the ROAM indicators at the summer school not this year but last year, but it was merely a presentation of the fact that it exists, and since the practicum is regarded as a very useful element in all these schools, is there a way, or can we maybe start thinking about, how we do a mini-assessment provided by and facilitated by UNESCO in those schools, so that basically when we are doing the practicum, this ROAM assessment could be the practicum, but of course that needs to be a very kind of trial, or just to show how it works, and then the multipliers that are attending the SIGs could then maybe lobby to include it in their respective countries. So these were the two questions. Thank you.

Tatevik Grigoryan: Thank you, Sandra, for these questions. I think Anriette really wants to answer the second question, and Anriette really wants Fabio to answer the first question. I won't monopolize it. Fabio, please, do you want to take the first question or do you want to start with the second?

Anriette Esterhuysen: Fabio, did you hear the question? Were you listening? Good. Now you can answer.

Fabio Senne: No, thank you, Anriette. No, I'm Fabio Senne. I'm from Brazil, from NIC.br, and I can talk about the Brazilian experience. I cannot talk for all the countries, but I agree with you that, yes, the first assessment was really time-consuming. First, because you had to mobilize the stakeholders that are needed for the process, and then you have to understand what are the sources that are available or not available in your country. So, I agree that there is a process that is more time-consuming, but we had the revision of the IUI take this into consideration, so we had a reduced number of questions and indicators in the second version, which is, I think, more accessible to a group of different countries. The second thing is that when you are doing the second assessment, you can also take advantage of what you did in the first one. So, first, legal aspects that don't change each year or every five years, you can use the same material to rely on, and you can just update the more quantitative and the aspects that are more dynamic in the process. So, I do think that a group of countries can do the second version in an easier way. And finally, just to mention that this framework has proven to be very effective in very different contexts. So, if you take large G20 countries, as Germany, Brazil, and Argentina did the process, as well as small islands in the Pacific region, it's very flexible and adaptable to the context of the different countries. So, I do think that this is a good contribution.

Avri Doria: Thank you. I also wanted to point out that we do have a question from Poncelet online with his hand up. So please fit him into your queue.

Anriette Esterhuysen: And should we take his question before we respond? Avri, maybe we should. Yeah.

Avri Doria: Maybe we should, yeah, I think we should, his hand is up. So thank you.

Anriette Esterhuysen: So let's hear you. Except now we have to go because…

Ileleji Poncelet: I think our tech support don't really follow the… Yeah, yeah, that's fine. So, thank you for allowing me to speak, and I wish to thank all the speakers. First, as the lead researcher for the recently completed IUI for the Gambia, one thing that was good for the success of the IUI for the Gambia was that we had an advisory body that comprised a number of institutions and organizations who have been part of the national internet governance initiative. So, that was a success, but it was also sometimes difficult getting data from our Ministry of Finance, and certain data governments were not able to provide, so I would like to hear advice from UNESCO on how you go about that. And I want to commend Anriette for what she has been doing with the African School of Internet Governance, but there's something I would like to throw to her: to get our governments more involved, is there a possibility, for example, for governments to nominate participants, so that we will have governments also being involved in this school? They are young professionals. Thank you. I'm sorry for taking your time.

Anriette Esterhuysen: Can I respond? Thanks very much, Poncelet. Actually, we had a fantastic young woman from the Gambian government who was nominated by the Gambian government this year. And so I think we also had, we had Cameroon, we had Zambia. We had about five governments this year. And then we had about seven parliamentarians, including a senator from Nigeria, who’s here. So we do do that, Poncelet, and we always have government participation. I think the response from governments, many of the governments that we invite to nominate don’t always respond. But the ones that do remain partners for life. So it’s really good. So it’s, and I think you’re absolutely right. I don’t think you can have an effective multi-stakeholder leadership development event if you don’t have government in the room. I just want to respond to Sandra’s. I think it’s a fantastic idea, Sandra. And I think a national school could do it at a national level. And if the school, like I know some, I think, James, you said the Nigerian school is three days. Now, some schools are not as long as AfriSig or EuroSig. I’m not sure how long the South School is. But you could then have your practicum be that advisory committee, because as Poncelet said, that’s not easy, but it’s very important. And as Fabio also mentioned, the IUI methodology is that you establish a multi-stakeholder advisory board. So your role play or your practicum exercise at the school could either to have maybe an imaginary country, and you assess that imaginary country, that could be really quite fun. And the people in the school have to be national statistical agencies, internet service providers, associations, researchers, teachers, child rights activists, feminist activists, you know, whatever. You can really play with that idea if it’s a kind of imaginary country. But if it’s a real country, you can actually make it focused on how do you convene the MAB? How do you identify data sources? So, Sandra, I really love that idea. I think I’d be very happy to do that, actually. I think it would be good for the IUIs, because the IUIs will also learn from that. UNESCO will learn from that as well. Plus, it will be a great exercise for the school, particularly because I think it will make the technical people it will force them to think a little bit about what are the social impact, what are the gender equity components of a universal interoperable Internet. And likewise, it will make the social people, the human rights people, or the content people think a little bit more about the infrastructure, its security, you know, how stable it is, and so on. So I really love that idea.

Tatevik Grigoryan: Thanks very much for the questions and answers. I think I’ll hand over now to James and Olga to move forward with the rest of the agenda.

James Kunle Olorundare: All right. Thank you very much. Thank you for those brilliant ideas. All right. Before we advance, I just want to quickly add something, because I now realize that if you really want to do a very good assessment of the IUI, right, I think you need a good amount of time. It's not something that you can say, okay, you want to do in three days, right? And take, for example, like in the Nigerian School of Internet Governance, like I explained yesterday, what we normally do, right, apart from the intensive classes, right, that we do in three days, we do have, you know, like a virtual session. And even this year, we had it for like five weeks, right? And I realized that that has helped a lot. Why? Because we're able to organize something like, we call it colloquium, right? Well, you can call it symposium, you can call it anything. But we call it colloquium. Just like Anriette was saying the other time, as she was talking about it, I was just figuring out what we did in line with that. So we call it colloquium, right? So we created four groups, okay? And in each group, we have to have people that act roles, okay? So I think this is going to be a very good model if we want to do this IUI thing, and peradventure we will be in need of data. And data is not something that you can just get at the snap of a finger, especially when you're talking about the government, you know, the bureaucracy and all of that. So by the time we start, if we start, let's say, maybe like three weeks ahead, right? Even before we want to start, I think we would have even maybe made requests for the kind of data we would need from the government, so that by the time the data is coming in, it will fit into our program. I know quite well that, yeah, we may not have so much time to run, you know, the School of Internet Governance as a physical event because of resources and so many other things. But integrating these virtual classes into it as part of the preliminary process, right, I think will help a lot. And right from the beginning, when you're starting the virtual classes, we would have known that, OK, yeah, whatever we're going to do, if it's going to be the IUI, if it's going to be the IIB, like we call it in Internet Society, that is the impact assessment brief, right? We're going to start from the beginning. We have figured this out. We have a theme that we're working with. So I think this may work in the case of the IUI, too. But what I may want us to also deliberate on is, if we really want to get a very good result from these, right, that means we need to get all the stakeholders together, right? So we need to involve the government, even right from the beginning. Because for us to get a very good result, the government must be involved. So if we want to contribute, so that's one perspective I want you to throw into this discussion, which is why I hand over to you.

Olga Cavalli : So I think it’s a good place to think about new ideas to bring to the schools. But I think at the same time that all the schools have been going through all these different concepts like access, multi-stakeholder, environment, human rights, cross-cutting things, and environment. And the experience in the South School of Internet Governance is that we could perfectly blend it into now we have three parts in the school. So the school is not one week. It’s like six-month program. We have a pre-training that it’s online and self-assisted. Perfectly would fit in there some videos and some important information. And thank you for inviting me to the NetMundial meeting that you presented the indicators there. Thank you for that. I was present there. And Riet was there as well. in Brazil, in Sao Paulo, and that could perfectly fit in this self-assisted part of the training that we do through two months before the school. Then it’s the five days hybrid training, and then there is the research with the university. All this program is also supervised by a university in Argentina that we have partnered with. So I think we can work in including it very deeply into the program of the school. We also run the School of Internet Governance in Argentina, which is shorter, it’s a three days program, more focused on the Argentina issues. The relevance of multi-stakeholder, I think that the revision and bringing to our memory the principles of NETMundial has been for all of us very, very important. Organizing a multi-stakeholder environment is not easy. It’s much easier to do a multilateral one where all the governments sit together with their advisors and they talk among them and they do a paper or whatever, which is perfectly important. Really doing a relevant multi-stakeholder space needs a lot of time in bringing the right stakeholders. And for the schools, it’s also very interesting to build a group of fellows that it’s really a multi-stakeholder itself. So for having a governmental representative, as you were rightly mentioning, in Africa and Argentina, we have the same important mission to bring them. And once they get engaged, as Anri rightly mentioned, they always get engaged in the program because they love the interaction with the fellows from different stakeholders and from experts also from the different stakeholders. But it takes time. It takes time to build a group of fellows. it takes time to build a program. So that is the part of the work of the schools and the beauty of one of the schools that we organize. And each one has its own particularities and its own focuses. But I think that the match with all the work that UNESCO is doing with the indicators is perfectly good. So, and now Avri especially, she is leading various activities that will focus on and other things that as a dynamic coalition, which is very important for all the schools. So I think it’s a perfect moment. So once I knew that we were going to do this blend workshop, I think it’s a start of a new initiative that will enhance the content of all our schools. I will stop here and maybe I can add something in a moment.

Avri Doria: Hi, this is Avri. Jumping in now as I don't see any hands and while you're figuring out where to go next. I think that some of the ideas that have been brought up here are great, especially as the dynamic coalition itself starts to look at its curriculum, because we had talked about, one of our ongoing activities is to have sort of sample curricula online. And we realized that what we need to update them with this year, or starting this year, is to actually bring this IUI element, and how one would do it, into that particular document, which would then give a hand and a starting place to many of the schools that do look at that sample curriculum when they're starting up. So, I think at this point, we've got a bit of time left and I think it's… Sorry. You can go on with the discussion, but please.

Anriette Esterhuysen: Yes, we have a contribution from the room. Good. Sorry.

Abdelaziz Hilali: Thank you very much. Sorry, Avri. I am Aziz Hilali from Morocco. And I want, if I may, just to give an example from North Africa, where the school has been organized each year since 2018. And when we choose, it's from seven countries of North Africa. And we try to have a multi-stakeholder balance, even within the three participants from each country. So the last SIG we organized, it was in Mauritania. And each time, we have meetings with the government, with all stakeholders, private sector, and governments, to speak about the problems of North Africa, but particularly in Mauritania. Because in North Africa, we have the Sahara, which is the biggest desert in the world. It takes, I think, 30% of Africa. And we try to have some… So I have a question for the panel: how can these IUIs help to work with the global community and improve connectivity in this area, like the Sahara that I mentioned, because we have communities that are isolated and lack telecommunication infrastructure? Thank you.

James Kunle Olorundare: So sorry. I think that's because, you see, I have work to do here. Because I think the question is actually directed to you. Yeah. Yeah. And for you to just expand on that more, I think we also need to talk about, OK, yeah, we want to integrate the IUI into the SIG module, or curriculum, I beg your pardon. So what do you think we should do? Because now the IUI is a new concept, right? Which is just coming out. And yeah, it's been around, right? But it's new to the SIG. That's what I mean when I say it's new. New in the sense that we are just integrating it into the SIG. I guess, I know that Anriette has worked with that before. Maybe the first time, and probably maybe she'll be going for the second one now, right? But like in some of the national schools, I'm sure we've not done that. So I'm telling you from the Nigerian experience. So, but I'm now thinking, I mean, it's time for us to look at how we can integrate that into the national school. Although we have agreed that, right? We should have like a cascaded model, okay? For the national, then going to the regional, it should be more encompassing. But then you know that even if you want to carry out the IUI, it's still going to be at the national level. So I think the SIG at the national level should be involved in the sense that, okay, we need to let people be aware of what is going on, that is one. Then if you want to do this assessment, I think somebody will, I think it was Anriette that mentioned the fact that probably we need to do like a simulation, you know, even in the school, you know? So as to bring people up to speed with how the IUI can be conducted. So I think we need to start looking at that and, of course, the issue of, you know, faculty, talking about somebody that's going to facilitate that. Of course, if you say I should go and facilitate something on the IUI now, I have to start reading and reading. But for somebody like you, being part of the system, you know so much about it. So what will be your advice in terms of getting faculty, you know, to facilitate that? Yeah.

Anriette Esterhuysen: So very, sorry, did you want me to be very quick? I wanted to respond to, sorry, so just a quick response. First to, that’s a very good question, James, but just to Aziz. Aziz, I think your question is how do the IUIs actually help us deal with some of the basic problems, such as the lack of connectivity? I mean, and that’s an, an IUI question, not necessarily a SIG question. And I think the idea of the IUIs is that once the national assessment has been made, that it comes up with recommendations and that there are actions that are identified to be addressed. But it does then remain up to the country to decide how to do that. And what we feel it does though, is that because you do the assessment in a collaborative multi-stakeholder way, that creates a very good basis for which to collaborate on identified, implementing and addressing identified priorities. And James, just in response to your question, I think it’s a really good question. I think Sanda made the brilliant suggestion that we can do simulations. I think you are pointing out that if we wanna do them well, we need the people who can do them well. And maybe this is something that can come out of this recommendation, some form of guideline on how that can be done. I think the one thing though I would caution is to separate simulation from real assessments. Assessments are also political. They are political at a national level. It’s not always easy, as Poncelet also said, to get agreement, to get the data. So I think one would have to be fairly cautious that if you are using the IUIs in SIGs, that you’re sensitive to that. And that’s why maybe having at a national school, deciding how you do it would have to be sensitive and careful. And maybe having made up imaginary countries might in fact sometimes work better. Still serve the same pedagogic purpose, but I do think one would have to be careful that you don’t unintentionally undermine the opportunity to do an actual national assessment because you’ve used it in a school and somehow it has raised concerns. Sorry, that just occurred to me while you were talking.

Tatevik Grigoryan: I just wanted to add something about people who don't know about the IUIs and would be willing to do them. Just to let you know, the IUI framework contains very detailed, step-by-step guidelines on how to implement the indicators, but UNESCO's support doesn't stop at just providing the guidelines. UNESCO provides technical support at every step of the assessment. Since its inception, we have built capacity. We know that it's not something taught at schools and that it's a new idea for many countries, so the way we work is that we provide technical support, we do capacity building for the research team that wants to carry out the assessment, we work with the multi-stakeholder advisory board, and we basically accompany the research team and the country at every step of the assessment. This is something we have been doing and will continue to do. And in addition, to react to what you said about the national schools: as I mentioned, the multi-stakeholder advisory board is an integral part of the assessment, and if a country has not yet engaged in or started the assessment, in a way the AfriSIG, as a multi-stakeholder group, if we can call it that, could be the core of the multi-stakeholder advisory board that then steers and guides the assessment with support from UNESCO. Thanks.

Avri Doria: This is Avri, if I can jump in for a second. I just want to add a comment that was in the chat, since not everybody is able to read the chat, and I think it fits into this discussion. This was from Luis Martinez: is it time to have specialized IG schools, meaning SIGs on single themes such as human rights or connectivity? And I would add, in the context that you are all talking about, in terms of the IUI. I think the idea, by the way, of using an imaginary country instead of your actual country is an excellent one, having designed many of these exercises in the past. There was a question, and an answer from Anriette, and she can follow through on it; I think AfriSIG is already doing that. So I'll put my hand down now, I just wanted to make sure I got the online comments in. Thank you.

Olga Cavalli: Thank you. I think, Avri, this is the perfect starting point for building guidelines from our coalition for all the schools. Then each school has its own way of organizing the group of fellows, calling in the experts, and building a real multi-stakeholder program. So with these guidelines, perhaps not as an assessment, which would get more into the particularities of each region and country, we could build from the dynamic coalition a set of guidelines so that we can use the indicators blended into the program. And responding to my dear friend Luis Miguel: in our school, every year we have a special point of focus in the program. It might be cybersecurity or the development goals; every year we find a focus, which is not the only thing included in the program, but there is a special emphasis. It changes every year; last year it was artificial intelligence and cybersecurity. It depends on what is happening in the internet environment.

James Kunle Olorundare: Just to add one point and to confirm what Olga said. I think that's the right way to go: for every session of the school, there should be a focal theme, a thematic area that you want to focus on. Of course, you are still going to deal with other modules that are relevant to the school, but there should be a focus, and the focus will be what you build your output document on, be it a policy brief, an internet assessment brief, or the IUI, like the one we're talking about. So I think that is very important. But for me, on the issue of having specialized schools on internet governance, well, I don't know how that is going to work out. I say that because now we're talking about having something like a syllabus, something that can work for all schools, which would mean, first, there must be basic fundamental modules that you need to take before you talk about what you focus on, based on your environment or on the hot topics that are coming up within your environment. So I think we should focus more on that for now, because that will help this coalition. Thank you very much. And I don't know if anybody wants to make any…

Tatevik Grigoryan: Okay, Avri. No, there was a question in the audience.

Audience: Good afternoon. My name is Dr. Jose Fisata from Chad. I am the coordinator for Chad this year. Well, in terms of collaboration and engagement in these processes, I think CSOs play a very important role, I mean, in creating awareness, et cetera. So I think it's quite important to involve civil society organizations to be able to address these issues together and contribute to the process. So how can the SIGs enable us to have a program that will bring together all the SIGs, including in countries that have not yet conducted a national assessment, and talent development, so that we have a framework or, let's say, formal guidelines to work on together in collaboration? Thank you.

James Kunle Olorundare: If I may just make a comment on that first, before you make yours. Let me use our model as an example. In Nigeria, we have the NIGF, that is the Nigeria Internet Governance Forum, and its multi-stakeholder advisory group, the MAG. That group includes civil society, represented of course by the Internet Society; by the way, I'm the president of ISOC in Nigeria, so ISOC is a primary member of the NIGF MAG. However, we have other members too. The relevant government agencies are members of the same MAG: the National Information Technology Development Agency, because of its role within the IG space, and the Nigerian Communications Commission. Then we have NIRA, the Nigeria Internet Registration Association, which is also a member, and academia as well; academics are nominated into the NIGF MAG. So if we want to run the school, what we normally do is collaborate with the NIGF MAG and carry them along. As a matter of fact, the process is going to start all over again in a matter of weeks now. When we start the process, we have meetings, and at every NIGF meeting the NSIG, that is the Nigerian School on Internet Governance, gives a report to show: this is where we are, this is what we are doing, and this is where we want you to come in. So it's a collaborative effort, and that multistakeholderism has been established; we already have a structure that we are working with. ISOC is the convener of the school, so we work with all these groups, and when we make a call for applications, we let them know that we want the fellows who will participate in the school, the cohort for that year, to come from all those places too. In addition, of course, we throw it open so that everybody can participate, especially the youth, but also the older ones. Interestingly, this year when we finished the school we had one very old man in the school, and he said, of course, I want to know what is going on within the IG space. Thank you very much.

Avri Doria: I don't seem to have any hands up in the online space. With 10 minutes left, perhaps people want to take a little chance to sum up. I have been working on the takeaways and the calls to action that we have to contribute almost immediately, and I've been trying to pick those up as you all spoke. But as I say, I don't have any online requests to speak, and you've got a few minutes left to do any summing up that you would like.

Anriette Esterhuysen: Thank you very much, Avri. I don't have much. Just in terms of the comment from Chad: I think we should discuss that. You are raising good suggestions, and I think we can use the dynamic coalition to have those discussions. These are fantastic ideas. It also makes me realize, and this is my takeaway, that most schools are operating on a fairly shoestring budget, as is the dynamic coalition, and thanks to Avri for doing that work. I think if we do have more capacity as a community of SIGs, that will also make it easier for us to partner with other initiatives like the IUIs. I don't think it will happen overnight, but I do think there's a need, a demand, and a will for that kind of partnership. So just thanks to everyone, to UNESCO, to Avri, to the SIGs and the IUIs, and to everyone for coming to the session. Over to you.

Olga Cavalli: This session is being recorded and will also be available for the schools that were not participating today and were not here at the IGF or online, so it will serve as a reference. From our schools, regional and national, we offer our help in sharing experiences, and I think the coalition is the perfect space to work together with the other coalition and build upon all the work that Avri is doing in framing all these indicators into our curricula and our activities. So feel free to contact us, even if you are not in this room, virtually or on site, and even if you are watching this recording afterwards. Thank you all, and thank you, Avri, for being there. What time is it there, Avri? Is it very early for you now?

Avri Doria: Oh it’s morning, you know, it’s 6.30 in the morning, it’s nothing compared to the one o’clock in the morning session I did earlier. So very much, I totally believe in living a flexible schedule as I can and moving my schedule around to suit the place I am online participating in. I really appreciate, you know, all the contributions and the talk that came in. I appreciate having received the points that I’m now trying to put in edit into the report. We’ll be going out with a report on this. I’ll be consulting you all and thank you AVRI very much and especially as this was the last of the regular sessions or in the last of the regular sessions. slots before the end. I understand how you’re all eager to get yourself to the final sessions, the closing sessions, and thank you so much, and thanks for helping me do this from an online perspective. So I’ll pass it back to people on stage to end it, to close it, but thank you very much.

Tatevik Grigoryan: Thank you very much. I don't have much to add. I just want to say that I really enjoyed this discussion and the concrete suggestions that came out of it, and we are willing and ready to continue the cooperation on the IUIs from UNESCO's side with the SIGs. Thank you so much, Avri, for coordinating and organizing this session. Thank you to Anriette, to Ariunzul online, to Olga and James, and for the contributions from the audience. Thank you so much, and I look forward to working with many of you. Thank you.

James Kunle Olorundare: All right, so on behalf of the SIG Coalition, we want to thank you, Tatevik, for a job well done. We appreciate this, and I hope this is just the beginning of the collaboration. We are just starting, and we want it to continue, so please be available any time we call; I'm sure that any moment from now you'll be receiving calls even from the national schools, especially with respect to the IUI. All right, ladies and gentlemen, I think we have come to the end of this session, and I think we should give ourselves a round of applause. Thank you.

Tatevik Grigoryan

Speech speed

122 words per minute

Speech length

1538 words

Speech time

754 seconds

IUIs provide a useful framework for assessing internet development at national level

Explanation

The Internet Universality Indicators (IUIs) are a tool for analyzing internet development and fostering multi-stakeholder cooperation. They are based on UNESCO’s ROAM-X principles and help assess internet development at the national level without ranking or comparison.

Evidence

Over 40 countries have conducted IUI assessments, with Africa the leading region: 17 countries there have completed the assessment.

Major Discussion Point

Major Discussion Point 1: Integration of Internet Universality Indicators (IUIs) into Schools of Internet Governance (SIGs)

Agreed with

Anriette Esterhuysen

Olga Cavalli

Sandra Hoferichter

Agreed on

Integration of IUIs into SIG curricula

UNESCO provides technical support and capacity building for IUI assessments

Explanation

UNESCO offers comprehensive support for countries implementing IUI assessments. This includes providing detailed guidelines, technical assistance, and capacity building for research teams and multi-stakeholder advisory boards.

Evidence

The speaker mentioned that UNESCO accompanies the research team and the country at every step of the assessment process.

Major Discussion Point

Major Discussion Point 2: Challenges and Opportunities in Implementing IUIs

The IUI framework fosters multi-stakeholder cooperation and discussions

Explanation

The Internet Universality Indicators framework is designed to promote multi-stakeholder cooperation and dialogue. It serves as a tool for bringing together diverse stakeholders to assess and discuss internet development at the national level.

Major Discussion Point

Major Discussion Point 3: Enhancing Multi-stakeholder Collaboration in Internet Governance

Agreed with

Anriette Esterhuysen

Olga Cavalli

James Kunle Olorundare

Agreed on

Importance of multi-stakeholder collaboration

Anriette Esterhuysen

Speech speed

138 words per minute

Speech length

2644 words

Speech time

1144 seconds

SIGs are an excellent platform for promoting and implementing IUIs

Explanation

Schools of Internet Governance (SIGs) provide a cross-section of people from different stakeholder groups, making them ideal for discussing and implementing IUIs. The smaller, more focused environment of SIGs allows for more in-depth discussions compared to larger events like IGFs.

Evidence

UNESCO has been using the African School of Internet Governance as a platform for sharing IUIs since 2015.

Major Discussion Point

Major Discussion Point 1: Integration of Internet Universality Indicators (IUIs) into Schools of Internet Governance (SIGs)

Agreed with

Tatevik Grigoryan

Olga Cavalli

Sandra Hoferichter

Agreed on

Integration of IUIs into SIG curricula

Care must be taken not to undermine actual national IUI assessments when using them in SIGs

Explanation

While simulations of IUI assessments in SIGs can be valuable, it’s important to be cautious about how they are implemented. Real assessments have political implications at the national level, and simulations should not unintentionally create concerns that could hinder actual national assessments.

Major Discussion Point

Major Discussion Point 1: Integration of Internet Universality Indicators (IUIs) into Schools of Internet Governance (SIGs)

Differed with

Sandra Hoferichter

Differed on

Integration of IUIs into SIG curricula

Involving government representatives in SIGs is important but takes time

Explanation

Government participation is crucial for effective multi-stakeholder leadership development in SIGs. While it can be challenging to get government representatives involved, those who do participate often become long-term partners.

Evidence

The speaker mentioned having government participation in AfriSIG, including representatives from Cameroon, Zambia, and Nigeria.

Major Discussion Point

Major Discussion Point 3: Enhancing Multi-stakeholder Collaboration in Internet Governance

Agreed with

Olga Cavalli

James Kunle Olorundare

Tatevik Grigoryan

Agreed on

Importance of multi-stakeholder collaboration

Guidelines for integrating IUIs into SIG curricula should be developed

Explanation

There is a need to create guidelines for incorporating Internet Universality Indicators into the curricula of Schools of Internet Governance. These guidelines would help standardize the integration process and ensure effective implementation across different SIGs.

Major Discussion Point

Major Discussion Point 4: Evolution of SIG Curricula and Formats

Olga Cavalli

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Integrating IUIs into SIG curricula can enhance content and student learning

Explanation

Incorporating IUIs into SIG programs can enrich the curriculum and improve student learning experiences. This integration can be done through various parts of the program, including pre-training, hybrid training, and research components.

Evidence

The speaker mentioned that the South School of Internet Governance has a six-month program with three parts, including online self-assisted pre-training where IUI content could be integrated.

Major Discussion Point

Major Discussion Point 1: Integration of Internet Universality Indicators (IUIs) into Schools of Internet Governance (SIGs)

Agreed with

Tatevik Grigoryan

Anriette Esterhuysen

Sandra Hoferichter

Agreed on

Integration of IUIs into SIG curricula

SIGs help create networks between diverse stakeholders in internet governance

Explanation

Schools of Internet Governance facilitate networking and interaction between participants from different stakeholder groups. This multi-stakeholder environment is valuable for fostering understanding and collaboration in internet governance.

Evidence

The speaker noted that government representatives who participate in SIGs often become engaged in the program due to the interaction with fellows from different stakeholders.

Major Discussion Point

Major Discussion Point 3: Enhancing Multi-stakeholder Collaboration in Internet Governance

Agreed with

Anriette Esterhuysen

James Kunle Olorundare

Tatevik Grigoryan

Agreed on

Importance of multi-stakeholder collaboration

Virtual components can extend SIG programs beyond short in-person sessions

Explanation

Incorporating virtual elements into SIG programs can extend the learning experience beyond brief in-person sessions. This approach allows for more comprehensive and flexible training in internet governance topics.

Evidence

The speaker described the South School of Internet Governance as a six-month program with online pre-training, a five-day hybrid training, and research with a university partner.

Major Discussion Point

Major Discussion Point 4: Evolution of SIG Curricula and Formats

Sandra Hoferichter

Speech speed

160 words per minute

Speech length

374 words

Speech time

140 seconds

Simulations of IUI assessments could be valuable learning exercises in SIGs

Explanation

Conducting simulations of IUI assessments within SIGs could serve as effective learning tools. This approach would allow participants to gain hands-on experience with the IUI framework in a controlled environment.

Major Discussion Point

Major Discussion Point 1: Integration of Internet Universality Indicators (IUIs) into Schools of Internet Governance (SIGs)

Agreed with

Tatevik Grigoryan

Anriette Esterhuysen

Olga Cavalli

Agreed on

Integration of IUIs into SIG curricula

Differed with

Anriette Esterhuysen

Differed on

Integration of IUIs into SIG curricula

Fabio Senne

Speech speed

137 words per minute

Speech length

276 words

Speech time

120 seconds

First IUI assessments can be time-consuming but subsequent ones are easier

Explanation

Initial IUI assessments require significant time and effort to mobilize stakeholders and identify data sources. However, subsequent assessments become more efficient as countries can build on previous work and update only the most dynamic aspects.

Evidence

The speaker mentioned that legal aspects that don’t change frequently can be reused in subsequent assessments, while more dynamic and quantitative aspects can be updated.

Major Discussion Point

Major Discussion Point 2: Challenges and Opportunities in Implementing IUIs

Ileleji Poncelet

Speech speed

165 words per minute

Speech length

299 words

Speech time

108 seconds

Obtaining data from government agencies for IUI assessments can be difficult

Explanation

Collecting necessary data from government agencies for IUI assessments can be challenging. This difficulty can impact the completeness and accuracy of the assessment results.

Evidence

The speaker shared his experience as the lead researcher for the Gambia’s IUI assessment, mentioning difficulties in obtaining data from the Ministry of Finance.

Major Discussion Point

Major Discussion Point 2: Challenges and Opportunities in Implementing IUIs

James Kunle Olorundare

Speech speed

162 words per minute

Speech length

2736 words

Speech time

1007 seconds

Multi-stakeholder advisory boards are crucial for successful IUI implementation

Explanation

Establishing multi-stakeholder advisory boards is essential for effective IUI implementation. These boards ensure diverse perspectives are included in the assessment process and help overcome challenges in data collection and stakeholder engagement.

Evidence

The speaker described the structure of the Nigeria Internet Governance Forum (NIGF) MAG, which includes representatives from civil society, government agencies, academia, and other stakeholders.

Major Discussion Point

Major Discussion Point 2: Challenges and Opportunities in Implementing IUIs

Agreed with

Anriette Esterhuysen

Olga Cavalli

Tatevik Grigoryan

Agreed on

Importance of multi-stakeholder collaboration

National Internet Governance Forums can facilitate multi-stakeholder collaboration

Explanation

National Internet Governance Forums (IGFs) can serve as platforms for multi-stakeholder collaboration in internet governance. These forums bring together diverse stakeholders and can support initiatives like Schools of Internet Governance.

Evidence

The speaker described how the Nigerian School on Internet Governance collaborates with the NIGF MAG, providing regular updates and seeking input on their activities.

Major Discussion Point

Major Discussion Point 3: Enhancing Multi-stakeholder Collaboration in Internet Governance

SIGs should have a core curriculum with flexibility for regional focus areas

Explanation

Schools of Internet Governance should maintain a core curriculum covering fundamental topics while allowing flexibility to address region-specific issues or emerging hot topics. This approach ensures a balanced and relevant learning experience for participants.

Evidence

The speaker suggested having basic fundamental modules that all schools should cover, followed by more focused topics based on regional environments or current issues.

Major Discussion Point

Major Discussion Point 4: Evolution of SIG Curricula and Formats

Abdelaziz Hilali

Speech speed

97 words per minute

Speech length

171 words

Speech time

104 seconds

IUIs can help address basic connectivity issues in underserved regions

Explanation

The Internet Universality Indicators can be used to identify and address connectivity challenges in underserved areas. This is particularly relevant for regions with geographical barriers to internet access.

Evidence

The speaker mentioned the Sahara desert in North Africa, which covers 30% of Africa and poses challenges for telecommunications infrastructure and connectivity.

Major Discussion Point

Major Discussion Point 2: Challenges and Opportunities in Implementing IUIs

Audience

Speech speed

120 words per minute

Speech length

116 words

Speech time

57 seconds

Civil society organizations play a key role in internet governance processes

Explanation

Civil society organizations are important stakeholders in internet governance, particularly in raising awareness and contributing to policy processes. Their involvement is crucial for addressing internet governance issues comprehensively.

Major Discussion Point

Major Discussion Point 3: Enhancing Multi-stakeholder Collaboration in Internet Governance

Avri Doria

Speech speed

149 words per minute

Speech length

820 words

Speech time

329 seconds

Specialized thematic SIGs could be developed on specific internet governance topics

Explanation

There is potential for creating specialized Schools of Internet Governance focused on specific themes within internet governance. This approach could allow for more in-depth exploration of particular topics.

Evidence

The speaker shared a comment from the chat suggesting the idea of specialized SIGs on single themes such as human rights or connectivity.

Major Discussion Point

Major Discussion Point 4: Evolution of SIG Curricula and Formats

Ariunzul Liijuu-Ochir

Speech speed

140 words per minute

Speech length

614 words

Speech time

262 seconds

SIGs play an important role in capacity building for internet governance

Explanation

Schools of Internet Governance are crucial for developing capacity in internet governance among diverse stakeholders. They provide a platform for learning about policy issues, technical concepts, and participation spaces in internet governance.

Major Discussion Point

Major Discussion Point 4: Evolution of SIG Curricula and Formats

Agreements

Agreement Points

Integration of IUIs into SIG curricula

Tatevik Grigoryan

Anriette Esterhuysen

Olga Cavalli

Sandra Hoferichter

IUIs provide a useful framework for assessing internet development at national level

SIGs are an excellent platform for promoting and implementing IUIs

Integrating IUIs into SIG curricula can enhance content and student learning

Simulations of IUI assessments could be valuable learning exercises in SIGs

Speakers agreed that integrating Internet Universality Indicators (IUIs) into Schools of Internet Governance (SIGs) curricula would be beneficial for enhancing learning experiences and promoting the use of IUIs.

Importance of multi-stakeholder collaboration

Anriette Esterhuysen

Olga Cavalli

James Kunle Olorundare

Tatevik Grigoryan

Involving government representatives in SIGs is important but takes time

SIGs help create networks between diverse stakeholders in internet governance

Multi-stakeholder advisory boards are crucial for successful IUI implementation

The IUI framework fosters multi-stakeholder cooperation and discussions

Speakers emphasized the importance of multi-stakeholder collaboration in both SIGs and IUI implementation, highlighting the need for diverse perspectives and engagement from various sectors.

Similar Viewpoints

Both speakers highlighted challenges in implementing IUIs, particularly regarding the time and effort required for data collection and stakeholder engagement.

Fabio Senne

Ileleji Poncelet

First IUI assessments can be time-consuming but subsequent ones are easier

Obtaining data from government agencies for IUI assessments can be difficult

Both speakers advocated for flexible and adaptable SIG curricula that can address both core topics and region-specific issues, with the potential for extended learning through virtual components.

James Kunle Olorundare

Olga Cavalli

SIGs should have a core curriculum with flexibility for regional focus areas

Virtual components can extend SIG programs beyond short in-person sessions

Unexpected Consensus

Use of imaginary countries for IUI simulations in SIGs

Anriette Esterhuysen

Sandra Hoferichter

Care must be taken not to undermine actual national IUI assessments when using them in SIGs

Simulations of IUI assessments could be valuable learning exercises in SIGs

While discussing the integration of IUIs into SIGs, there was an unexpected consensus on the potential use of imaginary countries for simulations. This approach could provide valuable learning experiences while avoiding potential political sensitivities associated with real national assessments.
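To make this concrete, below is a minimal, illustrative sketch in Python of how a SIG might structure such a classroom simulation for an imaginary country. The five category names follow UNESCO's ROAM-X grouping referenced elsewhere in this report; the country name, the sample indicators, and the 0-2 scoring scale are invented here purely for teaching purposes and are not part of the official IUI framework, which is far more detailed and is explicitly not a scoring or ranking tool.

```python
# Toy ROAM-X style simulation for an imaginary country (teaching aid only).
from statistics import mean

# Hypothetical evidence gathered by student "stakeholder groups" during the exercise.
assessment = {
    "Rights": {
        "Freedom of expression is protected online": 2,
        "A data protection law exists and is enforced": 1,
    },
    "Openness": {
        "Open government data is published": 1,
        "Net neutrality rules are in place": 0,
    },
    "Accessibility": {
        "Broadband is affordable outside the capital": 0,
        "Local-language content is widely available": 1,
    },
    "Multi-stakeholder participation": {
        "A national IGF meets regularly": 2,
        "Civil society sits on policy bodies": 1,
    },
    "Cross-cutting": {
        "Gender-disaggregated access data is collected": 0,
    },
}

def summarise(country: str, scores: dict) -> None:
    """Print average scores per category and flag the lowest-scored items as priorities."""
    print(f"Simulated assessment for {country} (imaginary country)")
    for category, indicators in scores.items():
        print(f"  {category}: average {mean(indicators.values()):.1f} / 2")
        for name, score in indicators.items():
            if score == 0:
                print(f"    priority for discussion: {name}")

summarise("Republic of Aralia", assessment)
```

In a classroom setting, each stakeholder group could be asked to justify its scores and then negotiate the final priority list, mirroring the role of a multi-stakeholder advisory board without touching any real national data.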

Overall Assessment

Summary

The main areas of agreement centered around the integration of IUIs into SIG curricula, the importance of multi-stakeholder collaboration, and the need for flexible and adaptable approaches in both IUI implementation and SIG programs.

Consensus level

There was a high level of consensus among speakers on the potential benefits of integrating IUIs into SIGs and the importance of multi-stakeholder engagement. This consensus suggests a strong foundation for future collaboration between IUI implementers and SIG organizers, potentially leading to more comprehensive and effective internet governance education and assessment processes.

Differences

Different Viewpoints

Integration of IUIs into SIG curricula

Anriette Esterhuysen

Sandra Hoferichter

Care must be taken not to undermine actual national IUI assessments when using them in SIGs

Simulations of IUI assessments could be valuable learning exercises in SIGs

While Sandra Hoferichter suggests using IUI simulations as learning exercises in SIGs, Anriette Esterhuysen cautions about potential negative impacts on real national assessments.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement revolve around the implementation of IUIs in SIG curricula and the structure of SIG programs.

Difference level

The level of disagreement among speakers is relatively low, with most differences being nuanced rather than fundamental. This suggests a general consensus on the importance of integrating IUIs into SIGs and enhancing multi-stakeholder collaboration in internet governance. The minor differences in approach do not significantly impact the overall goals of improving internet governance education and assessment.

Partial Agreements

Both speakers agree on the need for flexible and comprehensive SIG curricula, but differ in their approaches. James emphasizes a core curriculum with regional adaptations, while Olga focuses on extending programs through virtual components.

James Kunle Olorundare

Olga Cavalli

SIGs should have a core curriculum with flexibility for regional focus areas

Virtual components can extend SIG programs beyond short in-person sessions

Takeaways

Key Takeaways

Internet Universality Indicators (IUIs) provide a valuable framework for assessing internet development at the national level

Schools of Internet Governance (SIGs) are an excellent platform for promoting and implementing IUIs

Integrating IUIs into SIG curricula can enhance content and student learning

Multi-stakeholder collaboration is crucial for successful implementation of IUIs and internet governance processes

SIGs play an important role in capacity building for internet governance

Resolutions and Action Items

Develop guidelines for integrating IUIs into SIG curricula

Explore the possibility of creating simulations of IUI assessments as learning exercises in SIGs

Use the Dynamic Coalition on Schools of Internet Governance to facilitate collaboration between SIGs and the IUI framework

Consider including IUIs as a module in SIG programs

Unresolved Issues

How to effectively streamline the IUI assessment process to make it less time-consuming and resource-intensive

How to address challenges in obtaining data from government agencies for IUI assessments

Whether specialized thematic SIGs should be developed for specific internet governance topics

How to balance core SIG curriculum with regional focus areas and emerging topics

Suggested Compromises

Use imaginary countries for IUI assessment simulations in SIGs to avoid potential conflicts with real national assessments

Implement a cascaded model for IUI integration, with different levels of depth for national, regional, and global SIGs

Combine in-person SIG sessions with virtual components to extend program duration and allow for more in-depth coverage of topics like IUIs

Thought Provoking Comments

I think it’s a fantastic idea, Sandra. And I think a national school could do it at a national level. And if the school, like I know some, I think, James, you said the Nigerian school is three days. Now, some schools are not as long as AfriSig or EuroSig. I’m not sure how long the South School is. But you could then have your practicum be that advisory committee, because as Poncelet said, that’s not easy, but it’s very important.

speaker

Anriette Esterhuysen

reason

This comment introduced a creative way to incorporate the Internet Universality Indicators (IUIs) into the curriculum of Schools of Internet Governance (SIGs) through practical exercises.

impact

It sparked discussion about how to integrate IUIs into SIG curricula in a meaningful way, leading to further ideas about implementation and considerations of different school formats.

I think at this point, we’ve got a bit of time left and I think it’s- Sorry. You can go on with discussion, but please.

speaker

Avri Doria

reason

While brief, this interjection was important in guiding the flow of the discussion and ensuring all voices were heard.

impact

It opened up the floor for additional comments and questions from participants, leading to a more inclusive discussion.

So I think we need to start looking at that and of course, the issue of, you know, faculty talking about somebody that’s going to facilitate that. Of course, if you say, I should go and facilitate something on IUI now, I have to start reading and reading. But for somebody like you, for being part of the system, you know so much about it. So what will be your advice in terms of getting faculty, you know, to facilitate that?

speaker

James Kunle Olorundare

reason

This comment raised an important practical consideration about the expertise needed to teach IUIs effectively in SIGs.

impact

It led to a discussion about capacity building for SIG faculty and the need for guidelines on how to incorporate IUIs into curricula.

UNESCO provides technical support at every step of the assessment. Since this inception, we do build the capacity. We know that it’s not something taught at schools or it’s a new idea for many countries, and the way we work, we provide the technical support, we do capacity building for the research team that wants to carry out the assessment, we work with the multi-stakeholder advisory board, and we basically accompany the research team and the country at every step of the assessment.

speaker

Tatevik Grigoryan

reason

This comment provided crucial information about UNESCO’s role in supporting IUI implementation, addressing concerns about expertise and capacity.

impact

It clarified the level of support available for implementing IUIs, potentially alleviating concerns about the complexity of the process and encouraging more SIGs to consider incorporating IUIs.

Overall Assessment

These key comments shaped the discussion by moving it from theoretical considerations of incorporating IUIs into SIG curricula to practical implementation strategies. They highlighted the need for creative approaches to integration, raised important questions about faculty expertise and capacity building, and provided information about available support from UNESCO. The discussion evolved from simply considering the idea of using IUIs in SIGs to exploring concrete ways to make it happen, considering challenges, and identifying resources and support mechanisms. This progression led to a more nuanced and actionable conversation about the potential collaboration between IUIs and SIGs.

Follow-up Questions

How can the Internet Universality Indicators (IUI) assessment process be streamlined to make it less time-consuming and resource-intensive for countries?

speaker

Sandra Hoferichter

explanation

This is important to enable more countries to conduct initial assessments and repeat assessments over time to track progress.

How can a mini-assessment or simulation of the IUI process be incorporated into School of Internet Governance (SIG) programs?

speaker

Sandra Hoferichter

explanation

This would help familiarize participants with the IUI framework and potentially encourage its adoption in more countries.

How can artificial intelligence be leveraged to improve connectivity in remote areas like the Sahara desert?

speaker

Abdelaziz Hilali

explanation

This addresses the challenge of providing internet access to isolated communities lacking telecommunications infrastructure.

How can Schools of Internet Governance (SIGs) create a framework or formal guidelines for collaboration among SIGs in countries that have not yet conducted national IUI assessments?

speaker

Dr. Jose Fisata

explanation

This would help standardize approaches and facilitate knowledge sharing among SIGs, particularly for countries new to the IUI process.

How can the Internet Universality Indicators (IUI) be integrated into national School of Internet Governance (SIG) curricula?

speaker

James Kunle Olorundare

explanation

This would help raise awareness of the IUI framework and potentially increase its adoption and implementation at the national level.

Is it time to have specialized Internet Governance schools focusing on single themes such as human rights or connectivity?

speaker

Luis Martinez (via chat)

explanation

This could allow for more in-depth exploration of specific internet governance topics.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Lightning Talk #136 The Embodied Web: Rethinking Privacy in 3D Computing

Lightning Talk #136 The Embodied Web: Rethinking Privacy in 3D Computing

Session at a Glance

Summary

This discussion, led by Stanford Law School professor Brittan Heller, focuses on the privacy implications of emerging 3D computing technologies, particularly extended reality (XR) and spatial computing. Heller explains how these technologies, which blend physical and digital realms, collect deeply personal data including body movements, eye tracking, and physiological responses. This data collection is far more extensive than traditional computing platforms and poses significant privacy risks.

Heller highlights that current privacy laws are ill-equipped to handle the nuances of immersive technologies. For instance, opt-out mechanisms are ineffective as spatial computing relies on body-based data for basic functionality. Recent studies have shown that behavioral data from XR devices can uniquely identify individuals and reveal sensitive information about age, gender, and even political affiliation.

The discussion delves into the potential misuse of this data, including targeted advertising based on involuntary bodily responses and the extraction of medical information unknown even to the user. Heller emphasizes the need for new privacy frameworks that address these unique challenges, including protections for environmental and body-based data.

The talk also touches on recent developments in generative AI for creating 3D virtual worlds, which while exciting, further complicate privacy concerns. Heller advocates for integrating privacy by design principles into these technologies as they evolve. She concludes by calling for proactive measures to develop legal, technical, and ethical standards that ensure user control over personal data in this new “embodied web” era.

Keypoints

Major discussion points:

– The rise of spatial computing and extended reality (XR) technologies that blend physical and digital realms

– Privacy risks associated with XR devices collecting deeply personal biometric and behavioral data

– Gaps in existing privacy laws and frameworks in addressing XR-specific data collection and use

– Recent developments in generative AI for creating 3D virtual environments

– Need for new privacy frameworks and safeguards to protect user rights in spatial computing

Overall purpose:

The purpose of this discussion was to raise awareness about the privacy and ethical implications of emerging 3D and spatial computing technologies, particularly extended reality (XR) devices. The speaker aimed to highlight the unique challenges these technologies pose to existing privacy frameworks and advocate for proactive development of new safeguards and regulations.

Tone:

The overall tone was informative and cautionary. The speaker presented the topic with a sense of urgency, emphasizing both the exciting possibilities of these technologies and the critical need to address their potential risks. While highlighting concerns, the tone remained optimistic about the potential to develop responsible and ethical approaches to spatial computing if action is taken proactively.

Speakers

– Brittan Heller: Professor at Stanford Law School

– Nouha Ben Lahbib: Project manager for an incubator for creative startups using new technology like VR and XR

Full session report

Extended Reality (XR) and Privacy: Navigating the Challenges of Spatial Computing

This discussion, led by Stanford Law School professor Brittan Heller, explores the privacy implications of emerging 3D computing technologies, particularly extended reality (XR) and spatial computing. The conversation emphasizes the urgent need for new privacy frameworks and safeguards in light of these technologies’ unique data collection capabilities and potential risks.

Introduction to XR and Spatial Computing

Heller introduces the concept of the “embodied web,” where our physical bodies become the interface for digital interactions. This new paradigm of computing blends physical and digital realms, creating immersive experiences characterized by presence, immersion, and embodiment. While offering exciting possibilities, these technologies also present unprecedented privacy challenges.

Key Privacy Concerns in XR Technologies

XR devices collect deeply personal data far more extensive than traditional computing platforms, including body movements, eye tracking, and physiological responses. The privacy risks associated with this data collection are significant:

1. Unique Identification: Recent studies have shown that behavioral data from XR devices can uniquely identify individuals and infer over 40 personal attributes, including age, gender, substance use, and political affiliation (a simplified illustration of this kind of matching follows this list).

2. Sensitive Information Extraction: Eye tracking data can reveal highly sensitive medical and personal information, including truthfulness, sexual attraction, and preclinical signs of physical and mental health conditions.

3. Targeted Advertising: The potential misuse of involuntary bodily responses for targeted advertising, as illustrated by Heller’s scenario of receiving car insurance advertisements after playing a virtual reality racing game.
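As a rough illustration of the first point above, the sketch below shows, in deliberately simplified form, how even crude summary statistics of head-motion telemetry can behave like a fingerprint: a new session from a synthetic user is matched back to that user by nearest-neighbour comparison. This is a toy example on random data with invented feature choices; it is not the method used in the studies Heller cites, which rely on far richer features and models.

```python
# Toy demonstration: motion telemetry as an identifier (synthetic data only).
import numpy as np

rng = np.random.default_rng(0)

def session(user_bias: np.ndarray) -> np.ndarray:
    """Simulate 90 seconds of head pitch/yaw/roll samples (10 Hz) for one user."""
    noise = rng.normal(0.0, 1.0, size=(900, 3))
    return noise + user_bias  # each user has a habitual head posture

def fingerprint(samples: np.ndarray) -> np.ndarray:
    """Reduce a session to a small feature vector: per-axis means and standard deviations."""
    return np.concatenate([samples.mean(axis=0), samples.std(axis=0)])

# Enroll 50 synthetic users with one recorded session each.
biases = rng.normal(0.0, 2.0, size=(50, 3))
enrolled = np.array([fingerprint(session(b)) for b in biases])

# A new, unlabeled session from user 7 is matched by nearest neighbour.
probe = fingerprint(session(biases[7]))
distances = np.linalg.norm(enrolled - probe, axis=1)
print("identified as user", int(np.argmin(distances)))  # almost always prints 7
```

The point of the toy is only that habitual motion patterns persist across sessions; with richer telemetry and real models, the studies described here achieved unique identification at the scale of tens of thousands of users.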

Challenges in Regulating XR Technologies

The discussion highlights several key challenges in regulating and protecting privacy in XR environments:

1. Inadequacy of Current Laws: Existing privacy laws are ill-equipped to handle the nuances of immersive technologies.

2. Essential Data for Functionality: XR devices rely on body-based data for basic functionality, complicating privacy protection efforts.

3. New Data Categories: Environmental and body-based data collected by XR devices are not adequately covered by existing regulations.

4. Limitations of Opt-Out Mechanisms: Traditional opt-out approaches are ineffective in XR environments due to the essential nature of data collection for device functionality.

Advancements in Generative AI and 3D Environments

Recent developments in generative AI have revolutionized the creation of 3D virtual worlds. Heller notes that organizations such as NVIDIA, MIT, and Google have made significant strides in this area, allowing for the rapid generation of navigable 3D environments from text prompts. While these advancements open up creative possibilities, they also further complicate privacy concerns in XR environments.

Psychological Impact and Legal Implications

The immersive nature of XR experiences necessitates considering them as extensions of lived reality. Heller cites the example of a UK public prosecutor investigating sexual abuse in the metaverse, highlighting the need for strong safeguards to protect users’ rights and safety in virtual spaces.

Approaching XR Privacy Issues

Heller suggests four steps for addressing XR privacy concerns at home:

1. Understand the technology and its implications

2. Identify personal boundaries and comfort levels

3. Research privacy settings and options on XR devices

4. Advocate for privacy protections and responsible development

The Future of Privacy Forum has also introduced the concept of “body-based data” to describe the unique data generated in XR environments.

Awareness and Education

Both Heller and Nouha Ben Lahbib, a project manager for an XR startup incubator, stress the importance of awareness and education regarding XR privacy for developers, users, and the general public.

Conclusion

While XR technologies offer exciting possibilities for innovation and creativity, they also present significant privacy challenges that require urgent attention. The discussion concludes with a call for proactive measures to develop legal, technical, and ethical standards that address the unique challenges posed by XR technologies. As these immersive technologies continue to evolve, it is crucial to ensure responsible development and use that prioritizes user privacy and safety in the emerging landscape of the embodied web.

For further information or inquiries, Brittan Heller can be contacted at [email protected].

Session Transcript

Brittan Heller: I am a professor at Stanford Law School. I teach international law and study new forms of computing hardware like 3D computing, spatial computing, and AI, and I've done so for about eight years at this point. What I'm going to talk to you about today is what happens when AI grows legs and starts walking around amongst us. I know this is a little different from most of the content that we get at IGF, but I think that this is a forum where we can talk about the future of computing and the kind of privacy questions it raises. Imagine this: the three of us are playing a car racing game in virtual reality. So we put on the headsets, and what car do you pick? What type of car do you pick? So you pick a Volkswagen buggy. You do? What kind of car do you pick? A Bugatti. That's a good one, maybe something fast and pure for car racing. I pick a cherry red McLaren. And when I see that car, I race and I beat your VW buggy and even your Ferrari, I think you said, but I really like this car. And what happens is that my heart starts to race, my pupils dilate, my voice changes; my body reacts, because I really like what I'm seeing. Later on in virtual reality, I go about my day. I check my email and I see I've gotten advertisements about why now is a great time to renew my auto insurance. I go into a social club and somebody who looks like a person I find very attractive asks me what car I drive. Finally, I go back into the game and I see the red McLaren go by, driven by somebody who looks more than a little bit like me. This sounds like science fiction, but all of the capabilities I talked about are already present and deployed to some extent in virtual reality environments. The hardware tracked my heart rate increase, my body's instinctual reactions when I saw something I liked, my pupil dilation rate, and my gaze vectors in particular. And these types of preferences and behaviors are the kind of data that can be shared with advertisers and data brokers in most jurisdictions. So it's not distant and it's not hypothetical; it's not just virtual reality, it's actual reality, happening now. We're transitioning away from traditional flat-screen computing, and what this shift reveals is a critical issue: making sure that our frameworks match the sophistication of the new 3D computing technologies. So over the last few years, we've seen the rise of spatial computing, and when I say spatial computing, I mean technologies that blend physical and digital realms. These are called a couple of things. The term that seems to be winning is extended reality, or XR. You can see that with legislation going out, and you can see that with the types of hardware that are getting the most investment; they'll call it extended reality. But you'll also see terms like virtual reality, augmented reality, or, to a lesser extent now, mixed reality; what companies were selling as mixed reality headsets are now being phased out. But XR is the term that seems to be winning. These technologies allow for immersive experiences, and they transform industries like gaming and healthcare and education in particular. This is because, unlike traditional computing platforms, XR devices collect deeply, deeply personal data, including body movements, eye tracking, and your physiological responses to stimuli. But they also create a record of the stimuli itself that you are reacting to.
And this makes it the richest of any data flow that we've experienced on a computing platform. It's the same reason that people get excited about XR, where they say this is the best tool for learning we've ever developed; a study just came out from Harvard Business School verifying that. But that's the reason: there are these reciprocal data flows from your body to the computer and back again. It also creates really significant privacy risks that were not contemplated when we were writing laws for the traditional flat-screen internet. On the challenges to traditional privacy protections: I'm a professor of international law, and basically these laws around the world are not equipped to handle the nuances of immersive technologies. One example is that opt-out mechanisms were kind of the standard that many legal regimes relied on, the idea that you could opt out of data collection. But that's not effective when you're looking at a 3D computer. Spatial computing relies on body-based data for functionality. The way these headsets are built, you have six cameras facing in and six cameras facing out, and you need that to position yourself in physical space and to put the digital overlays on it. You also need it to calibrate the device so that you don't feel nauseous or seasick when you're using it. So if you take out this eye tracking information, you can't use the computer. So having opt-out mechanisms for sharing your biometric data, which I've seen a lot of legal proposals contemplate, just won't work, based on the way the computers actually function. Bodily biometric responses can also be exploited for targeted advertising without users being fully aware of what's going on. Behavioral data, like head and hand motions, is actually unique enough to identify individuals. When I started doing this work, I was saying that privacy law may not be the best regime to handle these data flows, because privacy law is premised on personal identifying information, and until about two years ago you couldn't uniquely identify a person from these data flows. But last year there was a study that came out from Berkeley. It used VR motion data from Beat Saber, which is the most popular video game you can play in XR. If you haven't tried it, it's actually really fun: blocks come at you and you chop them to music, great exercise. There was a publicly available data set with the locomotion of how people were playing the game. The study was by Vivek Nair, and it demonstrated how over 40 personal attributes, including age, gender, even substance use and political affiliation, could be inferred from motion patterns alone, all from a publicly available data set. There were other studies, done by both Stanford and Berkeley, which found that the way you tilt your head and point is as physically identifying as your fingerprint. The same kind of data set was used to try to identify one person from 90 seconds of recorded data on the way you tilt your head and point. Stanford first did it with about 2,000 people and was able to uniquely identify one person. Berkeley redid the study with 55,000 people, so not one person in a university class, but one person in a football stadium, and they were able, based on the way you moved, to identify one person out of the crowd by your telemetry.
So in many privacy laws we talk a lot about our digital fingerprints, but when you look at a 3D version of the internet, the way you move is as fundamentally identifying as what you say and as the kind of mosaic of information available about you. The move to 3D technology also brings different risks that extend beyond traditional concerns. I think foremost, based on what I described, is the privacy invasion. Sensitive data can be obtained from the eye tracking information that you need to calibrate these devices. By that I mean, as in the example I gave at the beginning about the car racing game, the way that your eyes react to the light. You have six cameras in, six cameras out, and it's normally an infrared camera looking at your eyes. It gauges the way your pupils dilate in response to stimuli, and that can also give you medically significant data that most people and most legal structures don't realize is that rich. Through your pupillometry information, I could tell whether or not you were likely to be telling the truth. I interviewed one of the first creators of these headsets, who worked for the U.S. military, and he asked why you would want to put a polygraph with six cameras on your face. You can tell whether or not somebody is sexually attracted to a person they're looking at, which is why I had the example of the very attractive person at the bar. So you can tell somebody's protected characteristics, like their sexual orientation, through their involuntary bodily responses to what they're looking at; there's no way to control your eyes dilating when you see a person you like. Finally, there are other physical and mental health indicators contained in these data sets, and they are preclinical signs, things your doctor does not know about you yet and doesn't necessarily know to look for at this point. A lag in your pupil dilation can be a sign of Parkinson's disease, Huntington's disease, autism, schizophrenia, or some forms of ADHD. So this is very rich, significant data, data that is protected by human rights laws and is supposed to be protected in many, many jurisdictions around the world. In many jurisdictions there is also no bar on this type of personal information being shared or sold without consent. So there's a possibility that this could be misused and abused by creating targeted advertising based on personal characteristics you weren't aware you were giving away, or on things that you and your doctor don't know about you yet. Profiling and surveillance risks also increase with the granularity of the data collected. And XR is not just the self-contained headsets. How many people have tried a virtual reality headset? A couple of people. How many people have used a Snapchat or Instagram lens or filter on your pictures? How many people have used a QR code to order food at a restaurant? So congratulations, you're all part of the embodied web: if you think about it, it's digital overlays on physical space, not necessarily a self-contained, Tron-looking, video-game-playing headset. That's the type of hardware we need to be aware of and thinking about when we consider these new privacy risks. On the profiling and surveillance risks, we've actually seen this come mostly out of the video game context, where some video game companies with massively popular games around the world were criticized by users for building what amounted to real-time location tracking.
So when you played their game, it showed where your location was in relation to other people playing the game, and users were not aware that that was being transmitted as they were playing a game like Pokemon Go. So people talked about it as a new form of stalkerware, where as you were playing the game and using the digital overlays, it was actually transmitting your physical location to other people with minimal levels of consent. That may be the way that some of the games operated, but that is a risk when you are playing with something that has this physical-digital hybrid information. I think the real risk is that, and I come from an American jurisdiction, so at least under American law, behavioral and inferential data is just starting to be included in privacy law. It's not a very common thing, and it's really not common around the world unless you live in a jurisdiction that focuses on neuro-rights. So unless you're from Chile, basically, it's not necessarily going to be covered. There are people who argue that it is covered under the GDPR and in European contexts, but it's not clear that these applications were contemplated in the formation of the law, and sometimes the laws are drafted before it is clear what the overlays or the headsets are actually going to look like. Hopefully, we'll get there. Behavioral and inferential data can be exploited and used to influence users or push people towards purchases or beliefs, and there's a high risk of targeted manipulation when you're looking at this type of context. It would not be a tech talk today if we weren't talking about generative AI. So, recent developments for generative AI in 3D environments. In the last few weeks, we've actually seen some pretty cool developments using generative AI to create content for 3D virtual worlds, and I think this is really exciting. I know I talk a lot about the risks of this, but this is the stuff that creates the future of the internet and makes it accessible to people without an engineering background. NVIDIA has tools that can now automate the creation of virtual worlds from a text-based input. MIT's diffusion models transform 2D images into realistic 3D shapes just based on a description. OpenAI has Shap-E, which enables 3D model creation from text or image inputs, making 3D design even more accessible. Google DeepMind's Genie builds interactive 3D environments from text prompts, and this enhances training and immersive experiences. Basically, at this point, if you are familiar with generative AI tools, you can go from a text-based prompt to a navigable 3D world in a way that used to take video game studios six to eighteen months to build. It's really impressive. I think it was two weeks ago that we saw the creation of a 3D world from one still photographic image. So this is not a future concern if you're actually looking at the trajectory of the hardware. This is something that is incumbent on us to look at now, when people can create these types of worlds from a narrative. They open up creative possibilities, but they will also raise new privacy and ethical concerns. So the way we should look at this is by trying to integrate privacy-by-design principles into these technologies as they continue to evolve. I could throw more developments and more companies at you, but unless they're thinking about privacy and safety concerns at this point, it dampens my enthusiasm for the creative possibilities. What this means is that we need new privacy frameworks.
We need new measures to address the unique challenges that come with 3D spatial computing. The first step is identifying gaps in existing laws; for example, the GDPR has limited coverage of XR-specific data. I think this is a very critical first step. Solutions from jurisdictions around the world should include protections for environmental data and body-based data. I say environmental data because of the way the hardware works, like we described at the beginning: six cameras in, six cameras out. Or, with other headsets, more sensors facing out so you don't even need a controller; you can use your hands to navigate the world. Yes, I see you've used it. The absolute newest ones use your eyes, so eye tracking, or can even use your thoughts to control the interface, in a very rudimentary form of brain-computer interface. So you see we're getting into territory where privacy concerns will become more serious as the hardware develops in new forms. Solutions need to include protection for environmental data. When we look at how trust and safety regimes will work here, normal flat-screen trust and safety regimes look at the conduct and content in a world. These will need to encompass conduct, content, and environment. All the examples I just gave you about generative AI being able to create your immersive world make it clear that the environment is another vector that traditional content moderation systems don't look at. They haven't had the technical capability to do that in the past, but now that computer vision-based systems are getting better and are helping generative AI walk off the screen and into the world, we are going to have to think about different ways to help keep those environments safe and to provide adequate controls to help companies meet their legal obligations if content moderation is found to apply to 3D worlds in addition to 2D platforms. I say body-based data because there's another legal hole: biometric data does not necessarily include some of the risk factors that I've talked about in every jurisdiction yet. So certain groups, like the Future of Privacy Forum, have started saying body-based data, which does not have a legal definition, as opposed to biometric data, which does. Looking at this from a comparative law perspective, you may be able to say biometric data depending on the jurisdiction you're from, but that means you have to really look at what is encompassed under those types of protections in your home state. I would also look at the way your home jurisdiction treats new challenges like neural data. I described how the cutting-edge systems will let you control the way you interact with the environment through either your thoughts or your eyes. If neural data is not included as a protected category of data in your jurisdiction, you will have a problem. So you need to start thinking about this now, and not just in the limited context of brain-computer interfaces, but as a more general category of data, and include eye tracking in it, because your eyes are not just the window to your soul in poetry. When you look at the sensors that we need for 3D computing, the eyes are the way you identify and access the central nervous system. It's not quite as sophisticated as reading your thoughts, but it's basically reading your thoughts.
So, looking at some of the further implications for rights and safety: the types of things we're looking at with spatial computing actually affect our fundamental human rights. So we have to have strong safeguards to protect data from unauthorized use, and with this, technological innovation can align with individual rights to ensure ethical deployment. In the last panel that I was privileged to speak on, I said that it's like we have a second bite at the apple, all puns intended if you work for Apple. But we have a chance to look at this new ecosystem and create laws and standards, technical, legal, and ethical, that will really ensure that users have control over their personal data in a different way than we saw with the evolution of the flat-screen internet. And I think that's really exciting. But it does create an obligation to look at this proactively in a way that so far we haven't. Transparency and user control over our personal data will be integral in building trust. And this trust is important because then we can support broader goals like some of the UN Sustainable Development Goals. That can be a wider application of this if you want a further rationale for developing these types of standards in your jurisdiction. It can work under reducing inequalities, which is Sustainable Development Goal 10, or fostering strong institutions for justice, Sustainable Development Goal 16. There are lots of ways that you can look at the uses of 3D computing now, which are fundamentally about industry and education and creativity, and see how that intersects with your national plans. Oh, and Sustainable Development Goal 9, industry, innovation, and infrastructure. All of this comes together in what I've been calling the embodied web. And that's because of this reciprocal relationship with information gathered from your body by sensors, and it's the sensors that separate it from the traditional flat-screen web and computers. You feed your information to the computer, it calibrates and sends it back to you, and it's this circular relationship: both sides are needed to create the environment that you want to live in, and both are needed to create this new type of web, which is why I'm calling it the embodied web. And it won't just be virtual reality. You'll see lots of things. I just had a 20-minute conversation with a robot down the street, or I guess down the road. So it's coming, and it's exciting, but it also means we should be mindful. There are factors in immersive computing, presence, immersion, and embodiment, that I could explain in more detail, but basically these are all psychological characteristics that work together and that make virtual reality different from flat-screen computing, because it feels real to your cognition. You process all of your interactions in VR through your hippocampus. It is your actual reality when you're in it, and you respond to it like your actual reality, which makes any of the harms that you experience in there even more acute. To the point where the UK public prosecutor's office has opened an investigation into the sexual abuse of a minor in the metaverse, because of the psychological impact on the victim. So states are slowly starting to look at this not as a separate reality, but as an extension of people's actual lived experience. Because it feels real, it fundamentally changes how users perceive and interact with content.
And this transition creates immense opportunities across industries, but we will have to prioritize user rights and human dignity. Spatial computing must evolve responsibly and balance innovation with ethical considerations. So, if you're looking for four steps for how to start approaching this at home: you can do research on XR privacy implications based on how you see it being deployed by your companies and your government. You can collaborate, and really engage with user groups and in policy discussions. You can innovate, and look at privacy-preserving XR technologies. It may not be about limiting access to data, as we have done for flat-screen computing, but about developing privacy-preserving technologies on the other end that allow the devices to be used and calibrated while still protecting people's right not to be personally identified. And you can educate, to raise awareness about 3D computing risks and benefits. Because it's not science fiction; the time for 3D computing is here. So we should start thinking about it now to get ahead of all of these risks so that we can maximize the benefits. I know my time is almost up. I wanted to leave a few minutes for questions if we have any, and I'm happy to stay afterwards if that's more comfortable for people.

Nouha Ben Lahbib: Thank you for this insightful session and information. I'm Nouha, project manager at an incubator for creative startups that use new technologies like VR and XR. Hearing today about the challenges, especially with data, I realize we are pushing these startups to develop exactly these kinds of experiences, especially because they are enhancing our art and cultural identity, and the data they are looking for is often not available in AI tools or in the digital space. So now I know that, as developers of immersive experiences, they need to be aware of what types of data they collect and how they can offer privacy to their customers, and they need to manage this. They need to inform their clients and customers about these important points, because they are now using AI to develop 3D modeling and using VR experiences, especially for events. A lot of people are using their VR headsets, which are collecting data from other people, and are people aware of that? So maybe this is not a question, but I want to learn more about this subject, so that in the program I run, where we talk with these startups, we can offer them this insightful topic to discuss. It's important to be aware that, yes, you are developing a new experience, but you also need to be aware of what types of data you collect and how to manage that data for your clients and customers.

Brittan Heller: Thank you. I think that's really insightful. I've done a lot of workshops for national governments with groups of their top 3D content creators to get people thinking about privacy and safety, and about how, if they use cloud computing, they may be exposing people's data elsewhere. I also talk to startups and hardware providers in the country's home jurisdiction so that they look at privacy-preserving technologies, so that individuals don't have to carry this burden. People can be aware of the risks and see ways to mitigate them, and it makes for a more responsible ecosystem overall. It looks like our time is up. Thank you very much for coming today. My email is brittan.heller at stanford.edu, and I'm very happy to continue the discussion later. Thank you.

B

Brittan Heller

Speech speed

139 words per minute

Speech length

3988 words

Speech time

1711 seconds

XR devices collect deeply personal data including body movements, eye tracking, and physiological responses

Explanation

Extended Reality (XR) technologies gather highly personal information from users. This includes data on physical movements, eye tracking, and bodily responses to stimuli.

Evidence

Example of car racing game where user’s heart rate, pupil dilation, and voice reactions are tracked.

Major Discussion Point

Privacy Risks of Extended Reality (XR) Technologies

Agreed with

Nouha Ben Lahbib

Agreed on

XR technologies collect sensitive personal data

Behavioral data like head and hand motions can uniquely identify individuals

Explanation

The way people move their heads and hands in XR environments is unique enough to identify specific individuals. This creates a new form of biometric data.

Evidence

Studies from Berkeley and Stanford showing that 90 seconds of recorded motion data can uniquely identify a person out of thousands.

Major Discussion Point

Privacy Risks of Extended Reality (XR) Technologies

Eye tracking data can reveal sensitive medical and personal information

Explanation

Eye tracking technology in XR devices can capture data that reveals highly sensitive information about users. This includes potential medical conditions and personal characteristics.

Evidence

Examples of eye tracking data revealing truthfulness, sexual attraction, and preclinical signs of diseases like Parkinson’s, Huntington’s, autism, schizophrenia, and ADHD.

Major Discussion Point

Privacy Risks of Extended Reality (XR) Technologies

Current privacy laws are not equipped to handle nuances of immersive technologies

Explanation

Existing privacy laws were not designed with XR technologies in mind. They fail to address the unique challenges and data types associated with immersive experiences.

Evidence

Example of opt-out mechanisms being ineffective for spatial computing due to the necessity of body-based data for functionality.

Major Discussion Point

Challenges in Regulating XR Technologies

Opt-out mechanisms for data collection are not effective for spatial computing

Explanation

Traditional opt-out methods for data collection don’t work well with XR technologies. This is because spatial computing relies on certain types of data for basic functionality and user comfort.

Evidence

Example of eye tracking data being necessary for device calibration and preventing nausea in users.

Major Discussion Point

Challenges in Regulating XR Technologies

Existing privacy laws have limited coverage of XR-specific data

Explanation

Current privacy laws do not adequately cover the types of data collected and used by XR technologies. This leaves gaps in protection for users of these immersive technologies.

Evidence

Mention of GDPR having limited coverage of XR-specific data.

Major Discussion Point

Challenges in Regulating XR Technologies

Need for new privacy frameworks to address unique challenges of 3D spatial computing

Explanation

The unique nature of XR technologies requires new approaches to privacy protection. These frameworks need to account for the specific types of data and interactions in 3D spatial computing environments.

Evidence

Suggestion to include protections for environmental data and body-based data in new privacy frameworks.

Major Discussion Point

Challenges in Regulating XR Technologies

Recent tools enable creation of virtual worlds from text or image inputs

Explanation

New generative AI tools have made it possible to create complex 3D virtual environments from simple text or image inputs. This dramatically reduces the time and expertise needed to create immersive digital worlds.

Evidence

Examples of tools from NVIDIA, MIT, OpenAI, and Google DeepMind that can create 3D environments from text or image prompts.

Major Discussion Point

Advancements in Generative AI for 3D Environments

Generative AI opens up creative possibilities but also raises new privacy concerns

Explanation

While generative AI tools for 3D environments offer exciting creative opportunities, they also introduce new privacy and ethical challenges. These need to be addressed as the technology develops.

Major Discussion Point

Advancements in Generative AI for 3D Environments

Strong safeguards needed to protect data from unauthorized use

Explanation

Given the sensitive nature of data collected by XR technologies, robust protections are necessary to prevent misuse. This is crucial for protecting individual rights and ensuring ethical deployment of these technologies.

Major Discussion Point

Implications for Rights and Safety

Opportunity to create new laws and standards for user control over personal data

Explanation

The emergence of XR technologies provides a chance to develop new legal and ethical standards. These can be designed to give users greater control over their personal data than was achieved with traditional internet technologies.

Evidence

Reference to this being a ‘second bite at the apple’ in terms of creating user-centric data protection standards.

Major Discussion Point

Implications for Rights and Safety

Psychological impact of XR experiences necessitates considering them as extensions of lived reality

Explanation

XR experiences can have significant psychological effects on users, feeling as real as physical experiences. This requires treating these digital interactions as extensions of real life, particularly in legal and ethical contexts.

Evidence

Example of UK public prosecutor’s office investigating sexual abuse of a minor in the metaverse due to psychological impact on the victim.

Major Discussion Point

Implications for Rights and Safety

Need for education on 3D computing risks and benefits

Explanation

As 3D computing technologies become more prevalent, it’s crucial to raise awareness about both their potential benefits and risks. This education is necessary for informed use and development of XR technologies.

Major Discussion Point

Awareness and Education on XR Privacy

Agreed with

Nouha Ben Lahbib

Agreed on

Need for awareness and education on XR privacy

N

Nouha Ben Lahbib

Speech speed

129 words per minute

Speech length

272 words

Speech time

126 seconds

Developers need to be aware of data collection and management in XR experiences

Explanation

Creators of XR experiences should understand the implications of data collection in their products. This awareness is crucial for responsible development and use of immersive technologies.

Major Discussion Point

Awareness and Education on XR Privacy

Agreed with

Brittan Heller

Agreed on

XR technologies collect sensitive personal data

Importance of informing clients and customers about data privacy in XR

Explanation

It’s essential for XR developers to communicate clearly with their clients and end-users about data privacy issues. This transparency is key to building trust and ensuring ethical use of XR technologies.

Major Discussion Point

Awareness and Education on XR Privacy

Agreed with

Brittan Heller

Agreed on

Need for awareness and education on XR privacy

Agreements

Agreement Points

XR technologies collect sensitive personal data

Brittan Heller

Nouha Ben Lahbib

XR devices collect deeply personal data including body movements, eye tracking, and physiological responses

Developers need to be aware of data collection and management in XR experiences

Both speakers acknowledge that XR technologies gather highly sensitive personal data, which requires careful management and awareness from developers and users.

Need for awareness and education on XR privacy

Brittan Heller

Nouha Ben Lahbib

Need for education on 3D computing risks and benefits

Importance of informing clients and customers about data privacy in XR

Both speakers emphasize the importance of educating developers, clients, and users about the privacy implications and risks associated with XR technologies.

Similar Viewpoints

Both speakers recognize that the current understanding and regulation of data privacy in XR technologies are inadequate, and there’s a need for increased awareness and potentially new frameworks to address these challenges.

Brittan Heller

Nouha Ben Lahbib

Current privacy laws are not equipped to handle nuances of immersive technologies

Developers need to be aware of data collection and management in XR experiences

Unexpected Consensus

Psychological impact of XR experiences

Brittan Heller

Psychological impact of XR experiences necessitates considering them as extensions of lived reality

While not explicitly agreed upon by multiple speakers, Brittan Heller’s point about the psychological impact of XR experiences being treated as extensions of real life is an unexpected and significant consideration in the discussion of XR privacy and regulation.

Overall Assessment

Summary

The main areas of agreement revolve around the sensitive nature of data collected by XR technologies, the need for increased awareness and education on XR privacy, and the inadequacy of current privacy frameworks to address the unique challenges posed by these immersive technologies.

Consensus level

There is a moderate level of consensus between the two speakers on the importance of addressing privacy concerns in XR technologies. This agreement implies a growing recognition of the need for new approaches to data protection and privacy in the context of immersive technologies, which could potentially drive future policy discussions and technological developments in this field.

Differences

Different Viewpoints

Unexpected Differences

Overall Assessment

Summary

The speakers shared concerns about privacy implications of XR technologies and the need for awareness and education.

Difference level

Minimal to no disagreement. Speakers were largely in agreement, with Nouha Ben Lahbib’s question reinforcing Brittan Heller’s points about the importance of data privacy awareness in XR development.

Partial Agreements

Takeaways

Key Takeaways

Resolutions and Action Items

Unresolved Issues

Suggested Compromises

Thought Provoking Comments

Imagine this. The three of us are playing a car racing game in virtual reality… Later on in virtual reality, I go about my day. I check my email and I see I've gotten advertisements about why it's a great time to renew my auto insurance.

speaker

Brittan Heller

reason

This opening scenario vividly illustrates how XR technologies can collect and use personal data in ways users may not expect, making abstract privacy concerns concrete and relatable.

impact

It set the tone for the discussion by immediately highlighting the privacy implications of XR technologies in a way that captured attention and made the topic feel relevant and urgent.

Spatial computing relies on body-based data for functionality. The way these headsets are built, you have six cameras facing in, six cameras facing out. And you need that to position yourself in physical space to put the digital overlays on it. You also need it to calibrate the device so that you don’t feel nauseous or seasick when you’re using it. So if you take out this eye tracking information, you can’t use the computer.

speaker

Brittan Heller

reason

This explanation reveals a fundamental challenge in protecting privacy in XR – that the very data that raises privacy concerns is essential for the technology to function properly.

impact

It deepened the discussion by highlighting the complexity of the issue and the need for novel approaches to privacy protection that go beyond simple opt-out mechanisms.

There was a study that came out from Berkeley… It used VR motion data from Beat Saber, which is the most popular video game you can play in XR… The study was by Vivek Nair and it demonstrated how over 40 personal attributes, including age, gender, even substance use and political affiliation could be inferred from motion patterns alone.

speaker

Brittan Heller

reason

This reference to scientific research provides concrete evidence of the extent to which seemingly innocuous data can reveal sensitive personal information in XR environments.

impact

It elevated the discussion from theoretical concerns to documented risks, underscoring the urgency of addressing privacy in XR technologies.

Through your pupillometry information, I could tell you whether or not you were likely to be telling the truth… You can tell whether or not somebody is sexually attracted to a person that they're looking at… Finally, there are other physical and mental health indicators that are contained in these datasets and they are preclinical signs, so they're things your doctor does not know about you yet.

speaker

Brittan Heller

reason

This explanation of the depth and sensitivity of information that can be gleaned from XR data reveals privacy risks that go far beyond what most users might expect.

impact

It significantly expanded the scope of the privacy discussion, moving it from concerns about targeted advertising to potential impacts on personal relationships, health privacy, and even human rights.

In the last few weeks, we’ve actually seen some pretty cool developments using generative AI to create content for 3D virtual worlds, and I think this is really exciting… Basically, at this point, if you are familiar with generative AI tools, you can go from a text-based prompt to a navigable 3D world in a way that used to take video game studios six months to 18 months to build.

speaker

Brittan Heller

reason

This comment highlights the rapid pace of technological development in XR and AI, showing how quickly the landscape is changing and potentially outpacing regulatory efforts.

impact

It shifted the discussion to consider not just current privacy concerns, but also the need for forward-looking policies that can adapt to rapidly evolving technologies.

Overall Assessment

These key comments shaped the discussion by progressively revealing the depth and complexity of privacy issues in XR technologies. Starting with a relatable scenario, the speaker built a comprehensive picture of the unique challenges posed by XR, from the necessity of collecting sensitive data for basic functionality to the unexpected insights that can be gleaned from this data. The inclusion of scientific research and recent technological developments grounded the discussion in concrete realities while also emphasizing the urgency of addressing these issues. Overall, these comments transformed what might have been a speculative discussion about future technologies into a pressing examination of current and imminent privacy challenges.

Follow-up Questions

How can privacy-preserving technologies be developed for XR that allow the technologies to be used and calibrated while still protecting people’s right to not be personally identified?

speaker

Brittan Heller

explanation

This is important to balance the functionality of XR devices with user privacy, as current opt-out mechanisms are not effective for spatial computing.

How can existing laws and regulations be updated to address the unique challenges of 3D spatial computing and XR technologies?

speaker

Brittan Heller

explanation

Current legal frameworks are not equipped to handle the nuances of immersive technologies, creating gaps in protection for users’ sensitive data.

How can environmental data and body-based data be effectively protected in XR contexts?

speaker

Brittan Heller

explanation

These types of data are fundamental to XR functionality but also pose significant privacy risks not covered by existing regulations.

How can trust and safety regimes be adapted to encompass conduct, content, and environment in 3D virtual worlds?

speaker

Brittan Heller

explanation

Traditional content moderation systems are not designed to address the environmental aspects of 3D worlds, creating new challenges for safety and moderation.

How can neural data and eye-tracking information be protected as categories of sensitive data in various jurisdictions?

speaker

Brittan Heller

explanation

These emerging forms of data collection in XR pose significant privacy risks but may not be covered by existing legal definitions of protected data.

How can XR developers and companies be educated about the types of data they are collecting and the importance of managing this data responsibly?

speaker

Nouha Ben Lahbib

explanation

Many developers may not be aware of the extent and sensitivity of the data they are collecting through XR experiences, highlighting a need for education and awareness.

How can the psychological impact of experiences in virtual reality be addressed in legal and ethical frameworks?

speaker

Brittan Heller

explanation

The immersive nature of VR can make experiences feel real, potentially leading to psychological harm that current frameworks may not adequately address.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #32 Harnessing Youth Voices to Transform the Data Economy

WS #32 Harnessing Youth Voices to Transform the Data Economy

Session at a Glance

Summary

This discussion focused on the importance of youth participation in shaping the data economy and digital policies. Speakers from various organizations highlighted the challenges and opportunities in engaging young people in these crucial conversations. They emphasized that youth, comprising a significant portion of the global population, bring unique perspectives and innovative ideas to the table.

Key issues discussed included the need for digital literacy, data privacy concerns, and the importance of contextualizing data solutions for diverse youth communities. Speakers stressed the importance of addressing intersectionality, particularly considering the challenges faced by young women and marginalized groups in accessing and benefiting from digital technologies.

The discussion highlighted several good practices for youth engagement, such as ITU’s Generation Connect Leadership Program and regional youth-focused events like YouthLock IGF. Participants emphasized the need for intergenerational collaboration and the integration of youth voices into governance structures and decision-making processes.

Challenges in youth participation were also addressed, including the scheduling of youth-focused sessions at major forums and the need for more funding to support youth-led initiatives. Speakers called for raising awareness among policymakers and other stakeholders about the value of youth contributions to the data economy.

The conversation concluded with a call for broader inclusion of diverse youth voices, emphasizing the importance of moving beyond tokenism to meaningful engagement. Participants agreed that youth involvement is crucial for creating a more equitable, innovative, and sustainable digital future.

Keypoints

Major discussion points:

– The importance of including youth voices in shaping data and digital policies

– Challenges in defining “youth” and addressing intersectionality within youth groups

– The need for digital literacy and skills development for youth, especially in rural/marginalized communities

– Creating incentives and mechanisms for meaningful youth participation in policymaking

– Building trust through inclusive approaches to technology development and governance

Overall purpose:

The discussion aimed to explore ways to bridge the gap between youth and decision-makers in the data economy, highlighting the importance of youth perspectives in shaping digital policies and technologies.

Tone:

The tone was collaborative and solution-oriented throughout. Speakers shared insights from their work engaging youth, while also acknowledging challenges. There was a sense of urgency about the need to better include youth voices, balanced with optimism about youth potential to drive positive change. The tone became more reflective towards the end as participants considered next steps and key takeaways.

Speakers

– Sophie Tomlinson: No specific role mentioned

– Mariana Rozo-Paz: Moderator, from the Datasphere Initiative

– Celiane Pochon: Senior Policy Advisor at the Swiss Federal Office of Communications, International Relations Department

– João Moreno Falcão: Lead facilitator for the Youth Standing Group from Internet Society

– Jenny Arana: Program manager at International Telecommunication Union, working on digital inclusion and Generation Connect initiative

Additional speakers:

– Francola John: Caribbean Telecommunications Union

– Gregory Duke Dey: Member of Internet Society, works with Be Tech Connected

– Melody Musoni: Digital policy officer at European Centre for Development Policy Management

– Emad Karim: UN Women regional office for Asia and the Pacific, leads innovation portfolio

– Online Audience (Advocate Zainouba): Cyber lawyer from South Africa

Full session report

Youth Participation in Shaping the Data Economy and Digital Policies

This discussion, moderated by Mariana Rozo-Paz from the Datasphere Initiative, focused on the critical importance of youth participation in shaping the data economy and digital policies. Speakers from various organisations highlighted both the challenges and opportunities in engaging young people in these crucial conversations.

Importance of Youth Participation

There was strong consensus among speakers on the value of youth involvement in digital governance. Mariana Rozo-Paz emphasised that youth are early adopters and innovators of digital technologies, while also facing unique vulnerabilities in the digital age. Jenny Arana from the International Telecommunication Union’s (ITU) digital inclusion area highlighted youth as catalysts for innovation and social movements, bringing fresh perspectives and creativity to the table. Celiane Pochon from the Swiss Federal Office of Communications noted that youth are aware of topical issues and advocate for important values.

The speakers agreed that youth, comprising a significant portion of the global population, bring unique perspectives and innovative ideas that are crucial for creating a more equitable, innovative, and sustainable digital future. João Moreno Falcão from the Internet Society’s Youth Standing Group stressed the importance of youth expertise in shaping policies that will affect their futures.

Challenges in Youth Engagement

Despite the recognised importance of youth participation, several challenges were identified:

1. Digital Literacy: João Moreno Falcão pointed out a significant gap between using technology and meaningful participation in digital governance.

2. Trust Issues: Celiane Pochon highlighted a lack of trust in data governance arrangements, which hinders digital inclusion and innovation.

3. Intersectionality: Emad Karim from UN Women emphasised the need to address intersectionality, particularly considering the challenges faced by young women in accessing and benefiting from digital technologies.

4. Defining ‘Youth’: Melody Musoni raised an important point about the varying definitions of youth across different cultures and regions, ranging from 18-35 years in Europe to up to 70 years in some countries.

5. Digital Divide: Gregory Duke Dey from Be Tech Connected stressed the need to address digital inclusion challenges in rural areas.

6. Lack of Representation: Melody Musoni pointed out the lack of youth representation at the IGF itself, highlighting a broader issue of youth inclusion in important forums.

7. Understanding Emerging Technologies: Advocate Zainouba emphasized the importance of youth understanding emerging technologies in the context of territorial integrity and data sovereignty.

Strategies for Meaningful Youth Engagement

Speakers proposed several strategies to enhance youth participation:

1. Institutionalisation: Jenny Arana advocated for institutionalising youth participation in governance structures and showcased ITU’s Generation Connect Leadership Program as an example.

2. Multi-stakeholder Inclusion: Celiane Pochon suggested including youth in multi-stakeholder forums and consultations.

3. Education: Enhancing data literacy programmes in schools and universities was proposed as a crucial step.

4. Diverse Representation: João Moreno Falcão emphasised the need to broaden reach to include more diverse youth voices.

5. Context-specific Approaches: Mariana Rozo-Paz stressed the importance of contextualising data solutions for diverse youth communities.

6. Prioritizing Youth Sessions: Melody Musoni suggested scheduling youth sessions earlier in conferences like IGF to ensure better participation.

Youth for Data Future Project

Mariana Rozo-Paz detailed the Youth for Data Future project, an initiative by the Datasphere Initiative. The project aims to engage youth in data governance discussions through three phases:

1. A global survey to understand youth perspectives on data issues.

2. Regional workshops to dive deeper into specific concerns.

3. A policy hackathon to develop concrete policy recommendations.

The project has reached over 500 young people from 80 countries and has already influenced policy discussions at various levels.

Thought-Provoking Insights

Several comments sparked deeper reflection:

1. Celiane Pochon highlighted trust as a fundamental issue, connecting individual empowerment to broader societal outcomes in the digital space.

2. João Moreno Falcão shared a striking statistic: one child accumulates 72 million pieces of personal data by their 13th birthday, raising important ethical questions about youth data rights and privacy.

3. Emad Karim challenged assumptions about young women's participation in digital spheres, noting that while many are fighting for space, the sector is not providing equal opportunities.

4. Sophie Tomlinson pointed out that 90% of AI datasets come from Europe and North America, with less than 4% from Africa, highlighting the lack of global representation in AI development.

5. João Moreno Falcão emphasized the importance of youth understanding what they are dealing with in terms of digital policy to participate meaningfully.

6. Celiane Pochon stressed the importance of returning to core questions about why youth voices are important in these discussions.

Conclusion and Future Directions

The discussion concluded with a call for broader inclusion of diverse youth voices, emphasising the importance of moving beyond tokenism to meaningful engagement. Participants agreed that youth involvement is crucial for creating a more equitable, innovative, and sustainable digital future.

Several follow-up questions emerged, including how to effectively define youth in the context of digital inclusion, strategies for engaging rural communities, addressing the gender gap in digital transformation, and ensuring AI and language models reflect local cultural heritage.

The conversation highlighted the complex, multifaceted nature of youth engagement in the data economy and digital policy-making. It underscored the need for nuanced, context-specific approaches that consider various cultural contexts and intersectional identities when addressing youth engagement in digital policy and technology development.

Mariana Rozo-Paz concluded by mentioning the Datasphere Initiative’s commitment to the UNICEF data governance initiative, further emphasizing the organization’s dedication to youth inclusion in data governance discussions.

Session Transcript

Sophie Tomlinson: sure. Oh yeah, thank you. Yeah, I was saying yes, so great. Great. Brilliant. And we have, so we can see Mariana who’s our moderator, but can also the videos of Celiane and Jenny be activated by the sound technicians? I’m not sure if they can turn on their videos. I can’t turn on mine. But perhaps, yeah, we can have online participants’ videos activated so the people in the room can see us. That would be, that would be great. Celiane and Jenny, are you able to unmute yourselves? And also, I can see we’ve got participants Gregory and Lawrence as well. It’d be great to have your insights for the discussion. So if you can also unmute yourselves. No, they can’t unmute themselves. So yeah, if we can, I see Jenny’s now a co-host, so hopefully she’ll be able to. Okay, I think you’re okay. Yeah, so now I can have my video and now we can see Jenny too. Great. And yeah, so if we can also have the same for Celiane and Gregory and Lawrence, if you’re okay with that, we’d love to see you as well, since we’re quite a small group. Be nice to have as many participants as possible.

Mariana Rozo-Paz: But actively influence the policies and innovations that will define their future. We're seeing lately that we're living in a time where youth and various youth communities are more connected to digital technologies than any generation before them. Yet they're paradoxically the most disconnected from the decision-making spaces that govern those technologies. They are, in a way, the early adopters, the innovators, and in so many ways the ones that are most deeply impacted by digital transformation. But not having them in the conversation is creating a series of challenges, both for them and for other actors and stakeholders. This gap is definitely not sustainable. And we're also seeing that youth are facing unique vulnerabilities in the digital age, from the risks of social media dependency and online abuse, to being disenfranchised as data subjects, to experiencing both the promise and the perils of digital transformation. So while this is happening, we're also seeing that policies and technologies that shape their lives are often being designed by those who may not fully understand or prioritize their needs. We're seeing different platforms creating services that they hope young people will use as their main users, but young people are not being included as one of the main stakeholders in the conversation. So at the Datasphere Initiative, we believe that to unlock the true value of data and digital technologies for everyone, we must design much bolder, inclusive, and creative solutions to engage young people in these conversations. And that means building spaces where their insights, their ideas, and their concerns can truly shape data policies and technological innovations to make them more equitable and effective, and actually make sense for the context of young people. So this panel today is an invitation to explore how we can really bridge this gap. We hope to be discussing issues that matter most to young people today, from the rise of artificial intelligence and generative AI, to climate change, mental health, education, and the urgent need to reskill for the future. We'll also examine how we can create pathways to engage youth from every region in the world in these crucial conversations. Now, before I turn it over to our amazing speakers today, whom I thank again for joining, I just want to share very briefly about our Youth for Data Future project at the Datasphere Initiative, which has been one of our flagship initiatives over the past two years. To give everybody some context, at the beginning of 2023, last year, the Datasphere Initiative was one of the winners of the Future of Data Challenge, which was a challenge put out by the Meteor Network. And we were selected to be able to materialize our youth project, which sought to address the gap I was mentioning before. We're really seeing that youth are absent from these conversations, and we wanted to explore ways in which we could really bring them in, get them familiarized with the data governance conversation and with data and digital policies, and create spaces for them to engage safely and to be able to voice their experiences and their concerns about how digital technologies are being both developed and governed. So the project has had two phases. The first phase was a social media campaign. You can actually, I think, look for us both on Instagram and TikTok as the Youth4Data project.
And you’ll see that we launched a social media campaign that sought to engage young people in this conversation. So there are different fun videos. and funny videos, pretty much talking about various data governance topics and really how are young people being impacted and could be engaged in this conversation. And then after that social media campaign, for the past year, we have been engaging youth in the conversation around how would they like new policies to be shaped. So we have engaged young people through a series of youth labs that we have hosted in different parts of the world. And that has also been a very rewarding experience. And our workshop today marks pretty much the end of a very exciting year in terms of engagement. So let me just share some, and I hope that people, so in the room and everybody can see my screen. So this is pretty much a summary of what has been going on with the Youth for a Data Future project. I can share the link to our website in the chat a bit later, or maybe Sophie can post that so that you can all access that. So over the past two years, we have engaged over 15,000 young people through our social media campaigns, through the workshops that we have been hosting and the labs that we have been able to organize. We’ve hosted over seven youth labs and workshops over the year. And as you can see, we have two pictures here. The first picture is from the UN World Data Forum a couple of weeks ago in Colombia that brought together the data community, the global data community. And we were able to engage youth, particularly coming from Africa and Latin America in this conversation. And then the picture from below is from a workshop that we hosted at the COP16, which was focused on biodiversity. It was organized in Colombia this year, and we brought together over 90 young people to think about how data and AI could be better leveraged to address biodiversity change. challenges. And what to me is most exciting and valuable about the project has not only been able to talk to youth directly. And this first picture that you have down here on amplifying youth voices to shape the data future was a campaign in which we engage young people from all of the regions in the world to pretty much share their experiences with data. And what I’m mentioning regarding the most exciting thing is actually being able to translate all of the insights from the project into concrete policy recommendations. So we have been able to engage with the G20. And the process that has been led by Brazil this year, and as well, who’s one of our speakers today can speak more about this. But we drafted together a policy brief, sharing more about the challenges and the concerns that young people are experiencing in the online era. And at the same time, the solutions that they’re thinking about and the possibilities to really create solutions that make sense for the context of various youth communities. And we’re also very proud to announce that we have joined UNICEF data governance fit for children commitment, which is bringing together various partners working on data governance, that response to the context of children and youth. So hopefully, we’ll also be able to share more insights about that in the coming months. And if there is anybody interested in joining that commitment, we will be happy to share more details later. And with that, I I’ll finish my monologue. 
And I’d love to give the floor to our amazing speakers, I’d love to also give them the chance to introduce themselves and maybe briefly tell us what what what work are they doing with young people, maybe how they ended up working in this topic. And I know that the majority of our speakers were also pretty young. So that’s one of the reasons, but yeah. So for now, I’d like to give the floor to Céliane, who’s the first person that I have right here next to me. So Céliane, thank you. Welcome, and yeah, please introduce yourself. Thank you so much for your introduction. So my name is

Celiane Pochon: Céliane Pochon. I’m joining you today from Switzerland. I’m a Senior Policy Advisor at the Swiss Federal Office of Communications in the International Relations Department, and we mainly work on internet governance, AI governance, and data governance. So that is why I’m here to talk today, and I think I am still considered youth, so I think I can also give my insight working in this field, and I’m very happy to be here today. Thank you.

Mariana Rozo-Paz: Thank you. Amazing. Thank you for joining us. And now I see that João is also in the room, but now we can see his face online, which is great. So João, over to you.

João Moreno Falcão: Hello, everyone. Thank you for joining. Well, my name is João. I'm Brazilian, and I'm the lead facilitator for the Youth Standing Group at the Internet Society. There, we work to empower young voices in internet governance-related events by engaging with them, helping them create connections inside this ecosystem, and amplifying their views so they can share what they want for the internet of today and tomorrow.

Mariana Rozo-Paz: Amazing. Thank you. And now, over to you, Jenny.

Jenny Arana: Thank you so much, Mariana, and nice to meet you, everybody. Nice to see all the speakers and participants today. So my name is Jenny Arana. I’m also joining from Switzerland, by the way, from Geneva. I’m from the International Telecommunication Union and working specifically in the area of digital inclusion. So, I’m a program manager and specifically for this, I’ve been working on youth for the past five to seven years as part of the Generation Connect initiative of ITU, which is an initiative that’s been in place since before the year 2020 and aiming to engage global youth and encourage their participation as equal partners alongside the leaders of today’s digital change. We work together with young people, empowering young people, but hearing the voices of young people, but empowering with skills and opportunities to advance the vision of young people for a connected future. So, thank you very much and looking forward to the conversation.

Mariana Rozo-Paz: Love this, amazing. All right, well, fantastic. You've heard from our amazing speakers, and now I'd like to kickstart the conversation, maybe venturing much more into the challenges. As you can see, we have a diverse and exciting pool of speakers, so I'd love to ask each of you: what issues related to the data economy and youth are you most concerned about? This can range from the different emerging technologies that we're seeing to the development challenges that we're experiencing as humankind, so that can be climate change or education, but I'd love to hear about your experiences and what issues related to the data economy you are most concerned about. Let's start with that question. I don't know if anybody would like to jump in. João, maybe you can start. Oh, Jenny, go.

Jenny Arana: Thank you, Mariana. Thank you so much. Yeah, yeah. So, from the perspective of the International Telecommunication Union, we recognize several pressing issues related to the data economy and its use, including the rapid development of AI, which raises concerns about ethical usage, as we all know, data privacy, but also potential biases in algorithms that we find disproportionately impact youth, as well as many other groups that are traditionally disadvantaged, let's just say. And this is not just based on our own assessment; it's actually based on consultations. Recently we led some consultations with our youth networks, the Generation Connect Youth Envoys, and specifically I'll talk about one of the groups: the regional group for Africa, who shared their insights and what they consider to be the issues that need urgent action, and the challenges and opportunities that need to be addressed. They found that cybersecurity is a critical issue. The rise in cyber attacks and online threats highlights the need for stronger measures to ensure digital safety, and youth are disproportionately affected by cybercrime, online harassment, and privacy violations as well. So they see the need for enhanced education to build awareness and skills, particularly, of course, among underserved communities. Cybersecurity frameworks, they said, must align with international standards while also reflecting local realities, and a focus on gender diversity in cybersecurity careers is also essential to foster inclusivity. Artificial intelligence, they saw, represents a double-edged sword. Specifically, this group for Africa mentioned that it offers vast opportunities, such as improving healthcare diagnostics, optimizing agriculture, and accelerating economic development, but it also introduces challenges, challenges that include ethical concerns and the risk of biased algorithms. So these consultations really highlighted the need for AI systems that are inclusive, ethical, and reflective of the unique context and values of specific regions, and in this case they talked about the African continent. This requires significant investments in local AI research and development to really ensure that African perspectives shape the technology. Also, to fully harness the potential of AI, they saw that there is a need to develop data science capacity across the continent, and this involves improving infrastructure for data collection and management and fostering a culture of data-driven decision-making. And you were talking about examples of good practices. I just briefly mentioned the ITU Generation Connect initiative, and we have some practices that we believe have demonstrated how we can meaningfully engage youth participation to really drive impactful change. Some of these examples are youth engagement in policy discussions. For example, through this initiative we really make efforts to integrate youth perspectives in global policy dialogues by giving these young leaders a seat at the table in high-level discussions. At ITU, what we try to do is ensure that their voices contribute directly to shaping policies that are more inclusive
on the data economy, and obviously for us this includes active participation in forums and consultations. I was just mentioning the consultations that were run in the past two months, where young people can really provide input on the pressing issues for themselves, for their communities and for their regions, such as data privacy, equitable access and ethical AI. For example, these young leaders have shared their views during the different ITU consultations, conversations and intergenerational dialogues with ITU member states, resulting really in useful, co-created policy recommendations that we hope are going to influence global agendas. So thank you very much, Mariana.

Mariana Rozo-Paz: I love this answer, Jenny, thank you, very, very comprehensive. Now I’d like to turn over to João, please.

João Moreno Falcão: Thank you, Mariana. Well, I would like to bring up two issues to start. The first one is also in the policy paper we wrote for the T20: they measured that one child, from birth to their 13th birthday, will gather 72 million pieces of personal data. So before you grow up, you will already have a huge digital footprint, and we need to bring awareness about this, because children aren’t able to agree to it, and the implications of this amount of data being available will follow them for their whole life. The other thing that I really wanted to stress in my talk, sorry, I closed the thing, is regarding digital literacy. When we have young adults, we usually take their capabilities with technology for granted, since we were born with a cell phone lying around, maybe a computer, and the truth is that it’s very different to use it as a tool to communicate and to use it in a meaningful way to be part of the digital economy. So this really needs to be worked on, especially when we think that more than 50% of the global population is less than 24 or 25 years old. So, well, those are the two provocations I would like to give about the issues that we face.

Mariana Rozo-Paz: Fantastic, João, I think this is going to be very relevant, and I agree, this whole assumption of youth being digital natives is one of the big challenges. So, Celiane, would you like to share your insights regarding challenges?

Celiane Pochon: Yes, thank you so much, and I think what João said about half of the world being youth is a very big number, and actually, according to research, it’s been the largest generation of youth in history, and I think it will only keep on growing. Youth is very present everywhere: whether we’re just scrolling through social media or studying on digital platforms, we generate massive amounts of data. But there’s a big problem: we have little say in how our data is used or who profits from it. And I think this brings me to the point I want to make and one of the issues I see, and that we see here in Switzerland, which is trust. Trust in data governance arrangements and empowering young people and individuals in the digital space is key for the data society we want to have. Without control over our data we lose trust in institutions, companies and the technologies themselves, which hinders digital inclusion, innovation and all the other topics that Jenny and João already mentioned. So here in Switzerland we attach great value to inclusive global data governance arrangements, and they’re based on the notion of digital self-determination of individuals and stakeholders, which means that citizens, businesses, young people, everyone should be given the opportunity to decide for themselves how their data is being used. And we want to underline the importance of trust and adaptability from all sectors, including public-private partnerships, to help address these critical issues. And maybe an example I can give you is that in Switzerland we have set up a national network on digital self-determination, and it consists of public administrations, so the government, me for example, universities, where we find a lot of young people, not surprisingly, but also industry and civil society. In this network all these people work together to develop shared approaches on how to give maximum control over data to citizens and individual organizations, and we’re very happy to share that we have many young people, many young students, involved in this network who can give their very own perspective on this.

Mariana Rozo-Paz: Thank you. This is very, very valuable, thank you Celiane, and in a way, what you were mentioning, trust, is kind of the foundation of the challenges that we’re seeing. And it’s something that’s not only present when it comes to digital technologies, but also in terms of how we’re addressing various sustainable development challenges. This is something that we’re hearing from climate and nature activists: they want to trust the people that are making the decisions, but for that to happen, they want to be involved in the conversation and they want to see how those people are really bridging the gaps that they’re experiencing in their communities and really addressing the challenges that they are facing or that they see other people facing. So I think that you’re highlighting a very, very important point. And given that point, I think we can now think about the good practices to engage youth effectively, so that they are really able to advocate for their perspectives on the data economy. The majority of you have already touched on the solutions that in your context are being developed to include youth much more effectively. But coming back to this issue of trust, I think it would be very interesting if you could highlight an example of good practices that you’re seeing in your context, or maybe that you’re aware of, that are really engaging youth so that they can effectively advocate for their perspectives on the data economy and that are contributing to fostering trust. So, yeah, Jenny, why don’t we start with you again for this one?

Jenny Arana: Thank you. Thank you so much, Mariana. Absolutely. As I was mentioning before, bringing young people to the table and having those policy discussions is very, very important, and we must first of all recognize that young people are not just beneficiaries of policy but essential contributors, and practices that provide a model for other organizations and stakeholders to ensure that young voices shape the future of the data economy are very important. I’d like to highlight a program that we are leading at ITU called the Generation Connect Young Leadership Program. This is a very exciting program, I believe, that aims to engage, empower and inspire young digital leaders and change makers who are leaders in their own communities. Through this program, we are supporting young visionaries from around the world who have come up with proposals for creative, far-reaching, innovative and feasible community-driven projects aimed at creating a more inclusive and empowered digital future for their people, for their communities. In partnership with other organizations such as Huawei, we are providing guidance, training and financial support to 30 young fellows per year, whom we help to practically implement these digital development projects in their diverse communities across the world. And of course, through the program, besides taking their own projects to the next level, we’ve found an opportunity to bring them to the core of the policy and digital development discussions that happen at ITU year by year, and to the connections that they can make with all sorts of stakeholders that attend our meetings. So I think that it’s very important, as others have mentioned before, to really find these intergenerational opportunities for dialogue, exchange and the proposal of solutions to digital development issues. So, thank you, Mariana.

Mariana Rozo-Paz: So important and thank you for highlighting that, Jenny. João, is there anything you’d like to point us to?

João Moreno Falcão: Yeah, definitely. I would like to talk about an example of a project that helped to empower young voices: the Youth LACIGF. The Youth LACIGF is a project co-organized by the Youth Standing Group. Our last event happened in November: we had three days with 117 people from all over Latin America together in Santiago to discuss and think about the future of the internet and how we can fit our imagination and our plans into the digital economy. It was really enriching. It was the biggest one that we have held so far, with 15 countries represented on site and 43 if we count online participation too. So this kind of event can really foster further discussion on how we can better participate and also share the perspectives of different regions.

Mariana Rozo-Paz: This is fantastic. Thank you for highlighting that. As a fellow Latin American, I very much appreciate those efforts, so thank you for pointing that out. And so, Celiane, what about you? Is there anything you’d like to share with us?

Celiane Pochon: Yes, I think one very topical example is what we’re doing here today, actually engaging in different forums. For example, here, the Internet Governance Forum at the global level, but also, as João mentioned, at regional and national levels. In Switzerland, for our last Swiss IGF, which was last summer, we had more young voices around the table, and young people are interested and want to be part of the conversation. So I think opening up the space to them is key, and by including them in these multi-stakeholder forums, technology summits and in all policymaking consultations, we can allow them to not only be consumers of the technologies, but also contributors to them, to have a fairer and more equitable data economy and governance landscape as a whole. So I think that’s a first thing we could do, and Switzerland very much believes in the multi-stakeholder approach, so we also want to include all these voices. And then, maybe to link with what João said previously on data literacy and digital skills, enhancing programs in schools and universities: for example, in Switzerland we have two universities, the EPFL in Lausanne and the ETH in Zurich, where there are programs that focus on building data literacy and critical thinking amongst young people, and the professors want to prepare them to engage meaningfully in the conversations about the data economy, about AI and digital transformation as a whole. So we can see that all these different initiatives, which happen at different levels, which is also very important, combine capacity building and education and give young people a platform to participate. And it’s about giving young people the tools to participate, but also ensuring that their voices are included in the policy dialogues and in the decision-making processes. Thank you.

Mariana Rozo-Paz: I love this, Celiane. Thank you so much for pointing it out. And I’m going to take the liberty, because I’m moderating, to also answer my own question. I just wanted to highlight something that’s actually very important, or that we’re seeing much more often. We’re seeing big organizations like the ITU being able to establish these programs, and that is very valuable. But we’re also seeing many different local youth experiences, projects and initiatives being born all across the world to tackle very specific challenges that youth are facing with technology. And one of the challenges that we’ve been identifying at the Datasphere Initiative is the funding gap that exists to really translate all of these small initiatives into bigger impact projects, so that people can really have the sustainability to run their projects and to impact the communities that they want to impact or somehow touch upon. So I wanted to point to one funding effort, and I just pasted the link in the chat: the Responsible Technology Youth Power Fund, which was put out by the Amedia Network together with a coalition of funders that are interested in funding young people to really drive a much more inclusive and equitable technology ecosystem. It’s a very interesting philanthropic initiative that aims to support youth-led and intergenerationally led organizations seeking to shape a much more responsible technology movement, and that can be either at the design of technologies or at the policy level. I wanted to highlight this because, in a way, I feel that these policy spaces are also key spaces for us to be discussing what else we need. For all of these good examples that our great speakers have highlighted, how could we really bring in the resources that are needed to make sure that not only the global or regional efforts, as Celiane was pointing out, are funded and supported, but also translate all of this multi-stakeholder collaboration into funding and support for all of these youth initiatives that are, in the end, responding to a very specific context and bringing value to very specific communities? These communities are in many cases not included in these global governance conversations and are equally important, so that we can learn from them and really shape the conversation at the global, regional and national levels. And with that, as Sophie mentioned at the beginning, we want this to be an interactive conversation. We’re a close, small, nice community in this session today. I’m seeing that Frankola is also pointing to a youth corner in the chat. I don’t know if maybe Frankola could be granted co-host rights so she could speak about this. And I know that we also have Gregory in the room, who was sharing some insights earlier. So I don’t know if any of you, or anybody else joining us in person or online, would like to chip in with any initiatives that you are aware of or challenges that we have definitely not mentioned, because I think that, yeah, we have not been fully exhaustive. So yeah, Frankola, I don’t know if you can unmute yourself. I think I’ll ask you to unmute.

Sophie Tomlinson: Yeah, sorry, Mariana. I’ve been told by the technical team in the room at the moment that they can’t unmute anyone who wasn’t listed as a speaker, unfortunately, because they’ve had some security challenges throughout the IGF and they need to stick to this policy. So it looks like, unfortunately, we can’t hand the mic to Frankola, because it would be great to actually hear you and what you’re sharing. Oh, wait, I think I managed to give her the rights.

Frankola Chan: I forgot. Okay, so I can speak now? Are you all hearing me? Oh,

Mariana Rozo-Paz: amazing. Great. There you go. Try again, Frankola. I think you’re muted again.

Frankola Chan: Okay, can you hear me now? Yes. Okay, good. So I am Frankola Chan from the Caribbean Telecommunications Union. The CTU is a CARICOM organization with 20 member states in the Caribbean, so we do fall under the Latin American and Caribbean bracket, but we’re focused mainly on the Caribbean and we work very closely with the ITU and CITEL. Even this year, as I was highlighting, we had the Secretary-General of the ITU in the Caribbean attending our ICT Week. I am the CTU’s focal point for the ITU Network of Women, and we just recently established our own Caribbean CTU Network of Women, which also overlaps with youth. So we also have a youth envoy in the person of Ms. Nia Nannan.

Mariana Rozo-Paz: Fantastic. Thank you for sharing that. That’s amazing. As a gender activist myself, I’m always very happy to hear about all of these women’s initiatives, so thank you for sharing that. I don’t know, Gregory, if you want to maybe speak out loud about the comment that you posted in the chat, because I think that participants in person cannot read it. Maybe we can try to do something similar as we did with Frankola, asking you to potentially unmute. If not, yeah. Can everyone hear me? Yes. Great. Yeah, that’s good. Thank you for the opportunity. Yeah, once again,

Gregory Duke Day: my name is Gregory Duke Day. I’m a member of the Internet Society, and I also work with a group called Be Tech Connected. Most of the points that I’ve put together are in the chat, but I was looking specifically at policymaking. I gathered from a previous session how quickly these policies move in terms of their approval, looking through them and trying to understand the various parts of these policies and how they work to our effect. I believe that, as the future, as people with great sensibilities who have knowledge and skills in all of these important technologies, it is important that we are also included in policymaking across various levels. That way, we are able to relate to the kind of activities that go on and the kind of challenges that have been shared many, many times, and we’re able to contribute meaningfully to all of these policies. Ideally, we would grow to a point where we would also share all this knowledge with those who are coming behind us, and it would be important for us to understand and start to educate people about all these policies. So once we are given, or within the process where we have been given, chances to contribute meaningfully to these policies, we’re also able to carry them into the future and give back to those who are coming with us. And, speaking to the inclusion aspect, well, some of this is governmental, but for the most part it has to do with rural areas and with infrastructure that bridges the digital divide, the digital gap, between those areas. Also, the accessibility of these technologies to persons with disabilities, so that we’re not leaving them out but including them, because they also have skills and knowledge to share on all these phenomena that are happening. And then finally, around creating something that is relatable to countries: ideally, creating or bringing about certain technologies, or elements within those technologies, that can help in creating something context-specific. So, for example, languages: in terms of the technologies that we create, once we’re able to fashion or create these technologies in certain languages for certain countries, we are able to have a unified force, I would say, for the future of technology. Right. Thank you.

Mariana Rozo-Paz: Thank you, Gregory. That was very valuable. This is really great. And I think that you’re raising a very important point regarding the digital divide and what real digital inclusion means, and how, of course, that means something very different in different communities and regions, depending on the specific context of the community that we’re thinking about. I’ll actually share in the chat the link to a blog post that we drafted recently based on some of the conversations that we have been having with young people, particularly in Africa and Latin America. There are actually two things that they highlight. First is the importance of contextualizing data: sometimes, even if we do have data about young people, we need to make sure that we’re really contextualizing it and having context-specific data solutions that reflect the challenges and needs of the diverse youth communities. And the other thing that they have highlighted extensively is how rural communities are facing barriers to digital inclusion and data access, and how, in the end, including rural communities is not only a matter of building the connectivity infrastructure, but also of designing tools, in terms of literacy and education, so that people have the skills and, as I think Celiane was pointing out earlier, even the critical thinking and soft skills to navigate technology with much more awareness. So I wanted to highlight that, and thank you everyone for participating in that first section. Now I want to dive deeper into the challenges that we’re facing in terms of international and global policy to really bring young people together. We’re seeing in many cases…

Sophie Tomlinson: Sorry, João was just trying to speak, I think.

João Moreno Falcão: Sorry, Mariana, for cutting you off, but we have an on-site participant who wants to speak too.

Mariana Rozo-Paz: This is fantastic. Yeah. Great.

Melody Musoni: Thank you so much. My name is Melody Musoni. I work as a digital policy officer for a think tank called the European Centre for Development Policy Management, or ECDPM. Mine are more observations. Maybe let me start with one question: define youth, because I’m coming from a country where people who are in their 70s still qualify as youth, and then living in Europe, normally it’s between 18 and 35 years. So that’s the first question I have. And then, when it comes to my observations: we are here at the IGF, and I was expecting this room to be full. There are two of us here on site, but if you go outside, there are so many young people sitting in the lobby right now. So I was wondering why there are not a lot of young people in the room, and why more young people are participating online, the 42 participants I see online, instead of being here. It’s just an observation, and I hope you’ll be able to look into what the reasons are. And thinking about that, I also thought that for future platforms and engagements like this, it helps if your session is prioritized, maybe on the first or second day of the IGF, because on the last day it’s very difficult to rope people in. And in having these discussions, I think it also helps if we have more senior policymakers who are part of the conversation, because they also have that pull to bring people into the conversations. So those were my two observations. And I also wanted to say that perhaps an additional question that could help you in future engagements is: what exactly is different about youth? Because all the issues that you are raising apply to women, they apply to everyone. So why exactly is it important for us to talk specifically about youth and issues of inclusion and data? Being very clear in articulating why it’s important to focus on youth actually helps in pushing the agenda and achieving the objectives that you intend to achieve. And then there was a comment that I also agree with, on rural communities and engagement. Just from listening to all the conversations here at the IGF, people tend to be very superficial when it comes to marginalized communities. I’m originally from a very marginalized community, even up to now: it takes so long for me to get hold of my family because they still don’t have access to the internet, and some of the people in my village still don’t have phones, so they have to use someone else’s phone. So when we are saying that we need to engage rural communities, what exactly do we mean by that? At least let’s start thinking about what role secondary schools in these communities can play, because a lot of people in rural areas go to school very late, so you find someone in their 20s who is still in high school. And that’s youth, and you should be engaging with them, not just with people who are already in college or in universities. So I think there is more that can be unpacked, and try to pull the senior policymakers into the room, and actually demand that your sessions be prioritized, because if we have them in the last hours of the IGF, there will be less interest in participating in your session. Thank you. We have another question.

João Moreno Falcão: Hello, so do you think we have enough time for another intervention from one of our colleagues here?

Mariana Rozo-Paz: I think we do, that’s all right. Maybe we should address these first three questions and then we can take the others, because I see someone online has a comment too. So actually, thank you for that, I absolutely loved your comments. Regarding the age range of youth, I agree that it’s a pretty diverse group, and I’ll hand it over to our speakers so that they can also share how they see this. In a way, every organization, community or even forum is defining youth in a very different way, and I think there are some policy interests behind that, either because you want to expand the pool and be able to include many more people, or, if you’re thinking about engaging them in the conversation, because having people who are under age involves the additional challenge of getting parents’ approval. So I think it kind of depends. At least at the Datasphere Initiative, we have been involving youth, or considering that youth are those people from around 13 or 15 years old up to 25 or even 29, so it can be a pretty broad range. Thank you for your comment regarding the time and day of the event, and as Sophie was pointing out in the chat, this is a challenge that we have been witnessing that is not only true for the IGF, but actually for many other forums on internet and data governance. For the UN World Data Forum, actually, our youth workshop was the very last session on the last day last time. And that is a challenge that I think we can all advocate on together, to make sure that we really can have much more meaningful participation, not only of policymakers, but also of youth, so that they are aware of which sessions they can participate in and we are actually, hopefully, harnessing their voices. But I want to stop here, because I know that we have an amazing pool of speakers. So I don’t know if anybody has any comment regarding the conversation around digital inclusion, or how we’re defining youth, or rural communities, and also why youth are particularly important, which I feel is kind of the key question that not only we want to answer today, but that policymakers are trying to answer worldwide, in terms of which communities to involve in the conversation and why. So yeah, this is kind of open. I don’t know if João, Celiane or Jenny have any comments in that regard.

João Moreno Falcão: Okay, so I can go. On why we need to include youth specifically, and not only specific marginalized communities: when we talk about the multi-stakeholder approach, we are a group with specific characteristics. How can you define someone who has just started in the internet governance space without a proper stakeholder group? We will call them researchers because they are in college, we will call them private sector because they just got a job. So this really shows that this group isn’t properly included in the multi-stakeholder approach. And the aspirations are different too, because the things that I suffer are completely different from the issues that a woman suffers because she is a woman. So it’s good to have this distinction, because we cannot put everything in one place and say that we will address it. And yeah, I believe this is it.

Jenny Arana: If I may, can I? Yes, of course. Yeah, thank you, Mariana. I think, of course, when looking at every group, and the work that somebody mentioned on women’s empowerment, gender, etc., there’s intersectionality everywhere, and it’s important to take that into consideration. But here, why are youth so important? Well, there’s a very important reason: there’s demographic power, 1.2 billion people across the world, and that’s power. They are a significant demographic group, and in some regions, particularly in developing countries, they actually form the vast majority of the population. Young people, and this is relevant to the topic we’re discussing today, are also catalysts for innovation, bringing fresh perspectives, creativity and technological fluency, and they are found to be key drivers of innovation. But also, as we have seen across generations and historically, young people have been at the forefront of social movements and advocating for the rights of everyone, not only their own rights, but the rights of older people, women, etc. And I think that’s very important to keep in mind when we are discussing these issues. And yes, of course, we can always look at intersectionality. When I introduced myself, I said that I work on digital inclusion, and we actually look at this from the perspective of different target groups, but there are intersectionalities. For example, we work on the topic of ICT accessibility, but we’re not only looking at persons with disabilities; we’re looking at all sorts of people, for instance older persons who perhaps can no longer hear well, and there are many different technologies that can help them better integrate and better take advantage of the benefits of today: e-health, different areas of life, education, having access to digital skills, et cetera. So I think that, yes, we have to look at intersectionality, but there also has to be space for specific groups and for us to look at their specific needs. Many of the issues in the world are important to look at through the perspective of different groups, but why not give each of these groups a specific focus so that we can actually look at the possibilities that we have to tackle them? So thank you, Mariana.

Mariana Rozo-Paz: I absolutely love your answer, Jenny. I really have nothing else to add. I love this notion of intersectionality, and I feel that that’s literally it when it comes to thinking about digital inclusion, so thank you. Now, I know that we have another question from somebody in person, so maybe we can take that question, and then, the organizers have asked me to tell people online that if anyone wants to talk, you need to put your camera on and write your full name, and that’s the only way in which they will allow you to speak. But for now, we can have the participant in person ask the question. Thank you.

Emad Karim: Thank you so much, everyone. My name is Emad Karim. I’m from UN Women, regional office for Asia and the Pacific, and I lead on innovation and the youth portfolio in the office. Just adding to the conversation on intersectionality, I think one of the challenges when we talk about youth issues is that we talk about youth as a homogenous group. At that intersection between youth and gender, bring in the angle of young women, which is half of the population of youth, and which is kind of discredited. When we look at that bright future of digitalization and youth, where young people are the most avid, tech-savvy users leading on innovation, it also suggests that young women are equally part of that, which is not true. There are a lot of amazing women leading in the digital spheres, fighting really hard to find the space, but the sector is not really giving them that full space or an equal space. I’m just going to mention three recent studies. The first one, from the World Economic Forum, says that women are going to be the most affected by job loss due to digital transformation. The second one is that women are less likely to use AI and emerging technology in the workspace because of access, work environment, male-dominated sectors and all of those kinds of things. And the third one is that women are also the most affected by the misuse of technology and AI for technology-facilitated violence. This is just a glimpse of the reality of what young women are dealing with in this sector, and I think they are not well represented in a conversation that is really bright and futuristic and shows that technology is working for young people. It’s not working for everyone, and with that intersectional angle, the deeper you go into more intersectional layers, the more you uncover an ugly reality of digital transformation: a young woman with a disability, a young woman with a disability in a rural area, a young woman with a disability from a minority. So it’s really hard to see so much of this conversation within the IGF and within international communities talking about the potential without putting that angle into perspective. I’m looking for specific programming, resources, funding and policy changes that are more gender-specific and more targeted at young women, and I wonder if you have some reflections on what could be done to accelerate that progress. We missed an opportunity when the internet came out. We missed an opportunity when the social media revolution came out, and we left a lot of women behind with a gap that is getting even wider with the AI revolution. So how can we catch up, making sure that women, other genders and other marginalized groups like young women are catching up with that technology and benefiting from that sector, not just as users, but also shaping the infrastructure of this new revolution? Thank you.

Mariana Rozo-Paz: Absolutely fantastic question. Thank you so much for that. I am going to hand it over to Nosipo. I think you can unmute yourself now so that you can ask your question and then we can take questions.

Online Audience: Thank you very much. I’m Advocate Zainouba from South Africa. I’m running a practice that focuses on intellectual property, cyber law, ICT law, as well as small-scale mining and international trade law; of course, AI is included there. And I’ve done a lot of research. In fact, I’m a cyber lawyer by education and by profession. I have also been part of lecturing at UNISA, the Faculty of Law, the oldest university in South Africa, and I’m responsible for the same. But the reason I felt I needed to participate here is because I’m part of the South Africa IGF. Probably, in line with what my colleague was talking about, you might have to look at how we are dealing with the IGF in South Africa: we open the curtain with youth. What that does is give them traction across the whole program, and it makes them participate right through the program. That focus is informed by our understanding of the sacrosanct role youth play in everything that we do, because the youth are the ones that must be made, through social cohesion, to understand that emerging technologies for Africa must emerge from Africans for Africa. And that is part of the indoctrination that we need to do as part of data sovereignty and as part of AI sovereignty. Yesterday, we were in a session where we were talking about all the capabilities that are coming from other jurisdictions into the South, and it is important for the youth to be able to understand what that means in terms of territorial integrity. I happen to be in the military, so for me territorial integrity is non-negotiable. And for as long as our youth have not grasped what that means in terms of technologies, systems and infrastructure, because we do have youth that has brains, youth that understands technology, and in the deep rural areas you have youth with so much creativity. We have youth that is able to generate energy from the bark of trees, as we speak. But if we don’t harness that by making sure that they are at the forefront and are made to understand that the future is in them, we will miss the game. But my question then is: how do we ensure that our youth are able to utilize the skill sets that they have to improve the home ground? Because for me, the challenge we have at this point is that our youth acquire the skills and then go and use them somewhere else. Europe is full of youth from South Africa. How do we enable the youth to act globally, yes, but ensure that they embrace where they come from, so that we are able to improve even the LLMs that we’re talking about? Because those LLMs must speak to our cultural heritage, and that cultural heritage is embedded in those engineers who are currently in Brazil, in Asia, in Europe, in Australia, but who are from South Africa. How do we deal with that? Thank you.

Mariana Rozo-Paz: Thank you very much for that question. I want to turn it back to our speakers to see if any of you would like to answer one or both of the questions, or if you have any insights, anything you’d like to share, any thoughts regarding those very thoughtful questions.

Celiane Pochon: I think maybe, if I may, just without answering the question, but just thanking everyone for all of these statements, questions, interrogations and provocative thoughts. At least for me, these are things I will take back and try to reflect on. I can’t answer them in a fully thought-out manner right now, but they’re definitely things I will take back with me, think about and try to act on in our everyday work.

Mariana Rozo-Paz: Thank you for that. Jenny, João, anything you’d like to share? Sophie, would you like to share that out loud, potentially?

Sophie Tomlinson: Yeah, sure. No, thank you so much, everyone, for these insights, and Nosipo for what you were saying. I’m just sharing a study in the chat that came out this week and looked at the types of data that are building AI right now. I think some of the findings are things that we know, but it’s nice to have concrete evidence emerging to show this: it’s pretty shocking how 90 percent of the data sets for AI are coming from Europe and North America and less than 4 percent are coming from Africa right now. So this speaks to the points on the need for inclusive approaches, and also to the point on intersectionality: how is AI actually going to fit the needs of and resonate with young women in Africa, for example, or, to the point of the speaker from UN Women, what about young disabled women from a rural part of Africa? If we can’t even get the data that’s built into AI right now to reflect the whole world, we’re in deep trouble. So I think this is something to further discuss.

Mariana Rozo-Paz: I agree, and I absolutely love that, Sophie. In a way, that is the big challenge that we have right now: coming back to Celiane’s point at the beginning, how can we build technologies in a way that fosters trust? But in order to foster trust, we need to think about all of these intersectionalities, all of these communities, and how they relate in the end. We have this very big percentage of the population worldwide being youth, knowing that it is a very diverse pool of people, not just young people with the same level of privileges, opportunities and challenges. The youth community is actually very diverse, not only worldwide, but even within one country: we cannot categorize youth as just, I don’t know, Swiss youth, because there are very different and diverse communities within one country. And this is something for us to keep thinking about, so thank you for bringing it up. I wanted to ask one last question to the speakers before we wrap up, and I think this connects to everything that we have been discussing regarding incentives and the need to raise awareness among this adult community, which can be policymakers, governments, private companies or international organizations, about the importance of youth voices and the need to enhance their participation in policies and the design of data-driven technologies. So I’d like to ask our speakers what incentives are needed to raise awareness among this community about the importance of involving youth, and whether you have any examples of efforts that could be implemented by different types of stakeholders to make sure that, in the end, we can really build technologies on this foundation of trust, because I know that this is pretty much one of the challenges that we’ve been discussing: how can we really raise awareness and have the right incentives? So, João, why don’t we start with you with that question?

João Moreno Falcão: I’m sorry, I had to help with something. I need one second to think.

Mariana Rozo-Paz: No worries, no worries. Maybe, Jenny, would you like to take that question first regarding incentives to the adult community?

Jenny Arana: Yes. Are you hearing me well, Mariana?

Mariana Rozo-Paz: Yes, we can.

Jenny Arana: Okay, I lost you for a little bit, but thank you so much. So, about the incentives. I think that to really foster meaningful recognition of youth contributions to the data economy, it is truly essential to provide targeted incentives that bridge generational gaps in the understanding and engagement of youth. First, policymakers, governments and the private sector really have to acknowledge the unique expertise, innovative perspectives and unique views that youth bring, and by highlighting their creativity, insight and perspectives, stakeholders can really recognize young people as valuable contributors to the digital transformation. One way is to institutionalize youth participation; that’s a crucial step. This can be achieved by integrating young voices into governance structures, policy working groups, committees and advisory boards, but of course we have to avoid these mechanisms becoming something where youth participation and youth themselves are tokenized; they should actually become a core part of decision-making processes. It’s also important, I think, to think about funding and resources specifically for youth-led initiatives, to really provide a tangible incentive for the active involvement of these youth networks. I think that showcasing success stories of youth-led initiatives, and we have all shared some from the different organizations that we represent and the different work that we do, can really serve as powerful evidence of the positive impact of youth involvement, and highlighting these achievements should inspire confidence among what we’re calling the adult community and encourage broader support for integrating youth perspectives into the data economy. So various stakeholders can take concrete actions to empower youth and ensure the inclusion of this demographic in shaping the data economy. And from the side of international organizations, we can make efforts to foster this intergenerational collaboration and equip young people with the knowledge and the experience that is required, ultimately, to influence global digital policies and the conversation on digital development as a whole. So thank you so much, Mariana.

Mariana Rozo-Paz: That was great. Thank you so much, Jenny. So, Celiane, I’ll turn it over to you.

Celiane Pochon: Yes, thank you. And I’ll build on what Jenny just said, because it really inspires me. I think we also need to highlight the intergenerational impact of digital decisions and provide clear examples of how youth participation enhances innovation, equity, confidence and positive outcomes, not only focusing on the challenges and maybe the more negative side, which is also very important, but also seeing how we can highlight innovation. Everything I just mentioned could speak to the adult community and help them decide in what way they want to shape the policies and the different designs for the technologies. And one thing the youth community has, which is very important, is that we’re aware of topical issues in the world and the problems the world faces. We have a very raised awareness of social justice, equality and inequality, and we can help keep this at the forefront of new policies, to have this aspect taken into account. I think young people advocate for very important values, and we can really put them at the forefront. Ensuring youth voices are heard in the discussions and in the development is ensuring that the future generation has access to the knowledge and the tools to continue innovating while preserving core values such as human rights. And I think this approach of including youth now, for the future, can only lead to a fairer, more innovative and more equitable data economy that can benefit everyone, now but also in the future. Thank you.

Mariana Rozo-Paz: Thank you. Thank you, Celiane. That’s very, very valuable. João, over to you.

João Moreno Falcão: Thank you. So, yeah, I will speak complementing again what the other panelists have already said: we need to bring awareness about the advantages of having youth on board in the decision-making and in the policymaking about the digital economy, because we can bring novelty, we can bring other points of view that are very important to develop specific, well-directed policy. So, in this sense, having more diverse participation matters, and when we are thinking of developing projects for youth, they are the group that best understands what they are dealing with. So it’s definitely a group that needs to be included in this process.

Mariana Rozo-Paz: You’re right. Thank you, João. Well, I want to thank everybody for joining us, and thank particularly our speakers, who have been amazing in sharing their work and insights. Thank you for agreeing to join us, and thank you everybody for participating. Just to wrap up, I’d like to ask one final mini question to our speakers: could you please share with us, in maybe 30 seconds to one minute each, one final takeaway, message or highlight that you’d like to leave people with? And with that, we can wrap up. I don’t know if anybody would like to start, actually.

Celiane Pochon: Maybe I can jump in. I think for me, something that was very valuable is that we also need, perhaps, to go back to the core question: why are youth voices important? Maybe we’re already two steps ahead of tackling these very important questions, so maybe for the next sessions we should start back from the basics and really state why youth voices are important. And intersectionality has really been brought up a lot in this discussion, so we should also see how that can be brought into the conversation. So I think we’re already two steps ahead, and maybe we should include more people in these starting, basic questions. But I for sure have gained a lot of insights, this has been a very, very good discussion, and I want to thank everybody for that.

Mariana Rozo-Paz: Thank you, Celiane. Thanks to you for your wonderful contributions. Jenny, do you want to go next?

Jenny Arana: Yes, thank you, Mariana. I just think that the issues we’ve touched on today, AI, education, skills, etc., are not just challenges; they are also opportunities for innovation. And youth not only bring the fresh perspectives that we were talking about, but also a sense of urgency to these issues, to the needs that we have right now. But we also need to ensure that the voices of youth are not only heard, but integrated into governance structures, into policy discussions and into the design of data-driven technologies. And to raise awareness among this adult community, it is essential to recognize the values, perspectives and contributions of youth, invest in their initiatives and create platforms where they can lead. It’s important to highlight youth-driven solutions to bridge the gap between policy and implementation, and we must stress the need for multi-stakeholder collaborations involving governments, the private sector, civil society and these youth organizations. We want them at the center of this discussion, so I think this is my takeaway, and I hope that we can continue more of these fruitful discussions in the future. Thank you, and thank you all.

Mariana Rozo-Paz: Thanks to you, Jenny, this is amazing, and João, over to you, you have the big responsibility of wrapping it up.

João Moreno Falcão: Okay, so, to me, the main takeaway here is how we need to broaden our reach. We are discussing here how to include young people in the digital economy and how to do this, so I think for our next discussion, our next step is to get the people involved that we are talking about. We have a lot of interesting and very important initiatives here that could also be part of this, so we can include the people that we work with, I believe.

Mariana Rozo-Paz: Thank you, João, that’s very good. So thank you everyone again for joining us, thank you to the organizers and to the IGF for helping us host this, and, as one of our contributors mentioned today, let’s keep pushing to make sure that these sessions are scheduled earlier in the program, and let’s also advocate to hopefully have much more funding to keep leading these initiatives with very diverse youth communities. Thank you again so much for joining us. Have a nice rest of your day, whether at the IGF or wherever you are in the world. Wishing you a very good rest of the IGF and happy holidays to everybody who’s taking some time off, because I know that it’s pretty much Christmas now. So thank you everyone, thank you so much for the organization, and happy holidays to everyone. Bye everybody.

Mariana Rozo-Paz

Speech speed

151 words per minute

Speech length

4223 words

Speech time

1677 seconds

Youth are early adopters and innovators of digital technologies

Explanation

Mariana Rozo-Paz argues that youth are at the forefront of adopting and innovating with digital technologies. This positions them as key stakeholders in the digital transformation process.

Evidence

Youth are described as ‘the early adopters, the innovators’ in the context of digital technologies.

Major Discussion Point

Importance of youth participation in data governance and digital policy

Agreed with

Jenny Arana

João Moreno Falcão

Celiane Pochon

Agreed on

Importance of youth participation in digital governance

Youth face unique vulnerabilities in the digital age

Explanation

Rozo-Paz highlights that young people are particularly vulnerable to risks associated with digital technologies. This includes issues such as social media dependency and online abuse.

Evidence

Mentions of ‘risks of social media dependency and online abuse’ and youth ‘being disenfranchised as data subjects’.

Major Discussion Point

Importance of youth participation in data governance and digital policy

Youth are disconnected from decision-making spaces

Explanation

Rozo-Paz points out that despite being heavily impacted by digital transformation, youth are often excluded from the decision-making processes that govern these technologies. This creates a gap in representation and understanding.

Evidence

Statement that youth are ‘paradoxically the most disconnected from the decision-making spaces that govern those technologies’.

Major Discussion Point

Challenges in engaging youth in data governance

Create pathways for youth from all regions to participate

Explanation

Rozo-Paz emphasizes the importance of creating opportunities for young people from diverse geographical regions to engage in digital policy discussions. This approach ensures a more inclusive and globally representative youth voice in shaping the future of digital technologies.

Major Discussion Point

Strategies for meaningful youth engagement

Agreed with

Jenny Arana

João Moreno Falcão

Celiane Pochon

Agreed on

Need for inclusive and diverse youth engagement

Contextualize data solutions for diverse youth communities

Explanation

Rozo-Paz stresses the need for data solutions that are tailored to the specific contexts of different youth communities. This approach recognizes the diversity within youth populations and ensures that technological solutions are relevant and effective for various groups.

Major Discussion Point

Need for context-specific and inclusive approaches

Jenny Arana

Speech speed

127 words per minute

Speech length

2051 words

Speech time

964 seconds

Youth have fresh perspectives and creativity to contribute

Explanation

Jenny Arana emphasizes that young people bring innovative viewpoints and creative ideas to discussions on digital technologies. Their unique perspectives can lead to more effective and relevant solutions.

Major Discussion Point

Importance of youth participation in data governance and digital policy

Agreed with

Mariana Rozo-Paz

João Moreno Falcão

Celiane Pochon

Agreed on

Importance of youth participation in digital governance

Differed with

Melody Musoni

Differed on

Definition and scope of youth

Youth are catalysts for innovation and social movements

Explanation

Arana argues that young people have historically been at the forefront of social movements and driving innovation. In the context of digital technologies, they can play a crucial role in advocating for rights and pushing for positive change.

Evidence

Reference to youth being ‘at the forefront of social movements and advocating for the rights of everyone’.

Major Discussion Point

Importance of youth participation in data governance and digital policy

Agreed with

Mariana Rozo-Paz

João Moreno Falcão

Celiane Pochon

Agreed on

Importance of youth participation in digital governance

Youth have unique expertise and innovative perspectives

Explanation

Arana highlights that young people possess specific knowledge and viewpoints that are valuable in shaping digital policies. Their expertise, particularly as digital natives, can contribute significantly to discussions on technology governance.

Major Discussion Point

Importance of youth participation in data governance and digital policy

Agreed with

Mariana Rozo-Paz

João Moreno Falcão

Celiane Pochon

Agreed on

Importance of youth participation in digital governance

Institutionalize youth participation in governance structures

Explanation

Arana suggests that youth participation should be formalized within governance structures. This involves integrating young voices into policy working groups, committees, and advisory boards to ensure their perspectives are consistently included in decision-making processes.

Evidence

Mention of ‘integrating young voices into governance structures’, such as policy working groups, committees and advisory boards.

Major Discussion Point

Strategies for meaningful youth engagement

Agreed with

Mariana Rozo-Paz

João Moreno Falcão

Celiane Pochon

Agreed on

Need for inclusive and diverse youth engagement

Showcase success stories of youth-led initiatives

Explanation

Arana proposes highlighting successful youth-led projects and initiatives. By demonstrating the positive impact of youth involvement, this approach can inspire confidence and encourage broader support for integrating youth perspectives into the digital economy.

Major Discussion Point

Strategies for meaningful youth engagement

João Moreno Falcão

Speech speed

100 words per minute

Speech length

837 words

Speech time

501 seconds

Digital literacy gap between using technology and meaningful participation

Explanation

João Moreno Falcão points out that there is a significant difference between casually using technology and understanding how to participate meaningfully in the digital economy. This gap in digital literacy needs to be addressed to ensure effective youth engagement.

Evidence

Statement that ‘it’s very different to use it as a tool to communicate and to use it in a meaningful way to be part of the digital economy’.

Major Discussion Point

Challenges in engaging youth in data governance

Broaden reach to include more diverse youth voices

Explanation

João Moreno Falcão emphasizes the need to expand efforts to include a wider range of youth perspectives in discussions about digital economy and policy. This involves reaching out to and engaging with diverse youth communities to ensure comprehensive representation.

Major Discussion Point

Need for context-specific and inclusive approaches

Agreed with

Mariana Rozo-Paz

Jenny Arana

Celiane Pochon

Agreed on

Need for inclusive and diverse youth engagement

C

Celiane Pochon

Speech speed

143 words per minute

Speech length

1264 words

Speech time

529 seconds

Lack of trust in data governance arrangements

Explanation

Celiane Pochon identifies a lack of trust in current data governance systems as a significant issue. This mistrust can hinder digital inclusion and innovation, particularly among young people who are major data generators.

Evidence

Statement that ‘Without the control over our data we lose trust in institutions and companies and technologies themselves which hinders digital inclusion, innovation’.

Major Discussion Point

Challenges in engaging youth in data governance

Youth are aware of topical issues and advocate for important values

Explanation

Pochon argues that young people have a heightened awareness of current global issues and strongly advocate for values such as social justice and equality. This awareness and advocacy can contribute to more equitable and effective digital policies.

Evidence

Mention of youth having ‘a very raised awareness of social justice, equality, inequality’.

Major Discussion Point

Importance of youth participation in data governance and digital policy

Agreed with

Mariana Rozo-Paz

Jenny Arana

João Moreno Falcão

Agreed on

Importance of youth participation in digital governance

Include youth in multi-stakeholder forums and consultations

Explanation

Pochon suggests involving young people in various multi-stakeholder forums and policy consultations. This approach ensures that youth perspectives are considered in discussions shaping digital governance and policies.

Evidence

Reference to including youth ‘in these multi-stakeholder forums, technology summits, and in all policymaking consultations’.

Major Discussion Point

Strategies for meaningful youth engagement

Agreed with

Mariana Rozo-Paz

Jenny Arana

João Moreno Falcão

Agreed on

Need for inclusive and diverse youth engagement

Enhance data literacy programs in schools and universities

Explanation

Pochon advocates for strengthening data literacy and digital skills programs in educational institutions. This approach aims to equip young people with the knowledge and critical thinking skills necessary to engage meaningfully in discussions about the data economy and digital transformation.

Evidence

Mention of programs at EPFL in Lausanne and ETH in Zurich that ‘focus on building data literacy and critical thinking amongst young people’.

Major Discussion Point

Strategies for meaningful youth engagement

E

Emad Karim

Speech speed

156 words per minute

Speech length

538 words

Speech time

205 seconds

Intersectionality and diversity within youth populations

Explanation

Emad Karim highlights the importance of recognizing the diverse experiences and needs within youth populations. This includes considering factors such as gender, disability, and geographical location when addressing youth issues in digital spaces.

Evidence

Reference to ‘young women with disability in a rural area’ as an example of intersectionality.

Major Discussion Point

Challenges in engaging youth in data governance

Underrepresentation of young women in technology sectors

Explanation

Karim points out the significant underrepresentation of young women in technology-related fields. This gender disparity in the digital sector leads to unequal opportunities and representation in shaping the future of technology.

Evidence

Mention of studies showing women are more likely to be affected by job loss due to digital transformation and less likely to use AI in the workplace.

Major Discussion Point

Challenges in engaging youth in data governance

G

Gregory Duke Dey

Speech speed

131 words per minute

Speech length

404 words

Speech time

184 seconds

Address digital divide and inclusion challenges in rural areas

Explanation

Gregory Duke Dey emphasizes the need to tackle the digital divide, particularly in rural areas. This involves not only improving connectivity infrastructure but also designing tools for digital literacy and education that are accessible to rural communities.

Major Discussion Point

Need for context-specific and inclusive approaches

O

Online Audience

Speech speed

133 words per minute

Speech length

526 words

Speech time

236 seconds

Brain drain of skilled youth from developing countries

Explanation

The online audience member raises concerns about the migration of skilled young people from developing countries to more developed regions. This brain drain can hinder the development of local digital economies and technologies in the youth’s countries of origin.

Evidence

Statement that ‘Europe is full of youth from South Africa’.

Major Discussion Point

Challenges in engaging youth in data governance

Ensure technologies reflect local cultural heritage

Explanation

The online audience member emphasizes the importance of developing technologies that incorporate and respect local cultural heritage. This approach ensures that digital solutions are culturally relevant and meaningful to diverse communities.

Evidence

Mention that ‘LLMs must talk to our cultural heritage’.

Major Discussion Point

Need for context-specific and inclusive approaches

Agreements

Agreement Points

Importance of youth participation in digital governance

Mariana Rozo-Paz

Jenny Arana

João Moreno Falcão

Celiane Pochon

Youth are early adopters and innovators of digital technologies

Youth have fresh perspectives and creativity to contribute

Youth are catalysts for innovation and social movements

Youth have unique expertise and innovative perspectives

Youth are aware of topical issues and advocate for important values

All speakers emphasized the crucial role of youth in shaping digital policies and technologies due to their unique perspectives, innovative thinking, and awareness of current issues.

Need for inclusive and diverse youth engagement

Mariana Rozo-Paz

Jenny Arana

João Moreno Falcão

Celiane Pochon

Create pathways for youth from all regions to participate

Institutionalize youth participation in governance structures

Broaden reach to include more diverse youth voices

Include youth in multi-stakeholder forums and consultations

Speakers agreed on the importance of creating inclusive mechanisms to ensure diverse youth participation in digital governance across different regions and forums.

Similar Viewpoints

Both speakers emphasized the importance of empowering youth through education and highlighting successful youth initiatives to encourage their participation in digital governance.

Jenny Arana

Celiane Pochon

Showcase success stories of youth-led initiatives

Enhance data literacy programs in schools and universities

Unexpected Consensus

Addressing trust issues in data governance

Celiane Pochon

Mariana Rozo-Paz

Lack of trust in data governance arrangements

Youth face unique vulnerabilities in the digital age

While most speakers focused on youth participation, Pochon and Rozo-Paz unexpectedly highlighted the importance of addressing trust issues and vulnerabilities in data governance, which is crucial for meaningful youth engagement.

Overall Assessment

Summary

The speakers largely agreed on the importance of youth participation in digital governance, the need for inclusive engagement strategies, and the value of youth perspectives in shaping digital policies. There was also consensus on the need to address challenges such as digital literacy and trust in data governance.

Consensus level

High level of consensus among speakers, with strong agreement on core issues. This suggests a unified approach to youth engagement in digital governance, which could lead to more effective policies and initiatives for involving youth in shaping the digital future.

Differences

Different Viewpoints

Definition and scope of youth

Melody Musoni

Jenny Arana

Define youth, because I’m coming from a country where people who are in their 70s, they still qualify as youth.

Youth have fresh perspectives and creativity to contribute

There was a disagreement on how to define ‘youth’, with Melody Musoni pointing out that the definition varies greatly between countries, while Jenny Arana focused on youth as a group with fresh perspectives, implying a younger demographic.

Unexpected Differences

Digital literacy assumptions

João Moreno Falcão

Jenny Arana

Digital literacy gap between using technology and meaningful participation

Youth are catalysts for innovation and social movements

While Jenny Arana emphasized youth as catalysts for innovation, João Moreno Falcão unexpectedly highlighted a significant digital literacy gap among youth, challenging the assumption that all young people are equally capable of meaningful participation in the digital economy.

Overall Assessment

Summary

The main areas of disagreement centered around the definition of youth, approaches to youth inclusion in governance, and assumptions about youth digital literacy.

Difference level

The level of disagreement was moderate. While speakers generally agreed on the importance of youth participation, they had different perspectives on how to achieve it effectively. These differences highlight the complexity of engaging youth in data governance and the need for nuanced, context-specific approaches.

Partial Agreements

Partial Agreements

All speakers agreed on the need for youth inclusion in governance and policy-making, but had different approaches. Jenny Arana suggested institutionalizing youth participation, Celiane Pochon advocated for including youth in existing forums, while João Moreno Falcão emphasized broadening the reach to more diverse youth voices.

Jenny Arana

Celiane Pochon

João Moreno Falcão

Institutionalize youth participation in governance structures

Include youth in multi-stakeholder forums and consultations

Broaden reach to include more diverse youth voices

Takeaways

Key Takeaways

Resolutions and Action Items

Unresolved Issues

Suggested Compromises

Thought Provoking Comments

Trust in data governance arrangements and empowering young people and individuals in the digital space is key for the data society we want to have. Without the control over our data we lose trust in institutions and companies and technologies themselves which hinders digital inclusion, innovation and all the other topics that Jenny and Joao already mentioned.

speaker

Celiane Pochon

reason

This comment highlights trust as a fundamental issue underlying many challenges with youth engagement in the digital economy. It connects individual empowerment to broader societal outcomes.

impact

This shifted the discussion to focus more on trust as a core issue, leading to later comments about building technologies in a way that fosters trust across diverse communities.

One child from his birth to his 13th birthday will gather 72 million pieces of personal data. So before you grow, you will already have a huge digital footprint, and we need to bring awareness about this, because they aren’t able to agree on it, and the implications of this amount of data will be seen.

speaker

João Moreno Falcão

reason

This statistic provides a striking illustration of how pervasive data collection is for youth, even before they can consent. It raises important ethical questions.

impact

This comment deepened the conversation around youth data rights and privacy, leading to further discussion about digital literacy and empowerment.

Define youth, because I’m coming from a country where people who are in their 70s, they still qualify as youth. And then living in Europe, normally it’s between 8 to 18 years to 35.

speaker

Melody Musoni

reason

This question challenges the fundamental assumptions of the discussion by pointing out how ‘youth’ is defined differently across cultures.

impact

This led to a more nuanced discussion about how youth is defined in different contexts and the need to be specific when discussing youth engagement.

[There is an assumption that] young women are also part of that, which is not true. I think there are a lot of amazing women leading in the digital spheres, they’re fighting really hard to find the space, but the sector is not really giving them that full space or an equal space.

speaker

Emad Karim

reason

This comment introduces an important intersectional perspective, highlighting the specific challenges faced by young women in the digital sphere.

impact

This shifted the conversation to consider more deeply the intersectionality of youth issues, particularly gender disparities in tech.

90 percent of the data sets for AI are coming from Europe and North America and less than 4 percent coming from Africa right now.

speaker

Sophie Tomlinson

reason

This statistic starkly illustrates the lack of global representation in AI training data, which has profound implications for AI’s applicability and fairness globally.

impact

This comment deepened the discussion on the need for inclusive approaches in AI development and highlighted the importance of diverse data representation.

Overall Assessment

These key comments shaped the discussion by broadening its scope from general youth engagement to more specific issues of trust, data rights, cultural definitions of youth, intersectionality, and global representation in tech development. They moved the conversation from abstract concepts to concrete challenges and potential solutions, emphasizing the complexity and diversity of youth experiences in the digital economy. The discussion evolved to recognize the need for more nuanced, inclusive approaches that consider various cultural contexts and intersectional identities when addressing youth engagement in digital policy and technology development.

Follow-up Questions

How to define youth in the context of digital inclusion and data governance?

speaker

Melody Musoni

explanation

Different regions and organizations define youth differently, with definitions ranging from 8-18 years up to 35, or even 70, in some countries. A clear definition is important for targeted policies and initiatives.

Why is it important to focus specifically on youth in discussions about data and digital inclusion?

speaker

Melody Musoni

explanation

Understanding the unique aspects and importance of youth engagement can help in articulating and achieving objectives related to youth inclusion in the digital economy.

How can we engage rural communities more effectively in digital inclusion efforts?

speaker

Melody Musoni

explanation

Many discussions about marginalized communities are superficial. There’s a need to explore specific strategies for engaging rural youth, such as through secondary schools.

How can we accelerate progress in addressing the gender gap in digital transformation, particularly for young women?

speaker

Emad Karim

explanation

Young women face specific challenges in the digital sphere, including job loss due to digital transformation, lower likelihood of using AI in the workplace, and increased risk of technology-facilitated violence.

How can we ensure that youth utilize their skills to improve their home countries rather than migrating to use their talents elsewhere?

speaker

Advocate Zainouba

explanation

There’s a need to address the brain drain of skilled youth from countries like South Africa to ensure local development and preservation of cultural heritage in technological advancements.

How can we create AI and language models that reflect local cultural heritage?

speaker

Advocate Zainouba

explanation

Current AI systems often don’t reflect diverse cultural perspectives, particularly from regions like Africa. There’s a need to involve local talent in developing culturally relevant AI.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #198 Advancing IoT Security, Quantum Encryption & RPKI

WS #198 Advancing IoT Security, Quantum Encryption & RPKI

Session at a Glance

Summary

This session at the Internet Governance Forum focused on the intersection of quantum encryption, Resource Public Key Infrastructure (RPKI), and IoT security in shaping the future of internet security. Experts discussed how quantum technologies are revolutionizing fields like communication and sensing, with potential applications in healthcare, defense, and environmental monitoring. However, they also highlighted the threat that quantum computing poses to current cryptographic standards, emphasizing the urgent need to develop and implement quantum-resistant encryption methods.

The discussion then shifted to RPKI, a security extension for internet routing. Speakers explained its importance in preventing route hijacks and misconfigurations, while noting challenges in adoption and implementation. They stressed the need for widespread adoption to maximize RPKI’s benefits and protect against routing vulnerabilities.

The session also touched on IoT security, particularly the challenges of implementing robust security measures in resource-constrained devices. Experts emphasized the need for lightweight, quantum-resistant protocols for IoT devices to ensure their protection in the face of advancing quantum capabilities.

A significant portion of the discussion focused on the global disparities in adopting these security measures, particularly in the Global South. Speakers highlighted the need for capacity building, resource allocation, and policy harmonization to ensure equitable adoption of advanced security protocols across different regions.

The session concluded by underscoring the critical importance of collaboration among stakeholders in addressing the challenges and opportunities presented by these emerging technologies. Participants agreed that proactive measures and international cooperation are essential to secure the digital ecosystems of the future against evolving threats.

Keypoints

Major discussion points:

– The potential impacts and security implications of quantum computing on current cryptographic systems

– The importance of implementing post-quantum cryptography and quantum-safe security measures proactively

– The role of RPKI (Resource Public Key Infrastructure) in securing internet routing and challenges with its adoption

– The need for capacity building and resources, especially in developing regions, to implement advanced security measures

– Considerations for securing IoT devices against quantum and other emerging threats

Overall purpose:

The goal of this discussion was to explore the intersections between quantum technologies, internet routing security (RPKI), and IoT security. The speakers aimed to highlight current developments, challenges, and future considerations in these areas to help prepare for a more secure digital ecosystem.

Tone:

The tone was primarily informative and forward-looking, with speakers providing technical explanations as well as policy and practical considerations. There was a sense of urgency in addressing these issues proactively, balanced with acknowledgment of the challenges involved, especially for regions with fewer resources. The tone remained consistent throughout, maintaining a focus on collaboration and the importance of multi-stakeholder efforts in addressing these complex technological challenges.

Speakers

– Nicolas Fiumarelli: Moderator

– Maria Luque: Expert in technology foresight, corporate diplomacy and quantum technologies; Managing Director of the Future of Literacy Group

– Sofia Silva Berenguer: RPKI Programme Manager at APNIC

– Wataru Ohgai: Representative from JPNIC with expertise in RPKI operations

– Athanase Bahizire: Online engagement assistant

Additional speakers:

– Yug Desai: Rapporteur from South Asian University

– Wout de Natris: Consultant for the Dynamic Coalition on Internet Standards, Security and Safety

– Michael Nelson: Commenter (mentioned in chat)

Full session report

Revised Summary of IGF Session on Quantum Encryption, RPKI, and IoT Security

Introduction:

This Internet Governance Forum session explored the critical intersection of quantum encryption, Resource Public Key Infrastructure (RPKI), and IoT security in shaping the future of internet security. Experts from various fields discussed current developments, challenges, and future considerations to prepare for a more secure digital ecosystem.

Quantum Technologies and Cybersecurity:

Maria Luque, an expert in technology foresight, opened the session by highlighting the rapid advancement of quantum technologies and their implications for cybersecurity. She emphasized that quantum sensing and communications are maturing quickly, with potential applications in healthcare, defense, environmental monitoring, and space exploration. Luque painted a picture of a future where global communications, both terrestrial and space-based, are integrated through optical networks.

The discussion emphasized the urgent threat that quantum computing poses to current cryptographic standards. Nicolas Fiumarelli, the moderator, stressed that post-quantum cryptography standards need to be implemented now, rather than waiting for quantum computers to become a reality. This sentiment was echoed by Sofia Silva Berenguer, who highlighted the vulnerability of current cryptographic systems to quantum computing threats.
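
To make that urgency concrete, the sketch below shows what a post-quantum key establishment can look like in application code today. It is a minimal illustration only, assuming the Open Quantum Safe liboqs-python bindings are installed; the mechanism name (“ML-KEM-768”, exposed as “Kyber768” in older builds) and the exact API are assumptions about that library, not something presented in the session.

```python
# Minimal sketch of post-quantum key establishment with a NIST-selected KEM.
# Assumes the Open Quantum Safe liboqs-python bindings; names may vary by version.
import oqs

KEM_ALG = "ML-KEM-768"  # assumed mechanism name; older builds call it "Kyber768"

with oqs.KeyEncapsulation(KEM_ALG) as receiver:
    public_key = receiver.generate_keypair()  # receiver publishes its public key

    with oqs.KeyEncapsulation(KEM_ALG) as sender:
        # Sender encapsulates a fresh shared secret against the receiver's public key.
        ciphertext, secret_at_sender = sender.encap_secret(public_key)

    # Receiver recovers the same secret from the ciphertext using its private key.
    secret_at_receiver = receiver.decap_secret(ciphertext)

assert secret_at_sender == secret_at_receiver
# The shared secret would then key a symmetric cipher (e.g. AES-GCM); in practice
# it is usually combined with a classical exchange in a "hybrid" configuration.
```

In deployed systems this kind of exchange is typically run in hybrid mode alongside a classical key exchange, which matches Luque’s remark in the transcript that “the present is hybrid”.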

Luque underscored the immediacy of the quantum threat, stating, “My message today is that not only a cryptographically relevant quantum computer and its advent is a threat to this future that I just pointed out to you, to how we leverage this data for good, not only Harvest Now and Decrypt Later is a threat to this vision alone, the threat is up today.” This statement shifted the discussion towards more urgent consideration of quantum-safe security measures and their implementation.

RPKI Adoption and Challenges:

The conversation then moved to the importance of RPKI in securing internet routing. Wataru Ohgai, representing JPNIC, reported that global IPv4 ROA coverage has exceeded 50%, indicating progress in RPKI adoption. He also noted that Tier 1 networks like Google are pushing for RPKI readiness, which is encouraging wider adoption.
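
For readers unfamiliar with what a ROA actually does, the toy sketch below classifies a BGP announcement as Valid, Invalid, or NotFound in the spirit of RFC 6811. The prefixes and AS numbers are made-up documentation values; this is an illustration only, not any RIR’s or router vendor’s implementation.

```python
from ipaddress import ip_network

# Hypothetical ROAs: (prefix, maxLength, authorized origin ASN)
ROAS = [
    ("192.0.2.0/24", 24, 64500),
    ("198.51.100.0/22", 24, 64501),
]

def rov_state(announced_prefix: str, origin_asn: int) -> str:
    """Classify a BGP announcement following the RFC 6811 origin-validation logic."""
    route = ip_network(announced_prefix)
    covering = [
        (ip_network(p), max_len, asn)
        for p, max_len, asn in ROAS
        if ip_network(p).version == route.version and route.subnet_of(ip_network(p))
    ]
    if not covering:
        return "NotFound"  # no ROA covers this prefix at all
    for _net, max_len, asn in covering:
        if asn == origin_asn and route.prefixlen <= max_len:
            return "Valid"  # a covering ROA matches both origin and prefix length
    return "Invalid"       # covered by a ROA, but origin or prefix length mismatches

print(rov_state("192.0.2.0/24", 64500))    # Valid
print(rov_state("192.0.2.0/25", 64500))    # Invalid: more specific than maxLength
print(rov_state("192.0.2.0/24", 64666))    # Invalid: wrong origin AS
print(rov_state("203.0.113.0/24", 64500))  # NotFound: no covering ROA
```

A router that drops announcements classified as Invalid is performing route origin validation; the adoption challenge described in the session is about getting enough operators to both publish ROAs and enforce this check.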

Sofia Silva Berenguer highlighted that RPKI adoption faces a collective action problem, particularly for smaller network operators. She discussed the RPKI program and its challenges, emphasizing the need for capacity building and support for smaller ISPs. Berenguer also mentioned the development of ASPA (Autonomous System Provider Authorization) as a complementary security measure.

Ohgai provided a real-world example of vulnerabilities in current security systems, stating, “ROA revalidation is done based on what is written in ROA. So the trust in ROA is a considerably big issue. This year, one of the large network operator in the world located in Spain, which is a RIPE region, had their online account used to creating or modifying ROA taken by bad actor.” This comment grounded the theoretical discussion in practical concerns, leading to more focus on operational challenges and the need for robust authentication methods.

IoT Security and Global South Challenges:

Nicolas Fiumarelli highlighted the importance of creating lightweight post-quantum cryptography protocols for IoT devices, given their resource constraints. This led to a discussion on the specific challenges faced by IoT devices in implementing advanced security measures.
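
One way to see why “lightweight” matters is to compare the raw key and ciphertext sizes of candidate algorithms, since every extra kilobyte costs flash, RAM, and radio time on a constrained device. The sketch below prints those sizes as reported by liboqs-python; the get_enabled_kem_mechanisms() helper and the details dictionary are assumptions about that library’s current interface and may differ between versions.

```python
# Sketch: compare the footprint of available post-quantum KEMs, a first-order
# concern when selecting algorithms for resource-constrained IoT devices.
# Assumes liboqs-python; helper and field names may differ between versions.
import oqs

for name in oqs.get_enabled_kem_mechanisms():
    with oqs.KeyEncapsulation(name) as kem:
        d = kem.details
        print(f"{name:>24}  public key: {d['length_public_key']:>6} B  "
              f"ciphertext: {d['length_ciphertext']:>6} B  "
              f"NIST level: {d['claimed_nist_level']}")
# Smaller public keys and ciphertexts translate into less memory and less
# airtime per handshake, which is why lightweight parameter sets matter for IoT.
```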

Athanase Bahizire, the online engagement assistant, stressed the need for harmonization of cybersecurity policies across regions. He pointed out the challenges faced by developing regions, particularly in Africa, in implementing advanced security measures. Bahizire commented, “We tend not to take very seriously cryptography as what I was giving examples whereby you know putting in place to filter authentication in your database and some other very little best practices. We are not adopting them. We are waiting for when it’s like mandatory or it’s like as a regulation to adopt it’s what is not really a good practice and it doesn’t have that much in securing our system.” This observation shifted the conversation towards discussing capacity building and the need for proactive security measures, especially in the Global South.

Future Internet Security Measures:

Looking towards the future, the speakers agreed on several key points. Ohgai emphasized the need to develop quantum-safe RPKI protocols. Fiumarelli highlighted the importance of post-quantum cryptography standards, mentioning CRYSTALS-Dilithium, CRYSTALS-Kyber, and SPHINCS+ as examples discussed during the Q&A session.
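
Because RPKI objects such as ROAs are themselves signed data, a quantum-safe RPKI profile would ultimately come down to replacing today’s RSA signatures with one of these post-quantum schemes. The sketch below signs and verifies a toy payload with ML-DSA (the standardized form of CRYSTALS-Dilithium), again assuming the liboqs-python bindings; the payload is a stand-in string, not a real ROA encoding, and no such RPKI profile has been standardized.

```python
# Sketch: sign and verify with a NIST post-quantum signature scheme, the kind of
# primitive a quantum-safe RPKI profile would need in place of today's RSA keys.
# Assumes liboqs-python; "ML-DSA-65" may appear as "Dilithium3" in older builds.
import oqs

SIG_ALG = "ML-DSA-65"
payload = b"AS64500 may originate 192.0.2.0/24 up to maxLength 24"  # toy stand-in only

with oqs.Signature(SIG_ALG) as signer:
    public_key = signer.generate_keypair()
    signature = signer.sign(payload)

with oqs.Signature(SIG_ALG) as verifier:
    assert verifier.verify(payload, signature, public_key)
    print("post-quantum signature verified")
```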

The role of the Dynamic Coalition on Internet Standards, Security and Safety (IS3C) in supporting work on quantum computing and security was also discussed, as mentioned by Wout de Natris during the Q&A.

A notable point raised during the Q&A was the potential vulnerability of blockchain and Bitcoin to quantum computing attacks, further emphasizing the need for quantum-resistant cryptography across all digital technologies.

Conclusion and Future Directions:

Yug Desai, a rapporteur from South Asian University, emphasized the crucial role of multistakeholder collaboration in addressing these complex security challenges. Nicolas Fiumarelli concluded the session by reiterating the importance of collaboration in tackling the challenges posed by quantum computing and in implementing robust security measures.

The session identified several unresolved issues, including effectively implementing quantum-safe cryptography for resource-constrained IoT devices, overcoming the collective action problem in RPKI adoption, and ensuring equitable adoption of security protocols in regions with limited resources.

Overall, the discussion highlighted the need for proactive measures, international cooperation, and continued research to secure digital ecosystems against evolving threats, particularly those posed by quantum computing advancements. The session underscored the urgency of implementing post-quantum cryptography standards and the importance of capacity building initiatives to ensure global preparedness for the quantum era of cybersecurity.

Session Transcript

Nicolas Fiumarelli: Good morning, everyone. Good afternoon. Good evening, wherever you are in the world. Let’s proceed. Okay. Welcome to our session on quantum encryption, RPKI, that is Resource Public Key Infrastructure, and IoT, Internet of Things, security. We are going to tackle intersections on these future challenges. My name is Nicolas Fiumarelli. I came from a tiny country that is called Uruguay in South America. I am pleased to serve as the moderator today. Assisting with the online engagement is Athanase Bahizire, ensuring that both virtual and in-person participants are fully integrated in our discussions. The session will tackle three essential pillars for the future of the Internet security. One, the first one is quantum security, which is redefining cryptographic protocols to withstand the power of quantum computing. Just a note here, because this is the only one session about quantum computing in the entire IGF. As you may know, next year is called the year of the quantum, because of the recent advancements on this technology from the different big tech giants around the world. So, this is an important topic for us, and we needed to include it in the IGF. So, the second topic will be on the RPKI, the Resource Public Key Infrastructure, that is about securing the integrity of the Internet routing. You know, this protocol that is used for routing, that is BGP, or the Border Gateway Protocol. And finally, the third topic will be IoT security. We will address unique vulnerabilities and availability of billions of interconnected devices worldwide. So it’s a challenge this session. So our objective is to examine the intersection of these three technologies, right? The challenges and the opportunities, particularly in shaping secure and inclusive digital ecosystems. So the format for today will include individual presentations for our expert panelists, each from each of the topics. We are offering 15 minutes of deep insights into the areas of expertise. So following these initial presentations, we will open the floor for a 30 minutes discussion and questions and answers. One of the ideas is to address policy questions and take input from both on-site participants and virtual participants. And finally, our rapporteur, Yug Desai from the South Asian University, will summarize the session and share some insights about his research on the Internet Engineering Task Force mappings. So let me now introduce our speakers and the flow of the contributions. The first speaker is… Okay. First, first speaker. Okay. Okay, first speaker is Maria Luque. Maria Luque is an expert in technology foresight, corporate diplomacy and quantum technologies. She is also the managing director of the Future of Literacy Group. She has extensive experience in creating cross-national innovation schemes to advance the integration of quantum technologies in strategic sectors. So Maria will begin the session with a presentation on cybersecurity for a quantum future, presenting a comprehensive vision of the place of quantum technologies in our shared future, highlighting developments in post-quantum cryptography and quantum key distribution. So Athanase, you can confirm that Maria is online. Okay, so Maria, the floor is yours. She has a presentation to share, so maybe the technical team will help her to share screen. Maria, can you confirm you can speak? Hello? Okay, while the technical team help us to put Maria on the floor. Okay, we are waiting for the presentation. Good afternoon, can you hear me?

Maria Luque: Yes, we can hear you but we cannot see your presentation, so please wait some seconds. Some technical issues here at the stage. Can you see my screen now? Yes, we can see your screen. If you want, you can also open your camera or if you already open, we’ll tell the technical guys to put you on the screen. Let me try that. First the presentation, then you can hear me. You need to allow me to open my camera. Allow her to open the camera. You can start my presentation and then later they will allow you to. put your camera down. Okay, okay. Thank you. So, good afternoon to all of you. Good afternoon, Nico. Good afternoon to those in the audience. Clever enough to pay attention to this presentation right before the closing of the IGF this year. Thank you for being here. I really want to do a brief exercise to start the session. And this is on pair with you not being able to see me through video, that’s very timely, because I’m going to ask you to close your eyes, if you may, those of you in the audience. I want to paint the picture, why we think about quantum technologies today at the IGF, and go a little bit forward. So, eyes closed. By 2045, the world looks nothing like it does today. Profound transformation has taken place, and all global communications, both terrestrial and space-based, they are somehow integrated through optical networks. It is a rather sophisticated infrastructure, born from decades of collaboration and innovation, and it enables real-time data transfers at great speeds, allowing instantaneous communications across the globe, and into the farthest reaches of our solar system missions. The integration of quantum communications, in this time, has extended our reach even farther, and is facilitating secure, near instantaneous transmissions across Earth to the Moon and Mars. This breakthrough is paving the way for us humans to explore and inhabit even other planets, while advancing how we understand the cosmos. But what’s more important, in this age, mature quantum sensing and computing. computing technologies have helped us unlock a new era of capabilities, of possibilities. We now have distributed quantum sensing networks, and they provide us with extreme precision in environmental monitoring, in space exploration, and early warning systems for disaster in every location. This is enhancing our quality of life and our ability to protect both Earth and our expanding presence in space. Quantum computing now is deeply integrated into this global network, and it enables the confidential processing of highly, highly sensitive data, securing information that is critical to national security, to finance, and to global governance. Our relationship with trust in this era is definitely changed. These technologies have transformed sectors, ranging from healthcare to defense, enabling the secure integration of intelligence, of military and defense efforts, and it’s making it possible to effectively confront asymmetric, cyber, and kinetic threats to our infrastructures and well-being. Well, now those of you who share the audience with us can open your eyes, and you can tell them. This picture of 2045 is a bit of a yes, but it’s one of the possible futures I’d like you to start betting for starting today. Now, we set our eyes on 2045 today, but some nations and cutting-edge RTOs are getting ready for 2030. For example, the way we understand smart cities is changing with the introduction of quantum technologies, such as quantum sensing. 
Quantum sensing is a very mature technology, and it allows us to sense the air. the electromagnetic fields way beyond the scope of today’s parameters. Here at my presentation, you can see a crystal clear bed by TNO in the Netherlands of quantum technologies to help up communities with the energy transition. In this picture, quantum sensors can optimize the efficiency of power grids. They can enhance battery performance and they can improve the detection of leaks in pipelines. And since sensors are abundant in every domain on the energy transition, there are countless opportunities where they can be employed to gather critical data. For example, heat and carbon uptake in industrial environments, to have better models of our reality so that we can choose to make it more sustainable. So if we are lucky, five years from today, most of the critical data that we will gather will be gathered through quantum technologies. And to make that happen, quantum computing and all applications of quantum tech are precise. We mentioned applications for environmental and climate modeling, but we can also think of wearable IoT devices with a quantum sensor transmitting live and critical biomedical data from a soldier to a logistic base. This critical data will become increasingly valuable because they say they’re going to give us a competitive advantage for the industry, a strategic advantage, for example, for defense, or more quality of living, just as we saw. And this data is going to be needed to compute and craft the knowledge models of the future. As we all know that synthetic data is short-lived. So in a few words, the learning curve of future knowledge models, AI models, to compute solutions for the energy. transition, for defense, for field security, depends on this high quality data. So we can expect in the near future that quantum technologies gathered data will be feeding up these models, first through AI power compute, which is happening today, later through quantum plus AI compute, which will be happening in the next five years, and ultimately through quantum computing. That leaves us in a scenario where this exchange coin for our well-being, which is data, is going to circulate all over Earth and space. And my message today is quite different from the one that I gave the last IGF in Kyoto last year. My message today is that not only a cryptographically relevant quantum computer and its advent is a threat to this future that I just pointed out to you, to how we leverage this data for good, not only Harvest Now and Decrypt Later is a threat to this vision alone, the threat is up today. I mean, the current standards for our IT and OT cybersecurity in our industry environments, in our critical infrastructures, these standards and measures are very low-key, very unclear in most cases. Some of them still operate on cybersecurity by obscurity. And the bad news is that it’s only going to get worse, because on the one side, there’s this trend of AI sub-organization of everything that is going to expose our critical infrastructure to more and more blind spots, and also our definition of critical infrastructure is growing in assets. For example, we now have satellites in LEO and ground stations for science and optical communications. 
So, my bet is that to protect the future that we are trying to build, that we are talking about throughout the entire IGF these years, to protect all of our collective investments in AI, in compute, in quantum, in space, a new framework of cybersecurity and networking is essential. And for this, or to make this happen, quantum security is essential. Now this year, we’re going to make it easier, I’m giving a very high-level and zoom-in and zoom-out overview of cybersecurity in the quantum era, given that today is about a multi-stakeholder future for all, I’m going to give you an overview of what we need to focus on today, tomorrow, or let’s say the day after, to unlock the kind of secure communications that we need for the next set of quantum plus AI progress in our industries. Today, the new normal of today has a focus on protection regarding quantum security. This means that we are working on integrating what are called post-quantum cryptography standards and quantum cryptography algorithms, such as those approved by North America’s NIST, into our existing digital and physical infrastructures. Under the belief that under a quantum computer attack, these algorithms will resist for the most part and not unveil the underlying information behind the data. We are working means with the help from the tech industry and building common understanding for this, such as under the GSMA post-quantum crypto task force, our national governments are issuing guidelines to help us start our migration. to post-quantum cryptography frameworks. And most hyperscalers, such as Amazon, Google, Apple, with their iPhones, are introducing them into our daily cloud-based platforms. Also during November this year, we have seen the mandate to work on RSA keys by 2030, by North America’s needs, and some similar statements in the European Union’s Cyber Resilience Act. So today, well, diagnosing the problem is the easy part, and we have done that. But aligning national and international policies is much harder, and we are on those efforts right now. In the meantime, if the future is quantum, as I presented earlier, the present is hybrid. The new normal of tomorrow, two day and three years from now, the focus is to gather very mature advances in quantum communications, such as quantum key distribution, which promises to render the data unusable impact to the interaction with the physical properties of light, to start hybridizing it in classical telco networks and infrastructures. The advances in QGD and quantum communications we shared last year, I can tell that they keep upscaling. For example, this last week in the European Union, we just signed a contract for the IRIS-2 constellation, and this constellation will be ready for optical communication links. So we will ensure with it the space segment of the European quantum communications infrastructure. Regions worldwide are very active in proof of concepts of the integration of QGD into classical telco. networks, for example, and again, MatQCI, the Madrid Quantum Communications Infrastructure, those in the Netherlands, or for example, GovNQ, which is a network in New York. This today, two years from now example of what we need to do to start integrating quantum communications in existing infrastructures is undergoing a lot of challenges. First one is the challenge of standardization, and this might be the second most important message of the session today for me. 
The challenge of working on interoperability of these technologies with existing operators and infrastructures. Also the challenge of starting the substitution of RF for optical communication ground stations globally, or even how to develop quantum memory so that we can make quantum networks with size beyond the regions that we’re working with now. Now the day after tomorrow, and this is the part where we collide with the vision at the start, to be really in 10 to 15 years we have projects such as the Quantum Internet Alliance, which is in the European Union striving to mature quantum internet working capabilities to start deploying these dreamy applications that we started the session with. Distributed quantum sensing networks, decentralized and blind computing of data between actors who don’t necessarily have to trust each other to make joint decisions. Thanks to this technology, you name it. We also have statements from NASA’s CAM program speaking about the US-led global quantum network by 2035, but we don’t have much more than that. And for global benefits to occur for this scenario of 2045 to occur, we need to build these networks. as jointly as possible with interoperability as a key priority. Otherwise, the bright promise of the quantum future will turn into a zero-sum nightmare between societies. We need to start with quantum security today, right where you are. Remember that global investment in quantum reached 42 billion in 2024, and it’s outpacing historical tech projects like the Polar project. Many people speak of a quantum Manhattan project in different countries and nations among them, China. My final message in this very high-level overview that I’m presenting today is that quantum-gathered data is needed for all of the knowledge models that we want to advance with artificial intelligence and high-performance computation. In a very short time, we’re going to deal with very sensitive data in our communication infrastructures and our critical infrastructures affecting us personally. So the time to start investing in quantum security is today, right where you are. Thank you so much, Maria. It was very clear. Your message, I think, is a huge moment we are having now in the era of the internet future. It’s important to see that the first approach is to have… I am in… Sorry, my microphone. Sorry. Okay. Can you hear me? Yes. Okay. Thank you. Thank you.

Nicolas Fiumarelli: Thank you. Thank you. Okay, sorry. What I was saying is that the first step, as Maria mentioned, is to have post-quantum cryptography. What is that? It’s that the algorithms that we have today, like RSA, AES, etc., are not quantum resistant. That means that when you send a WhatsApp message, for example, from your phone to another phone, you can see that the message is encrypted from end to end, right? But for the most powerful classical computer nowadays, it will take like 200,000 years to decrypt any WhatsApp message. But for a quantum computer, it will be so rapid, like seconds, right? So with the post-quantum cryptography algorithms, we are having a way that quantum computers will take hundreds of years to also decrypt this new cryptography. So that is like the first step, right? To have post-quantum cryptography algorithms in all the corridors of the internet and ICTs. Then the next step, if I remember from Maria’s presentation, was about the quantum key distribution. It’s a technique that uses the quantum physics to send information in a way that is teleporting information. It’s a property of the quantum physics and in this sense, no one will know your key. So once you exchange this key via the quantum network or the quantum facilities, then you can encrypt your messages with this key in the classical internet. So that is the second step. And the third step is about the quantum internet, that is that everything goes in this new model of the quantum. And the last mile is the quantum internetworking, because with these sensors and with distributed quantum computing, you could have more calculations and you could have a lot more features that use this technology. So now I am heading to Sofia, to introduce the second topic of today, that is the RPKI, and then later in our session we will address some policy questions. So we will return to you, Maria. Sofia, Sofia Silva Berenguer is the RPKI Programme Manager at APNIC. She is specialising in securing internet routing and improving the adoption of cryptographic frameworks across regions. Sofia will delve into the critical role of the RPKI. RPKI is Resource Public Key Infrastructure. It’s a security extension for the internet routing. You know that the internet goes with packets, and these packets are being routed by something that is called autonomous systems. So Sofia will delve into the important critical role of the RPKI, explaining about route origin authorisations and a related concept, route origin validation, that safeguards against route hijacks and misconfigurations. She will also highlight some about regional adoption challenges, ongoing capacity building efforts, and some solutions that are at the IETF, you know, there is a standardisation body that is called the Internet Engineering Task Force, where every protocol you have heard about, HTTP, DNS, FTP, everything was made in that standardisation body. And they are every day having new discussions in mailing lists about different new security extensions. One of those is ASPA, Autonomous System Provider Authorisation, that is looking to secure routing paths. So with that, Sofia, the floor is yours for your presentations. Thank you.

Sofia Silva Berenguer: Thanks so much, Nico, for the introduction, and thanks for having me today. Hello, everyone. I’m connecting from Uruguay today, although I’m normally based in Australia. I’m originally from Uruguay, Semblan. Nicolás and visiting family. So as Nicolás mentioned, I will be talking about securing the internet routing. But I want to start by briefly sharing why we need to do that. So internet is not just a network. Internet is the network of networks in which networks learn where other networks are using the border gateway protocol that Nicolás mentioned, BGP. So basically networks exchange BGP announcements where they tell each other you can reach these prefixes through me. But the thing is that this protocol was designed under the assumption of trust. Back in the day when the internet started, everyone knew each other. They could trust what everyone else was saying. But then in the 80s when the internet was open to the commercial sector and it started growing exponentially, this assumption of trust didn’t work that well anymore. And it started to become clear that security needed to be addressed in some way. The problem was that the internet was already working. We could not just replace the routing protocol with a new one. So new layers had to be built on top of existing protocols. And RPKI was one of those layers to add security. So I will be talking a bit about RPKI today. So you may have heard about route hijacks. And that’s one of the incidents that can happen in the internet nowadays. In particular, back in 2008, there was a big incident where the Pakistan government instructed Pakistan Telecom to not allow traffic towards YouTube. And in trying to do that, there was an accident in the configuration where routes were leaked and went beyond Pakistan. The idea was to keep that. local but it went outside of the Pakistan borders and it cost an availability of YouTube for a little while. And that is just one big incident that made the news, but there have been other incidents that have been quite big and could have been avoided. So sometimes those incidents are malicious, are a proper attack, but in some cases they are what we call fat fingers. So it could be someone that mistyped something, for example, or in the case of YouTube and Pakistan Telecom was an error in configuration. And so I will be talking a bit more about adoption in a moment in a couple of my slides but the good news is that incidents like that that happened in 2008, there were a few back then that made the news, and were quite big. And we hear less and less about those incidents in the news, and that’s a good thing. So I’ll tell you more why we’re hearing more about that. So as I mentioned RPKI is that like layer of security that has been added on top of BGP, and how that works is that RPKI allows network operators to make statements of what are the routing intentions. In terms of what is the origin autonomous system that is allowed to originate the prefixes that they are responsible for. And that is in the case of route origin authorization sorry I kind of skipped about. RPKI in general allows for statements on routing intentions that are cryptographically verifiable. And the most popular the most popular type of object. Nowadays, is route origin authorizations that is to authorize a specific origin as to originate a set of prefixes. And that is one side of RPKI that is like creation of ROAS allows to to make those statements. But then on the other side, someone needs to use that information. 
And so on the other side, what we call route origin validation is using the information in ROAS to decide what to do about BGP announcements. So what my very simple diagram in this slide is trying to show here is that when a router, that black thing in the middle, receives a BGP announcement, based on what they see in the RPKI system, based on the RPKI data, they can decide whether to use that BGP announcement to create a new entry in the routing table and learn maybe a new path, or if they just ignore it and discard that BGP announcement. So where are we on this journey to securing the internet routing? ROAS, this particular object type that as I mentioned is the most popular nowadays. It was standardized more than 10 years ago. And at first, like any technology, it took a little while for it to start being used. But as you can see in the last five years or so, it has been more quickly being used. And these charts in particular are from NIST, from the US government. And it shows the percentage of unique prefix origin pair that are covered by ROAS. And you can see that for IPv4 and for IPv6, we are in a very similar situation right now where it’s 54% for IPv4, it’s 60% for IPv6. But as I mentioned, creating ROAS is just one side of RPKI. The other side is using that information to do validation. And this is where it gets a bit tricky to answer the question where we are on the journey. And actually, I recently saw an article from RIPE Labs, the blog from one of the regional internet registries that was talking about IPv6 adoption. I will be not talking about IPv6 today, but it mentioned Schrodinger’s cat and how IPv6 exists in these two states at the same time. And I feel that it’s very similar with RPKI, depending on who you ask. Some people may tell you, adoption of RPKI has been a success. Recently, there was an article from Job Snyder’s who is very active in the technical community. He works for Fastly and was describing an incident that was kind of similar to the YouTube versus Pakistan telecom incident that I mentioned, but this time the incident didn’t make the news. And that is because there was no real consequence or bad consequence of that incident because of RPKI. So in his article, he thinks that RPKI adoption is a success and this is proof that RPKI works. But I’ve also seen presentations, for example, Jeff Houston, who some of you may have heard of, just recently presented about RPKI and DNSSEC. And from his perspective, he sometimes uses the expression even market failure. He believes RPKI should have been adopted much more quickly. So again, there’s different measurement projects. So depending on where you look, you may find different stats and depending on who you ask, the perception on whether we are at a good level of adoption or not may change. It’s a bit subjective, but one of the projects is RoVista. And I like this project because they have an academic paper. So if you go to that URL of the project, you can check the methodology. There’s also particular challenges on how to measure RoV. I feel with other technologies that stop attacks or that. mitigate risk, it’s hard to measure things that don’t happen. So I will not go into the technical details on how this is measured, but there are challenges on how to measure route origin validation. So according to this particular methodology, one of the charts on my slide here, the one on the left, shows the percentage of autonomous systems that are protected by route origin validation. 
And they split this into partially protected and fully protected, because network operators may decide to do route origin validation on some of their interfaces and not all of them. So partially protected is when there’s at least one interface where they do route origin validation. And you can see that that number, when I got this chart, and it was just a few days ago, was around 90%. So that’s pretty good. But if you look at fully protected, where all the interfaces are doing route origin validation, that’s just a bit below 25%. I also included a chart, I will not go into the detail of how economies are doing comparing to each other. But I thought it was interesting that Rovista also defines this ROV score. And in this particular chart, they do kind of a weighted average based on the cone size. So it’s based on the customers and customers of customers that an autonomous system has. And so you can see how different economies are at different stages of deployment of this. As Nicolas mentioned, there’s also ASPA. So I wanted to briefly touch on, as I said, route origin authorizations. They prevent some types of attack, but it’s just based on the origin autonomous system used on a BGP announcement. But in order to protect the rest of the path, there’s a new object type that is being discussed in the ITF. That also, thanks to your introduction, Nico, people know now that the ITF is the body that standardized protocols in the Internet. So there is a discussion that is actually has made a lot. of progress, and it’s quite close to being completed and ASPA becoming a standard, but it’s still being discussed. ASPA stands for Autonomous System Provider Authorization, should soon become a standard. And it has already been implemented in a way. So I included a couple of links, if anyone is interested. There was an article about a first route leak that was prevented by ASPA. And also earlier this year, Hurricane Electric announced that they already support ASPA. As I mentioned, depending on who you ask, you may be told that RPKI is going great. Some people think that adoption should have been faster. And I wanted to touch on what are some challenges of, in particular, route original authorizations and validation adoption. As we mentioned, there’s the signing part, creating ROAS, the validating part, ROV. And there is a concept in social sciences that I think may help understand part of the challenge for adoption. And it’s that for RPKI to provide maximum benefit

to the internet, to everyone, we need each autonomous system in the internet to do their part. We need each autonomous system to create ROAS for all their space, but also to start doing route origin validation. And at some point, there was a bit of a chicken and egg situation, where if there’s not enough ROAS out there, why would I do validation? But also the other way around, why would I create ROAS if no one is doing route origin validation? I personally believe we are past that point, as you have seen from the statistics. I think there is enough level of adoption that there should be more motivation nowadays to create ROAS and to do validation. But also a bit of a challenge that I’ve heard sometimes is that technical people do understand the importance of this. But when non-technical decision makers are involved, it may be hard sometimes to justify the work required to implement best practices because sometimes the commercial benefit is not immediate. And to that, what I want to say is that we need to keep in mind that by implementing best practices and not just RPKI, but best practices in general, what we’re basically doing is preventing reputational damage. So that should be enough for justification. I know I’m running out of time. So I’ll just try to pick up the pace, Nico, sorry, to try to keep on time. Because I am the program manager for the NRO RPKI program, I wanted to briefly touch on what we do to encourage adoption of RPKI. So first, very generally some approaches to encouraging adoption, one is providing support. So by raising awareness, building capacity, engaging with organizations and those that are responsible for implementations, working on system improvements is a way that we can encourage adoption. And then there’s also two big approaches that you may have heard of that is based on reputation. There’s an example of MANRS, which is the Mutually Agreed Norms for Routing Security, where there’s different aspects of routing security that are described as best practices and network operators can subscribe to MANRS and then become part of kind of this ranking on how much they implement those best practices.

Sofia Silva Berenguer: But there are also regulation-based approaches. You may have heard that earlier this year the United States, which is the big example, published a roadmap to enhancing internet routing security, by which governmental agencies are now mandated to create ROAs for their space and to start doing route origin validation. And there is a similar example from Finland. So, as I mentioned, I am directly involved with the Regional Internet Registries. I work for the NRO, the Number Resource Organization, which brings together the five RIRs. What the RIRs do to support RPKI adoption is organize training events. They have e-learning platforms where they help with the capacity building side of things. They engage with member organizations, with governments and other entities, to support them in the adoption of RPKI. And we recently launched, actually just in January this year, the RPKI program that I am the program manager for, to create more consistency across the RIRs. Because each RIR is an independent organization that has implemented RPKI in their own way, it has become more and more important strategically to create more consistency among the five of them. A geographically relevant example is the RIPE NCC, the RIR that covers this part of the world, which in 2023 worked closely with the Saudi Arabian government, organizing workshops both for decision makers and for technical people. That showed an immediate increase in the uptake of RPKI, so I think it's a good example of how we support RPKI adoption. As I mentioned, I'm the program manager for the NRO RPKI program, and what we want to do is bring more consistency to the RPKI implementations of the five RIRs. But most importantly, we want to create a space for more structured coordination and collaboration. Historically the RIRs do coordinate and collaborate, but for RPKI in particular we wanted to make this more structured, with clear priorities. We have some specific objectives that we want to achieve in 2025, and I've left a couple of links there so that if you want to learn more about the program or get in touch, you can do that. So, in bringing my presentation to a close and trying to connect with the previous topic, which is quantum: I am no expert in quantum, but here is my reflection. As I mentioned, the statements that we produce through RPKI rely on cryptography; anyone can validate them cryptographically. And as quantum computing represents a disruptive force that could undermine the current cryptographic standards, RPKI may be affected. So my question for reflection is whether the cryptographic algorithms used by RPKI today could eventually be replaced once suitable post-quantum algorithms are standardized. I'll leave the question out there; we can come back to it in the discussion, I guess. Thanks everyone for your time, and thanks again for having me today.
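For the RoVista-style ROV score Sofia mentions earlier, where economies are compared using a weighted average based on customer cone size, a toy calculation could look like the sketch below. The AS numbers, cone sizes and protection values are invented, and RoVista's actual methodology is more involved; the sketch only conveys the idea of weighting each network by how many customers sit behind it.

```python
def weighted_rov_score(networks):
    """Toy, cone-weighted ROV score for an economy: each AS contributes its
    ROV status (1.0 fully protected, 0.5 partially, 0.0 none), weighted by
    the size of its customer cone (its customers and customers of customers)."""
    total = sum(cone for _, cone, _ in networks)
    return sum(cone * status for _, cone, status in networks) / total if total else 0.0

# (ASN, customer cone size, ROV status) -- hypothetical values
economy = [
    (64500, 120, 1.0),   # large transit network, ROV on all interfaces
    (64501,  15, 0.5),   # mid-size network, ROV on some interfaces only
    (64502,   1, 0.0),   # small stub network, no ROV
]
print(f"ROV score: {weighted_rov_score(economy):.2f}")   # about 0.94
```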

Nicolas Fiumarelli: Sofia, thank you so much for your contributions. RPKI can sound very strange to non-technical persons, but basically this is a security extension of the Internet, right? And one thing that I would like to highlight here is that these technologies, the security extensions, are in general optional, right? The operator needs to deploy this technology, and the operator also needs to deploy the validation of this technology, the routing validators. So there are several reasons for that, right? While we are seeing enforceable mechanisms, like Sofia mentioned in the USA, with different countries mandating the deployment of these security extensions, you know, there is a topic that is very much highlighted in the Internet community, which is fragmentation, right? So what happens if you mandate that everyone needs to have RPKI? You could be disconnecting networks in some manner, for the ones that do not implement RPKI. But on the other side, you will be exposed to hijacking and route hijacks, so we need to have a balance. And I think these approaches we are seeing in different countries, and the Saudi Arabia example that Sofia mentioned, are some examples of ways to go. So now, continuing with RPKI, we have Wataru Ohgai, who is from JPNIC in Japan. We met Wataru last year when we went to the Global IGF in Japan. He is a representative from JPNIC with extensive expertise in RPKI operations, and he has been instrumental in advancing route origin validation adoption in the Asia-Pacific region. Wataru will present on the global movement of policy and operations in RPKI, discussing the milestone of global IPv4 ROA coverage, and he will address different aspects of RPKI, also saying more about post-quantum cryptography and RPKI. But yes, let's talk more, Wataru, about how to deploy RPKI, what the strategies for deploying it are, and this global movement. So the floor is yours, Wataru.

Wataru Ohgai: Thank you, Nico, for the introduction, and hi everyone. My name is Wataru Ohgai from the Japan Network Information Center, JPNIC. From me today, let me talk about the global movement of policy and operation in the RPKI world in 2024. For those who may not know, JPNIC is the national internet registry for Japan, which is a kind of national version of an RIR. We are not the ones operating the .jp domain name; instead we manage IP addresses and AS numbers in Japan, and of course we run an RPKI repository based on the registry database. It's already December, so let's first look back on what happened in the RPKI world this year. The biggest news was that global IPv4 ROA coverage exceeded 50% in the NIST RPKI Monitor and other global measurement platforms. This was the first time in history that it exceeded 50%: more than half of the global network is covered by ROAs. IPv6 already reached that milestone a few years ago. So that means over half of the internet is now ready to be protected by RPKI. This is not just a wonderful achievement; it also means that we are already in the next stage, ROV. Regardless of whether you are tier one or not, applying ROV in your network is becoming no longer optional. Over half of the world is ready to go, and there is no reason anyone can stop it. The stage of "maybe" or "considering" ROV is already past. Why am I so sure? Let me explain some background on the next slide. The first one, and it's also a big step for the BGP world, is that one of the tier one network operators, Google, is now phasing out route server based peering at IXes, Internet Exchange Points, and moving toward a bilateral direct peering strategy. This requires many networks who have been peering with Google via route servers to shift their peering plans, and it also requires them to be RPKI ready. In Google's peering policy there is no explicit sentence about that, but they apparently require ROAs for any direct bilateral peering network as the best current practice, with the result that everyone who peers with Google has to be RPKI ready. This could be an indication that tier one networks like Google are now gearing up for full-scale ROV, and of course Google and other big parties are already studying ROV in their networks. The second background is national security. We have talked about the importance of RPKI in the private sector so far. The same thing can also be applied to governments that want to protect the whole environment of their country. The United States is seriously considering making ROV implementation mandatory, not only for federal government organizations but also for big companies in the country, the business sector, for national security. The U.S. is not the only country; some other countries also presented their interest in ROV this year. Thus, some day, whether we like it or not, some countries will force domestic companies to do ROV. But clearly, Internet people don't like governments deciding what we do or don't do for security, so we should do this with our own hands before they force us. So far I've talked about what happened and what is going on. Now let's see what will happen, or what could happen, in the near future. The first bullet point is the already-decided future, as I said: in the future, implementing ROV will be just one of the normal operations, nothing special. Then the not-found routes, which have no ROA associated with them, the non-RPKI-ready routes, will vanish from the global routing table.
Of course, not to mention the invalid routes vanishing from the table. In the second bullet, there will be operational challenges. As you may know, ROV is in fact not a precisely defined operation in the way a ROA is a defined object. If you say you are doing ROV, you can handle invalid routes by rejecting them, or by giving them a lower local preference value so that they are less likely to be used for routing. That is an organizational decision, not a predefined one. This year we, JPNIC, in collaboration with the Japanese government authority and experts from the private and academic sectors, published operational guidelines for both corporate executives and engineers, with a command-by-command reference, which I hope contributes to this situation; but it is still your choice. Another concern is SLURM. SLURM is a way to intentionally ignore some ROV results based on other trust. Technically, if someone issued a wrong ROA by mistake, and you notice that the ROA and the actual routes coming in BGP do not match, you can just apply SLURM to ignore that operational failure. But how do you know whether it is just an operational failure? How can you tell such an incident apart from a malicious attack, or even an intended change to the network? We already have SLURM as a technical mechanism, but we are still in need of the operational policy. We are also facing a trust issue in the ROA itself. Route origin validation is done based on what is written in the ROA, so trust in the ROA is a considerably big issue. This year, one of the large network operators in the world, located in Spain, which is in the RIPE region, had the online account they use for creating and modifying ROAs taken over by a bad actor. And that bad actor modified their ROAs so that the original routes advertised in BGP became invalid in the ROV result. The recovery took a few hours, and the rest of the world was forced to trust the forged ROAs. The company changed their password and recreated the genuine ROAs after the incident. RIPE also responded quickly: within a few months they introduced two-factor authentication on their platform, and passkeys, the newer authentication methodology, for all customer accounts, to prevent further attacks. As I said on the previous slides, we have SLURM as a technology, but to handle this type of incident from the viewpoint of the non-victimized network operators, we still don't know when to apply SLURM. The current answer to this scenario is double-checking the information on several community mailing lists. However, I believe more sophisticated ways could evolve. Let's move on to the brighter future now. There is another technology based on RPKI, which is ASPA. Current ROA and ROV are basically just a matching of the IP address prefix and its originating AS. But as many of you may know, the internet, BGP, consists of exchanging route information, so there is a certain path that traffic should take: through this network, then through that one, and so on. And ROA and ROV are not sufficient to validate that. Currently ASPA has finished most of the standardization process in the IETF, and we are looking toward implementation and actual operation. Post-quantum cryptography is another topic of this session. Yes, we are talking about post-quantum cryptography implementation in the RPKI world. The current situation in PQC is that some think they can adapt after an actual compromise of ROAs or other algorithms by quantum computers happens, and others think they need to implement PQC before something is broken.
One key, I think, to ending this binary trade-off is to implement quantum-safe RPKI today, before the entire world is done with ROV implementation. So this is my last slide. The ultimate question for me is: who can you trust? Why are they trustworthy? What mechanisms establish the necessary trust? It is all about trust: cryptography, RPKI, PQC, the internet, everything is about trust. Policymakers and engineers are now required to collaborate to design flexible policies as a way to answer these questions. Thank you, and I'm giving it back to you, Nico.
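The local-exception mechanism Wataru refers to is SLURM (RFC 8416), which lets an operator locally override validated RPKI data, for example while a forged-ROA incident like the one he describes is being cleaned up. The sketch below, in Python, shows roughly what such a file looks like and how a relying party might apply a prefix filter; the prefixes and AS numbers are hypothetical, and real validators consume this as a JSON file rather than a Python dict.

```python
import ipaddress

# A SLURM (RFC 8416) local exception file, written here as a Python dict.
# All prefixes and AS numbers are hypothetical documentation values.
slurm = {
    "slurmVersion": 1,
    "validationOutputFilters": {
        # Ignore any validated ROA payload for this prefix, e.g. while a
        # forged ROA created through a hijacked registry account is cleaned up.
        "prefixFilters": [
            {"prefix": "192.0.2.0/24",
             "comment": "ignore suspect ROAs for this prefix during the incident"}
        ],
        "bgpsecFilters": [],
    },
    "locallyAddedAssertions": {
        # Locally assert the origin the operator actually trusts.
        "prefixAssertions": [
            {"asn": 64500, "prefix": "192.0.2.0/24", "maxPrefixLength": 24,
             "comment": "locally trusted genuine origin"}
        ],
        "bgpsecAssertions": [],
    },
}

def apply_prefix_filters(vrps, slurm_file):
    """Drop validated ROA payloads (origin ASN, prefix, maxLength) that fall
    under a SLURM prefix filter. This is a simplified view of what a relying
    party does before handing payloads to routers."""
    filters = [ipaddress.ip_network(f["prefix"])
               for f in slurm_file["validationOutputFilters"]["prefixFilters"]
               if "prefix" in f]
    return [v for v in vrps
            if not any(ipaddress.ip_network(v[1]).subnet_of(flt) for flt in filters)]

vrps = [(64511, "192.0.2.0/24", 24), (64500, "198.51.100.0/24", 24)]
print(apply_prefix_filters(vrps, slurm))   # the suspect 192.0.2.0/24 payload is dropped
```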

Nicolas Fiumarelli: Thank you so much, Wataru. You made very interesting points. I was wondering how this could happen, right? You have a password for accessing your RIR platform to create these route origin authorizations, and once ROV is deployed, the validators will go and validate your routes. This is a huge problem, because what happens is that someone tampers with your login credentials and then changes, as you said, the origin ASN that is intended to be the origin. The clear effect here is that you will be out of the internet: your entire network will be fragmented from the rest of the internet, and that will be a very complicated thing. So yes, I think that managing the credentials of RPKI systems is something very important. It is also interesting that you mentioned that there is a quantum-safe RPKI path. In my opinion, that needs to happen before everything breaks on the internet, when the quantum computer finally arrives. So those are some of the challenges that we have for the future. Now, heading to the other part of our session today: sadly, one of our speakers, Sorina Sefa from UNECA, couldn't make it, but Athanase, who is with us here, will cover her part. The idea is to focus now, after we have established a basis about RPKI and about quantum computing, on how to integrate this into governance frameworks across the different regions. So Athanase will briefly talk to us about some advanced security measures, and how these can be integrated into multi-stakeholder efforts or governance frameworks, particularly in the Global South. So, Athanase, how can we address these challenges in harmonising policies across diverse regions? The floor is yours, please.

Athanase Bahizire: Thank you so much, Nico. Yeah, thank you so much, Nico. This is a very important topic, particularly in Africa, because the capacity of different regions is different, and with the advance of quantum technologies, you see, we actually need enough resources to, say, host a quantum computer, resources which some of the time we don't have; actually, until now Africa does not have a single quantum computer. But the idea is that we should be proactive, and as Wataru was saying, we don't have to wait until we have the full capacity to start leveraging these technologies. I'm going to share some of the great things that are being done in Africa in order to embrace these emerging technologies. One of the things, when you're talking about the measures that are being put in place: the UN Economic Commission for Africa, the ECA, has brought a programme to build the capacity of different governments in Africa when it comes to security measures. In these security measures we cover things like DNSSEC, which is very important, and how we can secure systems, and at this level we are building the capacity of the different governments so they can understand these technologies. But we didn't manage to get to very technical aspects such as RPKI and quantum encryption, which I believe we should incorporate into these capacity building initiatives. We also now have MANRS, and what is happening with MANRS is that it is voluntary, so there is no obligatory measure telling ISPs that they need to implement MANRS, which is becoming challenging. We have tried to discuss with the ISPs, and you see, they will tell you that deploying these technologies requires additional resources and additional technical staff, and that some of the time they can't see the urgent need for it now, so they put it off for later. But then again, I'm emphasising why we need to be proactive and start thinking, having an idea of the future, when we are developing our solutions and when we are securing our systems. When it comes to the harmonisation of policies, many African countries are developing their cybersecurity policies and legislation, but not all of them actually have policies that are ready. In our country we have a bill that is being examined in the parliament, but we haven't yet seen very much involvement of the technical community in the country in the development of this. We have a framework at the African level where the technical community has had enough space to influence what goes into the legislation, but at the country level we haven't seen as much involvement of the technical community, the community that actually has the capacity and the technical understanding. So that is why I believe it's very important to harmonize what is being done in-country with the different regional aspects at the African level, and with the different protocols that are being adopted, whether by the IETF or at other levels. But then again, our big challenge is resources for our technical community to be able to keep pace with the advancement of these technologies, the advancement of cryptography. We need capacity, and some of the time we don't have the capacity.
So we are really calling for more investment in capacity for the technical community, to be able to strengthen our countries' strategies, and also for collaboration between the governments and legislators that are putting in place cybersecurity strategies and the technical community and the various stakeholders. There is also one thing I wanted to mention here when it comes to the security measures we have now. We tend not to take cryptography very seriously. As in the examples I was giving, putting in place two-factor authentication on your databases and some other very simple best practices: we are not adopting them. We are waiting until it is mandatory, or required by regulation, to adopt them, which is not really a good practice and does not do much to secure our systems. So I believe it's time for us, with our low resources, as we are still building our resources, to embrace the benefits that these technologies are bringing to us, to embrace the best practices in security, and that will be really very helpful. And one other thing you asked, Nico, is about capacity building to ensure equitable adoption of all these security protocols. There are some organizations working on capacity building in Africa; for example, the Internet Society has done a lot of workshops with policy makers, with IXP operators and ISPs, around MANRS, the Mutually Agreed Norms for Routing Security. And what is happening here is that we have seen quite an increase in the adoption of MANRS after these capacity building initiatives. But I believe we need to do more. These capacity building initiatives sometimes don't reach those communities with very small operators, operators who have very low capacity and who manage very small networks. So I believe we need to increase these capacity building initiatives and reach the various actors who are involved in deploying this. And that is where the different stakeholders come into play. If the IETF has programs like this to build capacity, or if other organizations have these initiatives, I believe Africa is very open to embrace them and collaborate, so that we all move together with these technologies that are coming very fast. I'm going to stop here, Nico, and hand back to you. Thank you.

Nicolas Fiumarelli: Thank you so much, Athanase. You raised a lot of different points, very important ones. One thing to mention from my side is that in August this year, the National Institute of Standards and Technology finalized three standards for post-quantum cryptography, based on CRYSTALS-Dilithium, CRYSTALS-Kyber, and SPHINCS+. Sorry for the technical words, but these algorithms are already prepared to be deployed. There are some challenges, like key lengths that are a little longer than in previous algorithms such as RSA and AES. But as the spokesperson from NIST said, there is no need to wait: we need to start deploying these standards now, to be more on the proactive side, as Wataru said. And you also mentioned, Athanase, the costs, right? I think that one of the main objectives of the NRO RPKI program, with these unified global platforms or, I don't know, documentation or manuals on how to use these interfaces, sometimes provided by the RIRs or by the NIRs like JPNIC, could be to help people have more capacity building on this. You know that the regional registries hold their regional meetings every year, and they do tutorials about this and everything. But yes, you also mentioned something important about the small and medium operators. These ISPs that serve a small portion of the population in isolated places may be left outside of these efforts, and maybe they are the ones that do not have RPKI ready yet. Another thing that I want to mention is about IoT, right? We also missed one of our speakers today, Shoao, because he had a clash with another session. But what happens with IoT, the Internet of Things, is that these constrained devices sometimes have constraints in battery, constraints in energy, and also in memory. So these devices cannot rapidly implement post-quantum cryptography, which demands more computational power to do the encryption. So the IETF, the standardization body, is looking at lightweight protocols that could be post-quantum resistant. I mean, that is something we need to take a look at, because there are millions and billions of IoT devices coming. And if these devices are not fully protected, we will have a very big problem, right? So that is another thing to look at: how to have a hybrid approach to post-quantum cryptography in IoT. And now comes the part of the session where we open the floor for questions to the on-site and online speakers. Also, panelists, if you have something in mind that you want to say after all this conversation, please do; Athanase will be looking for hands online and also here on site. We have at least 15 minutes for the Q&A part. I will give the floor to Athanase to moderate this part of the Q&A, so we will be receiving questions and our panelists will be responding. Yes, thank you so much. We have one question already in the room. So we are

Audience: going to start with one. Good afternoon. My name is Wout de Natris. I'm a consultant in the Netherlands, but I am also here at the IGF as the consultant for the Dynamic Coalition on Internet Standards, Security and Safety. And what we've been doing in the past, sort of, and are going to do in the near future encompasses everything that we heard today. And my question to the panelists, after I finish, is: how can we, as a Dynamic Coalition, actually help you with the situation that you have been describing? Last year at the IGF, we presented a report on IoT security by design, and Nico was the project lead for that, as our working group chair for that topic. But we're going to start a new iteration this year and present it in Lillestrøm, near Oslo, in June 2025, which combines post-quantum cryptography, and the state that it is in at the moment, with IoT security. But we're also going to look into the societal implications when things go wrong, the political implications when things go wrong, and a bit more that the people who are leading it are better at voicing than me as coordinator. But the fact is that we've been looking at this comprehensively. And my final comment is on RPKI: with thanks to ICANN and the RIPE NCC, we presented here at the IGF a document that helps technical people convince their bosses to deploy DNSSEC and RPKI, and by default all other internet standards, by providing them with arguments that are not technical, but exactly the sort of arguments that CEOs and CFOs want to hear: what the implications are for a company if you don't have that, the implications for your reputation, the implications for your customers or your own employees. So that is what we produced this year. But what I would like to hear is what you can do with us, because we invite you to join. You can go to our website, is3coalition.org, and we're going to ask Nico to put it in the chat for me, please. But also, what could we do for you? Because we want to be as relevant as possible. So that's an invitation, but also perhaps some of the panelists can reflect on it, and from there we can take that with us. Thank you. Anyone who wants to comment on that? My panelists, please.

Nicolas Fiumarelli: Yes, Sofia.

Sofia Silva Berenguer: Thanks for that comment. I guess from my side, what I wanted to say is that, in terms of answering the question of what can be done, the final question of my presentation, the invitation for reflection, can also be extended into an invitation for some more work, as we discussed.

Sofia Silva Berenguer: The IETF is a space where internet standards are developed. Currently there is an RFC describing this, I think it's called the Algorithm Agility RFC, but it's quite old and it has never been implemented. So there is a kind of theoretical framework for replacing parts of RPKI, but it has never been put into practice, and some people believe it wouldn't really work. So there is room there for anyone who wants to become more involved in the IETF, or who is already involved in the IETF but wants to be more involved in this space, to work on how, in practice, the cryptography in RPKI could be replaced with something that is post-quantum safe. So I guess that's my only comment. I'm no expert in that space, so I'm not the person to help with the actual work, but I'm just pointing out an opportunity for work that anyone interested could get involved in. Thank you.
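One migration pattern often discussed alongside the algorithm agility question Sofia raises, and the NIST post-quantum standards Nicolas mentioned earlier, is hybrid key establishment: a classical and a post-quantum shared secret are combined so that the result stays secure as long as either one holds. The sketch below is only illustrative; the two key-exchange functions are hypothetical placeholders, not a real library API, and only the combiner step (an HMAC-based derivation) is concrete.

```python
import hashlib
import hmac

def combine_shared_secrets(classical_secret: bytes, pq_secret: bytes,
                           label: bytes = b"hybrid-key-v1") -> bytes:
    """Derive one session key from a classical and a post-quantum shared secret
    (a simple HKDF-extract-style combiner): an attacker has to break BOTH
    key exchanges to recover the derived key."""
    return hmac.new(label, classical_secret + pq_secret, hashlib.sha256).digest()

# The two functions below are hypothetical placeholders, not a real library API.
# In practice the classical part could be an X25519 exchange and the
# post-quantum part an ML-KEM (Kyber) encapsulation.
def classical_key_exchange() -> bytes:   # placeholder only
    return b"\x01" * 32

def pq_key_encapsulation() -> bytes:     # placeholder only
    return b"\x02" * 32

session_key = combine_shared_secrets(classical_key_exchange(), pq_key_encapsulation())
print(session_key.hex())
```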

Athanase Bahizire: Thank you. And actually, thank you for mentioning the work you're doing at the IS3C. We believe more people need to hear about the work you do, and for us in Africa, we believe your resources might be very helpful. Thank you for sharing these resources. We have a comment in the chat from Mike Nelson: after Google's announcement of the Willow quantum computing chip, there was speculation that someday soon Google could use the chip to break the obsolete encryption used to protect the huge stash of Bitcoin created and controlled by Satoshi. He is wondering whether he is the only one fascinated by this possibility, or whether it is that important. Does anyone want to comment on this?

Nicolas Fiumarelli: Well, I think… we didn't include blockchain and Bitcoin in this session and panel, but blockchain is at risk as well. Because if quantum computing progresses, then if you have the public key of a Bitcoin wallet, you will be able to get the private key almost instantly. And that means you will have the money in that wallet. But yes, we are talking about the near future, right? Every day we see a new quantum development: in superconductors, in the different parts of the quantum chain. There was some news recently about quantum annealing, which is a new technique, so you don't need millions of qubits to perform the quantum computation; it can now be done with some thousands of qubits. And Google is close, right? They have this Google Sycamore, a 1,000-qubit machine, already happening. They cannot maintain the state of the photons for a long time, but they are close to closing these gaps. In my opinion, when people say 10 to 15 years, for me it's more like five years. I think Maria showed this very well in her graphics and statistics about this development. I'll just leave it at that, to answer the blockchain question. Please, if you have another question, go ahead; I'll return the floor to Athanase. Yes, thank you, Nico. If you have any other question in the chat or in the Zoom room, you can raise your hand and we will give you the floor. In the room, do we have a question? No questions for now. There is a comment on the Bitcoin case saying the advancements are important and signify what is to come.

Athanase Bahizire: But at this point, it is mostly hype; there is little practical that they can achieve right now. But yes, as Nico said, maybe not right now, but in the near future we may see a big change in this. Yug, do you want to take the floor and comment on this one?

Audience: Yug, can you hear me? Yes, I see you are unmuted. Sorry, Yug needs permission to speak. Hi, can you hear me now? Yes, sure. Yeah, so quantum is a very new technology, and as is the case with any new technology,

Yug Desai: there's going to be a lot of hype in addition to the actual technological advancements that are happening. And it is important to separate the hype from what is real, because that is where the policy interventions will come from. In the case of a lot of the advancements that these big companies are making, they have to hype it up because a lot of investment is going into these areas. So it is important that we take a measured approach when we see these announcements, focus on what the practical implications are, and take action accordingly. So, we are getting to the conclusion of this session. Yeah, okay, Wout, you can have a comment. Yes, Wout de Natris again, of IS3C. I have a question for Maria Luque that I was just reminded of myself. You are doing a lot of work on both quantum and quantum computing, and we are going to do that as IS3C in the coming six months. Where do we supplement each other, or are we perhaps doing double work? What is your impression, and how can we potentially cooperate in the coming months? Thank you. I don't see Maria online anymore, I think, so I will relay this question to her. But I will let our rapporteur, Yug, also summarise the key takeaways from our session, because we are running out of time. Yug will highlight the topics and actionable insights, and then we will conclude. Please, Yug, you now have permission to unmute yourself. Okay, yes, thank you, Nicolas. There were great insights from our speakers, which I'll try to quickly summarise so that we have good takeaways to think about from this session. So Maria started with the revolutionizing power of quantum technology, especially in the fields of communication and sensing, which are relatively more mature technologies and have great potential for making precise measurements of, for instance, the electromagnetic field... Hi, Nicolas, can you hear me? Okay. Yeah. So I'll start again. So Maria told us about the revolutionizing power of quantum technologies and the more mature fields of quantum communication and quantum sensing, and how they promise to transform industries like healthcare, defense, and military infrastructure security. The critical challenge, of course, lies in reaching cryptographically relevant quantum computers, which would threaten the current security frameworks that we have, especially in deployed industrial environments. The risk is particularly acute as we begin to collect data using these quantum instruments and use them to advance AI and existing knowledge models. The global response is already underway: governments are already providing information on how to migrate to quantum-secure technologies, and hyperscalers like Amazon and Google are already implementing quantum security in their platforms. A lot of effort is also underway to make sure that the new technologies can integrate with existing technologies, so that you don't have to create everything anew. Investments in the quantum space are also increasing year by year, and this is exactly the reason why we cannot wait in moving towards quantum-secure technologies. Then we also had a very good discussion on RPKI, and how the BGP protocol was created with an assumption of trust, but we don't really live in that reality.
So RPKI was created as a security layer on top of BGP, and it has two key components, ROAs and ROV, but adoption has been heterogeneous, not homogeneous, across the world. And depending on who you ask, they will tell you whether or not it is having the desired impact. The main challenge stems from the collective action problem, where networks need widespread adoption to see the benefits, creating a sort of chicken-and-egg situation. Additionally, non-technical decision makers often struggle to justify the investment that is needed for this transition. However, adopting RPKI is absolutely crucial, and many tier one ISPs are making it their priority; soon it will become important to have RPKI deployed in order to connect to some of these networks. RPKI is also under threat from quantum computing, because it uses cryptography that is vulnerable to potentially cryptographically relevant quantum computers. So we will also need to work on making sure that RPKI becomes quantum safe in the future. I also want to highlight what Athanase mentioned about the situation in Africa, and how capacity building is really important when we are trying to ensure security in this age of emerging internet technologies that pose newer risks. The technical communities in Africa and the Global South alike need more resources to combat these emerging threats, and also more capacity building, to make sure that the networks of the future remain secure. I will end there.

Nicolas Fiumarelli: Thank you so much, Yug. Well, we are just in time to thank our distinguished panelists for the invaluable contributions they have made, as well as all of you, both on site and online. I think with today's session we demonstrated the critical importance of collaboration in addressing the different challenges and opportunities presented by these three technologies: quantum encryption, RPKI, and IoT security. I think each of you will bring something home from all these learnings. By exploring this intersection of technologies, I think we can be better prepared to secure the digital ecosystems of tomorrow. So I hope you enjoy the rest of IGF 2024. Thank you so much. Applause.


Maria Luque

Speech speed: 130 words per minute
Speech length: 2094 words
Speech time: 960 seconds

Quantum sensing and communications are maturing rapidly

Explanation

Quantum sensing and communications technologies are advancing quickly and becoming more mature. These technologies have the potential to transform various industries and improve measurement capabilities.

Evidence

Examples given include optimizing power grids, enhancing battery performance, and improving leak detection in pipelines using quantum sensors.

Major Discussion Point

Quantum Technologies and Cybersecurity


Nicolas Fiumarelli

Speech speed: 0 words per minute
Speech length: 0 words
Speech time: 1 second

Post-quantum cryptography standards need to be implemented now

Explanation

There is an urgent need to start implementing post-quantum cryptography standards. This proactive approach is necessary to prepare for the potential threats posed by quantum computing to current cryptographic systems.

Evidence

Mention of NIST finalizing three standards for post-quantum cryptography in August, based on CRYSTALS-Dilithium, CRYSTALS-Kyber, and SPHINCS+.

Major Discussion Point

Quantum Technologies and Cybersecurity

Agreed with

Maria Luque

Sofia Silva Berenguer

Agreed on

Urgent need for post-quantum cryptography implementation

Differed with

Maria Luque

Differed on

Urgency of implementing post-quantum cryptography

IoT devices need lightweight post-quantum cryptography protocols

Explanation

Internet of Things (IoT) devices have constraints in battery, energy, and memory. These limitations make it challenging to implement full post-quantum cryptography, necessitating the development of lightweight protocols.

Evidence

Mention of IETF looking into lightweight protocols that could be post-quantum resistant for IoT devices.

Major Discussion Point

Future Internet Security Measures

Small and medium ISPs may be left behind in RPKI adoption

Explanation

There is a concern that small and medium-sized Internet Service Providers (ISPs) might lag in RPKI adoption. These ISPs, often serving isolated areas, may lack the resources or awareness to implement RPKI.

Major Discussion Point

RPKI Adoption and Challenges


Sofia Silva Berenguer

Speech speed: 0 words per minute
Speech length: 0 words
Speech time: 1 second

Quantum computing poses a threat to current cryptographic systems

Explanation

The development of quantum computers presents a significant risk to existing cryptographic systems. This threat extends to technologies like RPKI that rely on current cryptographic methods.

Evidence

Mention of the need to replace parts of RPKI with post-quantum cryptography.

Major Discussion Point

Quantum Technologies and Cybersecurity

Agreed with

Maria Luque

Nicolas Fiumarelli

Agreed on

Urgent need for post-quantum cryptography implementation

RPKI adoption faces a collective action problem

Explanation

The adoption of RPKI is hindered by a collective action problem. Networks need widespread adoption to see the benefits, creating a chicken-and-egg situation that slows implementation.

Evidence

Reference to the challenge of justifying RPKI implementation to non-technical decision makers due to unclear immediate commercial benefits.

Major Discussion Point

RPKI Adoption and Challenges

Agreed with

Wataru Ohgai

Agreed on

RPKI adoption is crucial for internet security


Athanase Bahizire

Speech speed: 116 words per minute
Speech length: 1142 words
Speech time: 586 seconds

Africa needs more resources and capacity building for quantum security

Explanation

African countries require additional resources and capacity building initiatives to address emerging quantum security challenges. There is a need for more investment in technical expertise and infrastructure.

Evidence

Mention of UN Economic Commission for Africa’s programme to build capacity of governments in security measures, but lacking in advanced topics like RPKI and quantum encryption.

Major Discussion Point

Quantum Technologies and Cybersecurity

Harmonization of cybersecurity policies across regions is needed

Explanation

There is a need for better alignment of cybersecurity policies across different regions. This includes involving technical communities in policy development and ensuring consistency with international standards.

Evidence

Reference to African countries developing cybersecurity policies and legislations, but lacking involvement from the technical community at the country level.

Major Discussion Point

Future Internet Security Measures


Wataru Ohgai

Speech speed: 122 words per minute
Speech length: 1331 words
Speech time: 651 seconds

Global IPv4 ROA coverage has exceeded 50%

Explanation

The global coverage of IPv4 Route Origin Authorizations (ROAs) has surpassed 50%. This milestone indicates significant progress in RPKI adoption and readiness for improved routing security.

Evidence

Reference to NIST RPKI Monitor and other global measurement platforms showing this achievement.

Major Discussion Point

RPKI Adoption and Challenges

Agreed with

Sofia Silva Berenguer

Agreed on

RPKI adoption is crucial for internet security

Tier 1 networks like Google are pushing for RPKI readiness

Explanation

Major tier 1 network operators, such as Google, are actively promoting RPKI readiness. This push is influencing other networks to adopt RPKI to maintain peering relationships.

Evidence

Mention of Google phasing out route server-based peering in Internet Exchange Points and requiring RPKI readiness for direct bilateral peering.

Major Discussion Point

RPKI Adoption and Challenges

Agreed with

Sofia Silva Berenguer

Agreed on

RPKI adoption is crucial for internet security

Quantum-safe RPKI needs to be developed

Explanation

There is a need to develop quantum-safe versions of RPKI to protect against future threats from quantum computing. This development should be prioritized to ensure long-term security of internet routing.

Major Discussion Point

Future Internet Security Measures


Yug Desai

Speech speed: 123 words per minute
Speech length: 830 words
Speech time: 404 seconds

Multistakeholder collaboration is crucial for addressing security challenges

Explanation

Addressing the complex security challenges of the future internet requires collaboration among various stakeholders. This includes technical communities, policymakers, and industry players working together to develop comprehensive solutions.

Major Discussion Point

Future Internet Security Measures

Agreements

Agreement Points

Urgent need for post-quantum cryptography implementation

Maria Luque

Nicolas Fiumarelli

Sofia Silva Berenguer

Post-quantum cryptography standards need to be implemented now

Quantum computing poses a threat to current cryptographic systems

There is a pressing need to implement post-quantum cryptography standards to protect against future quantum computing threats to current cryptographic systems.

RPKI adoption is crucial for internet security

Sofia Silva Berenguer

Wataru Ohgai

RPKI adoption faces a collective action problem

Global IPv4 ROA coverage has exceeded 50%

Tier 1 networks like Google are pushing for RPKI readiness

While RPKI adoption faces challenges, it is crucial for internet security, and progress is being made with major networks pushing for its implementation.

Similar Viewpoints

There is a need for increased collaboration and capacity building, especially in developing regions, to address emerging internet security challenges.

Athanase Bahizire

Yug Desai

Africa needs more resources and capacity building for quantum security

Multistakeholder collaboration is crucial for addressing security challenges

Unexpected Consensus

Immediate action required for quantum-safe technologies

Maria Luque

Nicolas Fiumarelli

Sofia Silva Berenguer

Wataru Ohgai

Post-quantum cryptography standards need to be implemented now

Quantum computing poses a threat to current cryptographic systems

Quantum-safe RPKI needs to be developed

Despite representing different aspects of internet security, all speakers agreed on the urgency of implementing quantum-safe technologies, which is somewhat unexpected given the typically slow pace of adopting new security measures.

Overall Assessment

Summary

The main areas of agreement include the urgent need for post-quantum cryptography implementation, the importance of RPKI adoption for internet security, and the necessity for increased collaboration and capacity building in addressing emerging security challenges.

Consensus level

There is a high level of consensus among the speakers on the urgency of addressing quantum computing threats and improving internet routing security. This strong agreement implies a clear direction for future internet security measures and highlights the need for immediate action in implementing post-quantum cryptography and expanding RPKI adoption.

Differences

Different Viewpoints

Urgency of implementing post-quantum cryptography

Maria Luque

Nicolas Fiumarelli

Quantum-gathered data is needed for all of the knowledge models that we want to advance with artificial intelligence and high-performance computation.

Post-quantum cryptography standards need to be implemented now

While both speakers emphasize the importance of post-quantum cryptography, Maria Luque focuses on the future need for quantum-gathered data, while Nicolas Fiumarelli stresses the immediate necessity to implement post-quantum cryptography standards.

Unexpected Differences

Focus on regional challenges vs. global solutions

Athanase Bahizire

Maria Luque

Africa needs more resources and capacity building for quantum security

Quantum sensing and communications are maturing rapidly

While Maria Luque focuses on the rapid advancement of quantum technologies globally, Athanase Bahizire unexpectedly highlights the specific challenges faced by African countries in terms of resources and capacity building for quantum security. This difference in focus reveals a potential gap between global technological progress and regional readiness.

Overall Assessment

Summary

The main areas of disagreement revolve around the urgency of implementing post-quantum cryptography, the current state and challenges of RPKI adoption, and the focus on global technological advancements versus regional capacity building needs.

Difference level

The level of disagreement among the speakers is moderate. While there is a general consensus on the importance of quantum security and RPKI adoption, the speakers differ in their perspectives on implementation timelines, regional challenges, and the current state of adoption. These differences highlight the complexity of addressing global cybersecurity challenges while considering varying regional capacities and needs. This has implications for developing comprehensive and inclusive strategies for future internet security measures.

Partial Agreements

Partial Agreements

Both speakers agree on the importance of RPKI adoption, but they differ in their assessment of its current state. Sofia Silva Berenguer highlights the challenges in adoption due to the collective action problem, while Wataru Ohgai emphasizes the progress made with global IPv4 ROA coverage exceeding 50%.

Sofia Silva Berenguer

Wataru Ohgai

RPKI adoption faces a collective action problem

Global IPv4 ROA coverage has exceeded 50%

Similar Viewpoints

There is a need for increased collaboration and capacity building, especially in developing regions, to address emerging internet security challenges.

Athanase Bahizire

Yug Desai

Africa needs more resources and capacity building for quantum security

Multistakeholder collaboration is crucial for addressing security challenges

Takeaways

Key Takeaways

Quantum technologies are advancing rapidly and pose both opportunities and threats to cybersecurity

RPKI adoption is progressing but faces challenges, especially for smaller network operators

Post-quantum cryptography standards need to be implemented proactively

Multistakeholder collaboration and capacity building are crucial for addressing emerging security challenges

IoT devices require specialized lightweight post-quantum cryptography solutions

Resolutions and Action Items

Implement post-quantum cryptography standards now rather than waiting

Increase capacity building efforts for RPKI adoption, especially for small and medium ISPs

Develop quantum-safe RPKI protocols

Harmonize cybersecurity policies across regions

Integrate technical community input into national cybersecurity legislation

Unresolved Issues

How to effectively implement quantum-safe cryptography for resource-constrained IoT devices

How to overcome the collective action problem in RPKI adoption

How to ensure equitable adoption of security protocols in regions with limited resources

How to balance mandatory security requirements with avoiding network fragmentation

Suggested Compromises

Adopt a hybrid approach for post-quantum cryptography in IoT devices

Use reputation-based approaches like MANRS alongside regulation to encourage security best practices adoption

Implement RPKI in phases, starting with larger networks and gradually including smaller operators

Thought Provoking Comments

By 2045, the world looks nothing like it does today. Profound transformation has taken place, and all global communications, both terrestrial and space-based, they are somehow integrated through optical networks.

speaker

Maria Luque

reason

This comment paints a vivid picture of a potential future transformed by quantum technologies, challenging participants to think long-term about the implications.

impact

It set the stage for discussing the far-reaching potential of quantum technologies beyond just cryptography, broadening the scope of the conversation.

My message today is that not only a cryptographically relevant quantum computer and its advent is a threat to this future that I just pointed out to you, to how we leverage this data for good, not only Harvest Now and Decrypt Later is a threat to this vision alone, the threat is up today.

speaker

Maria Luque

reason

This comment highlights the immediacy of the quantum threat, challenging the common perception that it’s a future problem.

impact

It shifted the discussion towards more urgent consideration of quantum-safe security measures and their implementation.

ROA revalidation is done based on what is written in ROA. So the trust in ROA is a considerably big issue. This year, one of the large network operator in the world located in Spain, which is a ripe region, had their online account used to creating or modifying ROA taken by bad actor.

speaker

Wataru Ohgai

reason

This comment introduces a real-world example of vulnerabilities in current security systems, highlighting the complexity of trust in digital infrastructure.

impact

It grounded the theoretical discussion in practical concerns, leading to more focus on operational challenges and the need for robust authentication methods.

We tend not to take very seriously cryptography as what I was giving examples whereby you know putting in place to filter authentication in your database and some other very little best practices. We are not adopting them. We are waiting for when it’s like mandatory or it’s like as a regulation to adopt it’s what is not really a good practice and it doesn’t have that much in securing our system.

speaker

Athanase Bahizire

reason

This comment highlights a critical issue in cybersecurity adoption, especially in developing regions, pointing out the reactive rather than proactive approach.

impact

It shifted the conversation towards discussing capacity building and the need for proactive security measures, especially in the Global South.

Overall Assessment

These key comments shaped the discussion by broadening its scope from purely technical considerations to include long-term societal impacts, immediate security threats, practical operational challenges, and regional disparities in adoption. They moved the conversation from theoretical possibilities to urgent practical needs, emphasizing the importance of proactive measures and global cooperation in addressing quantum and routing security challenges.

Follow-up Questions

How can the cryptographic algorithms currently used in RPKI be replaced with post-quantum algorithms once they are standardized?

speaker

Sofia Silva Berenguer

explanation

This is important to ensure RPKI remains secure against future quantum computing threats.

How can we develop operational policies for SLURM (the RPKI local exception mechanism) to distinguish between operational failures and malicious attacks?

speaker

Wataru Ohgai

explanation

This is crucial for properly implementing SLURM and maintaining network security.

How can we develop lightweight post-quantum cryptography protocols suitable for IoT devices with limited computational resources?

speaker

Nicolas Fiumarelli

explanation

This is essential to protect the growing number of IoT devices against future quantum computing threats.

How can the Dynamic Coalition on Internet Standards, Security and Safety (IS3C) collaborate with and support the work being done on quantum computing and security?

speaker

Wout de Natris

explanation

This collaboration could help advance the development and adoption of quantum-safe security measures.

How can we increase capacity building initiatives to reach smaller network operators, particularly in Africa and other developing regions?

speaker

Athanase Bahizire

explanation

This is important to ensure widespread adoption of security measures like RPKI across all levels of network operators.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #125 Balancing Acts: Encryption, Privacy, and Public Safety

WS #125 Balancing Acts: Encryption, Privacy, and Public Safety

Session at a Glance

Summary

This workshop focused on balancing encryption, privacy rights, and public safety, particularly in relation to child protection online. Experts from various fields discussed the challenges and potential solutions to this complex issue.

The discussion highlighted the tension between privacy and security, with some arguing that privacy concerns are sometimes weaponized at the expense of child safety. Participants emphasized the need to reject framing this as a choice between privacy and security, instead advocating for solutions that address both.

Key challenges identified included global inconsistencies in laws and standards, rapidly evolving technologies, and the difficulty of protecting against online abuse while maintaining privacy. The importance of international collaboration was stressed, with calls for finding common ground and developing harmonized legal and technical standards.

Participants suggested several approaches, including client-side scanning for known child sexual abuse material, age verification tools, and considering sub-contexts for different user groups. The need for public awareness and education about encryption and its impacts was emphasized.

The discussion also touched on the role of internet standards bodies like the IETF, with calls for greater multi-stakeholder engagement in these technical forums to ensure societal implications are considered. Participants agreed that finding solutions requires input from diverse stakeholders, including government, private sector, and civil society.

Overall, while acknowledging the complexity of the issue, the panel expressed optimism that balancing encryption, privacy, and public safety is a “mission possible” with continued dialogue and collaborative efforts.

Keypoints

Major discussion points:

– Balancing encryption, privacy rights, and public safety, especially regarding child protection online

– The need for multi-stakeholder collaboration and international cooperation on encryption policies

– Challenges of evolving technologies and inconsistent global regulations around encryption

– Educating the public and raising awareness about encryption’s impacts

– Engaging technical standards bodies like IETF to consider societal implications of encryption decisions

The overall purpose of the discussion was to explore the complex challenges of balancing encryption, privacy, and public safety, and to identify potential paths forward through multi-stakeholder collaboration and public education.

The tone of the discussion was thoughtful and solution-oriented. While speakers acknowledged the difficulty of the issues, there was an emphasis on finding pragmatic ways to make progress rather than viewing it as an impossible task. The tone became more optimistic and action-oriented by the end, with calls for stakeholders to get involved in technical standards bodies and educate the public.

Speakers

– David Wright: Director of the UK Safe Internet Centre and CEO of UK charity SWGFL

– Andrew Campling: Director of 419 Consulting

– Taddei Arnaud: Deputy Director, Cyber Information Development Bureau, Cyberspace Administration of China

– Makola Honey: Manager of policy research and development unit at Independent Communications Authority of South Africa, vice chairperson of ITU-T study group 17

– Boris Radanovic: Head of engagements and partnerships at SWGFL

– Alromi Afnan: Vice chairman of ITU-T study group 17, director of cyber security operations centre at CST

Additional speakers:

– Cynthia Lissoufi: Works with ITU, from South Africa

– Catherine Bielek: Infectious disease physician at Harvard Medical School

Full session report

Balancing Encryption, Privacy Rights, and Public Safety: A Multi-Stakeholder Approach

This workshop, part of the Internet Governance Forum (IGF), brought together experts from diverse backgrounds to discuss the complex challenges of balancing encryption, privacy rights, and public safety, with a particular focus on child protection online. The discussion highlighted the tension between privacy and security concerns, while emphasising the need for collaborative, multi-stakeholder solutions to address these interconnected issues.

Key Challenges and Framing of the Debate

The participants identified several key challenges in addressing encryption and privacy:

1. Global inconsistencies in laws and standards

2. Rapidly evolving technologies

3. Difficulty in protecting against online abuse while maintaining privacy

4. Balancing various human rights and interests

A significant point of contention emerged regarding the framing of the debate. Andrew Campling, Director of 419 Consulting, argued that privacy rights are sometimes weaponised at the expense of child safety, stating, “In my view the weaponization of privacy is being used and has been and is continuing to be used to override all of the human rights of children and other vulnerable groups and I think that’s a fundamental problem.” Campling also highlighted the scale of the issue, noting, “We’re seeing roughly 100 million reports of CSAM images and videos every year and that’s roughly three new images being found every second.”
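(As a rough consistency check on those figures, assuming reports arrive evenly through the year: 100,000,000 reports divided by roughly 31.5 million seconds in a year, that is 365 × 24 × 3,600, gives about 3.2 reports per second, which matches the three-new-images-every-second estimate quoted above.)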

In contrast, Boris Radanovic, Head of Engagements and Partnerships at SWGFL, advocated for rejecting the framework of privacy versus security altogether. He used a vivid analogy to illustrate his point: “We should utterly reject the framework of conversation of having privacy versus security. And if we reject it, I’ll just remind everybody that most of us flew to this wonderful country, and what if 90% of our flights had 90% of a chance to land in Ankara, maybe in Zagreb, maybe in London? None of us would take that option or those odds.” This reframing encouraged participants to think about achieving both privacy and security simultaneously rather than trading one for the other.

Proposed Solutions and Approaches

The discussion yielded several proposed solutions and approaches to address the challenges:

1. Technical Solutions:

– Client-side scanning for known child sexual abuse material (CSAM) images, as suggested by Andrew Campling

– Consideration of sub-contexts with different encryption requirements for various groups, proposed by Arnaud Taddei, Global Security Strategist at Symantec by Broadcom

2. International Collaboration:

– Makola Honey, Manager of Policy Research and Development Unit at Independent Communications Authority of South Africa, emphasised the importance of international collaboration to find common ground

– Alromi Afnan, Vice Chairman of ITU-T Study Group 17, highlighted the need to address global inconsistencies in laws

3. Public Awareness and Education:

– Boris Radanovic stressed the need for adaptable education for different capabilities and age groups

– Alromi Afnan noted that public awareness is a key part of online safety

– Panelists suggested developing targeted educational programs to help users understand the complexities of encryption and privacy

4. Engagement with Technical Standards Bodies:

– Andrew Campling called for civil society groups to engage with technical standards bodies like the Internet Engineering Task Force (IETF)

– Campling also highlighted how changes in underlying technology can affect parental controls and other safety measures

5. Learning from Other Models:

– Some participants suggested using the COVID-19 pandemic response as a model for balancing individual privacy and public safety needs

Multi-Stakeholder Approach and International Cooperation

A key area of agreement among participants was the need for a multi-stakeholder approach involving diverse perspectives. Speakers emphasised the importance of international collaboration and the development of harmonised legal and technical standards. Makola Honey suggested that international bodies can convene neutral dialogues to find balanced solutions, stating, “We need to find common ground, and international bodies can facilitate these discussions in a neutral way.”

The role of internet standards bodies like the IETF was highlighted, with calls for greater multi-stakeholder engagement in these technical forums to ensure societal implications are considered. Andrew Campling emphasised this point, stating, “What I think we need to do is to get people from the groups that are here, at least some of them, to engage over there, so civil society groups, governments, regulators, others who have got sufficient technical knowledge to engage in the standards bodies need to attend and pay attention to what is happening there, and the implications for some of the decisions being taken.”

Unresolved Issues and Future Considerations

Despite the productive discussion, several issues remained unresolved:

1. Developing global standards that balance needs of different regions and technical capabilities

2. Effectively educating the public about complex encryption issues

3. Addressing challenges posed by emerging technologies like quantum computing

4. Resolving conflicts between different legal and regulatory frameworks across countries

Arnaud Taddei contributed significantly to the discussion by emphasizing that security is not provable, stating, “Security is not provable. We can only prove insecurity.” This perspective added depth to the conversation about the challenges of implementing robust security measures.

Conclusion

While acknowledging the complexity of the issues at hand, the panel expressed optimism that balancing encryption, privacy, and public safety is a “mission possible” with continued dialogue and collaborative efforts. The discussion emphasised the need for ongoing multi-stakeholder engagement, international cooperation, and public education to address these challenges effectively. As the digital landscape continues to evolve, finding solutions that protect both privacy and public safety remains a critical goal for policymakers, technologists, and civil society alike.

Session Transcript

David Wright: It is now 1.45 here. We will make a start if that’s okay with everybody. And a very warm welcome to this workshop, workshop number 125. And here we are looking at balancing acts in terms of encryption, privacy, and public safety. My name is David Wright. I’m director of the UK Safe Internet Centre and CEO of a UK charity, SWGFL. And I’m delighted to be able to introduce you to the panel that we have today, both here and online as well. This is obviously the last workshop of this IGF, and it’s a real pleasure to be able to close out this IGF ahead of the closing ceremony with this particular subject. So it is, we are going to be looking at this complex balance between encryption, privacy rights, and public safety, and the need for structured multi-stakeholder discussion. We have an hour with you, so we are going to have to keep comments, questions, perhaps brief, but it is such an important subject that we see. I’m just first of all going to introduce the panel to you. And for the panellists, I’m going to have to probably just condense some of the bios just because of time. And I’m going to run through them in terms of the sequence of questions as well. So I’d like to welcome Andrew Campling, who’s here with us. Andrew is Director of 419 Consulting, a public policy and public affairs consultancy focused on the tech and telecom sectors. He has over a decade of non-executive experience, backed by nearly 40 years of experience in a wide range of increasingly senior roles in a mainly business-to-business technology context. He has been engaging in initiatives linked to encrypted DNS, encrypted SNI and related developments in internet standards, primarily to understand their impact in real-world deployments. It’s worthwhile pointing out that Andrew is a trustee of the Internet Watch Foundation, which is a global charity, one of our partners within the UK Safe Internet Centre, and holds an MSc in strategic marketing management and an MBA. Andrew is currently studying in his spare time, studying law, and plans to complete his LLM in the next couple of years. Online joining us is Arnaud Taddei. If we might be able to bring up Arnaud onto the screen. Arnaud is a global security strategist at Symantec by Broadcom. He’s an executive advisor on security strategy and transformation for the top 150 Symantec customers. As part of his mission, Arnaud participates in specific standards-defining organizations and in particular, and this warrants congratulations Arnaud, was elected chair of the ITU SG17 representing the UK, and works at the IETF. He started his career in 1993 at the famous CERN IT division in Geneva, which created the World Wide Web and where he led the team responsible for communication, authentication and authorization. In 2000 he joined Sun and became one of the 100 elected global principal engineers. In 2007 he joined Symantec and held chief architect roles up to director of research as a direct report to Dr Hugh Thompson, Symantec CTO and current RSA Conference chairman. I’d then also like to move on to the next panelist, which we can also bring Honey onto the screen, please. So Honey Makola is the manager of the policy research and development unit at the Independent Communications Authority of South Africa, where she guides the regulator in navigating its evolving roles across various aspects of ICT developments including cyber security. She also serves as a vice chairperson of the ITU-T study group 17, which focuses on setting international standards for cyber security.
Within study group 17 Honey co-convenes the correspondence group on child online protection, working to identify gaps in technical measures and promote initiatives that create safer online environments for children. If I can now draw us back into the room, let me introduce Afnan. Afnan Alromi is an accomplished cyber security leader with over 12 years of experience in managing complex projects and shaping cyber security strategies. As vice chairman of the ITU-T study group 17 on cyber security and director of the cyber security operations centre at CST, she plays a key role in advancing cyber security resilience locally and globally. Afnan holds advanced degrees in software engineering and computer science along with several industry professional certifications and is known for her expertise in strategic planning, vulnerability management and fostering international collaborations. Afnan, welcome. Finally, if I can welcome my colleague Boris. Boris Radanovic is an expert in the field of online safety and currently serves as the head of engagements and partnerships at SWGFL, the UK-based charity. Like me, he also works with the UK Safe Internet Centre, which is part of the European InSafe network. His work involves educating and raising awareness about online safety for children, parents, teachers and other stakeholders across the world. Boris has worked extensively with various European countries including Croatia, where he worked at the Safe Internet Centre, and he’s been involved in numerous missions to countries like Belarus, Serbia, Montenegro and North Macedonia to present online safety strategies to government officials and NGOs. His focus is on protecting children from online threats, such as cyberbullying, child sexual exploitation and scams, as well as empowering professionals through workshops and keynote speeches. So I would just like to welcome the panel, and also this afternoon, forgive me, I’m joined by my colleague Niels, who is going to moderate the online conversations for when we get through to chats. If I can just invite the panellists just to give a quick opening couple of sentences. Andrew, if we can start with you, please.

Andrew Campling: Good afternoon, everyone, and also hello to everyone online. As David said, my name is Andrew Campling. I’m a trustee of the Internet Watch Foundation, amongst other things. I think this is an incredibly important issue, which we’ll get into in a moment. And I think, although we’ve focused on the trade-offs and specifically talk about privacy, as I’ll expand on it in a short while, I really want to get into the debate about privacy versus other human rights. Because I think we over-inflate the importance of privacy and completely ignore often all of the other human rights, including fundamental rights, as opposed to privacy as a qualified right. But that’s something we’ll come on to when David asks us questions, I’m sure.

David Wright: Thank you, Andrew. Arnaud, if I can throw it to you, please.

Taddei Arnaud: Yes, can you hear me correctly? Yes? Yes, we can. Thank you for the chance to be in this workshop. The topic is really heartbreaking when you start to… understand what is at stake. It’s both concerning to see the level of harm that is increasing, perhaps accelerating, and at the same time we are facing a real design issue and we need to make trade-offs that are really difficult for a number of humans. So observing this from ITU and SG17 is an interesting journey and hopefully we are maturing and putting ourselves in the conditions where we can have a meaningful discussion. Thank you.

David Wright: Thank you. If I can now just turn to Hani.

Makola Honey: Yes, thank you. I just wanted to draw attention a little bit to the work that I do as the convener of the correspondence group, where we are focusing on identifying and addressing gaps in child online protection standardization within the study group 17. We have done a lot of work in reviewing the regulations, the standards that are currently in place, and you know the work has progressed well and we are on our way in identifying the gaps. But I would also like to take this opportunity please to invite the people in the group, as well as online, to please join the correspondence group on child online protection. This can be done through subscription on my workspace, but for the purpose of today’s meeting, for me, I think encryption is a very important and powerful tool that can help us safeguard communications and information, but it also creates significant challenges in protecting children online. So for the purpose of today’s meeting, I just want us to try and find a balance between privacy on the other hand, and other issues such as the protection of children online. It’s a challenging balance. But one, I believe that it is essential for the effectiveness of child online protection, and I look forward to the engagements this afternoon.

David Wright: Thank you. Honey, thank you very much. Now turning back to the room, Afnan, if I can hand over to you, please.

Alromi Afnan : Good afternoon, everyone, and looking forward to this wonderful discussion today and engaging in this topic, and looking forward to discussing the challenges that my colleague Honey just mentioned, and also to seeing how we can succeed with balancing encryption and privacy rights and at the same time public safety. So looking forward to this discussion, thank you.

David Wright: Thank you, Afnan. And finally, Boris, if you can introduce.

Boris Radanovic: Thank you very much. I appreciate the invitation and ability to contribute, especially from a diverse set of points looking at this. And I love the title. It says encryption, privacy, and public safety, and I think that is the framework of conversation that I think we should all have and support. And one of the questions in my mind that we can later hopefully answer is how do we create meaningful and impactful discussions on these topics that takes into account a wide array of different perspective needs and abilities and representation, but equally respecting the direction that we all want to take to a better and safer world, which includes protections of children in itself. So really proud to be here.

David Wright: Thank you, Boris. Okay. So moving on to the particular questions that we’re going to pose, and then panelists will share and discuss with everybody, after which point we will then open the floor and the virtual floor to questions. So please do hold on to questions. There will be a time. Because of timing as well, I’m going to keep panelists to perhaps four minutes. So please keep contributions succinct. So Andrew, I’m going to turn to you first without any further ado, and I wonder if you could elaborate a little on how governments and tech companies should approach the creation of lawful access mechanisms without infringing on privacy rights, straight into this point.

Andrew Campling: Fantastic, thank you David for the question and to provoke hopefully a response from some of the participants in the room and online. In my view the weaponization of privacy is being used and has been and is continuing to be used to override all of the human rights of children and other vulnerable groups and I think that’s a fundamental problem. As I said earlier, remembering that privacy is a qualified right and we need to think about all of the human rights and also again to provoke a response, encryption let’s remember is not the same as security, they’re fundamentally different things but they’re often conflated and for example if you begin as is happening increasingly in the internet standards world to encrypt indicators of compromise and metadata, you end up with weakened security and if you weaken security you have no privacy but you think you have and that’s also a big problem. So very briefly let’s put a scale to the problem and I’m going to focus on child sex abuse because that’s what the Internet Watch Foundation does. We’re seeing roughly 150 million victims of child sexual violence every year around the world and we actually are recording in the order of 100 million reports of CSAM images and videos every year and that’s roughly three new images being found every second, new images or videos. So even in the course of this workshop that’s a scary number of images and videos being found. The internet has magnified the scale of the problem of CSAM significantly. It happened pre-internet, but the ability to publish and share the images globally means, and remembering every time an image is shared that’s a crime and there’s a victim, the scale of the problem is huge compared to what it was pre-internet. We know from research that end-to-end encrypted messaging platforms are widely used to find and share CSAM and there’s a very large sample size behind that research. But to get directly to the problem, in terms of things like lawful interception, you don’t need to backdoor encryption to help solve this problem. Client-side scanning for known CSAM images would immediately reduce the size of the problem. It doesn’t break encryption, it doesn’t break privacy, so that’s an easy way to make an impact, as would be the use of tools like age estimation and age verification to keep adults off of platforms intended for children and children off of platforms intended for adults to try and keep the victims away from the criminals. So those would be my suggestions, places to start, and hopefully that will provoke a response from people in a few minutes. Thank you.

David Wright: I’m pretty sure it will do, Andrew, thank you. Just can I ask a follow-up question, just to prime the microphone again: are developments in internet standards helping or harming human rights?

Andrew Campling: So in my view, the increased use and requirement to use encryption in some of those standards, I could give examples but probably that’s too much detail for here, is making the problem worse, not better. It’s making it harder to find where the crimes are happening and it’s making it easier for the criminals to hide. So some of the developments are actually problematic in the standards bodies, an area where I’m active, and they also coincidentally weaken security as well. So I think that’s why we need civil society to engage in places like the standards bodies. It’s mainly technologists making these decisions and we need, dare I say it, multi-stakeholder engagement to actually shine a light on things that are causing huge societal problems.

David Wright: Thank you. Thank you very much. Okay. I’m going to move on now to Arnaud, if I can, please, onto the screen. So Arnaud, the question that we would pose to you. So what technical innovations or solutions do you see as viable for achieving a balance between privacy and public safety?

Taddei Arnaud: I like the mission impossible. It’s a very difficult problem. So on one hand, one of the issues is that we have only one model for the internet as we know it today. And so anything we do is going to impact various communities. So far, so good: until some versions of TLS and other considerations, we sort of managed to keep the community together and everybody could find what it needed from the setup. But for various reasons, some directions were set, and for good, bad and ugly reasons, it’s not a judgment. It’s just that they were set in a certain direction and now it is pushing the solution for one specific part of the spectrum of all the humans. So that means that now we have situations where some are going to get benefits and some are going to have a problem with what is happening. So this is very difficult to move from there, because we have only one design and it’s difficult to get out of this thing. So now that doesn’t mean we cannot be creative. For other areas we started to realize that maybe the problem is the fact that the anthropological assumption that was made behind this was one model for all the humans. That means a very narrow model for all the humans to make sure it fits the maximum. But when you do that you lose the fact that there are sub-contexts that have very specific needs. Child online protection is one sub-context, education is one of those sub-contexts. I would add elderly people is another sub-context. And all these sub-contexts have different requirements and needs, and when we have to take a step back and, let’s say, when we have to make design choices, the issue of a design is always the trade-offs. Which trade-offs are you going to make for a specific set of use cases and requirements? That’s what the engineer will think. So when you go in this approach, perhaps the direction we could consider is sub-contexts. So it’s not related here only to child online protection, but I see for example sub-contexts that may be already happening. I discovered the advent of what we call enterprise browsers that allow specific requirements for enterprise use cases. Equally you see more and more family solutions by some hyperscalers. So the question I have is, is the premise of the solution not to consider that we should perhaps re-highlight the concept of sub-context, and from there we could start to envision perhaps some technical solutions. I will stop at that, thank you.

David Wright: Thank you, thank you very much for trying to take on Mission Impossible. Okay, moving swiftly on to Honey, which we can’t see you at the moment Honey, but if I can, there we go, there we go, if I can pose you the question, so how can international collaboration improve or complicate the encryption debate, especially when balancing privacy

Makola Honey: with cross-border safety issues? Thank you for the question, so I have a lot of experience in international collaboration and also having recently checked the cybersecurity resolutions of the WTSA 2024. I want to draw a little bit on that experience, right, and I would like to start with what complicates the debates in the collaboration. First of all, the participants come to the collaboration table, if I may, with different legal and regulatory frameworks. They could be divided into the ones that are pro-data protection and privacy, then you get the ones that are for government control, for national security purposes, then you get the ones that want balance, so those differences make it very difficult to agree on a unified approach to encryption across borders, so you can even imagine for the frustration for the companies operating across borders regarding encryption. Then there’s also the imbalance in cybersecurity capabilities across the different nations. You get nations with advanced cybersecurity capabilities that may argue for stronger encryption to protect the critical infrastructure. which is fair. And then on the other hand, for example, at the moment, a priority for Africa could be the protection of children online versus individual privacy. And that is understandable because currently Africa has the highest dividend when it comes to young people. So you would understand why from this perspective, protecting the children online now is actually protecting the future economy and protecting everything. And then you get the MOUs and the multilateral treaties and so forth. But from the readings, it could be said that this can be quite slow and cumbersome when issues need to be dealt with right away and governments are seeking swift access. And then there’s the geopolitical tension and distrust as well that we witness. And from my work with the ITU so far, there’s also a question of how do you develop global standards that are balanced in addressing the needs of the different regions, taking into account the imbalances in technical capabilities, the skills that different countries are having, and the level of skills actually, and the culture of privacy in itself. But the complications are just the negatives. Moving to how it can improve, taking into account obviously having what complicates it in the background. I think the starting point in the encryption debate is that there are two sides, but how do we develop common ground? And that is what the international collaboration can improve. It has the ability to facilitate dialogue between nations with the different stance on encryption. And moving from that established common ground, countries can then establish harmonized legal and technical standards. There will be compromise. to be made by different groups, but remembering or recalling that the common ground is important for facilitating the debate. And I personally believe, and from my experience, that it’s during this international dialogue where innovative solutions can be birthed. For example, if you cannot compromise, what do you do? You look for solutions. And that can be in the form of research into privacy preserving technologies, which is what the work of the CG Correspondence Group on Travel and Protection of the ITU Study Group 17 is doing. You know, we’re looking for the solution that balances the two public-private partnerships, you know, development, the work that is done by the ITU development sector in sharing information and also capacity building. 
But the most important thing to also remember in all of this improvement that can be brought about by international collaboration, it requires active participation and contribution from the private sector as well, not just government and regulators. So what makes the collaboration work is everybody finding that common ground first and then starting to move on there and see how they can compromise. Ani, thank you very much. And I very much share the hope that we will find solutions at events like this, despite it, as Arnaud was saying, being mission impossible.

David Wright: Okay, I’m next going to go back to the room and to Afnan. Afnan, if I can pose to you from your perspective, what are the most critical challenges in balancing encryption with public safety and privacy rights? Thank you, David.

Alromi Afnan : First of all, before I go into answering your question, I would like to thank my colleagues here in this panel discussion for their great insights and the points they brought in. They’ve actually pointed out a couple of challenges that I was hoping and planning to discuss in this talk. Although, as we say, encryption is an important tool and aspect that helps us to secure and our sensitive data and to have it more safe, although, yeah, it poses a couple of challenges. And in today’s session, although we have a little time, I’m going to discuss just a couple that are most likely those with us and the attendees are more familiar with. The first challenge that we can sense and see is when it comes to global inconsistency, and I think Arnold and Hani mentioned a couple of points on that aspect, and also the conflicting international laws, where it brings it more challenging to discuss this. For example, countries having varied legal standards and laws and regulations, and when we see international companies working in those different countries, they have to comply with each standard and they have to fulfill all those standards as well. So this brings a hard challenge in that aspect. Another challenge is when it comes to the evolving technologies, and we’ve discussed this as well in SG17, and we have a specific question to emerging and evolving technologies in that aspect. Evolving technologies and the rapid pace of technology advancements means that we have to keep up with them and we have to address those encryption standards that bring in, and one of the evolving technologies that we are considering or are discussing is the processing in SG17 is quantum computing, and quantum computing in the future will most likely break some of the encryption standards that we have today, and this poses a challenge that we need to consider or start the transition to secure quantum infrastructure. Another challenge, and it’s very important, and we have a couple of initiatives in Saudi Arabia that is looking into that aspect, which is the challenges of protecting against abuse online, and as we see, some of the tools or the applications that we have today, or even chatting applications that we use in a day-to-day life, they have a lot of encryption or they have encryption implemented to secure our communication, and although this is vital and important, however, it creates a challenge for law enforcement to help secure or have public safety more safe in the environment, and most importantly, children, where we cannot see what type of text that is being communicated and the abuse that could happen in that discussion. So these are just some of the challenges that we need to consider when we balance between encryption and also public safety and privacy rights as well. So I think this is just something that we need to consider, and the only way to reach a collaboration is to have feedback either from both, from different areas, either from the government, from the private sector, from civils, and all parties that could collaborate in that discussion and try to find a way to reach a balance in that aspect.

David Wright: So thank you. Thank you very much. That leads into the ensuing conversation, so I’m going to encourage everybody to think about questions; when we open the floor, it does appear we’ve got some questions online too. I say that just before I throw it over finally to my colleague, Boris. And so my question here, Boris, is what role should the public play in this discourse and how can awareness be effectively raised on the impact of encryption policies on privacy and security?

Boris Radanovic: Thank you. Short answer, education. But how to do it properly is a much longer discussion. And I think we should start with defining frameworks for meaningful discussions that allow us a communication goals and structure that allow exactly these kinds of discussions on a multiple levels of representation, diversity, and abilities and disabilities that could contribute to these conversations that I might not be able to do now. But on a broader point, I think we should all be aware that vision can only pull as hard as reality can follow. And the current reality that in these chairs around us is not 150 million children across the world being sexually abused every year, and that number is rising. That is a reality we need to face. And while the vision we can all agree is magnificent, the reality is something that we need to take into account. And thinking about the discussions, I’m gonna raise more questions than answers here, but I think this is the perfect space, is how do we make sure that we do not allow dominance and dominant discussions or part of this discussion to be apart from a certain area, agenda, stakeholder, or interest? And how do we have a meaningful level playing field for anybody contributing to this discussion? How do we make sure that we develop initially technological solutions that take into the account of the benefit of the user or the benefit of the child first and foremost, and then we continue developing those solutions? And I think all of that builds up into our, what I personally consider our principle. duty as adults to create a better and a safer world for people and young children following in our footsteps. And I’ll come back to our nod and say, I love the movie, Tom Cruise and Mission Impossible, our nod, but I don’t know if you remember that in each of those movies, a great team of people working towards their own abilities and capabilities, working together, make the movie, in the end, Mission Quite Possible. I know that doesn’t make a good marketing title, but I think that should be a good notion. And just to come on with something that Honey said, which I think is important, what we are trying to do is not easy, but we have to ask ourselves, what is easy and what is right? And Lean, I would suggest on the side, what is right and finding solutions for that. To just come back on that point for my final, we need to be able and find a way to develop global standards with local sensitivities that respect many of the things that I mentioned. And I wholeheartedly ask all of you today listening to us online and here, do ask us questions. We have a discussion on so many levels and so many representations that we all need to understand, but if we can all take into account that all of us, at one point, I don’t know if you remember, were a child and we all needed somebody to stand up for us and defend what is the benefit of us. So I ask you today to look to that prism while we are discussing this topic. Thank you very much. Okay. So that concludes the contributions from

David Wright: the panellists to set the scene as well for everybody. I am going to ask, if somebody behind me puts their hand up, can you help me out? I haven’t got eyes in the back of my head. I do also like, as well, this theme of mission possible or mission impossible, depending on which one perhaps we should be reaching out to the producers or Tom Cruise to find us a way through here. So I open the floor to any particular questions if anyone anyone has. I can see behind me, yes. Okay, if I can ask if you could just introduce yourselves, that would be helpful to start with, too. Thank you.

Audience: Yes, thank you for giving me the floor. My name is Cynthia Lissoufi. I come from South Africa and I work with most of the panelists in this session at the ITU, and it’s quite refreshing to listen to the diverse views of different stakeholders on this important topic, which is quite dear to South Africa, but not only South Africa, but many of the countries that participate in the ITU work, and specifically in the study group 17, which is the technical study group of the ITU when it comes to issues of standard and security. For me, specifically as South Africa, we believe that actually we stand a good chance, and why we are saying this is because we are looking at the upcoming WSIS plus 20 review process, where we are also bringing in the issue of the global digital compact, and we believe that some of these issues we can, as I would say a community of stakeholders that are concerned with this particular issue, we find ways, because what I’m actually picking up here today is that we are all concerned, but as we’ve said, the issue is how do we deal with this? And I’m also hearing that we need this continuous discussion, and for us to continue with this, we also need to take advantage of all the processes that are currently happening, to make sure that this issue is not pushed at the back of other priorities, because different stakeholders will fight for their priorities. So all I’m pleading for, for all the stakeholders in this room, let us take advantage of the processes. that are happening, and we make sure that the issue of the child online protection also takes the forefront in all of these decisions, especially at the UN level. Thank you.

David Wright: Okay, yeah. What we’ll do is we’ll take three questions, and then we’ll come to the panel. So,

Audience: thank you. Thank you. My name is Catherine Bielek. I’m an infectious disease physician at Harvard Medical School. And not to add another layer of complexity to this, but I certainly wonder if public health and pandemic response might balance this a little bit as well. There are some lessons I think that we can pull from how we navigated the COVID-19 pandemic in terms of data privacy, security, and public health and safety. There’s perhaps a little bit simplified, but when we did contact tracing for the COVID-19 pandemic, people can give up their own right, their qualified right to privacy. They can volunteer that information. And then certainly, how much is surveilled, it does not necessarily dictate how much data is kept, how it’s kept and where it’s kept. I think that is important too for other pandemics or syndemics, which are overlapping pandemics, especially as related to HIV, which is my area, can carry a lot of stigma or criminalization laws. So, when that information is kept, there is surveillance related to that. But in the United States, that’s kept in a secure encrypted facility at a state health department, for instance. And the amount that you surveil is not necessarily proportional to the amount that you keep. So, my question perhaps is related to how these lessons might apply to this discussion

David Wright: in other areas as well. Online, I’m seeing a lot of interaction, a lot of compliments for the speakers as well. I’ve got a question here from Cheryl. Does balancing necessarily mean we need to rank rights and risks to properly weigh them against each other? If not, how do we begin an objective, comprehensive review? If so, how do we do this on a global level? There was another question which basically comes down to, can somebody please clear the air, because there’s a lot of misinformation, a lot of discussion where there’s also a lot of fake news in there. For example, when we are talking about privacy versus child protection, is it true that if we want to go towards child protection we are giving up on privacy? I think there’s a lot of questions there to solve that one. Niels, thank you. Okay, so those three questions addressed to the panel. I’m gonna go to Andrew first.

Andrew Campling: So let me have a go at two of those, but briefly. Firstly, on the weakening of encryption question, I would argue, and I’ll be as precise as I can without hopefully getting too detailed, that specifically to detect known child sex abuse material, that needs to have no impact whatsoever on encryption. Again, to expand that ever so slightly, if, in the end-to-end messaging applications, they agreed to scan any images before they were uploaded to see if they contained known CSAM and then encrypt, there are no privacy implications to that because you don’t learn what the image is. You simply learn that it isn’t known CSAM by something called hash matching. For those of you that have knowledge in that area, you don’t need to look at the content of the message either. So you’re simply saying, does this image in a mathematical sense match a database of known CSAM? So that doesn’t, in my opinion, have any privacy implications, unless there’s a match. And if there’s a match, then you’ve committed a crime and your qualified right to privacy is surrendered anyway. So that’s fine. And then just briefly the other point on the sort of, I think the question was ranking or trading off different rights. Yes, and I would always say that if you have to trade rights, you ought to bias towards the most vulnerable in society. And at the moment, in my opinion, the weaponization of privacy is largely benefiting privileged adults at the expense of lots of different vulnerable groups. And that is an unacceptable trade off. So if we have to make trade offs, we should bias the vulnerable or advantage the vulnerable, not the privileged. That’s the wrong way around, in my view.
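As a rough illustration of the hash-matching check described above, the sketch below shows the basic flow on the client side. It is a minimal, hypothetical example: the hash list, function names and the use of an exact SHA-256 digest are illustrative only (deployed systems such as PhotoDNA rely on perceptual hashes so that re-encoded copies of an image still match, and the hash lists are distributed and handled under strict controls).

```python
import hashlib

# Illustrative placeholder only: in practice a vetted hash list is supplied
# by a hotline or clearing house, not assembled locally.
KNOWN_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def matches_known_material(image_bytes: bytes) -> bool:
    """Return True if the image's digest appears in the known-material list.

    The check reveals nothing about the image itself; it only answers
    whether it is identical to an item already verified and listed.
    """
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

def send_image(image_bytes: bytes, encrypt_and_send) -> bool:
    """Client-side flow: check the image before it is encrypted and sent."""
    if matches_known_material(image_bytes):
        # Handle according to policy (block, report); the message itself is
        # never decrypted by a third party, so end-to-end encryption is untouched.
        return False
    encrypt_and_send(image_bytes)
    return True
```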

David Wright: Andrew, thank you. Thank you very much. Arnaud, if I can just bring you in here.

Taddei Arnaud: Yes, thank you. Thank you to Boris about Mission Impossible or Mission Possible. I like it, but I had to be provocative, of course. No, I think to come back on the issue, it’s a real design problem in the sense of theory of design. And I really like the previous intervention, I could not capture the name of the person who made the analogy with the COVID-19 learnings. That’s exactly what we should do. We should learn from others and other areas where they have resolved the problem. Because sometimes there is a lot of hype about issues of, and this is not private and the data are not secure and people are losing their rights and so on. There are areas where no. The problem of why the Mission Impossible is because of something else. The problem that is underlying behind the scene is the fact that very few people realize that security cannot be proven. At the low level, altitude level on encryption, yes, you can perhaps prove mathematically some cryptography and other things. But the moment you elevate the altitude, you lose the possibility to prove that your system is secure. And if you ask anybody about, is whatever control I put secure, can I trust it, the answer is fundamentally no. You cannot. And that’s the problem of who guards the guards behind the scene, and there is no way, or I could not find a way, that we can resolve the problem. In fact it’s not even the problem. The problem is not that this is impossible per se. The problem is that few people realize that point. So when security discusses privacy it’s an unequal battle, because security has little to offer to the privacy side. So we are turning in circles, with some people trying to split us for dogmatic reasons, versus let’s recognize things how they are and let’s be pragmatic and let’s recognize the design problem. If we get it back to a design problem we would then include back the ethics, the anthropology, the experience, the law. We could do something about it. That’s where the mission becomes possible. So one approach of breaking the problem in pieces, of getting a sub-context, comes back to the question: should we have different weight between people? Of course not. That would be terrible. If we end up in a place where we have to put priorities of some humans versus some others, I don’t think that we have done the job. It has to be equal. All humans should be respected in this story. So if we could take a step back, understand perhaps from others, I really like the example of the COVID-19, and regroup all the possibilities with a strategy of perhaps divide to conquer, so that we can split the technical design and open the options. So I believe the risk of doing this is more that it is going to impact significantly the way that we have built and developed our entire Internet at the moment, from the browsers to the CDNs, the servers, to all things, because now we would need to represent a richer human. I think the problem is that today the underlying problem is that the human model behind this whole design is very, very, very narrow. And we are locked. We can’t do anything, because if we help one now, we lose properties for the other one. So what if we would re-enrich the model behind the scene? How many possibilities we would create? That would be something I could propose.

David Wright: Arnaud, thank you very much. Boris?

Boris Radanovic: Thank you. I’ll try my best to cover it all. Thank you so much for the questions, and I’ll come to the first one, and Cynthia, bringing child learning protection to the highest levels. I full-heartedly support you and Ben at SD17, and if I can do anything as well to support more to keep this, and this is one of the places to do it, absolutely. Yes, I love the idea of using COVID learnings, especially about volunteering rights and seeing how that works in a different space and impact. I think, as well with Arnaud, I would be interested to see how that works. And on the question of how do we balance that risk in the global, I think that is the challenge. That is the biggest challenge that we have to do, but I have to agree with Arnaud. We cannot be the ones that have conflicting things and have to decide one from the other, which brings me to the point that I want to make, with no disrespect to the person asking that question. We should utterly reject the framework of conversation of having privacy versus security. And if we reject it, I’ll just remind everybody that most of us flew to this wonderful country, and what if 90% of our flights had 90% of a chance to land in Ankara, maybe in Zagreb, maybe in London? None of us would take that option or those odds. So let’s reject the framework of conversation of privacy versus security and focus on the title. that is privacy and security. They are solutions, they are ways we can achieve that. They might be difficult, they might be hard, and I’ll come back, what is easy and what is right. And I think we should, to answer the question of the online speaker, absolutely detest and reject the notion that that is the discussion we are having. None of us wanna go back on our privacy, but none of us as well wanna see the thing that as well are not mentioned, that we cannot fully trust that any system is secure. But what I can tell you that we know and have referenced and have research and evidence that they are currently unintended consequences. They are doing harm to the often unheard, unseen and unsupported people and young children across the world. So I go back and ask the question, what is right and what is easy, and let’s start doing the right thing, even though, and I still hope are not, we will find the mission quite possible in the end, and maybe laugh at this in one day, but I am worried about quantum computing and making this whole discussion basically pointless,

David Wright: but yes. I feel that’s an entirely different workshop. Okay, Afnan, do you wanna go first? I know I’ll come to you next.

Alromi Afnan : Thank you, I’ll make it very short. I just want to thank the floor for bringing those great questions. And just to come back to the pandemic and the COVID, I think one aspect that from the lesson learned brought up from this is having public awareness. I think this is big part of it that you have a right, and it’s part of the online safety that you should be granted to have since the pandemic made us all become most of our time remote. So I think part of the lesson learned here is awareness of the public community and what is right for them and what they can subscribe or work to. So this is just a comment, thank you.

David Wright: Thank you, Afnan. Arnaud?

Taddei Arnaud: Yes, very quick to come back on something I need to probably re-qualify a bit. When I point to the fact that we cannot trust security, let’s say, I totally agree with you, Boris. It’s exactly where I want us to go. We need to stop this debate about privacy versus security. And the fact is that at the moment security cannot be trusted and we have nothing to offer to privacy. I see it as an opportunity. Now, to come back on the person that took the analogy with health, this is exactly the same thing. We forget that in the real world your immune system has defects. You will miss a virus, you will have an auto-immune disease. Can I trust my security by design? No. And that’s why we created the health system. But can I trust the health system? Absolutely not either. If the surgeon makes a mistake, I die. If I take too many medicines, I die. So the point is that it is a paradox. But I would like people to consider it as a positive paradox, that if we would precisely heal our security and privacy people together, let’s do something about it. And then we can re-establish perhaps a new approach that could be fruitful for not prioritising humans versus each other, and on the contrary having the right design for each of our different contexts. And they can evolve over time from when people are children up to when they are elderly. That’s it. Thank you.

David Wright: Arnaud, thank you very much. Honey?

Makola Honey: Yes, thank you. And I just want to echo what my colleagues were saying about the very purpose of this workshop, right? We are here to find balance, so it’s not necessarily weighing one against the other. But the letter part of it is that we are here to find balance, and we are here to find part, the last part of the question asking how do we begin an objective, comprehensive review and how do we do that on a global scale? In my opinion, there is a need for a global body responsible for the global framework and at the moment we have at regional level, the African Telecommunication Union and other regional bodies, but internationally we have the International Telecommunication Union and I think those international bodies then have a responsibility to ensure that they become neutral conveners of the differing stakeholders with their differing viewpoints so that there can be an unrestricted dialogue regarding finding the balance in the solution because ignoring any of the views, whether extreme or on the extreme side or on the other without really looking at the matter and really discussing the situation can oversimplify the issue of encryption and that’s not what we want. So I think that to answer the question, bodies such as the international

David Wright: regulatory bodies are very important in creating that space for that dialogue. Okay, thank you very much and hopefully in terms of those questions posed, there was a suitable and adequate responses to that. We have just a little over five minutes left. Are there any other questions that anybody has? Any more questions online? Okay, I’m going to perhaps then, this question could be one to close us with given, say, just a few minutes left, and it’s one about public awareness. Public understanding of encryption is often limited. I think we’ve kind of heard about that. How can stakeholders better educate citizens, everybody, about the impacts of encryption on privacy and public safety? Who wants to take that? I’ll try and shorten it. Again, it’s about the same word I use is education,

Boris Radanovic: but more so to the fact is adaptable education, because different levels and different capabilities of people need to understand this topic from a different way. I think somebody much, much smarter than me said, if you cannot explain your topic in five minutes to a five-year-old, you might not be an expert in the topic. We need to find sensitive local environments to expand on the topics that are way too complex for the smartest people in the world. That either means that we don’t understand the field well enough, or don’t we have the right people to expand that. Yes, awareness campaigns. Yes, stakeholders who genuinely take effort of educating the people in the right way without the agenda leaning left or right is important. Having a body that can assess that and tell us who is doing it better or worse and kind of being inspired about it. We have been doing decades of awareness-raising from child sexual abuse, for intimate image abuse, for work of child online protection in general. SWGFL alone is 25 years old next year, so we know the principles that can build that and do that, but all of those principles fall on education. Sometimes it took us a decade to educate a whole nation of the importance of why do we need to do one thing or another. It will take us time, so the short answer is to educate the general public and raise their awareness. We need right people in the right place educating them and allow for some time to pass so we can do that on a global scale or a much larger scale. I hope that answers the question enough.

David Wright: That’s a good go, Boris. Andrew?

Andrew Campling: Yeah, so I think I would start by being less ambitious, and dare I say, repeating a point I made earlier, where a lot of the decisions about encryption are made are not here, they’re in some of the standards development organisations such as the IETF, the Internet Engineering Task Force, where I’m active, which makes design choices about the underlying internet standards. What I think we need to do is to get people from the groups that are here, at least some of them, to engage over there, so civil society groups, governments, regulators, others who have got sufficient technical knowledge to engage in the standards bodies need to attend and pay attention to what is happening there, and the implications for some of the decisions being taken, because otherwise I think what we risk is developing internet standards which create societal problems, not because the people behind the standards are bad or evil, but because they don’t have the necessary knowledge. So dare I say it, and it’s probably appropriate to finish on the point here, the multi-stakeholder approach, that is the way forward, and then obviously through our different communities we can then spread the message backwards into the others that we engage with, but I’d like to get, at least introduce, some element of the multi-stakeholder approach into the technical bodies first, and then work backwards.

David Wright: Just before you put the microphone down there, Andrew, so can you give us an example about one of those underlying technology changes that may well have an impact, and how would that, what would that look like, just in case, assuming that not everybody has a technical understanding about, so a real life case example.

Andrew Campling: Okay, and again I’ll keep this hopefully at a high level. So some of the current changes that have been made in the underlying standards, something called Encrypted Client Hello, for example, will make it increasingly difficult for parental controls to work. So for those of you that rely on parental controls to stop your children being able to access adult-type content, or indeed in schools that similarly use the same sort of controls, potentially those systems will stop working, not because you’ve stopped using them but because the underlying technology has changed. So that will be an example where, because there isn’t a lot of multi-stakeholder discussion, it’s being overlooked, so that’s why we need that multi-stakeholder approach.
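To make the parental-controls example concrete: many home and school filters work by reading the server name (SNI) that a browser sends in the clear when it opens a TLS connection and checking it against a blocklist. The sketch below is a minimal, hypothetical illustration of that kind of check (the blocklist, names and simplified logic are invented for illustration); with Encrypted Client Hello the real server name is carried in an encrypted part of the handshake, so a network-level filter like this no longer sees it.

```python
# Minimal, hypothetical sketch of an SNI-based filter of the kind used by
# parental controls and school networks. Not a complete implementation.
BLOCKED_HOSTS = {"adult-content.example", "gambling.example"}

def allow_connection(visible_sni: str | None) -> bool:
    """Decide whether to allow a TLS connection based on the SNI the network can see.

    With classic TLS the visible SNI is the real destination hostname.
    With Encrypted Client Hello (ECH) the real hostname is encrypted, and the
    network only sees the outer, client-facing name, so this check can no
    longer tell blocked destinations apart from allowed ones.
    """
    if visible_sni is None:
        return True  # nothing visible to filter on
    return visible_sni not in BLOCKED_HOSTS
```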

David Wright: So I guess there’s a point for everybody here, both in the room and online as well; that sounds a bit of a call to action, that if you weren’t aware of, or indeed if you have the opportunity to engage with, the IETF, and have, as Andrew said, that level of technical understanding, then please do, please go and understand how your browser and the internet is being designed in terms of some of those standards, and perhaps the unintended consequences that you may well see while, well, perhaps while you’re here. But those views, as has been widely said, I think this multi-stakeholder approach is a really important aspect, so there’s a call to action for everybody about the IETF. So we have literally a couple of minutes left and I am going to just have a look around if there are any particular closing remarks anyone may well have amongst the panelists; I would go around but we don’t have time to do that, if anyone has any concluding remark. Okay, thank you very much. So in 60 seconds, we clearly did cover a lot of subjects here and I’ve got already pages of notes, I think. So definitely, Andrew, the term that really opened the responses, about the weaponization of privacy to override and impact children’s safety, was a bold statement to open. I think we’ve very much heard as well that this is not privacy against security; this is privacy and security, is one thing that I think we’ve come across as well. This does require multi-stakeholder, this is not one-dimensional, and it does require all of us to get involved so that the output is going to be reflective of that multi-stakeholder contribution as well. But I will finish with: this is not mission impossible. Arnaud, Boris, this is mission possible. I think we’ve concluded with seeing the way through. So in that regard, Honey, we have found a particular solution at this particular workshop. So thank you very much, everybody, for those questions as well. It’s a real pleasure to be able to moderate this panel of such amazing, esteemed and really world-leading experts here as well. So I would like you to join me in thanking them for their contribution just as we close out. Thank you very much. Thank you.

A

Andrew Campling

Speech speed

150 words per minute

Speech length

1324 words

Speech time

528 seconds

Encryption should not override other human rights

Explanation

Andrew Campling argues that privacy rights are being weaponized to override other human rights, particularly those of children and vulnerable groups. He emphasizes that privacy is a qualified right and should not take precedence over fundamental rights.

Evidence

The Internet Watch Foundation records approximately 100 million reports of CSAM images and videos every year, with roughly three new images being found every second.

Major Discussion Point

Balancing Encryption, Privacy and Public Safety

Agreed with

Taddei Arnaud

Boris Radanovic

Agreed on

Balancing privacy and security

Differed with

Boris Radanovic

Differed on

Approach to balancing encryption, privacy, and public safety

Client-side scanning for known CSAM images

Explanation

Andrew Campling proposes client-side scanning for known CSAM images as a solution that doesn’t break encryption or privacy. He argues that this approach would immediately reduce the scale of the problem without compromising user privacy.

Evidence

End-to-end encrypted messaging platforms are widely used to find and share CSAM, based on research with a large sample size.

Major Discussion Point

Technical Solutions and Innovations

Differed with

Taddei Arnaud

Differed on

Approach to technical solutions

Civil society groups should engage in technical standards bodies

Explanation

Andrew Campling suggests that civil society groups, governments, and regulators with sufficient technical knowledge should engage in standards development organizations like the IETF. This engagement is necessary to prevent the development of internet standards that create societal problems.

Evidence

Changes in underlying standards, such as Encrypted Client Hello, can make it increasingly difficult for parental controls to work.

Major Discussion Point

Public Awareness and Education

Agreed with

Taddei Arnaud

Makola Honey

Boris Radanovic

Agreed on

Need for multi-stakeholder approach

T

Taddei Arnaud

Speech speed

148 words per minute

Speech length

1394 words

Speech time

565 seconds

Need to consider sub-contexts with different requirements

Explanation

Taddei Arnaud proposes considering sub-contexts with specific needs, such as child protection, education, and elderly care. He suggests that this approach could help in making better design choices and trade-offs for specific use cases and requirements.

Evidence

Examples of sub-contexts include enterprise browsers for specific enterprise use cases and family solutions by hyperscalers.

Major Discussion Point

Balancing Encryption, Privacy and Public Safety

Agreed with

Andrew Campling

Makola Honey

Boris Radanovic

Agreed on

Need for multi-stakeholder approach

Consider sub-contexts like child protection, education

Explanation

Taddei Arnaud emphasizes the importance of considering different sub-contexts when designing technical solutions. He suggests that this approach could help address specific requirements for different groups, such as children or the elderly.

Major Discussion Point

Technical Solutions and Innovations

Agreed with

Andrew Campling

Boris Radanovic

Agreed on

Balancing privacy and security

Differed with

Andrew Campling

Differed on

Approach to technical solutions

Learn from health/pandemic response models

Explanation

Taddei Arnaud draws parallels between cybersecurity and health systems, highlighting that both have inherent imperfections. He suggests learning from health system models to develop a new approach that balances security and privacy needs.

Evidence

Examples of imperfections in health systems, such as the possibility of surgeon errors or adverse effects of medicines.

Major Discussion Point

Public Awareness and Education

M

Makola Honey

Speech speed

131 words per minute

Speech length

1097 words

Speech time

499 seconds

International collaboration can help find common ground

Explanation

Makola Honey argues that international collaboration can facilitate dialogue between nations with different stances on encryption. This collaboration can lead to the establishment of harmonized legal and technical standards.

Evidence

The work of the Correspondence Group on Child Protection of ITU Study Group 17 in researching privacy-preserving technologies.

Major Discussion Point

Balancing Encryption, Privacy and Public Safety

Agreed with

Andrew Campling

Taddei Arnaud

Boris Radanovic

Agreed on

Need for multi-stakeholder approach

Research privacy-preserving technologies

Explanation

Makola Honey suggests that international collaboration can lead to innovative solutions, such as research into privacy-preserving technologies. This approach aims to balance privacy concerns with other needs, such as child protection.

Evidence

The work of the Correspondence Group on Child Protection of ITU Study Group 17.

Major Discussion Point

Technical Solutions and Innovations

International bodies can convene neutral dialogues

Explanation

Makola Honey emphasizes the role of international bodies like the International Telecommunication Union in convening neutral dialogues between differing stakeholders. She argues that these bodies have a responsibility to ensure unrestricted dialogue to find balanced solutions.

Major Discussion Point

Public Awareness and Education

A

Alromi Afnan

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Global inconsistency in laws creates challenges

Explanation

Alromi Afnan points out that global inconsistency in laws and regulations creates challenges for balancing encryption, privacy, and public safety. This inconsistency makes it difficult for international companies to comply with varied legal standards across different countries.

Major Discussion Point

Balancing Encryption, Privacy and Public Safety

Address challenges of quantum computing

Explanation

Alromi Afnan highlights the challenge posed by evolving technologies, particularly quantum computing. She argues that quantum computing may break current encryption standards, necessitating a transition to secure quantum infrastructure.

Evidence

Discussions in SG17 about emerging and evolving technologies, including quantum computing.

Major Discussion Point

Technical Solutions and Innovations

Public awareness is key part of online safety

Explanation

Alromi Afnan emphasizes the importance of public awareness in online safety. She argues that awareness is a crucial aspect of the right to online safety, especially in the context of increased remote activities since the pandemic.

Evidence

Lessons learned from the COVID-19 pandemic about the importance of public awareness.

Major Discussion Point

Public Awareness and Education

B

Boris Radanovic

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Should reject framing of privacy vs security

Explanation

Boris Radanovic argues for rejecting the framework of conversation that pits privacy against security. He emphasizes the need to focus on achieving both privacy and security, rather than treating them as mutually exclusive.

Major Discussion Point

Balancing Encryption, Privacy and Public Safety

Agreed with

Andrew Campling

Taddei Arnaud

Agreed on

Balancing privacy and security

Differed with

Andrew Campling

Differed on

Approach to balancing encryption, privacy, and public safety

Need adaptable education for different capabilities

Explanation

Boris Radanovic emphasizes the need for adaptable education to help different groups understand the complex topic of encryption and its impacts. He argues that education should be tailored to different levels of capability and understanding.

Evidence

SWGFL’s 25 years of experience in awareness-raising for child sexual abuse, intimate image abuse, and child online protection.

Major Discussion Point

Public Awareness and Education

Agreed with

Andrew Campling

Taddei Arnaud

Makola Honey

Agreed on

Need for multi-stakeholder approach

A

Audience

Speech speed

164 words per minute

Speech length

529 words

Speech time

192 seconds

Use COVID-19 pandemic response as model

Explanation

An audience member suggests using lessons learned from the COVID-19 pandemic response as a model for balancing privacy and public safety in the context of encryption. This approach could provide insights into managing data privacy and security while addressing public health and safety concerns.

Evidence

Examples of contact tracing during the COVID-19 pandemic, where people could voluntarily give up their right to privacy for public health purposes.

Major Discussion Point

Technical Solutions and Innovations

Agreements

Agreement Points

Need for multi-stakeholder approach

Andrew Campling

Taddei Arnaud

Makola Honey

Boris Radanovic

Civil society groups should engage in technical standards bodies

Need to consider sub-contexts with different requirements

International collaboration can help find common ground

Need adaptable education for different capabilities

Speakers agreed on the importance of involving various stakeholders in discussions and decision-making processes related to encryption, privacy, and public safety.

Balancing privacy and security

Andrew Campling

Taddei Arnaud

Boris Radanovic

Encryption should not override other human rights

Consider sub-contexts like child protection, education

Should reject framing of privacy vs security

Speakers emphasized the need to balance privacy rights with other important considerations such as public safety and child protection, rather than treating them as mutually exclusive.

Similar Viewpoints

Both speakers highlighted the need for technical solutions to address specific challenges in balancing encryption, privacy, and public safety.

Andrew Campling

Alromi Afnan

Client-side scanning for known CSAM images

Address challenges of quantum computing

Both suggested learning from health and pandemic response models to inform approaches to balancing privacy and security in the context of encryption.

Taddei Arnaud

Audience

Learn from health/pandemic response models

Use COVID-19 pandemic response as model

Unexpected Consensus

Importance of public awareness and education

Boris Radanovic

Alromi Afnan

Need adaptable education for different capabilities

Public awareness is key part of online safety

Despite coming from different backgrounds, both speakers emphasized the critical role of public awareness and education in addressing encryption and online safety challenges.

Overall Assessment

Summary

The main areas of agreement included the need for a multi-stakeholder approach, balancing privacy with other rights and considerations, and the importance of technical solutions and public education.

Consensus level

Moderate consensus was observed among speakers on the need for balanced approaches and multi-stakeholder involvement. This implies a recognition of the complexity of the issue and the need for collaborative efforts in addressing encryption, privacy, and public safety challenges.

Differences

Different Viewpoints

Approach to balancing encryption, privacy, and public safety

Andrew Campling

Boris Radanovic

Encryption should not override other human rights

Should reject framing of privacy vs security

Andrew Campling argues that privacy rights are being weaponized to override other human rights, particularly those of children, while Boris Radanovic emphasizes the need to focus on achieving both privacy and security rather than treating them as mutually exclusive.

Approach to technical solutions

Andrew Campling

Taddei Arnaud

Client-side scanning for known CSAM images

Consider sub-contexts like child protection, education

Andrew Campling proposes specific technical solutions like client-side scanning, while Taddei Arnaud suggests a more context-based approach considering different requirements for various groups.

Unexpected Differences

Framing of the encryption debate

Andrew Campling

Boris Radanovic

Encryption should not override other human rights

Should reject framing of privacy vs security

While both speakers are concerned with balancing various rights and interests, their framing of the issue is unexpectedly different. Andrew Campling’s approach of prioritizing certain rights over others contrasts with Boris Radanovic’s rejection of the privacy vs. security framing altogether.

Overall Assessment

Summary

The main areas of disagreement revolve around the approach to balancing encryption, privacy, and public safety, as well as the specific technical and policy solutions proposed.

Difference level

The level of disagreement among the speakers is moderate. While there is a general consensus on the importance of addressing the issue, there are significant differences in the proposed approaches and solutions. These differences reflect the complexity of the topic and the need for continued multi-stakeholder dialogue to find effective and balanced solutions.

Partial Agreements

All speakers agree on the need for broader engagement and collaboration, but they differ in their specific approaches. Andrew Campling focuses on engaging with technical standards bodies, Makola Honey emphasizes international collaboration, and Boris Radanovic stresses the importance of adaptable education.

Andrew Campling

Makola Honey

Boris Radanovic

Civil society groups should engage in technical standards bodies

International collaboration can help find common ground

Need adaptable education for different capabilities

Takeaways

Key Takeaways

The discussion should focus on balancing encryption, privacy and public safety rather than pitting them against each other

A multi-stakeholder approach involving diverse perspectives is crucial for addressing these complex issues

Technical solutions like client-side scanning for CSAM could help balance privacy and safety

International collaboration and common standards are needed, while accounting for local contexts

Public education and awareness about encryption impacts is important but challenging

Resolutions and Action Items

Stakeholders should engage with technical standards bodies like IETF to provide input on encryption standards

International bodies like ITU should convene neutral dialogues to find balanced solutions

More research is needed into privacy-preserving technologies that also enable child protection

Unresolved Issues

How to develop global standards that balance needs of different regions and technical capabilities

How to effectively educate the public about complex encryption issues

How to address challenges posed by emerging technologies like quantum computing

How to resolve conflicts between different legal/regulatory frameworks across countries

Suggested Compromises

Consider sub-contexts (e.g. child protection, education) with different encryption requirements rather than one-size-fits-all approach

Use client-side scanning for known CSAM images before encryption to balance privacy and safety

Learn from pandemic response models on balancing individual privacy and public health needs

Thought Provoking Comments

In my view the weaponization of privacy is being used and has been and is continuing to be used to override all of the human rights of children and other vulnerable groups and I think that’s a fundamental problem.

speaker

Andrew Campling

reason

This comment challenges the common narrative around privacy and frames it as potentially harmful to vulnerable groups, particularly children. It introduces a provocative perspective that privacy rights may be overemphasized at the expense of other human rights.

impact

This comment set the tone for much of the subsequent discussion, prompting other participants to consider the balance between privacy and other rights, particularly child protection. It led to a deeper examination of the trade-offs involved in encryption policies.

Client-side scanning for known CSAM images would immediately reduce the size of the problem. It doesn’t break encryption, it doesn’t break privacy, so that’s an easy way to make an impact.

speaker

Andrew Campling

reason

This comment offers a specific technical solution to address child sexual abuse material (CSAM) without compromising encryption or privacy. It provides a concrete example of how technology could be used to balance competing interests.

impact

This suggestion sparked further discussion about technical solutions and their potential impacts. It shifted the conversation from abstract principles to practical implementations.

For other areas we started to realize that maybe the problem is the fact that we have in the background of that the anthropological assumption that was made behind was one model for all the humans. That means a very narrow model for all the humans to make sure it fits the maximum.

speaker

Taddei Arnaud

reason

This comment introduces the idea that the current approach to internet design may be based on an overly simplistic model of human needs and behaviors. It suggests that a more nuanced approach might be necessary.

impact

This perspective broadened the discussion beyond technical solutions to consider the underlying assumptions of internet architecture. It led to considerations of how to design systems that can accommodate diverse needs and contexts.

We should utterly reject the framework of conversation of having privacy versus security. And if we reject it, I’ll just remind everybody that most of us flew to this wonderful country, and what if 90% of our flights had 90% of a chance to land in Ankara, maybe in Zagreb, maybe in London? None of us would take that option or those odds.

speaker

Boris Radanovic

reason

This comment challenges the framing of the debate as a trade-off between privacy and security. It uses a vivid analogy to illustrate why this framing is problematic and unacceptable.

impact

This reframing of the issue shifted the discussion away from seeing privacy and security as opposing forces. It encouraged participants to think about how to achieve both simultaneously rather than trading one for the other.

What I think we need to do is to get people from the groups that are here, at least some of them, to engage over there, so civil society groups, governments, regulators, others who have got sufficient technical knowledge to engage in the standards bodies need to attend and pay attention to what is happening there, and the implications for some of the decisions being taken.

speaker

Andrew Campling

reason

This comment highlights the importance of multi-stakeholder engagement in technical standards development. It points out a gap in current processes where important societal implications may be overlooked.

impact

This suggestion provided a concrete action item for participants and shifted the discussion towards practical steps for improving the decision-making process around internet standards and encryption policies.

Overall Assessment

These key comments shaped the discussion by challenging common assumptions, introducing new perspectives, and shifting the focus from abstract principles to practical solutions. They encouraged a more nuanced understanding of the complex interplay between privacy, security, and other human rights, particularly in relation to child protection. The discussion moved from identifying problems to proposing solutions, with an emphasis on multi-stakeholder engagement and the need for more diverse representation in technical decision-making processes. The overall tone shifted from seeing encryption as a binary choice between privacy and security to exploring ways to achieve both simultaneously.

Follow-up Questions

How can we develop global standards with local sensitivities that respect diverse needs and capabilities?

speaker

Boris Radanovic

explanation

This is important to ensure that encryption and privacy standards can be applied effectively across different countries and contexts while respecting local needs.

How can we address the challenge of protecting against online abuse while maintaining encryption for secure communication?

speaker

Alromi Afnan

explanation

This is crucial for balancing the need for privacy and security with the protection of vulnerable groups, especially children.

How can we apply lessons from COVID-19 pandemic response to balance data privacy, security, and public health/safety in other contexts?

speaker

Catherine Bielek (audience member)

explanation

Learning from past experiences in managing sensitive data during a crisis could inform approaches to balancing privacy and security in other areas.

How can we objectively rank rights and risks on a global level to properly weigh them against each other?

speaker

Cheryl (online participant)

explanation

This is important for developing a framework to address conflicts between different rights and risks in encryption policies.

How can we better educate citizens about the impacts of encryption on privacy and public safety?

speaker

David Wright (moderator)

explanation

Improving public understanding of encryption is crucial for informed debate and policy-making on these issues.

How can we ensure multi-stakeholder engagement in technical standards bodies like the Internet Engineering Task Force (IETF)?

speaker

Andrew Campling

explanation

This is important to ensure that societal implications are considered when developing internet standards that affect encryption and privacy.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #53 Promoting Children’s Rights and Inclusion in the Digital Age

WS #53 Promoting Children’s Rights and Inclusion in the Digital Age

Session at a Glance

Summary

This panel discussion focused on promoting children’s rights and inclusion in the digital age, addressing the challenges and opportunities of ensuring child online protection. Panelists from various countries shared insights on their national strategies and initiatives to safeguard children in the digital space.

Key topics included cyberbullying prevention, online harassment, and digital literacy programs. Participants highlighted the importance of multi-stakeholder approaches involving governments, tech companies, civil society, and children themselves. Several countries reported efforts to implement digital literacy curricula in schools and provide training for law enforcement agencies on cyber-related crimes against children.

The discussion emphasized the need for culturally appropriate and locally relevant interventions, including resources in local languages and consideration for children with disabilities. Panelists stressed the importance of empowering children with digital skills while also protecting them from online threats.

Emerging technologies like artificial intelligence were discussed as both potential tools for enhancing child online safety and as sources of new challenges. The panel emphasized the need for safety-by-design approaches in developing new technologies and digital platforms.

Recommendations for the Internet Governance Forum (IGF) included prioritizing global collaborations to develop localized digital safety resources, encouraging more youth participation in policy discussions, and implementing a global accountability mechanism for countries’ efforts in child online protection. The panel concluded by calling for greater inclusion of children’s voices in future IGF discussions and policy-making processes related to their digital rights and safety.

Keypoints

Major discussion points:

– Strategies and initiatives in different countries to protect children online and promote digital literacy

– Challenges in implementing child online protection, including infrastructure limitations and lack of awareness

– The need for multi-stakeholder collaboration and localized approaches

– Importance of addressing mental health impacts of social media on children

– Recommendations for IGF to promote child safety and inclusion

The overall purpose of the discussion was to examine current efforts to protect children’s rights and safety in the digital age, share best practices from different countries, and provide recommendations to the Internet Governance Forum (IGF) on promoting child online safety and inclusion.

The tone of the discussion was collaborative and solution-oriented. Participants shared insights from their countries and organizations in a constructive manner. There was a sense of urgency about the importance of the issue, but also optimism about the potential for positive change through coordinated efforts. The tone became more focused and action-oriented towards the end as participants provided specific recommendations.

Speakers

– Radhika Gupta: Podar International School

– Gabriel Karsan: Coordinator of the African Parliamentary Network on Internet Governance, founding director of Emerging Youth Initiative

– Speaker 1 (Ahitha): Coordinator of India Youth IGF

– Aishath Naura Naseem: Representative of Women in Tech Maldives, part of Maldives IGF

– Speaker 3 (Janatu): Department of Public Administration, Kumile University

– Samaila Atsen Bako: Director of communication at Cyber Security Experts Association of Nigeria

Additional speakers:

– Madhav Pradhan

– Mary Uduma

– Sasha Nandlal

Full session report

Child Online Protection and Digital Inclusion: A Global Perspective

This panel discussion, moderated by Radhika Gupta, brought together experts from various countries to address the critical issue of promoting children’s rights and inclusion in the digital age. The conversation focused on the challenges and opportunities in ensuring child online protection, with participants sharing insights on national strategies and initiatives to safeguard children in the digital space.

Key Themes and Initiatives

1. Digital Literacy and Education

A primary focus of the discussion was the importance of digital literacy initiatives. Ahitha highlighted India's National Digital Literacy Mission and Information Security Education and Awareness Program. Naura emphasised community-based training for children, parents, and teachers in the Maldives. Samaila Atsen Bako shared Nigeria's approach of creating cyber clubs and ambassadors in schools, as well as efforts to provide power to universities using solar panels. Gabriel Karsan discussed Tanzania's last mile connectivity program and school connectivity initiatives, stressing the implementation of competency-based digital literacy skills in education across Africa.

The panel agreed on the crucial role of digital literacy in empowering children to navigate online spaces safely. However, there were differences in the specific approaches proposed, reflecting the need for tailored strategies in different contexts.

2. Legal and Policy Frameworks

The discussion highlighted various legal and policy initiatives across countries:

– India’s POCSO Act 2012 for protecting children from sexual offenses

– Nigeria’s Cybercrime Act

– Tanzania’s strategy 2050 and its focus on youth empowerment in digital structures

– The importance of adopting the UN Convention on the Rights of the Child General Comment No. 25

Panelists stressed the need for comprehensive legal frameworks that address the rapidly evolving digital landscape and protect children’s rights online.

3. Multi-stakeholder Collaboration

The discussion emphasised the necessity of collaboration between various stakeholders. Examples included:

– India’s partnership with Women in Tech Org Northeast chapter

– Nigeria’s collaboration with UNICEF for the Digital Transformation Project

– Maldives’ work with UNICEF and UNDP for awareness programs

Gabriel Karsan highlighted the importance of public-private partnerships and engaging telecom companies and academia in community-driven approaches. The audience contributed to this point, stressing the need for collaboration between corporate and public sectors in education. There was broad agreement on the value of multi-stakeholder approaches involving governments, tech companies, civil society organizations, and children themselves.

4. Mental Health Concerns

An unexpected point of consensus emerged around the need to address mental health issues related to children's social media use. An audience member raised concerns about the lack of government focus on this issue, while Samaila Atsen Bako emphasised the importance of observing behavioural changes in children and encouraging more physical engagement with peers. This highlighted a potential gap in comprehensive approaches to child online protection.

5. Localisation and Cultural Relevance

The panel stressed the importance of culturally appropriate and locally relevant interventions. This included developing resources in local languages and considering the needs of children with disabilities. Gabriel Karsan and Naura partially agreed on the need for localised approaches, with Karsan focusing on infrastructure development and Naura emphasising the development of localised digital safety resources through global collaborations.

6. Challenges in Implementation

Several challenges were identified in implementing effective child online protection measures:

– The rapid pace of technological change outpacing policy development

– The need for more widespread awareness and education

– The challenge of reaching rural and underserved areas

– The issue of shared devices and the generational digital divide

– The need for capacity building for law enforcement agencies to better handle cyber cases involving children

Recommendations for the Internet Governance Forum (IGF)

The discussion concluded with several recommendations for the IGF:

1. Adopt global standards for child online protection, based on frameworks like the UN Convention on the Rights of the Child General Comment No. 25.

2. Prioritise global collaborations to develop localised digital safety resources.

3. Encourage more youth participation in policy discussions.

4. Implement a global accountability mechanism for countries’ efforts in child online protection.

5. Create working groups for inclusive and safe technology development, considering accessibility design.

6. Include children’s voices directly in future IGF discussions.

7. Implement mandatory rights audits for digital platforms to ensure child safety.

8. Promote the use of correct terminology, such as “child sexual abuse material” instead of “child pornography.”

9. Encourage limits on children’s screen time and enforce age restrictions on digital platforms.

10. Support ongoing research and dialogue as the digital landscape continues to evolve.

Conclusion

The panel discussion highlighted the complex and multifaceted nature of child online protection. While there was broad agreement on the importance of digital literacy, multi-stakeholder collaboration, and the need for both global standards and localised approaches, differences emerged in the specific strategies proposed. The conversation emphasised the urgency of action, with Radhika Gupta powerfully stating, “We cannot afford to fold our arms or throw them in the air in despair. The stakes are too high and the future of our children demands action, innovation and accountability.”

The discussion underscored the need for a paradigm shift towards “safety by design, privacy by design, child rights by design” in digital spaces. It also highlighted the importance of distinguishing between children and youth in digital policies, recognising the unique vulnerabilities and rights of children.

As the digital landscape continues to evolve, ongoing dialogue, research, and collaborative action will be crucial to ensuring that children can safely and meaningfully participate in the digital world while being protected from its potential harms. The role of initiatives like India’s I4C (Indian Cybercrime Coordination Center) in nationwide coordination efforts for cybercrime prevention was noted as an example of proactive measures in this direction.

Session Transcript

Radhika Gupta: Good afternoon, everyone. It is an honor to welcome you to this very important discussion on promoting children's rights and inclusion in the digital age. Today we address one of the most urgent challenges and opportunities of our time: ensuring that children, who are the most vulnerable members of our society, are protected, empowered and included in the ever-evolving digital landscape. The journey towards child online protection has been a progressive and awakening one. The Child Online Protection Initiative by the ITU laid a foundation with its five strategic pillars, which were inspired by the Global Cybersecurity Agenda. This was followed by the WeProtect Global Alliance, which provided a critical framework in the form of the Model National Response, helping member states and countries develop the capabilities needed to tackle online threats against children. A ground-breaking milestone came in 2022 with the release of the UN Convention on the Rights of the Child General Comment No. 25 on children's rights in the digital environment. This document effectively dispelled the notion, often cited in developing countries, that child online safety is an emerging issue; it offered a global standard reframing child online protection as a universal and urgent obligation. However, as we reflect on the last two decades of digital transformation, culminating in next year's WSIS+20, we must acknowledge that child online safety is now more critical than ever. At the beginning of the digital age the issue was not as pronounced as it is now. Today, however, the risks are real, persistent and evolving. We cannot afford to fold our arms or throw them in the air in despair. The stakes are too high and the future of our children demands action, innovation and accountability. We already possess the tools, the frameworks and the collaborative mechanisms to protect children online, but these must be implemented with intentionality and commitment. The way forward requires a paradigm shift, with safety by design, privacy by design and child rights by design as the fundamentals. These efforts cannot succeed in isolation. The complexity of the digital landscape demands a robust multi-stakeholder approach, bringing together governments, tech companies, civil society, the community at large and children themselves as main stakeholders at the discussion table. At the same time, we must resist the tendency to conflate children with youth. Children require unique protection, tailored strategies and recognition of their particular vulnerabilities and rights. So, to help us unpack this conversation this afternoon, we have five panelists for the discussion. I will give them the floor to introduce themselves, a minute each. We have some online, so we are using a hybrid format. We have Naura, Janatu, Gabriel and Samaila. So, Ahitha first.

Speaker 1: Thank you so much. Hi, everyone. Good afternoon. Good to see you all post-lunch. I am Ahitha. I’m the coordinator of India Youth IGF. It’s a platform recognized by the UN IGF that works extensively on empowering young people in the space of Internet governance in the country, in India. And in that regard, there’s been a lot of interaction with young individuals across the country from different cultures and economies. And I hope to present that perspective today in our discussion.

Radhika Gupta: Thank you. Yeah.

Aishath Naura Naseem: Hi, I’m Naura from Maldives. I’m representing Women in Tech Maldives, an NGO, a civil society organization in Maldives. Also, I’m a part of Maldives IGF, which we recently was established. We are working, a community group, yeah, actually, so we are also working to establish, strengthen, formulated, sort of trying to establish a safer digital environment, especially for the kids as well. And the thing is that since we are a geographically disposed island, our component in the awareness section is quite high. So looking forward for this panel discussion.

Radhika Gupta: Right. Thank you. Janatu?

Speaker 3: This is Janatu, from the Department of Public Administration, Kumile University.

Radhika Gupta: Okay, Gabriel.

Gabriel Karsan: Salaams, everybody. My name is Gabriel Karsan. I'm the coordinator of the African Parliamentary Network on Internet Governance, and I'm also active in the Africa Youth IGF and the founding director of the Emerging Youth Initiative. My work intersects policy, society, and the role of young people and children in shaping digital institutions. I'm looking forward to our discussion.

Radhika Gupta: Right. Samaila.

Samaila Atsen Bako: Thank you very much. My name is Samaila Atsen Bako. I am the security evangelist at Code for Africa, which is an African-wide nonprofit. I’m also the director of communication at the Cyber Security Experts Association of Nigeria. So most of my work revolves around user awareness, education, training, policy development, and things like that. I’m looking forward to learning from everyone here in this session. Thank you.

Radhika Gupta: Okay. So thank you, my wonderful panelists. We all come from different backgrounds, and we would like to understand what is actually happening in your country to ensure that children and young people are safeguarded. For instance, Maldives: what strategies do you have in place to check cyberbullying, online harassment in general, online grooming, and everything else? Thank you. They didn't hear? Is it not working? It's working. Okay. So, who is taking it first?

Aishath Naura Naseem: I’m fine. Okay, that’s fine. In the context of Maldives actually, everything is sort of new over there because we are still on the, you know, the baby steps. Like, we are growing our cyber space over there. So, like I said before, as we are dispersed in, you know, small islands and all, reaching out to our communities is always a challenge for us. So, we are trying to formulate or come up with programs that we can actually accommodate our women and children. So, my organization is also mostly a part of this. We are working on conducting different awareness sessions on cyber safety, online safety, how to secure their digital devices, how to make sure that the internet that they are browsing is safer for them, and also to empower women in the island communities as well. And apart from that, for children also, we are actually, as a government, is working on implying, you know, formulating the strategies that are required for the cyber space and the digital space. As an organization of a community-led organization, we are working with, you know, our schools, our education system to make the children understand, like, what is there out in the digital space, actually. How they can be, how can they identify if they are being targeted for cyber bullying, how they can identify if they are being, you know, like perpetrators are waiting for them out there, and how they can identify if they are being targeted for cyber harassment or any such, because reaching out to our communities are challenging as well. always because we literally have to travel to islands in order to aware these kids and parents and teachers and local communities as well. So as an organization, a community-led organization, we work, we have been actually working with these UNICEF, UNDP, such organizations to conducting different awareness programs and such STEM related programs where we are educating our kids to, you know, how they can come up with critical thinking solutions like through coding and simple programmings and stuff. Yeah, that’s it.

Radhika Gupta: Samaila, am I putting you on the spot?

Samaila Atsen Bako: I was just going to say I wasn't expecting to go next, but it's fine. So, I didn't mention this earlier: I am from Nigeria. In our country, I would say it's a bit similar to what the previous speaker just said; we still have some foundational layers being put in place. For instance, about six months ago, if I recall correctly, we had a session with the telecoms industry regulator, the Nigerian Communications Commission, and they are putting together operating procedures for child protection and wanted external stakeholders, so civil society, industry experts and so on, to contribute to the drafting of those processes. We were part of that session. Even before then, about two or three years ago if I'm not mistaken, we had a session organized by Meta and MTN together with, I think, NCMEC or ICMEC, a global organization that works on child and women protection. So we've had some of these engagements. From my organization's perspective, and by organization I mean the Cyber Security Experts Association of Nigeria, we tend to be involved in these conversations to provide insights from the cybersecurity perspective as professionals. But as internet users and as parents, elder siblings or guardians, we also have kids that we know are using the internet, and we tend to know some of the things they experience. So even outside the cybersecurity perspective, we also have that end-user perspective. We try to share from these areas, and we go a step further: we also engage students. We tend to go to schools, and we've been doing this since maybe 2015 or so, going to high schools and also universities to engage with the students, but also with their teachers. What we've found is that kids in private schools tend to get easier access to gadgets and the internet at a younger age, because they are more likely to come from well-to-do homes. So we go to those private schools, we engage with them, and we try to explain to the students: these are some of the things you need to be aware of; the internet is nice and fun, but there are issues. So we've done that, we've tried to engage the government, and I think those are the little contributions we've made from our perspective and will continue to make in the new year as well.

Radhika Gupta: Okay, I don’t see that to be little, that’s some progress, so thank you. Gabrielle? If we don’t have Gabrielle ready, can we have Janatu? Hello everyone, this is Janatu for Dose. When it comes to our case. in this digital age, the whole stratosphere of technology and the Internet. Hello? Hello. Is that Gabriel speaking? Yes, I was speaking. Gabriel, can you please pause? Janatu has taken the floor, so wait for her to finish, then you take it up. Thank you. Okay.

Speaker 3: Hello, everyone. My topic is the future of learning: digitalization in primary education for sustainable development in Bangladesh. Here I would like to set out the challenges regarding digitalization in primary education. Globally, the digital transformation of education is changing learning experiences, pedagogical approaches and institutional administration. Integrating the newest digital resources into educational institutions aims to establish a networked environment that enables students to learn in a digital format. Digitalization of information also facilitates students' ability to responsibly protect the environment and the economy and to base their judgments on factual knowledge. This relationship supports a comprehensive and transformative high-quality education that affects learning outcomes and content, and ICT contributes to education and human capital development. Further challenges arose due to COVID-19, which broke out in Bangladesh on March 11, 2020. Since then, the widespread closure of schools in response to COVID-19 has hindered traditional modes of education, and the focus of institutional activities has switched to the online approach. In recent years there has been widespread recognition of the need for digital competency at different educational levels. With its creative solutions to enduring problems in education, such as access to high-quality education, unequal student-teacher ratios and geographic constraints, this digital revolution is significant for poor nations like Bangladesh. As part of the Digital Bangladesh goal, which was introduced in 2009 in the election manifesto of our previous government, the Awami League government, Bangladesh has started towards full digital integration in education. My paper explores how digital technologies can improve learning results, democratize access to education resources and transform education delivery. Now I would like to focus on the digitalization of education. Modern society relies heavily on technology. From the Bangladesh perspective, the digital revolution in education began around 1990, when digitally transformative approaches started to be taken into account in the education system. Digital education integrates electronic content, e-learning, VR and social media to modernize teaching and foster digital maturity. Innovations like…

Radhika Gupta: Janatu? Janatu? Okay, yes, I think you will have to pause there for now. We're focusing on what strategies are in place to ensure that children and young people are not taken advantage of by predators. So what strategies do you have in Bangladesh to address cyberbullying, exploitation and abuse, and all other forms of abuse? Do you have an intervention on that to share with us? Yes? Can you go into that for us? Thank you. Okay.

Speaker 3: You know, in each and every educational institution there must be a sexual harassment prevention cell. This was established in 2009 through a High Court directive requiring every institution to have one, and each organization forms a sexual harassment prevention cell with five to nine members, the majority of whom are female. In most cases, cyberbullying is directed at girls and women, and the harassment cell works to resolve these issues, so this is in part a gender issue. In another sense, we have lots of challenges, such as infrastructure. It is a pressing issue for Bangladesh because infrastructural capacity is not strong here: even if the government arranges a computer and internet package, there is a lack of electricity in the rural areas of Bangladesh. That is a great challenge for Bangladesh. Okay.

Radhika Gupta: Thank you so much. I’m not to Gabriel Thank you so much for being with me, yes, just hang in there we’re coming back to you with two other submissions

Gabriel Karsan: Yes, thank you very much. Children in Africa are increasingly exposed to cyberbullying, online exploitation and harmful content, so I think it's very important for us to be aware of this. I might say this is a global issue. Even though we are taking baby steps in creating our infrastructure and our policies, we are not immune to the different vulnerabilities of cyberspace, especially those affecting children. We now have many Africans under 25 who use the internet deeply in their day-to-day activities. It's shaping the culture, and this is seen with post-modernism as well as with social media. In Tanzania we are still developing a framework, but what has been done is that we have a strong regulatory authority with deep technical standards that align with global cross-border and online protection mechanisms. Examples include infrastructure with IPsec, open communities, and a DNS protection system that aligns with the local goal of protecting content, and we also have a content policy governing who has access to the internet and in what context, because this is still a developing country whose infrastructure is still being built. In our child online protection framework, the critical step that has been taken is building a literacy-based community, where parents, educators and policymakers can come together and exchange real case scenarios, because we have seen that with digital literacy and inclusion, particularly for young women, the cascading effect has been great. Even in rural areas where the internet is only now arriving, access comes with new information that helps bridge the divide. Tanzania also has a school connectivity program, and that program is a package that comes with digital literacy, because they have seen that protecting the child is important, but this is also done by giving children the skill set and the language so that when they partake in, communicate with and interact with technological systems, they have an awareness system. We are also building feedback loops, which should be open and based on different demographics, and I think this is something that is still under discussion for policy. I'll end there now as we follow our discussion.

Radhika Gupta: Thank you, Gabriel. We'll come back to you later. Ahitha?

Speaker 1: Yes. So hi, everyone. When we talk about child rights online, I'm going to bring in a little bit of the harms perspective. But before that, Gabriel pointed out a lot around meaningful inclusion and what has already been done with respect to access and the challenges we face. The perspective that I'm going to share comes from interaction with young people through the YIGF network in India, and I'm sure the issues also overlap with other countries. A lot of young people talk about how there is, of course, exposure to inappropriate content online, and they do experience cyberbullying extensively. But there is also unauthorized collection of their personal data, so there are a lot of privacy violations. What is the consent mechanism? Are there enough mechanisms to ensure that it is clearly explained to them what data is collected, why it is being used, and what kind of control they have over it? In addition, we do see situations of predators manipulating mindsets, and we also come across targeted advertising and a lot of misinformation. The other concern we see is that for young people to be meaningfully present in the online space, you need the right amount of digital literacy, and I think that is something a lot of the panelists before me have highlighted. In India, from a cyber-harms perspective, there are two key things I want to highlight that the country is currently doing. The first is the I4C, our Indian Cybercrime Coordination Centre under the Ministry of Home Affairs, which is aimed at tackling cybercrime in a more coordinated way. They have nationwide coordination and have also come up with a cybercrime reporting portal, an accessible platform that you can look up at cybercrime.gov.in, where individuals can report cybercrimes, including those targeting young children, around child pornography, online harassment, exploitation and cyberbullying. There are also dedicated resources that provide information on how to handle crimes around child abuse, grooming and trafficking, among others. From a capacity-building perspective, they also provide training to law enforcement personnel, prosecutors and judicial officers on how to handle cybercrime investigations. Of course, this requires collaboration with global organizations, so they ensure there is an exchange of best practices on cross-border cybercrime developments. The last thing I want to highlight is the POCSO Act 2012 in India, which protects children under 18 from sexual offenses, including abuse, exploitation and harassment. I want to highlight the aspects relating to the digital threats children face that fall under this act: it criminalizes the use of children for creating, distributing and accessing child sexual abuse material (CSAM) and includes various provisions on online grooming, harassment and cyber exploitation. Thank you.

Radhika Gupta: All right, thank you for your interventions in that regard. I don't know if I should probe further, but for the sake of time we're going to move on. I just want to correct an impression there in terms of terminology: we no longer say child pornography. An abuse is an abuse; a child does not engage in pornography. So, as much as possible, let's change that. This is a platform where we are educating people, so I just thought I'd bring that to the fore. So we go to our next round of submissions, but I'm going to merge the two questions. For instance, tell us what India is doing specifically to promote digital literacy among young people, and then also tell us whether there are interventions to leverage emerging technologies. What are some of those things? So you're giving me two perspectives: digital literacy initiatives at country level to empower young people, because when we talk about child online safety and we only talk about protection, protection, protection, we leave out the provision and participation aspects. As much as possible, we need to take care of the three Ps involved in child protection, just to make sure we're doing the right thing. So, much as we are treating the harms and the threats to children, we also need to take care of empowerment and of their skill sets, in order for them to engage responsibly. So I'm coming to you. I don't know who wants to take it first; I don't want to be biased. Should I put you on the spot? Are you okay? So tell us, what digital literacy initiatives do you have in your country, and what are some of the emerging technologies you are leveraging in the Maldives? Thank you.

Aishath Naura Naseem: In terms of promoting the digital literacy, I think like every other country, we are also still struggling to put the right thing in the right place as a country, because the technology is expanding way too fast,

Speaker 1: then we can reach it out. But as I mentioned before, my organization, the Women in Tech Maldives over there, we have… a huge role in establishing digital literacy among our people, our country, throughout. So we have, for kids, it’s like that you just can’t go and directly reach out to the kids. So we have to pass, you know, level by level, like just go and first educate their parents. What are the digital norms? How can you maintain safety? How can you ensure that the network that you are in connected is safe? How can you ensure that your devices are connected to safe networks? How you can ensure the security of the devices that you are eventually giving out to your child to use? So we have programs that we run across the country that we educate our community, we educate our parents, we educate our students, and I mean the kids, along with the teachers that they are daily life, maybe communicating with. So it’s like, it’s not easy because the whole thing becomes quite new for the community as well, because when we go and say our digital spaces, you have this and this and that over in the digital space, it is harmful. So to identify or to recognize the harmfulness in that particular area is quite challenging for the community people because obviously not enough literate in the digital space. So to literate them in the digital space, how they can identify that they are being targeted, how they can actually monitor the devices that the kids are using, how they can ensure the safety.

Aishath Naura Naseem: We run several programs in our country to ensure that safety is maintained within the community itself.

Radhika Gupta: Okay, thank you. Gabriel, we're coming to you, but I'm also interested in hearing about initiatives that are in the local language, because it's not always English that the children understand. Do we have some of them in the local language? Do we have them in an accessible format, say, taking into consideration children with disabilities? If those initiatives are happening in your country, we would like you to shed light on that. Then we can pick up lessons from it going forward and work with that. Gabriel, over to you. Thank you. Okay, if I don't have Gabriel, could we have Samaila, Nigeria?

Samaila Atsen Bako: Sure. Thanks for the question. I think the question is in two parts. Maybe I start with some of the things that I know the government has started doing around digital literacy and even emerging tech. A few years back, there was some effort to do some level of digitization when it comes to the schools we have. Some of these interventions are not necessarily targeted towards just kids; they may also cover older children. I know there's been effort, for instance, to provide power to universities using solar panels so that they can then leverage the technologies that have been provided in the school to improve digital literacy. The current Minister of Communications and Digital Economy has been pushing a lot of effort around the adoption of and familiarity with AI. I know there's a scheme, I think it's the National AI Research Scheme, that was launched a few months ago. There have been other efforts by him to pull together resources from the private sector, other places and even foreign partners to support those efforts. I guess it's a bit soon to say how effective they have been or what the results of such interventions are, but I know that they exist. There are others by the National Information Technology Development Agency and other government processes trying to make things work somehow, in terms of providing gadgets, because they are not so affordable, and providing internet connectivity to areas outside the main urban centers. I think another laudable effort by the government in Nigeria was, about three years ago or so, when the sub-national governments, that is the state governments, came to an agreement to reduce the cost of the right-of-way, the money you pay to lay fiber optic cables. The goal was to slash the price from as high as 6,000 Naira per meter to a maximum of 145 Naira per meter, and that obviously would lead to better internet penetration, because the telecoms companies and the internet service providers can lay their cables to more areas and spread connectivity to people living outside the urban centers. So I think those are some of the efforts by the government. Generally speaking, from the civil society perspective, where I come from, I know quite a number of organizations that are doing different kinds of digital safety courses or trainings or webinars or sharing flyers, some level of awareness and education, and the goal is just to make sure people understand more about technology but also learn about this side of things. And I think even if you look at law enforcement now, there's a bit more knowledge about electronic crime or online fraud and cybercrime and all that, but also from the digital harms perspective, where we have cyberbullying and online harassment. So there's a bit more knowledge, although we're hoping for better and more widespread awareness, and even prosecution and things like that, to come on board. I think in Nigeria right now, the main laws that may be used when these things come up have to do with the cybercrime acts that we have, as well as maybe the Child Rights Act and one or two others. But we're hoping that this issue of digital literacy, from the angle of the laws, of using technology, of being safe online, and of addressing these different stakeholders, improves as time goes on.
All right, thank you.

Radhika Gupta: I would have put you on the spot if you were from government, but it’s okay. Gabriel, are you ready? Yes, I’m ready. All right.

Gabriel Karsan: Yes, thank you. On our side, I think Tanzania has been going through a big reform in terms of building its 2050 strategy, and it has highlighted the importance of giving core agency to youth empowerment in its digital empowerment programs. Right now we have a last mile connectivity program through which we have tried to cover 70% of the country with fiber optic cabling, and it passes through schools, because in our culture, which is a very oral culture, passing on knowledge is still a communal matter. School connectivity has come to enhance participation of the community, because it has boosted centers where we are not only equipping children with digital skills, but the teachers and community members also come to interact with technologies, because it is a big strategy that has been pushed by the government. Also in terms of public-private partnerships, we have been working with different organizations, for example China Mobile, which is building a big backbone for connectivity in the country, but also with a simple start-up which is building an open-source technological system called KaiOS. This is in Swahili, and it has all the local initiatives, including access to literacy materials online, and it can be used on a simple feature phone. This project has been able to reach almost a million young people. With the use of the local language, the community element and a funding mechanism that has some subsidies from the government, it has boosted inclusion. But in terms of the human rights angle of equipping children with rights such as privacy, freedom of expression and access to information, we still have a big technical gap, because we are still building the foundation of the framework. And if we are to be realistic in this digital realm, emerging technologies, for example artificial intelligence with its high computational needs, will require us to move fast, and a big investment has to be made in the core infrastructure, which is the young children. Tanzania has also pushed for competency-based digital literacy skills to be part and parcel of every educational program starting from next year. So I think these are things that in five years we might see take off. In terms of good policy, I also come from the age when there was a subsidy on connectivity, where I got to enjoy the benefits of cheap connection and affordable internet. This has given me the privilege of addressing you all today. So I think these policy intersections with a very local lens tend to bear fruit.

Radhika Gupta: All right. Thank you. I like the way you situated that in this conversation. Do we have you? Can you take the floor? Okay.

Speaker 1: So I think a lot of very good points have already been covered, so I was just trying to take note of what exactly I could add in terms of the initiatives that are happening. Okay, so some of the things which have been talked about are happening in India. Yes, we need you to reiterate it, because we are looking at countries and what they are doing. All right. So, I think Gabriel also spoke about digital literacy programs. When we look at India, in the last decade, for example, internet adoption has increased tremendously. And right now we have the Unified Payments Interface, UPI is what we call it, and a lot of the transactions in the country are online. Here I don't mean just supermarkets, for example, or transactions in malls; I'm talking about even the street hawkers, for that matter. Everybody does transactions through the Unified Payments Interface, so everything is digital. And that brings us to the main concern: does everybody have the right set of digital skills to ensure that they are safe, they are secure, and they are meaningfully accessing the internet in the country? So we have the National Digital Literacy Mission, NDLM is what we call it. This initiative was launched to ensure that citizens, especially those in rural and underserved areas, have the necessary skills to access digital technology and participate overall in the digital economy. But since, in the first question that was addressed to me, I spoke about threats, I also want to highlight the cybersecurity skills perspective. There is the Information Security Education and Awareness Program, which is actually a free-of-cost set of courses for you to gain the right skills to navigate the internet safely. And if you are interested in developing your technical skills a little more, you might as well do that through the program. One thing I want to highlight is not just having these skills in English. At Youth IGF India, very recently in fact, we did a partnership with the Women in Tech Org, the Northeast chapter. Our idea was to teach relevant cybersecurity skills to young women in the Northeast of India, in a language that they are comfortable with, and with content that addresses the relevant cybersecurity concerns of the region. What I've realized in the process of this partnership, and among the other initiatives we've been running as a civil society body, is that there are a lot of civil society bodies out there, not just focusing on mentoring the young, but also mentoring the educators, the researchers, as well as, how do I say, the judiciary and the police in the country to handle these issues. Thanks.

Radhika Gupta: All right. Thank you. On the note of law enforcement being empowered to do their work, I like open mic situations, because I know there is so much expertise in the room. So, we will pause the panel session now. Panelists, be thinking about one key recommendation to the IGF as far as promoting child online safety and inclusion of rights is concerned. Be thinking about that while we give the mic to anyone in the room who would like to make an intervention in this regard. Hello?

Audience: Hi. I also hail from India. Thank you for putting those points together. I just wanted to add to what she mentioned on the efforts by the government: the I4C department has actually adopted a mission to train 5,000 cyber commandos, training law enforcement agencies to look into cyber cases in India, which also covers child safety. So that's an addition in terms of what the government is doing. But relative to the population, we feel, as a civil society organization, that's far too few. Coming from an organization that runs a lot of digital literacy programs in India, we train children about cyber safety from a very young age, basically from below six up to class 12, on how to navigate digital spaces more safely and securely. The interaction with parents is also really important, because the generational digital divide is one big issue in India, as is the issue of shared devices. Putting all of this together, of course, the ambitions are very high, and a lot of civil society organizations, along with the government, are making a lot of effort. But then again, in proportion to the population, I think it's still far too little. But yes, of course, I'm sure with more efforts and missions we'll be able to do that.

Radhika Gupta: Thank you. I wish we had the whole day. We would have unpacked everything.

Speaker 3: I would like to add something, if you allow me.

Radhika Gupta: Who is it?

Speaker 3: This is Dr. Jannath.

Radhika Gupta: Please hold on for us. We have an open mic session now. So just hold on. Hello, everybody.

Audience: Namaste. I'm from Nepal. I'm Madhav Pradhan. We are working with children, and we incorporate into the curriculum how children can safely use the internet, because in Nepal we have access to the children. If they are cyberbullied, we work very closely with the cyber bureau, which handles these cases. If children have a problem, they can complain to the child helpline. Through the child helpline, mental health issues are linked with the hospital, and online cyberbullying is linked with the cyber bureau. That's where we are working. And I want to know how other countries are incorporating this into the curriculum, from class 1 to class 12, so that children can safely use the internet, and how children in other countries can complain, the way we are now enabling in Nepal.

Radhika Gupta: Thank you. In the country I come from, online safety has become part of the lessons for ICT. So from the start, they have it as content that they engage with.

Audience: Hello, I am from Bangladesh. In Bangladesh, our government has created administrative committees for teenagers at the local and district level as well as at the central level. In the central committee, there are 11 ministries, as well as the police and cybersecurity and crime-related agencies. At the district level, there are the district collector, the police superintendent, medical officers, education officers, as well as all the society-related co-workers. They are actually working to prevent cybercrime against teenagers. So this could be an idea for everyone: working locally to prevent cybercrime and cyberbullying effectively. I guess that could be an idea.

Radhika Gupta: Thank you for your intervention. Janatu, could you speak now? Hello? While we wait for her.

Audience: Thank you so much. Haitha has added almost everything, but I just very quickly wanted to come in on the point you mentioned about local languages. One initiative which is very commendable and that the Indian government is working on is Bhashini. Bhashini is an AI-integrated tool the government is developing to translate various digital platforms and digital services into vernacular languages, so that individuals across India can access them in a more meaningful way. In addition to that, when we are talking about digital literacy, it's also important to look at it from the emerging technology perspective. India is working on something called the India AI Mission, and one of its key pillars is AI future skills, under which they are developing courses and modules within the curriculums of schools and colleges, including in tier two and tier three cities. And finally, one more point I wanted to add: I think there is a long way to go in terms of sensitizing the government on enforcement when it comes to all of these aspects of child abuse and so on, but the government is also working very actively on that. There's a department within MeitY, the Ministry of Electronics and Information Technology of India, called NEGD, which looks specifically into sensitizing the public sector so it can understand what is really happening within this ecosystem and act better. Yeah. Maybe I could just add an international perspective. I was a delegate to the 93rd child rights convention in Geneva, and the primary issue discussed was, of course, all of these serious issues that your governments have been addressing, but a major overlap is the mental health issues caused by social media for children. It evolves from false body images to several psychological problems. So what do these governments actually do, beyond just teaching children how to use the internet and how to be safe on the internet, from a mental health perspective? That is an open question.

Radhika Gupta: That's a question. He's talking about what governments are doing. I don't think I'll limit it to just the panelists; anybody who has a response could take up the mic. What are governments doing in order to safeguard young people from mental health concerns and body image issues?

Audience: Thank you. Can you hear me? All right. My name is Mary Uduma. I'm from Nigeria. The panelists from Nigeria have given some interventions on what has been happening in my country. Apart from the government, there are other non-governmental organizations that are also involved in capacity building when it comes to child online protection, as well as advocacy groups for children. My own organization, the Ndukwe Kalo Foundation, is interested in that. We do capacity building for teachers, engaging them so that they can also engage the students. We are also establishing what we call ambassadors within the school setting, so that children talk to their peers. They may not speak to the adults, but they can actually confide in their peers. So we're trying to set that up so that it can happen. As for mental health, I'm not sure there's a government organization that is looking at mental health, but it's something we can take back home, to make sure we look at including it in our engagement with the caregivers, the teachers, the parents, and even the law enforcement agents and the health care organizations. We'll take that home and we'll try to implement it.

Radhika Gupta: Thank you. And let me just reiterate that if country-level strategies are developed using General Comment No. 25, all of these things will be taken care of. But if we develop country strategies without the global standard, it will be very difficult for us to take care of the issues that come up, and it will not be future-proof enough: we will design a solution and after one year have to review it in order to address other things. So as much as possible, take the message home. The general comment is there for countries to use to design solutions or strategies that will help protect children effectively in the digital space. Is Janatu available now? Oh, there's a question online. Okay. Who is our moderator online then? Are you doing that? I can't see the question.

Audience: The lady's name is Sasha. Okay, Sasha, could you unmute yourself and ask the question? Hello, can you hear me? Yes, we can hear you. Okay, I tried to put on the video as well; however, it's not working. So my name is Sasha Nandlal. I'm from Trinidad and Tobago. I'm currently in Canada pursuing my PhD, and my comment and question are based around the second point, digital literacy and inclusion, from the perspective of empowering all students within the process of education, retaining their backgrounds, and being mindful of their socio-economic standing. With that in mind, within Trinidad and Tobago we have the Digital Transformation Project, through UNICEF as well. It's been part of our practice to look at the infrastructure, as it would be with most organizations, and consider the policy, the hardware, and building teacher awareness. And I think that's one of the major parts we haven't touched on as much: teachers' attitudes play a role in terms of what morals, values, ethics, and practices are going to be reciprocated throughout the veins of society, and the way in which we promote change and change management processes for the generation moving forward. When we consider the formal and hidden curriculum: on the formal curriculum side, we consider systematically the concepts and skills that are going to be developed from the early childhood years to adulthood. Within the hidden curriculum, we have to consider the affective domains of the students and the teachers, how that bridges into using technology and developing digital literacy for education and the future work environment, how important that is, and the interjection of AI within that equation. So it's a very delicate space to be in. And when we consider accessibility design, in most cases the drive of technology starts from the private sector, and most times the private sector doesn't consider the user at the center of the element it is creating. They think about the student after the software has been developed, and then we have to go back and weave those pieces through that technology. So having a strong relationship between the corporate and public sectors when it comes to education would be paramount to its success. And if we have more grants directed into that space, where we could bridge these gaps, bridge that digital divide, and have technology flowing from the private sector to the government sector, we'll find that even students who don't have internet at home could still have access to certain ways of learning, whether asynchronous, working from a device at home and then plugging into these sessions when they come to school. So accessibility design might be central to that particular field. I just wondered, from the speakers' perspective and from their own personal experiences, how has accessibility played into the role of education development when it comes to digital literacy and inclusion? Thank you.

Radhika Gupta: Right, thank you. You want to take that?

Speaker 1: I'm sure somebody else would like to take this up in more depth, but I'll just give a quick example of how an emerging technology like the Internet of Things is something we've used in India to connect with schools. Especially during the COVID period, a lot of the high school science experiments, which could be around titrations or focal length, et cetera, were part of a project I worked on, where we partnered with an NGO and connected with a lot of rural schools in India. Students were able to perform science experiments that were physically set up in a city while sitting in their schools in remote parts of India. They had a dashboard, they had an interface, and in real time they were able to conduct these experiments remotely. There was a queuing mechanism, and of course we wanted to ensure there was seamless internet; that is what enabled it. But this is one way to ensure that technology is used in the education sector so that education is continuous and learning is continuous. I just wanted to add that; I'm sure somebody else would like to take the question.

Radhika Gupta: Okay, even if nobody adds anything, her question actually presents the real picture of the complexities involved in doing child online protection. There's no way one stakeholder can have all the solutions or a system that is foolproof. It's important that we come together and bring collective knowledge. And like she said, sometimes businesses choose profit over protection and well-being. However, if we engage them at an early stage, going back to the basics and doing the right thing, we will not have to be retrofitting; instead, industry will take the standard tools into consideration in their design, making sure that whatever they put forward has the interest of the child at heart and that they are doing the right thing. Which is why, again, country-level strategies are very, very important. If we have them in place, we can bring industry players to book and ensure they do what they have to do. So we need to take our country-level designs very seriously and make sure we are designing them with the right frameworks in mind. So I'll give the panelists the last opportunity. Each of you will give us a recommendation to the IGF going forward as to how we can promote child safety and inclusion in the IGF. So, Janatu, are you with us? Okay. Samaila? Gabriel? Thank you.

Samaila Atsen Bako: That’s an interesting question because, can you hear me?

Radhika Gupta: Yes, Gabriel. Hello. Oh, that’s Samaila.

Samaila Atsen Bako: Okay. I think there's a lag on the call, but no problem, I'll still go ahead. So I was just saying that's an interesting question, because I was thinking more from the perspective of the human and societal diversity factors that come into play. So let me start with that and then I'll give my recommendation. What I would say is that there are things that are human nature, right? There's that need to interact with people, so you can't stop it, and kids are obviously curious as well. The other question is what we can do, knowing that they're going to go online at some point. I think we discussed the helping part, but what can we do, maybe as parents and guardians, to be more aware? I think the first thing is to observe behavioural changes. When an outgoing kid, for instance, becomes a bit more reserved or doesn't want to go to a particular place anymore, that should make us ask questions and become more curious. I think we also need to encourage more physical engagement with their peers, as opposed to throwing gadgets at them because you just want some peace of mind, and things like that. And then obviously that means we should also limit their screen time, as well as take cognizance of the age limits that are set on some of these platforms. We shouldn't be allowing kids that are not up to the right age to join some of these platforms, because they are just exposing themselves to dangers they don't even understand at the time. Now that being said, in terms of recommendations, again I can't speak for government, but I'll speak from the non-profit perspective. One of my senior colleagues earlier in the hall, Mrs Mary Uduma, mentioned she's looking into ambassadors in schools. Our association, the Cyber Security Association of Nigeria, has actually recently launched cyber clubs; we started with Caleb University and we're in talks with about two other universities as well. The idea is to help with awareness raising within those environments, because we know some of these digital harms and security concerns tend to happen in schools. And by virtue of doing that, we're also hoping to build more interest in cybersecurity, so that we have more people hoping to study cybersecurity as opposed to getting involved in cyber harms and cybercrime. So those are some of the things we are doing. I think you already gave a key recommendation for the government, or for the IGF to push governments to do, which is the adoption of global standards.

Radhika Gupta: Thank you very much. All right, can I unmute Gabriel? He’s ready to speak.

Gabriel Karsan: Thank you very much. For me, I would urge everybody in the multi-stakeholder approach to understand the times we are living in. A simple African proverb says that it takes a village to raise a child; but in the internet age, it takes the globe, because it's everybody, almost 5.3 billion people, actively engaging and shaping a culture. So everybody in the multi-stakeholder approach, whether policymaker, educator, technologist or academia, should champion inclusive, culturally grounded, and child-centered digital practices, because this is the only way we can create a safer digital future. We have to understand that childhood is a phase. In the principles of any community, when a child is born, there's a particular shift in how you interact to ensure that the child is protected and can thrive, and this should be extended into our digital scope: these are the elements that should be placed at the center of how we decide policy, based on inclusivity principles. In terms of best practices, we also need a lot of public-private partnerships, because this is the only way we can reach localized communities, with the telcos, with academia, and with the community participants themselves, in a very community-driven approach. Local organizations often are the ones who raise our children, and they are the ones who act as safe guardians of how they get to evolve, so we should empower them as well. And regional cooperation is still very important, because the African Union has a digital transformation strategy and also the Malabo Convention. I think when we use regional integration as well, under strict standards, we can push for our countries to be more compliant in enforcing that a child is the property of all of Africa, and it is our responsibility and moral obligation to push for these digital spaces inclusively. All right, thank you Gabriel. Naza?

Aishath Naura Naseem: Before concluding, I would like to thank Mr Das for giving us this last-minute opportunity to be part of this panel discussion; there are such insightful things that I'm taking from here. To give the key recommendation, I noted it down so that I don't forget. I want to frame it for the IGF in the context of a developing nation, actually. To ensure a safer digital environment for our children, I would like the IGF to prioritize global collaborations to develop localized digital safety resources, because one of the main components we were discussing today is local content being used on social platforms, and that's one thing I think all of our countries are facing. Along with that, I would also like to recommend bringing out policies or strategies that would actually protect our children, and maybe strengthening digital literacy among the parents. That's it, thank you.

Radhika Gupta: Thank you.

Speaker 1: All right. So first of all, it's been a very incredible panel, because I've been constantly noting down points from different countries; sorry if you've seen me typing throughout. There was a point mentioned about technology developments and how they relate to child safety, and I think that is a very critical area of concern that we should all be looking into. From a standardization perspective, since I come from the technical community, I think we need more working groups to encourage inclusive and safe tech focused on child safety. And one more recommendation to the IGF would be that the current generation of young people are born into the digital era, so I think there should be more participation, or some kind of prior consultation with school students, so that an agenda item is set for the IGF, making sure that their concerns are part of the agenda of the overall multi-stakeholder discussions that take place.

Radhika Gupta: Thank you. All right, thank you all. I don't know if I should have the last words, because we have just two minutes to get out of here, so let me add my voice to what has been said by our panelists. Country-level strategies are very important, and they should be designed based on the right principles stated in General Comment No. 25. There's also the need for mandatory rights audits for digital platforms, just to ensure that they've taken rights, privacy and safety by design into consideration; capacity building for law enforcement agencies, to make sure they are prosecuting what needs to be prosecuted; and digital literacy and awareness programs for children and parents, to position them well enough. And there should be a global accountability mechanism which takes into consideration what countries are doing, so that if a country is not up to date, we can indirectly name and shame countries that are not working to protect children. Then there's the need for inclusivity in policymaking. And maybe the last one I will say is that at the next IGF we should have the voices of children in the room, and not have them represented by youth or adults. That's my last point. On this note, thank you so much for enduring. We are grateful, and we hope that this discussion will be carried forward in our various countries and spaces, and that we become ambassadors for child protection in the digital space in our various endeavors. Thank you. God bless you. Thank you. Bye everyone. Bye bye. You see, you have been forgotten. Thank you. Thank you Gabriel. Thank you Janatu, and thank you Samaila. Thank you very much.

R

Radhika Gupta

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Developing national strategies and frameworks

Explanation

Radhika Gupta emphasizes the importance of creating country-level strategies for child online protection. She suggests these strategies should be based on the principles outlined in the UN Convention on the Rights of the Child, general comment number 25.

Evidence

UN Convention on the Rights of the Child, general comment number 25

Major Discussion Point

Child Online Protection Initiatives

Adopting global standards for child online protection

Explanation

Radhika Gupta recommends that IGF should push governments to adopt global standards for child online protection. This would ensure a consistent and comprehensive approach across different countries.

Major Discussion Point

Recommendations for IGF

Implementing mandatory rights audits for digital platforms

Explanation

Radhika Gupta suggests implementing mandatory rights audits for digital platforms. This would ensure that these platforms have taken into consideration rights, privacy, and safety by design.

Major Discussion Point

Recommendations for IGF

Including children’s voices directly in IGF discussions

Explanation

Radhika Gupta recommends that future IGF meetings should include the direct voices of children in the room. This would ensure that children’s perspectives are directly represented, rather than being filtered through youth or adult representatives.

Major Discussion Point

Recommendations for IGF

G

Gabriel Karsan

Speech speed

157 words per minute

Speech length

1253 words

Speech time

477 seconds

Implementing school connectivity programs

Explanation

Gabriel Karsan discusses Tanzania’s efforts to implement school connectivity programs. These programs aim to enhance digital literacy and community participation in technology.

Evidence

Tanzania’s last mile connectivity program aiming for 70% fiber optic coverage, passing through schools

Major Discussion Point

Child Online Protection Initiatives

Implementing competency-based digital literacy skills in education

Explanation

Gabriel Karsan mentions that Tanzania is pushing for a change in education to include competency-based digital literacy skills. This initiative aims to integrate digital skills into every educational program starting from the next year.

Major Discussion Point

Digital Literacy and Inclusion

Agreed with

Speaker 1

Aishath Naura Naseem

Samaila Atsen Bako

Agreed on

Importance of digital literacy initiatives

Importance of public-private partnerships

Explanation

Gabriel Karsan emphasizes the need for public-private partnerships in addressing child online protection. He argues that these partnerships are crucial for reaching localized communities and implementing effective strategies.

Evidence

Collaboration examples with China Mobile and KaiOS

Major Discussion Point

Multi-stakeholder Collaboration

Agreed with

Radhika Gupta

Audience

Agreed on

Need for multi-stakeholder collaboration

Promoting regional cooperation through African Union digital strategy

Explanation

Gabriel Karsan highlights the importance of regional cooperation in child online protection efforts. He specifically mentions the African Union’s digital strategy and the Malabo Convention as frameworks for regional integration and compliance.

Evidence

African Union digital strategy, Malabo Convention

Major Discussion Point

Multi-stakeholder Collaboration

Agreed with

Radhika Gupta

Audience

Agreed on

Need for multi-stakeholder collaboration

S

Speaker 1

Speech speed

168 words per minute

Speech length

1747 words

Speech time

623 seconds

Launching National Digital Literacy Mission

Explanation

Speaker 1 mentions India’s National Digital Literacy Mission (NDLM) as a key initiative for digital literacy. This program aims to provide necessary digital skills to citizens, especially those in rural and underserved areas.

Evidence

National Digital Literacy Mission (NDLM) in India

Major Discussion Point

Digital Literacy and Inclusion

Agreed with

Aishath Naura Naseem

Samaila Atsen Bako

Gabriel Karsan

Agreed on

Importance of digital literacy initiatives

Using Internet of Things for remote science experiments

Explanation

Speaker 1 describes a project using Internet of Things technology to enable remote science experiments in rural Indian schools. This initiative allowed students in remote areas to conduct experiments located in cities, promoting inclusive education.

Evidence

Project connecting rural schools with city-based science experiments during COVID-19

Major Discussion Point

Digital Literacy and Inclusion

Creating working groups for inclusive and safe technology

Explanation

Speaker 1 recommends creating more working groups focused on developing inclusive and safe technology for child safety. This suggestion aims to encourage the development of technology that prioritizes child protection and inclusivity.

Major Discussion Point

Recommendations for IGF

S

Speaker 2

Speech speed

138 words per minute

Speech length

662 words

Speech time

287 seconds

Providing digital literacy training for children and parents

Explanation

Speaker 2 discusses efforts in the Maldives to provide digital literacy training for children and parents. These programs aim to educate communities about online safety, device security, and identifying potential online threats.

Evidence

Awareness sessions on cyber safety, online safety, and digital device security in Maldives

Major Discussion Point

Child Online Protection Initiatives

Agreed with

Speaker 1

Samaila Atsen Bako

Gabriel Karsan

Agreed on

Importance of digital literacy initiatives

Prioritizing global collaborations for localized digital safety resources

Explanation

Speaker 2 recommends that IGF prioritize global collaborations to develop localized digital safety resources. This suggestion aims to address the need for culturally relevant and language-specific safety materials in developing nations.

Major Discussion Point

Recommendations for IGF

S

Speaker 4

Speech speed

163 words per minute

Speech length

1597 words

Speech time

584 seconds

Creating cyber clubs and ambassadors in schools

Explanation

Speaker 4 mentions the creation of cyber clubs and ambassadors in schools as an initiative to promote online safety. This approach aims to raise awareness within educational environments and build interest in cybersecurity among students.

Evidence

Cyber Security Association of Nigeria launching cyber clubs in universities

Major Discussion Point

Child Online Protection Initiatives

Agreed with

Speaker 1

Aishath Naura Naseem

Gabriel Karsan

Agreed on

Importance of digital literacy initiatives

Reducing costs for internet infrastructure deployment

Explanation

Speaker 4 discusses efforts in Nigeria to reduce the cost of deploying internet infrastructure. This initiative aims to improve internet penetration, especially in areas outside urban centers.

Evidence

Agreement among sub-national governments in Nigeria to reduce right-of-way costs for laying fiber optic cables

Major Discussion Point

Digital Literacy and Inclusion

Importance of observing behavioral changes in children

Explanation

Speaker 4 emphasizes the importance of parents and guardians observing behavioral changes in children as a way to detect potential online issues. This approach is suggested as a proactive measure to identify and address online harms.

Major Discussion Point

Addressing Mental Health Concerns

Encouraging more physical engagement with peers

Explanation

Speaker 4 recommends encouraging more physical engagement among children with their peers. This suggestion is presented as a way to balance online interactions and promote healthier social development.

Major Discussion Point

Addressing Mental Health Concerns

A

Audience

Speech speed

145 words per minute

Speech length

1638 words

Speech time

677 seconds

Establishing administrative committees at local and national levels

Explanation

An audience member from Bangladesh describes the creation of administrative committees for teenagers at local and national levels. These committees involve multiple ministries and agencies to work on preventing cybercrime and cyberbullying for teenagers.

Evidence

Administrative committees in Bangladesh involving 11 ministries and various agencies at central and district levels

Major Discussion Point

Child Online Protection Initiatives

Developing AI-integrated translation tools for vernacular languages

Explanation

An audience member mentions India’s development of an AI-integrated tool called Bhaashini. This tool aims to translate digital platforms and services into vernacular languages, making them more accessible to individuals across India.

Evidence

Bhaashini AI-integrated translation tool in India

Major Discussion Point

Digital Literacy and Inclusion

Lack of government focus on mental health issues from social media

Explanation

An audience member raises concerns about the lack of government focus on mental health issues caused by social media use among children. The speaker highlights issues such as false body images and psychological problems stemming from social media use.

Evidence

Discussion at the 93rd child rights convention in Geneva

Major Discussion Point

Addressing Mental Health Concerns

Need to sensitize government on enforcement of child protection

Explanation

An audience member emphasizes the need to sensitize government agencies on enforcing child protection measures in the digital space. This includes training law enforcement and other relevant agencies to better understand and address online child protection issues.

Evidence

Mention of a department within India’s Ministry of Electronics and Information Technology working on sensitizing the public sector

Major Discussion Point

Addressing Mental Health Concerns

Need for collaboration between corporate and public sectors in education

Explanation

An audience member emphasizes the importance of strong relationships between corporate and public sectors in education. This collaboration is seen as crucial for bridging the digital divide and ensuring accessibility in educational technology.

Major Discussion Point

Multi-stakeholder Collaboration

Agreed with

Radhika Gupta

Gabriel Karsan

Agreed on

Need for multi-stakeholder collaboration

Agreements

Agreement Points

Importance of digital literacy initiatives

Speaker 1

Aishath Naura Naseem

Samaila Atsen Bako

Gabriel Karsan

Launching National Digital Literacy Mission

Providing digital literacy training for children and parents

Creating cyber clubs and ambassadors in schools

Implementing competency-based digital literacy skills in education

Multiple speakers emphasized the importance of digital literacy programs for children, parents, and educators to promote safe and responsible internet use.

Need for multi-stakeholder collaboration

Radhika Gupta

Gabriel Karsan

Audience

Importance of public-private partnerships

Promoting regional cooperation through African Union digital strategy

Need for collaboration between corporate and public sectors in education

Speakers agreed on the necessity of collaboration between various stakeholders, including governments, private sector, and regional bodies, to effectively address child online protection issues.

Similar Viewpoints

Both speakers emphasized the importance of developing comprehensive strategies at national and regional levels to address child online protection.

Radhika Gupta

Gabriel Karsan

Developing national strategies and frameworks

Promoting regional cooperation through African Union digital strategy

Both speakers highlighted the importance of leveraging technology to enhance educational opportunities and digital literacy in remote or underserved areas.

Speaker 1

Aishath Naura Naseem

Using Internet of Things for remote science experiments

Providing digital literacy training for children and parents

Unexpected Consensus

Addressing mental health concerns related to social media use

Audience

Samaila Atsen Bako

Lack of government focus on mental health issues from social media

Importance of observing behavioral changes in children

Encouraging more physical engagement with peers

While not a primary focus of the discussion, there was unexpected consensus on the need to address mental health concerns related to children’s social media use, with both audience members and panelists recognizing this as an important issue.

Overall Assessment

Summary

The main areas of agreement included the importance of digital literacy initiatives, the need for multi-stakeholder collaboration, and the development of comprehensive national and regional strategies for child online protection.

Consensus level

There was a moderate to high level of consensus among speakers on the key issues discussed. This consensus suggests a shared understanding of the challenges and potential solutions in child online protection, which could facilitate more coordinated and effective actions across different countries and stakeholders.

Differences

Different Viewpoints

Unexpected Differences

Addressing mental health concerns

Audience

Samaila Atsen Bako

Lack of government focus on mental health issues from social media

Importance of observing behavioral changes in children

While the audience member raised concerns about the lack of government focus on mental health issues caused by social media, Speaker 4 unexpectedly shifted the responsibility to parents and guardians by emphasizing the importance of observing behavioral changes in children. This difference highlights a potential gap in addressing mental health concerns comprehensively.

Overall Assessment

Summary

The main areas of disagreement revolve around the specific approaches to implementing digital literacy programs, the balance between global standards and localized solutions, and the allocation of responsibility for addressing mental health concerns related to online activities.

Difference level

The level of disagreement among speakers is moderate. While there is general consensus on the importance of child online protection and digital literacy, speakers propose different strategies and emphasize various aspects of the issue. These differences reflect the complexity of the topic and the need for multi-faceted approaches tailored to different contexts. The implications of these disagreements suggest that a comprehensive solution to child online protection may require integrating multiple strategies and involving various stakeholders at different levels.

Partial Agreements

Partial Agreements

Both speakers agree on the need for localized approaches to digital safety and education, but Gabriel Karsan emphasizes infrastructure development through school connectivity programs, while Aishath Naura Naseem focuses on developing localized digital safety resources through global collaborations.

Gabriel Karsan

Aishath Naura Naseem

Implementing school connectivity programs

Prioritizing global collaborations for localized digital safety resources

Similar Viewpoints

Both speakers emphasized the importance of developing comprehensive strategies at national and regional levels to address child online protection.

Radhika Gupta

Gabriel Karsan

Developing national strategies and frameworks

Promoting regional cooperation through African Union digital strategy

Both speakers highlighted the importance of leveraging technology to enhance educational opportunities and digital literacy in remote or underserved areas.

Speaker 1

Aishath Naura Naseem

Using Internet of Things for remote science experiments

Providing digital literacy training for children and parents

Takeaways

Key Takeaways

Child online protection requires multi-stakeholder collaboration involving governments, tech companies, civil society, and children themselves

Country-level strategies for child online protection should be based on global standards like the UN Convention on the Rights of the Child General Comment No. 25

Digital literacy initiatives are crucial for empowering children to navigate online spaces safely

There is a need to address mental health concerns arising from children’s use of social media and digital technologies

Emerging technologies like AI and IoT can be leveraged to enhance digital education and inclusion

Resolutions and Action Items

Develop country-level strategies for child online protection based on global standards

Implement digital literacy programs for children, parents, and educators

Create cyber clubs and ambassador programs in schools to raise awareness

Include children’s voices directly in future IGF discussions

Conduct mandatory rights audits for digital platforms to ensure child safety

Unresolved Issues

How to effectively address mental health issues caused by social media use among children

Ways to bridge the digital divide and ensure equitable access to digital technologies for all children

Methods to balance protection and empowerment in child online safety initiatives

Strategies for engaging tech companies to prioritize child safety in product design

Suggested Compromises

Balancing the need for child protection with allowing children to participate meaningfully in digital spaces

Combining global standards with localized approaches to address cultural and regional differences in child online protection

Integrating digital literacy into existing educational curricula rather than creating separate programs

Thought Provoking Comments

We cannot afford to fold our arms or throw them in the air in despair. The stakes are too high and the future of our children demands action, innovation and accountability.

speaker

Radhika Gupta

reason

This comment sets an urgent and action-oriented tone for the discussion, emphasizing the critical nature of child online protection.

impact

It framed the subsequent discussion around concrete actions and strategies, rather than just theoretical discussion.

The way forward requires a paradigm shift towards safety by design, privacy by design, child rights by design, those forming the fundamentals.

speaker

Radhika Gupta

reason

This introduces the important concept of proactive design for child safety, rather than reactive measures.

impact

It shifted the conversation towards preventative measures and systemic approaches to online child protection.

We must resist the tendency to combine children with youth. Children require unique protection, tailored strategies and recognition of their peculiar vulnerabilities and rights.

speaker

Radhika Gupta

reason

This highlights the often overlooked distinction between children and youth in digital policies.

impact

It prompted more specific discussion about child-focused strategies rather than general youth policies.

I want to highlight the aspect of this act that relates to digital threats to children: it criminalizes the use of children for creating, distributing and accessing child sexual abuse material, or CSAM, and includes various provisions on online grooming, harassment and cyber exploitation.

speaker

Speaker 1 (Ahitha)

reason

This comment brings attention to specific legal measures addressing digital threats to children.

impact

It grounded the discussion in concrete policy examples and legal frameworks.

We no longer have child pornography. An abuse is an abuse. A child does not engage in pornography.

speaker

Radhika Gupta

reason

This correction highlights the importance of appropriate terminology in discussing child exploitation.

impact

It raised awareness about language use and framing of issues related to child abuse.

So much as we are treating the harms and the threats to children, we need to also take care of empowerment. We need to take care of their skill sets in order for them to engage responsibly.

speaker

Radhika Gupta

reason

This comment shifts the focus from protection to empowerment, introducing a more holistic approach.

impact

It broadened the discussion to include digital literacy and skills development for children.

I think even if you look at law enforcement now, there’s a bit more knowledge about electronic crime or online fraud and cybercrime and all that, but also from the digital harms perspective, where we have cyberbullying and online harassment. So there’s a bit more knowledge, although we’re hoping for better and more widespread awareness, and even prosecution and things like that.

speaker

Samaila Atsen Bako

reason

This comment highlights the progress in law enforcement awareness while acknowledging the need for further improvement.

impact

It introduced the role of law enforcement in child online protection and the need for their continued education and involvement.

Overall Assessment

These key comments shaped the discussion by broadening its scope from mere protection to empowerment, emphasizing the need for proactive design in digital spaces, highlighting the distinction between children and youth, and stressing the importance of appropriate terminology and legal frameworks. The discussion evolved from general concerns to specific strategies and actions, encompassing various stakeholders including policymakers, law enforcement, and children themselves. The comments collectively pushed for a more comprehensive, nuanced, and action-oriented approach to child online protection.

Follow-up Questions

How are other countries incorporating online safety into their educational curriculum from grades 1-12?

speaker

Madhav Pradhan

explanation

Understanding how different countries integrate online safety education throughout schooling years can help improve child protection strategies globally.

What are governments doing to address mental health issues caused by social media use among children, beyond just teaching internet safety?

speaker

Unnamed audience member

explanation

This highlights the need to consider the psychological impacts of internet use on children, not just physical safety concerns.

How has accessibility design been incorporated into digital literacy and inclusion efforts in education?

speaker

Sasha Nandlal

explanation

Addressing accessibility ensures that digital literacy initiatives reach all students, regardless of their backgrounds or abilities.

How can we encourage more physical engagements with peers as opposed to relying on digital interactions?

speaker

Samaila Atsen Bako

explanation

Balancing online and offline interactions is crucial for children’s holistic development and safety.

How can we develop more localized digital safety resources for different countries and cultures?

speaker

Aishath Naura Naseem

explanation

Culturally appropriate resources are essential for effective implementation of child online safety measures across diverse global contexts.

How can we include more direct participation from school students in setting the agenda for IGF discussions on child online safety?

speaker

Speaker 1

explanation

Involving children directly in policy discussions ensures their perspectives and concerns are adequately addressed.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

[Parliamentary Session 4] Fostering Inclusive Digital Innovation and Transformation

[Parliamentary Session 4] Fostering Inclusive Digital Innovation and Transformation

Session at a Glance

Summary

This panel discussion at the Internet Governance Forum in Saudi Arabia focused on fostering inclusive digital innovation and transformation. The panelists, representing organizations like UNDP, Italy’s Digital Agency, and the World Federation of Engineering Organizations, explored challenges and strategies for ensuring digital inclusion globally.


Key themes included building digital capacity, developing digital public infrastructure, and addressing inequalities in access to digital technologies. Robert Opp from UNDP highlighted their work in over 125 countries on digital strategies, emphasizing the need for capacity building among policymakers and civil servants. Mario Nobile shared Italy’s progress in digital public services, stressing the importance of digital literacy programs. Gong Ke discussed the crucial role of engineering capacity, particularly in AI, and the need for tailored platforms in different regions.


The discussion touched on specific initiatives like UNDP’s collaboration with Kenya on AI skills for civil servants and the World Federation of Engineering Organizations’ program to train 100,000 African engineers in AI. Panelists emphasized the importance of human oversight in AI implementation and the need for ethical considerations in technological advancement.


Audience questions raised issues such as the potential for a UN declaration on AI ethics, the role of parliamentarians in digital transformation, and strategies for connecting underserved communities. The panel concluded by highlighting recent UN initiatives like the Global Digital Compact and ongoing discussions on international AI governance, emphasizing the need for collaborative efforts to ensure inclusive digital development worldwide.


Keypoints

Major discussion points:


– Challenges and strategies for inclusive digital transformation, including capacity building, infrastructure development, and policy frameworks


– The role of digital public infrastructure (DPI) and digital literacy in fostering inclusion


– Addressing inequalities and reaching marginalized populations through digital initiatives


– The need for international cooperation and governance frameworks around AI and digital technologies


– Balancing innovation with ethical considerations and human rights protections in the digital space


The overall purpose of the discussion was to explore ways to promote inclusive digital innovation and transformation globally, with a focus on addressing inequalities and building capacity across different regions and sectors.


The tone of the discussion was largely constructive and solution-oriented. Panelists shared insights and examples from their respective areas of expertise, while audience members asked thoughtful questions that expanded the conversation. There was a sense of urgency around the rapid pace of technological change, balanced with optimism about the potential benefits of digital transformation if implemented inclusively. The tone became more specific and action-oriented towards the end as panelists responded to audience questions about concrete initiatives and next steps.


Speakers

– Tsvetelina Penkova: Member of the European Parliament, Vice Chair of the Committee on Industry, Research and Energy


– Robert Opp: Chief Digital Officer of UNDP


– Mario Nobile: Director of the Agency for Digital Italy


– Gong Ke: President of the World Federation of Engineering Organizations


– Audience: Various audience members who asked questions


Additional speakers:


– Abdullah al-Zawahir: Minister of Communication and Information of Saudi Arabia (mentioned but not present)


Full session report

Expanded Summary of Panel Discussion on Inclusive Digital Innovation and Transformation


Introduction:


This panel discussion, held at the Internet Governance Forum, brought together experts from various international organisations to explore strategies for fostering inclusive digital innovation and transformation globally. The panellists, representing entities such as the United Nations Development Programme (UNDP), Italy’s Digital Agency, and the World Federation of Engineering Organizations, delved into the challenges and opportunities presented by digital technologies in promoting equitable development.


Key Themes and Discussion Points:


1. Digital Inclusion and Capacity Building:


The panellists unanimously agreed on the critical importance of digital literacy and capacity building across all sectors of society. Robert Opp from UNDP emphasised the need for broad-based digital literacy programmes, highlighting UNDP's digital programmes in over 125 countries. Mario Nobile, representing Italy's Digital Agency, stressed the role of digital literacy as a cornerstone for the adoption of digital services. Gong Ke, from the World Federation of Engineering Organizations, focused on the importance of engineering capacity building, particularly in developing regions.


The discussion revealed different approaches to digital literacy and capacity building. While Opp advocated for a wide-ranging approach across sectors, Nobile emphasised literacy for digital service adoption, and Gong stressed engineering capacity building in developing regions. This diversity of approaches underscores the multifaceted nature of digital inclusion challenges.


An audience member suggested that the UN create common platforms to unify efforts in digital literacy, inclusion, and engineering capacity building. This proposal sparked a discussion on existing UN initiatives and the challenges of coordinating global efforts in digital development.


2. Digital Public Infrastructure (DPI) and Governance:


The concept of digital public infrastructure emerged as a crucial element in fostering digital transformation. Robert Opp introduced DPI as the “digital roads and bridges” necessary for digital development, typically initiated by governments but often implemented and operated by the private sector. He also mentioned the Universal Digital Public Infrastructure Safeguards Framework launched by the UN.


Mario Nobile shared insights from Italy's experience in developing digital public infrastructure building blocks, highlighting the complexity of implementing digital transformation across a large, diverse governmental system comprising 23,000 public administrations. He provided specific numbers on Italy's digital initiatives, including 40 million digital identity users, universal e-invoicing for business-to-business and business-to-government transactions, and ICT spending of about 7 billion euros in 2023.


The rapid pace of AI development raised concerns about governance frameworks. Mario Nobile suggested the need for adaptive governance approaches, while Gong Ke emphasised the importance of tailoring digital platforms to local contexts and languages. Gong also mentioned the UNESCO recommendation on AI ethics and the upcoming UN AI office.


3. Challenges of Digital Transformation and AI Implementation:


The discussion highlighted several challenges in achieving inclusive digital transformation. Tsvetelina Penkova, Member of the European Parliament and Vice Chair of the Committee on Industry, Research and Energy, raised concerns about the risk of deepening inequalities through the digital divide. Audience members pointed out infrastructure and funding challenges for digitalisation in developing countries, particularly in low-income and conflict-affected regions.


Gong Ke emphasised the need to tailor digital platforms to local contexts and languages, recognising the diversity of user needs across different regions. He also discussed challenges in AI implementation, including the need for human oversight and the potential for misuse.


4. Public-Private Partnerships for Digital Development:


The importance of collaboration between public and private sectors emerged as a recurring theme. Robert Opp highlighted UNDP’s partnerships with the private sector for digital skills training, including a center of competence on AI and digital skills for civil servants in Kenya. Mario Nobile shared Italy’s public-private model for digital service delivery, while Gong Ke stressed the need for collaboration across sectors to support digital adaptation.


Specific Initiatives and Examples:


The discussion touched on several concrete initiatives, including:


– UNDP’s collaboration with Kenya on AI skills for civil servants


– The World Federation of Engineering Organizations’ programme to train 100,000 African engineers in AI over 10 years


– Italy’s digital transformation initiatives, including e-invoicing and digital identity systems


Audience Engagement and Questions:


Audience members raised several thought-provoking questions and suggestions, including:


– The potential for a UN declaration on AI ethics, similar to the Universal Declaration of Human Rights


– The role of parliamentarians in digital transformation


– Strategies for connecting underserved communities


– The need for ongoing, rather than one-off, digital literacy programmes


– Approaches to taxing AI tools and systems, a question posed by panellist Mario Nobile


– Questions about Egypt’s involvement in digital initiatives


These questions led to discussions on existing UN initiatives like the Global Digital Compact, which is part of the UN’s Pact for the Future, and ongoing deliberations on international AI governance.


Conclusion:


The panel discussion highlighted the complex and multifaceted nature of inclusive digital transformation. While there was broad agreement on the importance of digital literacy, capacity building, and public-private partnerships, differences emerged in the specific approaches and emphases of the speakers. The discussion underscored the need for tailored solutions that consider local contexts, while also working towards global standards and governance frameworks.


Unresolved issues included how to effectively address the digital divide in low-income and conflict-affected regions, whether new declarations or amendments to existing human rights frameworks are needed to address AI impacts, and how to ensure digital literacy programmes are ongoing rather than one-off projects.


The overall tone of the discussion was constructive and solution-oriented, with a sense of urgency around the rapid pace of technological change balanced by optimism about the potential benefits of inclusive digital transformation. The panel concluded by emphasising the need for collaborative efforts to ensure inclusive digital development worldwide, setting the stage for continued dialogue and action in this critical area of global development.


Session Transcript

Tsvetelina Penkova: Perfect. Good morning, everyone. Good morning to the distinguished guests, panelists and participants in a very important debate we're going to have this morning on fostering inclusive digital innovation and transformation. We've touched upon some of those topics in the previous session and now we're going to continue this debate in that. So it's a privilege to welcome you and to begin this event today here at the Internet Governance Forum in Saudi Arabia. So it is a good reminder, those discussions, that the Internet is not just a tool for us, but it's a space where societies can come together, can foster innovation, promote equality and actually make sure that inclusion is part of our priorities. Of course, digital transformation is a powerful tool to reshape our world, brings unprecedented opportunities to innovation, growth and connectivity. However, this progress does present significant challenges, most notably the risk of deepening inequalities. So on one hand, digital technologies are reshaping industry and unlocking economic growth, driving global connectivity and ensuring competitiveness of our economies. But on the other hand, we can see how they can drive a lot of inequalities among societies. So entire communities, even now, as we speak today, they do lack access to Internet, the digital literacy is at a low level, or the necessary resources to participate actively and meaningfully in this transformation are not existing. So if we don't address these gaps, we risk to create a world that's even more divided. And I'm just going to refer to this key message that was also posed by the Minister of Communication and Information of Saudi Arabia, His Excellency Abdullah al-Zawahir, at his opening remarks at the ceremony yesterday, where he put a strong emphasis on the division when we speak about the digital economy as we see it now. So one of our key goals going forward should be to start working towards diminishing those gaps. If we can, of course, completely remove them, that's going to be a great success. It's a great pleasure to introduce the distinguished speakers we have here for this panel today, starting from my left with Mr. Robert Opp, the Chief Digital Officer of UNDP. So he's going to provide us some insights on the global development priorities and digital innovations. On the other side of the table, we have Mario Nobile, the Director of the Agency for Digital Italy. I'm sure he's going to bring a lot of expertise on how the public administration, digitalization in Italy is working and what are the challenges we face there. And next to me, we have Mr. Ke Gong, the President of the World Federation of Engineering Organizations. So he's going to offer the perspective and the intersection of the engineering and artificial intelligence for global benefit. In the previous panel, we've also heard how insightful it is to have the experts' views when we're discussing some of the policies on the digital matters. So without further ado, I would start with my first question, which also… gives you the opportunity to have some opening remarks, but let's make it a bit more dynamic. My first question is going to be to Robert. So in UNDP, you strongly promote inclusive digital development, which is, of course, the topic of our debate. You're speaking about that inclusion at both local and more global level.
So could you actually give us an example of how you are actually trying to assist countries in their specific needs when we’re speaking about digital transformation, digital inclusions, and what support can be provided by the Members of Parliament in the various parts of the world, and what do we expect from us? It’s too many questions in one, but I’m sure you have a lot of insights to share with us.


Robert Opp: Thank you very much, and it’s a pleasure to be here. And yeah, that’s a lot of questions, but there are a lot of complex issues out there that as parliamentarians you all face right now. And in fact, I would just say in globally speaking, and the United Nations Development Program, who I represent here, is actively working in 170 countries worldwide. We have digital programs, digitalization programs in over 125 of those countries, and we are also working with one in three parliaments in the world. So what we see globally is a very strong pattern emerging around digitalization, and the pattern includes things in terms of where we hear expressions of or requests for support from countries are in a few different areas. One of those areas is how do we put in place the policies and the strategies for digitalization? And in 53 countries around the world, we’ve worked on national digital ecosystems evaluation and action planning. So looking ahead at where countries want to go, what are the steps to get there, what are the policies and strategies you need to guide that digitalization movement. The second area is in the space of technology. What are the layers of technology I need to put in place? And we sometimes talk about digital public infrastructure, which I know some of the other panelists are going to mention, but digital public infrastructure is that sort of digital roads and bridges that need to be put in place, usually put in place at the instigation of government, but often implemented itself by private sector and operated by private sector. And what we see is that that can have a very strong impact in terms of accelerating digital public services and accelerating digital economies as well. And then the third area that we hear from countries is around capacity. What are the capacities, digital capacities or capacities to work on digitalization that are necessary to really allow us to take advantage of the powers of technology and mitigate the risks of technology? So we work across all of these areas in many different countries. And I would say that what we hear also from the parliamentarian side is that there are similar challenges. So we hear from government, there’s capacity challenges, uncertainty around policies and strategies, etc. But from parliaments, we also know that there is a challenge in parliamentary capacity because parliamentarians need to also understand the shifts of digitalization, stay ahead of those shifts. We have begun to work in areas like promoting information integrity around, whether it be in general or around electoral processes, preventing online violence, which is a big challenge in many countries. And also looking at the capacity to legislate and oversee the digitalization changes that are happening. So we’ve been very pleased, we have been working with the Inter-Parliamentary Union for a number of years on many different initiatives, but we’ve also recently created an experts group that will look specifically at these capacity challenges for parliamentarians and start to work to build capacity of parliaments around the world to really be able to support the acceleration of digitalization, which we feel has tremendous potential for building human development over time.


Tsvetelina Penkova: Thank you. Thank you, Robert. You’re actually going one step back in the analysis, which is very key because we cannot be promoting or regulating matters if there is a lack of capacity in terms of the regulators. I’m going to have a few follow-up questions to you, but let me first go back to Mario so he can give us a bit more perspective from the agency side. How do you see all those matters? I’m not going to repeat the problems, the question is very clear. How does it work within the agency? What are the challenges you’re facing? And what’s the speed we can actually expect for a proper digital inclusion and transformation in Italy, especially taking into account that you have some quite outermost regions in Italy as well. How are you dealing with that challenge to reach everyone?


Mario Nobile: We have 23,000 public administrations in Italy, local central health services, so it’s very difficult. Thank you. As Director General of the agency, our mission is to drive digital transformation across the nation, and this mission is executed through various strategic tools and initiatives. One of them is the three-year plan. Now, talking about three years, it’s a long time. Now we are talking of months, artificial intelligence, new services, but we have this kind of strategic plan and in 2023, with this plan, the ICT spending in Italy was worth 7 billion euros. 3.3 for central public administration, revenue agency, national welfare institute and so on, 1.6 for local administration, so regions, municipalities and so on, and 1.6 for digital health services, remaining 0.5, last but not least, for education services. This is about 0.64% of the Italian GDP. Our ICT spending is worth it. And the agency, I go to the answer to your question, the agency oversees and ensures the quality of various building blocks of the Italian digital public infrastructures. Laid down by the government to accelerate development and used by different service providers, we have the private sector, but also the single citizen. Some numbers, in Italy we have 59 million citizens, but we have 40 million users of digital identity services. We have 18 million users of certified email, an email to send and receive with legal validity. We have 45 million qualified certified certificates of signature, the digital signatures. On the payment layer, we have PagoPiA for payments towards the public administration and in November 2024, PagoPiA had 34 million transactions with a total value of €7 billion. We have a platform for interoperability from municipality to region to central administration and at November 2024, we have 7,500 public organizations on board. Last but not least, we have an e-invoicing platform, our Italian Revenue Agency smiles because every invoice, business to business, business to government, business to citizen is in electronic format. These building blocks are the enablers for every digital services, artificial intelligence and other services. So these services are mainly delivered through a public-private partnership model where the agency role is to monitor and guarantee the quality of services provided by service provider and we issue guidelines to administrations and companies to steer the development of innovative and inclusive services. Notable examples include the guidelines on accessibility for public administrations and companies. And in conclusion, I would like to highlight the citizen inclusion. It is a best practice demonstrating the Agency’s commitment to digital inclusion and technological literacy. This project provides various tools to public administrations and private entities to improve the quality and accessibility of digital products, services and content. It aims to enhance the accessibility of public digital services in line with European directives. This project is part of the National Recovery and Resilience Plan with a budget of 80 million euros. Thank you.


Tsvetelina Penkova: Thank you, Mario, I forgot to say a few words about myself before we started the session. I’m a member of the European Parliament and the Vice Chair of the Committee on Industry, Research and Energy, but I’m also coming from Bulgaria. Listening to Mario, he did remind me about something. We also have a digital administration which is trying to work. Unfortunately, a lot of people are discovering that it works well only when it comes to collecting taxes or fees, not when it comes about providing services. So I will come back to you after with a follow-up question, probably on the digital literacy and how you are ensuring that. All the numbers that you have told us, they seem quite reassuring. It’s just we want to make sure that really it reaches for everyone and it’s for the benefit of our citizens, not only for the governments and administrations. And now, so I want to have… have all the points, all the viewpoints to our debate. So we’ve heard from the UNDP, we’ve heard from the agency and an example from a country like Italy, which is very big. And now let’s go to one of the very key aspects. And how does it work with the industry? Like how do you ensure that what we have in terms of technological development, innovation, is ensuring to address some of the global challenges like poverty, inequality, economic growth? And I’m turning to you, Mr. Gong, because I know that you are an expert in the field and you can give us some real example if that works. And where can probably we a bit more of support from our side, from the public sector as well?


Gong Ke: Thank you. Thank you so much. I think this year’s IGF is one of the important international events after the United Nations adopted the Pact for the Future and the Global Digital Compact. So it’s my great honour to be part of this very important event and to address a very important theme that’s promoting inclusiveness in the digital transition. So based on our observation, our, I mean, the Chinese Institute of Artificial Intelligence Development Strategy, based on our observation to the development of digitalization in China, we find that the two strategies are very, very crucial. First, from the supply side, it is a key measure to providing open, accessible platforms. to ease the adaptation of digital technologies, including artificial intelligence. And this platform in the form of public cloud services, in the form of open source communities. So this platform linking developers, users, investors, and managers to support collaboration across government, private sector, academia, and civil society, and individual users by providing pre-trained fundamental models, providing standard datasets, providing computer powers, and technical training in an open and scalable way. And very crucially, that platforms should be tailored to local context. Even in China, this is such a big country, we have a different part with different economic development level. So we should tailor to this, to different parts, the local needs, and we have different languages. Also in China, we have official Chinese, but also different dialects. So to cope with different languages, cultural norms, and digital capacities. So that is very important to the ease of adaptation. The second, from the demand side, we need to build capacity with emphasis on developing regions. So when we talk about inclusiveness, that requires proactive investment to human resources. particularly in the global south and in those less developed regions. So, because they do have a lot of barriers, so digital literacy, the lack of investment, lack of financial tools, and so on and so forth. So, I think that is the way the United Nations’ Pact for the Future focuses on equitable development and digital inclusion. And I think the United Nations Resolution adopted in July, titled Enhancing the International Collaboration for Building Capacity of Artificial Intelligence, offers a guideline for these capacity-building activities. So, I think the engineering capacity is a very important content in the capacity-building, because it is engineers to use the technical method, scientific theory, in the specific social, environmental, financial conditions to solve the problem. So, we have to increase the engineering capacity, especially for the less developed regions, to enhance the capability of problem-solving. So, because of the limitation of time, I just stop here. Thank you.


Tsvetelina Penkova: Thank you, thank you, Mr. Gong. Very insightful comments, in terms of all the topics we’ve discussed so far. And now we’re going to come in with more details in the second round on the engineering capacity, because I’m sure the audience would be interested. A few more short questions. from my side, and then now we’re going to ask for questions from the people listening to us today. So Robert, everyone is shifting. So you started the topic on capacity challenge in the parliamentarians. It seems that we acknowledge that, and we’re going to work on that to try to resolve it a bit. I’m having a question again on the literacy, on the inclusion. How are you assessing the challenge of addressing and reaching with your digital initiative, particularly to the people in the low income or some of the conflicted regions? Do you have some specific examples? And what is the response? Because it’s a very particular situation sometimes when we’re speaking about that.


Robert Opp: Absolutely. And the capacity question is probably the number one request that we get from our country partners. And it’s not just a single thing. So as we just heard, one issue is around engineering capacity and software engineers or developers, programmers. But of course, there’s capacity needs across the board from private sector to civil servants, to looking at higher education policy, to the parliamentarians, to et cetera, et cetera, general digital literacy among people. So addressing capacity has to be, it’s a very complex situation, so it needs a number of different kinds of interventions. One of the things that I briefly mentioned before is when we work with countries on digitalization, we often start with an assessment of the local digital ecosystem. And so in 53 of the countries we’ve worked with, that means sitting down with government, private sector, civil society, to understand the state of where we are with business ecosystem, regulatory ecosystem, government capacity, and general digital literacy. And based on that, we can design a kind of an action plan on where to… actually go to support different kinds of capacities that are needed. In terms of specific examples, just one example is, together with the government of Kenya, we have recently launched a center of competence on AI and digital skills for civil servants that is done in partnership with Microsoft and Huawei. And so it’s UNDP, Microsoft, Huawei, government of Kenya, looking at how we can bring civil servants into better capacities. And so there are a number of different kinds of angles that we can use, often involving the private sector, because private sector is also where cutting edge skill sets are. But more on the policy side, we also need to look at how do we actually ensure that people understand not just the technology itself, but as I mentioned before, how to govern those technologies. And so that’s where, in some cases, we’ve started engaging with parliamentarians on specific programs that we run in-country for parliamentarians, often with partners like IPU, to look at how we can actually build capacities of people to understand the shift. Because it is a different set of skills you need to understand how to govern technologies than actually build them.


Tsvetelina Penkova: Continuing with the specific examples, actually, it’s going to be interesting to follow up how those are developing, especially the example with Kenya, but I’m going to go back to Mr. Gong now. As the president of the World Federation of Engineering and Organization, you must have a very, very good overview of how things are developing. And you’ve pointed out on the first priority was the open and accessible platforms to assess the human resources in each part of the world and to make sure that we are promoting also the engineering capacity. Can you, do you have any specific example from your experience as well of like specific engineering solutions that actually helped? or did it impact significantly marginalized populations or solve inequality problems?


Gong Ke: That’s a very important question, but it’s hard to answer. Now, as the World Federation of Engineering Organizations, WFU, we are carrying out a big initiative. We called it the Engineering Capacity Building for Africa program. This spans 10 years long in the framework of the United Nations International Decade of Science for Sustainable Development. So in this initiative, we focus on the artificial intelligence because that is a revolutionary general purpose technology that penetrates all engineering professions. However, the risk of AI coming from two aspects. First is from the technical incompleteness. So there’s technically the contents generated by AI model is based on the probability. We call it so-called the joint probability because on the one side is the prompt and the generated content has the highest joint probability with your prompt. So we have to know that and to increase the capability of AI users to make factual check, logical check, ethical check to the contents. So that’s one side. Another side is that misuse of those models. To use this model to make this and this information, that’s very bad. Another kind of misuse is that perhaps 99% of the answers are correct, so people may simply rely on the model and give the rights to the model to believe all the models said to him or her. That’s not good. We have to keep the human oversight to know the mistakes is possible produced by those models. So that is also very important. Now we carry out trainings to engineers to help them to understand the fundamental mechanism how AI models work and why they can produce mistakes, and how to make so-called ethical, factual, logical, and scientific check to those generated contents. And we do hope that we can join hands with all of you to carry out the initiative. I mean the Engineering Capacity Building for Africa, that is a very ambitious, we hope in the following 10 years we can train more than 100,000 engineers in the workplace to grasp the AI tools, digital tools, to increase the quality of their life and their work.


Tsvetelina Penkova: Very important, very important topic actually, that we should be using the technologies as a tool and to help us to be more efficient, but not to substitute human capacity. This is a key topic everywhere we’re discussing those matters. And now where we are at the stage of implementing the AI technology, this is going to be reoccurring, so I would encourage you to keep on repeating that. that message because there is a fear among many societies that some of the technologies are here to take our jobs, which is absolutely not the case, but we have to provide the regulatory framework and the incentives and the campaigns to teach people how to use them to our best benefit. And this brings me back to Mario before I’m gonna turn to the audience for questions. So like circulating around, you’re going back to the digital literacy, to the knowledge of people if they can access and use the services you’re providing. The numbers you’ve given us in the introductions were very, very impressive. Like I didn’t remember them all, but you just said like 40 million user of digital ID, which is quite impressive already if people are trusting the digital platforms to use it for their identity. This is something as a key message from your opening remarks, but how do you ensure that people are actually aware of the systems that they’re using? And do you think that everyone has access to those services or just a specific group of people who are like young and a bit more open to those platforms? Like just a bit provocative question, I know, but it’s important to hear some answers.


Mario Nobile: I have half an hour, 20 minutes, no, one minute. So I’m joking. It’s, I think that the colleague, the literacy program is one of the cornerstones of digital services. We in the triennial plan, we have three cornerstones. The first is data quality. The second one is literacy for engineers, for deployers, for users. If you use an artificial intelligence, a generative artificial intelligence chatbot. and you are a citizen, you want to start a company, you want to interact with the national public registry of companies. You must know limits, opportunities of that kind of instrument. So the second cornerstone of literacy is very important. And we try on digital identity, on digital signatures, on e-invoicing, to make several kinds of literacy for engineers, for ICT engineers, for domain experts, the revenue officer, for citizens. And it’s tricky to find a way to tell people you are an engineer, you are a user, you are a citizen. The third one, last but not least, is the dataset control, which is your dataset control in the European Parliament, in the engineering organization. Data are in, are out, the patterns are in, are out. So, you know, technology evolves faster than most of the public administrations and companies can innovate. And I was talking before, three years are a long time. Now we are trying to build a strategic planning in a way, in a process, to leverage the predictive part of these emerging technologies in a faster way. This is a mandate for us. A short-term program, a short-term strategy, which involves literacy, data quality, and connection with citizens and companies. It’s a short time


Tsvetelina Penkova: Unfortunately, we’re limited on time with very very key messages So it’s good to acknowledge some of those problems and to start finding solutions together So in in in the sign of being together, I’m turning to you now For some questions or comments. I see the first one here. Then there are two more here. Let’s start Moving this way


Audience: I will speak in Arabic In Arabic or a field that needs digital initiatives 2. familiarizing people with the ideas and names of innovation initiatives an incentive to carry forward those initiatives 3. specialized education and training And especially for the skills of using these digital and smart initiatives for all specializations and levels, from the smallest employee to the highest level in public security. Of course, as a result of this, we can say that 30% of the successful initiatives that we implemented were innovative initiatives from public security employees. Of course, we feel proud and proud because we achieved a very advanced ranking according to the 2024 electronic parliamentary report, almost the first ranking in the Arab world and the 13th ranking in the world. Thank you.


Tsvetelina Penkova: Thank you. Can you pass the mic to the front row and then we’re going to move back here. Thank you.


Audience: Thank you very much. Also, I come from Bahrain. I really want to ask the UNDP representative, do you think that there is a need to issue a UN declaration to protect communities, values and ethics in the community based on the use of AI? Something like a declaration of human rights or maybe to amend the human rights declaration to protect the community of the misuse of AI? Something on the basis of the same basis of the EU Act that they have been issued recently? Because really, I think the changes happening in the AI world are so quick and require really global collaboration and cooperation to protect our community, especially those who don’t have full access or don’t have the knowledge or the technology or the understanding of the use of AI. Thank you.


Tsvetelina Penkova: Thank you. And now I’m moving to… the central sector, and then we’ll give you the floor.


Audience: Thank you very much. My name is Mahamdan Nasser from the Egyptian Parliament and I'm an engineer. So the first question for Mr. Gong Ke, we already have around 1 million engineers in Egypt. So can we work in this capacity building for engineers with Egypt? That would be wonderful. Another quick question for both Robert and Mario. Do you have any running or planned programs with Egypt for any of the projects you've been talking about or not? Thank you.


Tsvetelina Penkova: Very specific questions. Okay, let’s move there and then back and then one last question from this sector I’ll take after you.


Audience: Hello. My question is for Robert. I’m Kundan from India. I work with a non-profit called CG Netswara. I’m concerned around the DPI, digital public infrastructure. We know that, you know, it’s a hot thing. We talk a lot about these things and we also know that it’s a key thing to go ahead in terms of the growth of communities. But when you work with different countries, how do you kind of map out different DPIs within a framework, within a simple framework, which also upholds human rights, equality, and, you know, the values we are trying to uphold as a community, you know, who are trying to build for a multi-stakeholder future. Thank you. Okay, thank you. Just interesting talks. Only one point, please. I like all what you said, isn’t it a good idea for the UN to have common platforms to unify efforts to build such literacy programs, training for literacy or digital inclusion, increase engineering capacity and so on, instead of like every country is working like China, U.S., Europe, Saudi Arabia, Egypt, you know, all countries. Isn’t it better for the UN to have like unified or to work on such important parts so all the countries, they can have one place and other people from Africa, from Asia, from everywhere, they can benefit from it. It’s great work. Thank you.


Tsvetelina Penkova: Thank you. And we’re going to have one last question there and then I’m going to go back to our speakers. A lot of very specific questions were posed.


Audience: Good morning, Aileen Febles from the Cuban Parliament and President of a social organization that brings together all the professionals in the sector of technologies in Cuba. We are not members of the World Organization, but we will be, we will propose it and thank the PNUD because it has supported a lot since the creation of the organization in several projects that we have been developing, so I didn’t want to miss the opportunity. It seemed very important to me everything that has been said about the development of capacities, but sometimes in the projects that we do, they start and they end, and in the development of capacities, in technological issues, it is very important that it is permanent, iterative and incremental, because technological development advances a lot, and sometimes we start projects, we close them, and then that remains in the nothingness. So, trying to think of platforms like the one that the colleague mentioned earlier, and also in which these centers of training sign projects and present them to the UNEP Capacity Building Center for the Cuban engineers and the civil servants in Cuba and for those parliamentaries in my country so that we can really benefit from this.


Tsvetelina Penkova: Thank you. Very good example on inclusion, actually. And now I’m turning back to the panel here. A lot of questions, quite a lot of specific ones. Most of them to Robert. Can I speak? Oh, one. Okay, fine.


Audience: In French. I speak in French. Thank you very much. I come from Senegal, which faces two challenges. First, as you said, the digitization of the administration, whose biggest challenge remains the infrastructure, the appropriation, as well as the provision of funds. And here we rely a lot on parliamentarians and we do a lot of lobbying to achieve these objectives. We don’t have a translation in English. Sorry. No translation? So all of this gives us a wide range of challenges, also with the buyers, with the state. And here too, the parliamentarians play a big role. Because those who are from these white areas, these grey areas, today are still very, very introduced to the internet in their area. And when we see how a connected community is welcomed, we see that it is not only the young people who are happy. It is especially the leader, often it is a religious leader, who is a bit like the village chief, who connects all these people who are in Europe, who are in America, who are in the cities, and through these programs, who manage to develop the locality. It gives pride and it makes you want to go much further. And so all these challenges, we will still expect a lot of cooperation to be able to connect these millions of people who are today left behind in the digital divide. Thank you.


Tsvetelina Penkova: Thank you, thank you so much. Unfortunately, we are pressed by the time, so this was the last question. A comment is welcoming from the audience. Sorry? Okay, so now you want to take the floor? Perfect, okay. So this is going to be now the last one. Okay, I’m happy that we have such an engaged audience. Thank you.


Audience: I would like to thank you first, I would like to thank you first for this initiative which brought us as parliamentarians together and to get expertise from you. Of course, you benefited noting that the Algerian state is towards the digitization of Algeria within the 2020-30 program. We wanted to give a feedback or viewpoint about the infrastructure provided by Algeria and the state infrastructure and the freedoms of the national assembly. We were working in parallel with the executive body the government has provided as an infrastructure believing that if we want to develop the digitalization in our country we have to provide two major issues and then comes such other to interact with many countries to develop ourselves. We have started with infrastructure where Algeria has provided all the funding required to access each village and city in the Algerian territory. We have provided to cover with our coverage. Now, the Algerian parliament is working to provide the conducive environment for digitization and technology in general. We have started with the bill, it is about the self-contractor. We have started at the beginning the And Geneva doesn’t have the mechanism to work with different institutions, meaning that they’re facing challenges in dealing with mechanisms. Now that they succeeded in Algeria, many youth became more creative and can provide their products to all institutions, and we have successful experiences in this respect. In addition to that, today, the lawmakers in Algeria pay attention to a big issue, which is the AI. The AI, considering that now the current legislature in Algeria does not handle the AI issues from the… Of course, with the benefit of yesterday’s sessions, they gave a good idea to us, and also today about the electronic crimes legislatures. We have a couple of that. And so we can say that Algeria has led the legislative infrastructure and digital infrastructure to move forward towards economic. And at the end, I’d like to convey the greetings of the parliamentarian speaker, Thank you so much. Thank you.


Tsvetelina Penkova: Thank you for touching upon the topic that’s going to be debated, and how to prevent online harms, actually. So I know that we are running out of time, but there were a lot of specific questions addressed to our panelists, so I’ll turn back to you in whichever order you prefer to take the floor. So we’re going to take another five to six minutes of your time, but the audience is very engaged. So thank you. Who wants? Robert, thank you.


Robert Opp: I want to say thank you for the questions, because the quality of questions and comments, I think, indicates the level of interest. but also just the importance of this topic, and it makes me really happy to hear these kinds of questions coming out of parliamentarians and others. There were a couple of questions on the role of the UN in a declaration around AI and the Common Capacity Platform, and I want to address those a bit jointly. Professor Gong mentioned that there was, at this past September, the UN approved the Pact for the Future, which had as one of the parts the Global Digital Compact, and if you have not seen the Global Digital Compact yet, I would really encourage you to look for it on the UN website, because it sets out the direction agreed by 193 member states on issues related to capacity building, to artificial intelligence, and so on. At the moment, in terms of a declaration on AI and AI governance and human rights, to my understanding, most of the discussion so far has been that there’s no need to open up the Universal Declaration of Human Rights, but rather we need to think through the implication of the Universal Declaration of Human Rights in the digital space, which is being done inside the UN by the Office of the High Commissioner for Human Rights and a number of other agencies, as well as with the engagement of Permanent Missions in New York as well. And so I think the Global Digital Compact has a number of statements around digital rights and human rights in the digital space that represents the commitment collectively of countries. In terms of the AI portion of that, there is a very active discussion right now on the potential for international or global governance of AI, but that depends on member states more than the UN. because member states will need to instruct the United Nations agencies on how to proceed. So there is an agreement in the Global Digital Compact on the creation of an international scientific panel for AI, the creation of a fund for AI, including around capacity, and an annual dialogue on artificial intelligence that would happen every year to look at these issues. So it does kind of, to get us to the point of saying, you know, could we do something like the EU AI Act, will require the member states to come around that. And in a similar way, the issue around capacity. In the Global Digital Compact, there are commitments around capacity, but the commitments will only become real if there’s resources put behind that, which again is an issue for member states to discuss. I mean, we already as agencies, we work together. So another example of capacity building is UNDP and the International Telecommunication Union, ITU, have a joint project that has received funds from the European Union to support the capacity building of policymakers around the world by developing 17 different courses in different aspects of digital governance and things. And that will be available to as many as 5,000 policymakers in the current budget that we have. And ideally, we would expand that. So there is a lot of discussion around how do we bring those efforts together and make them more available to more people. So I very much appreciate the signal. And then Egypt, we work extensively, UNDP works extensively with the Ministry of ICT and Minister Talat. We have, I think, something in the order of 170. million dollars of projects that are going on, including the Applied Innovation Center and many other things that are going on in the country. 
And that is also actually supported from the resources of the government of Egypt as well, so we bring the technical assistance and we’re very grateful. We were also grateful to host in Egypt this year, or the government hosted, the first global digital public infrastructure summit, and that was held in October and brought together people from around the world to focus on this issue. And speaking of DPI, the question on DPI, and I’m sorry, I’m trying to work my way through the questions, our view as the United Nations is we should not be implementing technology that does not have the corresponding policies and governance mechanisms around it. In digital public infrastructure, we have, in the last General Assembly this year, announced or launched, rather, the Universal Digital Public Infrastructure Safeguards Framework. That was done together with the UN Technology Envoy’s office, and it represents more or less exactly what you were saying. It is literally a framework. As countries introduce digital public infrastructure, the framework has the principles, policies, and best practices around the kinds of governance, laws, policies that should be in place, policies for data protection and privacy, policies when implementing digital identity platforms, policies around data exchange and data governance, and all of those kinds of things. We are now working to implement that, and for us, we will not work with a country when we don’t have the technology joined together with the governance. So that’s the way we approach it.


Tsvetelina Penkova: Thank you so much. Very specific and comprehensive. Most of the questions were to you anyway, so you had to take the time. Now for like some, I don’t think this is any need for me to answer. So Professor Gong, you’re not taking?


Mario Nobile: Only one minute if I may. Robert answered all the questions, but three points. The first one, Italy, we, our agency with the Ministry of Foreign Affairs is carrying on the Piano Mattei. So our colleague of Egypt, Algeria, and Senegal, we will get in touch about this cooperation between Italy and African countries. And the second one, the Italian presidency of the G7 posed a point about the global governance of artificial intelligence. So we were talking about human rights, but also objectives like the UN sustainable imperative of no one must be left behind. So this is a point. The Italian point of the G7 presidency is we need a global governance about artificial intelligence. The third point, some of your questions. We, it took time to change our mind from horses to cars. Okay? Many years. Now, this relentless pace of artificial intelligence, we are facing months. In Italy, this is my two cents for discussion. In Italy, actually, now we are talking about tax imposition. to artificial intelligence tools. My colleague, the director of the National Institute for Welfare, in a public debate she said, persons must pay taxes. I’m happy to pay taxes to help others. If we are ready to use artificial intelligence tools, why cannot impose taxation? This is a question, not an answer.


Gong Ke: Let me just add a very short answer. So, from the questions, you raise a very important concept, that’s human rights. I think it’s an overarch goal for all of us. And let me remind that early this year, there’s a very important document released by the United Nations, edited by the high-level consultancy body for AI, that is the Governance AI for Humanity. So this is a very important document. And also I’d like to mention that two years ago, UNESCO has released a recommendation on the ethics of AI. That’s a very important standard-setting document. And then, I think it is also important, dealing with the interoperability of the DPI, the international norms. I think it is a very important role for the AI to play. And the good news is that the UN is going to set an AI office in the headquarters in the near future. So we do hope that we can see a more coordinated international operation on AI ethics and adaptations. So, I stop here.


Robert Opp: I think you’re better at representing the UN than I am. Thank you.


Tsvetelina Penkova: That’s the end of our panel. I just want to say thank you to the panelists and to the very engaged audience with the insightful and spot-on questions. A lot of topics were touched upon, a lot of them were very important, and I want to also thank the hosting country, Saudi Arabia, for having the opportunity to discuss all those key matters. As a concluding sentence, I would just say, let us commit to use the insights from today’s discussion to actually craft and create policies and incentives that are truly inclusive and leave no one behind. Thank you so much, and enjoy the rest of the program today.



Robert Opp

Speech speed

140 words per minute

Speech length

1798 words

Speech time

765 seconds

Need for digital literacy and capacity building across sectors

Explanation

Robert Opp emphasizes the importance of addressing capacity challenges across various sectors, including the private sector, civil servants, higher education, and parliamentarians, as well as general digital literacy among the public. He highlights that capacity building is a complex challenge requiring multiple kinds of interventions.


Evidence

UNDP has worked with 53 countries to assess local digital ecosystems and design action plans for capacity building. They have launched a center of competence on AI and digital skills for civil servants in Kenya, in partnership with Microsoft and Huawei.


Major Discussion Point

Digital Inclusion and Capacity Building


Agreed with

Gong Ke


Mario Nobile


Agreed on

Importance of digital literacy and capacity building


Differed with

Mario Nobile


Gong Ke


Differed on

Approach to digital literacy and capacity building


Implementing digital public infrastructure with corresponding policies and safeguards

Explanation

Robert Opp stresses the importance of implementing digital public infrastructure (DPI) alongside appropriate policies and governance mechanisms. He mentions the Universal Digital Public Infrastructure Safeguards Framework as a guide for countries implementing DPI.


Evidence

The UN launched the Universal Digital Public Infrastructure Safeguards Framework at the General Assembly, providing principles, policies, and best practices for governance, data protection, privacy, and data exchange when implementing DPI.


Major Discussion Point

Digital Public Infrastructure and Governance


Agreed with

Gong Ke


Agreed on

Tailoring digital solutions to local contexts


Partnerships with private sector for digital skills training

Explanation

Robert Opp highlights the importance of collaborating with the private sector for digital skills training. He notes that the private sector often possesses cutting-edge skill sets that can be valuable in capacity building efforts.


Evidence

UNDP’s partnership with Microsoft and Huawei in Kenya to launch a center of competence on AI and digital skills for civil servants.


Major Discussion Point

Public-Private Partnerships for Digital Development



Gong Ke

Speech speed

101 words per minute

Speech length

970 words

Speech time

573 seconds

Importance of engineering capacity building, especially in developing regions

Explanation

Gong Ke emphasizes the need for proactive investment in human resources, particularly in the global south and less developed regions. He stresses the importance of increasing engineering capacity to enhance problem-solving capabilities in specific social, environmental, and financial conditions.


Evidence

The World Federation of Engineering Organizations is carrying out a 10-year Engineering Capacity Building for Africa program, aiming to train more than 100,000 engineers in AI and digital tools.


Major Discussion Point

Digital Inclusion and Capacity Building


Agreed with

Robert Opp


Mario Nobile


Agreed on

Importance of digital literacy and capacity building


Differed with

Robert Opp


Mario Nobile


Differed on

Approach to digital literacy and capacity building


Need to tailor digital platforms to local contexts and languages

Explanation

Gong Ke argues for the importance of tailoring digital platforms to local contexts, including different economic development levels, languages, and cultural norms. This approach is crucial for easing the adaptation of digital technologies, including AI.


Evidence

He mentions the example of China, where platforms need to be adapted for different economic development levels and various dialects.


Major Discussion Point

Challenges of Digital Transformation


Agreed with

Robert Opp


Agreed on

Tailoring digital solutions to local contexts


Collaboration across sectors to support digital adaptation

Explanation

Gong Ke emphasizes the importance of open, accessible platforms that link developers, users, investors, and managers. These platforms should support collaboration across government, private sector, academia, and civil society to facilitate the adaptation of digital technologies.


Evidence

He mentions platforms in the form of public cloud services and open source communities that provide pre-trained foundational models, standard datasets, computing power, and technical training.


Major Discussion Point

Public-Private Partnerships for Digital Development



Mario Nobile

Speech speed: 96 words per minute
Speech length: 982 words
Speech time: 610 seconds

Digital literacy programs as cornerstone of digital services adoption

Explanation

Mario Nobile emphasizes the importance of digital literacy programs in the adoption of digital services. He argues that users must understand the limits and opportunities of digital tools, such as AI chatbots, to effectively interact with digital public services.


Evidence

Italy’s efforts to provide various types of literacy programs for digital identity, digital signatures, and e-invoicing, tailored for engineers, domain experts, and citizens.


Major Discussion Point

Digital Inclusion and Capacity Building


Agreed with

Robert Opp


Gong Ke


Agreed on

Importance of digital literacy and capacity building


Differed with

Robert Opp


Gong Ke


Differed on

Approach to digital literacy and capacity building


Italy’s development of digital public infrastructure building blocks

Explanation

Mario Nobile outlines Italy’s efforts in developing digital public infrastructure building blocks. These include digital identity services, certified email, digital signatures, and platforms for payments, interoperability, and e-invoicing.


Evidence

He provides statistics on the adoption of these services, such as 40 million users of digital identity services and 45 million qualified electronic signature certificates in Italy.


Major Discussion Point

Digital Public Infrastructure and Governance


Rapid pace of AI development requiring adaptive governance

Explanation

Mario Nobile highlights the challenge posed by the rapid development of AI, which requires adaptive governance approaches. He contrasts the slow adoption of cars with the much faster pace of AI development, suggesting that governance needs to keep up.


Evidence

He mentions ongoing discussions in Italy about imposing taxes on AI tools, reflecting the need for new governance approaches in response to AI’s rapid development.


Major Discussion Point

Challenges of Digital Transformation



Tsvetelina Penkova

Speech speed: 139 words per minute
Speech length: 2128 words
Speech time: 916 seconds

Risk of deepening inequalities through digital divide

Explanation

Tsvetelina Penkova highlights the risk of digital transformation deepening inequalities. She points out that while digital technologies bring opportunities for growth and connectivity, they can also exacerbate societal divides if not managed properly.


Evidence

She mentions that some communities still lack access to the internet, have low digital literacy, or lack necessary resources to participate meaningfully in digital transformation.


Major Discussion Point

Challenges of Digital Transformation



Audience

Speech speed: 116 words per minute
Speech length: 1352 words
Speech time: 698 seconds

Infrastructure and funding challenges for digitalization in developing countries

Explanation

Audience members from developing countries highlight the challenges of infrastructure and funding for digitalization efforts. They emphasize the importance of connecting remote areas and the role of parliamentarians in lobbying for these initiatives.


Evidence

An audience member from Senegal mentions the challenges of infrastructure and funding for digitizing administration and connecting remote areas.


Major Discussion Point

Challenges of Digital Transformation


Need for global governance framework for AI and digital technologies

Explanation

Audience members express concern about the need for global governance of AI and digital technologies. They suggest the possibility of a UN declaration to protect community values and ethics in the context of AI use.


Evidence

An audience member from Bahrain asks about the possibility of a UN declaration to protect communities, values, and ethics in relation to AI use, similar to the EU AI Act.


Major Discussion Point

Digital Public Infrastructure and Governance


Request for UN to create common platforms for digital literacy and capacity building

Explanation

An audience member suggests that the UN should create common platforms to unify efforts in building literacy programs, digital inclusion initiatives, and increasing engineering capacity. This approach would allow countries to benefit from a centralized resource for capacity building.


Major Discussion Point

Digital Inclusion and Capacity Building


Request for more international cooperation on digital initiatives

Explanation

Audience members express interest in international cooperation for digital initiatives. They seek opportunities to collaborate with organizations like the World Federation of Engineering Organizations and UN agencies for capacity building and digital development projects in their countries.


Evidence

An audience member from Egypt inquires about potential collaboration with the World Federation of Engineering Organizations for capacity building programs.


Major Discussion Point

Public-Private Partnerships for Digital Development


Agreements

Agreement Points

Importance of digital literacy and capacity building

speakers

Robert Opp


Gong Ke


Mario Nobile


arguments

Need for digital literacy and capacity building across sectors


Importance of engineering capacity building, especially in developing regions


Digital literacy programs as cornerstone of digital services adoption


summary

All speakers emphasized the critical need for digital literacy and capacity building across various sectors and regions, particularly in developing areas, to ensure effective adoption and use of digital technologies.


Tailoring digital solutions to local contexts

speakers

Robert Opp


Gong Ke


arguments

Implementing digital public infrastructure with corresponding policies and safeguards


Need to tailor digital platforms to local contexts and languages


summary

Both speakers stressed the importance of adapting digital solutions, including infrastructure and platforms, to local contexts, considering factors such as language, culture, and existing policies.


Similar Viewpoints

All three speakers highlighted the importance of collaboration between public and private sectors in developing digital infrastructure, skills, and services.

speakers

Robert Opp


Gong Ke


Mario Nobile


arguments

Partnerships with private sector for digital skills training


Collaboration across sectors to support digital adaptation


Italy’s development of digital public infrastructure building blocks


Unexpected Consensus

Need for global governance of AI and digital technologies

speakers

Mario Nobile


Audience


arguments

Rapid pace of AI development requiring adaptive governance


Need for global governance framework for AI and digital technologies


explanation

Both Mario Nobile and audience members unexpectedly agreed on the need for global governance frameworks for AI and digital technologies, despite coming from different perspectives (government official and general public).


Overall Assessment

Summary

The main areas of agreement centered around the importance of digital literacy and capacity building, tailoring digital solutions to local contexts, and the need for collaboration between public and private sectors in digital development.


Consensus level

There was a high level of consensus among the speakers on these key issues, suggesting a shared understanding of the challenges and potential solutions in digital transformation. This consensus implies that future policies and initiatives in this area are likely to focus on these agreed-upon priorities, potentially leading to more coordinated and effective efforts in digital inclusion and development.


Differences

Different Viewpoints

Approach to digital literacy and capacity building

speakers

Robert Opp


Mario Nobile


Gong Ke


arguments

Need for digital literacy and capacity building across sectors


Digital literacy programs as cornerstone of digital services adoption


Importance of engineering capacity building, especially in developing regions


summary

While all speakers agree on the importance of digital literacy and capacity building, they emphasize different aspects and approaches. Robert Opp focuses on a broad range of sectors, Mario Nobile emphasizes literacy for digital service adoption, and Gong Ke stresses engineering capacity building in developing regions.


Unexpected Differences

Approach to AI governance

speakers

Mario Nobile


Gong Ke


arguments

Rapid pace of AI development requiring adaptive governance


Need to tailor digital platforms to local contexts and languages


explanation

While both speakers discuss AI governance, their approaches differ unexpectedly. Mario Nobile suggests adaptive governance and potential taxation of AI tools, while Gong Ke focuses on tailoring AI platforms to local contexts and languages, which was not directly addressed by other speakers.


Overall Assessment

summary

The main areas of disagreement revolve around the specific approaches to digital literacy, capacity building, and AI governance.


difference_level

The level of disagreement among the speakers is moderate. While they agree on the importance of digital inclusion and capacity building, they have different emphases and approaches. These differences reflect the complexity of digital transformation and the need for multifaceted strategies tailored to different contexts and needs.


Partial Agreements

Both speakers agree on the importance of cross-sector collaboration for digital development, but they differ in their focus. Robert Opp emphasizes partnerships for skills training, while Gong Ke highlights collaboration for technology adaptation.

speakers

Robert Opp


Gong Ke


arguments

Partnerships with private sector for digital skills training


Collaboration across sectors to support digital adaptation



Takeaways

Key Takeaways

Digital inclusion and capacity building are critical priorities for inclusive digital transformation


Digital public infrastructure needs to be implemented alongside appropriate governance frameworks and safeguards


Public-private partnerships are important for driving digital skills development and service delivery


There are significant challenges in addressing the digital divide, especially in developing regions


AI governance and human rights considerations are becoming increasingly important as AI develops rapidly


Resolutions and Action Items

UNDP and ITU to provide capacity building courses on digital governance for up to 5,000 policymakers


Italy to engage with African countries on digital cooperation through the ‘Piano Mattei’ initiative


UN to establish an AI office at headquarters to coordinate international efforts on AI ethics and adaptation


Unresolved Issues

How to effectively address the digital divide in low-income and conflict-affected regions


Whether a UN declaration or amendment to human rights declaration is needed to address AI impacts


How to ensure digital literacy programs are ongoing rather than one-off projects


Approach to taxing AI tools and systems


Suggested Compromises

Using existing human rights frameworks and interpreting them for the digital space, rather than creating new declarations


Balancing rapid AI development with adaptive governance approaches


Tailoring digital platforms to local contexts while working towards global standards


Thought Provoking Comments

We sometimes talk about digital public infrastructure, which I know some of the other panelists are going to mention, but digital public infrastructure is that sort of digital roads and bridges that need to be put in place, usually put in place at the instigation of government, but often implemented itself by private sector and operated by private sector.

speaker

Robert Opp


reason

This comment introduces the important concept of digital public infrastructure and highlights the collaborative role of government and private sector in implementing it.


impact

It set the stage for further discussion on public-private partnerships in digital transformation and the importance of foundational digital systems.


We have 23,000 public administrations in Italy, local central health services, so it’s very difficult.

speaker

Mario Nobile


reason

This brief statement succinctly captures the complexity of implementing digital transformation across a large, diverse governmental system.


impact

It prompted a more detailed discussion of Italy’s digital initiatives and the challenges of coordinating across many entities.


So based on our observation, our, I mean, the Chinese Institute of Artificial Intelligence Development Strategy, based on our observation to the development of digitalization in China, we find that the two strategies are very, very crucial. First, from the supply side, it is a key measure to providing open, accessible platforms.

speaker

Gong Ke


reason

This comment introduces a strategic framework for digital development based on China’s experience, emphasizing open platforms.


impact

It shifted the discussion towards more concrete strategies for digital inclusion and development, particularly in developing regions.


Do you think that there is a need to issue a UN declaration to protect communities, values and ethics in the community based on the use of AI? Something like a declaration of human rights or maybe to amend the human rights declaration to protect the community of the misuse of AI?

speaker

Audience member from Bahrain


reason

This question raises important considerations about global governance of AI and protection of human rights in the digital age.


impact

It prompted a discussion on existing UN initiatives and the potential need for new global frameworks for AI governance.


Isn’t it a good idea for the UN to have common platforms to unify efforts to build such literacy programs, training for literacy or digital inclusion, increase engineering capacity and so on, instead of like every country is working like China, U.S., Europe, Saudi Arabia, Egypt, you know, all countries.

speaker

Audience member


reason

This suggestion highlights the potential for greater international cooperation in digital capacity building.


impact

It led to a discussion of existing UN initiatives and the challenges of coordinating global efforts in digital development.


Overall Assessment

These key comments shaped the discussion by broadening its scope from national digital initiatives to global considerations of digital infrastructure, inclusion, and governance. They highlighted the complexity of implementing digital transformation across diverse systems, the need for strategic approaches to digital development, and the importance of international cooperation and governance frameworks, particularly for AI. The discussion evolved from specific national examples to broader considerations of how to ensure equitable and ethical digital development on a global scale.


Follow-up Questions

How to ensure digital literacy programs reach all segments of society, including older and less tech-savvy populations?

speaker

Tsvetelina Penkova


explanation

This is important to address the digital divide and ensure inclusive digital transformation.


How to implement AI governance frameworks that protect community values and ethics?

speaker

Audience member from Bahrain


explanation

This is crucial for mitigating risks associated with AI and ensuring its responsible use.


Can the World Federation of Engineering Organizations work on capacity building for engineers in Egypt?

speaker

Mahamdan Nasser from Egyptian Parliament


explanation

This could help enhance engineering skills and AI readiness in Egypt.


What specific digital public infrastructure (DPI) projects are UNDP and the Agency for Digital Italy running or planning with Egypt?

speaker

Mahamdan Nasser from Egyptian Parliament


explanation

This information could facilitate collaboration and knowledge sharing between countries.


How to map out different DPIs within a framework that upholds human rights and equality?

speaker

Kundan from India


explanation

This is important for ensuring DPI implementations align with ethical and human rights standards.


Could the UN create common platforms to unify efforts in digital literacy, inclusion, and engineering capacity building?

speaker

Audience member


explanation

This could streamline global efforts and make resources more accessible to all countries.


How to ensure capacity building in technological issues is permanent, iterative, and incremental?

speaker

Aileen Febles from Cuban Parliament


explanation

This is crucial for keeping pace with rapid technological advancements and maintaining relevant skills.


How to address infrastructure challenges in digitizing administration, especially in developing countries?

speaker

Audience member from Senegal


explanation

This is key for enabling digital transformation in areas with limited resources.


How to implement taxation on AI tools?

speaker

Mario Nobile


explanation

This could be an important consideration for funding public services in the age of AI.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

DC-PR & IRPC: Information Integrity – Human Rights & Platform Responsibilities

DC-PR & IRPC: Information Integrity – Human Rights & Platform Responsibilities