Day 0 Event #165 From Policy to Practice: Gender, Diversity and Cybersecurity


Session at a Glance

Summary

This panel discussion focused on gender diversity and cybersecurity, addressing the underrepresentation of women in the cybersecurity workforce and the gender-specific impacts of cyber threats. Participants highlighted the need for gender mainstreaming in cybersecurity policies and capacity-building initiatives. They discussed various national and international efforts to increase women’s participation in cybersecurity, such as Canada’s feminist foreign policy and Chile’s incorporation of gender perspectives in its national cybersecurity strategy.


The panelists emphasized the importance of addressing gender-specific cyber threats, including online harassment, deepfakes, and AI-driven job displacement risks for women. They stressed the need for inclusive technology development and policy formulation that considers diverse perspectives. Capacity-building programs like ITU’s HerCyberTracks and the Women in Cyber Fellowship were highlighted as effective ways to empower women in cybersecurity.


The discussion also touched on the intersection of gender with other aspects of diversity, emphasizing the need for inclusive approaches that consider factors like neurodiversity and visual impairment. Panelists called for increased funding and support for successful regional programs addressing gender imbalances in cybersecurity education and training.


The panel concluded with plans to produce a compendium of good practices for mainstreaming gender into cybersecurity efforts, emphasizing the importance of multistakeholder collaboration in addressing these challenges. Overall, the discussion underscored the critical need for gender-responsive approaches in cybersecurity to ensure a more inclusive and secure digital future.


Keypoints

Major discussion points:


– The underrepresentation of women in the global cybersecurity workforce (only about 25%)


– The need to address gender-specific cyber threats and harms


– The importance of gender mainstreaming in cybersecurity policies and capacity building programs


– The role of emerging technologies like AI in exacerbating gender-based online threats


– Strategies for increasing women’s participation in cybersecurity, including targeted training programs


Overall purpose:


The goal of this discussion was to explore ways to increase gender diversity in cybersecurity and address gender-specific cyber threats. The panel aimed to gather insights and recommendations to inform future policy and capacity building efforts.


Tone:


The tone was largely constructive and solution-oriented. Panelists spoke candidly about challenges but focused on sharing positive initiatives and proposing ways to make progress. There was a sense of urgency about addressing these issues, but also optimism about potential solutions. The tone remained consistent throughout, with panelists building on each other’s points collaboratively.


Speakers

– Shimona Mohan: Moderator, from UN Institute for Disarmament Research


– Aaryn Yunwei Zhou: Deputy Director of the International Cyber and Emerging Technology Policy Division at Global Affairs Canada


– Yasmine Idrissi Azzouzi: Cybersecurity Program Officer at the International Telecommunication Union in Geneva


– Hoda Al Khzaimi: Director of the Center for Cybersecurity at New York University Abu Dhabi, Founder and Director of the Emerging Advanced Research Acceleration for Technologies, Security, and Cryptology Research Lab and Center


– Catalina Vera Toro: Alternate Representative, Permanent Mission of Chile to the Organization of American States


– Luanda Domi: Gender Mainstreaming and Cyber Skills Development Manager, Global Forum on Cyber Expertise


Additional speakers:


– Pavel Mraz: Cybersecurity Researcher at UNIDIR


– Jocelyn Meliza: MAG member and co-facilitator for the BPF on cybersecurity


– Paula: Audience member, cybersecurity policy advisor


– Kosi: Audience member, student from Benin, chair of an NGO called Women Be Free


Full session report

Gender Diversity and Cybersecurity: Addressing Challenges and Opportunities


This panel discussion, moderated by Shimona Mohan from the UN Institute for Disarmament Research (UNIDIR), brought together experts from various organisations to explore the critical issue of gender diversity in cybersecurity. The conversation centred on the underrepresentation of women in the cybersecurity workforce, gender-specific cyber threats, and strategies to promote inclusivity and equality in the field.


Underrepresentation and Policy Initiatives


The discussion began by highlighting the stark gender imbalance in the global cybersecurity workforce, with women representing only about 25% of professionals in the field. Panellists agreed on the urgent need to increase this representation and shared various national and international efforts to address the issue.


Aaryn Yunwei Zhou, from Global Affairs Canada, emphasised the importance of gender-responsive policies, citing Canada’s feminist foreign policy and the use of Gender-Based Analysis Plus (GBA+) assessments for all policies. Zhou also mentioned Canada’s involvement in the Global Partnership for Action on Gender-Based Online Harassment and Abuse, demonstrating a commitment to addressing gender-specific cyber threats at an international level.


Catalina Vera Toro from Chile’s Permanent Mission to the Organization of American States highlighted her country’s gender-responsive national cybersecurity policy, which aims to increase women’s participation in cybersecurity to 35% by 2030. Toro also discussed Chile’s feminist foreign policy and its focus on promoting gender equality in international forums, including the Open-Ended Working Group on cyber issues.


Gender-Specific Cyber Threats


The panel delved into the unique challenges women face in the digital sphere, emphasising the need to address gender-specific cyber threats. Aaryn Yunwei Zhou pointed out the disproportionate impact of internet shutdowns on women, while Hoda Al Khzaimi, from New York University Abu Dhabi, presented alarming statistics on deepfake content, revealing that 96% of all deepfake content online is non-consensual sexual content targeting women. Al Khzaimi noted, “These attacks often aim to silence journalists, politicians, and activists,” highlighting the broader societal implications of these threats.


The discussion also touched on the potential exacerbation of gender disparities due to emerging technologies. Luanda Domi, from the Global Forum on Cyber Expertise, shared a concerning statistic: “3.7% of women’s jobs globally are now at risk of being replaced by AI compared to 1.4% of men’s jobs.” This underscored the importance of training women for AI-driven roles and considering the gendered impact of technological advancements.


Capacity Building and Education Initiatives


A significant portion of the discussion focused on strategies to increase women’s participation in cybersecurity through targeted training programmes and capacity-building initiatives. Yasmine Idrissi Azzouzi from the International Telecommunication Union (ITU) in Geneva highlighted the organisation’s Women in Cyber Mentorship Program, which has transitioned into the HerCyberTracks programme. This initiative aims to empower women in the public sector through a holistic approach combining training, mentorship, role modeling, community building, and real-world exposure.


Aaryn Yunwei Zhou mentioned Canada’s Women in Cyber Fellowship for diplomats, demonstrating the importance of specialised programmes targeting specific sectors. Hoda Al Khzaimi stressed the need for deeper technical education beyond surface-level awareness, while Luanda Domi highlighted the importance of tailoring programmes to address specific needs, including neurodiversity.


The panellists agreed on the necessity of comprehensive and tailored capacity-building programmes to address the diverse needs of women and other underrepresented groups in cybersecurity. They also emphasised the importance of localisation and sustainability in these programmes, ensuring that they are adapted to regional contexts and can continue to have long-term impact.


Challenges and Future Directions


The discussion touched on broader challenges in promoting gender diversity in cybersecurity. Luanda Domi pointed out that only 0.05% of development budgets target gender initiatives, highlighting the need for increased funding and support for successful regional programmes. Domi also mentioned the Cyber Safe Foundation’s program in Africa as an example of effective regional initiatives addressing gender imbalances in cybersecurity education and training.


Catalina Vera Toro discussed the ethical considerations and challenges associated with emerging technologies, emphasising the need for responsible development and deployment of AI and other advanced systems.


The panel addressed the digital gender divide in Africa, recognising the unique challenges faced by women in the region and the importance of tailored solutions. Audience members also raised the need for multilingual training programs and a centralised platform listing available fellowships and programs for cybersecurity capacity building.


Conclusion


The discussion highlighted the critical need for gender-responsive approaches in cybersecurity to ensure a more inclusive and secure digital future. While there was a high level of consensus among speakers on the importance of addressing gender disparities in cybersecurity, the conversation also revealed the complexity of the issue and the need for multifaceted solutions.


The panel emphasised the importance of gender mainstreaming in cybersecurity policies, the development of targeted capacity-building programmes, and the need to address gender-specific cyber threats. The discussion also highlighted the potential of emerging technologies like AI to exacerbate gender-based online threats, emphasising the need for proactive measures to ensure that technological advancements benefit all genders equally.


As the field of cybersecurity continues to evolve, the insights and recommendations from this panel provide valuable guidance for policymakers, educators, and industry leaders working towards a more diverse and inclusive cybersecurity workforce. The panel concluded with plans to produce a compendium of good practices for mainstreaming gender into cybersecurity efforts, as mentioned by Pavel Mraz from UNIDIR, underscoring the importance of sharing knowledge and successful strategies across different regions and organisations.


Session Transcript

Shimona Mohan: Catalina on the screen. I hope you can hear us. So we don’t hear you yet, but if you hear me, please give me a thumbs up, or a nod, or something. OK, perfect. You can hear me. Fantastic. So the reason why I can’t hear you is because I don’t have my earpiece on yet. So I should be able to hear you in a couple of minutes as well. OK. OK. OK. So I hope that’s all good to go. And I hope our colleagues in the room can hear me loud and clear. Fantastic. So thank you all for joining us, and for those of us who are joining us online as well, very nice to have you with us to discuss something that we’ve heard a lot about this year, which is gender diversity and cybersecurity. We know that world over, there’s only about 25% of women in the cybersecurity workforce. So this is a very important conversation for us to have. And I’m very, very glad that us, as the United Nations Institute for Disarmament Research, along with the organizations that have collaborated on this event with us, which is the International Telecommunication Union, Global Affairs Canada, and the Stimson Center, join us to convene this conversation and further the discussions around gender diversity and cybersecurity. So just to give you a little bit of a brief about today’s session, we know that there is a growing acknowledgment of the gender dimension of cyber threats, as well as the persistent digital gender divide, with women representing only maybe about 25% of the global cybersecurity workforce. However, specific gender-differentiated impacts of cyber threats and strategies have continued to increase when it comes to the global cyberspace at the current moment. And this kind of hinders the multi-stakeholder efforts to enhance cyber resilience and promote inclusive international peace and security in governance models. So today, with our fantastic panel, which is drawn from all kinds of diversities around geographies, around stakeholders, and around substantive expertises, we’ll discuss a little bit more about how to make sure that these gender-specific harms and threats are countered in an effective manner, both through substantive measures as well as through governance measures. So joining us today, just to give you a brief introduction, is, firstly, Ms. Aaryn Yunwei Zhou, who is the Deputy Director of the International Cyber and Emerging Technology Policy Division at Global Affairs Canada. We also have Professor Hoda Al Khzaimi, who is joining us online for now, and hopefully also in person, is the Director of the Center for Cybersecurity, New York University Abu Dhabi, and also the Founder and Director of the Emerging Advanced Research Acceleration for Technologies, Security, and Cryptology Research Lab and Center. We also have Ms. Yasmine Idrissi Azzouzi, who is the Cybersecurity Program Officer at the International Telecommunication Union in Geneva. We’ll also have Ms. Luanda Domi, who is the Gender Mainstreaming and Cyber Skills Development Manager, Global Forum on Cyber Expertise. And we also have online with us Ms. Catalina Vera Toro, who is the Alternate Representative, Permanent Mission of Chile to the Organization of American States. Myself, I am Shimona Mohan from the UN Institute for Disarmament Research. And joining us online is also my colleague, Mr. Pavel Mraz, who is a Cybersecurity Researcher, also at UNIDIR. So a couple of housekeeping announcements. 
I’ll ask our fantastic panelists to give us a little bit of a brief about the questions that I would like to ask you about the issue of cybersecurity and cybersecurity. We’ll have a round of introductions for about five to six minutes each, and then we’ll open up the floor for discussions. So please come prepared with your questions after you hear from our panelists. I would like to also flag that this discussion is part of an ongoing process of collecting recommendations for a compendium of good practices around gender and cybersecurity, mainstreaming gender and cybersecurity, that we as UNIDIR are undertaking with the help and collaboration and contribution of our partner organizations who are also on this panel. So with that, I think we’ll start off with the interventions for the day. And I will first invite, perhaps, since we have in the room, Aaron to kind of give us a little bit of a brief around how can governments tackle gendered cyber threats and attacks, and what kind of policy imperatives are required, and how can a government sort of mainstream these gender considerations in their cyber and digital policy?


Aaryn Yunwei Zhou: Thank you so much for inviting me to this panel. I’m pleased to be here. So for Canada, our approach is to mainstream gender considerations in all aspects of our work, specifically with threats and attacks. We take gender into consideration for both assessments of the threats and our responses to them. I’m sure this audience knows there’s a gender dimension to every aspect of cybersecurity, so not taking gender into consideration actually makes our responses far less effective. So a couple of examples I just wanted to share include internet shutdowns, specifically in Iran. For example, women tended to use Instagram much more, and the internet shutdown had a much more disproportionate effect on their both social and economic participation. And we need to get rid of this arbitrary divide between online and offline, as offline violence is often preceded by online violence. So this is not only the right thing to do, but again, will make our responses more effective. In terms of policy imperatives in Canada, all of our policy initiatives and programs have to go through what is called the Gender-Based Analysis Assessment, GBA+. This isn’t only about how policies affect women, but also how policies affect men and other gender-diverse people. And it’s not only about gender, it also takes into account race, class, ethnicity, cultural background, to really understand how specific policy responses affect people in all their diversity, and to make sure that the programs that we’re designing are fit for purpose. Yeah, and I’ll stop there.


Shimona Mohan: Perfect, thank you so much. I’m also wondering, since you’re here as a representative of Canada, and we know that Canada has a feminist foreign policy with a specific Women, Peace and Security Agenda National Action Plan, that also has several mentions of online harassment and abuse against women and people of diverse gender identities. How have these helped in the prioritization of the inclusion of tech considerations when it comes to gendered harms?


Aaryn Yunwei Zhou: So I think the thing that we do that’s top of mind for us is to create a taboo against online gender-based violence and harassment. So Canada was one of the founding members of the Global Partnership for Action on Gender-Based Online Harassment and Abuse. It’s grown to about 15 countries to form a community of like-minded countries that are building norms against online gender-based harassment and violence at a time when this is increasingly controversial, unfortunately. And what this has meant is we have a community of practice amongst different governments that can share experiences and learnings on how to do this both domestically and in multilateral contexts, notably around the Commission on the Status of Women every year. And another important thing that we do is sponsor the Women in Cyber Fellowship. Because of this program, we’ve been able to train over 50 women diplomats from around the world. Not only has this meant more diverse voices around the table at the Open-Ended Working Group, it’s the first time that any First Committee process has reached gender parity at the UN, and it also creates a community for women diplomats, and they can turn to each other and share learning and support each other to bring more diverse voices to those processes.


Shimona Mohan: Fantastic. Thank you so much. In fact, the Women in Cyber Fellowship has been a beacon of hope for all of us. And on the basis of the Women in Cyber Fellowship, we’ve also kind of, at UNIDIR, established something called… the Women in AI Fellowship to replicate the same kind of success in AI-related conversations for women diplomats as well. Speaking of the OEWG, if I may turn to Yasmine now, I think the OEWG on cyber has often also focused on reducing the gender digital divide to ensure that women get access and equal opportunities in the online space. And this is also, I know, something that ITU has worked on extensively. So how are we currently facing and sort of faring with the divide? And how do you think this gap can be sort of closed?


Yasmine Idrissi Azzouzi: Thank you for that, Shimona. So indeed, the ITU for more than 20 years now, we’ve been very active in closing the digital gender divide in particular by equipping women with digital skills. So the most sort of flagship initiatives when it comes to that are EQUALS, which is a global partnership, and also Girls in ICT, which is very much focused on inspiring younger women to pursue careers in technology. In the specific context of cybersecurity, we are mandated by a specific resolution to promote the growth and development of a skilled and diverse cybersecurity workforce, and in particular, to address the skills shortage by including more women and promoting their employment. So based on that, in 2021, we first launched the Women in Cyber Mentorship Program, which is different from the Fellowship Program, in three regions, so Arab region, Asia, and Asia, sorry, Asia-Pacific, and Africa. And one of the cornerstones of this program was really the soft skills development, the mentorship aspect. And as we were running it for three, four years, we received some feedback from participants that when they often participate in Women in Cyber-related programs or Women in Tech-related programs, there’s a strong focus on soft skills and a strong focus on developing leadership and whatnot. And they wanted to go a little bit beyond that and really focus on the technical skills and the hard skills in parallel to this. So then we decided to take this experience and we launched HerCyberTracks, which is a highly specialized and tailored training. So this program puts into practice what I believe to be a holistic approach to capacity building, and why holistic? Basically, to me, I think that capacity building is not just about training. Training is, of course, very important, but it’s also focusing on other elements. One, promotion of role models. So definitely elevating individuals that are from underrepresented communities as successful women in cybersecurity, shedding the light on their successes, and showcase that basically it is possible to have somebody that looks like me be able to be in a leadership position in cybersecurity. Second is community building. So facilitating peer exchange, support networks is definitely key to help individuals navigate challenges when it comes to being a woman in cybersecurity in a male-dominated field. Third is exposure to the reality of the field. So we do so through offering study visits to CERTs of other countries, for example, where women can see how things are being done and learn on the job, basically job placements. And this basically shows career pathways and provides also practical advice on the reality of the field. And then, last but not least, of course, mentorship is still very key. So we connect aspiring professionals with mentors from a bit all over the world and basically guides them through career pathways that have to do with their professional life, but also their personal life, because, of course, we are, let’s say, multifaceted beings as well. So with HerCyberTracks, we’ve tailored curricula to be specifically for women in the public sector. And these are across three tracks, three cybertracks. One is policy and diplomacy, the second is incident response, and the third is criminal justice or cybercrime. And basically, we have decided to go, let’s say, beyond traditional training, incorporate study visits, incorporate networking opportunities, incorporate mentorship, and also focusing very much on inter-regional exchange. 
So our cohorts have participants from Africa, from Eastern Europe in particular, very different contexts, where actually people were surprised to be facing similar challenges, even if the context is quite different. So to sum up, basically, in order to really close the gender-digital divide and really promote what we call the equal, full, and meaningful representation of women in cybersecurity, not just as a checkbox being checked, but really meaningful and skilled representation, we must adopt a holistic approach such as this when it comes to capacity building, combining training, combining mentorship, combining role modeling, community building, and real-world exposure. Thank you.


Shimona Mohan: Perfect. Thanks so much, Yasmin. And this sounds like a fantastic sort of program for women of all walks of life to sort of join and make sure that they can contribute and learn a lot. And I took notes when you were talking, and I really like the fact that you focused on all the different ways of engaging them as tracks, and then sort of making sure that all of those tracks are coming together as one holistic program to make sure that they’re learning as much as they can. But I think going forward from here, I would also love to talk a little bit more about the tech side of things. How are we seeing these harms come up? Because we’ve heard a lot about these harms and how they have specific impacts or adverse effects on women and other gender-diverse individuals. But we’re lucky to have Professor Hoda on the call with us. So I will make use of her expertise and perhaps also ask her, what kind of gendered threats do you see sort of existing in the cyberspace? And who do you think these affect the most? And if you could give us some examples, that would be fantastic. I don’t see them on the chat, sorry, on the call, but I hope you can hear me.


Hoda Al Khzaimi: Can you hear me?


Shimona Mohan: Yes, I hear you, I hear you, I hear you, I hear you.


Hoda Al Khzaimi: Perfect. Thank you, Shimona, for inviting me to this important discussion. I think gendered cybersecurity threats are becoming increasingly urgent, especially as emerging technologies such as AI, deepfakes, and quantum computing continue to shape the digital landscape and today’s reality. Today, we’ll address two key areas, which are the example of these gender-specific threats, including those like threats that can be interpolated by emerging technologies and strategies for designing next generation solutions to mitigate these harms. The problem on the space for us is how do we define harm? When it comes to gender and women and vulnerable groups, harm is being defined on a magnified aspect, including intangible harm that comes with reputational risk. We have seen technologies as in deepfakes, for example, technology, which is one of the significant emerging threats, particularly for women, where it has been found by DeepTrace in 2019 that 96% of all deepfake content online is non-consensual sexual content targeting women. These attacks often aim to silence journalists, politicians, and activists, which is an example that we’ve seen recently within the Indian media for one of the Indian journalists who has been targeted with a deepfake video designed to discredit her work and incite harassment. Emerging technologies as well, like AI-based content verification tools that is being developed, for example, by Microsoft, where I’m talking about a Video Authenticator and blockchain initiatives, like the one that Adobe has been developing on Adobe Content Authenticity Initiative, are promising solutions for those kind of deepfake aspects, as well as other advanced machine learning and cryptographic signature schemes that aims to identify the origins of the contents and flag manipulated media online. We really need to encourage platforms to start authenticating every type of messaging that’s being created online, and also flagging non, I would say, integral or non-authentic material that is being exchanged. When we talk as well about the biometric expectations and AI biases within, for example, facial recognition, we know very well that facial recognition systems often exhibit systematic bias against gender in general, but women in particular of color, they have much more kind of weakness for. A study that has been generated in 2018 showed that the error rate for those systems is up to 34.7% for darker-skinned women in comparison for less than 1% for lighter-skinned men. Those biases can lead to wrongful surveillance and enforcement action, which often targets marginalized groups. So emerging technologies, such as federated learning, for example, and privacy-preserving AI models can reduce biases by ensuring diverse decentralized data training is being deployed. Additionally, initiatives by the ITU and UNESCO are pushing for global standards and ethical AI development, for example, emphasizing inclusivity is a must, and it should be considered in the design premise of those kind of technology. I’m talking about the cyber harassment and as well IoT devices. We have emerging technologies within the Internet of Things, for example, which introduce new vulnerabilities. Smart home devices and wearables can be exploited through stalking and harassment. 
There has been multiple incidents in the United States where female, loner females who have been subjected to being tracked by tracker devices and being subjected to targeted attack when they are in a specific area, affecting women not just in abusive relationships, but affecting women in general. According to the UNESCO, such misuse has already impacted 10% of women in developed nations. Security by design principles are a must for us when we are developing and designing the next generation solutions, not just in cybersecurity, but in general and the technology platforms.


Shimona Mohan: Thank you so much, Professor. I would also like to sort of, because you mentioned so many threats and especially the ones exacerbated by AI now, from more of a technical perspective, I would really love to understand what kind of measures can be taken, both proactive and reactive.


Hoda Al Khzaimi: I’m sorry, I can’t hear you, Shimona.


Shimona Mohan: Oh, sorry. Are you able to hear me now? So our technical colleagues in the room, if you could please ensure that Professor can hear us, that would be great. I understand we did a little bit of a technical thing to counter the glitch that was occurring in terms of the audio. So maybe in the meanwhile, until we’re resolving this glitch, and then I’ll come back to you, Professor. I assume you can’t hear me, so I’ll come back to you either way. But I actually wanted to ask the Professor first if we could have an overview of the kind of technical solutions that are possible to employ, both proactive and reactive. But I’ll save that question for when Professor Hoda can hear us. And in the meanwhile, because Yasmin spoke a lot about capacity building, I’d like to come back to her. Shimona, if you can hear us, we can’t hear you. Oh, not at all? Okay. I see. So maybe, could we unmute? We can’t hear you. Maybe you are on mute. Yes. Can you hear me now? Yes. I guess too much. Okay. Okay. Okay. Could you confirm if you’re able to hear me, with maybe like a thumbs up? No, you can’t?


Luanda Domi: Unmute the microphone.


Shimona Mohan: We see it as mute online, the room microphone. Could we unmute the room, please? Okay. Can you hear me now? Yes. Yes. Okay, perfect. Thank you. So, perfect. Okay. So, there’s always a saying in tech panels, that there’s no tech panels without tech issues. So, we’re living up to that. But, professor, if you can hear me, I would like to come back to you on a follow-up question that I had. Thank you for giving us a background, and especially the exacerbated threats by AI. From a technical perspective, I’d really like to understand what kind of measures we can take, both proactive and reactive, to sort of lessen these harms, and who should employ these measures. So, if you could speak a little bit to that, that would be fantastic. Okay. So, not sure if… No. Okay. Could we unmute, please? Okay. I’m sorry to ask this again, but can you hear me? So, professor Hoda, I’m not sure if you can hear me.


Luanda Domi: I think because she’s on the move, maybe she’s having challenges. Okay. Shimona, you can get back to her later.


Shimona Mohan: Sure. Okay. Sounds good. So, we’ll go back to my previous arrangement of questions, then. Okay. So I just wanted to get a sense of solutions that we can employ to make sure that we counter these gender-specific harms against women and people of diverse gender identities. And after more of a technical perspective, I wanted to come back to Yasmin and ask, since we want to effectively combat these gendered threats and harms, I know that a variety of internet governance communities perhaps need to be cognizant of these threats and then be well-trained to respond to them. And you spoke a little bit about this in your intervention about HerCyberTracks, which ITU is doing. I’d like to also understand, how do capacity-building interventions, such as HerCyberTracks or, as Aaryn mentioned, the Women in Cyber Fellowship, help in this effort?


Yasmine Idrissi Azzouzi: Thanks for that, Shimona. I hope there won’t be any sound effects that are unexpected. But I think that including them in the workforce, including more women in the workforce, would be key. Because, of course, as we know, in the cybersecurity field, there’s still this persistent challenge. There are issues beyond just recruitment, but also retention and meaningful representation of women in cybersecurity. And since women are disproportionately affected by online risks, this can have one or two effects. It can either encourage them to go into the field to be able to face these risks or discourage them, on the other hand. But even more so, the programs that we work with are with women in public sector. So women that are often politicians, diplomats, women in the public eye, which, of course, makes them the higher target for gender harms online, harassment, doxing, and whatnot. So through the holistic capacity-building approach that we use for HerCybertracks that I mentioned earlier, we also run these peer exchange platforms or sessions on the challenges of being a woman in cybersecurity. These are intergenerational and interregional. And now participants affectionately call them group therapy, because they’ve actually been platforms for them to share experiences and difficulty in overcoming obstacles in the workplace and overcoming obstacles in the workforce at large. Tears have been shed and hugs have been given at these kinds of things, even though these are very different contexts, again, from a cybersecurity and maybe socioeconomic perspective as well. So many of our participants, the reason why I’m telling you this story is that the key sort of conclusion that comes from these exchange sessions is the power of communities. And women are often left out of formal processes. So what happens at these formal processes, these boys clubs oftentimes, they have found strength in actually creating informal communities parallel to the formal ones. And programs like HerCybertracks and others have actually have the ability to help create these communities. So now past participants stay in contact, they share with each other, they ask questions, they learn from common experiences, and basically this ensures that today’s informal communities of women in cyber that are in government, in incident response, or in cybercrime, tomorrow become the formal networks of the women that are in leadership, which this can ultimately result in really real confidence-building measures and interregional cooperation. So all in all, being able to respond to these threats I think is not just simply a question of having well-trained people, but also about creating environments where diverse perspectives are encouraged, respected, are heard, and leveraged to ideally create these communities of support. Thank you.


Shimona Mohan: Fantastic. Thank you so much. It’s very eye-opening to get an insight into how all of this plays out from start to finish. And I think with this, I would also like to understand how this happens perhaps on more of a national level. So we’re glad to have with us Catalina on the screen, and I hope, Catalina, you can hear me just checking because of the tech PTSD. But I wanted to get a sense of how Chile, which is actually one of the few countries to have a specific focus on gender in its national cybersecurity policy. How does this kind of focus in the cybersecurity policy help both substantively, say, in terms of gendered harms, and also participatorily, in terms of the ratio of women cyber professionals in the country? So over to you, Catalina.


Catalina Vera Toro: Thank you, Shimona. Hopefully, you can hear me as well without an echo. Yes, all good. So first of all, thank you. And I want to say hi to my distinguished fellow panelists and also to the audience and express my appreciation for the opportunity to share a national experience on this important issue. As you mentioned, Chile has integrated a gender perspective on its national cybersecurity policy. So I will go very briefly on how such focus helps substantively the meaningful participation of women and gender diversity, and how there is broader implications also beyond in international cybersecurity discussions and governance. So back in 2023, I want to mention, we became the first South American country to have a feminist foreign policy. That means we joined Canada, the Netherlands, Mexico, and the like that were leading the way on this. And thus, at our core, Chile has advanced on its commitments on human rights and equality. But through this vision of feminist foreign policy, we want to achieve, through policy, a more inclusive country and a more egalitarian society, you could say. So particularly in regards of gender perspectives on cyber policy, there are very few countries, as you mentioned, that have this in their national strategies. For instance, in Latin America and the Caribbean, 14 include some reference to human rights, but only four incorporate gender perspective. That would be Argentina, Colombia, Ecuador, and us. However, Chile has updated its cyber policy, cybersecurity policy in 2024 from its 2017 policy that already had gender as a reference. But now we have incorporated a gender-responsive approach by mainly establishing the obligation of the state to protect and promote the protection of the rights of the people on the internet through strengthening of the existing institutions in cybersecurity matters by capacity building and updating legal frameworks and by gender mainstreaming. All these initiatives must preferentially consider women, both in terms of their protection and inclusion and positive action aimed at correcting the inequalities that continue to exist in our society, as my previous panelists covered very well, and also mainstreaming protection of children and youth, the elderly, and also the environment. And such focus achieved two major objectives, you could say, first, addressing gender-specific harms in cyberspace, but also promoting the inclusion of women in cybersecurity workforce. How we address harms? Well, basically, we ensure that these risks are not only acknowledged, but systematically addressed by incorporating specific measures for gender-sensitive threat analysis and response mechanisms. That means we have updated our policies, are also in the works to broadening policies and laws that mitigate gender-online harms by developing tailored strategies for protection and victim support, empower affected groups to report incidents without fear or stigma, build safer, more inclusive online environments where everybody can engage freely, and through that, measures that are not only ethical imperatives, you could say, they can also contribute directly to more robust, inclusive cybersecurity systems that protect all citizens. When it comes to participation and encouraging women to go into cybersecurity, as it was previously stated, women represent around 20%, 24% of the global cybersecurity workforce. 
So policies that prioritize gender inclusivity have a transformative potential, and so we have created initiatives such as scholarships, mentorship programs, and leadership opportunities for women in cybersecurity. Through our national cybersecurity ecosystem that we’re building, next year we will have a cybersecurity agency that will cover nationwide and will collaborate with the MFA for cybersecurity as a whole. We’re also paving the way for more equitable talent pipelines. This inclusivity not only improves the ratio of women and cyber professionals, but also enhance cybersecurity outcomes. And I want to mention something more. Nowadays, we are all talking about technology and how we have to get our people, our workforce, ready for that. When it comes to cybersecurity, it’s very interesting because it’s a very broad ecosystem, you could say, of professionals, like, for instance, cyber diplomats, cyber politicians, not only cyber tech people. So then we need the data, actually. How many people are getting into cybersecurity nowadays, not only in the technical field, but also in those broader aspects? And also, cybersecurity is expanding its scope. Therefore, we need more professionals. And on this, Chile is trying not only to get young women into cybersecurity, like to go in early stages into STEMs and so on, but also to help women later in their careers to reconvert into cybersecurity through, like, customized programs, you could say, that are targeted for a specific need, like women that have already been 10 years in the workforce and want to reconvert into cyber field with short programs, short programs with certification, and also to help them leverage their experience to get into the cyber field, because most of the times, the biggest, like, block, you could say, for this is that they don’t have cyber-specific experience. And there is where the government can come in and help incentivize companies to bring in this women workforce that are being reconverted into this field. And for that, we have implemented, well, not only for cybersecurity, but for women in STEM and beyond, the IWALA certification, you could say, that promotes, you know, through government incentives, parity in the workforce in public sector, but also in private sector, so that they can also venture into this new, you know, cybersecurity opportunities that are at hand and make the most out of it. So I’ll leave it at that. Thank you.


Shimona Mohan: Thank you so much for… So much for… Could we unmute, please? Yes. Thank you so much for giving us that snapshot, Catalina. I also know that Chile has often mentioned, and you talked about this in your intervention just now as well, about promoting a sort of gendered approach to cybersecurity and gender-sensitive cyber capacity building at the policy level. I’m really interested in figuring out how does that prioritization help contribute effectively to also your international discussions around cybersecurity? And I know Chile is also very active in the Organization of American States, so perhaps there’s a regional mainstreaming aspect to this as well. So if you could speak a little bit, that would be fantastic.


Catalina Vera Toro: Yeah, sure. Thank you for that question. Well, we are firm believers that in multilateralism, you could say, and in regards of cyberspace, I think there is… a great opportunity at hand to do it more broadly and inclusively, you could say. So Chile has been a big promoter, along with Canada and like-minded countries, on including reference on human rights and also the need of gender perspective when it comes to the internet, but also to cybersecurity, because harms are differently felt for vulnerable groups, and those need to be represented in those discussions. And therefore, I must say, I’m also a Women in Cyber Fellow, so I’m living testament on how that fellowship can help, you know, women come into leadership roles. Now I’m the head of delegation to the Open-Ended Working Group for Chile, but also to have that sort of network where you can work with women and bring those concepts into the room and also to negotiate in languages. So when it comes to, like, for instance, the Open-Ended Working Group, we have consensus language on the annual progress reports that incorporate gender perspective. I think this is a good way forward and how we build like a universal, you could say, framework, whether it’s voluntarily or eventually legally binding, that incorporates a human-centric approach, but also a gender-responsive approach. It’s the best way forward and it’s an opportunity that we have nowadays that we need to take very practically. When it comes to the Organization of American States, yes, we do have a great program through CICTE on our security pillar at the OAS that basically is focused on recommendations for strengthening gender and cybersecurity through the regional organization, you could say, and they do this by first institutionalizing gender in national cyber strategies, so member states are encouraged to follow OAS guidance to formally integrate gender perspectives into the national cybersecurity policies and action plans. They also, through the organization, promote gender-sensitive capacity building, so they have programs that expand training programs specifically targeted to women, particularly in underserved and marginalized communities, to close that gender gap in cybersecurity expertise. There is also an enhanced regional collaboration, whether that’s through the CSIRTs of the Americas, but also other programs where we foster partnerships among OAS member states to share resources, best practices, experiences in gender-sensitive cybersecurity initiatives. For that, for instance, we have 11 confidence building measures and one of those is specifically on gender in the region. We also have programs that combat online gender-based violence. We developed a regional framework to try to address online harassment, exploitation, and abuse, ensuring coordinated responses to the cross-border issues, and also there are programs that are targeted to increase representation of women by creating affirmative policies to ensure women occupy leadership roles and technical roles in cybersecurity at the national and regional level. So there is a lot of work that we have done regionally as well, and we are trying to collaboratively try to promote the incorporation of human-centric and gender perspectives also in negotiations. 
So this is not only something that Chile is doing, but many countries in the region are also promoting the inclusion of these references, because having consensus language can also be a way to build through other negotiations and other specific topics, like for instance the UN Cybercrime Convention, to incorporate human right language and also like the Convention on the Rights of the Child and also gender perspective or gender-based violence concepts as we build a safer cyberspace for all. Thank you.


Shimona Mohan: Thank you so much, Catalina. That was a very good snapshot of how we’re seeing it play out, not just in Chile but in Latin America, and then moving beyond that, zooming out and internationally as well. And speaking of gender mainstreaming, we’re also lucky to have Luanda on the panel with us, who sort of does this as her entire job. So I was wondering if Luanda, you could also kindly help us draw a clearer picture of how this can be achieved in the area of cyber capacity building, and what does gender mainstreaming look like in practice? What kind of elements, for example, are particularly important for policy audiences to consider? And here I’m thinking more along the lines of perhaps DEIA principles or perhaps intersectionality that might also be of interest for policy professionals to consider. But over to you, Luanda.


Luanda Domi: you, Siobhana, and thanks for the invite. I actually think that my fellow colleagues did the job for me by actually giving some really good examples of what they’re doing on their current work on how gender can be mainstream in national policies like Chile, or through mentorship programs like ITU, and then Canada example for WIC fellowship and reaching gender parity in an open-ended working group, which we’re also as GFC very, very thrilled to facilitate that fellowship. If I can put it in more just steps to understand what it is about cybersecurity in terms of when we talk about gender mainstreaming, that we need to understand that it’s not just a technical issue, it is also a social issue, which means that gender significantly influences individuals’ or users’ experience and the perception of cybersecurity. We heard from Dr. Hoda a specific example of what are some gender-specific threats towards, for example, women or marginalized groups that might significantly change the experience of either participating into public discourse, or even taking jobs, or even removing themselves completely from online forums. So when we talk about gender mainstreaming, it’s important to talk about that it has to happen in two streams, policies and technologies that govern our digital world. And I think this is now so many examples we have with AI, you know, like that we have to kind of see that these technologies really do not take on biases that are in our real world. When we talk about foundation of gender mainstreaming, it’s very, very important that it’s systematically integrated in every step of capacity building. That’s the only way that it could be successful. And how is that in terms of designing, implementation, but also evaluation? So these are the three key things. For example, we all know that right now in the cybersecurity workforce, and it was mentioned today, the percentage of women in cybersecurity professions is quite low globally, but I think also regionally it differs. So our intention, we need to be intentional about very gender-specific goals that we want to reach, like in Chile’s case, for example, aiming that to increase a woman’s employment in cybersecurity role to 35% by 2030. This gives us clear policy, clear steps how to implement it, but also to evaluate it later on whether we actually succeeded in reaching that goal or not. Then when we talk about in terms of capacity building, one way or another, it was mentioned here today, we have to see what are systematic barriers beyond women leadership that are currently in cybersecurity and want to upskill, but what are systematic barriers to women’s participation in educational or training that are available for capacity building in cybersecurity? And this is where we talk about why it’s important to develop capacity programs intentionally that address gender-specific needs and how to teach professionals on how to counter online harassment and gender disinformation. What type of common tactics are used to silence women and marginalized voices online? And then address social engineering attacks, which we heard about today, that clearly disproportionately target women. And then I believe Yasmin mentioned more women in politics or leadership positions, which they are more visible to the public. So I think this is something that it’s quite practical to do and quite easy to do. However, I did want to mention why are we here, where we are today. 
And I recently came out to, came across to a UN Women report that was just launched and it says that only 0.05% of development budgets are targeted to gender initiatives. This is simply too low for us to be able to carry successful policies, policy implementation and programming for addressing gender and gender parity in a global level.


Shimona Mohan: Yes, thank you. Thank you so much, Luanda. I think that’s a very good sort of spotlight on the report also that you gave for the UN Women report and the very worrying statistic of 0.05%. It’s truly, truly terrible. But I have another question for you, but before I turn to you, maybe I can come back to Professor Hoda who has joined us in the room. And this is actually taking a thread from what Luanda said about how it’s important to have both tech and policy perspectives around this so that we can push for meaningful change. Professor Hoda, you earlier spoke about the kind of threats that we have which are gendered in nature. I’d now like to sort of flip the coin and ask what kind of measures can we take from a technical perspective, both proactive or reactive, that can sort of lessen these harms? And who should perhaps take these measures? Over to you.


Hoda Al Khzaimi: Thank you so much. I think it’s a multifaceted approach. We need to definitely tackle the technological barrier because we’re trying to address the fact that women are targeted on digital platforms. And I think for that kind of measure, we need to change the way we develop the technology. At the moment, technology is being predominantly developed for technicality and functional technicalities, rather than to just take in the consideration of having different groups that are being profiled on the platform for different type of reasons and trying to diffuse harm or eliminate harm of the platform. It’s not easy. We have been working with ITU on multiple kind of global initiatives for children protection, for women protection. Yeah. Yes. We’ve been working on a multiple level with the different platform makers, so I’m talking about Meta and other kind of platform makers as well, to build technological kind of solutions and functionalities within the platform, so they can address security and safety by design for different genders. Yeah. And this required us sitting together and understanding if actually resilience by design and not just security by design is being considered and it’s being taken care of with the form of different functionalities on the platform. I’m not going to get through the rigorous examples, because at the moment, you can see on different platforms that censorship and mass kind of analytics are being considered on a multiple level. But also, if we develop the policies and regulations for inclusive access to digital platform, are those policies being translated by the big kind of industry partners or not? So I think it’s very important to bring everybody on board and make sure that we have a foolproof solution when we are developing for those solutions. And inclusivity for women and women education on this platform is very important, which means that I don’t have to only educate them on the fact that they have to access digital platform, but they have to also know about the research and development and kind of niche aspects that are pertaining to research and development and science and development of new technologies. We’re talking about AI, we’re talking about cryptography, we’re talking about security and cybersecurity. So most of those elements are cross-sectorial and also deep into the analysis of the sector, which means you don’t have just to do like a generic surface-scratching awareness program for women or kind of women in leadership or women in cyber kind of sessions, but you really need to educate them on a deeper aspect, on an academic aspect sometime, on the power of developing new technologies. I think what we are lacking at the moment is the power of the collectives. So I would say the power of providing for the ecosystem champions across board and as well the power of funding and the power of the collectives in terms of knowledge capital, bringing everybody else on the same table. And maybe have a co-creation lab for women to design their own solutions within a digital platform and figure out if those solutions could be championed by different industrial partners to be implemented on the different platforms as well and within the industry. So I mean creating disruptors within the industry that comes from innovators of the space who understand those needs and who are focusing on solving those gaps and needs, not focusing on commercially building a massive solution that pertains to a generic use of the public. 
And maybe then we will be starting to solve all these kind of issues that we have, not just for women but for all other gendered groups and as well for children as well at the same time. So this is one in terms of technology development, in terms of education, but as well in terms of policy formulation. I think there is a huge gap in between the recipient of the policy, which is the women at the moment, and the policy developers who are developing these policies sometimes in Global North and the impacted pool are in Global South. So how can we redistribute policy creation where we have policy labs within the affected zones actually. If we know that, I don’t know how many we’ve said, we have 25% of women are actually in cyber, right? What regions are predominantly high on the number of women in cyber and what regions are low? I know for example for us in this region we have quite of a high percentage of women in technology and STEM and STEAM and women in cyber as well. So how would the learning curve be viewed on this kind of aspects? Would we be able to maybe bring in together this kind of co-learning labs that are globally developed by those indigenous groups, not just by specific entities within the global kind of narratives? I think trying to solve from grassroots perspective is supposed to be a powerful tool that we should exercise, not just on technology level or education or awareness, but also on the policy aspect. Thanks.


Shimona Mohan: Thank you, Professor. I think you covered a lot in terms of what kind of solutions we can look at, and we have been talking about solutions throughout this panel. I hope you can hear me. Yes, okay. We have been talking a lot about solutions on this panel, and I think that is particularly important because we have heard a lot about the problems, and we continue to hear them in the context of emerging tech like AI as well. But a couple of things that you said are really interesting, especially about combining the convening power of audiences from different fields and different walks of life to come together and contribute to these efforts. I’d like to go back to Luanda, who was earlier talking about a very similar thing when it came to tech and policy audiences coming together and making sure that these mainstreaming efforts take place collectively. So Luanda, I’d like to come back to you and ask whether you already have any workstreams, projects or ideas in place where cyber capacity-building programs target both tech professionals and policy professionals in a combined manner, and whether this kind of hybrid model of capacity building might be a bridge for technologists and policy audiences, who we have seen have a bit of a gap in understanding the same issue or how to tackle it. The professor also pointed out that there is a disconnect between the people at the policy level and at the recipient level. Perhaps, Luanda, you can speak a little bit more to this when it comes to gender mainstreaming and cyber capacity building. Thank you.


Luanda Domi: I think you called my name. We’re having a bit of a challenge hearing you, but please intervene if it’s not my turn. Excuse me, in the room. Okay. Can they hear us? If I heard correctly, and following what Dr. Hoda mentioned, I wanted to say one thing about AI, because she mentioned a lot of the risks. One thing that is quite interesting about gender, if we compare women’s education in STEM and cybersecurity with employment, is that this is where we see huge differences globally. Now, this is not the case in the Middle East; the Middle East is actually doing really well in this area. But what I wanted to say in terms of statistics is that a big issue now is AI and gender-based employment risk. Again, according to that UN Women report, they quantified what we actually knew: 3.7% of women’s jobs globally are now at risk of being replaced by AI, compared to 1.4% of men’s jobs. So this only goes to echo the importance of training women for AI-driven roles. Only with capacity-building programs can we help bridge this gap, which I assume will only grow in the years to come, by offering specialized trainings like ethical hacking, secure coding and AI governance. I think this is quite important if we are trying to see what some of the solutions are for a growing problem. And I am really happy to hear of UNIDIR’s Women Fellowship on AI; I think this is the way we need to approach it. Now, when we talk about gender and its intersections, we really must go beyond it and adopt the principles of diversity, equity, inclusion and accessibility to ensure that everyone, including women, neurodiverse individuals and those with disabilities, like the visually impaired, can really thrive in cybersecurity. And why am I mentioning this? Because we are talking about what type of training models should exist, separate or hybrid, and whichever model we choose has to be adapted to the learners we are targeting. Every learner has different requirements, different needs and a different pace, and we have to adapt to their unique strengths. I am actually lobbying for both, but separate programs need to be tailored for specific needs, because they provide a focused, safe environment where individuals can learn without distractions. Take, for example, individuals from neurodiverse communities: they can learn much more, and at an excellent pace, if they have highly structured schedules, clear instructions and, for example, appropriate sensory input. These are the things we need to check, and they have been quite successful in roles such as ethical hacking, especially ethical hacking. I think programs like the cybersecurity neurodiversity talent pipeline are programs we need to further scale up at regional and global levels. Then there are programs that can leverage adaptive tools like screen readers, tactile diagrams, or braille-friendly resources. Very good workshops of this kind have been run for visually impaired individuals on topics such as secure coding or cryptography, and they help achieve successful graduation rates because, through these tools and educational aids, participants also get to do a lot of hands-on activities.
Now, when we look at hybrid programs, I would lobby for these as well, but for very specific purposes. For example, if we want to do an exchange for collaboration, bringing together policymakers and technologists, then hybrid cybersecurity training can help with understanding and adapting to regulatory implications; this could be a cross-sector capacity-building workshop in cybersecurity. Another example would be pairing neurodiverse individuals with traditional learners, or visually impaired with fully sighted participants. There have been very good examples of joint exercises where visually impaired individuals had an environment in which they could use the tools easily, and sighted participants contributed other aspects that enabled better interaction with visually impaired participants. All of this fosters teamwork without compromising the diversity and unique strengths of each learner. So those are some of the pros and cons of each. But I would say that in cyber capacity building, both are very important, and they have to be carefully planned so that they are inclusive and create unique, successful teams.


Shimona Mohan: Perfect. Thanks so much, Luanda. Can you hear me? Yes. Thanks so much, Luanda, for that. I think you answered a question that I hadn’t even thought to ask, and I should have. Thank you for that very exhaustive snapshot of the diversity aspect, which I think we sometimes skip over or overlook, as if it were a lesser issue. So thank you for bringing it up on this panel again. I think we will now have a very quick lightning round of questions. I know there are one or two questions in the chat, but I’d also like to open the floor if there are any questions in the room. Okay. Could you please come up to the mic here and take the questions? And I’d urge you to keep the questions very brief because we have the room for a limited time. Thank you.


Jocelyn Meliza: Hello, can you hear me? Fantastic. My name is Jocelyn Meliza. I am a MAG member and also a co-facilitator of the BPF on cybersecurity. This year we were focusing on mapping out cybersecurity initiatives, but we realized that several organizations are also doing mapping, and the gap really emerged in terms of cross-collaboration. I really liked what Professor Hoda mentioned about building the power of the collective. Something else we noted while mapping is that we did not see any map of women-in-cyber capacity-building programs. So do those maps exist anywhere? And beyond that, how do we harness what you mentioned about the power of the collective? Lastly, an opportunity to welcome you to the BPF on cybersecurity. It’s on Tuesday at 4:45, and it will be an extension of this; I think this topic will be very valuable for that mapping exercise, as well as for building into next year. Thank you.


Shimona Mohan: Perfect, thanks so much. I think we’ll take all the questions together, and then go to the panel. Yes, please.


AUDIENCE: Okay, I hope you can hear me. Thank you for that lovely and insightful presentation. My name is Paula, and I have a question, perhaps for Yasmine. The ITU’s HerCyberTracks program is fantastic, and I speak as a beneficiary of the program. I am currently a cybersecurity policy advisor, and a lot of the work I do is based on what I learned from the program. What I wanted to find out is this: of course, the program can only take a certain number of people per cohort, and there are so many people who want to be part of it. So is there a way the program could be run in collaboration with governments, for instance, so that learners can access the platform freely and we don’t have to wait for a new cohort to start, making it more distributed? Is there a possibility of that happening? And then, as an afterthought: if I’m interested in building my capacity in cybersecurity, is there a platform where I can find a list of all the fellowships or programs that are open, so that all of them can be accessed from one place? Thank you.


Shimona Mohan: Perfect, thank you. And then the gentleman in the back.


AUDIENCE: Hello, good afternoon. I’m Kosi, a student from Benin. I work for the government, and I also chair an NGO called Women Be Free. I want to know whether your capacity-building programs are available in other languages, like French, for example. Do you have tools somewhere? Is it possible to have the different kinds of training you provide in French, in English, and so on? And a last question: is it possible to have a partnership with your organization directly to provide training for people locally? Is it possible for you to come, for example, to Benin and provide in-person training for people there? What is the plan? What is the process? Thank you.


Shimona Mohan: Perfect, thank you. I will also read out a couple of questions that we have from the chat online. One of the questions is: what are the ethical considerations and challenges associated with emerging tech, and how can we prepare the next generations to navigate them? And a related question, which we can perhaps club together with it: how can we address the digital gender divide in Africa to ensure that women and girls benefit equally from AI advancements? So I will invite our panelists to give lightning responses and perhaps combine them with any closing remarks, final words or comments. To avoid tech issues, we will start in the room and then proceed online, back to Catalina and to Luanda. So I will give the floor to Professor Hoda first, and then we will go around here.


Hoda Al Khzaimi: Thank you so much for your questions; this has been amazing. Paula and Kosi, and what’s your name, I’m sorry? Josephine? Josephine as well. I think you have a very interesting collective reflection on how we can provide online tools as long-term assets of these programs. We always invest in cyber capacity building, and capacity building at large, in different regions of the world, but we also want to make sure that it is not a repetitive problem that is exhaustingly complex to solve. Utilizing a digital platform, a LinkedIn-like platform where you can put in people’s profiles, the different capacities they have developed, and access to courses and material, would be very powerful, with contributors from different countries in different parts of the world. So building that kind of platform effect is a must at the moment, I would say. Thank you, Paula, for bringing that up, and Josephine as well. Considering the different nuances, languages and details that come from different parts of the world is also very important. And this is exactly what I meant when I said that if we start solving the capacity development problem, for example in cybersecurity, from within the regions, the solutions and the nudges are quite simple, and developing those solutions is not very exhaustive. I think we should put in some effort to leave some traces, some kind of legacy, that would benefit the public. One of them, I think, is the co-creation lab, the funding platform, the co-development platform where we can have the courses and the people, matching challenges with needs; that is very important.


Yasmine Idrissi Azzouzi: If I may also take those questions together: I have a feeling they are related to the sustainability of programs and to localization, really taking advantage of the local ecosystem. At the ITU, we actually piloted a similar approach last year, where we wanted a program we had been running for a while, the Women in Cyber Mentorship Programme, to be handed over, quote unquote, and run by a local organization. So we partnered with an organization called Women in Cybersecurity Middle East, which is a very active network in this region, and they basically ran the program in the region under the guidance and umbrella of the ITU. This pilot has also shown us that, as we have mentioned, funding is an issue in cybersecurity capacity building as in other capacity-building fields, and so having these multiplier initiatives, where knowledge and resources are handed to local organizations, can be part of the solution.


Shimona Mohan: Perfect. Thank you so much, Yasmine. And I’ll go back to the online room, perhaps first to Catalina and then to Luanda, if in a couple of minutes you’d like to respond to any questions or have any concluding remarks. Thank you.


Catalina Vera Toro: Thank you. I’ll just go ahead. Basically, from a governance and MFA standpoint, what I can say is that we are trying to come together collaboratively within the Open-Ended Working Group on capacity building. I think that is going to be a huge negotiation and a huge pillar, because, specifically for the Global South, it is one of those structural elements that need to be well addressed within the Open-Ended Working Group and in how we transition to the permanent mechanism. So we are trying to come up with something that is inclusive and that takes into account the needs of developing countries specifically. That will probably entail some sort of repository and portals, for instance for the voluntary norms or for demand-focused offers of capacity building, and also for best experiences and how we share them throughout the world. So I think eventually we are going to have something on a global scale that brings in all the experiences, hopefully the successful ones, and that will be available not only in English but in all the official languages, which also includes French, for instance: resources that can help each of us build a safer cyberspace for all. Regional instances and organizations also do great work. Within the Open-Ended Working Group, for instance, we do not want to duplicate the efforts that regional organizations are making. We know that not every country is part of a regional organization, but if you are, there are great programs that do rotating workshops. For instance, the Organization of American States rotates through countries to run mentorship programs and gender capacity-building programs for each of its member states. So there are great resources out there as of now, and we are hopeful that by the end of 2025, when we move to that permanent mechanism under the Open-Ended Working Group, we will have something similar on a global scale. So please continue to help us figure out how to address this. Very briefly, on Raby’s question on ethical considerations: many countries are coming up with AI policy frameworks, so there are great resources on how countries are approaching this. We have a lot of work to do in terms of addressing ethical issues when it comes to AI. Of course, the main one is respect for human rights, ensuring that AI systems align with and uphold fundamental rights such as privacy, equality and non-discrimination; Luanda went in depth on this and on the high risks involved. We also have work to do in terms of ensuring that AI is transparent to the user, as well as fairness and equity, so that there are no biases, and environmental sustainability. There are great efforts globally on this, and of course there is still work to do on ethical AI, but there are great resources to look into. I just wanted to mention very briefly that back in 2023, Chile was the first host of the high-level ministerial of authorities on ethical AI for Latin America and the Caribbean, where we had a joint declaration on how we should be doing this regionally. So there is a lot of work to be done to make sure it also aligns with the UNESCO framework.
There is great work being done there, and of course the next step, and a responsibility of every state, is how we bring that back home and provide programs and education, specifically for our younger generations, who are of course immersed in technology nowadays. So I think that’s it.


Luanda Domi: Thank you. I’ll be very brief; this was a lovely conversation. I don’t want to be grim, but I kind of have to; it’s my job. Just to point out a couple of things for the participants who asked the online question: last year’s survey, I believe, says that about 2.6 billion people globally are still without access to the internet. This is a huge number. So I think we also have to be realistic, as a global community, about what we have to invest in order to address not just gender parity but also this regional challenge of people without access to the internet. And how do we do this? Through accessibility, digital literacy and, the very basic one, providing affordable digital tools. We need to do this as a global community; it is not just a strategic matter, it is a moral imperative to bring populations everywhere onto the internet, and then also teach them about the risks. The second thing I would say very quickly: in Africa specifically, there is the CyberSafe Foundation, which has an amazing CyberGirls fellowship program. About 67% of the women who graduate from this program get employed, and their salaries increase by 200% to 400%. This year, due to funding, and this goes back to the report, the fellowship almost closed. And this is where we have to speak out, and this is what we do at the GFCE through our Women in Cyber Capacity Building Network: lobby for regional women’s networks to get the funding for successful programs that address gender imbalance and training and education in cyber, STEM or AI. We really use this as a call to action to support the successful programs out there, because they are out there; we just need to support them and scale them up. Thank you.


Shimona Mohan: Thank you so much, Luanda, and thanks to everybody on the panel: to Professor Hoda, to Aaryn, to Yasmine, and to Catalina and Luanda for joining us online, on a Sunday evening in December, for this very interesting conversation. I will now give the floor to my colleague, Pavel, for concluding remarks, after which we will close the panel. So thank you again for joining us, everybody. And over to you, Pavel.


Pavel Mraz: Hello, Shimona, and hello, everyone. Thank you for the floor. I really want to thank all partners, including the governments of Canada and Chile, the ITU, the GFCE, and the Stimson Center, for supporting this event with their contributions and recommendations. I think together today we have identified several key takeaways: the urgent need to increase gender diversity in the global cybersecurity workforce; the necessity of mainstreaming gender considerations both into cybersecurity policy and into existing and future capacity-building initiatives; and, of course, the need to address and research gender-based threats amplified by emerging technologies. Rest assured, we have listened very carefully today to all your insights, questions and recommendations. In order to ensure that these recommendations are properly captured for policymakers who can take action, UNIDIR, together with partners, plans to produce a compendium of good practices for mainstreaming gender into cybersecurity efforts. In terms of next steps, we will convene a series of online workshops over the course of 2025 with the aim of discussing each of these issues separately: gender-based threats, international obligations, women’s participation in the cyber workforce, and mainstreaming gender into policy and capacity building. If these workshops are of interest, please share your contact details either in the chat or with Shimona in the room; we want to capture as broad and diverse a range of perspectives as possible, and we would be more than happy to invite you to continue these conversations. Very lastly, I would be remiss not to thank the IGF Secretariat for allowing us to have this conversation in a truly multistakeholder fashion. By bringing together governments, international organizations, civil society, industry, and technical experts, we can ensure that our approaches to gender mainstreaming will not be forgotten. So thank you to all of our speakers and participants who have contributed to this dialogue. Rest assured, your insights and recommendations are invaluable, and they will inform our efforts going forward.


Shimona Mohan: So thank you. Back over to the room, and have a wonderful evening in Saudi Arabia. Thank you so much.



Shimona Mohan

Speech speed: 147 words per minute
Speech length: 3161 words
Speech time: 1282 seconds

Need to increase women’s representation beyond current 25%

Explanation

There is a need to increase the representation of women in the cybersecurity workforce beyond the current global average of 25%. This low percentage highlights the importance of addressing gender diversity in the field.


Evidence

World over, there’s only about 25% of women in the cybersecurity workforce.


Major Discussion Point

Gender diversity in cybersecurity workforce


Agreed with

Aaryn Yunwei Zhou


Yasmine Idrissi Azzouzi


Catalina Vera Toro


Agreed on

Need to increase women’s representation in cybersecurity



Aaryn Yunwei Zhou

Speech speed: 125 words per minute
Speech length: 469 words
Speech time: 224 seconds

Canada’s Gender-Based Analysis Assessment for policies

Explanation

Canada requires all policy initiatives and programs to undergo a Gender-Based Analysis Assessment (GBA+). This assessment considers how policies affect women, men, and gender-diverse people, as well as other factors such as race, class, and ethnicity.


Evidence

All of our policy initiatives and programs have to go through what is called the Gender-Based Analysis Assessment, GBA+.


Major Discussion Point

Policy and governance approaches


Agreed with

Catalina Vera Toro


Luanda Domi


Agreed on

Importance of gender-responsive policies and frameworks


Disproportionate impact of internet shutdowns on women

Explanation

Internet shutdowns can have a disproportionate effect on women’s social and economic participation. This highlights the need to consider gender-specific impacts when assessing cyber threats and responses.


Evidence

Internet shutdowns in Iran had a much more disproportionate effect on women’s social and economic participation, particularly through the use of Instagram.


Major Discussion Point

Gender-specific cyber threats and harms


Women in Cyber Fellowship for diplomats

Explanation

Canada sponsors the Women in Cyber Fellowship program, which has trained over 50 women diplomats from around the world. This initiative aims to increase diverse voices in international cybersecurity discussions.


Evidence

Over 50 women diplomats from around the world have been trained through the program, leading to gender parity in the First Committee process at the UN.


Major Discussion Point

Capacity building and education initiatives


Canada’s feminist foreign policy and national action plan

Explanation

Canada has implemented a feminist foreign policy with a specific Women, Peace and Security Agenda National Action Plan. This policy addresses online harassment and abuse against women and people of diverse gender identities.


Evidence

Canada was one of the founding members of the Global Partnership for Action on Gender-Based Online Harassment and Abuse.


Major Discussion Point

Policy and governance approaches



Yasmine Idrissi Azzouzi

Speech speed: 162 words per minute
Speech length: 1300 words
Speech time: 480 seconds

ITU’s HerCyberTracks program for women in public sector

Explanation

The International Telecommunication Union (ITU) launched HerCyberTracks, a specialized training program for women in the public sector. The program focuses on policy and diplomacy, incident response, and criminal justice or cybercrime.


Evidence

HerCyberTracks offers tailored curricula across three tracks: policy and diplomacy, incident response, and criminal justice or cybercrime.


Major Discussion Point

Capacity building and education initiatives


Agreed with

Shimona Mohan


Aaryn Yunwei Zhou


Catalina Vera Toro


Agreed on

Need to increase women’s representation in cybersecurity


ITU’s holistic approach combining training, mentorship and networking

Explanation

ITU’s capacity building approach combines training, mentorship, networking, and real-world exposure. This holistic method aims to promote meaningful representation of women in cybersecurity.


Evidence

The program incorporates study visits, networking opportunities, mentorship, and inter-regional exchange.


Major Discussion Point

Capacity building and education initiatives


Agreed with

Hoda Al Khzaimi


Luanda Domi


Agreed on

Necessity of targeted capacity building and education initiatives


Differed with

Luanda Domi


Differed on

Approach to capacity building



Hoda Al Khzaimi

Speech speed: 130 words per minute
Speech length: 1759 words
Speech time: 806 seconds

96% of deepfake content targets women non-consensually

Explanation

Deepfake technology poses a significant threat to women, with the vast majority of deepfake content being non-consensual sexual content targeting women. This technology is often used to silence journalists, politicians, and activists.


Evidence

DeepTrace found in 2019 that 96% of all deepfake content online is non-consensual sexual content targeting women.


Major Discussion Point

Gender-specific cyber threats and harms


AI systems exhibit bias against women, especially women of color

Explanation

Facial recognition systems and other AI technologies often show systematic bias against women, particularly women of color. This bias can lead to wrongful surveillance and enforcement actions targeting marginalized groups.


Evidence

A 2018 study showed that the error rate for facial recognition systems is up to 34.7% for darker-skinned women compared to less than 1% for lighter-skinned men.


Major Discussion Point

Gender-specific cyber threats and harms


Need for deeper technical education beyond surface-level awareness

Explanation

There is a need for deeper, more comprehensive technical education for women in cybersecurity, beyond surface-level awareness programs. This includes education on research and development aspects of new technologies like AI and cryptography.


Major Discussion Point

Capacity building and education initiatives


Agreed with

Yasmine Idrissi Azzouzi


Luanda Domi


Agreed on

Necessity of targeted capacity building and education initiatives



Catalina Vera Toro

Speech speed: 134 words per minute
Speech length: 2182 words
Speech time: 975 seconds

Chile’s goal to increase women in cybersecurity to 35% by 2030

Explanation

Chile has set a specific goal to increase women’s employment in cybersecurity roles to 35% by 2030. This target is part of their efforts to address gender imbalance in the field.


Major Discussion Point

Gender diversity in cybersecurity workforce


Agreed with

Shimona Mohan


Aaryn Yunwei Zhou


Yasmine Idrissi Azzouzi


Agreed on

Need to increase women’s representation in cybersecurity


Chile’s gender-responsive national cybersecurity policy

Explanation

Chile has updated its national cybersecurity policy to incorporate a gender-responsive approach. This includes establishing state obligations to protect and promote the rights of people on the internet, with a focus on women’s protection and inclusion.


Evidence

Chile updated its cybersecurity policy in 2024 to incorporate a gender-responsive approach.


Major Discussion Point

Policy and governance approaches


Agreed with

Aaryn Yunwei Zhou


Luanda Domi


Agreed on

Importance of gender-responsive policies and frameworks


OAS rotating workshops on gender capacity building

Explanation

The Organization of American States (OAS) conducts rotating workshops throughout member countries to provide mentorship programs and gender capacity building in cybersecurity. This approach helps distribute resources and knowledge across the region.


Major Discussion Point

Capacity building and education initiatives


Developing ethical AI policy frameworks

Explanation

Many countries, including Chile, are developing AI policy frameworks to address ethical considerations. These frameworks focus on respecting human rights, ensuring transparency, fairness, and environmental sustainability in AI systems.


Evidence

Chile hosted the first high-level ministerial meeting on ethical AI for Latin America and the Caribbean in 2023, resulting in a joint declaration.


Major Discussion Point

Policy and governance approaches



Luanda Domi

Speech speed: 113 words per minute
Speech length: 1808 words
Speech time: 953 seconds

Only 0.05% of development budgets target gender initiatives

Explanation

A recent UN Women report revealed that only 0.05% of development budgets are targeted to gender initiatives. This extremely low percentage highlights the need for increased funding and support for gender-focused programs in cybersecurity and other fields.


Evidence

UN Women report finding that only 0.05% of development budgets are targeted to gender initiatives.


Major Discussion Point

Gender diversity in cybersecurity workforce


3.7% of women’s jobs at risk from AI vs 1.4% of men’s

Explanation

Artificial Intelligence poses a greater risk to women’s employment compared to men’s. This disparity highlights the need for targeted training programs to help women adapt to AI-driven roles in the workforce.


Evidence

UN Women report showing 3.7% of women’s jobs globally are at risk of being replaced by AI compared to 1.4% of men’s jobs.


Major Discussion Point

Gender-specific cyber threats and harms


Importance of tailored programs for specific needs like neurodiversity

Explanation

Capacity building programs should be tailored to address the specific needs of diverse learners, including those with neurodiversity or visual impairments. This approach ensures that cybersecurity education is inclusive and accessible to all.


Evidence

Examples of successful programs for neurodivergent individuals in ethical hacking and visually impaired individuals in secure coding or cryptography.


Major Discussion Point

Capacity building and education initiatives


Agreed with

Yasmine Idrissi Azzouzi


Hoda Al Khzaimi


Agreed on

Necessity of targeted capacity building and education initiatives


Differed with

Yasmine Idrissi Azzouzi


Differed on

Approach to capacity building


Need for gender mainstreaming in cyber policies and technologies

Explanation

Gender mainstreaming should be systematically integrated into every step of capacity building in cybersecurity. This includes the design, implementation, and evaluation of policies and technologies that govern the digital world.


Major Discussion Point

Policy and governance approaches


Agreed with

Aaryn Yunwei Zhou


Catalina Vera Toro


Agreed on

Importance of gender-responsive policies and frameworks



Pavel Mraz

Speech speed: 195 words per minute
Speech length: 336 words
Speech time: 103 seconds

UNIDIR’s planned compendium of gender mainstreaming practices

Explanation

The United Nations Institute for Disarmament Research (UNIDIR) plans to produce a compendium of good practices for mainstreaming gender into cybersecurity efforts. This initiative aims to capture recommendations for policymakers to take action on gender issues in cybersecurity.


Evidence

UNIDIR plans to convene a series of online workshops over the year 2025 to discuss various aspects of gender in cybersecurity.


Major Discussion Point

Policy and governance approaches


Agreements

Agreement Points

Need to increase women’s representation in cybersecurity

speakers

Shimona Mohan


Aaryn Yunwei Zhou


Yasmine Idrissi Azzouzi


Catalina Vera Toro


arguments

Need to increase women’s representation beyond current 25%


Canada’s Gender-Based Analysis Assessment for policies


ITU’s HerCyberTracks program for women in public sector


Chile’s goal to increase women in cybersecurity to 35% by 2030


summary

Speakers agree on the importance of increasing women’s representation in the cybersecurity workforce through various policy and educational initiatives.


Importance of gender-responsive policies and frameworks

speakers

Aaryn Yunwei Zhou


Catalina Vera Toro


Luanda Domi


arguments

Canada’s Gender-Based Analysis Assessment for policies


Chile’s gender-responsive national cybersecurity policy


Need for gender mainstreaming in cyber policies and technologies


summary

Speakers emphasize the need for gender-responsive policies and frameworks in cybersecurity at national and international levels.


Necessity of targeted capacity building and education initiatives

speakers

Yasmine Idrissi Azzouzi


Hoda Al Khzaimi


Luanda Domi


arguments

ITU’s holistic approach combining training, mentorship and networking


Need for deeper technical education beyond surface-level awareness


Importance of tailored programs for specific needs like neurodiversity


summary

Speakers agree on the importance of comprehensive and tailored capacity building programs to address the diverse needs of women and other underrepresented groups in cybersecurity.


Similar Viewpoints

Both speakers highlight the disproportionate impact of AI on women, particularly in terms of bias and job displacement risks.

speakers

Hoda Al Khzaimi


Luanda Domi


arguments

AI systems exhibit bias against women, especially women of color


3.7% of women’s jobs at risk from AI vs 1.4% of men’s


Both speakers emphasize the importance of specialized programs to train and empower women in cybersecurity roles, particularly in the public sector and diplomacy.

speakers

Aaryn Yunwei Zhou


Yasmine Idrissi Azzouzi


arguments

Women in Cyber Fellowship for diplomats


ITU’s HerCyberTracks program for women in public sector


Unexpected Consensus

Importance of addressing neurodiversity in cybersecurity education

speakers

Luanda Domi


arguments

Importance of tailored programs for specific needs like neurodiversity


explanation

While most discussions focused on gender, Luanda Domi unexpectedly highlighted the importance of considering neurodiversity in cybersecurity education, broadening the conversation on inclusivity beyond gender.


Overall Assessment

Summary

The speakers generally agreed on the need to increase women’s representation in cybersecurity, the importance of gender-responsive policies, and the necessity of targeted capacity building initiatives. There was also consensus on the disproportionate impact of emerging technologies like AI on women and the need for more inclusive approaches in cybersecurity education and workforce development.


Consensus level

High level of consensus among speakers, with complementary perspectives on addressing gender disparities in cybersecurity. This strong agreement suggests a clear direction for policy makers and stakeholders in prioritizing gender mainstreaming and inclusive approaches in cybersecurity initiatives.


Differences

Different Viewpoints

Approach to capacity building

speakers

Yasmine Idrissi Azzouzi


Luanda Domi


arguments

ITU’s holistic approach combining training, mentorship and networking


Importance of tailored programs for specific needs like neurodiversity


summary

While both speakers emphasize the importance of capacity building, they differ in their approach. Yasmine advocates for a holistic approach combining various elements, while Luanda emphasizes the need for tailored programs addressing specific needs of diverse learners.


Unexpected Differences

Overall Assessment

summary

The main areas of disagreement were subtle and primarily focused on different approaches to implementing gender mainstreaming in cybersecurity, rather than fundamental disagreements on goals or principles.


difference_level

The level of disagreement among the speakers was relatively low. Most speakers shared similar goals and principles, with differences mainly in specific implementation strategies or focus areas. This low level of disagreement suggests a general consensus on the importance of addressing gender issues in cybersecurity, which could facilitate more effective collaboration and policy development in this area.


Partial Agreements

Partial Agreements

Both speakers agree on the need for gender-responsive policies, but they differ in their specific approaches. Canada uses a Gender-Based Analysis Assessment for all policies, while Chile has incorporated a gender-responsive approach specifically in its national cybersecurity policy.

speakers

Aaryn Yunwei Zhou


Catalina Vera Toro


arguments

Canada’s Gender-Based Analysis Assessment for policies


Chile’s gender-responsive national cybersecurity policy


Takeaways

Key Takeaways

Resolutions and Action Items

Unresolved Issues

Suggested Compromises

Thought Provoking Comments

We take gender into consideration for both assessments of the threats and our responses to them. I’m sure this audience knows there’s a gender dimension to every aspect of cybersecurity, so not taking gender into consideration actually makes our responses far less effective.

speaker

Aaryn Yunwei Zhou


reason

This comment highlights the critical importance of incorporating gender considerations comprehensively in cybersecurity, not just as an add-on but as a core element that improves effectiveness.


impact

It set the tone for the discussion by emphasizing the practical benefits of gender mainstreaming in cybersecurity, beyond just ethical considerations. This led to further exploration of specific ways gender considerations can be integrated into policies and programs.


To sum up, basically, in order to really close the gender-digital divide and really promote what we call the equal, full, and meaningful representation of women in cybersecurity, not just as a checkbox being checked, but really meaningful and skilled representation, we must adopt a holistic approach such as this when it comes to capacity building, combining training, combining mentorship, combining role modeling, community building, and real-world exposure.

speaker

Yasmine Idrissi Azzouzi


reason

This comment provides a comprehensive framework for addressing the gender gap in cybersecurity, emphasizing the need for a multi-faceted approach.


impact

It shifted the conversation from discussing the problem to exploring concrete solutions, leading to more detailed discussions about specific programs and initiatives that embody this holistic approach.


96% of all deepfake content online is non-consensual sexual content targeting women. These attacks often aim to silence journalists, politicians, and activists

speaker

Hoda Al Khzaimi


reason

This statistic starkly illustrates the gendered nature of certain cyber threats and their broader societal implications.


impact

It brought attention to the severity and specificity of gender-based cyber threats, leading to discussions about the need for targeted technological and policy solutions to address these issues.


3.7% of women’s jobs globally are now at risk of being replaced by AI, compared to 1.4% of men’s jobs. This only goes to echo the importance of training women for AI-driven roles.

speaker

Luanda Domi


reason

This comment introduces a new dimension to the discussion by highlighting the gendered impact of AI on employment, connecting cybersecurity issues to broader economic concerns.


impact

It broadened the scope of the conversation to include the intersection of gender, cybersecurity, and emerging technologies like AI, emphasizing the need for forward-looking capacity building programs.


Overall Assessment

These key comments shaped the discussion by progressively expanding its scope and depth. The conversation evolved from establishing the importance of gender considerations in cybersecurity to exploring specific challenges, comprehensive solutions, and future implications. The comments highlighted the multifaceted nature of the issue, touching on policy, education, technology, and economic aspects. This led to a rich discussion that emphasized the need for holistic, collaborative approaches to address gender disparities in cybersecurity, while also considering the impacts of emerging technologies.


Follow-up Questions

Do maps of women in cyber capacity building programs exist?

speaker

Jocelyn Meliza


explanation

This information could help identify gaps and opportunities in existing programs.


How can we harness the power of the collective in cybersecurity initiatives?

speaker

Jocelyn Meliza


explanation

Collaboration across organizations could enhance the impact of cybersecurity efforts.


Is there a way to make the ITU’s HerCyberTracks program more widely accessible, possibly through collaboration with governments?

speaker

Paula


explanation

Expanding access to this program could benefit more individuals interested in cybersecurity.


Is there a centralized platform listing all available cybersecurity fellowships and programs?

speaker

Paula


explanation

Such a resource would make it easier for individuals to find and access relevant opportunities.


Are capacity-building programs available in multiple languages, such as French?

speaker

Kosi


explanation

Offering programs in various languages would increase accessibility for non-English speakers.


Is it possible to establish partnerships for providing local, in-person training?

speaker

Kosi


explanation

Local training could better address specific regional needs and challenges.


What are the ethical considerations and challenges associated with emerging tech, and how can we prepare the next generations to navigate them?

speaker

Online participant (Raby)


explanation

Understanding and addressing ethical challenges is crucial for responsible technology development and use.


How can we address the digital gender divide in Africa to ensure that women and girls benefit equally from AI advancements?

speaker

Online participant


explanation

Closing this divide is essential for inclusive technological progress in the region.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance

Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance

Session at a Glance

Summary

This discussion focused on the development of an ethical AI governance framework and tool by the Digital Cooperation Organization (DCO) and Access Partnership. The session began with an introduction to the importance of addressing AI as a societal challenge, emphasizing the need for ethical, responsible, and human-centric AI development. Ahmad Bhinder from the DCO explained their approach to ethical AI governance from a human rights perspective, highlighting the organization’s diverse membership and goals.


Chris Martin from Access Partnership presented DCO’s view on ethical AI governance, outlining six key principles: accountability and oversight, transparency and explainability, fairness and non-discrimination, privacy, sustainability and environmental impact, and human-centeredness. The discussion then introduced a prototype tool designed to assess AI systems’ compliance with these ethical principles and human rights considerations.


Matthew Sharp detailed the tool’s functionality, explaining how it provides risk assessments and actionable recommendations for both AI developers and deployers. The tool aims to be comprehensive, practical, and interactive, focusing on human rights impacts across various industries.


The session included a practical exercise where participants were divided into groups to analyze AI risk scenarios using the framework. Groups identified potential ethical risks, scored their severity and likelihood, and proposed mitigation strategies. This activity demonstrated the tool’s application in real-world scenarios.


The discussion concluded with remarks from Alaa Abdulaal of DCO, emphasizing the importance of a multi-stakeholder approach in addressing ethical AI challenges and the organization’s commitment to providing actionable solutions for countries and developers. The session highlighted the ongoing efforts to create practical tools for ensuring ethical AI development and deployment on a global scale.


Keypoints

Major discussion points:


– Introduction of the Digital Cooperation Organization (DCO) and its work on ethical AI governance


– Presentation of DCO’s human rights-centered approach to AI ethics and governance


– Overview of an AI ethics evaluation tool being developed by DCO and Access Partnership


– Interactive exercise for participants to apply the tool’s framework to AI risk scenarios


Overall purpose:


The goal of this discussion was to introduce DCO’s work on ethical AI governance, present their new AI ethics evaluation tool, and gather feedback from participants on the tool’s framework through an interactive exercise.


Tone:


The tone was primarily informative and collaborative. The speakers provided detailed information about DCO’s approach and the new tool in a professional manner. The tone shifted to become more interactive and engaging during the group exercise portion, as participants were encouraged to apply the concepts and provide input. Overall, the discussion maintained a constructive and forward-looking atmosphere focused on addressing ethical challenges in AI development and deployment.


Speakers

– Chris Martin: Head of policy innovation at Access Partnership


– Ahmad Bhinder: Representative of the Digital Cooperation Organization


– Matthew Sharp: Senior manager at Access Partnership


– Thiago Moraes:


– Alaa Abdulaal: Chief of Digital Economy Foresight at the DCO


Additional speakers:


– Kevin: Colleague mentioned as handing out worksheets


Full session report

Revised Summary of Discussion on Ethical AI Governance


Introduction


This discussion, led by representatives from the Digital Cooperation Organization (DCO) and Access Partnership, focused on the development of an ethical AI governance framework and assessment tool. The session emphasized the critical importance of addressing AI as a societal challenge, highlighting the need for ethical, responsible, and human-centric AI development.


Key Speakers and Their Roles


1. Chris Martin: Head of policy innovation at Access Partnership


2. Ahmad Bhinder: Representative of the Digital Cooperation Organization


3. Matthew Sharp: Senior manager at Access Partnership


4. Thiago Moraes: Facilitator of the interactive exercise


5. Alaa Abdulaal: Chief of Digital Economy Foresight at the DCO


Discussion Overview


1. Importance of Ethical AI Governance


Chris Martin opened the discussion by framing AI as a societal challenge rather than merely a technical one. He emphasized the monumental stakes involved in AI development and deployment, stressing the need to “get this right” by ensuring AI is ethical, responsible, and human-centric. Martin highlighted the uneven global diffusion of AI technologies, noting the concentration in Asia Pacific, North America, and Europe, while identifying growth opportunities in the Middle East and North Africa. He also mentioned concerns about the increasing energy consumption related to AI development.


2. DCO’s Approach to AI Governance


Ahmad Bhinder introduced the DCO, explaining its membership of 16 member states and its network of over 40 private-sector observers. He elaborated on DCO’s human rights-centered approach to AI governance, noting that the organization had identified which human rights are most impacted by AI and reviewed how AI policies, regulations, and governance intersect with these rights across their diverse membership and globally.


3. Key Principles for Ethical AI


Chris Martin presented DCO’s view on ethical AI governance, outlining six key principles:


a) Accountability and oversight in AI decision-making


b) Transparency and explainability of AI systems


c) Fairness and non-discrimination in AI outcomes


d) Privacy protection and data safeguards


e) Sustainability and environmental impact considerations


f) Human-centered design focused on social benefit


Martin provided specific examples and explanations for each principle, emphasizing their importance in ethical AI development and deployment.


4. DCO’s AI Ethics Evaluation Tool


Matthew Sharp provided a detailed overview of the AI ethics evaluation tool being developed by DCO and Access Partnership. Key features of the tool include:


– Separate risk questionnaires for AI developers and deployers


– Assessment of severity and likelihood of human rights risks


– Interactive visualizations to help prioritize actions


– Practical, actionable recommendations based on risk assessment


Sharp explained the tool’s workflow, highlighting differences between developer and deployer questionnaires. He emphasized that the tool is designed to be comprehensive, practical, and interactive, focusing on human rights impacts across various industries. Sharp also noted how this tool differs from existing frameworks, particularly in its focus on human rights and inclusion of both developers and deployers in the assessment process.
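To make the severity-and-likelihood mechanism described above more concrete, here is a minimal illustrative sketch of how a single questionnaire response might be scored and mapped to an action band. It is not the DCO/Access Partnership tool itself; the scales, thresholds, principle names and class names are assumptions made purely for illustration.

# Illustrative sketch only: not the actual DCO/Access Partnership tool.
# The 1-5 scales, thresholds, and names below are invented for this example.

from dataclasses import dataclass

@dataclass
class RiskAnswer:
    principle: str   # e.g. "Fairness and non-discrimination"
    severity: int    # assumed scale: 1 (minimal) to 5 (severe)
    likelihood: int  # assumed scale: 1 (rare) to 5 (almost certain)

    def score(self) -> int:
        # Classic risk-matrix product of severity and likelihood (1-25).
        return self.severity * self.likelihood

def priority_band(score: int) -> str:
    # Thresholds are illustrative, not taken from the tool.
    if score >= 15:
        return "high: mitigate before deployment"
    if score >= 8:
        return "medium: mitigate and monitor"
    return "low: document and review periodically"

answers = [
    RiskAnswer("Fairness and non-discrimination", severity=4, likelihood=3),
    RiskAnswer("Privacy", severity=3, likelihood=2),
    RiskAnswer("Transparency and explainability", severity=2, likelihood=4),
]

# Rank the identified risks so the highest-priority actions surface first.
for answer in sorted(answers, key=RiskAnswer.score, reverse=True):
    print(f"{answer.principle}: score {answer.score()} ({priority_band(answer.score())})")

In the actual tool, separate developer and deployer questionnaires would feed a scoring step of this general kind, and the recommendations attached to each band would be far richer than the single line shown here.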


5. Interactive Exercise


Thiago Moraes led an interactive exercise where participants were divided into groups to analyze AI risk scenarios using the framework. The exercise involved:


– Identifying potential ethical risks in AI systems for medical diagnosis and job application screening


– Scoring risks based on severity and likelihood


– Developing actionable recommendations to mitigate identified risks


Participants engaged with real-world scenarios, applying the ethical considerations involved in AI deployment.


6. DCO’s Future Plans and Closing Remarks


Alaa Abdulaal concluded the session by emphasizing DCO’s commitment to a multi-stakeholder approach in addressing ethical AI challenges. Key points included:


– DCO’s belief in collaborative digital transformation


– The organization’s aim to provide actionable solutions for ethical AI deployment


– Plans to share the final AI ethics tool publicly


– DCO’s mission to enable digital prosperity for all through cooperation


– The importance of ethical use of AI in various sectors


Conclusion


The discussion presented a comprehensive overview of the DCO’s efforts to develop an ethical AI governance framework and assessment tool. By emphasizing a human rights-centered approach and providing practical tools for risk assessment, the DCO aims to address the complex challenges posed by AI development and deployment on a global scale.


Additional Notes


– Matthew Sharp mentioned a QR code survey for participants to provide feedback on the session.


– A feedback form was shared at the end of the session for further input from participants.


Session Transcript

Chris Martin: Hiya, how are you doing? Check, check. Is that better? Cool. Again, hello and welcome. My name is Chris Martin. I’m head of policy innovation at Access Partnership; we’re a global tech policy and regulatory consulting firm. I’m so pleased to be here with all of you and with our partners at the Digital Cooperation Organization. Perhaps we can get started with a little bit of an acknowledgement that artificial intelligence is no longer really a technical challenge; it’s a societal one. Every decision that AI systems make, and everything they power, is going to impact and shape our lives, how we work, and how we interact. And the stakes are monumental. They demand that we get this right, and at the same time key questions remain, most especially: how do we ensure that AI is not only a powerful tool, but also ethical, responsible, and human-centric? Today we stand at a pivotal moment. Policymakers, technologists, and civil society are coming together to navigate the complex intersection of innovation and ethics, and together we need to develop frameworks that both anticipate the risks inherent in these systems and seize the transformative potential of AI for global good. Now, this session isn’t just about policies. It’s about principles in action, defining who we are, what we as a global community value, and how we protect those values, especially in the face of rapid change. I invite you to take this opportunity to explore these possibilities with us, to ask some hard questions, and to build pathways to ensure that AI serves humanity and not the other way around. With that, please let me introduce my colleague, Mr. Ahmad Bhinder from the Digital Cooperation Organization.


Ahmad Bhinder: Hello. Good afternoon, everybody. I see a lot of faces from all around the world, and it is really fortunate for us to be able to gather you all here, showcase some of our work, tell you who we are as the Digital Cooperation Organization, discuss some of the work we are doing, and seek your inputs. We really meant this to be a very interactive discussion, a roundtable session, so let’s see how we can convert it into a roundtable discussion as we go. My name is Ahmad Bhinder, and I represent the Digital Cooperation Organization. We are an intergovernmental organization, represented by the Ministers of Digital Economy and ICT of the 16 member states we represent. The member states come, as you will see, from the Middle East, Europe and Africa to South Asia, and we are expanding very rapidly. We have a whole network of private sector partners that we call observers, as you would see in other intergovernmental organizations, and over 40 observers are already with us now. We are quite young: we came into being at the end of 2020, so we are in our fourth year. So that is a bit about our organization, what the DCO is and how it works. The work we started this year was to look at the ethical governance of AI. While a lot of work is being done on ethical and responsible AI governance, we wanted to look at it from a human rights perspective. So we identified which human rights are most impacted by artificial intelligence, and then we reviewed, across our membership and across the globe, how AI policies, regulation and governance intersect with those human rights, and what needs to be done to ensure that we have a human-rights-protective, ethical AI governance approach. There are a couple of reports that we are going to publish on that, and we are developing a policy tool, which will be the crux of our discussion today. We have developed a framework on the human rights risks associated with AI and the ethical principles that need to be taken care of, and the tool is going to provide our member states, and beyond, with a mechanism... yeah, can you hear me all right? Okay, sorry. So the tool will provide AI system developers or deployers with a way to assess their systems’ compliance with, or closeness to, human rights and ethical principles, and it will then recommend improvements to the systems. Again, I don’t want to kill it in my opening remarks. Our colleagues from Access Partnership, with whom we are developing this tool, are here, so I will give it back to Chris to take us through it, and I look forward to all your inputs into today’s discussion. Thank you so much.


Chris Martin: Thanks, Ahmad. Well, everyone, I’ll walk through a little bit of this presentation on what DCO’s view is on ethical AI governance. Then my colleague, Matt Sharp, will walk us through the tool itself, and as Ahmad previewed, we’ll then break out to work through a few scenarios and get a chance to play with the tool yourselves. I think the first question is: why is this important for DCO? Well, it’s a big deal everywhere, and DCO, working with members and other stakeholders, wants to be an active participant at the forefront of this debate. So these are two of the objectives, to start. The tool can then be seen as one way to instill alignment and interoperability between regulatory frameworks. I think we all recognize there’s a real wide divergence right now in AI readiness and regulatory approach, and once you start to see that, actually proposing impactful, actionable initiatives is critical; DCO feels that’s important. And lastly, facilitating interactive dialogues like the one we’re here today to have. So, a bit deeper on what a human rights approach to AI governance looks like for DCO. It starts with four things. First, prioritizing the protection and promotion of human rights; it’s the name of the session, and that’s why we’re here. Second, designing systems that uphold human dignity, privacy, equity, and freedom from discrimination. Third, creating systems that are transparent, accountable, and inclusive, and that don’t exacerbate inequalities. And lastly, ensuring that advancement contributes to the common good while mitigating all the potential harms I think we’re starting to see evolve with AI. So the toolkit that we’re developing will take a human rights-centered approach across four different areas. Again, looking at inclusive design and ensuring that there’s participation from diverse communities, especially marginalized ones. It will look to integrate human rights principles like dignity, equality, non-discrimination, and privacy at each stage of the AI lifecycle. It will recommend the use of human rights impact assessments as a way to get in front of AI deployments and ensure that you mitigate those potential problems early. And lastly, it will promote transparency, looking at disclosure of how AI makes its decisions. Taking a step back, and I think illustrating the moment we’re in: AI diffusion is pretty uneven across the world. This looks at the market for AI, which is concentrated across Asia Pacific, North America and Europe to a greater degree, but with still a lot of opportunity for growth in the Middle East and North Africa, where a lot of DCO member states currently reside. So this is an important moment to get involved at an early stage. On the governance side, DCO sees seven different areas where global best practice can be leveraged to advance AI governance. The first looks at institutional mechanisms. Typically these involve how nation states govern artificial intelligence within their jurisdictions. Do they develop an AI regulator? Do they do it sector by sector? These are questions that are live at the moment in every country. How are they going to plan for that at the government level? Is there an AI strategy or an AI policy that helps dictate the different stages? And then, beyond AI specifically, where are they in policy readiness?
Cybersecurity frameworks, privacy frameworks, intellectual property, the whole range of different areas that impact AI and are important to consider. Then, shifting beyond the government-specific pieces, how do you build an innovation ecosystem? On the government side, can you foster investment and entrepreneurship in AI? But also, how do you build a culture around that, and how do you do that in a way that also brings in a diversity of participants and voices? That’s really critical to getting it right. The sixth area is future-proofing the population, and by this we mean getting a population ready for AI. There are going to be displacements in the workforce, there are going to be educational requirements, and countries have to address those as they build these technologies into their societies. And then lastly, international cooperation is fundamental; I think that’s why we’re all here at IGF today. There are a lot of different processes underway to allow international collaboration to happen, and being a part of that is important. I think some of the findings across DCO member states are interesting in the sense that it’s, I think, a unique pairing of different types of nation-states, and we see a lot of varying levels of AI governance across them. That’s not unexpected when you have both regionally diverse and economically diverse countries within a single group, and it’s, I think, reflective of the situation we face globally. That feeds into the diverse definitions of and approaches to AI, and it also feeds into the potential for further engagement and international cooperation, both within the DCO’s membership itself and in events and engagements like the one we’re doing. So there’s a view that we are building around the generic ethical considerations of AI, but part of our conversation today is to help us think about this: are we getting it right? Right now there are very limited recommendations and little practical guidance for addressing human rights in DCO member states, and this tool and this exercise are part of creating that for DCO and potentially beyond. I’m going to walk through these ethical principles very quickly, and then I’m going to pass it to my colleague, Matt, to pick up the tool itself. The ethical principles that govern this tool are six-fold. The first deals with accountability and oversight: we want to ensure there’s clear responsibility for AI decision-making, addressing gaps in things like verification, audit trails, and incident response. Second, we’ll want to look at transparency and explainability, as already discussed. Promoting clarity in how these decisions are made is important, and you don’t want the complexity to undermine a user’s understanding. We’ve got fairness and non-discrimination as our third principle: protecting against bias, unfair treatment and outcomes, and mitigating demographic disparities in how these systems perform. Fourth is privacy. We all care about our privacy, and we’re all concerned about this as our uses of different technologies now feed the AI ecosystem. We want to make sure those personal safeguards are in place, along with a respect for privacy rights. Fifth is sustainability and environmental impact.
I was on the panel right before this one in this room, and they talked about how AI is going to require the equivalent of another Japan in terms of energy use and consumption. That’s going to put a strain on resources, so we’ve got to address that, and the development of AI has to comport with environmental goals. And then lastly, it’s got to be human-centered. It’s got to be looking at social benefit, ensuring that it’s meeting societal needs while respecting individuality, and aligning these capabilities with those needs. So with that, I’m going to pass it to Matt. He can walk you through the tool itself in a little more detail, and then we’ll pick up the exercise.


Matthew Sharp: Hi everyone. I’m Matt Sharp, a senior manager at Access Partnership. So, the six principles are based on extensive research of frameworks around the world, which we have tried to distill into these six areas of focus. And this is a brief description of the tool that we’ve developed, which is still in its prototype phase, but the idea is that it will be available online and publicly accessible for everyone. The tool provides a detailed risk questionnaire, which is different for developers and deployers, with questions to ascertain both the severity and likelihood of risks related to human rights. Based on the way the questions are answered, there will be an interactive radar graph, which basically helps the user prioritize their actions: an average score will be calculated for each risk area. And this will lead to actionable recommendations being given, based on the specific way that the questions were answered. OK, if you go to the next slide. So, the tool is designed to be comprehensive, practical, and interactive. It takes a human rights-first approach, which maps AI systems to universal human rights, and it’s designed to be very practical, so it accommodates various AI systems across diverse industries, and organizations will get comprehensive risk insights and practical guidance on how to mitigate risks related to human rights. As for our tool compared to others: there are a few other frameworks and tools in this space. A lot of these are developed by national governments and tend to focus on their own national contexts, for example the UAE and New Zealand frameworks; a lot of them focus on verification of actions rather than risk assessments; and a few of the existing tools focus only on AI developers and not on AI deployers as well. Generally, ours is the one that’s most focused on human rights, so we think this tool offers a unique contribution to advancing ethical AI. I already talked about this, but basically the way the tool works, the workflow, is that users will register on the website, provide some information about their AI systems and their industries, and complete the questionnaire covering six risk categories. The questions will be different for developers and deployers. Then, based on how the risks are assessed, they will see the risk radar chart identifying priority areas for action, and they’ll receive advice on targeted mitigation strategies. OK, so this is the framework that underlies the tool. Basically, the diagram shows the principles at the top, and underneath those are more specific risks related to those principles. For example, in the case of privacy, there’s a focus on data protection and on unauthorized collection and processing of sensitive information. Related to accountability and oversight, the risks there are insufficient involvement of humans and inadequate incident handling, for example. Then there are detailed recommendations below this; there’s no one-to-one mapping between the principles and the recommendation areas, but when a risk category is high risk, it will quite often be the case that there are specific recommendations related to each of these risk areas. These cover data management, and validation and testing of AI systems; the integration of stakeholder feedback is, of course, very important as well. Then, just to say that there are two distinct stakeholder groups in the AI lifecycle, the developers and the deployers.
Of course, they will receive slightly different questionnaires. Developers are focused on the design and construction of AI systems: they need to predict AI risks in advance and think about technical architecture. Deployers are focused on the implementation of these AI systems, and for them the focus is on operational vulnerabilities and actual impacts on users and stakeholders. This slide is perhaps a bit detailed, but just to say that for each of the recommendation categories, because of their different positions in the AI lifecycle, slightly different guidance is given to developers and deployers, but the six recommendation categories are used consistently for both. So, if you wouldn’t mind, please use the QR code to answer a couple of quick questions. Once you’ve answered those two questions, we have a breakout activity which is designed to help you understand the logic of the AI ethics tool that we’ve developed. Kevin, I think, will be handing out worksheets that you can fill in. There will be different AI risk scenarios, and the idea is to review the framework that we presented for the AI ethics evaluator tool and then identify two ethical risks related to the scenario that you’ve been given; then do a risk scoring exercise where you score both the severity and likelihood of the risks you’ve identified. You can pick two of the principles that are relevant for your particular scenario. You score the severity and likelihood, whose definitions are on the worksheets, calculate a combined impact score for each risk, and then rank the risks from most to least critical. Finally, you develop actionable recommendations, trying to come up with two recommendations for the two risks, for the developer. This whole exercise should take 15 minutes.
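To make the breakout logic concrete, here is a minimal, hypothetical sketch in Python of the severity-likelihood scoring and prioritisation just described. The session does not spell out the exact formula, scales, or thresholds, so the 1-to-3 scales, the multiplicative impact score, the per-principle averaging, and the recommendation mapping below are all assumptions, chosen only to be consistent with the scores the breakout groups report afterwards; none of the identifiers comes from the actual DCO tool.

```python
# Minimal, hypothetical sketch of the scoring logic described in the session.
# Assumptions (not from the DCO tool): severity and likelihood on a 1-3 scale,
# a multiplicative impact score, an average per principle for the radar chart,
# and a threshold of 6 for attaching recommendations. All names are illustrative.

from statistics import mean

# Recommendation categories mentioned in the session, mapped here only as an example.
RECOMMENDATIONS = {
    "privacy": "data management",
    "fairness and non-discrimination": "validation and testing",
    "transparency and explainability": "documentation and reporting",
}

def score_risks(answers):
    """answers: list of dicts with 'principle', 'severity' (1-3) and 'likelihood' (1-3)."""
    scored = [dict(a, impact=a["severity"] * a["likelihood"]) for a in answers]
    # Average impact per principle feeds the radar chart and the prioritisation.
    principles = {r["principle"] for r in scored}
    per_principle = {
        p: mean(r["impact"] for r in scored if r["principle"] == p) for p in principles
    }
    ranked = sorted(per_principle.items(), key=lambda kv: kv[1], reverse=True)
    # Attach a recommendation to any principle whose average impact is 6 or more.
    advice = [(p, RECOMMENDATIONS.get(p, "general mitigation")) for p, s in ranked if s >= 6]
    return ranked, advice

# Example loosely mirroring the rare-disease diagnosis group's scores (9, 6, 4).
ranked, advice = score_risks([
    {"principle": "privacy", "severity": 3, "likelihood": 3},
    {"principle": "transparency and explainability", "severity": 3, "likelihood": 2},
    {"principle": "fairness and non-discrimination", "severity": 2, "likelihood": 2},
])
print(ranked)  # privacy (9) ranks first, then explainability (6), then fairness (4)
print(advice)  # data management and documentation suggested for the highest-impact risks
```

On these assumptions, a leak-prone medical scenario surfaces privacy as the top priority, which mirrors how the groups rank their risks before attaching recommendations in the reports that follow.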


Ahmad Bhinder: Well, sorry to put you through this. We intended to make this an interactive discussion, and we really wanted, selfishly, to get your input by brainstorming on some of these scenarios. So I do apologize in advance to the organizers for mixing up the chairs, but I think we should convert how we are sitting into three breakout groups and have the discussion. Please move your chairs around and let’s go through this exercise so we can have a more interactive discussion. We are well within the time for this session; we have half an hour to go. So for 15 minutes, let’s go through this exercise, and then we would love to hear your thoughts. Thank you.


Chris Martin: And guys, I know this seems daunting. It is not, I promise; I did it myself last week, and it’s actually kind of fun. It gives you a real sense of how to start putting yourself in the mindset of assessing AI risk. We were thinking maybe this side of the room could be one group, and then maybe split this side of the room in two: those of you in the back, one group, and those of you up front here, another. We’ve got these sheets that my colleague Kevin is going to start passing out; I think we’ll hand out one set on this side and then one set there and one set here. And I’m happy to go around and check in with you as we take this forward and see how we can pull this together. Yeah. Thank you. … A high likelihood, high risk. The discriminatory impact on vulnerable groups, same thing. And working backwards then, the recommendations we had were: you’re going to need valid testing of these thresholds to understand what is going to be correct for your platform, so validation and testing is going to be one remediation measure, and then continuous evolution to improve that. For the harmful or inadequate human verification, we saw that you have to have a human in the loop. And then for the last one, we really… Actually, I think that’s all we have. Thank you very much. Sorry for this. We have 30 minutes and I’ll give my mic to Mr. Thiago. Please, yeah.


Thiago Moraes: So yeah, well, our case is the use of AI for diagnostic systems for critical rare diseases, right?


Chris Martin: So the first risk was related to explainability, since we’re talking about rare diseases and inaccurate answers can cause issues here. We also talked about discriminatory issues, more specifically gender-based discrimination if the population statistics are not used well, and privacy risks, like data leaks involving very highly sensitive data. For scoring, we gave the first one, explainability, a six in the end; the one for discrimination a four; and for privacy, which we think is the most sensitive here, we gave a nine, because from a leak many other issues may follow. And following the DCO recommendations, for explainability we suggest enhancing comprehensive documentation, so documentation and reporting; for the discriminatory impact we suggest validation and testing; and for privacy, we suggest data management.


Ahmad Bhinder: Thank you so much. And again, let’s have 30 seconds here. Are you going to?


Chris Martin: Okay, so our scenario is that we have a multinational corporation that is deploying an AI system for screening job applications, and to do that they are using historical data to rank the candidates based on predicted performance. For us, we thought there’s a risk of discrimination, because we’re looking at it from the perspective that historically the people who worked in the engineering field were men, and mostly white men, and now you’re using that historical data to make an assessment of people who may be applying who look like me. So we said fairness and non-discrimination, that’s a risk, especially discriminatory impact on vulnerable groups. The scoring was quite high: likelihood three, severity three, everything quite high. Thank you. Thank you so much.


Ahmad Bhinder: Before we are kicked out, I will pass the mic to Alaa Abdulaal, our Chief of Digital Economy Foresight at the DCO, for some closing remarks. And sorry to rush through the whole thing.


Alaa Abdulaal: So hello, everyone. I was honored to join the session, and I have seen a lot of amazing conversation. At DCO, as our name says, we are the Digital Cooperation Organization. We believe in a multi-stakeholder approach, and we believe that this is the only approach that will help in the acceleration of digital transformation. The topic of the ethical use of AI is an important one, because AI is now one of the main emerging technologies offering a lot of advancement and efficiency in the digital transformation of government and different sectors. This is why it was very important for us as the Digital Cooperation Organization to provide actionable solutions to help countries and even developers to have the right tool to make sure that whatever systems are being deployed have the right risk assessment from a human rights perspective, and to have that tool available for everyone. And this is why we wanted to have this session: to get feedback and to really understand whether what we are developing is on the right track. Thank you so much for being here and allocating the time and effort to join this discussion and provide your valuable inputs. We are looking forward to sharing with you the final deliverable and the ethical tool, hopefully soon. And hopefully together we are building a future to enable digital prosperity for all. Thank you very much for your time and for being here.


Chris Martin: Thanks, everybody. We also just put this up. If you want to provide feedback, we certainly welcome it on this session. Take a picture. It shouldn’t take long. And thanks, all. We really appreciate your participation.



Chris Martin

Speech speed

123 words per minute

Speech length

2136 words

Speech time

1034 seconds

AI is now a societal challenge, not just a technical one

Explanation

Chris Martin emphasizes that AI has evolved beyond being merely a technical issue and now impacts society as a whole. He stresses the importance of addressing the societal implications of AI systems.


Evidence

Every decision that AI systems make, and everything they power, is going to impact and shape our lives, how we work, and how we interact.


Major Discussion Point

Major Discussion Point 1: The importance of ethical AI governance


Agreed with

Ahmad Bhinder


Matthew Sharp


Agreed on

Importance of ethical AI governance


Need to develop frameworks that anticipate risks and seize AI’s potential for good

Explanation

Martin argues for the development of comprehensive frameworks to address potential risks associated with AI while also harnessing its positive potential. He emphasizes the need for a balanced approach in AI governance.


Evidence

Together we need to develop frameworks that both anticipate the risks inherent in these systems, but also seize the transformative potential of AI for global good.


Major Discussion Point

Major Discussion Point 1: The importance of ethical AI governance


Agreed with

Ahmad Bhinder


Matthew Sharp


Agreed on

Importance of ethical AI governance


AI diffusion is uneven globally, creating an opportunity to get involved early

Explanation

Martin points out that AI adoption is not uniform across the world, with some regions lagging behind. He suggests this presents an opportunity for early involvement in shaping AI governance in these areas.


Evidence

This looks at the market for AI concentrated across Asia Pacific and North America and Europe to a greater degree, but still a lot of opportunity for growth in the Middle East and North Africa, where currently a lot of DCO member states reside.


Major Discussion Point

Major Discussion Point 1: The importance of ethical AI governance


Accountability and oversight in AI decision-making

Explanation

Martin emphasizes the importance of clear responsibility and oversight in AI decision-making processes. He highlights the need for mechanisms to ensure accountability in AI systems.


Evidence

We want to ensure there’s clear responsibility for AI decision-making, addressing those gaps in things like verification, audit trails, incident response.


Major Discussion Point

Major Discussion Point 2: Key principles for ethical AI


Transparency and explainability of AI systems

Explanation

Martin stresses the need for AI systems to be transparent and explainable. He argues that the complexity of AI should not undermine users’ understanding of how decisions are made.


Evidence

Promoting clarity in how these decisions are made is important, and you don’t want the complexity to undermine a user’s understanding.


Major Discussion Point

Major Discussion Point 2: Key principles for ethical AI


Fairness and non-discrimination in AI outcomes

Explanation

Martin highlights the importance of ensuring fairness and preventing discrimination in AI outcomes. He emphasizes the need to protect against bias and unfair treatment in AI systems.


Evidence

Protecting against bias, unfair treatment and outcomes, and mitigating demographic disparities in how these systems perform.


Major Discussion Point

Major Discussion Point 2: Key principles for ethical AI


Privacy protection and data safeguards

Explanation

Martin emphasizes the importance of privacy protection in AI systems. He argues for the implementation of personal safeguards and respect for privacy rights in the AI ecosystem.


Evidence

We want to make sure there are those personal safeguards in place, and a respect for privacy rights.


Major Discussion Point

Major Discussion Point 2: Key principles for ethical AI


Sustainability and environmental impact considerations

Explanation

Martin highlights the need to consider the environmental impact of AI systems. He points out the significant energy consumption associated with AI and the need to align AI development with environmental goals.


Evidence

They talked about how AI is going to require the equivalent of another Japan in terms of energy use and consumption, and that’s going to put a strain on resources, so we’ve got to address that.


Major Discussion Point

Major Discussion Point 2: Key principles for ethical AI


Human-centered design focused on social benefit

Explanation

Martin emphasizes the importance of human-centered AI design that prioritizes social benefits. He argues that AI should meet societal needs while respecting individual rights and aligning with human values.


Evidence

It’s got to be looking at social benefit and ensuring that it’s meeting societal needs while respecting individuality, and aligning these capabilities with those needs.


Major Discussion Point

Major Discussion Point 2: Key principles for ethical AI



Ahmad Bhinder

Speech speed

136 words per minute

Speech length

695 words

Speech time

305 seconds

DCO is taking a human rights-centered approach to AI governance

Explanation

Ahmad Bhinder explains that the Digital Cooperation Organization (DCO) is focusing on ethical AI governance from a human rights perspective. They are developing tools and frameworks to ensure AI systems respect and protect human rights.


Evidence

We wanted to look at it from a human rights perspective. So we identified which human rights are most impacted by artificial intelligence, and then we reviewed, across our membership and across the globe, how AI policies, regulation and governance intersect with those human rights.


Major Discussion Point

Major Discussion Point 1: The importance of ethical AI governance


Agreed with

Chris Martin


Matthew Sharp


Agreed on

Importance of ethical AI governance



Matthew Sharp

Speech speed

109 words per minute

Speech length

890 words

Speech time

485 seconds

Tool provides risk questionnaires for both AI developers and deployers

Explanation

Matthew Sharp describes the DCO’s AI ethics evaluation tool, which includes separate questionnaires for AI developers and deployers. This approach recognizes the different roles and responsibilities in the AI lifecycle.


Evidence

The tool provides a detailed risk questionnaire, which is different for both developers and deployers.


Major Discussion Point

Major Discussion Point 3: DCO’s AI ethics evaluation tool


Agreed with

Ahmad Bhinder


Thiago Moraes


Agreed on

Need for practical tools to assess AI risks


Assesses severity and likelihood of human rights risks

Explanation

Sharp explains that the tool evaluates both the severity and likelihood of human rights risks associated with AI systems. This comprehensive assessment helps users understand the potential impact of their AI applications.


Evidence

And there are questions to ascertain both the severity and likelihood of risks related to human rights.


Major Discussion Point

Major Discussion Point 3: DCO’s AI ethics evaluation tool


Agreed with

Ahmad Bhinder


Thiago Moraes


Agreed on

Need for practical tools to assess AI risks


Generates interactive visualizations to help prioritize actions

Explanation

The tool creates interactive visualizations, such as radar graphs, to help users prioritize their actions. This feature aids in identifying the most critical areas for improvement in AI systems.


Evidence

There will be an interactive radar graph, which basically helps the user prioritize their actions.


Major Discussion Point

Major Discussion Point 3: DCO’s AI ethics evaluation tool


Agreed with

Ahmad Bhinder


Thiago Moraes


Agreed on

Need for practical tools to assess AI risks


Offers practical, actionable recommendations based on risk assessment

Explanation

Sharp highlights that the tool provides specific, actionable recommendations based on the risk assessment results. These recommendations are tailored to the user’s responses and help guide improvements in AI systems.


Evidence

And this will lead to actionable recommendations being given based on the specific way that the questions were answered.


Major Discussion Point

Major Discussion Point 3: DCO’s AI ethics evaluation tool


Agreed with

Ahmad Bhinder


Thiago Moraes


Agreed on

Need for practical tools to assess AI risks



Thiago Moraes

Speech speed

109 words per minute

Speech length

18 words

Speech time

9 seconds

Participants engaged in scenario-based risk assessment exercise

Explanation

Thiago Moraes describes a practical exercise where participants applied the AI ethics tool to specific scenarios. This hands-on approach allowed attendees to understand the tool’s functionality and the process of ethical risk assessment in AI.


Evidence

Our case is the use of AI for diagnostic systems for critical rare diseases.


Major Discussion Point

Major Discussion Point 4: Practical application of the AI ethics tool


Identified ethical risks in AI systems for medical diagnosis and job application screening

Explanation

Moraes reports that participants identified various ethical risks in AI systems used for medical diagnosis and job application screening. This exercise highlighted the diverse range of potential ethical issues in different AI applications.


Evidence

The first risk was related to explainability, since we’re talking about rare diseases and inaccurate answers can cause issues here. We also talked about discriminatory issues, more specifically gender-based discrimination if the population statistics are not used well, and privacy risks, like data leaks involving very highly sensitive data.


Major Discussion Point

Major Discussion Point 4: Practical application of the AI ethics tool


Scored risks based on severity and likelihood

Explanation

Moraes explains that participants scored the identified risks based on their severity and likelihood. This quantitative approach helps prioritize which ethical issues need the most urgent attention.


Evidence

For scoring, we gave the first one, explainability, a six in the end; the one for discrimination a four; and for privacy, which we think is the most sensitive here, we gave a nine, because from a leak many other issues may happen.


Major Discussion Point

Major Discussion Point 4: Practical application of the AI ethics tool


Developed actionable recommendations to mitigate identified risks

Explanation

Moraes reports that participants developed actionable recommendations to address the identified risks. This step demonstrates how the tool can guide users towards practical solutions for ethical AI implementation.


Evidence

Following the DCO recommendations, for explainability we suggest enhancing comprehensive documentation, so documentation and reporting; for the discriminatory impact we suggest validation and testing; and for privacy, we suggest data management.


Major Discussion Point

Major Discussion Point 4: Practical application of the AI ethics tool



Alaa Abdulaal

Speech speed

139 words per minute

Speech length

250 words

Speech time

107 seconds

DCO believes in a multi-stakeholder approach to digital transformation

Explanation

Alaa Abdulaal emphasizes DCO’s commitment to a multi-stakeholder approach in addressing digital transformation challenges. This approach involves collaboration between various sectors and stakeholders to ensure comprehensive solutions.


Evidence

At DCO, as our name says, we are the Digital Cooperation Organization. We believe in a multi-stakeholder approach.


Major Discussion Point

Major Discussion Point 5: DCO’s approach and future plans


Aims to provide actionable solutions for ethical AI deployment

Explanation

Abdulaal highlights DCO’s goal of developing practical, actionable solutions to support ethical AI deployment. This includes tools and frameworks that can be used by countries and developers to assess and mitigate risks in AI systems.


Evidence

This is why it was very important for us as the Digital Cooperation Organization to provide actionable solutions to help countries and even developers to have the right tool to make sure that whatever systems are being deployed have the right risk assessment from a human rights perspective.


Major Discussion Point

Major Discussion Point 5: DCO’s approach and future plans


Plans to share the final AI ethics tool publicly

Explanation

Abdulaal announces DCO’s intention to make their AI ethics evaluation tool publicly available. This commitment to open access aims to promote widespread adoption of ethical AI practices.


Evidence

And we are looking forward to share with you the final deliverable and the ethical tool hopefully soon.


Major Discussion Point

Major Discussion Point 5: DCO’s approach and future plans


Seeks to enable digital prosperity for all through cooperation

Explanation

Abdulaal emphasizes DCO’s overarching goal of promoting digital prosperity for all through international cooperation. This vision underscores the organization’s commitment to inclusive and ethical digital development.


Evidence

And hopefully together we are building a future to enable digital prosperity for all.


Major Discussion Point

Major Discussion Point 5: DCO’s approach and future plans


Agreements

Agreement Points

Importance of ethical AI governance

speakers

Chris Martin


Ahmad Bhinder


Matthew Sharp


arguments

AI is now a societal challenge, not just a technical one


Need to develop frameworks that anticipate risks and seize AI’s potential for good


DCO is taking a human rights-centered approach to AI governance


summary

All speakers emphasized the critical need for ethical AI governance, focusing on societal impacts and human rights considerations.


Need for practical tools to assess AI risks

speakers

Ahmad Bhinder


Matthew Sharp


Thiago Moraes


arguments

Tool provides risk questionnaires for both AI developers and deployers


Assesses severity and likelihood of human rights risks


Generates interactive visualizations to help prioritize actions


Offers practical, actionable recommendations based on risk assessment


summary

The speakers agreed on the importance of developing and using practical tools to assess and mitigate AI-related risks, particularly in relation to human rights.


Similar Viewpoints

Both speakers emphasized the importance of key ethical principles in AI governance, including accountability, transparency, and fairness.

speakers

Chris Martin


Matthew Sharp


arguments

Accountability and oversight in AI decision-making


Transparency and explainability of AI systems


Fairness and non-discrimination in AI outcomes


Unexpected Consensus

Environmental impact of AI

speakers

Chris Martin


arguments

Sustainability and environmental impact considerations


explanation

While most discussions focused on societal and ethical impacts, Chris Martin unexpectedly highlighted the significant environmental concerns related to AI energy consumption, which wasn’t echoed by other speakers but is an important consideration.


Overall Assessment

Summary

The speakers demonstrated strong agreement on the importance of ethical AI governance, the need for practical assessment tools, and the focus on human rights in AI development and deployment.


Consensus level

High level of consensus among speakers, particularly on the need for human-centric, ethical AI governance. This agreement implies a shared vision for the future of AI regulation and development, which could facilitate more coordinated and effective approaches to addressing AI-related challenges.


Differences

Different Viewpoints

Unexpected Differences

Overall Assessment

summary

There were no significant areas of disagreement identified among the speakers.


difference_level

The level of disagreement was minimal to non-existent. The speakers presented a unified approach to ethical AI governance, focusing on human rights, practical tools for risk assessment, and multi-stakeholder collaboration. This alignment suggests a cohesive strategy within the DCO for addressing ethical challenges in AI development and deployment.


Partial Agreements


Takeaways

Key Takeaways

AI governance is now a critical societal challenge requiring ethical frameworks and human rights protections


The Digital Cooperation Organization (DCO) is developing an AI ethics evaluation tool focused on human rights


The tool assesses risks for both AI developers and deployers across six key ethical principles


Practical application of ethical AI principles requires careful risk assessment and mitigation strategies


A multi-stakeholder, cooperative approach is essential for responsible AI development and deployment


Resolutions and Action Items

DCO to finalize and publicly release their AI ethics evaluation tool


Participants to provide feedback on the session and tool prototype via the provided QR code


Unresolved Issues

Specific implementation details of the AI ethics tool across different contexts and industries


How to address the uneven global diffusion of AI technologies and governance frameworks


Balancing innovation with ethical considerations in AI development


Suggested Compromises

None identified


Thought Provoking Comments

Every decision that AI systems make, and everything they power, is going to impact and shape our lives, how we work, and how we interact. And the stakes are monumental. They demand that we get this right, and at the same time, key questions remain. Most especially, how do we ensure that AI is not only a powerful tool, but also ethical, responsible, and human-centric?

speaker

Chris Martin


reason

This comment sets the stage for the entire discussion by emphasizing the far-reaching impact of AI and the critical importance of ethical governance.


impact

It framed the subsequent conversation around the ethical implications of AI and the need for responsible development and deployment.


We wanted to look at it from a human rights perspective. So we identified which human rights are most impacted by artificial intelligence, and then we reviewed, across our membership and across the globe, how AI policies, regulation and governance intersect with those human rights, and what needs to be done to ensure that we have a human rights-protective, ethical AI governance approach.

speaker

Ahmad Bhinder


reason

This comment introduces a unique approach to AI governance by centering it on human rights, which is not commonly seen in other frameworks.


impact

It shifted the focus of the discussion towards considering AI’s impact on specific human rights, leading to a more nuanced conversation about ethical AI governance.


AI diffusion is pretty uneven across the world. This looks at the market for AI concentrated across Asia Pacific and North America and Europe to a greater degree, but still a lot of opportunity for growth in the Middle East and North Africa, where currently a lot of DCO member states reside.

speaker

Chris Martin


reason

This observation highlights the global disparities in AI development and adoption, bringing attention to the need for inclusive approaches.


impact

It broadened the scope of the discussion to consider the global context and the importance of supporting AI development in regions that are currently underrepresented.


As for our tool compared to others: there are a few other frameworks and tools in this space. A lot of these are developed by national governments and tend to focus on their own national contexts, for example the UAE and New Zealand frameworks; a lot of them focus on verification of actions rather than risk assessments; and a few of the existing tools focus only on AI developers and not on AI deployers as well. Generally, ours is the one that’s most focused on human rights.

speaker

Matthew Sharp


reason

This comment provides a comparative perspective on existing AI governance tools, highlighting the unique features of the DCO’s approach.


impact

It helped participants understand the distinctive aspects of the DCO’s tool, particularly its focus on human rights and inclusion of both developers and deployers.


Overall Assessment

These key comments shaped the discussion by establishing the critical importance of ethical AI governance, introducing a human rights-centered approach, highlighting global disparities in AI development, and differentiating the DCO’s tool from existing frameworks. They collectively steered the conversation towards a more comprehensive, globally-aware, and human-centric consideration of AI ethics and governance.


Follow-up Questions

How can we ensure AI systems are transparent and their decision-making processes are explainable?

speaker

Chris Martin


explanation

Transparency in AI decision-making is crucial for building trust and ensuring accountability.


What are the best practices for conducting human rights impact assessments for AI systems?

speaker

Chris Martin


explanation

Human rights impact assessments are important for mitigating potential problems early in AI deployment.


How can countries address workforce displacement and educational requirements resulting from AI adoption?

speaker

Chris Martin


explanation

Preparing populations for AI-driven changes in the job market is crucial for future-proofing societies.


What are effective strategies for fostering investment and entrepreneurship in AI while ensuring diversity and inclusivity?

speaker

Chris Martin


explanation

Building a diverse and inclusive AI innovation ecosystem is critical for ethical AI development.


How can we address the increasing energy consumption requirements of AI systems to align with environmental goals?

speaker

Chris Martin


explanation

The growing energy demands of AI pose significant environmental challenges that need to be addressed.


What are the most effective ways to integrate stakeholder feedback in AI system development and deployment?

speaker

Matthew Sharp


explanation

Incorporating diverse perspectives is crucial for developing ethical and human-centered AI systems.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Day 0 Event #170 2024 Year of All Elections: Did Democracy Will Survive?

Day 0 Event #170 2024 Year of All Elections: Did Democracy Will Survive?

Session at a Glance

Summary

This discussion focused on efforts to combat disinformation and protect election integrity across different regions, particularly in light of recent and upcoming elections. Speakers from European institutions highlighted initiatives like the European Digital Media Observatory (EDMO) and legislative measures such as the Digital Services Act to monitor and counter disinformation. The European Parliament’s efforts included pre-bunking videos and media literacy campaigns to empower voters. In contrast, the United States was described as regressing in its approach, with platforms reducing content moderation and a rise in polarization and hate speech. The speaker from Africa Check emphasized unique challenges in Africa, including limited internet access, language barriers, and concerns about potential misuse of anti-disinformation laws. Despite fears about the impact of generative AI on elections in 2024, the final speaker noted that while AI-generated content played a role in elections, it did not have the catastrophic effects some had predicted. However, continued vigilance and research on AI’s impact on elections was stressed as crucial. The discussion highlighted regional differences in approaches to combating disinformation, with Europe taking a more regulatory stance, while other regions face distinct challenges in implementing similar measures. Overall, the speakers emphasized the importance of collaboration, media literacy, and ongoing monitoring to address the evolving landscape of online disinformation and its potential impact on democratic processes.


Keypoints

Major discussion points:


– European efforts to combat disinformation, including legislation, rapid response systems, and initiatives like EDMO


– US challenges with disinformation, including platform inaction and polarization


– African experiences with disinformation in elections, including targeting of journalists and electoral bodies


– The impact of generative AI on elections in 2024 and looking ahead to 2025


– Differing views on the value of legislative approaches to combating disinformation in different regions


The overall purpose of the discussion was to examine approaches to combating disinformation and protecting election integrity across different regions, with a focus on recent and upcoming elections. Speakers shared experiences and initiatives from Europe, the US, and Africa.


The tone was largely informative and analytical, with speakers providing overviews of the situation in their respective regions. There was a sense of concern about the challenges posed by disinformation, but also some cautious optimism about efforts to address it, particularly in Europe. The tone became slightly more urgent when discussing the US situation and the potential impacts of AI going forward.


Speakers

– GIACOMO MAZZONE: Moderator


– ALBERTO RABBACHIN: Representative from the European Commission


– GIOVANNI ZAGNI: Representative from the European Digital Media Observatory (EDMO)


– PAULA GORI: Representative from EDMO


– DELPHINE COLARD: Spokesperson at the European Parliament


– BENJAMIN SHULTZ: Representative from American Sunlight Project


– PHILILE NTOMBELA: Researcher at Africa Check (South African office)


– CLAES H. DE VREESE: University of Amsterdam, Member of Executive Board of EDMO


Additional speakers:


– VIDEO: Narrator in a pre-recorded video about disinformation


Full session report

The discussion focused on efforts to combat disinformation and protect election integrity across different regions, particularly in light of recent and upcoming elections. Speakers from various institutions and organizations shared insights on the challenges and strategies employed in Europe, the United States, and Africa.


European Efforts and the Role of EDMO


Giovanni Zagni, from the European Digital Media Observatory (EDMO), explained EDMO’s crucial role in monitoring disinformation across the European Union. This initiative brings together fact-checkers, researchers, and other stakeholders to provide a comprehensive overview of the disinformation landscape in Europe. EDMO’s work includes coordinating national hubs, conducting research, and providing policy recommendations.


Delphine Colard, spokesperson for the European Parliament, outlined additional measures taken by the institution. These include the creation of a website explaining election integrity measures and the production of pre-bunking videos designed to educate voters about disinformation techniques. Colard emphasized the importance of media literacy campaigns in empowering voters to critically evaluate information. She also mentioned the potential establishment of a special committee on European democracy shields by the European Parliament.


Alberto Rabbachin, representing the European Commission, highlighted the activation of a rapid response system for European elections. This system aims to quickly identify and address disinformation threats as they emerge, demonstrating a coordinated approach to tackling disinformation in Europe.


Challenges in the United States


Benjamin Shultz, representing the American Sunlight Project, painted a concerning picture of the situation in the United States. He described a regression in efforts to combat disinformation, characterized by platforms “giving up” on content moderation. This has led to a rise in far-right narratives claiming censorship and a proliferation of deepfakes targeting politicians.


Shultz highlighted specific examples, including a recent report by the American Sunlight Project on sexually explicit deepfakes targeting members of Congress. He also noted the lack of regulation on election integrity measures in the US, contrasting sharply with the European approach. Shultz expressed concern about platforms like Meta’s third-party fact-checking program and issues with content moderation on YouTube.


African Perspective on Disinformation


Philile Ntombela, a researcher at Africa Check’s South African office, provided insights into the unique challenges faced in combating disinformation in Africa. These include:


1. Targeting of journalists and judiciary bodies by disinformation campaigns


2. A significant digital divide limiting access to fact-checking resources


3. Language barriers that complicate fact-checking efforts


4. Concerns about the potential misuse of anti-disinformation laws for censorship


Ntombela shared an example of how fact-checkers and journalists in South Africa faced accusations of bias when attempting to fact-check politicians’ statements. This led to the formation of an Elections Coalition, which included journalists and media houses working together on fact-checking efforts.


She also highlighted the Africa Facts Network declaration, an initiative to increase collaboration among African fact-checking organizations. Ntombela emphasized the need for context-specific solutions that take into account local challenges and potential risks associated with strict regulations.


Impact of AI on Elections


Claes H. de Vreese, from the University of Amsterdam and a member of EDMO’s Executive Board, addressed the role of generative AI in recent elections. He described the current situation as being “between relief and high alert.” While AI-generated content played a role in the 2024 elections, it did not have the catastrophic effects that some had feared.


However, de Vreese emphasized the importance of continued vigilance and research on AI’s impact on elections. He suggested that observatories like EDMO should continue monitoring how these technologies are deployed across various aspects of the electoral process in future elections, such as those in 2025.


Unresolved Issues and Future Directions


Several key issues remain unresolved and warrant further attention:


1. Balancing free speech concerns with the need to combat disinformation, particularly in the US


2. Addressing the digital divide and language barriers in combating disinformation in Africa


3. Understanding the long-term impact of AI-generated content on election integrity


4. Assessing the effectiveness of current platform policies in addressing disinformation globally


The speakers discussed whether a legislative framework similar to Europe’s would be helpful in their regions. While some saw potential benefits, others expressed concerns about the potential for misuse or unintended consequences.


The discussion concluded with a question from the moderator about the most pressing issues for the coming year. Responses varied by region but included the need for continued monitoring, improved collaboration between stakeholders, and addressing the challenges posed by emerging technologies.


The discussion highlighted the evolving nature of the disinformation landscape and the importance of ongoing research, collaboration, and adaptive strategies to protect election integrity in diverse global contexts.


Session Transcript

ALBERTO RABBACHIN: were committed to analyze the flags and decide, on the basis of their terms of service, if an action needed to be taken. This system was activated for the European election, for the French election, and for the Romanian election, and a similar mechanism was put in place for the Moldovan election. I’m basically going towards the end of my presentation; I just wanted to have a focus on the Romanian election. We know that it was very difficult. There was a lot of foreign interference in this election, and this was shown also by the number of flags that the rapid response system saw, with more than a thousand flags exchanged, coming from civil society organizations and fact-checkers and going to the major platforms. Alberto? Yes, I’m concluding. Very quickly: I mentioned the European Digital Media Observatory. This is a bottom-up initiative financed by the Commission; we have put more than 30 million euros into this initiative. It has 14 hubs, and it has a system for monitoring disinformation across the EU and conducting investigations. I’m sure that Giovanni will give you all the details on this. I will stop here, and I’m happy to take any questions if you need.


GIACOMO MAZZONE: Thank you very much. Questions will come at the end, because right now we are very tight on time. Giovanni, you have been asked to continue and to complement the picture.


GIOVANNI ZAGNI: Yes, thank you. Thank you, Alberto. I’ll try and share my screen, because I have my presentation there. We’ll have a couple of seconds of embarrassed silence. Something is happening, yes, that’s magic, okay, that’s great. Good afternoon. It is great for me to present our work with the European Digital Media Observatory in such an important venue as the 2024 Internet Governance Forum. My time is short, so I will dive right in. First of all, I would like to present a couple of key pieces of information about the observatory, which is usually known through its acronym EDMO. Alberto mentioned it briefly, so since I have a couple of minutes more, I’ll try to present it. The observatory was established in 2020 as a project co-funded by the European Union. Its core consortium is composed of universities, research centres, fact-checkers, technology companies and media literacy experts. Besides the coordinating core, there are currently 14 hubs connected to EDMO, sometimes covering just one country and sometimes a larger area in Europe. The concept behind the hubs is to cover the variety of languages and the specificity of media landscapes across the Union, since the challenges posed by disinformation are clearly very different from Slovakia to Portugal and from Finland to Greece. The general scope of EDMO is to obtain a comprehensive coverage of the European media ecosystem, mainly with regard to disinformation and all the connected issues, and to propose, as well as enact, new and effective strategies to tackle them. For the broadness of its scope and its multi-stakeholder approach, it is a unique experience in the European landscape, able to carry out many different efforts. One, to monitor disinformation across the continent through a network of 50-plus fact-checking organisations working together on a regular basis in monthly briefs and investigations. Two, to provide analysis and suggestions in the policy area, with a special focus on the code of practice on disinformation that Alberto mentioned right before. Three, to coordinate and to promote media literacy activities such as the one that I’ll present in a minute. And four, to contribute to the research landscape in the field, with a special effort in promoting data access for researchers; but on this we have Claes here, who will tell us more in a minute. Let’s try and be very practical and specific about what we have done and what we are currently doing. I will give only a couple of examples of the many activities we are carrying out, and I invite you to visit our website edmo.eu to know more. First example: ahead of the 2024 European elections we decided to set up a task force specifically to monitor the media ecosystem and to provide insight, analysis and action around the issue of disinformation. As part of this effort we set up a daily newsletter that, thanks to the hubs and the day-to-day work of fact-checkers in the field, updated policymakers, journalists and experts about the main disinformation narratives and the most important issues of the day, providing a connecting function which is usually made difficult at the European level, for example because of the great linguistic variety of Europe. Let me show you an issue I selected almost randomly. This is issue number 46, on June 10, 2024. It had three main items. One about how disinformation tried to exploit problems with polling stations in Spain on election day, as detected by Spanish fact-checkers.
Another one about how a top Swedish candidate’s campaign was likely boosted by coordinated behaviour by X accounts, linking to a Swedish report in the newsletter. And the third one about the rise of disinformation targeting the 2024 Paris Olympics, which was spotted by the big French newswire agency AFP and its Factuel department. At the same time we published reports and weekly insights about the main trends in disinformation in Europe, and we promoted a Europe-wide media literacy effort with the hashtag BeElectionSmart. The BeElectionSmart campaign was an EDMO initiative to support citizens in finding reliable information about the elections and recognizing false or manipulative content ahead of the elections themselves. Each Monday from 29 April to 3 June 2024, a new message along with practical tips was published on the websites and social media accounts of all 14 EDMO hubs, covering all EU member states. Most of the activities were new ideas, never tried out before at the European scale, and some of these activities turned out to be successful pilot projects that we have since adapted to new scenarios. For example, the Romanian elections of late November 2024 that were mentioned before made headlines in Europe and beyond when a previously little-known candidate was able to finish first in the popular vote. Citing foreign interference and tampering with the regularity of the electoral campaign, on December 6th, 2024, the country’s constitutional court annulled the results and ordered the first round of the elections to be held again. Drawing from a pilot during the last few weeks before the EU elections in June that Alberto mentioned, in the context of the Romanian elections we activated a rapid response system, a mechanism through which members of the EDMO community can proactively flag to very large online platforms suspicious and troublesome cases. It is then up to the platforms to take action or not, according to their terms of service. Moreover, on the EDMO website, we translated and made available to the community at least three analyses of the role of social media in the Romanian elections. Just yesterday, we made progress in an effort to provide a technological tool based on AI, which is currently in its testing phase, to Romanian fact-checkers, a tool which is apparently quite good at spotting networks of social media accounts carrying on coordinated campaigns of dissemination of suspicious content. This is the result of cooperation between many actors: the researchers involved in developing the tool, who are part of the EDMO community; the Bulgarian-Romanian Observatory of Digital Media, or BROD, which is one of the most active regional hubs in the EDMO ecosystem; and EDMO acting as coordinator and facilitator of these exchanges. It is tiresome, intensive, but also very rewarding work. And let me conclude with some actionable advice, if this short presentation gave you some food for thought. First, disinformation is an issue that naturally crosses disciplines and fields. It is crucial to build a multidisciplinary network of practitioners. Second, it is also necessary to find means to connect those practitioners: if not a big and expensive project like EDMO, then just a newsletter, or a comms group, or an app. Third, to help you communicate the results and attract new forces for your effort, you will need to produce an easy-to-read, easy-to-share output. There are many for EDMO, and you can find all of them on the edmo.eu website.
Finally, I invite you to get in contact with us at EDMO to learn more about our experience, our difficulties, our few successes, and our many challenges. We are very happy to share what we know and to learn about what we do not. And thank you for your attention.


GIACOMO MAZZONE: Thank you very much, Giovanni. I hand over to Paula, please.


PAULA GORI: Thank you, Giovanni, for sharing the work that we are doing at EDMO. I think you gave a very impressive presentation of the work we did, and I like how you concluded: we are trying to learn, but we are open to learning even more. That is why events like today are actually very important, also to learn from other experiences. But now I would like to give the floor to Delphine Colard. She is the spokesperson at the European Parliament, and she will tell us what the European Parliament actually did. Because, as you know, in June we had the EU elections, and that was quite an important moment for the Parliament. So the floor is yours, Delphine.


DELPHINE COLARD: Well, thank you, and thank you for the opportunity to join you remotely and to talk here today. Indeed, the Parliament has been active in this area since 2015. As co-legislator, it has been pushing forward legislation, the legislation that my colleague from the Commission outlined, to protect citizens from the detrimental and harmful effects of the Internet, while also promoting freedom of speech and ensuring consumers have access to trustworthy information. This was at the core of the priorities during the past legislature, and it will remain at the core in the new legislature that has just started. And if we take stock now of the European elections that took place last June: the European elections are conducted hand in hand with the 27 EU member states. The European Parliament was adding a layer, deploying a go-to-vote campaign to mobilize as many people as possible by showing the added value of European democracy. An important part of this democracy campaign, of this communication strategy, was to counter and prevent disinformation from harming the electoral process. The idea was to anticipate potential disruptions that could be expected in connection with the European elections. To this end we cooperated with the other EU institutions, from the colleagues of the Commission that you just heard, with the EU member states through a rapid alert system, and with the fact-checking community, to get a complete picture. And I have to say that the in-depth analysis provided throughout the period by the European Digital Media Observatory, which we just heard about from Giovanni, was really, really worthwhile. This was instrumental to having this whole-of-society approach. From our internal analysis in the Parliament of national elections in the member states, we knew that we could expect attempts to sow distrust in the electoral processes, alleging that elections are fraudulent or rigged, spreading false voting instructions, or sowing polarization, especially around controversial topics. So what we wanted was to make sure that European citizens were exposed to factual and trustworthy information, to empower them to recognize the signs of disinformation, and also to give them some tools to tackle it. We were inspired by many good practices in the different member states, and following similar examples from Estonia and the Nordic countries, we set up a website about the European elections, explaining the technicalities of the elections. I hope you see the slide, because I don’t. The measures, perfect. The measures put in place by the European member states were explained, and the website also explained at length how the EU ensured free and fair elections. So the idea of the website was to inform about the different aspects of election integrity, from information manipulation to data security, and it also equipped voters with tools on how to tackle disinformation. This is one example. But of course, we also developed a series of pre-bunking videos explaining how to avoid common disinformation techniques: for example, taking advantage of strong emotions to manipulate, polarizing attempts, or flooding the information space with contradictory versions of the same event. And thanks to the External Action Service, our partners in foreign policy, the videos were also available in non-EU languages, including Ukrainian, Chinese, Arabic and Russian. And I think I have a short version that we can show for 40 seconds now.
Thank you. And it’s coming.


VIDEO: Disinformation can be a threat to our democracy. People who want to manipulate us with disinformation often use content with strong emotions, such as anger, fear or excitement. When we feel strong emotions, we are more likely to hit the like or share button without checking if the content is true. By doing this, we help spread disinformation to our friends and families. What can you do? Watch out for very emotional content, such as sensational headlines, strong language and dramatic pictures. Question what you see and read before you share: things you would question if somebody told you face to face, you should also question online. Take a step back. Pause and resist the reflex to react without thinking.


DELPHINE COLARD: So you see, this was an example of what was spread throughout the period before the elections, shared via social media, including TikTok. In addition, we organized briefings and workshops for civil society organizations, for youth, for educators, for journalists, for content creators. The idea there was really to engage different audiences, providing tips and tricks on how to detect and avoid disinformation techniques. We also reached members of the European Parliament with a specific guide. One element we are particularly proud of is establishing contact with many young people across Europe, especially first-time voters, via Euroscola and what we call the European Parliament’s Ambassador School Programme. Those are flagship programmes of the Parliament for students. As you know, raising awareness of the threat is a key and long-lasting solution that requires the involvement of society as a whole, and it starts with education. These were two examples, or three examples. One element that I also want to highlight is the importance we placed on having strong relationships with private entities and civil society organisations to convey the importance of voting, and this message was spread as widely as possible. It was instrumental to have tech companies and other companies on board, as was also mentioned in relation to the Code of Practice. I want to mention the importance of strong, independent, pluralistic media, which were instrumental in this fight. During the legislature, several pieces of legislation were passed at the European level, such as the European Media Freedom Act or the directive to protect journalists against abusive lawsuits, and we also tried to support the media in their work, through briefings, invitations or grants. We saw a lot of things during the last elections, maybe not the tsunami that we potentially feared, but there was an increase in information manipulation attempts targeting the European elections. Until now, we have not detected any that seemed capable of seriously distorting the conduct of these elections, and this is an assessment that we have shared with the other EU institutions and the European Digital Media Observatory; Giovanni can of course give you more information if you need it. But we have to remain vigilant beyond the European elections and continue monitoring, because the effect of disinformation is not a one-off. It is not only during the European elections or those big moments; it is a slow dripping that hollows out the stone. Look at what recently happened with the Romanian elections. That is something the Parliament is really scrutinizing at the moment, asking the Commission for information about it. In this legislature, we see that the Parliament is very eager to do more; it has passed several texts calling for efforts in this area to be stepped up. There will be, and that is new as of this week, a special committee that will focus on the European Democracy Shield, really to assess existing and planned legislation related to the protection of democracy. The Parliament is also asking to deal with the question of addictive design in social media platforms. So there is a lot of activity. And next week in the plenary, there will be two debates specifically on disinformation, especially disinformation during electoral periods.
Maybe to conclude, as Giovanni did, with some learnings, three main ones on our end. First, information manipulators really see elections as an opportunity to advance their own goals by smearing leaders and exploiting existing political issues, so that distrust erodes the credibility of the democratic system and its institutions. Second, good intentions and voluntary actions are not enough; legislation and regulation play an instrumental part, so the Parliament has been and will remain a key actor there as co-legislator, shaping laws that are fit for the digital age. And third, as Giovanni already underlined, it is really important to continue to implement a whole-of-society approach, learning from each other’s practices and programmes to redouble our efforts to make society more resistant to destabilization attempts. This is a bit of what we wanted to convey from the European Parliament. Thank you.


PAULA GORI: Thank you so much, Delphine. Indeed, as you said, maybe there wasn’t a tsunami, but as you rightly underlined, and as we have also underlined at EDMO, disinformation works rather drop after drop, so we cannot just focus on it ahead of the elections; it is rather a longer process. Right after these two presentations there are already so many keywords: media, journalism, fact-checking, media literacy, emotions, addictive design, digital platforms and so on. So you understand why the whole-of-society approach, but also why the multidisciplinary approach: if we know that emotions play a certain role, that is thanks to specific research in that field; if we know about fact-checking, that is another field of research. That is why institutions like EDMO, in collaboration with other organizations like the Parliament, the Commission, but also platforms, civil society organizations and so on, are so important. And now, after all these words about the EU, we thought it would be interesting to focus on other parts of the world, because as we were saying, this was a year of elections across the whole globe. So I am very happy to hand over to Benjamin Shultz from the American Sunlight Project, who will focus on the US. The floor is yours.


BENJAMIN SHULTZ: Awesome, thanks so much, Paula. I’m just going to share my screen real quick, if it wants to go. Okay, it looks like that was a success. Can everyone see? Yes. Okay, lovely. Well, thank you all very much. It’s wonderful to be here speaking with such a geographically, and in so many other ways, diverse crowd from all over the world, with different backgrounds, all here to talk about disinformation and how we make the internet a better, safer place. So thank you for having me. Obviously, in the US, we just had an election. Putting it bluntly, I think it’s a result that surprised a lot of people. And really, this year of elections in the States was kind of a weird one. In many ways, we regressed, and I’m going to explain this further over my seven or eight minutes; this is just a taste of what’s to come. We’re in a very different place in the States than we were four years ago, and even eight years ago, in terms of taking action on disinformation and in terms of platforms playing an active role in content moderation, trust, and safety across the board. We’ve seen things regress, which I think is the direct opposite of how things have gone in Europe, so it’s a very interesting phenomenon taking place. With that, I will jump into the slides here. This GIF is one of my favorite GIFs, and despite being funny, I think it really accurately describes the current US approach to disinformation and online harms of all kinds. Whack-a-troll is what we at the American Sunlight Project call it; that’s a play on whack-a-mole. As you can see, the cat is just trying to tap the fingers as they pop up, and as the cat tries to tap them, they go away, and the cat continues to do this for as long as I have this slide up. Again, this sums up where we’re at. In the last year, really two years, we have seen platforms pretty much just give up entirely. This is not specific to any particular platform, although I will say that some are certainly doing more giving up than others. Not wanting to impugn anyone’s integrity, I won’t name that platform, but I think we can all take a guess. And this is really problematic, because we have seen a massive proliferation of hate speech and of false information of all kinds, from false polling place locations to attacks against elected officials, doxing, things like this. This has become commonplace in the States in the last year or two. We’ve seen political polarization reach really unprecedented levels, and the political system is about as toxic as it has been, certainly in my lifetime. Even though that’s anecdotal and qualitative, it has certainly become worse in the last couple of years. This has been buoyed by the rise of the far right in the States. There has been a significant populist turn, as there has been in many other countries in the world, and we’ve seen platforms sort of go along with this, which I think is very different from how things have played out in Europe. In the States we do not have regulation such as the Digital Services Act, or really anything of the sort, and platforms, as you can see just from these headlines, have surrendered and given up. Terminations of trust and safety teams have taken place.
We’ve also seen the rise of a narrative of censorship, primarily from the far right, but also from limited sections of the far left. Really, the political fringe on both sides has started to claim that any content moderation, any action against false and malicious claims, is tantamount to censorship. In the States, of course, we have the First Amendment, which protects the right to freedom of speech, expression and assembly. This is of course a very American right, something that I think every single American supports: the right to freely express yourself. But where we’re seeing some conflict, politically, in the States, with platforms, the government and the incoming administration, is over where the boundary of that right lies. For instance, my organization, the American Sunlight Project, just released a new report last week on how malicious, sexually explicit deepfakes have actually affected members of Congress. What we found was that one in six women in Congress in the States is currently, right now on the internet, the subject of sexually explicit deepfakes. And what we’ve seen is that the Senate has already passed bills that would regulate these deepfakes and make it a criminal offense to spread them non-consensually, because you’re using someone’s likeness to denigrate them and portray them as being in pornographic material. But we’ve seen pushback in the House from the far right, who claim that this is a violation of free speech. So this is kind of the situation in the States right now, and I just want to be clear that everything I’ve said here is not me taking a position one way or another, just painting a picture of where we’re at. And, sorry, I should have skipped ahead a little bit earlier on the slide here, but we’ve seen this kind of democratization, in the most extreme way, of artificial intelligence and deepfake technology, but also of other types of malicious text-based content: we’ve found plenty of evidence of foreign bot networks that have played a significant role on various social media platforms in this election cycle. To the extent they changed voting behavior, I think that’s really impossible to measure. But certainly we have plenty of evidence, and not just us, but pretty much every organization working in this field in the States, that this type of content has been pervasive and getting into people’s feeds, whether it’s on TikTok or X or Facebook or Instagram or whatever. We’ve seen that algorithms, just as my European colleagues mentioned, favor content that is emotional or gets people riled up, content that makes them want to click more and scroll more. And we’ve certainly seen plenty of malicious, fake, and actually also illegal content making its way into people’s feeds. So again, to what extent that changed voting behavior, I’m not sure, but certainly people have been increasingly exposed to false and malicious content in this election cycle. Going back to the deepfake issue, and not just in terms of sexually explicit or image-based deepfakes, we’ve seen numerous instances in the States of election officials, and even Joe Biden himself, being spoofed and imitated by deepfakes. And again, in the States right now, just to paint the picture without taking a political position, a lot of this material is completely legal to make, create and disperse. Now, in this headline, the third one on the right here, New Hampshire officials investigate robocalls, et cetera, there was a criminal investigation, because it’s illegal to interfere this blatantly in an election. But there are plenty of other instances of this type of behavior which haven’t been prosecuted. And this is incredibly damaging. Again, in the States we have pretty much zero regulation on deepfakes or really any kind of election integrity measures; we have pretty much none, which is pretty much the direct opposite of how Europe has approached this issue. Certainly our institutions are structured differently, and I think it’s much harder for us to implement these kinds of measures, the DSA, GDPR, et cetera. But nonetheless, this really just highlights the issue.


GIACOMO MAZZONE: Thank you, Benjamin. Can you go to a close, please?


BENJAMIN SHULTZ: Yes, we’ll close it up. So one thing, I’ll wrap it up very quickly. Kamala Harris ran for president in 2020, and again, not getting political, but she has pretty much been the most attacked person in terms of gender and identity in the last four to five years in the States. Numerous studies, including one that my boss authored, found that of all gender- and identity-based attacks against any politician in the United States, Kamala Harris received roughly 80 percent of those attacks. And this, going back to the polarization and the toxicity of the American political system, highlights that, and it makes it incredibly difficult to get people to agree on a set of regulations, a set of rules for technologies or platforms, et cetera. And so with that, I will wrap it up and say that for the immediate term, the outlook in the States is not so good. Hopefully we get through this tough period and are able to be united, as Europe has been on this issue, improve our feeds, improve our political system and go from there.


PAULA GORI: Thank you very much, Benjamin. As we are running a little late, I will hand over immediately to Philile Ntombela from Africa Check. We are moving again geographically. Philile, the floor is yours.


PHILILE NTOMBELA: Good day, everybody. I’m going to just quickly share my screen as well, and then I’ll get started, if it actually shares. Can you see my screen at all? Not yet. OK. Now, yes, you can see it now. Fantastic. Amazing. Great. So I’m just going to quickly go off the screen and hope for the best. Great. OK, first and foremost, thank you for having me. My name is Philile Ntombela. I’m a researcher at the South African office of Africa Check. Africa Check is the continent’s first independent fact-checking organization, and we have offices in South Africa, Kenya, Nigeria, and Senegal. First of all, and I’m sure everybody in the room is aware, but we often like to remind people that misinformation is shared by well-meaning people who are trying to inform you and have no idea that the information is false, while disinformation is shared with an intent to mislead, has a goal in mind, often a political one, and the people who share it know that the information is false. Looking at patterns across the continent through this election year, we found that targeting journalists, the judiciary and other such bodies was a very powerful tactic. We found that journalists were accused of bias whenever they tried to fact-check. We had something called the Elections Coalition in South Africa, which included journalists and media houses who would either try to do a quick fact-check themselves, having been trained beforehand as part of our organization’s training programmes, or we helped them to fact-check, and often they were accused of bias whenever they fact-checked a specific politician and told that they support the opposition. We had rumors of electoral bodies favoring parties. A certain party in South Africa actually took our Independent Electoral Commission to the Constitutional Court, which is the highest court in our country, stating that they felt they were being marginalized and receiving unfair treatment. Of course, the Constitutional Court ruled against that, because it wasn’t actually true. It was more a publicity move to make people wonder whether both the Constitutional Court and the electoral commission are independent. And this is the last part, which again leads back to the court case with the Constitutional Court. We found that people are more connected, but voters are still vulnerable to misinformation. Media literacy is not widespread; a lot of people don’t have it. News media is increasingly putting important information behind paywalls, and on a continent like ours, which has a huge amount of economic inequality and poverty, it is very difficult for people to overcome those barriers. Language remains a barrier too: Africa has, I think, more than 2,000 languages across 53 states, and those are just the official ones. Sorry, let me go back here quickly. On top of connectivity, we found a report by the International Telecommunication Union which showed that, even in 2022, Africa still suffered from what we call the digital divide. We had the lowest level of internet connectivity, which means that whatever information people receive, they don’t have the opportunity to look it up, double-check it or send it to a fact-checking organization like Africa Check, which means that these issues remain a problem. Finally, we had platform accountability.
We found that on some platforms, especially YouTube, I don’t know if I’m supposed to mention it, people were able to share disinformation unchecked. One of the platforms has put a note with a sort of miniature fact-check, but that doesn’t actually curb the spread of the same content or the same posts carrying disinformation. We are part of the Meta third-party fact-checking program, and on that side we found that we were actually able to help curb the spread, because when we add a fact-check, it downgrades the post or even removes it. However, there is still far more to be done on social media, particularly in places where, as I said, it is very difficult for people to access that information in other ways. So if I find information that is incorrect, but I believe it and I share it and send it to somebody who has no way of finding out if it is true, then that information spreads even faster. And platforms’ algorithms definitely need to be able to pick up common phrases used for disinformation, particularly in election years, but in general overall. The biggest disinformation this year was fraud allegations, particularly in South Africa. Claims that the vote would be rigged were by far the worst. A specific politician started these; he is a very charismatic character, but has also had a lot of legal problems and problems with the IEC. So of course he started saying that the vote would be rigged even before the election season began, even the year before. The media then just shared this in parrot fashion: they did not actually fact-check it, or even frame it as something this person had said; it was shared verbatim. This then trended on social media. And here are some examples. On my left, you have a report by a nationwide newspaper which took the statement made by the politician and just made it the headline, which of course drives the idea rather than creating a sense that this is merely what somebody said. On the right, the conspiracy was then named the big lie. And we found that between 25 May and June, this narrative basically took over social media, with the biggest drivers being the ones in purple and then later the ones in turquoise.


GIACOMO MAZZONE: Sorry, we are running long. Could you come to a close, because we have the last speaker waiting and the next session starting soon. Thank you very much.


PHILILE NTOMBELA: Okay, sure. Then I’ll just speak about our stance on anti-disinformation regulation. We found that in Africa the backfire can be quite damaging: such laws can be used to stifle people through censorship, and people can also turn to covert means, for example platforms like WhatsApp, which have end-to-end encryption, and then we won’t have access to that content. It also runs the risk of penalizing misinformation instead of disinformation. So we decided to have a combined accord, created this year at the African Exchange, with 55 fact-checking organizations in more than 30 countries, committing to reaching offline and vulnerable communities, expanding access to reliable information, protecting fact-checkers from harassment, and collaborating with tech partners to innovate. For us, this was a collaborative approach rather than a legislative or legal one. And that is all from me. Thank you so much.


GIACOMO MAZZONE: Thank you, Philile. Sorry for that. The last speaker now is Claes. Please, Claes.


CLAES H. DE VREESE: Yes, so I suggest something radical, seeing that we have taken a lot of time and the next session is beginning soon. I’m Claes de Vreese. I work at the University of Amsterdam and I’m a member of the Executive Board of EDMO. I was going to talk about a specific risk, but from a broad perspective, around generative AI and elections in 2024. Let me just give you my take-home message rather than going through the whole presentation at this point of the panel. I think we are somewhere between relief and high alert. If you look across the elections of 2024, it will be hard to identify an AI-generated risk that really flipped an election in its last days. So on the one hand, you could say that is a big relief, because it is very different from the expectations going into 2024, when there were true and genuine fears about elections being overturned through generative AI. That didn’t happen. At the same time, all the evidence, and that’s the evidence that would be in the slides that I will now skip, shows there has also not been a single election in 2024 worldwide where generative AI and AI-generated material has not played a role. And I think that is the real take-home message of this discussion on AI: there is a certain sense of relief that 2024 did not become the absolutely catastrophic year, in a context where there was still an absence of regulation in this area and a technology that was available and deployed, but maybe not with the detrimental effects that were expected. Does that mean that the AI discussion is over as we move into a big election year in 2025? Absolutely not. It is important to look at the impact of generative AI, and EDMO will continue doing so in 2025 as elections take place in that year. It is important to not only look at the persuasion of voters, but to see what kind of role AI is playing in the entire ecosystem of elections, whether that is in the donation phase, in the mobilization phase, in spreading disinformation about political opponents, or in igniting and fueling already existing conflict lines and emotions in particular elections and societies. So let that be the take-home message for 2025: while 2024 did not become the AI catastrophe which was in many ways predicted by a lot of observers, also in this space, I believe that as we move into 2025 there is every reason for an observatory like EDMO to continue the work, to see how these technologies are being deployed across elections. And this is something that we should do collaboratively, also with centers and researchers and civil society from outside the European Union, to really get a better grasp on the impact of AI on elections.


GIACOMO MAZZONE: Thank you very much, Claes. Thank you for sacrificing your time. Just one question to the two non-European speakers. Philile and Ben, do you think that if you had a legislative framework like the one in Europe, this would make your life easier or not?


PHILILE NTOMBELA: Okay, I’ll go first. For us, no, it wouldn’t, for the reasons I mentioned in the presentation. There is a huge history of censorship and suppression, and once you create a law like that, unfortunately, because everything works on precedent, once one person is able to use that law against people who are actually trying to spread proper information, it can go wrong in so many ways. And so this is why we came up with that declaration at the Africa Facts Network: 50 fact-checking organizations around the continent signed up to say that they would rather we collaborate, including with responsive governance, so that we can try to fight disinformation and misinformation from a media literacy perspective, an outreach perspective, and an international but still within-the-continent perspective, rather than through laws, because of how laws can be manipulated, as we have seen with other laws in our countries already.


BENJAMIN SHULTZ: Yeah, I generally think that, yes, actually, it would help us in the States. Implementing something totally identical to the DSA, or any general law regulating platforms, would be difficult to deal with legally; I think it would face a lot of chopping down. But particularly around the protection of researchers and data access for researchers, these things would be extremely helpful and would enable civil society to do what we were doing in 2020, which was analyzing content from platforms and reporting on online harms, which we can’t do today.


GIACOMO MAZZONE: Thank you very much. Thank you to all the speakers, and thank you to the people in the room for their patience. Apologies to the speakers of the next session; we hand over to them now. Sorry for not taking questions, but we will be outside the room if you need anything or want to raise any point. Thank you.



ALBERTO RABBACHIN

Speech speed

134 words per minute

Speech length

210 words

Speech time

94 seconds

Rapid response system activated for European elections

Explanation

A rapid response system was implemented to combat disinformation during European elections. This system involved analyzing flags and deciding on actions based on platforms’ terms of service.


Evidence

The system was activated for the European election, French election, Romanian election, and Moldovan election.


Major Discussion Point

Efforts to combat disinformation in Europe


Agreed with

GIOVANNI ZAGNI


DELPHINE COLARD


BENJAMIN SHULTZ


PHILILE NTOMBELA


CLAES H. DE VREESE


Agreed on

Importance of monitoring disinformation in elections



GIOVANNI ZAGNI

Speech speed

145 words per minute

Speech length

1217 words

Speech time

503 seconds

European Digital Media Observatory monitors disinformation across EU

Explanation

The European Digital Media Observatory (EDMO) is a project that monitors disinformation across the European Union. It involves a network of fact-checkers, researchers, and media literacy experts working together to tackle disinformation.


Evidence

EDMO has 14 hubs covering different countries and regions in Europe, and it monitors disinformation through a network of 50-plus fact-checking organizations.


Major Discussion Point

Efforts to combat disinformation in Europe


Agreed with

DELPHINE COLARD


PHILILE NTOMBELA


Agreed on

Need for multi-stakeholder approach to combat disinformation



DELPHINE COLARD

Speech speed

147 words per minute

Speech length

1271 words

Speech time

518 seconds

European Parliament website explaining election integrity measures

Explanation

The European Parliament created a website to explain the technicalities of elections and measures to ensure free and fair elections. The website aimed to inform voters about various aspects of election integrity and equip them with tools to tackle disinformation.


Evidence

The website explained measures put in place by European member states and how the EU ensured free and fair elections.


Major Discussion Point

Efforts to combat disinformation in Europe


Agreed with

ALBERTO RABBACHIN


GIOVANNI ZAGNI


BENJAMIN SHULTZ


PHILILE NTOMBELA


CLAES H. DE VREESE


Agreed on

Importance of monitoring disinformation in elections


Differed with

BENJAMIN SHULTZ


PHILILE NTOMBELA


Differed on

Approach to regulating disinformation


Pre-bunking videos to explain disinformation techniques

Explanation

The European Parliament developed a series of pre-bunking videos to explain common disinformation techniques. These videos aimed to educate voters on how to avoid manipulation and recognize disinformation tactics.


Evidence

The videos covered topics such as emotional manipulation, polarization attempts, and flooding of information with contradictory versions of events.


Major Discussion Point

Efforts to combat disinformation in Europe


Agreed with

GIOVANNI ZAGNI


PHILILE NTOMBELA


Agreed on

Need for multi-stakeholder approach to combat disinformation



BENJAMIN SHULTZ

Speech speed

174 words per minute

Speech length

1361 words

Speech time

467 seconds

Platforms giving up on content moderation

Explanation

In the United States, social media platforms have largely abandoned content moderation efforts. This has led to a proliferation of hate speech, false information, and attacks against elected officials on these platforms.


Evidence

Termination of trust and safety teams at various platforms, and headlines indicating platforms’ surrender on content moderation.


Major Discussion Point

Disinformation challenges in the United States


Differed with

DELPHINE COLARD


PHILILE NTOMBELA


Differed on

Approach to regulating disinformation


Rise of far-right narratives claiming censorship

Explanation

There has been an increase in narratives from the far-right in the US claiming that content moderation is a form of censorship. This has made it difficult to implement measures against false and malicious claims on social media platforms.


Evidence

Pushback in the House of Representatives against bills regulating deepfakes, with claims that such regulation violates free speech.


Major Discussion Point

Disinformation challenges in the United States


Proliferation of deepfakes targeting politicians

Explanation

There has been a significant increase in the use of deepfake technology to create false or misleading content targeting politicians in the US. This includes sexually explicit deepfakes of female politicians and voice imitations of election officials.


Evidence

A study found that one in six women in Congress are subject to sexually explicit deepfakes. There were also instances of deepfake voice imitations of Joe Biden and other election officials.


Major Discussion Point

Disinformation challenges in the United States


Agreed with

ALBERTO RABBACHIN


GIOVANNI ZAGNI


DELPHINE COLARD


PHILILE NTOMBELA


CLAES H. DE VREESE


Agreed on

Importance of monitoring disinformation in elections



PHILILE NTOMBELA

Speech speed

147 words per minute

Speech length

1347 words

Speech time

549 seconds

Targeting of journalists and judiciary bodies

Explanation

In African elections, there has been a trend of targeting journalists and judiciary bodies with disinformation. Journalists attempting to fact-check were often accused of bias, while rumors spread about electoral bodies favoring certain parties.


Evidence

A political party in South Africa took the independent electoral commission to the Constitutional Court, claiming unfair treatment.


Major Discussion Point

Disinformation issues in Africa


Agreed with

GIOVANNI ZAGNI


DELPHINE COLARD


Agreed on

Need for multi-stakeholder approach to combat disinformation


Differed with

DELPHINE COLARD


BENJAMIN SHULTZ


Differed on

Approach to regulating disinformation


Digital divide limiting access to fact-checking

Explanation

Africa suffers from a significant digital divide, with the lowest level of internet connectivity. This limits people’s ability to fact-check information or access reliable sources, making them more vulnerable to misinformation.


Evidence

A report by the International Telecommunication Union showed that Africa had the lowest level of internet connectivity in 2022.


Major Discussion Point

Disinformation issues in Africa


Fraud allegations spreading rapidly on social media

Explanation

In African elections, particularly in South Africa, fraud allegations were a major form of disinformation. These claims spread rapidly on social media, often initiated by charismatic politicians and amplified by uncritical media coverage.


Evidence

A specific politician in South Africa started fraud allegations even before the election season began, which then trended on social media.


Major Discussion Point

Disinformation issues in Africa


Agreed with

ALBERTO RABBACHIN


GIOVANNI ZAGNI


DELPHINE COLARD


BENJAMIN SHULTZ


CLAES H. DE VREESE


Agreed on

Importance of monitoring disinformation in elections



CLAES H. DE VREESE

Speech speed

175 words per minute

Speech length

491 words

Speech time

167 seconds

Generative AI played a role but did not overturn elections in 2024

Explanation

While generative AI was used in elections worldwide in 2024, it did not have the catastrophic impact that was initially feared. There was no evidence of AI-generated content flipping election results in the final days of campaigns.


Evidence

No single election in 2024 was overturned due to AI-generated material, despite its presence in every election.


Major Discussion Point

Impact of AI on elections


Agreed with

ALBERTO RABBACHIN


GIOVANNI ZAGNI


DELPHINE COLARD


BENJAMIN SHULTZ


PHILILE NTOMBELA


Agreed on

Importance of monitoring disinformation in elections


Need to monitor AI’s impact across entire election ecosystem

Explanation

It’s important to continue monitoring the impact of AI on elections beyond just voter persuasion. AI’s role in various aspects of the election process, including donation, mobilization, and spreading disinformation about opponents, needs to be studied.


Major Discussion Point

Impact of AI on elections


Agreements

Agreement Points

Importance of monitoring disinformation in elections

speakers

ALBERTO RABBACHIN


GIOVANNI ZAGNI


DELPHINE COLARD


BENJAMIN SHULTZ


PHILILE NTOMBELA


CLAES H. DE VREESE


arguments

Rapid response system activated for European elections


European Digital Media Observatory monitors disinformation across EU


European Parliament website explaining election integrity measures


Proliferation of deepfakes targeting politicians


Fraud allegations spreading rapidly on social media


Generative AI played a role but did not overturn elections in 2024


summary

All speakers emphasized the importance of monitoring and addressing disinformation in elections, whether through rapid response systems, observatories, or analysis of AI-generated content.


Need for multi-stakeholder approach to combat disinformation

speakers

GIOVANNI ZAGNI


DELPHINE COLARD


PHILILE NTOMBELA


arguments

European Digital Media Observatory monitors disinformation across EU


Pre-bunking videos to explain disinformation techniques


Targeting of journalists and judiciary bodies


summary

These speakers highlighted the importance of involving various stakeholders, including fact-checkers, researchers, media literacy experts, and government bodies in combating disinformation.


Similar Viewpoints

These speakers shared a positive view of the European Union’s efforts to combat disinformation through various initiatives and tools.

speakers

ALBERTO RABBACHIN


GIOVANNI ZAGNI


DELPHINE COLARD


arguments

Rapid response system activated for European elections


European Digital Media Observatory monitors disinformation across EU


European Parliament website explaining election integrity measures


Both speakers highlighted challenges in their respective regions (US and Africa) related to the spread of disinformation, particularly due to platform issues or lack of access to fact-checking resources.

speakers

BENJAMIN SHULTZ


PHILILE NTOMBELA


arguments

Platforms giving up on content moderation


Digital divide limiting access to fact-checking


Unexpected Consensus

Impact of AI on 2024 elections

speakers

CLAES H. DE VREESE


BENJAMIN SHULTZ


arguments

Generative AI played a role but did not overturn elections in 2024


Proliferation of deepfakes targeting politicians


explanation

Despite concerns about AI’s potential to significantly disrupt elections, both speakers noted that while AI and deepfakes were present in elections, they did not have the catastrophic impact that was initially feared.


Overall Assessment

Summary

The main areas of agreement included the importance of monitoring disinformation in elections, the need for a multi-stakeholder approach to combat disinformation, and the recognition that while AI and deepfakes were present in elections, they did not have the catastrophic impact initially feared.


Consensus level

There was a moderate level of consensus among the speakers, particularly on the importance of addressing disinformation. However, there were notable differences in approaches and challenges faced in different regions (EU vs. US vs. Africa). This implies that while there is a shared recognition of the problem, solutions may need to be tailored to specific regional contexts and legal frameworks.


Differences

Different Viewpoints

Approach to regulating disinformation

speakers

DELPHINE COLARD


BENJAMIN SHULTZ


PHILILE NTOMBELA


arguments

European Parliament website explaining election integrity measures


Platforms giving up on content moderation


Targeting of journalists and judiciary bodies


summary

The speakers disagreed on the effectiveness and appropriateness of regulatory approaches to combat disinformation. While the European approach favors strong regulation and platform accountability, the US has seen a retreat from content moderation, and the African perspective warns against potential misuse of regulations.


Unexpected Differences

Impact of AI on elections

speakers

CLAES H. DE VREESE


BENJAMIN SHULTZ


arguments

Generative AI played a role but did not overturn elections in 2024


Proliferation of deepfakes targeting politicians


explanation

While both speakers addressed AI’s role in elections, there was an unexpected difference in their assessment of its impact. De Vreese suggested a sense of relief that AI didn’t cause catastrophic effects in 2024, while Shultz highlighted significant concerns about deepfakes targeting politicians in the US.


Overall Assessment

summary

The main areas of disagreement centered around regulatory approaches to disinformation, the role of platforms in content moderation, and the impact of technological advancements like AI on elections.


difference_level

The level of disagreement was moderate to high, with significant variations in approaches and experiences across different regions. These differences highlight the complexity of addressing disinformation globally and the need for context-specific solutions.


Partial Agreements

Partial Agreements

All speakers agreed on the need to combat disinformation, but disagreed on the methods. While the European approach involves a coordinated observatory, the US faces challenges with platform cooperation, and Africa struggles with digital access issues.

speakers

GIOVANNI ZAGNI


BENJAMIN SHULTZ


PHILILE NTOMBELA


arguments

European Digital Media Observatory monitors disinformation across EU


Platforms giving up on content moderation


Digital divide limiting access to fact-checking



Takeaways

Key Takeaways

Europe has implemented coordinated efforts to combat disinformation, including rapid response systems, monitoring by EDMO, and pre-bunking campaigns


The US has seen a regression in platform content moderation and a rise in disinformation, particularly deepfakes


Africa faces unique challenges with disinformation due to the digital divide, language barriers, and risks of censorship


Generative AI played a role in 2024 elections but did not have the catastrophic impact some feared


There are significant differences in regulatory approaches to disinformation between Europe, the US, and Africa


Resolutions and Action Items

EDMO to continue monitoring disinformation across EU elections


European Parliament to establish a special committee on European democracy shields


African fact-checking organizations to collaborate through the Africa Facts Network declaration


EDMO to continue monitoring AI’s impact on elections in 2025


Unresolved Issues

How to balance free speech concerns with the need to combat disinformation, particularly in the US


How to address the digital divide and language barriers in combating disinformation in Africa


Long-term impact of AI-generated content on election integrity


Effectiveness of current platform policies in addressing disinformation globally


Suggested Compromises

In Africa, focus on collaborative efforts and media literacy rather than strict regulation to avoid potential misuse of laws


In the US, consider adopting some aspects of European regulations, particularly around researcher access to data, while respecting First Amendment concerns


Thought Provoking Comments

We found that journalists were accused of bias whenever they tried to fact-check. We had something called the Elections Coalition in South Africa, which included journalists and media houses who would either try to do a quick fact-check themselves, having been trained beforehand as part of our organization’s training programmes, or we helped them to fact-check, and often they were accused of bias whenever they fact-checked a specific politician and told that they support the opposition.

speaker

Philile Ntombela


reason

This comment highlights the challenges faced by fact-checkers and journalists in Africa, revealing how attempts to combat misinformation can be weaponized against them.


impact

It shifted the discussion to consider the unique challenges faced in different regions and the potential backlash against fact-checking efforts.


In the States we do not have regulation such as the Digital Services Act or really anything of the sort. And platforms, as you can see just from these headlines, have surrendered and given up.

speaker

Benjamin Shultz


reason

This comment provides a stark contrast between the regulatory approaches in the US and Europe, highlighting the lack of oversight on platforms in the US.


impact

It prompted a comparison of different regulatory approaches and their effects on platform behavior across regions.


So let that be the take-home message for 2025, that while 2024 did not become the AI catastrophe, which was in many ways predicted by a lot of observers also in this space, I believe that as we move into 2025, there’s all the reason for an observatory like EDMO to continue the work, to see how these technologies are being deployed across elections.

speaker

Claes H. de Vreese


reason

This comment provides a balanced perspective on the impact of AI in elections, acknowledging both the relief that catastrophic scenarios didn’t materialize and the ongoing need for vigilance.


impact

It shifted the discussion towards a more nuanced view of AI’s role in elections and emphasized the importance of continued monitoring and research.


Overall Assessment

These key comments shaped the discussion by highlighting the diverse challenges faced in different regions when combating disinformation, from accusations of bias in Africa to lack of regulation in the US. They also emphasized the evolving nature of threats, particularly regarding AI in elections. The discussion moved from specific regional experiences to broader comparisons of approaches and the need for ongoing vigilance and research. This led to a more nuanced understanding of the global landscape of disinformation and the varying strategies needed to address it effectively.


Follow-up Questions

How can we improve media literacy efforts to combat disinformation?

speaker

Delphine Colard


explanation

Delphine emphasized the importance of education and media literacy in combating disinformation, suggesting this is a key area for ongoing work and research.


What are the impacts of addictive design in social media platforms on the spread of disinformation?

speaker

Delphine Colard


explanation

Delphine mentioned that the European Parliament is interested in addressing this issue, indicating it’s an important area for further investigation.


How can we better measure the impact of disinformation on voting behavior?

speaker

Benjamin Shultz


explanation

Benjamin noted that it’s difficult to measure how disinformation changes voting behavior, suggesting this is an area that needs more research and better methodologies.


What are effective strategies for combating disinformation in contexts where media literacy is low and internet access is limited?

speaker

Philile Ntombela


explanation

Philile highlighted these challenges in the African context, indicating a need for research on strategies that work in these conditions.


How can we improve platform accountability and content moderation without risking censorship or suppression of free speech?

speaker

Philile Ntombela


explanation

Philile expressed concerns about potential negative consequences of strict regulations, suggesting a need for research on balanced approaches.


What are the long-term impacts of AI-generated content on elections and democratic processes?

speaker

Claes H. de Vreese


explanation

Claes emphasized the need for continued monitoring and research on AI’s role in elections beyond immediate persuasion effects.


How can we improve international collaboration in researching and combating disinformation?

speaker

Claes H. de Vreese


explanation

Claes suggested the need for collaborative efforts across countries to better understand the impact of AI on elections globally.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Day 0 Event #185 Universities impact in accelerating the adoption of free, open-source government software towards supporting the Blue Ocean eco-system

Day 0 Event #185 Universities impact in accelerating the adoption of free, open-source government software towards supporting the Blue Ocean eco-system

Session at a Glance

Summary

This discussion focused on the adoption and promotion of open-source software in Saudi universities. Representatives from King Khaled University, Imam Muhammad Bin Saud University, Al-Jouf University, and Al-Qasim University shared their experiences and strategies for implementing open-source solutions in academic and administrative processes.


The universities highlighted their efforts to integrate open-source programming into educational curricula, research initiatives, and innovation projects. They emphasized the importance of building sustainable technology ecosystems, strengthening local content, and encouraging open innovation. Several universities reported success in developing in-house solutions using open-source tools, which have been shared with other government agencies.


Partnerships with the private sector and government bodies were identified as crucial for advancing open-source adoption. The universities discussed various initiatives to support student projects, hackathons, and the creation of specialized research centers focused on open-source development. They also addressed the challenges of cultural change, financial sustainability, and the need for specialized training.


The participants agreed on the potential of open-source software to enhance operational efficiency, promote innovation, and support the digital economy. They called for a clear national roadmap for open-source software adoption and emphasized the role of universities in developing human capital and fostering a culture of open collaboration.


The discussion concluded with a recognition of the progress made in establishing a digital warehouse for government open-source software and the potential for Saudi Arabia to become a global leader in this field. The participants expressed optimism about the future of open-source initiatives in supporting the country’s digital transformation goals.


Keypoints

Major discussion points:


– Universities’ strategies and initiatives for adopting open-source software


– Successful experiences and partnerships in developing open-source solutions


– Impact of open-source adoption on academic, research and administrative processes


– Challenges and opportunities in promoting open-source culture in universities


– Future vision for open-source software development in Saudi Arabia


Overall purpose/goal:


The discussion aimed to explore how Saudi universities are implementing open-source software strategies, share successful experiences, and discuss ways to further promote open-source adoption in alignment with national digital transformation goals.


Tone:


The overall tone was positive and collaborative. Participants spoke enthusiastically about their universities’ open-source initiatives and expressed optimism about the future potential. There was a sense of shared purpose in advancing open-source adoption nationally. The tone became increasingly forward-looking towards the end as participants discussed future visions and recommendations.


Speakers

Speakers from the provided list:


– Dr. Muneerah Badr Almahasheer


Role: Electronic Education Dean at Imam Abdul Rahman bin Faisal University


Area of expertise: Electronic Education


– Dr. Hamed Saleh Alqahtani


Role: Dean of the Department of Electronic Services at the University of Al-Malik Khaled


– Dr. Sultan Alqahtani


Role: Dean of the Department of Information Technology and Electronic Learning at the University of Imam Muhammad Bin Saud


Additional speakers:


– Dr. Badr Al-Daghifig


Role: Deputy Dean of the Department of Electronic Education and Digital Transformation at the University of Al-Jawf


– Dr. Abdul Latif Al-Abdul Latif


Role: Deputy Dean of the Department of Electronic Education and Information Technology at the University of Al-Qasim


– Khaled Al-Ghamdi


Role: Engineer and consultant


– Dr. Ahmed Al-Swayyan


Role: Representative of the Digital Government Board


Full session report

Expanded Summary: Open-Source Software Adoption in Saudi Universities


This discussion focused on the adoption and promotion of open-source software in Saudi universities, bringing together representatives from various institutions to share their experiences and strategies. The conversation covered a wide range of topics, from implementation strategies to future visions for open-source development in Saudi Arabia.


University Strategies and Initiatives


The universities represented in the discussion have been developing comprehensive digital transformation strategies that incorporate open-source software. Dr. Hamed Saleh Alqahtani from King Khaled University highlighted their “Marina Path Initiative” and products like Sprint, Wasl, and A-plus. Dr. Sultan Alqahtani from Imam Muhammad Bin Saud University shared that their implementation process has been ongoing for about a year and five months, with the university contributing systems like the employment gate, access system, link service, and field training system to the digital warehouse. Notably, they have shared 7 million lines of code and 657 files.


Al-Jouf University, represented by Dr. Saleh Albahli, is focusing on agriculture-related applications and AI projects. Dr. Abdul Latif from Al-Qasim University mentioned their goal of achieving 40% of services built on open-source software by the end of the year.


Integration into Education and Research


Universities are integrating open-source software into educational programs and curricula. This integration goes beyond teaching students how to use open-source software to involving them actively in its development and application. Universities are also leveraging open-source solutions to support academic and research processes, developing in-house tools that improve operational efficiency and double as learning opportunities for students.


Talent Development and Community Building


The discussion highlighted the importance of recruiting and supporting young talent in open-source development. Universities are creating open-source programming factories and communities within their institutions to nurture this talent. They are also organizing hackathons and training initiatives to further develop skills and foster innovation.


Partnerships and Ecosystem Development


Collaboration with industry sectors and government bodies was identified as crucial for building sustainable open-source ecosystems. Universities are working to establish partnerships with the private sector and government agencies to support the long-term viability of open-source initiatives. These collaborations are seen as essential for promoting innovation, strengthening local content, and encouraging open innovation.


Challenges and Opportunities


Several challenges were identified, including the need for cultural change, ensuring financial sustainability, maintaining consistency, and providing specialized training. The Digital Government Agency’s role in creating licenses for open-source government programs was noted as a positive step.


The adoption of open-source software was seen as a way to reduce reliance on commercial software and cut costs for universities. Dr. Abdul Latif cited a forecast that government spending on commercial software will increase by 24% by 2025, equivalent to roughly 16 billion Riyals, underlining the scale of the potential savings.


There was also discussion about the potential for monetization of open-source solutions, although this topic revealed some differences in perspective among the participants. Some viewed monetization as a way to ensure sustainability, while others emphasized the importance of maintaining the open and collaborative nature of open-source development.


Future Vision and National Goals


The participants expressed a shared vision for Saudi Arabia to become a leader in open-source software development. Dr. Hamed Saleh Alqahtani called for the development of a clear national roadmap for open-source software adoption in universities. Dr. Sultan Alqahtani spoke about the ambition to become a world-class open-source software repository.


Dr. Muneerah Badr Almahasheer raised the importance of improving the classification and filtering of Saudi government open-source software in the digital warehouse. There were also suggestions to expand the digital warehouse to include international open-source products.


Conclusion and Next Steps


The discussion concluded with recognition of the progress made in establishing a digital warehouse for government open-source software and the potential for Saudi Arabia to become a global leader in this field. Participants agreed on the need for continued collaboration between universities, industry, and government to advance open-source adoption.


Key action items identified include:


1. Developing a clear national roadmap for open-source software adoption in Saudi universities


2. Improving the classification and filtering of government open-source software


3. Expanding partnerships between universities, industry, and government agencies


4. Increasing the percentage of university services built on open-source software


5. Focusing on user experience and the user journey in developing open-source software


The participants expressed optimism about the future of open-source initiatives in supporting the country’s digital transformation goals, while acknowledging the ongoing nature of this transition and the need for continued effort and innovation. The success of universities in digital transformation was celebrated, with a commitment to further progress in open-source adoption and development.


Session Transcript

Dr. Muneerah Badr Almahasheer: In the name of God, the most gracious, the most merciful. Welcome, Your Excellencies, leaders of e-education in Saudi universities; we begin by welcoming you. First of all, Dr. Hamed Saleh Al-Qahtani, Dean of the Department of Electronic Services at the University of Al-Malik Khaled. Welcome. Secondly, Dr. Sultan Al-Qahtani, Dean of the Department of Information Technology and Electronic Learning at the University of Imam Muhammad Bin Saud. Welcome. Next, Dr. Badr Al-Daghifig, Deputy Dean of the Department of Electronic Education and Digital Transformation at the University of Al-Jawf. Welcome, doctor. And finally, Dr. Abdul Latif Al-Abdul Latif, Deputy Dean of the Department of Electronic Education and Information Technology at the University of Al-Qasim. Welcome. Our workshop today is held in the Digital Government area and concerns the impact of universities in accelerating the adoption of open-source government software. The workshop relates to the strategy for free and open-source software in the Kingdom of Saudi Arabia and its six pillars: building a sustainable technology ecosystem, strengthening local content, encouraging open innovation, enabling human resources, improving cybersecurity, and supporting the digital economy. The main theme of our first dialogue is a strategy that paves the way for universities to advance open-source software technology, and how universities can play a leading role in promoting the Blue Ocean ecosystem for the adoption of free and open-source software. We would like to take this opportunity to talk with you, Dr. Hamed, about how King Khaled University can develop a comprehensive strategy for digital transformation that adopts free and open-source software as a main tool for innovation and technology, taking into account the role of this strategy and its relationship to educational programs, promoting innovation, and building communities and institutions within King Khaled University. So, please tell us about this experience. May God bless you, Doctor.


AUDIENCE: May God bless you, dear colleagues and distinguished guests. Of course, we thank the Digital Government for hosting us, and we also congratulate everyone for the awards they have received in this special and rare occasion. Of course, the University of King Khaled, during the short period in which it was under the care of the Prince of Asir region, Prince Turki bin Dalal, launched the University of King Khaled to the world. And it was through this launch that the strategy of the University of King Khaled was launched, and its goals are the qualitative strategy, which includes improving the educational output, achieving institutional excellence, promoting research and innovation, diversifying alternative desires, as well as improving the quality of life, and also promoting voluntary and social participation. We, in terms of e-services, and in terms of the digital transformation strategy, which supports these strategic goals of the University of King Khaled, we made sure that there is a fundamental component in this strategy, which is the adoption of open-source programming. Because we all know that we all have one goal, which is to have in this precious kingdom, a programmer among a thousand residents. We, in the University of King Khaled, and after the qualitative partnership, which was a year ago, and in this place, or in Ritz, the next day, we, in the University of King Khaled, glorified this partnership, and we started the real work. We sat with various university agencies, and we started the real work, starting with the University of Shun Ta’alimiya, through the implementation of decisions in these programs. We also talked with the Faculty of Accounting, through the adoption of these programs. We also reviewed many plans and curricula, in the technical educational materials, which have a practical side. We made sure that these programs are a fundamental part, in enhancing research, innovation, and digital sustainability in the University of King Khaled, and in our precious country. Also, we, in the electronic services, and through enhancing this culture, the culture of adopting open-source programming, we made sure on all levels, the level of applications, the level of construction, the level of networks, the level of digital transformation, and all levels, to glorify the implementation of these programs, for many reasons. The reasons of spending efficiency, the reasons of enhancing cyber security, the reasons of technical participation, the increase of productivity, and this, by the will of God, we refer to when it comes to the increase of productivity, in the field of research. We have, may God protect you, dear colleagues, we, in the University of King Khaled, in the Electronic Services Agency, we made sure that we have different platforms, in these open-source programs. And we have an initiative now, which is called the Marina Path Initiative, in cooperation with the National Center for Electronic Education, and we made sure, through this initiative, that they really support the adoption of open-source programming. Today, we, in the University of King Khaled, we cooperate with all departments, whether they are students, through the talent union, through the training union, and the training of students on open-source programming. Today, we are with the departments of higher studies, we sat and talked a lot about how to have a research grant in these fields related to innovative technologies, and support the adoption of open-source programming. 
And we have, by the will of God, in the University of King Khaled, a variety of numbers in this field. Also, the Business Leadership Center, there is a hotline between the departments that we mentioned, and the Business Leadership Center, through the integration of these programs, or the solutions that were built based on the adoption of open-source programming. Today, we are with the colleagues in the Faculty of Applications, we are working hard to have diplomas for remote learning programs in such programs, or the employment of programs in many diplomas. We, in the University of King Khaled, we made sure and we are keen to enhance innovation in the adoption of open-source programming through many hackathons that were held recently. We had a type of hackathon, an innovation hackathon, and one of the most important elements of these hackathons was to have a process based on open-source programming. The talk is long on this side, but we want to conclude this talk about the University of King Khaled and its strategy,


Dr. Muneerah Badr Almahasheer: and all its actions related to developing the educational process around open-source programming. We are still at the beginning of the journey of adopting open-source programming, and we promise you that next year you will see many products built on open-source software. Is that a promise? Yes, it is a promise. So it is a commitment made at this workshop? Yes, it is a promise from the South. God willing, God willing. Thank you, Dr. Hamed. I would now like to pass the floor to my colleague, Dr. Sultan Al-Qahtani, of the University of Imam Muhammad Bin Saud. We are sure that all universities share the same passion and the same creative work. We would like to ask Dr. Sultan to share his vision of how the university can contribute to establishing and supporting an open-source technology platform that strengthens small and medium-sized projects.


Dr.Sultan Alqahtani: Thank you, Dr. Neera, and I would like to thank my colleagues at the Digital Government for hosting this wonderful event. Today, we are at the University of King Khaled and through the University’s Digital Transformation Strategy, one of the goals of the University’s Digital Transformation Strategy is to promote research and innovation. Through this element, we have launched a number of initiatives for the development of open-source software and initiatives that have been launched in cooperation with the Innovation and Business Leadership Center, including the initiative to support small projects and graduate projects. Through this initiative, the University’s Digital Transformation Strategy works in conjunction with the Digital Government to support small and medium-sized projects. the digital warehouse, and also with the faculties that are related to support this system in adopting open and free software. Today, through this strategy and this initiative, the initiative to support small projects and graduate projects, the Faculty of Computer Science and Information and the Faculty of Applications is working on establishing a research university, and even the Faculty of Science and Research is working on establishing a research university specializing in open and free software, projects that are considered a type of projects in this field, and hopefully I will be able to tell a number of successful stories in this field, and also the students who are interested in learning and exchanging skills between students, teaching staff, and those who are interested in this field of open and free software. Today, through the initiative of the Blue Ocean and the Digital Government, we aim to establish specialized research universities, and specialized student universities in the field of open and free software, in the field of information security. We have a number of approximately 12 projects now, which are considered small projects, adopted under the guidance of the Imam, all of them are technical projects, and all of them are based on open and free software. A number of research projects have been transformed from research laboratories into startup companies. For example, one of the projects, one of the students in the message of the Magistrate was able to present a developed software using open and free software, a software specialized in DevSecOps, and this software has now become a tool that is marketed commercially, because it has become a small company, and it is protected by the Black & Play Business Projects, with the National Security Authority. One of the projects, the graduation project, has now become not a graduation project for a bachelor’s level students, but a graduation project that produces solid scientific results. The idea was to use artificial intelligence tools, such as FastText, as an open source component, used by students in a research lab in the college. The idea became popular, and it became an open source tool to analyze bug reports, or what is called software reports, and classify them as security, whether it is a security report or not. If it is a security report, it is classified as a type of security, such as Buffer of Load, Denial of Service, and so on. All these examples I mentioned are the results of initiatives through a strategic plan to support small projects and adopt them in the College’s Business Projects. 
These are student projects, which have now become research projects, and as I mentioned earlier, I will explain the name, ShieldOps.net has now become a startup company, adopted by the National Security Authority, and the Black & Play Business Projects, and this is one of the results of the initiative. Today, the methodology we follow in the Imam University, in this field, is still on the way, and the implementation is now about a year and five months, and God willing, the process will develop, and there will be a complete adoption of open source software, whether on the level of scientific research, or even on the level of development research in the field of electronic information technology. So, God willing, in the second phase, I will talk in detail about the methodology we follow in the Imam University, and support the initiative of the Blue Ocean, or even support the store of free software, and the open source software of the Digital Government. Thank you.
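

The bug-report triage Dr. Sultan describes, first deciding whether a report is security-related and then labelling the vulnerability type (buffer overflow, denial of service, and so on), maps naturally onto fastText's supervised text-classification API. The sketch below only illustrates that general shape; the file names, label sets, and hyperparameters are assumptions for illustration and are not taken from the ShieldOps.net tool itself.

    import fasttext

    # fastText supervised format: one report per line, prefixed with "__label__<label>".
    # The training files and label sets below are hypothetical.
    #   security.train  -> __label__security / __label__not_security
    #   vuln_type.train -> __label__buffer_overflow, __label__denial_of_service, ...

    # Stage 1: decide whether a free-text bug report is security-relevant at all.
    security_model = fasttext.train_supervised(input="security.train", epoch=25, wordNgrams=2)

    # Stage 2: for security-relevant reports, predict the class of vulnerability.
    vuln_model = fasttext.train_supervised(input="vuln_type.train", epoch=25, wordNgrams=2)

    def classify_report(text: str) -> str:
        """Return a coarse triage label for one bug report."""
        labels, _ = security_model.predict(text, k=1)
        if labels[0] != "__label__security":
            return "not security"
        labels, _ = vuln_model.predict(text, k=1)
        return labels[0].replace("__label__", "")

    print(classify_report("server hangs when a single client floods the request queue"))

A fastText classifier of this kind trains in seconds on commodity hardware, which is one reason such open-source components are a common starting point for student research projects like the one described here.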


Dr. Muneerah Badr Almahasheer: Thank you, Dr. Sultan. This was in fact an inspiring account of the strategic relationship between software and projects, its connection to student clubs and research communities, community building, and the transformation of some graduation projects into startup companies. This is the kind of vision we rely on a great deal: just as we track the percentage of graduates who find employment, we believe universities should also track the percentage of projects that turn into small companies or institutions, ones that have a great impact, including on the university's strategy. And technology, for its part, moves fast. For this reason, the initiatives and strategies that are launched may not stay on paper; they may be implemented within the first months. As we heard from Dr. Hamed, there is an initiative they started last year, and now, God willing, the university has reached the creative category in this area. Right. So, God willing, good luck to everyone. God willing. God willing. We now move, with His Excellency Dr. Badr, to talk about Al-Jouf University. We know, Doctor, that Al-Jouf University is interested in increasing its ability to recruit and support young talent to strengthen the open-source software ecosystem. We do not want you to speak in generalities; we want you to tell us the university's secrets and its specific goals in this regard. Peace be upon you. Thank you, Doctor, and thank you to the members of the Digital Government Board and the IGF meeting. At the university, perhaps I should start with the general strategy and then elaborate on some of the things we have done. There is no doubt that universities today are no longer only educational institutions; they also nurture talented people, support enterprise, and have an investment dimension. This is something that opens doors for recruiting talent from all over the world,


AUDIENCE: and even from outside the Kingdom. The strategy that we have done internally, at least in cooperation with the Ministry of Education in the region, we went to public education, and the Faculty of Accounting adopted it in cooperation with the Digital Education and Transformation Office. Several projects were adopted by students, by talented people, in terms of software, and their research was directed to open-source software, in order to have the results for these projects. The program has not ended yet, and the results have not been announced yet. Maybe, God willing, next week or the following week, we will find out. Another thing is the training and introduction boards. Without a doubt, students, before enrolling in the university, must be informed about open-source software, so that we can direct them, at least, to the field of software, or the programs that are specialized in this field. Also, in terms of education, international students, since we are, thank God, in the Kingdom, we do not recruit international students from abroad, we have provided a small part in cooperation with the Ministry of Education, the Ministry of Higher Education, and the Ministry of Science and Research, to offer a scholarship for international students, in software and software engineering. And, thank God, we have a brother and a sister in Yemen. We have students studying software engineering now, and, by God’s will, God willing, they will contribute to the initiative. Also, among the things that we have targeted, are the hackathons, organized by the university. Last year, the university organized two hackathons, one for digital health, for women, and another hackathon for agriculture. Schools, public education, and other universities were invited, and the university participated, thank God. But, in these hackathons, there may be a seed of recruitment, and sponsorship of talents, who are interested in open-source software engineering. One of the projects adopted by the university, or the Institute of Digital Education, is one of the projects, from a group of students, specializing in the process of… It is an application for agriculture, to read plants, and get to know them, through artificial intelligence. So, we invited them, and we provided them with a servant, and the technical equipment, to develop this application, in the future, God willing. The work is still in progress, and also, in cooperation with the Institute of Digital Education, we have recruited one of the students. Dr. Sultan, on the subject of programs and curricula. These curricula must be expanded, especially in terms of programming regulations. They should include sections for open-source programming, as well as programs for open-source programming, e-learning programs, or short-term programs, in terms of adopting them to increase awareness and attract those interested in them, so that we can communicate with them later so that they can be among the trainers in terms of open-source programming. Last summer, we held a co-training session in Amada. We focused on the development of open-source programming. We also launched workshops by our colleagues in Amada to enhance the care of talented people and attract them to work in Amada in terms of open-source programming. Thank you. It’s clear that the care of talented people, attracting them and supporting them is a focus in Al-Jouf University. Al-Jouf University will be one of the fastest-growing open-source programming institutions in your university and in the country. 
We will now move on to my colleague, Abdul Latif Al-Abdul Latif, from Al-Qasim University, to talk about how Al-Qasim University’s role in promoting change towards a blue ocean system through open-source programming as a strategy and as a working system in Al-Qasim University. I congratulate you on the university’s victory in digital transformation. Thank you, doctor. Thank you very much. It’s a great opportunity to be here with my colleagues and we’re happy to welcome the Digital Governance Board. No doubt, our goal today is to promote open-source and free programming. No doubt, the success story began with the Board at the end of last year, when there was an agreement signed to promote open-source programming. Alhamdulillah, Al-Qasim University had three main goals when it came to promoting open-source programming through education, research, innovation, as well as strategic partnerships with different parties. In the field of education, Alhamdulillah, Al-Qasim University’s Faculty of Accounting has been working hard to implement a number of resolutions that aim to promote open-source programming. Last year, we were encouraged by the idea of managing projects in different departments through open-source tools and software, whether it’s AI, cyber security, or software development in general. Alhamdulillah, in the field of education, we’re taking steady steps, and with our colleagues in the Faculty, we’re providing the necessary support through lectures and courses. In the field of research, Alhamdulillah, we’re optimistic that at the beginning of 2025, a number of initiatives will be launched through the Faculty of Science and Research to target students and teachers. One of the goals is to build ideas in various technical fields based on open-source programming tools. We hope that this initiative will be strengthened in the coming year. As we mentioned, the idea of strategic partnerships started with the Faculty, and Alhamdulillah, the Faculty is grateful for that. It’s working with a number of external parties, and Mr. Khaled Al-Ghamdi is an example of that. Thank you. We’re always bothered by this, but it’s all good and blessed by Allah. Alhamdulillah, we believe that one of the challenges of strategic partnerships is the issue of existing experiences. Students and teachers need to connect with the private sector in this regard. Alhamdulillah, we see that these three main centers are very important, and I’d like to add that even in the field of information technology, we tried to benefit students, teachers, and their ideas by adding internal services in our university. With Allah’s blessing, we’ll be able to take special steps, and the students who participated in this initiative will benefit from it. Thank you very much, Dr.


Dr. Muneerah Badr Almahasheer: You’re welcome. Thank you, Dr. Abdul Latif. We’ll now move to the second topic of our discussion, and we’d like to link it to the strategy of open-source free programming. It consists of a number of main initiatives that are part of the programming and university projects, including public programming centers, open-source programming societies, open-source innovation labs, and partnerships with the private sector. Dr. Hamed, I didn’t forget to congratulate King Khaled University, but I was late in congratulating you when we talked about the success stories. We consider all universities that were in the creative category today to have succeeded, and we’re sure that all Saudi universities, with the support of the Ministry of Education, the University Council, and the state, God willing, will join in and we’ll present a unique model in the most famous digital transformation measure. In the second round, we’ll focus on the success stories and partnerships. We have a question for King Khaled University. What are the main successful experiments that led King Khaled University to develop open-source programming to support academic and administrative processes? How did these successes affect the academic and research community? Also, what is the secret of success and what are the local and international partnerships in this context? May God bless you, Dr. Hamed. Congratulations to all of you. The success of King Khaled University and Al-Qasim University is the success of the university in a big way. Dr. Hamed, your question is related to the success stories and partnerships. Before that, universities now have roles, not just in terms of teaching, they now have roles in developing the university. All of these opportunities will be easily achieved if there is a digital environment that will enable all the digital opportunities or even the national opportunities in 2030. Today, the success stories, if there wasn’t a stone, there wouldn’t be a success. The main stone was, in the past short period, the establishment of a factory for open-source programming and also a community of open-source programmers, especially that work was mostly related to huge data and artificial intelligence, but we were also affected in other areas related to cloud computing, institutional systems, and many other things related to applications and mobile applications. Now, we started with a stone by having a factory, we chose a team, a great team, we chose the best faculty of the University of Al-Khaled


AUDIENCE: to lead this factory. After that, we did a lot of training for them, and we also supported them logistically and financially so that the success stories of the University of Al-Khaled will be produced. Of course, we also determined, among the successes that we will mention later, the location of one of the most beautiful places in the University of Al-Khaled, and Khaled Dar, the architect of this place, and the students… We made this factory an innovative environment. It is the main hub for this factory and for the digital talents that will be reflected in the products we will mention in a few moments. At Malkhad University, the open-source programming had a big impact on all the goals of Malkhad University’s strategy 6, in terms of achieving or improving the quality of teaching and learning. Now Malkhad University students at the academic level, in terms of programming and digital skills, are competing with many universities in the world, whether they are international or international, in terms of getting university degrees. Also, at the academic level, we notice that the students are ready to enter the job market through some of the processes and through the type of training that we have. We have a great experience at Malkhad University, which is that we take the best 30 students from the Faculty of Computer Science, and they are trained in a type of open-source programming in the field of e-services. We found that this intensive training, which is very intensive, had excellent results. Many of them are now working in the ministries of a large foundation. At the research and innovative level, as my brother Khaled mentioned in this dialogue session, which I hope everyone will benefit from, I contacted my colleagues in the research and higher studies about the increase in the percentage of scientific research in open-source programming, which was used in the innovative technologies and research papers related to artificial intelligence. We found that 36% of the scientific papers used now and the published papers, were used in open-source programming. This is also a great achievement. One of the great achievements, the big center that Malkhad University has today, is the result of a very long work, which had an impact on improving the quality of university life. Today, administratively and operationally, more than 50% of the courses are now in Malkhad University. We have stability in business. We have transparency in business. We have operational and financial efficiency in business. This is the result of the adoption of open-source programming in the field of e-services. On the level of volunteering and social participation, we have great successes in how to use these hackathons for open-source programming. We had an initiative under the supervision of the regional governor, called Ajawid. Ajawid was a digital platform, where we conducted 10 practical training courses using open-source programming. The result of this training course was about 9,000 participants. Last week, Malkhad University was the first university to promote open-source and social participation. This result is also a collective work, including digital work. Now, we are at Malkhad University, with a new strategic goal, which is to diversify our initiatives and sustainability. Now, we are seriously thinking about how to promote many of the products based on open-source programming. So, no one will participate in the hackathons? No. Today, we have products like QX, which is one of the great platforms. 
Malkhad University has been launching this product for 7 to 8 years. This product is a great source of income for the university. We don’t want to close the door on universities. As we mentioned, there could be participation in some of the programs that are included in the digital government. We could also include the idea of marketing the products in universities. This is not prohibited. Why not? It is not prohibited. It is the future. It cannot be marketed in the Kingdom or abroad. It is an opportunity to have a different kind of work in open-source programming. We can say that this is one of the recommendations. The engineer and consultant Khalid Al-Ghamdi always says that universities should think about the social impact, which is a right for everyone, and a right for the state, as well as the economic impact and sustainability of the work in universities. I say yes. Great. On the contrary, we support this point. During the past 6 months, depending on the open-source programming, there are 3 types of products in the university. One of them is called Sprint.


Dr.Hamed Saleh Alqahtani: The goal is to use it in digital hackathons. It was a great experience, and I think it is a unique solution that is not available elsewhere at the moment. We have another platform called Wasl. The goal of Wasl is to improve the employee experience at King Khaled University, and we have succeeded in this. We also have A-plus, a platform that helps students exchange knowledge and experiences. I do not object to this, and Engineer Khaled agreed after this discussion that it should be made available soon. Great. Last but not least, all the success stories we mentioned can also be found among our colleagues in other universities. Right. But now it is up to all parties to achieve institutional excellence. Today, open-source programming must be adopted by all of us, and the culture of programming must be put into practice. There are many challenges, such as consistency, financial support, and digital capabilities, but hand in hand, and with the support of the Digital Government, we can reach a high degree of adoption, not only at the national level but also at the global level. I agree with you, Dr. Hamed, that universities have infrastructure, are rich in human minds, and have the ability to build different programs and to participate at the university level. Also, as one country, there are gaps in some universities and needs in some universities. We encourage turning the gaps between universities into integration among universities, to build a strong national university educational system.


Dr. Muneerah Badr Almahasheer: Thank you, Dr. Hamed. If you allow me, I would like to move to Dr. Sultan at the University of Imam Muhammad Bin Saud. My question to you is: how has the university used open-source programming to develop solutions that support its academic and research processes? Could you also give us examples of technical projects that have succeeded in cooperation with government agencies? We know that you have a number of established practices at the University of Imam Muhammad Bin Saud and that you have taken a step forward in the field of government software.


muhammed bin saud: Thank you, Dr. Hamed. To be honest, the iron is in the free and open-source programming, its participation and ways of dealing with it. I do not agree with what Dr. Hamed said about the exploitation of products and being an obstacle to one of the parties. On the contrary, the open-source programming opens wide and deep horizons in the topic of participation between the parties. Participation is not only the participation of the source code, and then it is over. Participation may be the participation of the knowledge, the participation of relations, the direct contact with government agencies and development teams. On the contrary, it is a contract of joint partnerships, and it is not an official agreement. Today, at the University of Imam Muhammad Bin Saud, from the first day of the launch of the digital warehouse, the digital government, the warehouse specializing in government programming, free and open-source, we have been contributing with a number of products that have been fully developed within the institution, with a national and specialized hand in this field. A number of systems, I have printed a piece of paper, so that I do not forget it, such as the employment gate, the access system, the link service, field training, a comprehensive system that connects the student and the private sector in the field training subject and complete its procedures. Also, the specialization system for medical students, how to complete their procedures between the government agencies, such as hospitals and the private sector, with their academic system inside the university, the academic instruction system, the communication system, which I will show you some details of, the financial system, the social service, and the electronic gate. These systems, a number of them, approximately, If we calculate the number of programs downloaded, it is more than 7 million line of codes. It is also more than the number of files and shares. This warehouse has more than 657 files and programs that were shared through this warehouse. We found that there is a very big interaction between the information technology community in the other governmental areas. Direct communication with us to inquire about some systems. Some systems are now fully used in other governmental areas. For example, the CRM communication system. Most of the governmental areas have CRM systems, but not all of them. The communication system is a technical system that serves the student and his teaching partner, as well as the employee, in communication between the departments within the university. It also serves a large number of university visitors, who can use this system without getting a direct user, only direct access through the phone. The system is now being used by 10 governmental areas. These 10 governmental areas interact with us directly. I have a development team. The development team is in the governmental areas. Most of the reports that we have taken from these governmental areas have been applied in our system. They have been developed in our current environment. We are using the latest version. This may take a long time at the warehouse level. As I said, the source code is not only a part of a program or a system that is used in governmental areas. No, there is a success story with other governmental areas. The tools used are being developed in other governmental areas. There is a lot of work. Communication is a very big thing. 
The warehouse serves one of the most important programs in Vision 2030, which is the development of human resources. There are a number of assessments that we have taken on Atecna through this warehouse. This is the first performance for all technical projects for Atecna. If there is not a system that can be used directly through the warehouse, a project is launched, such as what is known in other governmental areas. Today, the success story is being told through this warehouse.


AUDIENCE: We are grateful to our colleagues at the Digital Government Agency for their support and empowerment. We see this warehouse as an interactive environment between governmental areas, and we share its resources in a similar way. We achieve the efficiency of spending, and we also enhance the knowledge of other developed governmental areas. Through this warehouse, we can attract other governments at the technical level. There are security challenges, as Dr. Hamid pointed out in a recent exhibition. In order for this ecosystem to be completed, there must be a number of procedures that start with the Digital Government Agency. There must be a license, and this is one of the discussions that took place between the Digital Government Agency and the Intellectual Property Agency, and I was a part of this discussion. There must be a license, such as the Open Source License, which is available worldwide. This license must also be at the level of the Digital Government Agency, which is the number of the open source government programs. This license gives me trust that the systems that are shared, and this very large number of systems that are shared with other governmental areas, are used in the correct way, and there are no violations or violations of the agreement policies that are based on these systems. Today, we extend our welcome to everyone. We are not open. Of course, the goal is bigger and bigger. We are building an electronic system that is shared with most governmental areas, and it is not forbidden to make it cloud-based, to be SaaS, and therefore to be a subscription to the governmental areas, if there is support in this regard. You changed your mind. I said there is no problem. As you thought. There is no problem. On the contrary, to complete the system, there must be serious steps by the Digital Government Agency to support this. We mentioned that the payment is for sustainability. Very natural. Yes. This is the idea. I defend myself against Dr. Hamid. You may be in a group of people who are passionate about Open Source. No, we are happy with your support for us. On the contrary, I say that it is possible for sustainability, even part of sustainability, and it is not hidden from colleagues. Today, when I contract with a technology company to develop an internal system, I, honestly, the CRM is currently being developed by other governmental agencies. And I trust you, now I can name these agencies, can’t I, Doctor? Yes, like the trust of the Eastern Region, the Health Affairs in the National Guard, the National Center for Electronic Education, King Abdulaziz University, the General Institution for Technical and Professional Training, and until yesterday, there was a communication between them and the development team in Amana, asking about some files, how to test this system on the Distinct Environment Engine. This is a kind of participation. We are providing them with a free service, and at the same time, we are benefiting from these technical experiences. Absolutely. And there is no financial incentive to develop this product. On the contrary, the product is developing itself. That’s right, Doctor. There is no doubt that the user experience, and building the user journey inside the system of free software development, increases the maturity of the project, its efficiency, and also its inclusiveness, progress, and development. I thank you, really, for the rich intervention. And if possible, with the help of Dr. 
Badr at the University of Al-Jouf, if you could share with us, Doctor, the most prominent experiences in cooperation with the sectors, and you, as a university, mentioned the industrial sectors, as if there is a direction for the industrial sectors at the University of Al-Jouf, and I expect that all of this is related to the regional distinctions, and the interests of the university, and how can these partnerships build a sustainable system for open-source software? Ali, I will start, I will join you from the beginning. Okay. I am not in a hurry. I am not in a hurry. They will not invite us again. Yes. But, at the beginning of development and participation, it starts with the principle of open-source software. It does not prevent that there is a symbolic thing for the process of renting, or for the process of maintenance, that the process or the ecosystem of open-source software is sustained. Because, in the end, where will the expenses come from? It must have a specific budget. The universities’ partnerships in the processes of open-source software are centered around the partnerships with the technical departments, specifically in training and development, and also the opening of academies inside the university for these companies. Today, I was happy to sign a memorandum of understanding here, in Al-Muntadhar and Al-Hamd, with one of the companies that will open an academy, God willing, and this academy will be directed to open-source software, with the help of this company, by highlighting the important role of open-source software, and establishing a working group of experts, even from outside the Kingdom, remotely or in person at the university. We also have two partnerships with technical companies, to build specialized e-stores, which may be used in the processes of open-source software, in the matters of networks and the security of networks. Since we were able, through the partnership with one of the companies, a social partnership between them, to build this factory completely, with the latest technologies, which concern the matter of networks and the security of networks, will be directed to the processes of open-source software. Without a doubt, the most prominent partnership is with Al-Haya, with the support of Mr. Khaled Shakerla. He brought us one of the expert companies in the field of open-source software. Soon, we will announce the factory, and also to support the projects of graduation for students. Now, we will move to the institutional systems. Currently, the software that takes care of the institutional systems is the ERP system. As the ERP system is a large system that exists in most regions, we try to turn it into an open-source software, and the drawings are symbolic of it. If there are drawings, or you share them through the warehouse, we are happy to share them with everyone for free. These partnerships are very important in the external sectors. In the government sector, there are partnerships to exchange knowledge and adopt the software that is being developed, so that there is a complete user experience for certain softwares, and development of these softwares, and a sustainable system for development, and then reuse and replace what is already in the government system. This is about the partnerships with universities. Finally, I would like to invite my colleagues, some of them may be from companies, we are representatives of universities, there are three or four universities here, actually, there are five, I’m sorry, Dr. Ali. 
So, you can start with a social partnership in developing open-source softwares, or you can build this partnership through workshops, through awareness-raising, through opening academies, or you can adopt what is already in the software universities, and develop it, as the colleagues mentioned, as an excellent experience they have in Al-Mak Khaled University and Al-Iman University. Thank you, doctor. Thank you, doctor. We can pass the floor to Dr. Abdul Latif from Al-Qasim University to talk about the most significant experience that the university has achieved in adopting open-source softwares as a support for its existing projects. Thank you, doctor. We can start with the suffering. One of the reports issued by the Digital Government Board in 2021, I think, was expected to increase the percentage of government spending on commercial softwares by 24% by the end of 2025. This is equivalent to more than 16 billion Riyals. There is no doubt that everyone, especially those in universities, is suffering from the issue of commercial softwares. This is a problem for everyone. The university, thank God, started in 2020 in adopting the idea of developing open-source softwares inside the university. Services that are directly offered to the employees, whether they are students, teaching staff, or employees. Thank God, by the end of this year, by the end of this year, we will have achieved approximately 40% of the services offered inside the university. All of them are built within the open-source software field in all forms of services, both for students and employees. For this, I think, thank God, we have achieved success. Achievements like this need an initiative from the top of the pyramid. Always like this, adopting open-source software is not easy. It requires resources, capabilities, and training. Therefore, we believe that this is the most important story of success. We still have a vision, God willing, that we will be independent for a long time in the issue of developing products based on open-source software. We have participated with the university three weeks ago and with colleagues in the government sectors in one of the systems, which is monitoring the technical infrastructure. Thank God, God willing, it will have a positive impact. On the contrary, we are open to cooperation. This is the goal of our presence today in this meeting. I also believe that one of the advantages we have in the university is the issue of benefiting from the resources of the students. A large part of the graduation projects are aimed at developing our internal services. Students, after graduation, we always adopt them from six months to a year until they get a better opportunity outside the university to develop our internal software and services and directly benefit from them. Thank God, we are proud of this development in our university and we are always honored to exchange experiences with everyone. Thank you, doctor. Thank you, doctor. We would like to conclude with a few words for each of our colleagues in the way they would like, whether it is a thank you speech or an idea for the future of open-source software. Thank you, doctor. I hope that all of our colleagues and I hope that there will be a roadmap for open-source software. Universities have not yet noticed the national direction in software like this. We want a clear and sustainable roadmap. Thank God, this country is proud of its digital capabilities that will make it one of the world’s best warehouses. 
God willing, we will soon be a world-class warehouse. God willing. Thank you, doctor. I would like to conclude by thanking you, doctor, for the professional management of this session and my cooperation with my colleagues. I would like to thank my colleagues in the Digital Government for their kindness and their generous invitation. God willing, open-source software will be available in the international region of Saudi Arabia. God willing, there will be leading international products that are technologically advanced and nationally advanced. God willing, open-source software. Thank you, doctor. Thank you, doctor. Thank you for the invitation to the Institute. We hope that the Institute will be a leader in all the software that is available, with its classification, with the parties we communicate with, so that it is easier for the parties in the government and private sectors to communicate with the parties, so that there is a certain agreement to share and use this software. Thank you, doctor. Thank you, doctor. Thank you, Mr. Khaled. Also, thank you to the Institute. Thank you, DIGF, for the invitation. Thank you, doctor. I would like to conclude with my pride and honor for the presence of almost 4,000 products in the open-source software store in Mosul, from different parties at the international level. There is no doubt that this is a very big move, and we look forward, God willing, to helping to speed up development in this area. Thank you, Mr. Khaled. Thank you, doctor, for managing the workshop. Thank you, Mosul, and all the colleagues in the session. Thank you, Dr. Abdel Latif. Finally, as universities, we congratulate the Digital Government Board, represented by my colleague, Dr. Ahmed Al-Swayyan, for the success of the Digital Government Board, which we saw today, an effort that can’t be done in a day or in an hour. This is an effort for all governments, and a great national representation, and a special national presence. We also thank the members of Al-Sa’ada, the leaders in Saudi universities, King Khaled University, Imam Hamad bin Saud University, Al-Jawf University, and Al-Qasim University.


Dr. Muneerah Badr Almahasheer: I am Dr. Muneerah Al-Mahasheer, Electronic Education Dean at Imam Abdul Rahman bin Faisal University. I would like to thank His Excellency Engineer Khaled Al-Ghamdi for working with us. I have only been involved at Imam Abdul Rahman bin Faisal University for a short time, but I know that this work has been under way for much longer.


Dr. Muneerah Badr Almahasheer: We look forward to improving the digital warehouse and, as Dr. Hamed mentioned, turning it into a global warehouse. We also look forward to the classification of the warehouse and the curation and filtering of the Saudi government's open-source software, of which we are proud. Tomorrow, we will be announcing that these products will soon be global. Thank you very much.


D

Dr.Hamed Saleh Alqahtani

Speech speed

130 words per minute

Speech length

244 words

Speech time

112 seconds

Developing comprehensive digital transformation strategies

Explanation

Dr. Hamed emphasizes the importance of universities developing comprehensive digital transformation strategies that incorporate open-source software. He highlights how King Khaled University has made open-source programming a fundamental component of their strategy to support their strategic goals.


Evidence

King Khaled University’s strategy includes improving educational output, achieving institutional excellence, promoting research and innovation, and improving quality of life.


Major Discussion Point

Open-source software adoption strategies in Saudi universities


Agreed with

Dr.Sultan Alqahtani


AUDIENCE


Agreed on

Importance of developing comprehensive digital transformation strategies


Creating open-source programming factories and communities within universities

Explanation

Dr. Hamed discusses the establishment of open-source programming factories and communities within King Khaled University. These initiatives focus on areas such as big data, artificial intelligence, cloud computing, and institutional systems.


Evidence

The university created a factory for open-source programming and a community of open-source programmers, choosing the best faculty to lead this initiative.


Major Discussion Point

Successful implementations and partnerships for open-source software


Agreed with

Dr.Saleh Albahli


AUDIENCE


Agreed on

Supporting and nurturing talent in open-source software development


Developing a clear national roadmap for open-source software adoption

Explanation

Dr. Hamed calls for a clear and sustainable roadmap for open-source software adoption in Saudi universities. He emphasizes the need for a national direction in this area to guide universities’ efforts.


Major Discussion Point

Future outlook for open-source software in Saudi Arabia


Differed with

Dr.Sultan Alqahtani


Differed on

Approach to monetization of open-source software


D

Dr.Sultan Alqahtani

Speech speed

139 words per minute

Speech length

710 words

Speech time

305 seconds

Integrating open-source software into educational programs and curricula

Explanation

Dr. Sultan discusses the integration of open-source software into educational programs and curricula at Imam Muhammad Bin Saud University. He emphasizes the importance of expanding programming regulations to include sections for open-source programming.


Evidence

The university has implemented decisions to adopt open-source programs in various faculties and reviewed plans and curricula in technical educational materials.


Major Discussion Point

Open-source software adoption strategies in Saudi universities


Agreed with

Dr.Hamed Saleh Alqahtani


AUDIENCE


Agreed on

Importance of developing comprehensive digital transformation strategies


Aiming to become a world-class open-source software repository

Explanation

Dr. Sultan expresses the ambition for Saudi Arabia to become a world-class repository for open-source software. He emphasizes the country’s digital capabilities and potential to achieve this goal.


Major Discussion Point

Future outlook for open-source software in Saudi Arabia


Differed with

Dr.Hamed Saleh Alqahtani


Differed on

Approach to monetization of open-source software



Dr.Saleh Albahli

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Recruiting and supporting young talent in open-source development

Explanation

Dr. Saleh discusses Al-Jouf University’s focus on increasing its ability to recruit and support young talents in open-source software development. He emphasizes the importance of attracting and nurturing talent from within and outside the Kingdom.


Evidence

The university has implemented programs to introduce students to open-source software before enrolling, offered scholarships for international students in software engineering, and organized hackathons to identify and sponsor talented individuals.


Major Discussion Point

Open-source software adoption strategies in Saudi universities


Agreed with

Dr.Hamed Saleh Alqahtani


AUDIENCE


Agreed on

Supporting and nurturing talent in open-source software development


Collaborating with industry sectors to build sustainable open-source ecosystems

Explanation

Dr. Saleh highlights the importance of partnerships between universities and industry sectors to build sustainable open-source ecosystems. He discusses various initiatives and collaborations aimed at promoting open-source software development.


Evidence

The university has partnerships with technical companies to build specialized e-stores and develop open-source software. They also collaborate with government sectors to exchange knowledge and adopt developed software.


Major Discussion Point

Successful implementations and partnerships for open-source software



AUDIENCE

Speech speed

143 words per minute

Speech length

4573 words

Speech time

1908 seconds

Promoting change towards open-source systems through education and partnerships

Explanation

The speaker discusses how Al-Qasim University is promoting change towards open-source systems through education, research, innovation, and strategic partnerships. They emphasize the importance of these elements in advancing open-source programming adoption.


Evidence

Al-Qasim University has implemented resolutions to promote open-source programming in various departments and is launching initiatives through the Faculty of Science and Research to target students and teachers.


Major Discussion Point

Open-source software adoption strategies in Saudi universities


Agreed with

Dr.Hamed Saleh Alqahtani


Dr.Sultan Alqahtani


Agreed on

Importance of developing comprehensive digital transformation strategies


Adopting open-source software to reduce reliance on commercial software

Explanation

The speaker highlights the financial burden of commercial software on universities and discusses Al-Qasim University’s efforts to adopt open-source software. They aim to reduce reliance on commercial software and develop internal services using open-source solutions.


Evidence

By the end of the year, Al-Qasim University expects approximately 40% of the services offered within the university to be built on open-source software.


Major Discussion Point

Successful implementations and partnerships for open-source software


Agreed with

Dr.Hamed Saleh Alqahtani


Dr.Saleh Albahli


Agreed on

Supporting and nurturing talent in open-source software development



muhammed bin saud

Speech speed

137 words per minute

Speech length

653 words

Speech time

284 seconds

Developing open-source solutions to support academic and research processes

Explanation

The speaker discusses how the University of Imam Muhammad Bin Saud has used open-source programming to develop solutions supporting academic and research processes. They emphasize the importance of sharing and collaboration in the open-source community.


Evidence

The university has contributed several products to the digital warehouse, including systems for employment, access, field training, and communication. These systems have been downloaded over 7 million times and shared through 657 files and programs.


Major Discussion Point

Successful implementations and partnerships for open-source software



Dr. Muneerah Badr Almahasheer

Speech speed

139 words per minute

Speech length

1369 words

Speech time

589 seconds

Improving classification and filtering of Saudi government open-source software

Explanation

Dr. Muneerah emphasizes the need to improve the classification and filtering of the Saudi government’s free software in the digital warehouse. She expresses pride in these products and anticipates their global recognition.


Major Discussion Point

Future outlook for open-source software in Saudi Arabia



Ammira Al-mahashir

Speech speed

186 words per minute

Speech length

62 words

Speech time

20 seconds

Expanding the digital warehouse to include international open-source products

Explanation

Ammira expresses pride in the presence of almost 4,000 products in the open-source software store in Mosul from different parties at the international level. She looks forward to further development and expansion in this area.


Evidence

The open-source software store in Mosul contains almost 4,000 products from different parties at the international level.


Major Discussion Point

Future outlook for open-source software in Saudi Arabia


Agreements

Agreement Points

Importance of developing comprehensive digital transformation strategies

speakers

Dr.Hamed Saleh Alqahtani


Dr.Sultan Alqahtani


AUDIENCE


arguments

Developing comprehensive digital transformation strategies


Integrating open-source software into educational programs and curricula


Promoting change towards open-source systems through education and partnerships


summary

The speakers agree on the need for universities to develop comprehensive strategies that incorporate open-source software into their digital transformation efforts, educational programs, and partnerships.


Supporting and nurturing talent in open-source software development

speakers

Dr.Hamed Saleh Alqahtani


Dr.Saleh Albahli


AUDIENCE


arguments

Creating open-source programming factories and communities within universities


Recruiting and supporting young talent in open-source development


Adopting open-source software to reduce reliance on commercial software


summary

The speakers emphasize the importance of creating environments within universities to support and nurture talent in open-source software development, including establishing programming factories, communities, and initiatives to attract young talent.


Similar Viewpoints

Both speakers express a vision for Saudi Arabia to become a leader in open-source software, emphasizing the need for a clear national direction and the potential to achieve world-class status in this field.

speakers

Dr.Hamed Saleh Alqahtani


Dr.Sultan Alqahtani


arguments

Developing a clear national roadmap for open-source software adoption


Aiming to become a world-class open-source software repository


These speakers highlight the importance of collaboration between universities, industry sectors, and government agencies to develop and implement open-source solutions that support academic and research processes.

speakers

Dr.Saleh Albahli


muhammed bin saud


arguments

Collaborating with industry sectors to build sustainable open-source ecosystems


Developing open-source solutions to support academic and research processes


Unexpected Consensus

Economic potential of open-source software

speakers

Dr.Hamed Saleh Alqahtani


AUDIENCE


arguments

Developing a clear national roadmap for open-source software adoption


Adopting open-source software to reduce reliance on commercial software


explanation

While the primary focus was on educational and developmental aspects, there was an unexpected consensus on the economic potential of open-source software, both in terms of reducing costs for universities and potentially creating new economic opportunities.


Overall Assessment

Summary

The main areas of agreement include the importance of developing comprehensive digital transformation strategies, supporting talent in open-source software development, collaborating with industry and government sectors, and recognizing the potential of open-source software to reduce costs and create new opportunities.


Consensus level

There is a high level of consensus among the speakers on the importance and potential of open-source software in Saudi universities. This strong agreement implies a unified vision for the future of open-source software in Saudi Arabia’s higher education system, which could lead to more coordinated efforts in implementation and development across universities.


Differences

Different Viewpoints

Approach to monetization of open-source software

speakers

Dr.Hamed Saleh Alqahtani


Dr.Sultan Alqahtani


arguments

Developing a clear national roadmap for open-source software adoption


Aiming to become a world-class open-source software repository


summary

While Dr. Hamed emphasizes the need for a clear national roadmap for open-source software adoption, Dr. Sultan focuses more on the ambition to become a world-class repository for open-source software. This suggests a difference in approach, with one focusing on national strategy and the other on international positioning.


Unexpected Differences

Monetization of open-source software

speakers

Dr.Hamed Saleh Alqahtani


muhammed bin saud


arguments

Developing a clear national roadmap for open-source software adoption


Developing open-source solutions to support academic and research processes


explanation

While most speakers focused on the adoption and development of open-source software, there was an unexpected discussion about the potential monetization of these solutions. This difference in perspective on the commercial aspects of open-source software was not initially anticipated in the context of university adoption strategies.


Overall Assessment

summary

The main areas of disagreement revolve around the specific strategies for implementing open-source software in universities, the approach to talent development, and the potential for monetization of open-source solutions.


difference_level

The level of disagreement among the speakers is relatively low. Most speakers agree on the importance of adopting open-source software in universities but differ in their specific approaches and priorities. These differences are not fundamental and do not significantly impede the overall goal of promoting open-source software adoption in Saudi universities. The implications of these differences suggest a need for a more coordinated national strategy that can accommodate various approaches while maintaining a unified direction.


Partial Agreements

All speakers agree on the importance of integrating open-source software into university strategies and curricula. However, they differ in their specific approaches: Dr. Hamed focuses on comprehensive digital transformation strategies, Dr. Sultan emphasizes curriculum integration, Dr. Saleh prioritizes talent recruitment, and the AUDIENCE speaker highlights education and partnerships.

speakers

Dr.Hamed Saleh Alqahtani


Dr.Sultan Alqahtani


Dr.Saleh Albahli


AUDIENCE


arguments

Developing comprehensive digital transformation strategies


Integrating open-source software into educational programs and curricula


Recruiting and supporting young talent in open-source development


Promoting change towards open-source systems through education and partnerships



Takeaways

Key Takeaways

Saudi universities are developing comprehensive strategies to adopt open-source software in education, research, and administrative processes


Universities are creating open-source programming factories, communities, and partnerships to support development and innovation


There is a focus on integrating open-source software into curricula and supporting student talent in this area


Universities are developing open-source solutions to reduce reliance on commercial software and improve efficiency


There is a vision to make Saudi Arabia a leader in open-source software development and create a world-class repository


Resolutions and Action Items

Develop a clear national roadmap for open-source software adoption in Saudi universities


Improve classification and filtering of Saudi government open-source software in the digital warehouse


Expand partnerships between universities and industry to build sustainable open-source ecosystems


Increase the percentage of university services built on open-source software


Unresolved Issues

How to address security challenges in open-source software adoption


Determining the appropriate licensing model for government open-source software


Balancing free sharing of open-source code with potential for commercialization or sustainability


Suggested Compromises

Allowing symbolic fees for hosting or maintenance to sustain open-source ecosystems while keeping core software free


Sharing open-source code freely but potentially charging for implementation services or specialized versions


Thought Provoking Comments

Today, we are at the University of King Khaled and through the University’s Digital Transformation Strategy, one of the goals of the University’s Digital Transformation Strategy is to promote research and innovation. Through this element, we have launched a number of initiatives for the development of open-source software and initiatives that have been launched in cooperation with the Innovation and Business Leadership Center, including the initiative to support small projects and graduate projects.

speaker

Dr. Sultan Alqahtani


reason

This comment introduces the idea of universities not just adopting open-source software, but actively promoting its development through initiatives and partnerships. It shows a proactive approach to digital transformation.


impact

This shifted the discussion towards concrete examples of how universities are implementing open-source strategies, leading other participants to share their own initiatives.


Today, the methodology we follow in the Imam University, in this field, is still on the way, and the implementation is now about a year and five months, and God willing, the process will develop, and there will be a complete adoption of open source software, whether on the level of scientific research, or even on the level of development research in the field of electronic information technology.

speaker

Dr. Sultan Alqahtani


reason

This comment highlights the ongoing nature of open-source adoption and suggests that it’s a process that extends beyond just implementation to include research and development.


impact

It prompted other participants to discuss their own timelines and methodologies for adopting open-source software, broadening the conversation to include long-term strategies.


We made sure that there is a fundamental component in this strategy, which is the adoption of open-source programming. Because we all know that we all have one goal, which is to have in this precious kingdom, a programmer among a thousand residents.

speaker

Dr. Hamed Saleh Alqahtani


reason

This comment ties the adoption of open-source programming to a broader national goal, showing how university strategies align with national objectives.


impact

It elevated the discussion from university-specific strategies to the role of universities in achieving national technology goals.


Today, we are with the colleagues in the Faculty of Applications, we are working hard to have diplomas for remote learning programs in such programs, or the employment of programs in many diplomas.

speaker

Dr. Hamed Saleh Alqahtani


reason

This comment introduces the idea of integrating open-source programming into formal education programs, showing a commitment to long-term skill development.


impact

It led to further discussion about how universities are adapting their curricula and programs to support open-source adoption.


On the contrary, the open-source programming opens wide and deep horizons in the topic of participation between the parties. Participation is not only the participation of the source code, and then it is over. Participation may be the participation of the knowledge, the participation of relations, the direct contact with government agencies and development teams.

speaker

Dr. Sultan Alqahtani


reason

This comment challenges the traditional view of open-source collaboration, expanding it beyond just code sharing to include knowledge and relationship building.


impact

It broadened the discussion about the benefits of open-source adoption, leading to more comprehensive considerations of its impact.


Overall Assessment

These key comments shaped the discussion by moving it from general statements about open-source adoption to specific strategies and initiatives being implemented by universities. They highlighted the multifaceted nature of open-source adoption, including its role in education, research, national goals, and inter-institutional collaboration. The discussion evolved from simply describing open-source initiatives to exploring their broader implications for university strategies, national technology goals, and the future of education and innovation in Saudi Arabia.


Follow-up Questions

How can universities develop a comprehensive strategy for digital transformation and adoption of open-source free software?

speaker

Dr. Muneerah Badr Almahasheer


explanation

This question is important as it addresses the need for universities to have a strategic approach to implementing open-source software and digital transformation.


How can universities establish and support an open-source technology platform to enhance small and medium-sized projects?

speaker

Dr. Muneerah Badr Almahasheer


explanation

This area of research is crucial for understanding how universities can contribute to the growth of small and medium enterprises through open-source technology.


What specific goals does Al-Jouf University have for increasing its ability to recruit and support young talents in the open-source software system?

speaker

Dr. Muneerah Badr Almahasheer


explanation

This question is important for understanding concrete strategies universities are employing to attract and nurture talent in the open-source field.


How can universities promote change towards a blue ocean system through open-source programming?

speaker

Dr. Muneerah Badr Almahasheer


explanation

This area of research is significant for exploring how universities can lead innovation in open-source programming and create new market spaces.


What are the main successful experiments that led universities to develop open-source programming to support academic and administrative processes?

speaker

Dr. Muneerah Badr Almahasheer


explanation

This question is important for identifying best practices and successful implementations of open-source programming in university settings.


How can universities use open-source programming to develop solutions that support academic and research processes?

speaker

Dr. Muneerah Badr Almahasheer


explanation

This area of research is crucial for understanding the practical applications of open-source programming in enhancing university operations and research capabilities.


How can partnerships between universities and industrial sectors build a sustainable system for open-source software?

speaker

Dr. Muneerah Badr Almahasheer


explanation

This question is important for exploring collaborative models between academia and industry to ensure the long-term viability of open-source software initiatives.


What is the roadmap for open-source software adoption in universities?

speaker

Dr. Hamed Saleh Alqahtani


explanation

This area of research is crucial for providing clear guidance and direction for universities in implementing open-source software strategies.


How can the classification and filtering of Saudi government’s free software be improved in the digital warehouse?

speaker

Dr. Muneerah Badr Almahasheer


explanation

This question is important for enhancing the usability and accessibility of open-source software resources for government and educational institutions.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Day 0 Event #191 High-Level Launch: Advancing Internet Universality 2.0

Day 0 Event #191 High-Level Launch: Advancing Internet Universality 2.0

Session at a Glance

Summary

This discussion focused on UNESCO’s revised Internet Universality Indicators (IUIs) and their role in shaping global digital governance. The panel, comprising experts from various sectors, explored how the updated IUIs can contribute to evidence-based policymaking and address digital inequalities. Key points included the importance of multi-stakeholder collaboration in implementing the indicators and the need for adaptability to different national contexts.

Panelists emphasized that the IUIs are not meant for ranking countries but rather as a tool for self-assessment and improvement. The revised framework aims to be more streamlined and user-friendly, with fewer questions and indicators. It also incorporates new elements such as environmental sustainability and artificial intelligence considerations.

Challenges in implementing the IUIs were discussed, including data availability issues and the need for meaningful multi-stakeholder engagement. The importance of addressing the digital divide, particularly in developing countries and small island nations, was highlighted. Panelists also stressed the need for the indicators to evolve with technological advancements and emerging governance challenges.

The discussion touched on the role of the private sector in internet governance and the need for accountability in digital development. The potential of the IUIs to uncover issues related to Universal Service and Access Funds was also mentioned. Overall, participants agreed on the value of the IUIs in fostering inclusive digital transformation and informing national and regional internet governance frameworks.

In conclusion, the panel emphasized the transformative potential of the IUIs in shaping an inclusive, rights-respecting, and sustainable digital future. The importance of continued collaboration and adaptation of the framework to address evolving digital challenges was underscored.

Keypoints

Major discussion points:

– The revised UNESCO Internet Universality Indicators (IUIs) framework and its importance for assessing national digital environments

– Challenges in implementing the IUIs, especially for developing countries and small island nations

– The need for multi-stakeholder collaboration and evidence-based policymaking in internet governance

– The role of the private sector and governments in advancing internet access and digital transformation

– Emerging issues like AI that need to be considered in future iterations of the framework

The overall purpose of the discussion was to launch and promote UNESCO’s revised Internet Universality Indicators framework, explaining its importance as a tool for countries to assess their digital environments and develop evidence-based policies for inclusive and sustainable digital transformation.

The tone of the discussion was largely positive and congratulatory, with panelists praising UNESCO’s work on the revised framework. There was a sense of collaboration and shared purpose among the diverse group of stakeholders represented. The tone became more reflective and forward-looking towards the end as participants considered future challenges and opportunities for implementing the IUIs.

Speakers

– Camila Gonzalez: Facilitator/moderator of the session

– David Souter: Managing Director of ICT for Development Associates in the UK, lead researcher and author of the Internet Universality Indicators

– Vinton Cerf: Vice President and Chief Internet Evangelist at Google, member of the IGF Leadership Panel

– Anriet Esterhuysen: From South Africa, facilitator of the session, works with the Association for Progressive Communications

– Tawfik Jelassi: Assistant Director General for Communications and Information at UNESCO

– Alexandre Barbosa: Head of the Center of Studies for Information and Communication Technologies in Brazil (CETIC.br)

– Jennifer Bachus: Principal Deputy Assistant Secretary from the Bureau of Cyberspace and Digital Policy for the US

– Tenanoia Veronica Simona: Chief Executive Officer of Tuvalu Telecommunications Corporation

– Alaa Abdulaal: Chief Digital Economy Foresight at the Digital Cooperation Organization, based in Saudi Arabia

Additional speakers:

– Jose Fissa: Coordinator of chat with IGF, attending as a reporter

– Aziz Hilali: Professor and former co-chair of ISOC Morocco

– Avice: From Cameroon, representing civil society

Full session report

UNESCO’s Revised Internet Universality Indicators: A Framework for Global Digital Governance

The discussion focused on UNESCO’s revised Internet Universality Indicators (IUIs) and their role in shaping global digital governance. A diverse panel of experts explored how the updated IUIs can contribute to evidence-based policymaking and address digital inequalities worldwide.

Key Features of the Revised IUIs

Tawfik Jelassi, UNESCO’s Assistant Director General for Communications and Information, highlighted that the revised IUIs are more streamlined, accessible, and future-ready. The framework has been updated to address new challenges, including environmental risks and artificial intelligence (AI) considerations. The number of indicators has been reduced from 303 to 125, and questions from 109 to 21, making the framework more user-friendly.

Jelassi introduced the ROAMx framework, which forms the core of the IUIs:

– R: Rights-based approach

– O: Openness

– A: Accessibility

– M: Multi-stakeholder participation

– x: Cross-cutting indicators (including gender equality, children’s rights, sustainable development, and trust and security)

David Souter, the lead researcher and author of the IUIs, emphasized that the revised framework aims to be more practical and easier to implement. Importantly, Jelassi clarified that the IUIs are not intended for ranking countries but rather as a tool for self-assessment and improvement.

Multi-stakeholder Collaboration and Implementation

A recurring theme was the critical importance of multi-stakeholder collaboration in implementing the IUIs and fostering sustainable internet governance. Alexandre Barbosa, Head of CETIC.br in Brazil, shared Brazil’s experience with multi-stakeholder governance, highlighting its success in areas such as domain name management and cybersecurity incident response.

Jennifer Bachus, from the US Bureau of Cyberspace and Digital Policy, emphasized the need for meaningful multi-stakeholder participation, cautioning against tokenistic involvement and stressing the importance of diverse perspectives. Vinton Cerf, Vice President at Google, highlighted the vital role of the private sector in internet implementation and advocated for partnerships with governments.

Challenges in Implementing IUIs

The panel acknowledged several challenges in implementing the IUIs, particularly for developing countries and small island nations:

1. Infrastructure and Capacity Building: Tenanoia Veronica Simona, CEO of Tuvalu Telecommunications Corporation, highlighted the unique challenges faced by small island nations, including high costs of undersea cables, limited infrastructure, and vulnerability to natural disasters. She emphasized the need for affordable satellite solutions and international support.

2. Data Availability: Alexandre Barbosa noted that data availability and quality remain key challenges in many countries, potentially hindering comprehensive assessments.

3. Digital Divide: Aziz Hilali, representing ISOC Morocco, emphasized that the digital divide remains a significant issue, especially in Africa and Arab regions. Tawfik Jelassi later expanded on the multiple dimensions of the digital divide, including access, skills, and content.

4. Affordability: Tenanoia Veronica Simona stressed that the affordability of internet access is a major concern in small island nations.

5. Universal Service and Access Funds: An audience member raised the issue of ineffective Universal Service and Access Funds in many countries. David Souter acknowledged this challenge and suggested that the IUIs could help assess the effectiveness of such funds.

Practical Applications and Impacts of IUIs

Speakers shared examples of how the IUIs have been applied:

– Argentina used the indicators to inform its national digital agenda.

– Senegal incorporated the IUIs into its national digital strategy.

– Brazil leveraged the framework to enhance its multi-stakeholder governance model.

Alaa Abdulaal from the Digital Cooperation Organization highlighted the potential of the IUIs to foster digital cooperation among member states and promote inclusive digital transformation.

Future of IUIs and Global Digital Governance

The discussion touched on the future evolution of the IUIs and their role in global digital governance:

1. Emerging Technologies: Vinton Cerf suggested that the IUIs can help address emerging challenges like AI governance. He mentioned initiatives like the Measurement Lab and Broadband Coalition as relevant to the IUIs’ goals.

2. Balancing Approaches: Jennifer Bachus emphasized the need to strengthen both multilateral and multi-stakeholder efforts in digital governance.

3. Regional Cooperation: Alexandre Barbosa noted that the IUIs could facilitate regional and international cooperation on digital issues, particularly in light of the upcoming WSIS+20 review.

4. Accountability: David Souter raised thought-provoking questions about assessing accountability in complex AI systems, highlighting a significant challenge for future revisions of the IUIs and society as a whole.

Conclusion and Next Steps

The discussion concluded with a strong emphasis on the transformative potential of the IUIs in shaping an inclusive, rights-respecting, and sustainable digital future. Key takeaways included:

1. Encouragement for countries to conduct national digital assessments using the revised IUIs.

2. The need for stakeholders to work on improving data availability and quality for effective implementation.

3. A plan for UNESCO to showcase early results from the implementation of revised IUIs at the next Internet Governance Forum in Norway (May/June 2025).

Tawfik Jelassi closed by reiterating UNESCO’s commitment to addressing all dimensions of the digital divide and ensuring that no one is left behind in the digital age.

It’s worth noting that the session experienced some technical difficulties, which occasionally affected the flow of discussion but did not significantly impede the overall exchange of ideas and information.

Session Transcript

DAVID SOUTER: I’m not sure exactly who’s going to kick off the meeting on the site. You’ll find your seat. I’m just trying to get water. You can speak. I’ll follow the order. Yes, we ran about 15 minutes late in the IGFSA meeting, but mostly because of technical problems. So, let me switch it off. I’ve not been following anything this morning. How’s it been going? The opening is tomorrow, isn’t it? The formal opening?

VINTON CERF: That’s right. But there are quite a few meetings today. Of course, here I’m in Washington, D.C., where it’s 06.48 and still dark. Indeed.

DAVID SOUTER: I’m in London, where it’s just coming up to noon. This is only the second of these I’ve missed, actually. But I’m doing the 20-year review for CSTD at the moment. I’m drafting that. Wow. It doesn’t make sense to me to be at both. Anyway, I think I can’t have two U.N. contracts simultaneously.

VINTON CERF: Yes, that’s a challenge. We haven’t figured out how to clone people. However, there’s a science fiction book, I’ll put it in the chat, that actually speaks to that. So, if you happen to want to read about people cloning, that’s an interesting book. I’m guessing we’re… Is our conversation live in the room, or not? That’s a good question, and I don’t know the answer to that. Be discreet.

DAVID SOUTER: Yes. There are people coming into the room, I see.

VINTON CERF: I’m not sure what the mix is of people in person and people online for this meeting. It was about… In the last IGF in Japan, it was… 6,000 in-person and 3,000 remote, I think.

DAVID SOUTER: Yeah, but I mean, I think the problem with those figures is always the extent to which you have people who only come for the opening session.

VINTON CERF: Yeah, correct. Oh, well, that’s a good point.

DAVID SOUTER: We need local people who want to clear for them. I wonder if that’s a typo on the screen where it says height-level launch instead of high-level launch. I think it is. UW is not a reference to the international, but to the world, American radical group from the, what, 1930s that Woody Guthrie was associated with?

VINTON CERF: Oh, yeah, the Wobblies. Indeed. Do you remember the Wobblies? That was all about. Well, that’s weird. So this is an interesting development. If I remember right, the first time UNESCO showed up visibly in an IGF was in Kyoto last year. They’ve always been there.

DAVID SOUTER: I’m thinking back to when I drafted the first IUIs years ago. I think we probably presented them then as well. And Guy Berger used to be there. I used to see Guy Berger at IGF meetings in the past when he was running this part of UNESCO.

VINTON CERF: OK, maybe it wasn’t just as visible. For some reason, it became quite visible. In Kyoto, I think, partly because of the debate about, yeah, there were questions about multi-stakeholder versus multi-lateral and things like that. Okay, they’re asking us to mute. Okay. There’s another one. Oh, there’s another one. Okay. I think good afternoon.

ANRIET ESTERHUISEN: You can hear me. Thanks very much to our tech team, and welcome everyone to this session. We’re about to start. Apologies that we are a little bit late. So my name is Anriet Esterhuisen. I am from South Africa. I am very proud to be facilitating this high-level session, which is also unveiling the UNESCO’s Advanced Internet Universality Indicators. I work with the Association for Progressive Communications as a consultant, and sometimes with other organizations as well. So before we start on this momentous occasion, there’s a lot of work that’s gone into this process, I just wanted to introduce you to our high-level panel. And we’re very honored here to have, and he’ll be our opening speaker, Mr. Taufik Jelassie, who’s the Assistant Director General for Communications and Information at UNESCO. And as I think many of you would know, UNESCO has really been one of the lead UN agencies in the World Summit on Information Society, but also in participating in shaping the IGF. After Mr. Jelassie, we’ll have Mr. Alexander Barbosa, who’s the head of the Center of Studies for Information and Communication Technologies in Brazil, CETIC.br, a very important role that they have played in the revision of the UNESCO Indicator. indicators. We’ll then have online, and if I can just get confirmation, is David online? I’m very happy to welcome Dr. David Souter, who’s the Managing Director of ICT for Development Associates in the UK. And David has been the lead researcher and author in the first version of the Internet Universality Indicators, and now also with the revision. Next we’ll have Ms. Jennifer Bachers, who has just arrived. Welcome, Jennifer. Principal Deputy Assistant Secretary from the Bureau of Cyberspace and Digital Policy for the US. Online joining us as well will be Vint Cerf, Vice President and Chief Internet Evangelist at Google. Vint, another lead person who’s been in this space, and a member of the IGF Leadership Panel, in fact the chair of the IGF Leadership Panel. After Vint, we’ll have Ms. Tenanoya Veronica Simona over there, who is the Chief Executive Officer of Tuvalu Telecommunications Corporation. And then, last but not least, next to me, Alaa, Chief Digital, Alaa Abdullalal, and Chief of Digital Economy Foresight at the Digital Cooperation Organization, based in Saudi Arabia, but the Digital Cooperation Organization, I think you work in 16 different countries around the world. So, welcome to our panel. And now, to get us started and to add some welcoming remarks to this high-level session, on behalf of UNESCO, I’m going to give the floor to Mr. Jelassi. And I think, Mr. Jelassi, if you can also tell us, after you’ve made the welcoming remarks, why is UNESCO doing this work on the IUIs?

Tawfik Jelassi: Thank you very much, Henriette. Distinguished panelists, esteemed participants, colleagues, and France. Good afternoon to all of you. Can you hear me? Apparently, you can. Excellent. Sorry for being a few minutes late. I just arrived from Paris. So, this is fresh from the oven, whatever I’m going to say. I’m very pleased to welcome you to this session, which is very important for us, since we are going to unveil the revised Internet Universality Indicators of UNESCO. And we’ll tell you more about it, including answering your question, Henriette, why we embarked on this effort a couple of years ago. For UNESCO, this initiative reaffirms our vision of a digital future which is anchored in human rights, in openness, in accessibility, and in ensuring a multi-stakeholder participation. This milestone would not have been possible without the invaluable contributions of many individuals and organizations, which I would like now to recognize. First, the Brazilian Network Information Center, NIC.br, and its Regional Center for Studies on the Development of the Information Society, that is CETIC.br. Their expertise has been very valuable for us at UNESCO. And I would like to acknowledge here Mr. Alexander Barbosa, who is in charge, who is the head of CETIC. He’s seated on my left. And also Mr. Fabio Senni, who is in the audience, and who was seconded to us at UNESCO for quite a long period of time to specifically work on the revised Internet Universality Indicators. I would like also to acknowledge the contribution of David Souter, who was the architect of the initial version of the indicators and is now, is it working? Okay. And also was very much involved in the revised framework. I would like also to acknowledge the contribution of the IUI Steering Committee and Dynamic Coalition, which both provided us with valuable guidance to ensure that the revised framework of the indicators addresses today’s challenges. I would like also to thank our host country, the Kingdom of Saudi Arabia, for making this session possible and for their hospitality at this year’s IGF. And finally I would like to acknowledge the IGF Secretariat for its enduring partnership, which has been a cornerstone of UNESCO initiatives, including the IUI framework and its revision. Ladies and gentlemen, as we know, the Internet has democratized access to information, has involved people from all over the world, and for us it’s a way of implementing the principle of leaving no one behind. However, as we know, in spite of the democratization of access to information, the Internet created some disparities, not only between countries, but within countries as well, and has introduced new challenges, among which, of course, the digital divide, but not only. As the UN Secretary General, Mr. Antonio Guterres, reminded us He said, the future of digital must be human-centered. We all share this statement, this principle, and as we’ll see in a few minutes, the revised IUI indicators wholeheartedly embrace this principle. I mentioned the ROAMx framework which encapsulates this vision and the pillars of the ROAMx for those who may not be very familiar with it. The R in ROAM stands for Human Rights Based Approach and this includes, of course, freedom of expression, data privacy, dignity, gender equality. The O of the framework stands for openness, ensuring that information flows freely, without barriers and without silos. The A stands for Accessibility, as I said, very much to ensure an equitable internet access. 
And the M stands for Multi-Stakeholder Participation in order to foster transparent and inclusive decision-making. I said it’s ROAMx, the X stands for cross-cutting issues such as gender equality, trust, security, sustainable development and emerging technologies such as artificial intelligence. You may know that to date, over 40 countries worldwide have used the UNESCO Internet Universality Indicators to conduct national assessments and the latest, I would say, Argentina, Senegal. And I would like to mention here the impact of these national digital assessments using our framework and the Romex indicators. In Argentina, as an example, the findings from the assessment have empowered the IUI research team to draft a proposed law aiming at addressing critical gaps in the country’s data protection framework. So again, this is one of the tangible outcomes of the IUI-based national digital assessment. In Senegal, the assessment facilitated the implementation of the country’s 2025 digital strategy and its high-speed national plan. Now to answer Amria’s question, why did we revise the framework, obviously we wanted to make the framework relevant, to make it adaptive, future-ready, we wanted to integrate key insights and lessons that we have learned from the 40 implementations around the world. This is very important for us, but we also wanted the revised indicators to enhance accessibility and ease of use to accelerate stakeholder adoption and implementation of the Romex. Let me just conclude here by saying that the revised framework also is aligned with the Global Digital Compact and the Pact of the Future, which were adopted, as you know, last summer, and the revised framework is now more streamlined with 63% fewer questions to answer in the survey and 56% fewer indicators to use. This makes the revised framework both comprehensive and accessible. I think I’ll stop here, Ariet, and give you the floor back. Thank you very much, Tawfiq. I do want to ask you one follow-up question, but particularly because I think we might have people in the room who are not that familiar with the indicators. But if I am from a country in the global south, we have very little bandwidth, very little internet access. Should we be worried about using the indicators? Will we find ourselves in some kind of ranking, where we would perhaps look as if we are not performing well? Is that something I should be scared of, or is that not something that the indicators will do? That is not something that the indicators will do, for a simple reason. UNESCO has been around for 80 years, has never done any comparative studies nor rankings of member states. So we are not in the business of rankings. The indicators are meant to be a guidance to our member states to conduct a national digital assessment, but not to compare countries, and certainly not to rank them.

AUDIENCE: Thanks so much for emphasizing that. I think it’s one of the reasons why the indicators are not just a powerful tool, but an empowering tool. David, are you ready? Are you online? Can you hear me? David, can you tell us, is David able to speak? I just want to check that I can hear you. Excellent, good. David, what is new in this revised internet universality framework? Taufik outlined very clearly why it was done, and also that it’s going to be much easier to use. But from your perspective, what is new? What do you feel are the key trends and challenges that was identified across the global IUI assessment process that informed this revision?

DAVID SOUTER: OK, so let me say something first about some issues, and then also something about the experience of research. and using the IOIs over the last eight years or so. I’m currently working on the 20-year review of the World Summit for the UN’s Commission on Science and Technology for Development. So that means I’m very conscious of the pace and the extent of change we’ve seen in digital development over the last 20 years, but particularly the acceleration of that growth in pace and extent of change in the last eight years or so since the original IOIs were published. And it was always intended that the first framework would be revised in time in line with what was happening in the digital environment and with the experience of researchers. So in this last eight years, we’ve seen really dramatic changes in each part of the Romex framework, from new trends, new challenges, which needed to be addressed within the indicators and the questions that frame them. So the enjoyment of rights online, for example, that’s been profoundly affected by issues concerned with information integrity or with platform regulation, the exploitation of personal data. The openness of the internet and open technology and resources has been amplified by the way in which technology and services have diversified. What we mean by access, the A in the framework, is much more concerned now with affordable connectivity and usage and, indeed, with impact of new technologies. And I think the multi-stakeholder context has also become much more diverse because digital resources now have great impact in every area of our economy, societies, and cultures, every aspect of sustainable development, and require input from those whose expertise lies not in digital resources themselves, but in those other areas of public policy and life. Much more attention is being paid now than was the case 10 years ago to gender equity, children’s rights. opportunities in welfare too, I think have been greatly enhanced by, the discussion of them has been greatly enhanced by General Comment 25 to the Children’s Rights Convention. So all of these are themes to which we’ve responded in the IUI revision, bringing them more to the fore. The ATG mentioned two trends in particular that we have especially sought to address, which, because they have greatly increased in importance, we’ve given them much more substance in the new framework. One of those is environmental risks. The environmental problems associated with digital development, as well as opportunities, are now much better understood than they were, concerned with energy consumption, with climate change, with waste, and that’s led to a much greater understanding of the need for a more circular approach to the literal economy. So that’s now incorporated within the X category in a way that it wasn’t before. And also there, one, artificial intelligence and other frontier technologies present great opportunities for digital development. They also present serious new challenges of governance arising from uncertainty and risk, which needs to be assessed within any thinking about the national internet environment. So all of those elements feature in the evolution of digital policies. They feature in the Pact for the Future, in the Global Digital Compact. They will feature in the BUSES First 20 review and the review of the Sustainable Development Goals in 2030. And clearly, they needed to feature in the IUIs as well. 
I could just say something as well about why we revised the structure in response to experience. I think it’s important that the framework is used to analyse the national internet environment, not just to tick boxes about particular indicators. And it’s important that it leads to recommendations that are feasible and can be put into practice by governments and other stakeholders, rather than simply speculating on what would be desirable in a perfect environment. So we’ve given more prominence to a couple of things. in the framework too, specific questions that should be addressed in the work of the research teams and of their multistakeholder advisory boards. The reports which are generated from these studies, they should respond to these specific questions above all, they should assess how the indicators relate to them, and they should make recommendations for ways in which the recommended supports can be advanced through changes that can practically be made to the national internet environment. And the other point, as David, you mentioned, is this was a substantial framework. Sorry, David, can you just pause? There’s interference, is everyone else hearing the interference as well? David, just pause a second, I just want to ask our tech support to check into that. Is it fixed, the interference? Great. It’s Vint Cerf and I’m not hearing any interference on the Zoom link. Thanks Vint, it’s obviously just those of us in the room. David, try again. Can you speak, David? Okay, there’s still interference. Try now, David. I’m afraid the noise is still there. I’m just asking if we can have, maybe there’s a phone or a mic, if that mic can be off. It’s crackling as well. David, try again. Okay, we still have interference. issue here and Tatevic can I ask you to just check with our tech team if they can fix this problem and David I think you should continue and I just apologize to everyone in the room for the noise on the line but hopefully it will be fixed please go ahead at least the remote participants are on not experiencing it go ahead David so those are the two main aspects that we felt from the experience of researchers needed to be amended they’re not particularly surprising I hope that they make the tools even more effective than they’ve been in the past so that’s all that I was going to say and there’s something more about the way the revision was done and about how the indicators can most effectively be used and I will pause I will stop there and hope that the rest is fine

Camila Gonzalez: Thanks, David, and I’m happy to report that the interference on the reception has gone as well. I reserve the right to come back, not to cross-examine you, but to ask you another question at a later stage. For now, let’s move on to Alexandre Barbosa from CETIC. In your experience, Alexandre, you were involved in the initial development, you’ve applied the indicators in Brazil, and you’ve been part of the revision process. How do you feel that the revised IUIs can transform and advance national internet development and governance, particularly from the perspective of evidence-based policymaking and coming up with the kind of tailored policy recommendations like the example Taufik gave us from Argentina? Tell us more.

Alexandre Barbosa: Thank you very much, Andrette, and good afternoon, everyone. Well, let me start by thanking Mr. ADG Taufik Jalassi for inviting me to this panel. And it was an honor for me to be part of the steering committee of the Internet Universality Indicators. And I have to tell you, Taufik, that your leadership and also Cedric’s and Taufik’s was really instrumental for this revision. It was one year of hard work of many actors revising this BROMEX framework and the set of Internet Universality Indicators. But if you allow me, I would like to go back in history to 2013 in the IGF in Bali, Indonesia, when UNESCO, along with NIC.br and LACNIC, we decided to fund a paper on the concept of Internet Universality. And after that year, we had into 2014 the NetMundial in Brazil, and we at NIC.br conducted two national and regional consultation to refine the concept and the framework. And after that, in 2015, we have conducted the first pilot of the indicators that gave us insights for the last revision. And David was really amazing by revising it. revising and putting all those indicators together, and the conceptual framework and everything. And in 2019, UNESCO published the first IUI report with the data from Brazil. So it was a real honor for us. And since then, many countries, as you said, ADG, more than 40 countries have published already a national assessment. And this gave us a lot of insights for this revision. And now going to your questions, Henriette, I would like to highlight four key points in my personal opinion that’s on the importance on how the revised UIS can transform and advance national internet development and governance. And I would like to emphasize the following points. First, I think that it was already been said by Henriette, internet universality indicators empower countries to adopt evidence-based policymaking by providing actionable data and diagnostic tool. This is the most powerful thing. We are not talking about ranking, but about assessment. And governments can identify gaps and strength in their internet ecosystem, ranging from digital inclusion to data protection and so many other aspects and dimensions of these ecosystems. And of course, allows a country to develop policies that directly address these findings from those national assessments. For instance, just to give you an example, the indicators can highlight disparities in internet access among marginalized groups or region, prompting target interventions to bridge the digital divide. I would say that more than that, the assessments offer a structured approach to evaluate the impact of policies over time, enabling continuous refinement and empowerment. So I think this is the first key important aspect on the how. The second point, in my opinion, is that the indicators facilitate tailored policy recommendations by aligning national priorities with international frameworks like the Sustainable Development Goals. This is a very important aspect. The revised IUI integrates lessons from global applications, as was already mentioned. A huge number of countries have already applied this assessment, and those lessons learned ensure their relevance across diverse contexts. And we, in this revision, took into consideration the past experience of those countries made in the assessment. And I would say that this adaptability allows policymakers to customize their use, whether focusing on enhancing data privacy laws, or fostering innovation, or promoting digital literacy. 
So one example from my country, Brazil, the application of this assessment in the country was able to flag very important aspects of this ecosystem. I’m not going into the detail, but for instance, the need to improve rural internet access. We had a long debate on community networks, and how to bridge the gap in rural areas, and also the establishment of the legal framework for personal data protection. And after this assessment, three years later, Brazil approved the national law on personal data protection. Another important point… on the how, is that the IUI strengthens multi-stakeholder collaboration as a cornerstone of sustainable Internet governance. So the multi-stakeholder dimension is really a key aspect of this framework. And by doing so, representatives from civil society, academia, private sector, government, this framework really fosters consensus-driven strategies that reflect diverse perspectives and interests. This is a very important point. And this multi-stakeholder approach ensures that policies are not only inclusive, but also rooted in practical and cross-sectoral expertise. And finally, my last point is that the IUI serves as a catalyst for regional and international cooperation. We have been seeing this in Latin America. We have helped many countries in the region to conduct their national assessment and collaboration was key in this regard. And countries can share best practices and align their strategy with global standards, fostering a collective effort to address shared challenges. For instance, the indicators can help neighboring countries in Latin America develop a harmonized approach to close the digital gaps, and more recently, to adopt best practices to measure the new concept of meaningful connectivity. This is really a key aspect that I would like to highlight. So in conclusion, I would say that the revised IUI offers an invaluable roadmap for advancing regional Internet development and governance. And by supporting evidence-based policymaking and providing tailored recommendations, the framework empowers… countries to create a more inclusive, rights-respecting and resilient digital ecosystem. So it was really a great pleasure to work with all of you in the steering committee and to produce this new revised version that we have some copies here, so for those willing to have a physical copy you can take after the panel. Thank you so much. Thanks Alexandre. Just a follow-up question on this. Now

Camila Gonzalez: UNESCO and CETIC consulted very widely in the process of this revision, so this was not a desk-based exercise; it was based on talking to people who have used the indicators and also getting input from others. Are there any particular insights or regional perspectives that you came across in the revision process that influenced this redesign of the indicators? Is there anything that stood out for you from the consultation process? Yes, it is important to mention that besides conducting a new round of consultations, UNESCO also sent out a survey to those countries that had already conducted the assessment, and the result of this survey was very insightful for the process, because one issue was that the questionnaire, the number of indicators, was too long, so it was an opportunity to review this set of questions and indicators. More than that, in the X dimension we were able to really consider new dimensions like sustainable development and gender. So this was a really insightful process, and as UNESCO is a very transparent type of organization, we took into account the voices from regions and from countries. Thanks very much for that, Alexandre. I'm just checking, I also have the Zoom on my phone here, so I'm also checking what's happening online. But now that you've heard more about the indicators and about the revision, are there any questions or comments? And I also want to invite all the other panelists, including those online, if you have any additional remarks or questions in this first segment before we move on to our next part. The next part of the session is going to really look at the future of the IUIs and how we see the IUIs playing a key role in global digital governance. But the floor is now open. If you're in the room, raise your hand. If you're online, raise your virtual hand. I see… I don't see any online hands. Tatevic, is there someone online? I can't hear you, I'm afraid. Who's there? It's a question for David. So, David, can you… I'm sure you've read it, but I'm going to read it out for everyone in the room. It's for David Souter, from Susanna Naranjo. In your view, how do the IUIs stay adaptable to future technological and policy challenges while maintaining their core principles? David, are you happy to take that on?

DAVID SOUTER: Yes, sure. So, one of the things is that, as I said at the beginning, it was always the intention to revise these indicators after a number of years in order to respond specifically to developments that were taking place. The developments that take place within the digital sector are particularly problematic because they're very difficult to anticipate. If you look back at the World Summit 20 years ago, there was very little said then about mobility, because that was not seen at the time as being a particularly important dimension of the future development of the digital world, of the information society. Well, clearly, that was one of the most stark developments. Many of the technologies and services we have now, most of them really, were simply not anticipated at that time. So these changes are dramatic, and there needs to be a response to them. I'd say two things. This isn't meant to be a rigid framework. It is meant to be a framework that is for the use of people within their particular environments. So the teams and the multi-stakeholder advisory boards that assist them in each country should be thinking about what is specifically important to their country, and how these questions relate to their country. The meaning of that is different within every country, and so the adaptation that one might need here is twofold. I think every few years, maybe five years, maybe a little longer, it's important for UNESCO to reflect on these indicators again and how they might evolve, perhaps next time after the SDG review in 2030. But it's also important for those dealing with them, those using them in individual countries, to think of the adaptations that are needed to interpret those principles and those questions for their own country and their own time.

AUDIENCE: Thanks for that, David. Is there bad audio again? Is it better? Great. I do have a question, and anyone on the panel can respond to it, David, yourself as well: what about data sources? Have you found, in some of the countries where the indicators have been applied, that actually getting the data to respond to those questions and do the kind of analysis that you're talking about has been a challenge? And if it has been a challenge, how has that evolved over time, and what have you found to be effective ways of responding to that challenge? So I'm not sure, David or Alexandre, if you want to respond to that. David, Alexandre is going to respond first and then you can add. Thank you for this question. This is a key issue in the assessment. We need data availability, and we know that in many countries we still have a data gap for many indicators. In the case of Brazil, we have the privilege of having very rich data sets going back many, many years. But what we have been seeing is that when the data is not available, countries have to go to more official types of data sources, from the International Telecommunication Union or the World Bank or the OECD, which are secondary data sources. And of course, in terms of collecting primary data, the interviews that we have conducted with the key actors in these ecosystems, from different segments, not only government, of course, but academia, the private sector and civil society, are really key. So it is very important that the multi-stakeholder advisory board really has ownership and can lead the data collection, not only using already existing data sets, but also conducting new data collections. David, did you want to add on data?

DAVID SOUTER: Yes, I will. I think it's one of the ironies, isn't it, of the digital revolution, that we have, in fact, inadequate data to assess how the digital revolution is actually going. So, firstly, one of the reasons why there were so many indicators in the first framework was actually to address this particular problem. It was trying to give options that researchers could use, within the particular question they were looking at, to find evidence. Essentially, the data that are out there aren't adequate in many different contexts, and therefore researchers have to make the best use of what is available. That includes using assessments which are qualitative rather than quantitative, so considering the authoritative sources which might be available, the expertise which might be available, to fill in those gaps as credibly as possible. And also, I think at the end of the day it's really important, among the recommendations that are coming out of these reports, that some of those recommendations are about the ways in which the data gathering and data analysis framework needs to be improved within the countries, because good data are essential for good policymaking. Thanks,

Camila Gonzalez: Thanks for that, David. Vint, I'm very happy to see a hand in our virtual room. You have the floor.

VINTON CERF: Thank you so much, Henriette. I just wanted to draw attention to two activities in the US that might be of interest with regard to metrics. One of them is called the Measurement Lab. It's part of an organization called Code for Science and Society; Google is one of the members among many. It's a data collection effort to create open source data on the performance of the Internet. There is continued development of new measurement tools and metrics in order to understand the quality of service that's provided. For a long time bandwidth was the big kahuna, but now people are worried about latency and other things. So I would urge people to have a look at the Measurement Lab, or M-Lab, online to see whether there's open data there they could find useful, and also perhaps participate. The second activity is called the Broadband Coalition at the Marconi Society, and that's a regular meeting of people who are concerned about getting broadband access into operation in the rural parts of the U.S. Many of you will be aware of a major $42 billion effort to make broadband Internet access available in the rural parts of the U.S. So I draw these to your attention because they are very much relevant to the metrics that you've been developing.

Camila Gonzalez: Thanks, Vint. And those are really good examples as well, because I think they illustrate how the multi-stakeholder approach that the indicators deploy allows you to source metrics from your national statistical agencies, but also metrics that are generated by other stakeholder groups or other sectors. And I don't see any hands in the room. No one wants to ask a question. Is it too noisy here? And I don't see any other hands online. So is there a question in the chat? No, I think we've covered everything so far. So let's move on to the second part of our session and listen to more of our fantastic speakers, just to look a bit more towards the future now and the role of the IUIs in this evolving landscape of global digital governance. Jennifer, I'm going to start with you. What is your view? How do you think evidence-based policy can inform the development of national digital policies to address tech inequalities and governance challenges? And I'm going to add a little bit of a more provocative question as well, if I'm allowed to. Do you feel that evidence-based policy always comes naturally to governments, or is there also sometimes a process that has to be undertaken in a collaborative, consultative way, just to demonstrate and convince policymakers of the value of evidence-based policy?

Jennifer Bachus: I think this is working, yes. So my team gave me a great answer, but I'm gonna riff a little on your question. And thanks for including us here today. You know, since the US rejoined UNESCO, we have really been thrilled to be collaborating and working with you and your colleagues on a whole range of issues that we find incredibly important. So I should say I have been in the US government for, I'm coming up on 27 years, and let me tell you, this question around data-driven policymaking has taken off probably in the last 15 to 20 years, but it's really challenging. I think you've got to start with the idea of: what's the metric you use, right? I like to give the example, I was, I don't know, 20 years ago or so, working in our embassy in Vietnam as the econ and labor officer, and we were supposed to determine what metric we were going to use on the state of labor relations in Vietnam. And they said, okay, well, if you have more demonstrations, more labor actions, is that a good thing or a bad thing? And I'm an economic person, so I was like, more labor actions means it's a bad thing, because it means that labor feels unhappy. And the political officers were like, no, it's a good thing, because it means they can go and they can demonstrate and they can have their point of view heard. So in many ways, actually trying to figure out how to judge these issues is really difficult as policymakers. Now, luckily for us, on some of these issues it's actually quite easy: what percent of the population is connected, that one I feel like is pretty good. But there are always going to be indicators that are probably going to be a little bit more fraught on what you're trying to achieve, right? So I think, first of all, we should all strive for data-driven decision-making. It's something the US government has really embraced over the last 20 years. But we also need to recognize that there are going to be some tensions between, you know, what data are we looking for? Is that a good thing? Is that a bad thing? How can we continue to try to strive towards reaching these outcomes that we want? Because the thing is, again, I am clearly a career bureaucrat. I haven't worked in the private sector, but having worked with lots of bosses who have, they always say to me, okay, but I've got to understand what the return on investment is. If I'm going to spend X amount of money to connect people, am I getting my money's worth? Where is the return on investment, and how can we demonstrate to our shareholders, which in our case is the taxpayers, that it is worth it? We are going in and out, I apologize for that. I promise it's not me. I know. I do want to touch on a couple of other things, on some of the ideas my team has written in here. So, you know, we think it's incredibly important to have an affirmative vision for how digital technologies are working together. Part of the affirmative vision is the idea of connecting the unconnected, of trying to bring digital development to the world and doing so from a multi-stakeholder point of view. And I was also very much struck by Vint's comment about the indicators that can come from the private sector.
We, the US government, can say we believe we have this percent of the population connected, but we need to double-check that, if you're talking about the US approach, by talking to US telecommunications providers and US civil society and others who are going to say, well, you think it's this percent, but you're missing something, you've got this over here on the left, and you need to look at that. That's how we think about trying to analyze it. You need to get as many data sources as possible and then recognize that sometimes those data sources are, in fact, going to disagree with each other, and then you have to figure out a way to reconcile them. Now, all of that is messy and time-consuming, but I think it does have to be our ultimate goal as we look at these indicators. So, hopefully I answered your question. I think I missed a lot of things that my team wanted me to say, so if you come back around, I'll see what I was supposed to say.

Camila Gonzalez: I will definitely come back around to you. And thanks for that response, because I think you really cut to why this is so challenging, and I don't think we should pretend that it's not. But Vint, do you want to build on that, actually? Jennifer talked about the perspective, the value, that the private sector brings to this kind of process. What do you see as the role that tech companies specifically, but also the private sector at large, can play in this approach, advancing these kinds of principles, but also addressing this issue of data-driven, evidence-based policymaking?

VINTON CERF: So, first of all, data-driven policymaking is really smart. I mean, any business model that you want to put together really needs to be based on data, otherwise you're just flying blind. So I'm a huge fan of data collection and analysis. At Google, of course, we believe that numbers count, and gathering data to guide our policies is absolutely essential. So I'm a big fan of measurement, and I'd like to congratulate UNESCO on its further evolution of the IUI framework. I wanted to just make a comment about metrics for a second, because it's one thing to measure things like data rates and latency and so on, but there are some other very important things that determine whether something's useful or not. One of them is availability. Is it reliably there all the time when you need it? Can you actually afford it, which is a major issue? And is it fit for purpose? That is to say, do its parametric performance values actually serve the applications that the users want? And I would argue that, as you move around the world, you find people using different applications requiring different kinds of performance. Reliability and resilience are equally important, because if it's not there when you need it, then it doesn't serve your needs. And I would include one other possible metric. I'm not sure how you would do this, but I wonder if accountability is an important component of the utility of the Internet. We know that there are harmful behaviors on the net, and we wonder about how to hold parties accountable. I have no idea whether that's a metric that you can measure, but it certainly is something that we should be concerned about. Am I assuming that we're moving into my more general presentation, Henriette, or am I just responding to your immediate question?

Camila Gonzalez: I think that, you know, you're responding to my immediate question, but if you have time, now would be a good time if you wanted to make some broader inputs as well. So please go ahead. I do want to come back, though. While you do that, I'm going to alert David and Alexandre to your question about accountability. Vint has just, I think, put a very important challenge on the table. Can you measure accountability? How do you measure it? It, of course, has many dimensions, but perhaps you can think about it, and after Vint has spoken again, you can come back and tell us the extent to which the indicators at present try to deal with this question of accountability. But back to you, Vint.

VINTON CERF: Thank you so much, Henriette. So I will continue more broadly. I want to talk a bit about the private sector, because that's where a great deal of the Internet access is implemented. Certainly at Google, we invest very heavily in international subsea cable networks, our land-based cable networks, and of course our data centers and all of the communications that are required to support them, plus interconnection to the public Internet, in order to allow users to get to our data and computing capabilities. So we make, as do others in the private sector, major investments that enable people to make use of the Internet and the kinds of applications that it can support. Certainly another element here in terms of metrics and investment is Internet exchange points that allow the various networks of the Internet to interconnect efficiently with each other. And I'm sure many of the countries that are concerned about connectivity have made a point of at least measuring, if not also investing in, Internet exchange points to facilitate interconnection and resilience. There are other ways in which the private sector can contribute. One of them is open source, and we're big fans of that at Google. Much of our software is available through open source, and it is an enabler for others to take advantage of that work and to build upon it. We also provide broad platforms like large language models for artificial intelligence and machine learning applications that, again, let other people build on top of those frontier models. We're also big fans of open research, that is to say sharing what we've discovered and what we've learned. We're also very active, as you know, in the Internet Governance Forum and the national and regional Internet Governance Forums, because those are places where information about the metrics that you have developed can be disseminated and perhaps also feedback can be obtained from the measurements that are made. We're active in standards as well, and I think those are other equally enabling mechanisms that make the Internet more useful for everyone. I could go on and on here, but I won't. I will say, though, with regard to accessibility, that this is a space where the private sector has made significant contributions, not only at Google, where we're very focused on captions and translation of languages from one to another. Others have made significant investments, Microsoft and Apple, for example, in terms of accessibility. These are really fundamental to making the Internet useful for everyone, which of course is one of UNESCO's primary objectives. I will say, however, that I don't quite know what to say about artificial intelligence and the IUIs. I think we're not clear yet what to measure about an artificial intelligence application to tell us whether it's working well or not. So there's still some work to be done, I think, to figure out how we assess artificial intelligence. And if it becomes increasingly central to the applications that we all use, I suspect that there has to be some further discussion within the UNESCO context about how we measure the utility and safety of the artificial intelligence applications that are emerging. So I'll stop there. I'm sure you've got other questions and more for the rest of the panel.

Camila Gonzalez: Thanks for that, Vint. And if I recall correctly, what the revision of the IUIs does is actually not overly ambitious. I think it's asking countries to assess: are they at least talking about the challenges related to AI? So rather than having a finite measuring framework there, I think it is actually just giving those country teams the opportunity to have that conversation that you just opened. But now I want to move to Tenanoia from Tuvalu, from the Tuvalu Telecoms Corporation, because Tuvalu has applied the indicators. And I want to ask you: what are the challenges, particularly for a small island developing state which already has so many internet-related challenges? What challenges do you feel you face in internet governance, reflecting on your experience of applying the IUIs in Tuvalu? Can the framework provide solutions? Do you think it can? Do you feel it has? Just give us your insights based on your

Tenanoia Veronica Simona: experience. Thank you so much for the question. I would like to take the opportunity to thank you for including our small Pacific Island countries in this assessment, and it's really great to have the people from UNESCO visit our small island state. Thank you. Tuvalu's digital development journey really reflects both the aspirations and challenges of a small island nation in advancing the IUIs within a unique socio-economic and geographic context. So let me just give you a little bit of the Tuvalu digital development journey. Our journey is forward-looking towards digital transformation. It's shaped basically by geographic isolation, limited infrastructure and vulnerability to climate change, and I think David mentioned that as well. But despite these challenges, Tuvalu has embraced digitalization as part of its digital nation initiative. The initiative really involves building digital twins, deploying modern telecommunication infrastructure, and also implementing services like fintech solutions, from which we have been very disadvantaged. It really aims to enhance connectivity and foster economic inclusion from our perspective. I think this development reflects a commitment to leveraging digital tools and frameworks like this to overcome the structural limitations and strengthen governance, education and public service delivery. Coming to the question of challenges, small island nations like Tuvalu face very unique challenges in internet governance, of which I would like to highlight a few. One big challenge is infrastructure constraints, in the sense of the high cost of undersea cables. I'm happy to say that my nation Tuvalu just landed its first submarine cable a couple of days ago, and I thank giant companies like Google, Vint, and your team for helping get us connected to the entire world. Along with these challenges, there is the constraint of limited capacity for fiber deployment, given that the structure and geographic landscape of our islands are very remote and isolated, so the deployments can be very costly. Before that cable landed, we depended 100% on satellite connectivity, so we were left out of the opportunities of advancement in technology. One of the other limitations that I think we highlighted in the assessment we did is that, because we are a very small economy, we struggle to attract private investment in digital infrastructure, and we rely heavily on development aid. The other important challenge that I want to highlight is capacity building. We have very limited technical expertise because of our isolated islands, and we are very far from the world of advanced technology; given that we are now connected to the internet through cable, this will enable us to build that capacity at a much more rapid rate. One of the major challenges in the Pacific Island countries is fragmented and underdeveloped regulatory frameworks, which makes it very difficult to ensure things like cybersecurity, data protection, and basically competition. The other factor that really comes into play when it comes to internet governance is environmental risks. We are very vulnerable as a small island state to natural disasters, because they disrupt connectivity and also strain our ability to recover.
And so bridging the digital divide in remote islands and remote communities, and ensuring affordable internet access, still remains a persistent issue. The next part of your question asks whether we consider the IUI framework a solution. Well, it offers quite a number of solutions. From the perspective of assessment and benchmarking, it identifies the gaps in infrastructure access, skills and content, and helps us prioritize our interventions and also localize solutions. And I think adapting global best practices to the specific needs and reality of a small island nation is very important, because we can contextualize what we need. From a policy guidance perspective, I think the framework really offers guidance, recommending governance structures that foster equitable access, I would say, and sustainability and resilience. Coming back to capacity building, I think this framework highlights the need for partnerships and training programs to build that technical expertise in the local context, at the local level. My last point on what the framework offers is that it facilitates collaboration in terms of funding availability with international donors and development agencies, given our small context, because it addresses the financial constraints that we have as small Pacific Island countries. Thank you.

Camila Gonzalez: Thanks, Tenanoia. I think those are really good responses, because I would imagine that when you face so many different challenges, going through this national process, which allows, or also perhaps forces you to… I can't hear myself, but I assume it's working. I think what you said there, and I've been close enough to the IUIs to say this is part of their design, is that the IUIs are designed in such a way that what you build to do the measurement and the analysis can also evolve into partnerships and collaboration around addressing those challenges. And I think you've just said that so well. I want to move on now to Alaa. You work in digital cooperation for digital inclusion in 16 different countries. You haven't used the IUIs yet, but you're engaging with them now, and you're thinking about them. How do you, at this point, see the IUIs facilitating and supporting the kind of multi-stakeholder collaboration that you're already working on in those 16 countries?

ALLA ABDULAAL: Hello, everyone. Thank you. I'm very honored to be here on this panel. And I would like, first of all, to congratulate UNESCO on the launch of the new IUIs. I think it's a very amazing step to have them reviewed, because we are in an era where everything is accelerating very quickly, and digitalization is affecting and impacting the transformation of countries. So it is very important always to stop, reflect, and engage, and always update all the measurements and frameworks that we are building. At the Digital Cooperation Organization, I think what we are trying to achieve, and what is really aligned with what UNESCO as an organization is trying to do through the IUIs, is to bridge the digital divide: to have a framework that will help and support countries to assess where they stand, to understand their current state, and to build actionable solutions and plans. And we as a digital cooperation organization have recently launched our Digital Economy Navigator, which focuses on the digital economy from a maturity perspective. Again, we are moving towards using the same approach, which is a database of different indicators. It's very aligned with what UNESCO is trying to do with the IUIs. I really see the point that when we are trying to build collaboration, it's very important for us to have the right data in place, to have it built upon those different indicators. And this is why I believe this framework, even the new one, the IUIs, will really provide targeted plans for countries, not only to understand where they are, but also, and it was mentioned by one of the panelists, to look at what the return on investment is, to measure what is really being impacted. So when you sit there and start creating initiatives or even changing policies based on data, this will really help you measure how much you are progressing, because it's not just random. It's not just putting plans in place, but actually building those plans on existing data, on a unified framework, where not only one country is looking at it, but a list of countries having the same direction, having the same vision that they want to accomplish. And then, also, it's very important, as was mentioned, to share the lessons and the experience between those countries, again, based on a well-established framework. For all the countries to have that unified vision is, I believe, an accomplishment by itself. And not only from a country perspective, but, as we mentioned, from a multi-stakeholder aspect. To have academia, the private sector and governments, all of them, looking at the digital divide in the same way, trying to bridge it, trying to address it. This is really a step forward to accelerate that evolution and transformation that we are trying to achieve. It is the right way, it is the fastest way, and I believe it's the only impactful way for us to move forward. Thank you very much.

Camila Gonzalez: Thanks very much, Alaa. And, in fact, as a taxpayer, I consider tax as my form of investment in the public sector, and I ask that question, too. You know, what return on investment do we get from our taxes? So I think that approach is as important, perhaps even more so, with public sector investment. And I want to go back to you, Jennifer, seeing as you're a self-confessed government official. We talk a lot about the multi-stakeholder approach in the IUIs and in the IGF. But do you feel that there's also a need to strengthen multilateral efforts to use tools such as the IUIs, which adopt the multi-stakeholder approach? Do you think there's enough of an understanding within the intergovernmental space about the value of tools such as this to address these emerging global governance and digital governance challenges? Thanks for the question. I think the reality is that multilateralism dominates most of our work. I mean, let's be clear, having negotiated the Global Digital Compact, even though we injected multi-stakeholderism into it, and the Global Digital Compact did reinforce the role of multi-stakeholderism, multilateralism is continuing to have a leading role in these issues. And I guess I'm more worried about multi-stakeholderism than I am about multilateralism. I looked back at my notes and it wasn't in there, but it sparked in my mind the idea of an internet that is rights-based, open, accessible to all and nurtured by multi-stakeholder participation. It's a question of how you evaluate the stakeholder participation. If you consult, is that multi-stakeholder participation? If you talk to one civil society organization, is that multi-stakeholder participation? When you talk about the quality of data, the reality is you need to be consulting with multiple companies, multiple civil society organizations, multiple academics, and multiple tech communities. One of the things we need to think about, and sorry to have ignored your multilateral question, because I think we spend a lot of time on it, is defining multi-stakeholderism in a way that's actually meaningful; that's something that's incredibly important, and so is making sure that the data is not just from one company. Because what we find regularly is that talking to big tech, we'll get a different answer than talking to, let's say, small tech, and talking to civil society based in rural areas is different than talking to civil society based in urban areas. You also need not just quantity but quality; otherwise, you're gonna make policies that are not the best and that are potentially not implementable. So I think a lot about this, because I am a government bureaucrat. I am not an expert in the way technology works. I need as many experts as I can to say, well, you think if you write this thing, it's gonna have an outcome, but in fact, yeah, it's not working. But anyway, hopefully you get my point. Sorry, I am just sorry, Jennifer. Is the audio cutting out for other people as well? Not just for me. So, our tech people in the back of the room, the audio from speakers in the room is not working fantastically. I don't know if it's the mic or whatever. But Jennifer, I'm so glad you said that. Do you think it's the mic there? Good. So we need to give you another mic. We'll hear now. I'm very glad you emphasized that, because I think if we wanna use this multi-stakeholder approach, we cannot just use it at a tokenistic level.
We've gotta be intentional about it, deliberative, and acknowledge that there's diversity. So I think that's really important. And in fact, I think we launched the open consultation process for these updated internet universality indicators during NetMundial Plus 10 in Sao Paulo earlier this year. And one of the outcomes of that process is that both the multilateral and the multi-stakeholder internet governance processes need to get better and be more intentional and inclusive.

AUDIENCE: But Alaa, you wanted to come in on this issue as well, so please go ahead and we'll check your mic. Can everyone hear me? Yes, I think I can be heard. So, yeah, I totally agree with what Jennifer was mentioning, the specifics. Is it on? Yes. I can't hear you. Can you hear me? Go ahead. I can't hear you, but it's fine. So I totally agree with what she has said, because again, it's not only about one country's perspective. It also impacts the multilateral aspect, because it's not just one country's perspective, but rather looking at different countries, different regions, different situations, different levels of maturity in different aspects, different sectors. And then, as you have mentioned, are we talking about big tech companies? Are we talking about SMEs, small and medium enterprises, or a different perspective coming from academia, researchers, think tanks? They all provide their own angle on how to tackle the different transformation challenges that different countries are facing. And I believe we should try, even as international organizations working together, to bring everyone to the table through different consultations across different regions and different layers. Again, I believe this is the only way that we can really help in providing that unified direction for different countries, so that all of them are, let's say, not at the same level, but at least we are all talking on the same foundation and living in the same era, not having a third of the world unconnected, or 2.6 billion people not connected while the others are connected and talking about a different age of transformation.

Camila Gonzalez: Thanks. In Africa, it's under 40% at the moment. I see your hand. I also want to invite other people in the room to speak and ask questions. And also, okay, I've noted your hand, and Vint asked a question in the Zoom chat about whether low-earth-orbit satellites are being used for Pacific Islands. So if anybody wants to volunteer to respond to that, either from the room or online, please go ahead. Jennifer, you have an answer. I mean, the answer is yes, they are, and it depends what you're trying to do whether LEOs are going to be sufficient to connect the unconnected. And, you know, we worked a lot on the Tuvalu question, and I will turn to you, but recognizing that what you can do for a population the size of Tuvalu with low-earth-orbit satellites is nothing compared to what you can do with an undersea cable. But I also want to add another thing. What was really interesting was when we started to talk to ambassadors, U.S. chiefs of mission, about AI, we had a number of ambassadors who said, you guys are talking about a conversation that really is about a small number of countries, when in fact, and these were particularly in developing countries, ambassadors are like, we don't even have connected populations. So it's like there's almost two different conversations happening in different areas. And so it's a little bit of, you can't forget that you still have the unconnected. You need to have them in the room. You can't just say, we're going to run off and have another complete conversation about AI, when we can't even, you know, talk about having meaningful access to information and connectivity. So that is, I think, your point about getting everybody in the room, because otherwise you have a conversation that feels like only part of the world is participating in it. And I don't speak for Tuvalu, but I think that you probably have similar points. Do you want to respond? Go ahead. Yeah, so in our part of the world, LEO, yes, is playing a very important role in connecting our remote areas. The question is, is it affordable for a standard local person in that remote area to get connected to this LEO solution? You know, it's a brilliant, very good solution, but there's always a question of affordability and how we can sustain that in the context of a small island nation. And it's something that people in my country, I would say, are still facing as an affordability issue and challenge. Thanks for that. And Vint, I'm going to come to you. I've seen your hand. But I have two people in the room that are eager to speak, so I'm going to give them the floor. We'll start over there. And then, can I ask someone to help with moving the mic? This is the mic that works. And just introduce yourself and be brief. Thank you so much.

JOSE FISSA: Hi, everyone. My name is Dr. José Fissa Hadidban. I coordinate the chat with the IGF, and I'm here attending this session as a reporter. I would like to thank UNESCO for giving me this chance. And if I'm here attending the IGF, it's because of UNESCO. So thanks a lot to the UNESCO team for that. I would also like to express appreciation for the revised framework. But the situation in my country and the indicators are quite different. And for me to understand clearly the position of the co-facilitators, I have a question. If you could please specify: how can the advanced second-generation Internet universality indicators, the ROAMX framework, contribute to shaping an inclusive, rights-respecting and sustainable digital future? And what specific strategies should stakeholders, such as government, civil society, the private sector and academia, adopt to integrate these indicators into national and regional Internet governance frameworks?

Camila Gonzalez: effectively. Thank you so much. Thanks very much for that question, and I think maybe, Tawfik, you can come to that in your closing remarks and address it. Aziz, let's have your question, and then Avice, just introduce yourself and be brief, and then we'll have Vint. Aziz, you are next.

AZIZ HILALI: Thank you. We can hear you. You hear me? Yes. Thank you, Henriette. I am Aziz Hilali, I am a professor and former co-chair of ISOC Morocco and a member of different IGFs locally and regionally, and I would like to come back to the special importance of the indicators, particularly in the Arab region, where digital transformation is happening but at different speeds and in different ways. More than half of the, I think, close to 500 million people in the 23 countries of the region are connected to the Internet. However, there are still significant digital gaps. Internet penetration in the region remains below the global average, which is 65%. The same goes for the African region. The importance of these indicators, I think, is that they can act as a compass to guide public policies toward fair and sustainable solutions aligned with the Sustainable Development Goals, the SDGs. In this context, I want to highlight the importance of including these indicators in national and regional strategies. To give North Africa as an example, with its challenges, such as weak infrastructure and unequal access, it could benefit from recommendations based on reliable data to reduce digital divides. So this effort must involve all stakeholders, as most speakers said in this session; integrating these indicators into stakeholder strategies is very crucial for building an internet that is inclusive, open and respectful of human rights. Thank you.

Camila Gonzalez: Thanks, Aziz. Yes, I think we always talk about a holistic approach, and I think what the indicators give us is a way of applying that. We'll have a question from Avice. Do you want to, you don't have a question? Sorry, I can't hear, maybe give him the mic. Thank you very much, I'm Avice from Cameroon. It is a question I am asking myself for civil society, as I'm from civil society: we want to be involved in the assessment, and as you know, there is some data coming from the government, specifically on this point, from the use of the Universal Access Fund, which is very quiet. Some countries are not really, let's say, they don't want to give the information about this point, and as you know, the use of the Universal Fund gives a lot of data on what is being implemented in the field. So I don't know if there is some advice coming on this point, please. Good, I think that's a very good question, and I'm going to ask any panelist who's got experience of this and who understands the indicators to talk about whether the process will reveal whether there are concerns about our Universal Access Funds. What I have learned from looking at the indicators is how different countries are actually approaching universal access fund deployment differently. But Vint, let's have your question, and then we'll go into a round of responses. I see David is ready to tell us about accountability. Vint, if that wasn't an old hand, please go ahead.

VINTON CERF: It’s not an old hand. I am an old hand, but that’s a different story. So am I. Yeah. I just wanted to draw attention to the fact that multi-stakeholder practices are vitally important here. It’s certainly true that member states have a great deal to do with policy, and international policy in particular. But with regard to internet and its implementation, it’s fair to say that the bulk of the implementation is done in the private sector. And so there is a natural partnership that should arise out of government and the private sector, to say nothing about the influence of the civil society and the technology community with regard to either utility or implementation of the internet. So I just want to overemphasize the importance of this collaborative component for connectivity, as well as all the other metrics that go along with the IUI. Thanks. Thanks, Vint. I can’t resist saying this, though. If the private sector was doing a better job, we wouldn’t have lower than 40% internet penetration in Africa. But that’s really a challenge to the mobile sector, not to Google.

Camila Gonzalez: So let's just hear from David. David, you can respond to the accountability question. So please go ahead. And then I'm going to ask other panelists to respond to other questions. I know that you need to leave quite soon, so I'll give you the floor first. So, do the indicators help us address that issue of accountability that Vint raised earlier? Please go ahead.

DAVID SOUTER: So I would say accountability is a very fundamental question to anything around this, and it's really to do with the relationship between technology and human society, both governments and businesses as well. And it's very much related to power structures, so the extent to which you are capable of assessing accountability really does depend a lot on what the power structures within a society enable you to look at and what sort of data are available. In this framework, there are quite a lot of places where the IUI indicators are asking you to look first at what the state of law, regulation and so on is, so what is the formal requirement, and then secondly at how that is enforced in practice, or what is actually happening in practice. That second part of the question is to do with accountability, and in assessing it, I'd say, well, first, quantitative evidence is often lacking, but it is also not always going to be the best source. A great deal more openness from digital businesses here might be helpful; there's too much keeping of information confidential for commercial reasons, or supposed commercial reasons. I've noticed this particularly when working in the area of the environment. There's a need for critical assessment by researchers of what quantitative information is available, but there's also a need to look at the qualitative evidence that I mentioned before: what do serious observers, academics, serious journalists, researchers in genuine think tanks, that sort of thing, what are they saying? And in terms of accountability again, I'd say, and I think the framework does address this, it's not just about what governments are doing, it's also about the power of other actors within the digital environment, including businesses and markets. And I suppose two other quick points. Firstly, on evidence-based policymaking, one of the problems we have here is that not everybody believes in evidence-based policymaking, and actually quite a lot of governments don't believe in evidence-based policymaking, as we're seeing. So that's a challenge here. In terms of AI, assessing accountability is going to be particularly difficult. How do you assess accountability if those who are running systems themselves aren't really capable of understanding why particular decisions are being made? So I think with AI we reach another level of difficulty in assessing accountability, which is a challenge for the next revision of the IUIs, but it's also actually a much bigger challenge, I think, for society as a whole.

Camila Gonzalez: And David thanks very much for that and before I move on to the next, well let me actually ask Alaa to speak and then I’ll come back to you on Universal Service Funds. Do you have some

ALLA ABDULAAL: closing remarks before you leave? Yes, so first of all, again, we congratulate UNESCO, and I really want to emphasize the transformative potential of frameworks like the IUIs and the vital role that they really play in shaping evidence-based policy and action plans, enabling good multi-stakeholder collaboration and bridging the digital divide. As the digital landscape is evolving very quickly, we as the Digital Cooperation Organization are really committed to supporting these efforts by enabling our member states to leverage tools like the IUIs, and also our Digital Economy Navigator, to achieve their inclusive and sustainable digital transformation journey. As our name says, we are the Digital Cooperation Organization, and we believe in the importance of cooperation. And this is why we are very happy to be on such a panel beside UNESCO and to have that multilateral and multi-stakeholder conversation. And we believe that this is the right approach: to work together, to share our experience, to make sure that no one is left behind and that we have a prosperous future for all.

Camila Gonzalez: Thank you. Thanks very much. And I know you have to go, but thanks for joining our panel. And David, will applying the IUIs help a country team unpack some of the challenges around how the Universal Service and Access Fund is defined, deployed and contributing to meaningful access? Is that covered by the indicators? David, are you still there or are you muted? I lost the connectivity and you came back at the end saying something about…

DAVID SOUTER: My question is that, I know that in the revised IUIs you have built in additional focus on meaningful connectivity. So the question that we had from the floor was: would applying the indicators help, at a national level, the multi-stakeholder group of implementers of the indicators to unpack whether the country is using its Universal Access or Service Fund effectively, whether there are issues with how it's defined, whether it's being used for, let's say, local access or community networks? Is that a topic that will be surfaced by applying the IUIs? I mean, okay, so one of the things that's important about the indicators is that they're not overwhelmingly specific, and so the issues that are raised in terms of meaningful connectivity in one country will differ from those that are raised in another. What the indicators do is give the researchers scope to identify what is important within their individual country and then to focus on that. So the answer to your question is yes, it does, and it doesn't need to identify that in a very specific way in order for that to be the case. It's something that the researchers and the multi-stakeholder advisory boards should direct their work towards in those countries where that is a particularly important question. Thanks very much for that, David, and I

Camila Gonzalez: also know that it can reveal if a regulator is finding it difficult to get data from operators, which is often the case; the IUI process will most likely reveal that too. But let's have some final remarks from the panel. I hear myself cutting out, but Alexandre, let's start with you, and then we'll go on to you, and then to Jennifer, and then Tawfik will close for us. Any reactions or responses to the questions, or additional points that you want to make?

Alexandre Barbosa: Thank you very much, Henriette. Well, I would like to comment on the multi-stakeholder dialogue that we're having here, just to mention that in Brazil we have a very well-established multi-stakeholder internet governance model, in which the government coordinates the whole structure, and I would say that it is a multi-stakeholder dialogue platform in which we have different voices, and I agree with Jennifer on the question of which voices we are hearing. But what I want to say is that, although this is a well-structured governance model, we have so many opportunities for dialoguing with society, like the National Internet Governance Forum, and all the different areas that are taken into consideration in this dialogue, like culture, digital inclusion, gender, meaningful connectivity, artificial intelligence. So in those specific areas, we invite different voices so that we can take those aspects into consideration in the policy design. And besides that, I'm responsible for a data production center related to measuring the adoption and impact of ICTs in different areas of society, and in that particular case, we also have expert groups that support our measurement activities. So we have government, academia, civil society and the private sector guiding us in terms of how to measure and what to measure, based on which methodology. So I would say that the Brazilian model is really solid enough that it provides the government with very important insights for policy design. And just to mention three important, influential dialogues that we have had. I guess that most of you may know the Brazilian Internet Bill of Rights, which we call in Portuguese the Marco Civil da Internet. It was based on a very important dialogue that took place within the Brazilian Internet Steering Committee. Also, when we had the personal data protection act approved, it was previously based on the multi-stakeholder dialogue that we had. The same goes for digital inclusion and digital skills policies, along with the Ministry of Education in Brazil. So this is a really important process, and I agree that one hundred percent evidence-based policy is really difficult to have, because policy design and the process are quite complex. But I would say that in the last 20 years, based on this dialogue in the Brazilian Internet Steering Committee, we did make progress in that regard. And I would like to finish by saying that the Brazilian government also counts on this structure, on this multi-stakeholder structure, to help in very important and critical actions like the G20. We have just finished the presidency of the G20, in which the Brazilian Internet Steering Committee took a very active role in several areas in terms of the digital economy. So we work very closely with the government in terms of artificial intelligence and meaningful connectivity. And just to finish, I would like to say that since the first assessment that we had in 2015, using the first-generation IUI, I would say that today we are not discussing digital inclusion in terms of being or not being connected, but in terms of meaningful connectivity, which brings in a huge number of dimensions like digital skills, affordability, safety and use of the internet. So this is a result of this process. And I hope, Tawfik, that we will be able to once again pioneer by adopting the second generation of this framework, which is so important. So maybe at the next IGF we will be able to give some results of this second generation.
And once again, I think that this is a very important moment, because we have recently approved the Pact for the Future and we also have the upcoming WSIS Plus 20 review, and ROAMX is a very important model. And just to finish, I would like to say that the new title of this publication is about advancing inclusive digital transformation with the ROAMX indicators. ROAMX is a key pillar in all this discussion of the Global Digital Compact and the WSIS Plus 20 review. So once again, congratulations to UNESCO and to your leadership for providing this very important and relevant framework.

Camila Gonzalez: Thanks very much, Alexandre. Tenanoia, do you want to add anything? Yep, thank you for the second opportunity. I think I will acknowledge the role of the MAB, the multi-stakeholder advisory board, because there are key benefits from this arrangement. One of these is that the board really brings together government, civil society, the private sector and community representation, because it ensures the diverse voices of the people and also enhances credibility. Stakeholder involvement fosters trust and buy-in from various groups. And I think members contributed technical expertise as well as insights into best practices, which, for me, enhances the quality of the assessment and recommendations. And the last thing I wanted to say is that, when we did the consultation with multiple stakeholders from various sectors, sometimes there were conflicts, and I think that the board really helps mediate those kinds of competing interests and align the objectives with what we believe should be contextualized to the context of the nation. So, again, I would like to echo the same sentiment as other speakers in congratulating UNESCO for the launch of the new framework. Thank you. Thanks, Tenanoia. Jennifer. Thanks for that, and again, to echo the comments of the rest of the panelists, congratulations on this. I think it's so critically important that we continue to discuss ways to try to evaluate and understand how connectivity is taking place. I think, you know, the U.S. government has really upped its game on connectivity, on engagement with UNESCO, and on engagement with the multi-stakeholder community. We're proud in our organization to have been a critical element of this, also through the launching of the U.S. International Cyberspace and Digital Policy Strategy, which talks about all of these things. But to conclude, I just want to say how excited I am to continue to work with this group and with others to really advance our digital future, one that's based on rights-respecting technology, one that's based on bringing all the voices into it, because ultimately, if we're going to achieve our goals for connectivity and for meaningful access to information, we're going to need to do it together, in a way that really is based on this idea of data, collaboration and communication. So thanks. Thanks, Jennifer. Vint, did you have any further comments? David, anything more from you? Vint says audio. The Zoom participants say that the audio is cutting out for them. And I think let's move on then to asking Tawfik to make some closing remarks for us. And I have to add, before you start, my congratulations to you as well, to UNESCO, to CETIC and to everyone who's been part of this process.

Tawfik Jelassi: Thank you very much, Henriette. Let me first comment on the last couple of remarks, starting with Jennifer’s point regarding the collaboration here. We have very much enjoyed having the US back at UNESCO since July 2023, and we look forward to continued collaboration with the US in spite of the change of administration at the White House. You mentioned the field of communication and information, which has many themes: freedom of expression, protection of journalists, media development, multilingualism online, caring for minorities, including indigenous peoples, and so on and so forth. So we have many common topics of great interest, and we would like to continue the work that we have reinitiated since the US came back to UNESCO. I also want to comment on what you said, Alexandre. You said that maybe at the next IGF, in 2025, you can showcase some early results from the implementation of the revised Internet Universality Indicators. This is a great goal and we should definitely do that, because the point is not just to prove the concept, it’s to prove the value of our work, of the ROAMX framework. Of course, this creates momentum, and as we know the next IGF is not one year away from today, it’s just six months away. So we need to be ready by May 2025, so that in June 2025 in Norway, at the IGF, we can share with interested parties the first set of results and show to what extent the revised framework was impactful. Let me also thank everybody on the panel, both online and physically in the room, and also our participants. I was told we have a few minutes left, so I’ll be brief. I would like to thank the panelists online and in the room, and I would like to thank the audience as well for coming to this session; this is very important for us. The discussion and the multiple perspectives we heard, from the US, from Brazil, from Tuvalu, from the Digital Cooperation Organization and from the field, clearly show the relevance of the ROAMX framework and the indicators, and they also emphasize the challenges and the specificities of different contexts around the world. I think this reminds us that we need a truly inclusive, collective effort if we want to build the digital future that we aspire towards: one that is open, safe and secure, but also multi-stakeholder and anchored in human rights. A number of speakers, including Jennifer, insisted on the multi-stakeholder dimension, in addition, of course, to the multilateral role of international organizations, but I think we all agree that the work has to be not only anchored in human rights but also remain respectful of them. And let me say, to conclude, that the revised framework we presented today, and the booklet, which is here in physical form but is also available online and soon in multiple languages, show that what we did was not just revise a framework; it is more than that. It was a bold step towards ensuring that the internet remains a force for equity, for sustainability and for human development. I am sure that our work, and the work of those who will be implementing the revised framework, will foster our partnership going forward. A number of you mentioned the digital divide, which is still unacceptably high today. This work of national digital assessments using our indicators is a step towards putting in place the right national digital strategies, among other measures, to reduce the digital divide.
And as we know, there are multiple divides: a digital divide, an informational divide, a knowledge divide, and a gender divide as well. So it is only through collective effort and partnership that we can tackle these divides, and the common goal for all of us is that we should not leave anyone behind, or anyone out of the new digital era, the new digital age in which we live today. And thank you, Henriette, for your excellent moderation of this panel.

Camila Gonzalez: And thanks to everyone. I see that the remote participants are also complaining about the audio. I would like one last word to thank the UNESCO team. I see Cedric Warhol is here, I see Tatavic as well, and colleagues who are in Paris, who maybe are not with us in the room here. I also want to say a big thank you to all of you for making this ready for IGF24. Thanks, Tawfik. A massive amount of work went into making it ready for this launch. So thanks to everyone. Apologies for the difficulties with the audio. Thanks to our tech team, I know you did your best. And thanks very much, everyone, for joining us.

Tawfik Jelassi

Speech speed

116 words per minute

Speech length

1736 words

Speech time

897 seconds

IUIs revised to be more relevant, adaptive and future-ready

Explanation

The Internet Universality Indicators (IUIs) were revised to make them more relevant to current digital challenges. The revision aimed to make the framework adaptive and prepared for future developments in the digital landscape.

Evidence

The revised framework integrates key insights and lessons learned from 40 implementations around the world.

Major Discussion Point

Revision and importance of UNESCO’s Internet Universality Indicators (IUIs)

Agreed with

Alexandre Barbosa

DAVID SOUTER

Agreed on

Importance of revised Internet Universality Indicators (IUIs)

Revised IUIs are more streamlined and accessible

Explanation

The revised IUI framework has been simplified to make it more user-friendly and accessible. This streamlining aims to accelerate stakeholder adoption and implementation of the ROAM-X principles.

Evidence

The revised framework has 63% fewer questions to answer in the survey and 56% fewer indicators to use.

Major Discussion Point

Revision and importance of UNESCO’s Internet Universality Indicators (IUIs)

Alexandre Barbosa

Speech speed

120 words per minute

Speech length

1627 words

Speech time

811 seconds

IUIs empower countries to adopt evidence-based policymaking

Explanation

The Internet Universality Indicators provide countries with actionable data and diagnostic tools. This enables governments to identify gaps and strengths in their internet ecosystem, leading to evidence-based policy decisions.

Evidence

The indicators can highlight disparities in internet access among marginalized groups or regions, prompting targeted interventions to bridge the digital divide.

Major Discussion Point

Revision and importance of UNESCO’s Internet Universality Indicators (IUIs)

Agreed with

Tawfik Jelassi

DAVID SOUTER

Agreed on

Importance of revised Internet Universality Indicators (IUIs)

IUIs facilitate tailored policy recommendations aligned with SDGs

Explanation

The indicators help align national priorities with international frameworks like the Sustainable Development Goals. This alignment ensures that policy recommendations are tailored to specific country contexts while adhering to global standards.

Evidence

The revised IUI integrates lessons from global applications, ensuring their relevance across diverse contexts.

Major Discussion Point

Revision and importance of UNESCO’s Internet Universality Indicators (IUIs)

Multi-stakeholder collaboration is key for sustainable internet governance

Explanation

The IUI framework strengthens multi-stakeholder collaboration as a cornerstone of sustainable Internet governance. It brings together representatives from civil society, academia, private sector, and government to foster consensus-driven strategies.

Evidence

This multi-stakeholder approach ensures that policies are not only inclusive but also rooted in practical and cross-sectoral expertise.

Major Discussion Point

Multi-stakeholder approach in implementing IUIs

Agreed with

Jennifer Bachus

Tenanoia Veronica Simona

VINTON CERF

Agreed on

Multi-stakeholder approach in implementing IUIs

DAVID SOUTER

Speech speed

136 words per minute

Speech length

2454 words

Speech time

1076 seconds

Revised IUIs address new challenges like environmental risks and AI

Explanation

The revised IUI framework incorporates emerging challenges such as environmental risks associated with digital development and the governance of artificial intelligence. These areas have gained importance since the original framework was developed.

Evidence

The revised framework gives more substance to environmental problems associated with digital development, such as energy consumption, climate change, and waste. It also addresses the challenges of AI governance arising from uncertainty and risk.

Major Discussion Point

Revision and importance of UNESCO’s Internet Universality Indicators (IUIs)

Agreed with

Tawfik Jelassi

Alexandre Barbosa

Agreed on

Importance of revised Internet Universality Indicators (IUIs)

IUIs should remain adaptable to future technological and policy challenges

Explanation

The IUI framework needs to be flexible and adaptable to respond to future developments in the digital sector. This adaptability is crucial because technological changes are often difficult to anticipate and can be dramatic.

Evidence

The speaker suggests that the framework should be reviewed every few years, possibly after the SDG review in 2030, to ensure it remains relevant.

Major Discussion Point

Future of IUIs and global digital governance

Jennifer Bachus

Speech speed

178 words per minute

Speech length

782 words

Speech time

263 seconds

Need for meaningful multi-stakeholder participation, not just tokenistic

Explanation

Jennifer emphasizes the importance of genuine multi-stakeholder participation in implementing the IUIs. She argues that consultation should involve multiple companies, civil society organizations, academia, and tech communities to ensure meaningful input.

Evidence

She points out that consulting with just one civil society organization or one company is not sufficient for true multi-stakeholder participation.

Major Discussion Point

Multi-stakeholder approach in implementing IUIs

Agreed with

Alexandre Barbosa

Tenanoia Veronica Simona

VINTON CERF

Agreed on

Multi-stakeholder approach in implementing IUIs

Need to strengthen both multilateral and multi-stakeholder efforts in digital governance

Explanation

Jennifer highlights the importance of balancing multilateral and multi-stakeholder approaches in digital governance. She suggests that while multilateralism dominates most work, there’s a need to ensure that multi-stakeholder participation is meaningful and inclusive.

Evidence

She mentions the Global Digital Compact as an example where multi-stakeholderism was injected into a primarily multilateral process.

Major Discussion Point

Future of IUIs and global digital governance

Tenanoia Veronica Simona

Speech speed

118 words per minute

Speech length

770 words

Speech time

388 seconds

Small island nations face unique infrastructure and capacity building challenges

Explanation

Small island nations like Tuvalu face specific challenges in internet governance due to their geographic isolation and limited infrastructure. These challenges include high costs of undersea cables and limited capacity for fiber deployment.

Evidence

The speaker mentions that Tuvalu just recently landed its first submarine cable, previously relying 100% on satellite connectivity.

Major Discussion Point

Challenges in implementing IUIs, especially for developing countries

Affordability of internet access is a major concern in small island nations

Explanation

While solutions like low-earth-orbit satellites can help connect remote areas, the affordability of these solutions for local populations remains a significant challenge. This affects the ability of people in small island nations to access and benefit from internet connectivity.

Evidence

The speaker mentions that people in her country still face affordability issues and challenges in accessing satellite-based internet solutions.

Major Discussion Point

Challenges in implementing IUIs, especially for developing countries

Multi-stakeholder advisory boards bring diverse voices and expertise

Explanation

The multi-stakeholder advisory boards play a crucial role in the IUI implementation process. They bring together government, civil society, private sector, and community representatives, ensuring diverse voices are heard and enhancing the credibility of the assessment.

Evidence

The speaker notes that these boards contribute technical expertise and insights into best practices, enhancing the quality of assessment and recommendations.

Major Discussion Point

Multi-stakeholder approach in implementing IUIs

Agreed with

Alexandre Barbosa

Jennifer Bachus

VINTON CERF

Agreed on

Multi-stakeholder approach in implementing IUIs

VINTON CERF

Speech speed

132 words per minute

Speech length

1506 words

Speech time

682 seconds

Private sector plays vital role in internet implementation and should partner with government

Explanation

Vint Cerf emphasizes the crucial role of the private sector in implementing the internet. He argues that there should be a natural partnership between government and the private sector in internet development and policy-making.

Evidence

He points out that the bulk of internet implementation is done in the private sector, including investments in international subsea cable networks, land-based cable networks, and data centers.

Major Discussion Point

Multi-stakeholder approach in implementing IUIs

Agreed with

Alexandre Barbosa

Jennifer Bachus

Tenanoia Veronica Simona

Agreed on

Multi-stakeholder approach in implementing IUIs

IUIs can help address emerging challenges like AI governance

Explanation

Vint Cerf suggests that the IUI framework needs to consider how to assess artificial intelligence applications. He points out that as AI becomes increasingly central to applications, there needs to be further discussion on how to measure its utility and safety.

Major Discussion Point

Future of IUIs and global digital governance

AZIZ HILALI

Speech speed

93 words per minute

Speech length

241 words

Speech time

154 seconds

Digital divide remains a significant issue, especially in Africa and Arab regions

Explanation

Aziz Hilali highlights that despite digital transformation happening in the Arab region, there are still significant digital gaps. Internet penetration in the region remains below the global average, indicating a persistent digital divide.

Evidence

He states that more than half of the population in the Arab region (less than 500 million people in 23 countries) are connected to the Internet, but the penetration rate is below the global average of 65%.

Major Discussion Point

Challenges in implementing IUIs, especially for developing countries

Agreements

Agreement Points

Importance of revised Internet Universality Indicators (IUIs)

Tawfik Jelassi

Alexandre Barbosa

DAVID SOUTER

IUIs revised to be more relevant, adaptive and future-ready

IUIs empower countries to adopt evidence-based policymaking

Revised IUIs address new challenges like environmental risks and AI

The speakers agree on the significance of the revised IUIs in addressing current digital challenges, promoting evidence-based policymaking, and incorporating new issues like environmental risks and AI governance.

Multi-stakeholder approach in implementing IUIs

Alexandre Barbosa

Jennifer Bachus

Tenanoia Veronica Simona

VINTON CERF

Multi-stakeholder collaboration is key for sustainable internet governance

Need for meaningful multi-stakeholder participation, not just tokenistic

Multi-stakeholder advisory boards bring diverse voices and expertise

Private sector plays vital role in internet implementation and should partner with government

The speakers emphasize the importance of genuine multi-stakeholder collaboration in implementing the IUIs, involving diverse voices from government, civil society, private sector, and academia.

Similar Viewpoints

Both speakers highlight the improved accessibility and applicability of the revised IUIs, emphasizing their alignment with global frameworks like the SDGs and their potential to inform tailored policy recommendations.

Tawfik Jelassi

Alexandre Barbosa

Revised IUIs are more streamlined and accessible

IUIs facilitate tailored policy recommendations aligned with SDGs

Both speakers stress the need for the IUIs to remain flexible and adaptable to address future technological developments, particularly in emerging areas like AI governance.

DAVID SOUTER

VINTON CERF

IUIs should remain adaptable to future technological and policy challenges

IUIs can help address emerging challenges like AI governance

Unexpected Consensus

Challenges faced by small island nations in implementing IUIs

Tenanoia Veronica Simona

AZIZ HILALI

Small island nations face unique infrastructure and capacity building challenges

Digital divide remains a significant issue, especially in Africa and Arab regions

While representing different regions, both speakers highlight similar challenges in implementing IUIs, particularly related to infrastructure constraints and the persistent digital divide. This unexpected consensus underscores the global nature of these challenges.

Overall Assessment

Summary

The main areas of agreement include the importance of the revised IUIs, the need for meaningful multi-stakeholder collaboration, and the recognition of persistent challenges in implementing digital policies, particularly in developing regions.

Consensus level

There is a high level of consensus among the speakers on the value and potential of the revised IUIs. This consensus suggests broad support for the framework and its implementation across diverse stakeholders and regions. However, there is also agreement on the need to address ongoing challenges, particularly in developing countries and small island nations, indicating that while the IUIs are seen as valuable, their successful implementation may require additional support and resources in certain contexts.

Differences

Different Viewpoints

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement were limited, with most speakers generally aligned on the importance of the IUIs and multi-stakeholder approaches. The primary difference emerged around the role and effectiveness of the private sector in internet implementation.

Consensus level

The level of disagreement among the speakers was relatively low. Most participants shared similar views on the importance of the Internet Universality Indicators (IUIs) and the need for multi-stakeholder approaches in internet governance. The few differences that emerged were more about emphasis and specific implementation strategies rather than fundamental disagreements. This general alignment suggests a strong consensus on the value of the IUIs and collaborative approaches to digital governance, which could facilitate smoother implementation and adoption of these frameworks globally.

Partial Agreements

Both speakers agree on the importance of multi-stakeholder participation in internet governance. However, Jennifer Bachus emphasizes the need for meaningful participation from multiple stakeholders, while Alexandre Barbosa focuses more on the collaborative aspect for sustainable governance without explicitly addressing the depth of participation.

Jennifer Bachus

Alexandre Barbosa

Need for meaningful multi-stakeholder participation, not just tokenistic

Multi-stakeholder collaboration is key for sustainable internet governance

Takeaways

Key Takeaways

UNESCO’s revised Internet Universality Indicators (IUIs) are more streamlined, accessible, and address new challenges like environmental risks and AI

IUIs empower countries to adopt evidence-based policymaking and facilitate tailored policy recommendations aligned with SDGs

A meaningful multi-stakeholder approach is crucial for effective implementation of IUIs and sustainable internet governance

Developing countries, especially small island nations, face unique challenges in implementing IUIs, including infrastructure limitations and capacity building needs

The digital divide remains a significant issue globally, particularly in Africa and Arab regions

IUIs can play a key role in shaping global digital governance and fostering international cooperation on digital issues

Resolutions and Action Items

UNESCO to showcase early results from the implementation of revised IUIs at the next IGF in Norway (May/June 2025)

Countries encouraged to conduct national digital assessments using the revised IUIs

Stakeholders to work on improving data availability and quality for effective implementation of IUIs

Unresolved Issues

How to effectively measure and ensure accountability in the digital ecosystem, especially with emerging technologies like AI

Addressing the affordability of internet access, particularly in small island nations and developing countries

Balancing multilateral and multi-stakeholder approaches in global digital governance

How to effectively include diverse voices and perspectives in the multi-stakeholder process

Suggested Compromises

Using a combination of quantitative data and qualitative assessments from experts to overcome data limitations in some countries

Adapting the IUI framework to specific country contexts while maintaining core principles

Balancing the need for comprehensive assessments with making the IUI process more streamlined and accessible

Thought Provoking Comments

UNESCO has been around for 80 years, has never done any comparative studies nor rankings of member states. So we are not in the business of rankings. The indicators are meant to be a guidance to our member states to conduct a national digital assessment, but not to compare countries, and certainly not to rank them.

speaker

Tawfik Jelassi

reason

This comment clarifies a key aspect of the Internet Universality Indicators (IUIs) framework, emphasizing its purpose as a tool for self-assessment rather than comparison or ranking. This is crucial for understanding the intent and proper use of the framework.

impact

It addressed potential concerns about the IUIs being used to create unfavorable comparisons between countries, potentially encouraging more countries to adopt and use the framework without fear of negative repercussions.

I think we really need to, when you talk about the quality of data, the reality is you need to be consulting with multiple companies, multiple civil society, multiple academia, and multiple tech communities.

speaker

Jennifer Bachus

reason

This comment highlights the importance of diverse and comprehensive stakeholder engagement in the data collection and assessment process, emphasizing the need for a truly multi-stakeholder approach.

impact

It sparked a discussion about the definition and implementation of multi-stakeholderism, leading to a deeper examination of how to ensure meaningful participation from various sectors.

How do you assess accountability if those who are running systems themselves aren’t really capable of understanding why particular decisions are being made? So I think with AI we reach another level of difficulty in assessing accountability which is a challenge for the next revision of the IUIs but it’s also actually a much bigger challenge I think for society as a whole.

speaker

David Souter

reason

This comment introduces the complex challenge of accountability in AI systems, highlighting a significant gap in current assessment frameworks and pointing to future challenges.

impact

It broadened the discussion to include considerations of emerging technologies and their implications for internet governance and assessment frameworks, prompting thoughts on how the IUIs might need to evolve in the future.

Small island nations like Tuvalu face very, very unique challenges in internet governance, which I would like to highlight a few. One of which is the big challenge is infrastructure constraints, in a sense that high cost of undersea cables, and I’m happy to say that my nation Tuvalu just landed the first submarine cable just a couple of days ago.

speaker

Tenanoia Veronica Simona

reason

This comment brings attention to the specific challenges faced by small island nations, providing a concrete example of how different contexts require different approaches to internet development and governance.

impact

It highlighted the importance of considering diverse national contexts when applying the IUIs, leading to a discussion on how the framework can be adapted to different situations and needs.

Overall Assessment

These key comments shaped the discussion by emphasizing the non-comparative nature of the IUIs, the importance of genuine multi-stakeholder engagement, the need to consider emerging technologies like AI, and the necessity of adapting the framework to diverse national contexts. They collectively deepened the conversation about the purpose, implementation, and future evolution of the IUIs, while also highlighting the complex challenges in internet governance across different global contexts.

Follow-up Questions

How can the accountability of digital technologies and services be measured?

speaker

Vinton Cerf

explanation

Accountability is crucial for ensuring responsible development and use of digital technologies, but measuring it presents challenges.

How can the revised Internet Universality Indicators framework address the challenges posed by artificial intelligence?

speaker

Vinton Cerf and David Souter

explanation

AI presents new governance challenges and uncertainties that need to be assessed within national internet environments.

How can the affordability of low-earth-orbit satellite internet solutions for remote areas be improved?

speaker

Tenanoia Veronica Simona

explanation

While LEO satellites offer connectivity solutions for remote areas, affordability remains a challenge for many users in small island nations.

How can the revised Internet Universality Indicators be integrated into national and regional Internet governance frameworks?

speaker

Jose Fissa

explanation

Understanding specific strategies for different stakeholders to integrate these indicators is crucial for their effective implementation.

How can the indicators help address issues related to the use and transparency of Universal Access Funds?

speaker

Avice

explanation

Access to information about Universal Access Funds is important for comprehensive assessments, but some countries are reluctant to share this data.

How can the multi-stakeholder approach be implemented more effectively to ensure diverse and meaningful participation?

speaker

Jennifer Bachus

explanation

Ensuring genuine multi-stakeholder participation, beyond tokenistic involvement, is crucial for developing effective policies and assessments.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns

Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns

Session at a Glance

Summary

This panel discussion focused on the issue of internet shutdowns and the importance of a multi-stakeholder approach in addressing this growing problem. Participants from government, civil society, academia, and the private sector shared insights on the trends, impacts, and potential solutions to internet shutdowns.

The discussion highlighted alarming trends, with Access Now reporting a 41% increase in shutdowns in 2022 and over 270 shutdowns in 40+ countries already documented in 2024. Panelists emphasized the wide-ranging negative impacts of shutdowns, from economic losses to hindering democratic processes and humanitarian aid efforts.

The importance of data-driven approaches was stressed, with academics calling for interdisciplinary research to better understand the motivations behind shutdowns and their societal effects. The private sector perspective highlighted the need for transparency in reporting disruptions and the value of human rights due diligence in technology development.

Participants discussed the role of advocacy in preventing shutdowns, citing examples of successful interventions in countries like Mauritius and the Democratic Republic of Congo. The Global Digital Compact was identified as a key opportunity for advancing multi-stakeholder efforts to combat shutdowns.

The discussion underscored the complexity of the issue, acknowledging that some governments may resort to shutdowns out of a perceived lack of alternatives. Panelists agreed that addressing root causes of societal issues and promoting good state practices are crucial steps in reducing the prevalence of internet shutdowns.

Overall, the panel reinforced the critical need for continued collaboration among diverse stakeholders to develop effective strategies for keeping the internet open and accessible worldwide.

Keypoints

Major discussion points:

– The increasing prevalence and concerning trends of internet shutdowns globally

– The importance of a multi-stakeholder approach to addressing internet shutdowns

– The role of data, research, and transparency in understanding and combating shutdowns

– The impacts of shutdowns on human rights, economic development, and society

– Potential solutions and advocacy efforts to prevent or mitigate internet shutdowns

The overall purpose of the discussion was to examine the issue of internet shutdowns from multiple perspectives (government, civil society, private sector, academia) and explore collaborative approaches to address this growing problem.

The tone of the discussion was largely serious and concerned about the increasing use of internet shutdowns, but also constructive and solution-oriented. Speakers emphasized the need for cooperation and shared examples of positive developments. The tone became more urgent when discussing recent trends, but remained hopeful about the potential for multi-stakeholder efforts to make progress on this issue.

Speakers

– Kanbar Hussein Bor: Deputy Director Democratic Governance & Media Freedom, UK Foreign, Commonwealth & Development Office

– Felicia Anthonio: Keeping It On campaign manager at Access Now

– Joss Wright: Researcher at Oxford Internet Institute, Oxford University

– Alexandria Walden: Global Head of Human Rights, Google

– Scott Campbell:  Senior Human Rights and Technology Officer of the United Nations Office of the High Commissioner for Human Rights

Additional speakers:

– Nikki Muscati: Audience member who asked questions (role/affiliation not specified)

Full session report

Internet Shutdowns: A Multi-Stakeholder Approach to Keeping the Internet On

This panel discussion, moderated by Kanbar Hussein Bor from the UK Foreign Commonwealth and Development Office, brought together experts from civil society, academia, the private sector, and the United Nations to address the growing issue of internet shutdowns. The conversation highlighted alarming trends, explored the wide-ranging impacts of shutdowns, and emphasised the critical importance of multi-stakeholder collaboration in developing effective solutions.

Trends and Impacts

The discussion opened with a sobering assessment of the current state of internet shutdowns globally. Felicia Anthonio, representing Access Now, reported that their ongoing work on the 2024 annual database has already documented approximately 270 shutdowns in over 40 countries. Notably, seven shutdowns have been recorded in countries that had never previously imposed such measures, including Comoros, El Salvador, Guinea-Bissau, France (disrupting TikTok in New Caledonia), Malaysia, Mauritius, and Thailand. Anthonio also highlighted the worrying trend of cross-border shutdowns as a new development.

Panellists unanimously agreed on the severe negative impacts of internet shutdowns. Kanbar Hussein Bor highlighted the significant economic costs, while Alexandria Walden of Google emphasised how shutdowns affect basic services and democratic processes. The discussion made clear that beyond measurable GDP losses, shutdowns have profound effects on people’s daily lives, hindering access to education, healthcare, and vital information.

Joss Wright from the Oxford Internet Institute stressed the need for data-driven approaches to fully understand and quantify these impacts. This call for rigorous research was echoed by other panellists, who agreed that a more comprehensive understanding of shutdown effects could strengthen advocacy efforts and inform policy decisions.

Multi-Stakeholder Approaches

A central theme of the discussion was the critical importance of collaboration between diverse stakeholders in addressing internet shutdowns. Joss Wright articulated a vision of multi-stakeholderism that goes beyond mere representation to focus on leveraging diverse perspectives and capabilities in problem-solving. This sentiment was echoed by Scott Campbell from the UN Human Rights Office, who highlighted the reaffirmation of the multi-stakeholder model in the Global Digital Compact.

Kanbar Hussein Bor mentioned the Oxford Statement as an important multi-stakeholder effort in addressing internet shutdowns. The panel explored various roles different sectors can play:

1. Civil Society: Felicia Anthonio shared examples of successful advocacy efforts, such as interventions in Mauritius, where an attempt to shut down social media before elections was prevented, and the Democratic Republic of Congo. She also highlighted the importance of ECOWAS court judgments against internet shutdowns.

2. Academia: Joss Wright emphasised the need for interdisciplinary research to understand shutdown motivations and impacts, as well as to develop technical solutions. He stressed the importance of empathising with authorities’ perspectives to proactively prevent shutdowns, rather than simply opposing them outright.

3. Private Sector: Alexandria Walden discussed Google’s efforts in transparency reporting, including their Transparency Report and the Jigsaw team’s work on VPNs and the Outline product. She also highlighted the importance of human rights due diligence in technology development.

4. Government: Kanbar Hussein Bor outlined the UK government’s role in championing multi-stakeholder efforts and promoting good state practices, citing the UK’s decision not to shut down the internet during recent riots as an example.

5. International Organisations: Scott Campbell discussed leveraging the Global Digital Compact as a framework for advocacy against shutdowns and emphasized the need to address root societal causes of protests rather than relying on technological solutions like shutdowns.

Policy and Advocacy

The discussion revealed a nuanced approach to policy and advocacy. While all speakers opposed internet shutdowns, there was recognition of the need to understand government motivations. The panel agreed on the importance of showcasing examples of good state practices and developing alternatives to shutdowns that address legitimate government concerns.

Technical and Business Perspectives

Alexandria Walden provided valuable insights into private sector considerations, noting that shutdowns are “bad for business because they’re bad for everyone who uses our products”. She discussed Google’s efforts in measuring and tracking shutdowns for transparency, as well as developing circumvention tools and alternative connectivity solutions.

Joss Wright emphasised the need for interdisciplinary approaches that combine technical expertise with policy understanding. This sentiment was echoed in discussions about developing more nuanced technical solutions that could allow for some government control without resorting to full shutdowns.

Unresolved Issues and Future Directions

While the panel demonstrated a high level of consensus on the importance of addressing internet shutdowns, several unresolved issues emerged:

1. How to effectively prevent shutdowns in cases of protests or conflicts that are difficult to predict

2. Addressing the root societal causes that lead governments to implement shutdowns

3. Specific ways to institutionalise multi-stakeholder approaches at national levels

The discussion concluded with a call for continued collaboration and research. Key action items included leveraging the Global Digital Compact for advocacy, continuing private sector transparency efforts, and conducting more research to understand shutdown motivations and impacts.

In summary, this panel discussion provided a comprehensive overview of the complex issue of internet shutdowns, emphasising the critical need for continued multi-stakeholder collaboration to develop effective strategies for keeping the internet open and accessible worldwide. The conversation highlighted both the urgency of the problem and the potential for positive change through coordinated efforts across sectors.

Session Transcript

Kanbar Hussein Bor: Assalamu alaikum to everyone. Good afternoon. It’s a real pleasure to be hosting you for this event on multi-stakeholderism and internet shutdowns. I will be chairing this panel today. My name is Kanbar Hussein Bor. I am head of the Democratic Governance and Media Freedom department in the UK Foreign, Commonwealth and Development Office. I will quickly introduce our panel members, say a few words, and then hand over to my colleagues to also say a few words. And then we hope that we’re going to have an interactive session, so please come forward with some questions; we hope there will be a good half an hour or so available for everyone to come in. So I’ll just go ahead and introduce those panel members. I’m pleased to say that we’ve got Felicia Anthonio from Access Now, the Keep It On campaign manager. Felicia, do you want to say hello to everyone? Hi everyone. Glad to join you today. Thank you. I’m also pleased that we’ve got Joss Wright from the Oxford Internet Institute at Oxford University in the UK. Joss, would you like to say hello? Hello, pleased to be here. Brilliant. And in the room together, we’ve got Alexandria Walden from Google. Alex, over to you. Got it. Okay, perfect. Hi. Thank you for inviting us to be part of the session today. I’m Alex Walden, I lead human rights at Google. Brilliant. And last but not least, we’ve got Scott Campbell from the UN Human Rights Office. Scott, over to you. Hi, Scott Campbell, leading the work on tech and human rights at the UN Human Rights Office, based in Geneva. Brilliant. Well, just to start the session off, I’ll make a few remarks. I hope you take away three main messages from our session. The first is the importance of the multi-stakeholder approach. The second is the importance of trying to stop internet shutdowns. And finally, a word on some of the ways in which we can try to prevent shutdowns and the impact that they may have. Firstly, internet shutdowns. I’m conscious I’m in a room where a lot of you are much more technical than I am. From my perspective and the UK’s perspective, we take a broad approach to what internet shutdowns mean. You’ve got a spectrum of activity: at one end, what might be classified as a technical shutdown, whereby you have no access at all to the internet. But there are a number of other measures as well, where, for example, you can have efforts aimed at throttling the internet, whereby you have almost some sort of access, but for all intents and purposes that access deprives you of the ability to be online. From our perspective, that type of activity is a significant impediment to a free, open and interoperable internet, but it also has a significant real-world impact on the lives of people across the world. All of you can imagine, and I’m sure all of you know, what type of impact this could have. It could affect farmers in the developing world who need access to climate data to ensure that they’ve got the best information available to maximize their yields. It can affect a citizen who wants to partake in the democratic process and is unable to express their views online. It might affect a business that needs to access its financial services online quickly and can’t do that. Or it might affect an individual who needs to charge up their electric car and can’t do that.
So all of those types of impacts can be real-world impediments to people’s ability to carry out their lives. Unfortunately, internet shutdowns are increasing in prevalence; we are seeing more and more countries resorting to them. Access Now have reported a 41% rise in internet shutdowns from 2022. From the UK’s perspective, we have been championing policy change whereby states no longer shut the internet down. We argue that this has a significant impact not only in democratic contexts; most recently, in the case of Bangladesh, it has been reported that the shutdown there during the summer resulted in almost $300 million of lost GDP. We used our G7 presidency in 2021 to argue that this is an important issue and that states should refrain from shutting down the internet. We’re proud to be chairing the Task Force on Internet Shutdowns of the Freedom Online Coalition, and we’ve been using that for the last two years to come up with a number of measures to highlight the importance of this issue. In particular, we have come up with a statement through the FOC on the importance of keeping the internet on in the context of elections. We also, with UNESCO, on last year’s International Day for Universal Access to Information, came up with the Oxford Statement, which underlined the importance of digital connectivity to issues around both development and democracy. And last but not least, we are using our platform here at the IGF, as the Task Force on Internet Shutdowns, to highlight this issue. My final comment on all this is that clearly this is a significant issue, but most importantly, all the measures I’ve just described couldn’t have happened without the multi-stakeholder approach. Through the FOC, we’re proud that we’ve got colleagues represented from academia, the private sector, government and civil society coming together. Through the drafting of the election statement on shutdowns, we brought together the multi-stakeholder community, and I know from first-hand experience that the diversity of views made that final product much more effective. And when we worked on the Oxford Statement, we had representatives from over 60 to 80 different organizations, all part of that multi-stakeholder approach, who were able to highlight the importance of this. So a few framing comments from me, but now let me pass on to our panelists to also express their views on this. I might just start off with Felicia. Felicia, can you talk a little bit more about the key trends you’re seeing in 2024 around shutdowns and some of the challenges you foresee insofar as trying to take a collaborative approach? Over to you, Felicia.

Felicia Anthonio: Thank you so much, Kanbar. Yes, definitely I can speak to what we are seeing. I come bearing not so much good news. Just before I jump into that, for those who are not familiar with the Keep It On campaign and coalition: this is a global coalition that has been dedicated to fighting internet shutdowns around the world since 2016, and it currently has over 330 civil society organizations as members. We track internet shutdowns, we advocate against them, we raise awareness, and we work with diverse stakeholders, including governments, regional and international bodies like the UN, the EU, the African Union and the Freedom Online Coalition, as Kanbar has mentioned, as well as industry players, journalists and researchers, among others, to push back against internet shutdowns. We track shutdowns looking at the triggers, triggers simply being incidents that are likely to get a government to impose a shutdown. And our focus is on deliberate disruptions to the internet: complete shutdowns, throttling, as well as the targeting of social media or digital platforms. What we’ve seen over the years since we started documenting shutdowns is that protests, exams, elections and conflicts are major triggers of internet shutdowns. In 2023, unfortunately, conflict was the main trigger of shutdowns, with 74 shutdowns recorded in nine countries in times of conflict. Protest was the second highest trigger, with 63 shutdowns in 15 countries. We’ve also seen governments disrupting the internet during school exams, and elections are another context in which governments are likely to disrupt internet access. For 2024, we are currently working on our annual database, and we’ve already seen over 270 shutdowns in more than 40 countries globally. The number of countries is likely to grow, and if that is the case, we’re going to see the highest number of countries where we’ve documented shutdowns in a single year, and that is not good news. The number of shutdowns is also likely to be really high, but these figures will be finalized early next year when we release our annual report. This of course underscores a worrying trend: shutdowns are spreading, increasingly becoming a go-to tool for both repressive and democratic countries. In 2024 we’ve also already documented seven shutdowns in new countries, that is, countries that had never imposed internet shutdowns before. We’ve seen countries including Comoros, El Salvador, Guinea-Bissau, France (disrupting TikTok in New Caledonia), Malaysia and Mauritius, as well as Thailand, imposing internet shutdowns. In that context, we’ve also seen two members of the Freedom Online Coalition, Kenya and France, resorting to the use of internet disruptions. Another worrying trend we are seeing is the deliberate use of cross-border shutdowns, that is, countries imposing shutdowns across borders around the globe, and this is really concerning for us as civil society. Looking at these trends, one of the challenges they indicate is the fact that we’re seeing democracies also resorting to the use of shutdowns, and that really makes our advocacy work difficult. So I think it’s important for us to continue to hold each and every government that shuts down the internet accountable, so that we can push back confidently and effectively against internet shutdowns.
We’ve also seen that conflict-related shutdowns are really becoming a big problem, which has implications for, for instance, the delivery of humanitarian aid during conflicts, and we are also looking at what alternative sources of connectivity can be provided during conflicts to ensure that the internet remains open and secure for people, as well as for the humanitarian organizations operating on the ground. As Kanbar mentioned, having the international community support civil society advocacy is really crucial, and so, for instance, the statement that the FOC put out prior to the 2024 elections, in a year that was declared the year of elections, was really important for our work, and we continue to use it in our advocacy and engagement with governments and in our election watch initiative. In 2024, the African Commission on Human and Peoples’ Rights also adopted a resolution which recognizes the importance of internet connectivity to the realization of free, fair and credible elections, and this too was really timely for advocacy against internet shutdowns. Prior to some of these milestones, we also saw governments such as Nigeria, the Democratic Republic of the Congo and Sierra Leone making commitments to keep the internet on during elections in their respective countries. At the beginning of the year, in January, we also saw Bangladesh making similar commitments to keep the internet on. I mentioned Mauritius as one of the new countries that have disrupted internet access. As you may be aware, in November the authorities issued an order to shut down social media until after the elections, around 10 days or so before the vote. This was really shocking; Mauritius had never disrupted internet access, and we all recognize that it has been rated as a free country. So seeing a country like that imposing a shutdown, and even attempting to keep it in place for over two weeks, was really concerning. Following backlash from civil society and engagement with diverse stakeholders, we got the authorities to reverse the shutdown, so it was lifted after 24 hours, and people went to the polls on November 10 with open and secure internet access throughout the electoral process. So these are some of the cases and trends we’ve seen so far in 2024. I will pause here and hopefully we can have time for questions and discussion. Thank you.

Kanbar Hussein Bor: Thank you, Felicia. Some worrying trends there that you’ve highlighted. But I’m also conscious you concluded with some positive examples as well, where countries have committed to keep the Internet on and where the multi-stakeholder approach has helped in terms of advocacy and accountability. Shall we move on to Joss Wright? Joss, if I can bring you in here, could you talk a little bit more about how a much more data-driven approach to this particular challenge can help us navigate it and maybe come up with better policy approaches? Joss, over to you.

Joss Wright: Thank you. Yeah, I’m really pleased to follow on from what Felicia has just said. And what I want to do, in the spirit of talking about this in a multi-stakeholder sense, is to note that in a lot of Internet governance, multi-stakeholderism traditionally is more about representing the voices of the different people who are affected by the various effects and policies that are going on. But I think there’s an interesting shift in perspective on multi-stakeholderism when we are, as a group, trying to address a problem that is fairly universally recognised as a problem. And what I’d like to represent from the perspective of academia here is a form of multi-stakeholderism that isn’t so much about hearing all of our voices, but about drawing on the perspectives we have in our solutions, our approaches and our abilities to provide some input to resolving the problem. Working in this area as an academic, with the limitations of academia, I really see that every group working in this area, and we’ve got representation from civil society, academia, business and policy makers, has its particular strengths and its particular abilities to effect change in an interesting way. But each has its flip side, its set of weaknesses, and, you know, I’m not going to go into too much detail on all of them because I’ll just end up insulting all the amazing work that people are doing in the room. But there is this element of: civil society has the positive advantage of being on the ground and very solution-focused, working directly to reduce the impacts of things like internet shutdowns, but the flip side of that is a tendency to need results quite quickly, to be very solution-focused in itself, and to have a restriction in the amount of long-term stepping back that can go on. Business, in contrast, has a lot of power, a lot of voice, a lot of resources, a lot of ability to affect the policies of governments through interactions at the business level, but it also has to take into account its own market considerations, its own legal constraints, and things like that. The policy side, the government side, is obviously the strongest voice in being able to push policies forward, but is necessarily relatively slow, not quite as reactive as it could be. And then from my perspective as an academic, I see our abilities in this area as being more on the side of flexible methodological innovation, the ability to bring new approaches and maybe longer-term questions and understandings, but we tend to have less voice, less capacity to directly interact with policy, and frankly, as academics, maybe a little bit less urgency in directly achieving the solution to a problem, because we’re all focused on publishing academic papers, because that’s all we care about as academics, ultimately, when we should actually be trying to work effectively to help here. But I think that’s why the multi-stakeholder approach is so important: the things that academia can bring to the table, if appropriately incentivized, are a different perspective and some of the more cutting-edge techniques that wouldn’t necessarily be practical or applicable for civil society, for government, or even for business to pursue.
So, to speak directly to the academic side of things and my own particular role in this, I think one of the particular strengths we have is the interdisciplinarity that comes out of modern academia. It was mentioned at the beginning that this is a very tech-heavy field, and as somebody who trained as a computer scientist, it was quite a frustration to me to realize, several decades into my career, that I should have studied law, because that’s where I would have had much more effect in helping to stop things like this. But it’s a bit late now, so I’ll stick with the computer science. But that interaction between the legal and policy side and what we can bring on the data side, the method side, the data science side, is something we can contribute here. Traditionally, the limitation of the academic side of this work is that it has been very technologically focused. There’s been a lot of work on measuring the internet, measuring shutdowns, providing data, but then not being so interested in doing something with that data. So we’ve built a tool that will measure internet shutdowns in x, y, and z; now it’s somebody else’s job to go off and do the policy advocacy, the interaction with users. Or, on the technical side, we’re going to build a circumvention tool, and we’re going to show that we can get around the internet shutdown, or that we can still access the internet in this particular place, in a way that is largely meaningless to 95% of the population, who don’t have the advanced computer science degrees needed to use these technologies. And so the reality is that while there is a technological substrate, a technological basis for what we’re talking about here, it’s a socio-technical system. It’s an attempt to use an important society-wide technology to have control over an aspect of society. And so the research that I do, that my group does here at the Oxford Internet Institute, is largely focused on trying to bridge that gap between the strong technical measurements and the social and political understanding that drives them. Because we do need to understand the technology. We need to know how it works so that we don’t make silly mistakes. But we also need to answer the question: why are internet shutdowns happening? Why do states or authorities implement internet shutdowns? Because if we understand that, we can say, look, this is what you’re trying to do, and it’s not doing what you think it’s doing. It’s not achieving the goals you’ve set out for yourself. And then hopefully that’s a route into policy to try and prevent it from happening in the future. Or, if we’re being honest, we may need to say: this is having the effect that you think it’s having, but there are externalities, there are negative sides that are so significant that it’s not worth what you’re paying for it. And there are many forms of externality. GDP is widely mentioned; frankly, I think it’s a poor measure of the impact of a shutdown. I’m much more interested in people being unable to communicate with friends, people being unable to coordinate their activities, people being unable to access healthcare information, quality news information, and similar. I realise I’ve already hit the five-minute limit that I was given for this talk, so I won’t talk too much about the work that we’re doing here. But I’d like to say that some of the work we have been doing is about bridging these gaps.
I’d particularly like to mention the Open Observatory of Network Interference, the OONI project, who have been working strongly with us to provide data, and we reciprocate by giving them the analytical tools, the statistical, data science and machine learning tools that we can work with here, to try and understand how the data around shutdowns relates to the social and political factors on the ground. How do shutdowns shift in the lead up to an election? How likely are they leading up to an election? What happens after an election? Not just in terms of the internet, but in how people respond. Does a shutdown increase or decrease the amount of political violence, the amount of protest or things like that? And can we understand these dynamics and feed into the policy process to try and reduce the negative effects that happen? So just to conclude then, at the multi-stakeholder level, which I think is hugely important to tackling this problem, what we need to do is to continue this route of drawing from the strengths and the perspectives of each stakeholder, not in terms of what they want out of it, but what does civil society bring? It brings an ability to work with people on the ground, to advocate for people on the ground, and to interact with policy. Policymakers have the ability to drive policy directly, limited by the need for agreement, nationally and internationally. Academia provides the analytical tools, the perspectives, and the methods, and business provides resources and infrastructure. There’s a lot of crossover between these, but I’m so happy to see that this community in this field takes this multi-stakeholder approach very strongly, and so that coordination is something I hope will continue going forward.
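
To give a concrete flavour of the kind of analysis described above, a minimal sketch of relating shutdown measurements to election timing could look like the following. This is purely illustrative and is not code from the Oxford Internet Institute or OONI; the file names, column names and the 30-day window are assumptions made for the example.

```python
# Illustrative sketch only: relate OONI-style measurement anomalies to election
# dates, assuming two hypothetical CSV files:
#   anomalies.csv:  country, date, anomaly_count
#   elections.csv:  country, election_date
import pandas as pd

anomalies = pd.read_csv("anomalies.csv", parse_dates=["date"])
elections = pd.read_csv("elections.csv", parse_dates=["election_date"])

# Pair every measurement day with every election in the same country and
# compute the offset in days from polling day.
merged = anomalies.merge(elections, on="country")
merged["days_to_election"] = (merged["date"] - merged["election_date"]).dt.days

# Keep a +/- 30 day window around each election and compare the average number
# of anomalies observed before and after polling day.
window = merged[merged["days_to_election"].between(-30, 30)].copy()
window["period"] = window["days_to_election"].apply(
    lambda d: "pre-election" if d < 0 else "post-election"
)

summary = (
    window.groupby(["country", "period"])["anomaly_count"]
          .mean()
          .unstack()
)
print(summary)
```

In practice, the work described in the session would go well beyond a descriptive comparison like this, combining such measurements with statistical and machine-learning models and with data on protests or political violence.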

Kanbar Hussein Bor: And I’ll stop there. Thank you so much, Joss. Really, really helpful. I’m going to move on now to our next speaker, who is here in the room: Alex from Google. Alex, could you talk a little bit about the perspective of the private sector? How can the private sector help address this challenge, and how is the private sector affected by internet shutdowns? Thank you.

Alexandria Walden: Thanks for that question, and it’s nice to come after Felicia and Joss, because I think you hit on a lot of the things that are also important to us in the private sector, and that we think are important to be partnering with you on, to continue partnering with you on. These issues have long been a priority for Google. You know, we believe in a free and open internet. That has always been a core value of Google, and its products, and the way that we interact in the world, and so the increase in shutdowns that Felicia talked about is something that’s deeply troubling to us, and is part of why we believe in the multi-stakeholder model, and in engaging with governments, civil society, academia, and others in industry to make sure that there is data available to those of us who are studying it, as well as informing public policy statements and advocacy to ensure that governments who are using shutdowns understand what the repercussions are, and hopefully can think of more tailored ways to address the problems that they’re seeking to work on by doing shutdowns, disruptions, and throttling. In particular, you know, one thing we had emailed about before was the value of human rights due diligence when we think about shutdowns, and for us, it is something that we consider when we think about how our products are operating in the world. We do have to think about how we evaluate and plan when we know these things are going to happen, and how we design, potentially, around this kind of activity that will ultimately affect the people who are trying to use our tools, the devices that run on our operating systems, etc. The thing about shutdowns is that, from a human rights perspective, they are rarely necessary and rarely proportionate. They’re a blunt tool that impacts all of our users and all of our services, and so from a company’s perspective, it’s bad for business because it’s bad for everyone who uses our products. And I think Joss was hitting on this a little bit, but certainly it’s everyone who’s messaging and trying to communicate with their friends and family members. It’s people who are trying to use digital payments and trying to send money back and forth. It impacts businesses small and large around the world. It’s not just GDP. It’s every sort of interaction that we’re trying to digitize for people. When you have disruptions, that activity can’t happen, and so I do think it’s interesting to think about ways we can maybe illustrate that and measure how that impacts people in all of these small ways that really add up. For us, we have, again, like I said, for a long time been working on these issues, and one thing in particular that we have always had is a disruptions report on our transparency site. What that does is track activity across all of Google’s products around the world, and you can see when the activity gets low on any given product. Ultimately, when there is a shutdown, you know, Google doesn’t directly control any of the infrastructure, so when there’s a shutdown, we normally learn about it when people are not able to access our products. It’s not something that we know about ahead of time, so that’s the value of the transparency report.
It makes sure that when we are learning that these things are happening, everyone else is learning that at the same time, and so transparency is one place where I think we’ve spent a lot of energy making sure that everyone has access to information about when these things are happening, so that advocacy can happen. My colleagues in Jigsaw, who are sort of our think-do tank internally, have really invested also in transparency, and in partnering with other organizations to make sure that there’s more data sharing, creating more comprehensive visibility around the impacts of shutdowns and disruptions. That has included partnership with and support of Measurement Lab, as well as OONI and various others. And I’m sure Joss has also worked closely with them as well. So that is one place where we are continuing to invest in the measurement, tracking and information sharing around transparency and disruptions. Just to highlight a little bit more of the other ways we’re working with other stakeholders, we’ve long supported the Keep It On campaign and think that the advocacy role is something that needs to be supported by those of us in industry. And I think many of the companies that are part of the Global Network Initiative have long done that. And that’s also part of why we engage with TFIS as part of the Freedom Online Coalition. We think that engaging with like-minded governments who understand all of the problems with disruptions, and why that’s not the best way to solve whatever challenges are happening in any given country, is important; it’s important for the private sector to be at the table, talking about what we’re seeing and what we’re tracking, and to come to the table with one voice on that. And then the last thing I just wanted to flag is that, Joss, it was funny that you said, you know, law school would be useful, because I would say, as someone who’s a lawyer, that actually it’s really important for us to be partnered with technologists, because ultimately, when I go back to human rights due diligence and how we plan for addressing these issues, that really does require us to think about what tools are available to people. And so again, my colleagues in Jigsaw have focused on building things like Outline, which is a product focused on VPNs: how do we make sure that VPNs are more accessible to people? How do we ensure that people can maintain access when these things are happening? And those really are questions that require technologists to be at the table with policymakers. So maybe that does just reinforce the value of all the stakeholders at the table, both from a technical and an advocacy perspective.
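
The transparency reporting described above rests on noticing when product traffic from a country falls sharply. The toy sketch below shows that rough intuition, flagging days where traffic drops far below a trailing baseline; it is not Google's actual methodology, and the function name, window size and threshold are arbitrary values chosen for illustration.

```python
# Toy illustration of flagging likely disruptions: mark days where observed
# traffic falls below half of the trailing 14-day median. Not Google's actual
# method; parameters are arbitrary.
import pandas as pd

def flag_disruptions(traffic: pd.Series, window: int = 14, drop_ratio: float = 0.5) -> pd.Series:
    """Return a boolean Series marking days where traffic is below
    drop_ratio times the trailing window-day median."""
    baseline = traffic.rolling(window, min_periods=window).median().shift(1)
    return traffic < baseline * drop_ratio

# Hypothetical daily request counts for one product in one country, with a
# sudden two-day drop that a disruptions dashboard would surface.
traffic = pd.Series(
    [100, 98, 102, 97, 101, 99, 103, 100, 96, 104, 98, 101, 99, 102, 20, 18, 95],
    index=pd.date_range("2024-01-01", periods=17, freq="D"),
)

flags = flag_disruptions(traffic)
print(traffic[flags])  # the days flagged as likely disruptions
```

A production system would also need to account for weekly and seasonal patterns, holidays and organic traffic changes before attributing any drop to a shutdown.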

Kanbar Hussein Bor: Thank you so much, Alex, really helpful. We’re just gonna go to our last speaker before we open it up to all of you for questions. So do hang on to those questions, we’ll be coming to you. Scott, if I may: your organization recently issued a report on internet shutdowns with a number of recommendations. What’s your assessment of the progress in trying to facilitate positive change in this field, building on those recommendations?

Scott Campbell: Thanks. Thanks very much, Kanbar. And thanks to the UK and the Freedom Online Coalition and all of the different actors involved in organizing this, Google, Oxford, and Access Now. We’re really pleased to take part in a multi-stakeholder discussion. And at the risk of repeating a little of what you said in your opening remarks, Kanbar, our office has been a champion of multi-stakeholder approaches and will continue to be one going forward. The value of this kind of conversation is clear, and this will be a priority area for our office: ensuring that participatory decision-making processes, including the one that Alex just mentioned, are truly participatory, and that civic space is clearly protected so that all can take part freely in such discussions on the challenges of internet governance and also on today’s topic of shutdowns. On the progress on shutdowns, which I was asked to speak on in the frame of our report, I think the first thing I would say is that we very much need a multi-stakeholder approach to make progress; to quote Joss, or misquote Joss, we need a multi-stakeholder approach to provide input into solving the problem. And I think that’s a key takeaway, and I see in this conversation opportunities for the Freedom Online Coalition to be thinking about in 2025, for Google to be thinking about, for all of us to be thinking about alongside member states, on how to move the dial. I don’t want to repeat Felicia in terms of the progress, if we can frame it as progress made, backsliding perhaps, but I did want to salute Access Now for the exceptional work that they’ve done through the Keep It On campaign and for having really important data on what the trends actually are in the world. So I won’t repeat that. For today, I thought it would be more useful to look at opportunities for problem solving through the multi-stakeholder lens, hooking onto the Global Digital Compact, which I think is really today our most significant multi-stakeholder frame for problem solving. Multi-stakeholderism is clearly reaffirmed in the Global Digital Compact. And for our office, in terms of making progress on the recommendations in our report, we need to seize this opportunity along with all of you. I just want to touch on a couple of those hooks and maybe ask a few questions, or put out a few ideas. I think, as most of you that followed the Global Digital Compact are aware, it is firmly anchored in international human rights law. And I think it’s also fair to say the GDC doesn’t move us forward in terms of a normative framework, but it does move us forward in terms of having 193 member states reaffirm their commitment to human rights in the digital space and reaffirm their commitment to multi-stakeholderism. So I think we just need to seize on that re-commitment, or commitment from some, in those affirmations and seize that political momentum. And we’re very pleased that our office is one of the five UN entities that is called upon to implement, which is a huge challenge and leads me to the two areas I’ll focus on in terms of hooks. Most of you, if you’re attending this workshop on shutdowns, probably noticed the very clear language on internet shutdowns in the GDC. It’s quite simple, watered down in some ways from where it was, but I think very effective in calling for states to simply refrain from internet shutdowns. Where are the opportunities to push on that very clear commitment?
Companies, and Alex you touched on this, but there’s a clear call on companies to respect human rights and to apply human rights due diligence throughout the full lifecycle of technology. What are the opportunities to move forward also in a multi-stakeholder fashion on that? And I think there’s work, you know, you gave one example and I think Felicia touched on another interesting example, which is the Democratic Republic of Congo, where a couple of weeks ago at the annual forum on business and human rights, Vodafone was talking about the agreement that they came to with support from civil society and the government of the DRC in a pre-electoral context. Now what was the impact of that agreement? Where are the shortcomings, the gaps? How can it be used as an example to build on? Open question. And then the last area is objective one of the GDC on connectivity. And this, in our view, is likely to be a massive area of investment from international financial institutions, from the United Nations, and a huge opportunity to integrate human rights concerns into agreements around connectivity. And as different types of infrastructure and connectivity projects are being established, there’s a key moment for prevention, as we outlined in our report, but for language to be included that makes it very difficult for governments to shut down and easier for companies to push back with their legal tools against shutdowns. I’m going to stop there because I could go on, but I sense we’re at time and really look forward to the conversation. Thanks again for including us.

Kanbar Hussein Bor: Thank you, Scott. Some really helpful concluding remarks there. Now I’m going to open up for questions and answers. I’m going to first look to the room here, so if you do want to ask a question. Can you hear me? Quick sound check. Yep, okay. If you do want to ask a question, please make your way forward. There’s a mic there at the front and you’re very welcome. So who would like to open up? Great. Please come in. Oh, I think we may need to turn the mic on. Do you want to use my mic? I just wanted to ask about any good examples where policy advocacy actually made a difference in influencing or preventing internet shutdowns. That goes for anyone on the panel or in the room. And if so, what was the anatomy of that influence? I’ll pass that on. I think there’s another hand up. We’ll have two questions in the room, and then I’ll ask the panel to come in on that. Thank you. Are there any proactive steps to mitigate the harms that follow from an internet shutdown for countries? Thank you. Great. Well, I’ll take that last one, on proactive steps. I think meetings like this are an example of that. We’re trying to raise awareness around internet shutdowns. We’re trying to take a holistic approach in terms of the impact of shutdowns, be it from the issues around civil and political rights, be it from issues about the economy, but also development. So this is an example of that, but also our work through the advocacy of the Taskforce on Internet Shutdowns and our work in the Freedom Online Coalition. So we’re trying to be proactive there, but certainly there’s much more we can do. Who on the panel would like to come in on the point that was raised about positive examples of advocacy working? I see Felicia, you’re nodding. Do you want to briefly address that? Thanks.

Felicia Anthonio: Yes, definitely. I think I mentioned some good examples, having commitments from governments, and I think this is prior engagement. So it comes under the election watch initiative and Keep It On, where we are able to engage with governments prior to elections to raise awareness about shutdowns, the harms they have on human rights, and how people can actually leverage connectivity to actively participate in the electoral processes. And so through these engagements, that is when we got governments like the DRC, Nigeria and Sierra Leone to make a commitment to keep the internet on during their electoral processes. We’ve also taken governments to court and won against internet shutdowns, and the ECOWAS court has so far passed two or three judgments in favor of civil society against internet shutdowns: one in Togo, one in Nigeria during the Twitter blocking, and also in Guinea, where civil society sued the authorities for disrupting internet access. So these are some positives, in addition to the examples that Kanbar mentioned. And I think proactive measures are really important, but some of the triggers are very difficult to predict, like protests, like conflicts. We don’t know when they will just spring up on us, and we have to find solutions to that. But with elections, over time, we’ve been able to prepare ahead and to engage with stakeholders to push back against election-related shutdowns.

Kanbar Hussein Bor: Thank you, Felicia. I’m just going to make a note. I can’t see comments coming in online. I joined a little bit late, so I might ask Joss and Felicia if you could keep an eye on any comments in the chat and bring them to our attention. But while that happens, I do see a hand up in the room. So if you would like to come and

Nikki Muscati: ask a question and introduce yourself, that’d be great. Hi everyone. My name is Nikki Muscati. First, I just want to say thank you so much to the UK government for continuing to chair the Taskforce on Internet Shutdowns and the FOC. And thank you to all the panelists. I look forward to this session every year. But I have two questions, if I may. One is actually to Google. Alex, you were talking quite a bit about how the ability for people to actually access all the different Google products is key to Google being able to do business in a country. And so one of the things I’m wondering is, when you are considering introducing products in new countries and new settings, are internet shutdowns something that you look at? Are internet disruptions something that you look at when you’re considering the expansion of Google products into new markets? I think it’s just helpful for people to know, because there’s such broad conversation about FDI, but it’s not really looked at from a government perspective, excuse me, from a private sector perspective in that way. So it’d just be helpful if you could share. And then the second question I have is: we talk a lot, and I think it was Joss that was noting this, about how governments will shut down the internet and provide a justification. And sometimes it’s not resolving the issue that they want. Sometimes it’s having the exact effect that they want, but the issue that they’re trying to resolve is one that has nothing actually to do with the internet itself. And so what I’m wondering, for the panelists or for the room, is: we’ve been having this conversation for a long time about the fact that shutdowns are often imposed because someone is trying to address something that really doesn’t have anything to do with the internet. Has there been conversation in these sort of multi-stakeholder settings on how you address the actual root issue at hand? I think one of the things we hear is, we’ve got to shut down the internet because there’s a protest that’s going to happen. Those are just very different things: a technical solution being applied that’s not a real solution.

Kanbar Hussein Bor: Thanks. Thank you so much, Nikki. One question for Alex, and then we have a question which I think very much chimes with Nikki’s second question, around talking to governments and trying to explain the issue. Maybe I might ask one of Joss or Scott if they want to address that, but Alex, over to you first. The first one is directed at you. Thanks.

Alexandria Walden: Hi. Yeah, that’s a good question. I think there are myriad factors that get evaluated when we think about where we’re going to expand business for any given product, but one of the buckets of things that we focus on is the operating environment. What is the rule of law like? What are the regulations that we may have to comply with? And then finally, whether or not there is access available, what the infrastructure is, and whether the government is shutting it down on a regular basis are things that we would highlight when we’re thinking about the riskiness of a country, and whether it is worthwhile to expand our business there. So it is something that absolutely comes up when we’re doing those evaluations.

Kanbar Hussein Bor: Thanks, Alex. And then we’ve got a question about having a dialogue with states and trying to really explain the challenges around this. I might ask Joss to come in on that, and then Scott, there’s a question in the chat directly for you. So Joss, do you want to address that one, and then Scott, if you could come in afterwards. Thanks.

Joss Wright: I mean, Alex, Scott, you can come in after that, but just first to you. Um, yeah, I think with this sort of trying to understand the relationship between the intended action of some authority, whether it’s local government or national government, and what they actually do, I’m beginning to suspect that maybe policymakers are not entirely rational actors at all times in the way that they make their policies. And in many senses, there is this knee-jerk reaction. And in some cases, and I say it carefully, it’s justifiable. You know, I’ve certainly seen cases where the stated intention is: there is sectarian violence going on in this region that is being spread by social media, we don’t see an alternative, this is the only thing we can think of to do. And as somebody that is utterly against internet shutdowns, I can still have sympathy with that perspective. Because if I don’t try to understand why an authority wants to shut down the internet, I can’t work proactively to try and prevent them from reaching that conclusion. And I think that’s something we really need to accept as a community: it’s not an us-and-them problem. It’s a problem to work out together, how can we prevent this from happening. And I think that there’s a much wider point there, which is, you know, most people in this room would probably share this opinion with me. If you had told me 10 years ago that internet shutdowns would be increasing now, and I was studying censorship 10 years ago, 15 years ago, the perspective would always have been: the internet’s becoming more and more important, it’s becoming more and more embedded in society, everyone is using it, it’s necessary, how could it possibly be shut down? It’s just going to die off as something that you wouldn’t do. And yet it is increasing. And why is that happening? Partially because the authorities who are trying to achieve their societal or political or economic goals don’t feel that they have an alternative lever of power to achieve that. And so it’s become an all-or-nothing problem, especially with the rise of encryption on the internet, which is obviously, from my perspective, an unmitigated good. But it’s meant that some of the more subtle or insidious forms of censorship, where you could block pages or keywords or other things, have gone away. And states are now being presented with the option: we either shut down everything, or functionally everything in terms of the large major services, or we have no control over this. And that’s a difficult position for a state or authority to be in. And so that’s why I focus a lot of my work on trying to understand the motivations and the impacts, because it feels to me that that is the most proactive and holistic way to try and combat this problem, rather than the approach of 10 or 15 years ago, which was more naive, which was saying: we will just find ways to get around your censorship, and you’ll eventually give up. And that’s obviously not happened. So I think that really drives to the heart of certainly my research agenda, but also where I think we need to be thinking in these terms.

Kanbar Hussein Bor: Thanks, Joss. Can I bring in Scott? Now we have a question in the chat, which I think you’ve seen, about institutional structures, and whether the UN has a recommended model. You see the question? No, I haven’t seen that. But how can we ensure or enable national states to use the multistakeholder model by creating institutional structures to engage all interested bodies? Does the UN have a recommended model? We can hear you.

Scott Campbell: Can you hear me? Okay, sorry, I lost power and missed the question. I just really want to emphasize what Joss was just saying, that we see more and more governments looking for tech solutions to what are deep societal problems. In a nutshell, the root causes of protests are related to a lot of our bread-and-butter work: promoting freedom of expression, freedom of assembly, non-discrimination, and addressing exclusion. So I think we need to look at those societal causes and take on the reality of the human rights space that is or is not available to get to the problem. Similarly with online racism, you know, the racism won’t go away if we start censoring racism online. I think we see a lot of, you know, knee-jerk reactions in that space. So now that my ears are back on, I still can’t read anything without my glasses, but there’s a question about institutional structures to engage interested bodies: do any of your recommendations address how we can institutionalize the multi-stakeholder approach? Ah, absolutely, yes. Well, it’s a good plug too. Hope to see everybody at the next IGF, and the next IGF, and the next IGF.

Kanbar Hussein Bor: Perfect. We’ve reached the point where I have to say we’ve got about three minutes left, but there’s one question in the room and then I might try and wrap things up. I’ll give you my mic. Do you want to introduce yourself and ask the question? Thanks. Thank you. It was my question in the chat, so the question was: how can we encourage national governments to use this multi-stakeholder model to develop policies, including all interested parties, et cetera? Sorry, we can hear you now. Connectivity.

Scott Campbell: The short answer is also to leverage the Global Digital Compact. 193 governments have just committed to a multi-stakeholder model. There’s very clear language; I’m forgetting which objective it falls under, or whether it’s in the introductory paragraphs, but there is clear language committing to multi-stakeholderism and an inclusive IGF in there. Again, it’s not anything new that we couldn’t find in existing international human rights law and principles, but the fact that governments have collectively come together and reaffirmed that gives us space for advocacy and for peer pressure.

Kanbar Hussein Bor: Thank you so much. I’m going to wrap this up now, really. I’m going to say a few words and maybe ask each panelist to very briefly come up with any concluding thoughts. From my perspective, I just want to thank everyone again. This panel is a real demonstration of the power of the multi-stakeholder approach. One particular point which I want to draw out, which hasn’t been highlighted, is good state practice. Often it’s very easy to highlight examples where states have actually shut the internet down. Representing the British government, I would say this, wouldn’t I, but in the UK this past summer we experienced some really shocking riots across the country. They had a real impact on our social fabric, they were reported widely, and they entailed quite a lot of violence. However, during that time the internet was not shut down. Policymakers took the decision not to do that. So I just highlight that because good state practice can encourage other states to look at this and realize that this is a blunt tool, that there are wider societal issues at play, and that there are different levers available to address them. So with that in mind, I might ask each panelist very briefly to give a concluding thought for us. So Felicia, do you want to come in? Yes, just to rehash that the fight against internet shutdowns needs multistakeholderism, and so it’s important for us to continue to work together to push back against these rights-harming practices. Thank you. Thank you.

Kanbar Hussein Bor: Joss, any concluding thoughts?

Joss Wright: Really, in terms of understanding the motivations and the activities of the authorities that are engaging in internet shutdowns, we all agree that this is a blunt tool and that there are alternative ways to achieve those goals. Let’s not fall into the same problem of thinking that we can have similarly blunt solutions. We need to be just as subtle and holistic in how we address this problem if we’re going to bring it together, and multistakeholderism is the way to do that. Thank you. Alex? I’m not sure I have anything additional

Alexandria Walden: to add, because we continue to be committed to the multistakeholder model. Maybe the one thing I’ll just highlight is that I do think companies, Google in particular but not just Google, are really interested in continuing to work directly with civil society to understand how you’re experiencing the impacts of shutdowns, so that we can continue to think about building tools that are effective as part of the work that we’re doing. And last but not least,

Scott Campbell: Scott? No, thanks. I learned quite a bit in the dialogue, so thanks for including us and I guess I’m inspired to have to go deeper into some of the examples that Felicia was putting out there as possible models, good and bad, but how we can learn together and to improve member-state practice in the shutdown space. Great. Well, thank you to our panelists but also thank you to

Kanbar Hussein Bor: everyone else who’s… joined us both in the room and online. Thank you. Alright, welcome, everybody.

Felicia Anthonio

Speech speed: 131 words per minute
Speech length: 1466 words
Speech time: 669 seconds

Increasing prevalence of shutdowns globally

Explanation

Internet shutdowns are becoming more widespread globally. There has been a 41% rise in shutdowns from 2022 to 2023, with over 270 shutdowns documented in 40+ countries so far in 2024.

Evidence

Access Now reported a 41% rise in internet shutdowns from 2022. In 2024, over 270 shutdowns have been documented in 40+ countries.

Major Discussion Point

Trends and impacts of internet shutdowns

Agreed with

Kanbar Hussein Bor

Alexandria Walden

Agreed on

Increasing prevalence and negative impacts of internet shutdowns

Kanbar Hussein Bor

Speech speed: 143 words per minute
Speech length: 2037 words
Speech time: 850 seconds

Shutdowns have significant economic and societal impacts

Explanation

Internet shutdowns have substantial negative effects on both the economy and society. They can impede various activities from farming to democratic participation, affecting individuals’ daily lives and national economies.

Evidence

In Bangladesh, a recent shutdown reportedly resulted in almost $300 million loss of GDP.

Major Discussion Point

Trends and impacts of internet shutdowns

Agreed with

Felicia Anthonio

Alexandria Walden

Agreed on

Increasing prevalence and negative impacts of internet shutdowns

UK government championing multi-stakeholder efforts

Explanation

The UK government is actively promoting a multi-stakeholder approach to address internet shutdowns. They are using various platforms and initiatives to highlight the importance of this issue and bring diverse stakeholders together.

Evidence

UK’s leadership in the Freedom Online Coalition’s taskforce on internet shutdowns, collaboration with UNESCO, and organizing multi-stakeholder events at IGF.

Major Discussion Point

Multi-stakeholder approaches to addressing shutdowns

Agreed with

Joss Wright

Scott Campbell

Agreed on

Importance of multi-stakeholder approach

Highlighting examples of good state practices

Explanation

It’s important to showcase examples of good state practices where governments choose not to shut down the internet during crises. This can encourage other states to consider alternative approaches to addressing societal issues.

Evidence

The UK’s decision not to shut down the internet during recent riots, despite their significant impact on social fabric.

Major Discussion Point

Policy and advocacy efforts

Joss Wright

Speech speed: 167 words per minute
Speech length: 2164 words
Speech time: 776 seconds

Need for data-driven approaches to understand shutdown impacts

Explanation

A data-driven approach is crucial to better understand the impacts of internet shutdowns. This involves using analytical tools and methods to study how shutdowns relate to social and political factors on the ground.

Evidence

Collaboration with the Open Observatory of Network Interference (OONI) project to provide data and analytical tools for studying shutdown impacts.

Major Discussion Point

Trends and impacts of internet shutdowns

Differed with

Alexandria Walden

Differed on

Approach to addressing internet shutdowns

Importance of collaboration between civil society, academia, business and government

Explanation

Addressing internet shutdowns requires collaboration between different stakeholders, each bringing unique strengths and perspectives. This multi-stakeholder approach is crucial for developing comprehensive solutions to the problem.

Major Discussion Point

Multi-stakeholder approaches to addressing shutdowns

Agreed with

Kanbar Hussein Bor

Scott Campbell

Agreed on

Importance of multi-stakeholder approach

Need to understand government motivations for shutdowns

Explanation

It’s crucial to understand why authorities implement internet shutdowns to effectively address the issue. This understanding can help in developing more targeted and effective solutions to prevent shutdowns.

Major Discussion Point

Policy and advocacy efforts

Need for interdisciplinary technical and policy approaches

Explanation

Addressing internet shutdowns requires a combination of technical expertise and policy understanding. An interdisciplinary approach can help bridge the gap between technical measurements and social/political understanding.

Major Discussion Point

Technical and business perspectives

Alexandria Walden

Speech speed: 185 words per minute
Speech length: 1267 words
Speech time: 409 seconds

Shutdowns affect basic services and democratic processes

Explanation

Internet shutdowns impact a wide range of services and processes, from basic communication to digital payments and business operations. This affects not just GDP, but every digitized interaction in society.

Major Discussion Point

Trends and impacts of internet shutdowns

Agreed with

Felicia Anthonio

Kanbar Hussein Bor

Agreed on

Increasing prevalence and negative impacts of internet shutdowns

Private sector role in transparency and advocacy

Explanation

The private sector plays a crucial role in providing transparency about internet shutdowns and advocating against them. Companies like Google engage in various initiatives to track and report on shutdowns.

Evidence

Google’s disruptions report on their transparency site, partnerships with organizations like Measurement Lab and OONI for data sharing.

Major Discussion Point

Multi-stakeholder approaches to addressing shutdowns

Differed with

Joss Wright

Differed on

Approach to addressing internet shutdowns

Private sector considerations in markets with shutdowns

Explanation

Companies consider the risk of internet shutdowns when evaluating expansion into new markets. Factors like rule of law, regulations, and frequency of shutdowns are taken into account in business decisions.

Major Discussion Point

Technical and business perspectives

Developing circumvention tools and alternative connectivity

Explanation

Private sector companies are working on developing tools to help users maintain access during shutdowns. This includes products focused on VPNs and other technologies to ensure continued connectivity.

Evidence

Google’s Jigsaw team developing products like Outline, focused on making VPNs more accessible.

Major Discussion Point

Technical and business perspectives

Measuring and tracking shutdowns for transparency

Explanation

Companies play a role in measuring and tracking internet shutdowns to provide transparency. This data is crucial for understanding the scope and impact of shutdowns globally.

Evidence

Google’s disruptions report on their transparency site, tracking activity across all Google products worldwide.

Major Discussion Point

Technical and business perspectives

Scott Campbell

Speech speed: 160 words per minute
Speech length: 1250 words
Speech time: 466 seconds

UN reaffirmation of multi-stakeholder model in Global Digital Compact

Explanation

The Global Digital Compact reaffirms the commitment of 193 member states to a multi-stakeholder model in addressing internet governance issues. This provides a framework for advocacy and peer pressure to prevent internet shutdowns.

Evidence

Clear language in the Global Digital Compact committing to multi-stakeholderism and an inclusive IGF.

Major Discussion Point

Multi-stakeholder approaches to addressing shutdowns

Agreed with

Kanbar Hussein Bor

Joss Wright

Agreed on

Importance of multi-stakeholder approach

Leveraging Global Digital Compact for advocacy

Explanation

The Global Digital Compact provides a new opportunity for advocacy against internet shutdowns. It reaffirms governments’ commitment to human rights in the digital space and multi-stakeholderism, which can be used to push for policy changes.

Major Discussion Point

Policy and advocacy efforts

Agreements

Agreement Points

Increasing prevalence and negative impacts of internet shutdowns

Felicia Anthonio

Kanbar Hussein Bor

Alexandria Walden

Increasing prevalence of shutdowns globally

Shutdowns have significant economic and societal impacts

Shutdowns affect basic services and democratic processes

All speakers agreed that internet shutdowns are becoming more frequent and have substantial negative impacts on economies, societies, and basic services.

Importance of multi-stakeholder approach

Kanbar Hussein Bor

Joss Wright

Scott Campbell

UK government championing multi-stakeholder efforts

Importance of collaboration between civil society, academia, business and government

UN reaffirmation of multi-stakeholder model in Global Digital Compact

Speakers emphasized the crucial role of collaboration between different stakeholders in addressing internet shutdowns effectively.

Similar Viewpoints

Both speakers highlighted the importance of data-driven approaches and transparency in understanding and addressing internet shutdowns.

Joss Wright

Alexandria Walden

Need for data-driven approaches to understand shutdown impacts

Measuring and tracking shutdowns for transparency

Both speakers emphasized the importance of leveraging international frameworks and initiatives for advocacy against internet shutdowns.

Kanbar Hussein Bor

Scott Campbell

UK government championing multi-stakeholder efforts

Leveraging Global Digital Compact for advocacy

Unexpected Consensus

Understanding government motivations for shutdowns

Joss Wright

Scott Campbell

Need to understand government motivations for shutdowns

Leveraging Global Digital Compact for advocacy

Both academic and UN perspectives aligned on the importance of understanding government motivations and using international frameworks to address the root causes of shutdowns, rather than just opposing them outright.

Overall Assessment

Summary

The speakers showed strong agreement on the increasing prevalence and negative impacts of internet shutdowns, the importance of multi-stakeholder approaches, and the need for data-driven understanding and transparency.

Consensus level

High level of consensus among speakers, suggesting a unified approach to addressing internet shutdowns across different sectors. This consensus implies potential for effective collaborative efforts in policy advocacy, research, and development of technical solutions to mitigate the impacts of shutdowns.

Differences

Different Viewpoints

Approach to addressing internet shutdowns

Joss Wright

Alexandria Walden

Need for data-driven approaches to understand shutdown impacts

Private sector role in transparency and advocacy

While both speakers emphasize the importance of addressing internet shutdowns, they differ in their proposed approaches. Joss Wright advocates for a data-driven approach to understand the impacts, while Alexandria Walden focuses on the private sector’s role in providing transparency and advocacy.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement revolve around the specific approaches and strategies to address internet shutdowns, rather than fundamental disagreements about the issue itself.

Difference level

The level of disagreement among the speakers is relatively low. All speakers agree on the importance of addressing internet shutdowns and the value of a multi-stakeholder approach. The differences mainly lie in the specific strategies and focus areas each speaker emphasizes based on their expertise and perspective. This low level of disagreement suggests a generally unified approach to the topic, which could be beneficial for developing comprehensive solutions to internet shutdowns.

Partial Agreements

Both speakers agree on the importance of addressing government motivations for shutdowns, but they propose different strategies. Joss Wright emphasizes understanding motivations to develop targeted solutions, while Scott Campbell suggests leveraging the Global Digital Compact for advocacy and policy changes.

Joss Wright

Scott Campbell

Need to understand government motivations for shutdowns

Leveraging Global Digital Compact for advocacy

Takeaways

Key Takeaways

Internet shutdowns are increasing globally and have significant negative economic and societal impacts

A multi-stakeholder approach involving civil society, academia, business and government is crucial to address the issue of internet shutdowns

Data-driven research and transparency efforts are important to understand and track the impacts of shutdowns

There is a need to understand government motivations for shutdowns and engage in dialogue to find alternatives

The Global Digital Compact provides a framework for advocacy against internet shutdowns

Resolutions and Action Items

Continue multi-stakeholder collaboration and dialogue on addressing internet shutdowns

Leverage the Global Digital Compact for advocacy against shutdowns

Private sector to continue transparency efforts and development of circumvention tools

Conduct more research to understand motivations and impacts of shutdowns

Unresolved Issues

How to effectively prevent shutdowns in cases of protests or conflicts that are difficult to predict

How to address the root societal causes that lead governments to implement shutdowns

Specific ways to institutionalize multi-stakeholder approaches at national levels

Suggested Compromises

Engage with governments to find alternatives to shutdowns that address their concerns while maintaining internet access

Develop more nuanced technical solutions that allow for some government control without full shutdowns

Thought Provoking Comments

Unfortunately, internet shutdowns are increasing in their prevalence. We are seeing more and more countries who are resorting to internet shutdowns. Access Now have reported a 41% rise of internet shutdowns from 2022.

speaker

Kanbar Hussein Bor

reason

This comment sets the stage for the urgency of the issue and provides a concrete statistic to illustrate the growing problem.

impact

It framed the discussion around the increasing prevalence of internet shutdowns and set a tone of urgency for addressing the issue.

In 2024 we’ve also already documented seven shutdowns in new countries, countries that have never imposed internet shutdowns before. In 2024 we’ve seen countries including Comoros, El Salvador, Guinea-Bissau, France (disrupting TikTok in New Caledonia), Malaysia and Mauritius, as well as Thailand, disrupting or imposing internet shutdowns.

speaker

Felicia Anthonio

reason

This comment provides specific, up-to-date examples of the spread of internet shutdowns to new countries, including democracies.

impact

It deepened the conversation by highlighting the global nature of the problem and raised concerns about the spread of shutdowns to previously unaffected countries.

I think that what I’d like to represent from the perspective of academia here is a form of multi-stakeholderism that isn’t so much about hearing all of our voices, but drawing on the perspectives we have in our solutions and our approaches and our abilities to provide some input to resolving the problem.

speaker

Joss Wright

reason

This comment reframes the concept of multi-stakeholderism from representation to collaborative problem-solving.

impact

It shifted the discussion towards a more action-oriented approach to multi-stakeholder collaboration in addressing internet shutdowns.

The thing about shutdowns is they are rarely, from a human rights perspective, rarely necessary, and rarely proportionate. They’re a blunt tool that impacts all of our users and all of our services, and so from a company’s perspective, it’s bad for business because it’s bad for everyone who uses our products

speaker

Alexandria Walden

reason

This comment provides insight into how private sector companies view internet shutdowns, highlighting both human rights and business perspectives.

impact

It introduced the business perspective into the conversation and emphasized the wide-ranging negative impacts of shutdowns.

And as somebody that is utterly against internet shutdowns, I can still have sympathy with that perspective. Because if I don’t try to understand why an authority wants to shut down the internet, I can’t work proactively to try and prevent them from having that conclusion.

speaker

Joss Wright

reason

This comment introduces a nuanced perspective on understanding the motivations behind internet shutdowns, even while opposing them.

impact

It challenged participants to consider the complexities of the issue and the importance of understanding all perspectives to find effective solutions.

Overall Assessment

These key comments shaped the discussion by establishing the urgency and global nature of the internet shutdown problem, reframing the concept of multi-stakeholderism towards collaborative problem-solving, introducing diverse perspectives from academia, civil society, and the private sector, and encouraging a nuanced understanding of the motivations behind shutdowns. The discussion evolved from simply describing the problem to exploring complex, multi-faceted approaches to addressing it, emphasizing the need for collaboration across sectors and a deeper understanding of the underlying issues.

Follow-up Questions

What are alternative sources of connectivity that can be provided during conflicts to ensure the internet remains open and secure?

speaker

Felicia Anthonio

explanation

This is important to address the growing problem of conflict-related internet shutdowns and their impact on humanitarian aid delivery.

How can we illustrate and measure the impact of internet shutdowns on people’s daily lives beyond just GDP figures?

speaker

Alexandria Walden

explanation

This would provide a more comprehensive understanding of how shutdowns affect individuals and communities in various ways.

What was the impact and what were the shortcomings of the agreement between Vodafone, civil society, and the government of the DRC in the pre-electoral context?

speaker

Scott Campbell

explanation

Analyzing this case could provide insights into effective multi-stakeholder approaches to preventing shutdowns.

How can human rights concerns be integrated into agreements around connectivity as international financial institutions and the UN invest in infrastructure projects?

speaker

Scott Campbell

explanation

This is crucial for preventing future shutdowns and ensuring respect for human rights in connectivity initiatives.

Are internet shutdowns and disruptions considered when Google is expanding its products to new markets?

speaker

Nikki Muscati

explanation

Understanding how private sector companies factor in shutdown risks could inform advocacy and policy approaches.

How can we address the root causes of issues that governments claim to be addressing through internet shutdowns?

speaker

Nikki Muscati

explanation

This could help develop more effective alternatives to shutdowns and address underlying societal problems.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Day 0 Event #10 First Aid Online: Making the Difference for Children

Day 0 Event #10 First Aid Online: Making the Difference for Children

Session at a Glance

Summary

This discussion focused on the work of Safer Internet Centers in Europe, particularly their efforts to protect children online and provide support through helplines. The session featured representatives from Belgium and Poland, as well as a youth ambassador, who shared insights into current online safety challenges and initiatives.


Key issues highlighted included the rise of non-consensual sharing of intimate images, sextortion, and cyberbullying. The speakers emphasized the importance of avoiding victim-blaming and instead focusing on empowering children with digital skills and resilience. They noted concerning trends such as the increasing use of AI in creating deepfakes for exploitation and the prevalence of harmful online behaviors among young teens.


The helplines operated by Safer Internet Centers were described as crucial resources, offering professional support to children, parents, and educators dealing with online risks. Statistics showed that teenagers are the primary users of these services, with cyberbullying being a top concern. The speakers stressed the need for ongoing education and awareness campaigns to encourage more young people to seek help when needed.


The discussion also touched on the challenges of parental oversharing online and the importance of involving youth in developing online safety strategies. The youth ambassador highlighted the value of helplines in providing immediate relief and guidance to young people facing online issues.


Overall, the session underscored the complex and evolving nature of online risks for children and the critical role of Safer Internet Centers in addressing these challenges through education, support, and collaboration with policymakers and industry.


Keypoints

Major discussion points:


– Overview of Safer Internet Centers in Europe and their role in supporting online safety for children and youth


– Trends in online risks for young people, particularly non-consensual sharing of intimate images, sextortion, and cyberbullying


– Importance of empowering youth and involving them in online safety efforts, rather than just focusing on protection


– Challenges in getting youth to report online issues and seek help from adults/helplines


– Need for more data and research on emerging online risks like AI-generated harmful content


Overall purpose:


The goal of the discussion was to raise awareness about Safer Internet Centers and helplines in Europe, highlight current online safety trends and challenges for youth, and emphasize the importance of youth empowerment and participation in online safety efforts.


Tone:


The tone was primarily informative and concerned, with speakers presenting statistics and examples to illustrate the seriousness of online risks for youth. There was also an emphasis on hope and empowerment, particularly when discussing youth involvement and potential solutions. The tone became more urgent when discussing emerging AI-related risks near the end.


Speakers

– SABRINA VORBAU: Moderator of the session, representative of the Better Internet for Kids initiative


– NIELS VAN PAEMEL: Policy advisor for Childfocus, the Belgian foundation for missing and sexually exploited children


– ANNA RYWCZYNSKA: Representative of NASK, the National Research Institute in Poland


– JOAO PEDRO: Better Internet for Kids Youth Ambassador


Additional speakers:


– Sadat Raman: Representative from Bangladesh working on internet safety initiatives for teenagers


Full session report

Expanded Summary of Safer Internet Centers Discussion


Introduction


This discussion focused on the work of Safer Internet Centers in Europe, particularly their efforts to protect children online and provide support through helplines. The session featured representatives from Belgium and Poland, as well as a youth ambassador, who shared insights into current online safety challenges and initiatives. The discussion highlighted the complex and evolving nature of online risks for children and the critical role of Safer Internet Centers in addressing these challenges through education, support, and collaboration with policymakers and industry.


Structure and Role of Safer Internet Centers


Safer Internet Centers play a crucial role in supporting online safety for children and youth across Europe. As explained by Sabrina Vorbau, the moderator from the Better Internet for Kids initiative, these centres provide awareness, helplines, hotlines, and youth panels. The specific focus of each centre may vary by country:


– In Belgium, Niels van Paemel noted that their Safer Internet Center is operated through a partnership between Childfocus and another organization.


– Anna Rywczynska described how the Polish Safer Internet Center involves collaboration between NASK (a research institute focused on cybersecurity) and NGOs, including the Empowering Children Foundation.


The Better Internet for Kids Plus strategy, mentioned during the discussion, is built on three pillars: youth protection, participation, and empowerment.


Current Online Safety Trends and Issues


The speakers highlighted several concerning trends in online risks for young people:


1. Non-consensual sharing of intimate images: Niels van Paemel identified this as a growing problem across Europe and worldwide.


2. Sextortion: Van Paemel reported a 400% increase in sextortion cases over the past five years.


3. Cyberbullying: Anna Rywczynska noted this as a major issue in Poland, citing research on its prevalence.


4. AI and deepfakes: Van Paemel raised concerns about AI-generated content, including deepfakes. He presented statistics showing that 42% of young people know what deepfakes are, 23% have seen at least one, and 13.8% have received one or more. Alarmingly, 99% of the victims are girls. The potential for live deepfake technology to be used in grooming was also discussed.


5. “Troll parenting”: Rywczynska highlighted the issue of adults setting negative examples online, including sharing embarrassing content about their children (“sharenting”).


Approaches to Prevention and Support


The speakers agreed on the importance of empowering children and involving them in online safety efforts:


1. Avoiding victim-blaming: Van Paemel stressed addressing gender stereotypes in prevention efforts.


2. Empowerment: Rywczynska emphasised building self-esteem and teaching assertiveness.


3. Research: Van Paemel called for more studies on emerging online risks.


4. Awareness and accessibility: Joao Pedro, a Better Internet for Kids Youth Ambassador, highlighted the need to expand awareness of helpline services.


5. “MenAble” project: Van Paemel mentioned this initiative aimed at working with boys on respectful online behavior.


6. “Cyberspots”: Rywczynska described this Polish initiative involving school teams focused on digital issues.


7. Digital Youth Forum: An annual event organized in Poland to engage youth in online safety discussions.


Helplines and Support Services


Helplines are a key component of Safer Internet Centers, offering crucial support for young people facing online issues. Challenges include:


1. Low reporting rates: In Belgium, only 15% of victims reach out for help.


2. Balancing confidentiality and safeguarding: Joao Pedro noted the importance of maintaining anonymity while fulfilling reporting obligations.


3. Expanding reach: Speakers emphasized the need to reassure children that they are not alone and provide tools to seek assistance and request content takedowns.


International Collaboration


The discussion underscored the global nature of cybersafety issues and the need for worldwide cooperation:


1. Safer Internet Day: Celebrated on the second Tuesday in February as a global awareness initiative.


2. Cross-border challenges: Online safety issues transcend national boundaries and require international cooperation.


3. Interest from other countries: An audience member from Bangladesh shared information about their helpline and Amber Alert initiative, indicating potential for broader international partnerships.


Conclusion


The discussion highlighted the critical role of Safer Internet Centers in addressing the complex and evolving landscape of online risks for children and youth. While speakers agreed on the importance of these centres and the need for youth empowerment, they also identified several challenges and areas for further work. These include addressing the rise of AI-generated threats, improving reporting rates for online abuse, and balancing confidentiality with safeguarding responsibilities in helpline services. The session underscored the need for continued research, international collaboration, and adaptation of strategies to meet emerging online safety challenges.


For more information, visit betterinternetforkids.europa.eu.


Session Transcript

SABRINA VORBAU: Good afternoon, everyone. Thank you very much for joining our session. My name is Sabrina Vorbau, I will moderate the session. This session is organized by the Better Internet for Kids initiative and the InSafe network of Safer Internet Centers. Today’s session, we will tell you a little bit more about what is a Safer Internet Center and specifically how Safer Internet Centers support citizens at national level. We will hear two country examples today from Belgium and Poland, and we also have one of our youth ambassadors online with us, Joao from Portugal, who will also provide you with the youth perspective to understand how important the subject matter is. Very briefly. Can we change to the next slide, please? Okay. Very briefly, as I said, this workshop is organized by the Better Internet for Kids initiative. Better Internet for Kids is a European Commission funded initiative to create a safer and a better Internet for children and young people, not only in Europe, but also beyond. The Better Internet for Kids initiative comes with the Better Internet for Kids portal, a portal where users can find more information, but also resources in multiple languages on better and safer Internet subjects. Today, as part of this workshop, we will also look at current trends and issues, and how we can support children and young people but also adults such as educators, social workers, to support young people in this matter. However, children and young people are at the heart of what we do. As I said, we also have one of our youth ambassadors from Portugal with us today. It is very vital for us with everything we do, with every resource we are co-creating, with every policy we are shaping to have the voice of children and young people. A safer internet center, for those of you who are not familiar with it, is structured based on four strands: a national awareness center, a helpline, a hotline and also a youth panel. In today’s session we will specifically focus on the importance and the objective of helplines really acting as a first aid service in countries, supporting users and citizens, mainly children and young people, on what to do when they encounter troubles online. As I said before, we will hear and dive a little bit deeper into country studies specifically from Belgium and Poland and also Portugal. We are collecting statistics at European level because those helplines exist in each of the EU member states plus Norway and Iceland. These statistics I’m presenting here are from the quarter of April to June this year. We are collecting these statistics on a quarterly basis because this helps us to assess what issues citizens are facing at national level and how we can combat these issues, how we can help users to be more aware but also feel secure and protected. You can see here on the slides some statistics, first of all who is contacting those helplines, and we see that a vast majority of people that contact the helplines are actually teenagers, young adults that seek help, and later on you will see the current most trending topics and issues. There are various ways in which users can contact a helpline. Traditionally this was done by phone, but of course today helplines are offering various different forms of contact, also ensuring anonymity of course, especially if we’re talking about reporting sensitive issues such as for example abuse online or other issues.
Many of our helplines offer for example online forms or chat services, because it definitely takes a lot for someone to pick up the phone to report something, so also for those who are more introverted to really give an opportunity to report their matter and seek help. You can also see here that helplines are also available to adults. Specifically during the COVID-19 pandemic, we saw a lot of educators and a lot of parents contacting the helplines seeking advice, advice on technical matters but also on social matters as well. A final slide from my side just to outline the current trends and issues here on the right side, and as I said my colleagues will dive deeper into two of the issues in a minute. Cyberbullying, we see that this has been really one of the top trends not only in this quarter but for many many years. It seems that specifically for children and young people this is the main issue they are encountering online, and we see a lot of adults puzzled on how to help them, because we know it’s not only cyberbullying, it’s also offline bullying, so it’s a really, really important subject matter. But we also see an increase for example in subjects like sexting or sextortion, which is also one of the issues we will comment on more closely. And when looking at these trends and issues, where do they occur? It’s mainly online where these issues occur, and for the vast majority it’s on social media platforms. We all know that young people spend an awful lot of time on these social media platforms, not only on one but on multiple social media platforms, so it’s really, really important that we step in and that we also work together with social media platforms but also policymakers. So we from the Better Internet for Kids initiative try to bridge this conversation between policymakers but also industry representatives and really provide first aid for the end-user. Now I will hand over the floor to my colleagues. As I said, as part of this workshop we will now dive deeper into two country examples because we have colleagues here from the Belgian and the Polish helplines, so I hand over to Niels now for some trends and also operational matters of the Belgian helpline.


NIELS VAN PAEMEL: Thank you Sabrina. Okay hi everybody, so my name is Niels, Niels van Paemel, and I work as a policy advisor for Childfocus, which is the Belgian foundation for missing and sexually exploited children. I will specify, because I only have a few minutes, so I will talk about a part of our job, right? The mission of ChildFocus is easy. We are the foundation for missing, as I said, but also sexually exploited minors. And that happens, as we all know, both in the online and in the offline world. But of course, over the last two decades, we shifted a bit from children being more vulnerable offline towards more and more online. We are the Belgian Safer Internet Center, as Sabrina explained, so we are the Belgian helpline that kids can call on the number 116 000, 24/7, 365 days a year. And we work with first responders and case managers. The first responders take the first contact, and then the case managers are the ones who really will go in-depth with the child that needs help, or with caretakers, professionals, even police. It can be people who work in education, who need advice around one of the topics that we work around. So those are the things that we do. What do we want to do? We want to create a better internet for kids in Belgium, of course. We want to support and accompany professionals working with children in the development of their digital and media skills. We want to strengthen media literacy and media education for children in Belgium. And we want to provide support for parents, professionals, and children through our 116 000 helpline. And we report and combat child sexual abuse material. So what I want to focus on today is going to be the non-consensual sharing of intimate images, or NCII. And later we will go to sextortion. You cannot really see the numbers well on there, but it starts in 2018 and we go to 2023. So these are all the cases that we open at Childfocus, mostly from children who contact us because their sexual image has been spread in the context of a school or a youth club or whatever, but without consent the images have been spread towards third parties, right? And then in 2020, we saw a very big rise in these cases, and we thought, okay, maybe it’s because of COVID, all children are inside of their houses, and maybe that’s why they will go to more risky behavior and it might go more wrong online because of it. So we thought, after the quarantine, it’s going to go down again. But look what we’re seeing here, we are almost doubling our numbers, and with sextortion, you will see that it’s even more. So we are seeing that it’s actually going, oh my God, I have the wrong, yes. So we’re seeing that it’s really on the rise, right? And not only in Belgium, it’s a European phenomenon, and also worldwide. What do I want to tell you about this, from working with all these children? That one of the biggest problems that we see, also as professionals, is that people have the tendency to go towards victim blaming, telling a child, you shouldn’t have taken that photo in the first place. But if you do that, children will have an even harder time reaching out to you. And what do we want? We want children to reach out when they’re in trouble. Research shows that in Belgium, only 15% of victims reach out, and we want to raise that number. And how do we do that? By reassuring children that they are not alone, that they have been the victim of something, and then we can give them the tools to not only reach out, but also find help and also go for the takedown of certain images.
So consent, that’s the new word that we want to introduce here. It’s all about consent. If young people exchange within the context of a very healthy sexual relationship, if they want to exchange photos, as such, that is not the problem. The problem is that somebody abuses the confidence of somebody, receives a very personal gift, and decides to spread it to a much larger community. And that’s also how we should look at it. We should not blame the victim, we should blame the person who asked for the photos and then spread them further on. So it’s about a break of confidence and sexual intimidation. Now another thing, secondary victimization, that’s actually what I said, we should try not to victim blame children who are reaching out to us. And that can come from many sides, parents, peers, teachers, but also police. The share of children reaching out to police when things go wrong is less than 10%. And also there, it’s also a sector that we really want to work with, and that we do through many trainings. And then, last but not least, gender stereotypical behavior also plays a big role in here. As we see, we are opening more and more cases of a child that calls us, yes, in my school there is this Telegram group, sluts of school x. And then, a lot of boys will collect photos of mostly girls, but it’s not always the case. Mostly boys collect photos of mostly girls, post them on platforms like Telegram, and there we see how this whole culture of exposing and doxing is finding its way, and where girls are being victimized with their own material. If we want to change this, we should really start working more and more with boys, because we need to show them that it’s not okay to do this. It’s not okay to be disrespectful online. It’s not okay to slut shame a girl. So things like this are really something we need to work on more and more. If you come find us at the InSafe booth, you will see that we started a project called MenAble, where we try to enable boys by giving them tools to reach out, to talk in a more positive, respectful way about sexuality online, and where we really want to work with them to give them the tools. So come find us at the booth later. I need to rush a bit, I’m sorry. We did a study last December about deep nuding. Has anybody here in this room heard of what deep nudes are? If yes, put your hand up, please. Two people, three people, four people, five. So for the people online, that’s a minority. There’s many more people here, there’s like 100 people. So deep nudes are actually deep fakes. I suppose you know deep fakes, but with a sexual connotation, right? So people making fake images of somebody naked, or it can also be videos. So what did we do? We thought that this is happening more and more. We got a phone call at our helpline, a girl saying, oh, my photo’s being spread around across the school, but actually I never made one. So it turns out that actually somebody used AI to create a fake photo, and then later slut-shamed her with a photo that she never even took. This is something that’s happening more and more, but there was no research anywhere in the world around this, so we did a study where we just went to look at the market: how are these apps working, how easily can you find them? Can you just Google them? Turns out, yes, spoiler alert, you can just Google those apps. Just some numbers, because you can scan the QR code and go to the study directly, but just so you know, and it’s from last year. The numbers right now would be higher. 42% know what deep nudes are.
23% have seen at least one. 13.8% have received one deep nude, or more, of course. And 60% of those who know deep nuding apps have used them. It’s a very important one to know. And 99% of all the victims are girls. Why? Because until, let’s say, somewhere in this year, the deep nuding apps would not work with boys, because they were mainly trained on female data. But actually, this percentage should now be a little bit lower, because we are opening more and more cases of sextortion with boys. Why? Because, we will go there later, it’s because boys are the main victims of sextortion, right? And perpetrators are finding ways to sextort them. And now, if you can do it with a fake photo, why not? It’s easier for the perpetrators. Here we are, sextortion. The same, 2018 till 2023. Yellow means content sextortion, which, first of all, I should ask, does everybody know what sextortion means? Not everybody? Okay, basically, it’s somebody who gets extorted with their own nude pictures. So, mostly boys, because 90% of the victims of sextortion are boys, they get into contact with another person online, they are chatting, and the conversation goes in a sexual way. The boy is persuaded to send naked photos, and then afterwards, they will have to pay money, or these photos will be transferred to their parents or to their friends, so they’re being scammed, basically, with their own photos. That’s what sextortion means, sexual extortion. And this phenomenon is on the rise. 400% up over five years. And we have some ideas why this might be the case. First of all, young people are online at a much younger age, but also perpetrators are having a much easier time finding them. And now, they are also starting to use AI to do this. So, this is something that really scares us and that we really should work with children more and more about. We need to tell them, we need to give them the tools to have conversations about this topic and to stop this from happening, right? This extreme rise. I will skip this. This is just like how in the press it got more and more picked up, that this is a big problem, but also, that there’s now also a link with sextortion and these deep nudes that I talked to you beforehand about, right? There might be a person who makes a fake photo of you, but then says, if you don’t pay me, I will show this fake photo to your parents. This is happening right now. So right now you can be extorted or scammed with a real photo, with a photo of you that’s been nudified. What does that mean? With a bikini photo where you use a website or an app that takes the bikini off, fakely, but it looks so realistic that everybody believes it, or with a completely fake photo where they just use your face and make a naked photo out of it. But also on the grooming side, because if you want to persuade a boy to send naked photos, the boy needs to be groomed in the first place, right? He needs to think that he’s talking to a sexy girl around the same age, right? But now with live deep fake technology, I’m a 37 year old man, but on the screen you could see a 15 year old girl and also the voice of a 14 year old girl in any language that there is. So it’s being made very, very easy for perpetrators to find their way into tackling young boys. So this is something that we really want to worry about, warn about, sorry. Within the InSafe community, we see that these are European trends, but if I’m talking to my colleagues of NACMAC or even in certain African countries, we see it’s a worldwide problem. It makes sense, right?
Because these people who are looking for victims online, they do not necessarily have a sexual interest in the child. No, they’re just finding a way to earn money. So it’s a lot about gangs that are using technology for their own good in order to get money, to scam people off. And this is also maybe something that I should also… also tell you that now we should also rethink our prevention work. To give an example, if we’re talking in the past about parenting, parents that are sharing photos of their children, we would say like, oh maybe you should watch out with swimming pool photos of your child because people might take it out of context, right? They might sexualize this photo. But now people can generate abuse material of children just by the photo of a face. Maybe we should go to new prevention tips towards parents. Maybe we should even say like, okay, you should send photos, family photos, keep it maybe in an encrypted WhatsApp of the family and maybe don’t post them online. So this is something we should think about. But on the one hand, we want to keep our safe message towards families. Internet is a good place and it offers a lot of opportunities for children. But on the other hand, we do want to warn and we want to give tools to children to find their way in a safe and responsible way. This is my last slide before I give the floor to my colleague Anna. What are the challenges now that we have? Okay, these phenomenons are here to stay and they are even on the rise and AI is making it even more easy for children to become victim. So what should we do now? Do we need to respond at EU level or even worldwide? We need more data. We need more studies for sure. We were the first study, but even that, the deep nude study was one of the first ones in the world. But even this is just showing that it exists, right? But we need more. We need to go behind like dynamics. How is this happening? Who are the perpetrators? Towards prevention, like I said, how do we work with red flags? In the old days, we would say like, oh, if you’re talking to somebody online, ask them to put their hand up and then you can see if it’s a fake image or not. Right now, with deep fakes, very easy. You don’t see the red flags anymore. Toxic masculinity, something to talk about. Like I told you, the gender part. Sharenting. And then towards hotline. what would AI-generated CSAM, which is also on the rise. And as the last one, victim extortion offenders using live deepfake technology. And if I can end maybe with one positive thing, that is the fact that victim blaming has been made impossible. Because if a child tells you, I’m the victim of deepnoding, you cannot tell that person, you should not have made that photo in the first place, right? So that might be the only positive thing I have to say here. And gender-based violence, we need to have this discussion. We need to dare to see things as they are. And it’s a gender phenomenon that we need to talk to young men and boys about their behavior online. So thank you.


ANNA RYWCZYNSKA: Hello, everyone. First of all, I would like to express how happy I am, but I think that we all are, to be able to participate again in the Internet Governance Forum. And actually, I’m absolutely, I don’t know, I’m a little bit surprised because we have been here since 3 p.m. And I think we have spoken with maybe 30 or 40 people, from different parts of the globe, about how big a challenge kids’ safety online is. And I think this is exactly what we love about the Internet Governance Forum. So it’s so good to be here at this event again. And now I will tell you something about how the Polish Safer Internet Center works. I think there is no presentation, actually. Someone could help. There it is. Okay. Okay, I think we are… We are here. Sabrina told you how the Safer Internet Centers look in Europe. So here is only the information on who builds the Safer Internet Center in Poland. So we cooperate, the two organizations cooperate together. It’s the Empowering Children Foundation, which is the NGO, and the National Research Institute, NASK, and I’m representing NASK, the Research Institute, and here you can see what our competences are. NASK is one of the leading institutions for cybersecurity, but we are also very involved in safety actions, and we started our work within the Polish Safer Internet Center in 2004, so it’s actually 21 years that we operate. And here is how we are constructed, so we cooperate together to deliver awareness activities and educational activities, and then helplines are run by the foundation, and at NASK we have hotlines, so the team responding to illegal content online, but today we are focusing on the helpline support that we provide to children. Okay, can I ask for the second slide? Ah, okay. Now I have to go back? Ah, now I know, I have to point in a different direction. Okay, sorry for that. Okay, so being a public institution, we at NASK operate under the auspices of the Ministry of Digital Affairs, so we are also very involved in the policy that is being developed at the moment in Poland. I think one of the most important issues now is the new law that is being developed, the law that is going to protect children from illegal content, but also harmful content, not only illegal. And of course we are also involved in the whole process of the implementation of the DSA in Poland. We are also promoting the main activities. One of those activities we are promoting also here at our booth, and like Niels said, we invite you very warmly to our booth. We are talking a lot about the Safer Internet Day. This is like a big global event and it will come very soon because it will be on the 11th of February. So you can come to us and we will talk about it more. And here you can see some statistics on how the event looks in Poland. We managed to attract over one and a half million participants last year. So we are trying every year to get more and more impact. Our main mission is, as Sabrina and Niels said, to protect children and to secure their safe experience online. And here you can see how the helpline services work in Poland. So we have three branches of the helpline. We have the helpline for children, which is 116 111. That is available 24 hours a day, 7 days a week, in two languages, Polish and Ukrainian. And among the counselors we have lawyers,
But also what we provide is offline assistance. There is something called Child on the Web Counseling Center and carers, parents can come also to these centers to get some offline assistance. Okay, and among different types of reports that we receive in a helpline, one is the one that Niels picked because we wanted to talk about the most emerging trends that we can observe in our centers. So Niels was talking about extortion and I will focus on cyberbullying, which is one of also the emerging trends, especially in Poland. Okay, and I think all of you know what is cyberbullying. It’s of course a violence carried out by using electronic devices. These are few examples of different kinds of cyberbullying. We have things, situations like happy slapping, so you cause some accident, you attack somebody and then you film that and you put it online. You can do frapping, which is using somebody’s identity if someone forgot to log out. It can be of course stalking, one of the most serious risks that might happen for young people and for adults as well. You can have identity theft and this is the difference between identity theft and frapping, that here someone wants to get some financial benefits from that. And then of course there is a hate speech and I think on this we will focus mostly. And what is the difference between cyberbullying and the regular bullying? Because actually sometimes we go away from saying cyberbullying because sometimes you know it it sounded like something less important than regular bullying but actually it’s even sometimes more important. It harasses the child in the same way. We had here the wide reach, extremely wide reach, sometimes it cannot be stopped. We have this idea of anonymity so it’s easier for someone to be a bully. It’s very often the lack of adult supervision because as you know many of those cyberbullying cases they happen in communicators like whatsapp or messenger and very often like kids have a huge groups of friends of of peers but very often there the cyberbullying happens and the adults not necessarily knows about it. Okay and what is really problematic in Poland right now is that we are on a top list of cases that relate to the cyberbullying. We are on the in some research it’s we are on the fifth place in the European Union sometimes we are even on the first like in a research UK it’s online so it’s like a huge problem in Poland and of course the problem raises and the most of the bullies and the most of the victims these are kids around 13 years old so we can see that this is the moment when we really have to start with all the prophylactics prevention actions activities because this is the moment where it’s absolutely needed and of course why because 13 is a moment when kids go to the social media and I think you all know about the huge and wide discussion about postponing the moment when it will be legal like in Australia they put it to the 16 years old. So now I think we are in a very broad international discussion if the social media should be available from 16 years old because this is when really the problem starts and it is somehow related to this age. And also what Neil said, like the experience it goes slower. So now it’s the average 8 years old who is having alone and for his own mobile phone and then the time races. Yes, now from our research in Poland it’s 5 hours 36 minutes a day that a kid is in front of the screen. And of course the connotation of these two phenomena is not helping with the fighting and preventing the cyber bullying. 
These are Polish statistics so here you can see that over 40% of young people had experienced, not experienced by themselves, but could see online the situation of cyber bullying against their peers. And this you can see what were the reasons. Yes, here you see the physical appearance, clothing, style and these are the issues that very often are underestimated by their parents and carers. Like people think okay what a big deal, yes, but these are the situations, these are the cases that are most often the topics for the cyber bullying. And of course it goes also together with the excessive use of the internet because when the child posts online a photo, even by themselves, yes, and then the photo is not receiving enough likes that was expected or is receiving some, you know, bad comments. Then the child is all the time online and checking, checking, checking, counting the likes. If there is not enough likes then the child even sometimes, you know, delete the photo. So this is also all this, you know, the tension around this causes also the excessive… use of the Internet. What is very important in our work, we talk more and more about empowering children and not so much about, well, we talk about the protection but it’s, we have to change, you know, the accents. We have to more talk about empowerment and this is also what kids tell them by themselves. There was a very good research in Australia, I’m repeating Australia, the second time, but they did a huge job recently and there is a very interesting research and they’ve asked the young people what they need from adults to be more resilient from the cyberbullying and what they said was we need to have higher self-esteem, we need to be able to create like the safe relationship, we need to know how to be assertive, so these are the competences they needed, not necessarily how to protect my, you know, profile, yes, so they didn’t want any technical information from us but they want us to really empower them from this point of view. And what is the problem? I mean, they don’t get enough empowerment now from parents and even they get very bad examples on how to behave online. We have lots of problems with the troll parenting but I think it’s a global problem. I’m not sure if you’ve heard about this cheese challenge and egg challenge. This is something for me, like it’s very sad but it got a huge dissemination. These are the cases when there are parents and the little child crying, this cheese challenge and the child is crying and the parent want to chill the child throwing at him cheese and then it’s like a slice of cheese and then of course the child stopped crying because it’s like shocked, yes, and it’s always filming and put online and it’s like a ha ha ha, look how funny my child reacted. Sometimes the child is crying, sometimes the child is, you know, is frightened. And the second example, the egg challenge, is the parents invite a young girl, boy, a child to cook a cake together. Of course, they prepare all the scene for the filming and then in the moment when you have to add the egg to produce a cake, and of course, the child is not happy because these are the small kids, like five, four, six, so for them cooking with parents, baking with parents is something really cool. And in the moment that you should put the egg to the cake, then you broke the egg on the head of a child. And of course, the child is shocked, yes, because… And also, all these reactions are put online and it’s like funny. 
So, like the examples from the adult world are really not good and it gets like millions of likes and is totally disseminated. And here, talking about the sharenting, because very often the sharenting is also a part of this all cyber bullying process that is happening to our children. This is the Polish research as well. We asked them, do they like when parents post photos about them? Because there is like 70% of parents in Poland post photos of their children online. And you can see that 23% is not happy, like feels embarrassed. And it’s not necessarily must be a photo, you know, like a bad photo, like from the troll parenting photo. It can be a regular photo, but as I said in the few slides before, they really take a lot of care about their appearance online, about the identity that they create. And very often, because of some spontaneously posted photos by parents, they also get cyber bullied. They get bullied by their peers. And because there is still a very big problem in, I would say, in the belief that kids have to the parents, adults, teachers, that we can help. Very little, very little. percent of teenagers really goes to somebody when it happened, when they experience cyberbullying. It’s 38 percent, over 38 percent, who don’t go to anybody because what they hear from the first moment, of course, because all happened because you’ve spent too much time online, yes, so they are afraid that we will take off the mobile phone from them and they will be victimized for all the bad situation that happened. So these are the procedures, these are the situations that we have to educate parents on how to react when child come to us. And the cyberbullying is so present online that it’s even hard for them sometimes to say if they were bullied, if something was already a hate speech or it was just, you know, a joke because they are so surrounded by this kind of situations. And talking about and trying to prevent the cyberbullying, we have to always remember about the three roles that are involved. We have to remember about the bully who is also a child and needs our assistance. We have to remember about the witness who is actually one of the most important actor with all the situation because this is the person who is not that much emotionally involved in a problem and can react. But what is very important, we always have to emphasize for the witness that they have to react only in a way that is safe for them. Sometimes even not putting like to some, you know, bad post is already reaction. Sometimes to go to your parent is a reaction. But we have to always repeat to the child that not all reaction can be safe for them. It’s like, you know, in a first aid on the street the first information that we get for the person who is learning how to give the first aid is first you have to secure your own safety. Yes, you have to check if any car is not coming. Yes, so this is the same thing happens with the witness of the cyberbullying. And we have also, of course, the person who is experiencing bullying and we have to be very careful on the signs that might happen because each child can react totally differently. There might be a child who is excessively checking what is happening online but there might be a child who is not checking at all and just don’t look at the internet totally. But the effects of the long-term cyber bullying might be absolutely horrifying. They can lead to depression situations and even to the self-harm activities. I think I have to rush, yes, we don’t have too much time. 
We have to remember that talking about the prevention we have to always take care of all the environment around the child and we have to do this action permanently. We have to repeat like with all the situations regarding the safety of a child we have to keep repeating what should be done and we as a helpline we try to secure all these pathways. We help the child to cooperate with the police to provide the right evidence. We have to be present at schools. We help them with developing the right procedures. We help with all the collaboration when there is a cyber bullying situation in a school then we help you know to all this what happens between teachers, directors and parents. We give psychological support and we also help in a contact with the contacts to the operators and to know what is happening in our teenagers lives, to know how we can help them. We cooperate with children a lot and we provide them lots of different educational services. We organize a big conference for them, digital youth forum. It will be already the 10th one this June. We cooperate a lot with our youth panel and now we’ve started a totally new initiatives which we call the cyberspots and this is the building that school teams focus on digital issues. So we try to build the teams that can be invited by school authorities for example to work on some policies. These are kids who would tell what they have the biggest problems with and we started a month ago and we now have over 200 schools that joined this action. So it’s absolutely fantastic. We had a meeting with them and we had a thousand young people learning how to become those digital leaders at schools and we got lots of educational materials and we invite all of you to contact us if you would like to learn more and we invite you of course to our booth for next days. Thank you very much.


SABRINA VORBAU: Thank you very much Anna. As said, these were just two country examples from Poland and from Belgium, but of course, as we all know, these are global issues and we have to work together. We heard a couple of times already how important it is to involve children and young people in the conversation and also create a conversation with them. Listen to them and take them seriously into account. That’s why I’m also very happy that we have one of our Better Internet for Kids youth ambassadors with us, Joao. He has been supporting us for over 10 years now, starting out working with the Portuguese Safer Internet Center and with us at Better Internet for Kids when he was 13-14 years old. He’s also part of the youth IGF, and I think it’s very important that Joao is connecting with us today to also hear the perspective of young people on why it is so important to have services like safer internet centers, to have a national helpline, and how we can also encourage children and young people to contact those services and to have a conversation, to share with us what they experience online and how we can help them. Joao, I hope you can hear us. I give you the floor now. Yes, can I just confirm that in the room the sound is okay? Yes, we can hear you.


JOAO PEDRO: Very nice. So, good afternoon, good evening, good morning everyone that is joining on the discussion today, thanks Sabrina for the presentation. Indeed, I’m one of the Better Internet for Kids Youth Ambassador. The idea is to be a bridge between providing awareness to young people and providing feedback from those awareness sessions to the wider Better Internet for Kids Network that collaborates with online platforms and decision makers to include the feedback from a youth perspective. Regarding the helplines and bringing it to the context of the Portuguese Safer Internet Center Helpline. In this case, it’s run by APAV, which is the Victims Support Association for Portugal. And indeed, it plays a crucial role providing these services. It’s actually a link between their expertise with let’s say the offline cases and the online support that it’s now providing for young people. And I think it’s important to reflect on a couple of things. So, helplines are beneficial, indeed, what we see in terms of the interactions with young people is that they are… getting the right support to the claims that they have, either because of cases of dealing with online harassment, harmful content, personal crisis, and that part helplines provide immediate relief. It’s also a valuable tool for educational guidance in terms of seeking digital rights, safe online practices, so helplines are helping bridging that gap. And I think it’s also a tool to provide access to further help, so either a young person or an educator that seeks helplines such as the Portuguese one have at least an opportunity to get the proper recommendation of whose authority or whose institution should they go further to tackle a certain problem. Of course it’s not everything perfect because if we are seeing that the helpline use is increasing, you also have to ensure that it’s widespread enough because if they are having good results it means that we should provide that tool or bring awareness of that tool to more and more young people. And I think that when it comes to the awareness challenges to such helpline services there are a few, so there are indeed a little bit of barriers to access in terms of that some young people may face stigma, don’t know really how to reach these services, and of course the solution has to come from a perspective of enhancing visibility through schools, through the youth programs. social media and it’s it’s in this step that I see the most effort being made especially around dates like the Safer Internet Day where typically the helpline number is provided or disseminated widely across the broader Safer Internet Day campaigns. That part I think it’s very important. From a youth perspective it’s interesting to see that the confidentiality and trust is something that is very tangible, very thin and it’s sometimes hard to breach hard to breach concept. So providing the anonymity while maintaining the reporting obligations of helpline are typically the challenges that someone operating those services might face. The scope of strengthening the helpline’s accessibility and future directions. Actually it’s becoming ever more interesting and understanding what should be done on this part. It’s an ongoing discussion but for instance recognising the hotline side of these services under the trusted flaggers role under the DSA. It will be possibly an interesting way of ensuring quick responses to the illegal online content reporting such as taking down the child sexual abuse material or hate speech. 
Also something that is relevant is of course including feedback from the youth that is interacting with the helplines and I think a best practice has been already shown in the two use cases that we’ve seen, so reporting or decision making based on the numbers and facts that are collective also from the current helpline usage helps of course to improve the helpline quality. And of course wherever it’s possible to expand the reach, I think right now the Portuguese helpline is a good example because it bridges all the offline impacts from the know-how of the Portuguese Association for the Support of the Victims with the online and more scoped environment of youth safety online. And yeah, I would say that’s basically most of my potential contribution. I think it’s important to expand awareness initiatives, normalizing the helpline use, not becoming a stigma and ensuring accessibility, mostly by providing the different forms of contact as also shown in the previous slide.


SABRINA VORBAU: Thank you very much Joao for your intervention. I think it’s very important to have these voices of young people represented specifically at forums like the IGF. Here on the slide you can see and connect with the Safer Internet Centre in your country. Of course this is a European initiative but not exclusively. We do also work with like-minded organizations at the global level. We have a program which is called Safer Internet Centre Plus, and through initiatives like the Global Safer Internet Day you are able to get in touch with us. You can create and act as the Safer Internet Day committee within your country. Safer Internet Day is an international day that we celebrate every year on the second Tuesday in February. It’s a day where we stand together and raise awareness for a safer and a better Internet, and we are also very happy to exchange with you how you can set up a safer Internet Center in your country, and how you can establish these operational infrastructures of helplines and hotlines. We have great expertise there. We are coming to the end of the session, but as my colleagues already told you, we are also represented in the IGF village at the InSafe booth. We will be here throughout this week and we are happy to share more best practices and more information with you. At European level we are also working under the Better Internet for Kids Plus strategy, a policy at European level that is built on three pillars. It’s about youth protection, youth participation and youth empowerment, and this is really something we are trying to incorporate at a very integral level, trying to have the voices of young people in all our actions. So this just leaves me to invite you to also visit us online at betterinternetforkids.europa.eu. As said, we have a resource gallery there where we provide resources in multiple languages, also for teachers and for parents. We have heard from my colleagues how vital it is to not only educate children and young people but also adults, for a better, safer and also more inclusive online behavior. I don’t think we have too much time for questions, but as I said please do come by and visit us at the booth, and thank you very much for joining our session today and we wish you a nice evening. We have five minutes by the way, so if anyone has a question


Audience: Hello, I am Sadat Raman from Bangladesh. I would like to share our idea, because we follow the Safer Internet and Better Internet Centre initiatives, and also last year we celebrated Safer Internet Day in Bangladesh, and also we have a helpline, so we are working for the teenagers in Bangladesh. We know the teenager age range is 13 to 19, so our helpline number is 13, then 2, then 19, so it is very easy to remember, and it is a toll-free national helpline number powered by the young people in Bangladesh, and three of my members have come to join this event. So we would like to collaborate with you and NASK and also Niels from Child Focus. And also in Bangladesh we are trying to launch Amber Alert, so we opened a website called amberalertforbangladesh.org, and we are trying to collect 1 lakh (100,000) signatures, and we will launch this platform in Bangladesh because in Bangladesh children are going missing day by day. So thank you so much, and we hope we will work together for the betterment of teenagers and children. Thank you. I am very sure about that. Thank you.



SABRINA VORBAU

Speech speed

135 words per minute

Speech length

1655 words

Speech time

735 seconds

Safer Internet Centers provide awareness, helplines, hotlines and youth panels

Explanation

Safer Internet Centers are structured with four main components: awareness centers, helplines, hotlines, and youth panels. These centers aim to create a safer and better internet environment for children and young people.


Evidence

The speaker mentions that Safer Internet Centers exist in EU member states plus Norway and Iceland.


Major Discussion Point

Structure and Role of Safer Internet Centers


Agreed with

NIELS VAN PAEMEL


ANNA RYWCZYNSKA


JOAO PEDRO


Agreed on

Importance of Safer Internet Centers


Safer Internet Day as a global awareness initiative

Explanation

Safer Internet Day is an annual international event celebrated on the second Tuesday in February. It aims to raise awareness for a safer and better internet globally.


Evidence

The speaker mentions that the event is celebrated internationally and invites participation from various countries.


Major Discussion Point

International Collaboration



NIELS VAN PAEMEL

Speech speed

169 words per minute

Speech length

2548 words

Speech time

899 seconds

Belgian Safer Internet Center focuses on missing and sexually exploited children

Explanation

The Belgian Safer Internet Center, operated by ChildFocus, primarily deals with issues related to missing and sexually exploited children. Their work has shifted from offline to online threats over the past two decades.


Evidence

The speaker mentions that ChildFocus is the Belgian foundation for missing and sexually exploited children.


Major Discussion Point

Structure and Role of Safer Internet Centers


Agreed with

SABRINA VORBAU


ANNA RYWCZYNSKA


JOAO PEDRO


Agreed on

Importance of Safer Internet Centers


Non-consensual sharing of intimate images is a growing problem

Explanation

The non-consensual sharing of intimate images, particularly among young people, is increasing. This issue often occurs in school or youth club contexts and can lead to further victimization.


Evidence

The speaker presents statistics showing a significant rise in cases from 2018 to 2023.


Major Discussion Point

Current Online Safety Trends and Issues


Agreed with

ANNA RYWCZYNSKA


Agreed on

Rising online safety challenges


Sextortion cases have increased 400% in 5 years

Explanation

Sextortion, where individuals are extorted using their own nude pictures, has seen a dramatic increase. Boys are particularly vulnerable to this form of exploitation.


Evidence

The speaker presents data showing a 400% increase in sextortion cases over five years.


Major Discussion Point

Current Online Safety Trends and Issues


Agreed with

ANNA RYWCZYNSKA


Agreed on

Rising online safety challenges


AI and deepfakes are creating new online safety challenges

Explanation

Artificial Intelligence and deepfake technology are being used to create new forms of online exploitation. This includes the creation of fake nude images and videos, as well as live deepfake technology for grooming.


Evidence

The speaker mentions examples of AI-generated nude images and live deepfake technology being used for grooming.


Major Discussion Point

Current Online Safety Trends and Issues


Agreed with

ANNA RYWCZYNSKA


Agreed on

Rising online safety challenges


Importance of not victim-blaming and addressing gender stereotypes

Explanation

It’s crucial to avoid victim-blaming when dealing with cases of non-consensual image sharing or sextortion. There’s also a need to address gender stereotypes and work more with boys to promote respectful behavior online.


Evidence

The speaker mentions the MenAble project, which aims to enable boys to communicate more respectfully about sexuality online.


Major Discussion Point

Approaches to Prevention and Support


Agreed with

ANNA RYWCZYNSKA


JOAO PEDRO


Agreed on

Need for empowerment and education


Differed with

ANNA RYWCZYNSKA


Differed on

Approach to prevention and support


Need for more data and studies on emerging online risks

Explanation

There is a lack of comprehensive data and studies on emerging online risks, such as deepfakes and AI-generated child sexual abuse material. More research is needed to understand these phenomena and develop effective responses.


Evidence

The speaker mentions conducting one of the first studies on deep nudes and the need for more in-depth research.


Major Discussion Point

Approaches to Prevention and Support


Cybersafety issues are global and require worldwide cooperation

Explanation

Online safety issues, such as sextortion and cyberbullying, are not limited to specific countries but are global problems. Addressing these issues requires international cooperation and shared strategies.


Evidence

The speaker mentions that these trends are observed across Europe and in other parts of the world, including African countries.


Major Discussion Point

International Collaboration



ANNA RYWCZYNSKA

Speech speed

149 words per minute

Speech length

2922 words

Speech time

1169 seconds

Polish Safer Internet Center involves research institute and NGO collaboration

Explanation

The Polish Safer Internet Center is a collaboration between the Empowering Children Foundation (an NGO) and NASK (a National Research Institute). This partnership combines expertise in child safety and cybersecurity.


Evidence

The speaker mentions that NASK is involved in cyber security and safety actions, while the foundation runs helplines.


Major Discussion Point

Structure and Role of Safer Internet Centers


Agreed with

SABRINA VORBAU


NIELS VAN PAEMEL


JOAO PEDRO


Agreed on

Importance of Safer Internet Centers


Cyberbullying is a major issue, especially in Poland

Explanation

Cyberbullying is a significant problem in Poland, with the country ranking high in European statistics. The issue is particularly prevalent among 13-year-olds, coinciding with increased social media use.


Evidence

The speaker cites research placing Poland fifth or even first in the European Union for cyberbullying cases.


Major Discussion Point

Current Online Safety Trends and Issues


Agreed with

NIELS VAN PAEMEL


Agreed on

Rising online safety challenges


Focus on empowering children rather than just protection

Explanation

There is a shift in approach from merely protecting children to empowering them. This involves building self-esteem, creating safe relationships, and teaching assertiveness rather than just focusing on technical protection measures.


Evidence

The speaker cites Australian research where young people expressed the need for higher self-esteem and assertiveness skills to be more resilient against cyberbullying.


Major Discussion Point

Approaches to Prevention and Support


Agreed with

NIELS VAN PAEMEL


JOAO PEDRO


Agreed on

Need for empowerment and education


Differed with

NIELS VAN PAEMEL


Differed on

Approach to prevention and support



JOAO PEDRO

Speech speed

109 words per minute

Speech length

743 words

Speech time

407 seconds

Helplines offer crucial support and guidance for young people online

Explanation

Helplines play a vital role in providing immediate support and guidance to young people facing online issues. They offer relief for cases of online harassment, harmful content, and personal crises, as well as educational guidance on digital rights and safe online practices.


Evidence

The speaker mentions that helplines provide access to further help and proper recommendations for tackling specific problems.


Major Discussion Point

Structure and Role of Safer Internet Centers


Agreed with

SABRINA VORBAU


NIELS VAN PAEMEL


ANNA RYWCZYNSKA


Agreed on

Importance of Safer Internet Centers


Expanding awareness and accessibility of helpline services

Explanation

There is a need to increase awareness and accessibility of helpline services among young people. This involves addressing barriers to access, such as stigma, and enhancing visibility through schools, youth programs, and social media.


Evidence

The speaker mentions efforts to disseminate helpline information during campaigns like Safer Internet Day.


Major Discussion Point

Approaches to Prevention and Support


Agreed with

NIELS VAN PAEMEL


ANNA RYWCZYNSKA


Agreed on

Need for empowerment and education



Audience

Speech speed

127 words per minute

Speech length

195 words

Speech time

91 seconds

Interest in collaborating with European initiatives from other countries

Explanation

There is interest from non-European countries in collaborating with and learning from European online safety initiatives. This includes implementing similar helpline services and awareness campaigns in their own countries.


Evidence

An audience member from Bangladesh shares their experience of implementing a helpline and celebrating Safer Internet Day, expressing interest in further collaboration.


Major Discussion Point

International Collaboration


Agreements

Agreement Points

Importance of Safer Internet Centers

speakers

SABRINA VORBAU


NIELS VAN PAEMEL


ANNA RYWCZYNSKA


JOAO PEDRO


arguments

Safer Internet Centers provide awareness, helplines, hotlines and youth panels


Belgian Safer Internet Center focuses on missing and sexually exploited children


Polish Safer Internet Center involves research institute and NGO collaboration


Helplines offer crucial support and guidance for young people online


summary

All speakers emphasized the crucial role of Safer Internet Centers in providing support, awareness, and resources for online safety.


Rising online safety challenges

speakers

NIELS VAN PAEMEL


ANNA RYWCZYNSKA


arguments

Non-consensual sharing of intimate images is a growing problem


Sextortion cases have increased 400% in 5 years


AI and deepfakes are creating new online safety challenges


Cyberbullying is a major issue, especially in Poland


summary

Speakers highlighted the increasing prevalence of various online safety issues, including non-consensual image sharing, sextortion, AI-related challenges, and cyberbullying.


Need for empowerment and education

speakers

NIELS VAN PAEMEL


ANNA RYWCZYNSKA


JOAO PEDRO


arguments

Importance of not victim-blaming and addressing gender stereotypes


Focus on empowering children rather than just protection


Expanding awareness and accessibility of helpline services


summary

Speakers agreed on the importance of empowering and educating young people, rather than just focusing on protection, to address online safety issues.


Similar Viewpoints

Both speakers emphasized the need for more research and international collaboration to address emerging online safety challenges.

speakers

NIELS VAN PAEMEL


ANNA RYWCZYNSKA


arguments

Need for more data and studies on emerging online risks


Cybersafety issues are global and require worldwide cooperation


Unexpected Consensus

Importance of youth involvement

speakers

SABRINA VORBAU


ANNA RYWCZYNSKA


JOAO PEDRO


arguments

Safer Internet Centers provide awareness, helplines, hotlines and youth panels


Focus on empowering children rather than just protection


Helplines offer crucial support and guidance for young people online


explanation

While not unexpected, there was a strong consensus on the importance of involving youth in online safety initiatives, which was emphasized across different aspects of the discussion.


Overall Assessment

Summary

The speakers showed strong agreement on the importance of Safer Internet Centers, the need to address rising online safety challenges, and the focus on empowering and educating young people. There was also consensus on the need for more research and international collaboration.


Consensus level

High level of consensus among speakers, indicating a shared understanding of key issues and approaches in online safety. This consensus suggests a strong foundation for coordinated efforts in addressing online safety challenges across different countries and organizations.


Differences

Different Viewpoints

Approach to prevention and support

speakers

NIELS VAN PAEMEL


ANNA RYWCZYNSKA


arguments

Importance of not victim-blaming and addressing gender stereotypes


Focus on empowering children rather than just protection


summary

While both speakers emphasize the importance of supporting children, Niels focuses on addressing gender stereotypes and avoiding victim-blaming, while Anna emphasizes empowering children by building self-esteem and teaching assertiveness.


Unexpected Differences

None identified

Overall Assessment

summary

The main areas of disagreement revolve around the specific approaches to prevention and support for children facing online risks.


difference_level

The level of disagreement among the speakers is relatively low. They generally agree on the importance of protecting children online but have slightly different focuses and approaches. This suggests a collaborative environment where various strategies can be implemented to address online safety issues for children.


Partial Agreements

All speakers agree on the importance of addressing online risks for children, but they focus on different aspects: Niels emphasizes the need for more research, Anna highlights the specific issue of cyberbullying, and Joao stresses the role of helplines in providing support.

speakers

NIELS VAN PAEMEL


ANNA RYWCZYNSKA


JOAO PEDRO


arguments

Need for more data and studies on emerging online risks


Cyberbullying is a major issue, especially in Poland


Helplines offer crucial support and guidance for young people online


Takeaways

Key Takeaways

Safer Internet Centers play a crucial role in providing support, education and resources for online safety across Europe


Current online safety trends include increasing cases of non-consensual image sharing, sextortion, cyberbullying, and emerging AI-related risks


Prevention approaches are shifting towards empowering youth rather than just protection


International collaboration is essential to address global cybersafety issues


Youth voices and participation are vital in developing effective online safety initiatives


Resolutions and Action Items

Expand awareness and accessibility of helpline services for young people


Conduct more research and gather data on emerging online risks like AI-generated content


Incorporate youth feedback to improve helpline quality and services


Promote Safer Internet Day globally to raise awareness


Explore recognizing hotlines as trusted flaggers under the Digital Services Act (DSA)


Unresolved Issues

How to effectively address the rapid rise in sextortion cases


Best approaches to educate parents on appropriate online sharing of children’s information


Strategies to encourage more young people to seek help when experiencing online issues


How to adapt prevention messaging in light of new AI-enabled risks


Suggested Compromises

None identified


Thought Provoking Comments

We’re seeing that it’s really on the rise, right? And not only in Belgium; it’s a European phenomenon, and also worldwide.

speaker

Niels van Paemel


reason

This comment highlights the alarming rise of non-consensual sharing of intimate images, framing it as a global issue rather than just a local one.


impact

It shifted the discussion from a country-specific focus to a broader, international perspective on online safety issues.


Research shows that in Belgium, only 15% of victims reach out, and we want to raise that number. And how do we do that? By reassuring children that they are not alone, that they have been the victim of something, and then we can give them the tools not only to reach out, but also to find help and to pursue the takedown of certain images.

speaker

Niels van Paemel


reason

This insight emphasizes the importance of empowering victims and creating a supportive environment for reporting incidents.


impact

It led to a discussion on strategies to encourage reporting and support victims, moving beyond just prevention to focus on response and support.


42% know what deepnudes are. 23% have seen at least one. 13.8% have received one deepnude or more. And 60% of those who know of deepnude apps have used them. It’s a very important figure to know. And 99% of all the victims are girls.

speaker

Niels van Paemel


reason

This comment introduces concrete data on the prevalence and gender dynamics of deep fake technology misuse.


impact

It brought attention to an emerging technological threat and its disproportionate impact on girls, leading to a discussion on gender-based online violence.


We need to tell them, we need to give them the tools to have conversations about this topic and to stop this extreme rise from happening.

speaker

Anna Rywczynska


reason

This comment emphasizes the need for education and empowerment of young people to address online safety issues.


impact

It shifted the focus from protective measures to empowering youth with knowledge and skills, leading to a discussion on educational approaches.


From a youth perspective, it’s interesting to see that confidentiality and trust are something very tangible, very thin, and sometimes a hard concept to grasp. So providing anonymity while maintaining the reporting obligations of a helpline are typically the challenges that someone operating those services might face.

speaker

Joao Pedro


reason

This comment provides valuable insight from a youth perspective on the delicate balance between confidentiality and reporting obligations in helpline services.


impact

It introduced the youth perspective into the discussion, highlighting the importance of trust and anonymity in encouraging young people to seek help.


Overall Assessment

These key comments shaped the discussion by broadening the scope from country-specific issues to global trends, highlighting the importance of empowering victims and educating youth, introducing emerging technological threats, and emphasizing the need for a balanced approach to confidentiality and reporting in helpline services. The discussion evolved from describing problems to exploring solutions, with a strong focus on youth perspectives and gender-specific challenges in online safety.


Follow-up Questions

How can we address the rise of AI-generated child sexual abuse material (CSAM)?

speaker

Niels van Paemel


explanation

This is an emerging challenge that requires new approaches for detection and prevention.


How can we combat the use of live deepfake technology by offenders for grooming and extortion?

speaker

Niels van Paemel


explanation

This new technology makes it easier for perpetrators to deceive and exploit children, requiring updated prevention strategies.


How should we adapt our prevention work and advice to parents regarding sharing photos of children online, given the new risks posed by AI image generation?

speaker

Niels van Paemel


explanation

Traditional advice may no longer be sufficient given the ability to generate abusive content from innocuous photos.


How can we better work with young men and boys to address toxic masculinity and problematic online behavior?

speaker

Niels van Paemel


explanation

This is identified as a key factor in online gender-based violence and exploitation.


Should the minimum age for social media use be raised to 16, as implemented in Australia?

speaker

Anna Rywczynska


explanation

This is part of an ongoing international discussion to address cyberbullying and other online risks for young teens.


How can we improve the reporting and support mechanisms for cyberbullying, given that only a small percentage of teenagers seek help from adults?

speaker

Anna Rywczynska


explanation

The low rate of reporting indicates a need for better trust-building and support systems.


How can helpline services balance the need for anonymity with reporting obligations?

speaker

Joao Pedro


explanation

This is a challenge in maintaining trust while fulfilling legal and ethical responsibilities.


How can the role of helpline hotlines be strengthened under the Digital Services Act’s trusted flagger system?

speaker

Joao Pedro


explanation

This could potentially improve response times to illegal online content reports.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.