WS #209 Multistakeholder Best Practices: NM, GDC, WSIS & Beyond

WS #209 Multistakeholder Best Practices: NM, GDC, WSIS & Beyond

Session at a Glance

Summary

This discussion focused on multi-stakeholder best practices in internet governance, particularly in the context of recent initiatives like NetMundial Plus10 and the Global Digital Compact. Participants explored the challenges and opportunities in strengthening multi-stakeholder engagement using the Internet Governance Forum (IGF) and other processes.


Key points included the need for balanced representation and meaningful participation from all stakeholder groups, including governments, civil society, the private sector, and technical communities. Panelists emphasized the importance of inclusive processes that give voice to diverse perspectives, especially from developing countries and underrepresented groups.


The discussion highlighted tensions between multilateral and multi-stakeholder approaches, with some noting the challenges governments face in global forums. Participants stressed the need for coherent stakeholder processes within groups to enable effective collaboration across sectors.


Several speakers pointed out that multi-stakeholder processes require significant resources and time to be truly effective. The importance of transparency, clear guidelines, and mechanisms to ensure authentic engagement was emphasized.


The role of the IGF in capturing learning and applying best practices was explored, with suggestions to better utilize IGF messages in other forums and improve host country selection. Panelists also discussed how to make processes like the Global Digital Compact more inclusive while recognizing the challenges of balancing different stakeholder interests.


Overall, the discussion underscored the complexity of multi-stakeholder internet governance and the ongoing need to refine and improve collaborative approaches to address evolving digital policy challenges.


Keypoints

Major discussion points:


– The role and effectiveness of multi-stakeholder processes in internet governance


– Challenges and opportunities for improving multi-stakeholder collaboration


– The relationship between multilateral and multi-stakeholder approaches


– How to make the Internet Governance Forum (IGF) more impactful and inclusive


– Implementing lessons from processes like NetMundial and the Global Digital Compact


The overall purpose of the discussion was to examine best practices for multi-stakeholder engagement in internet governance, particularly in light of recent processes like NetMundial+10 and the Global Digital Compact. The goal was to identify gaps, challenges, and opportunities to strengthen multi-stakeholder approaches, especially through the IGF.


The tone of the discussion was largely constructive and reflective. Participants acknowledged both the value and limitations of multi-stakeholder processes. There was a sense of cautious optimism about improving these approaches, balanced with frank discussion of challenges. The tone became more solution-oriented towards the end as participants suggested concrete ways to enhance the IGF and other multi-stakeholder initiatives.


Speakers

– Anriette Esterhuysen: Chair of the Global Network Initiative


– Bruna Martins: Civil society representative, member of the IGF Multi-Stakeholder Advisory Group


– Isabelle Lois: Representative from Ofcom, the Swiss government’s office on communications


– Flavia Alves: Director of International Organizations for META


Additional speakers:


– Tijani Benjama: Civil society representative from Tunisia


– Lina: Representative from Search for Common Ground and the Council on Tech and Social Cohesion


– Dana Kramer: Representative of Youth IGF Canada


– Manal Ismail: Works at the National Telecom Regulatory Authority of Egypt, former chair of ICANN’s Governmental Advisory Committee


– Aziz Hilary: From Morocco


– Arjun Singh Vizoria: Founder of Vizoria Foundation, a civil society organization in India


Full session report

Multi-stakeholder Best Practices in Internet Governance: A Comprehensive Analysis


This discussion focused on multi-stakeholder best practices in internet governance, particularly in the context of recent initiatives like NetMundial Plus10 and the Global Digital Compact. Participants explored the challenges and opportunities in strengthening multi-stakeholder engagement using the Internet Governance Forum (IGF) and other processes.


Role and Effectiveness of Multi-stakeholder Processes


The participants unanimously agreed on the importance of multi-stakeholder processes in internet governance. Bruna Martins emphasized that these processes bring diverse perspectives together and serve to bring civil society voices to the table. Isabelle Lois argued that governments should use their convening power to ensure inclusive processes. Anriette Esterhuysen stressed that multi-stakeholder processes need to survive even when there is serious disagreement, highlighting the need for resilience in these approaches.


However, the effectiveness of current multi-stakeholder processes was a point of contention. Flavia Alves critiqued the Global Digital Compact process for insufficient non-governmental participation, while Anriette Esterhuysen viewed it as an attempt by multilateral institutions to be more inclusive, despite imperfections. This difference in perspective underscores the ongoing challenges in implementing truly effective multi-stakeholder approaches.


NetMundial Plus10 and Sao Paulo Guidelines


Several speakers highlighted the importance of the NetMundial Plus10 initiative and the Sao Paulo guidelines as frameworks for effective multi-stakeholder engagement. Bruna Martins suggested that these guidelines provide a roadmap for implementation and could serve as a ‘litmus test’ for evaluating the effectiveness of multi-stakeholder processes. The Sao Paulo guidelines, in particular, were noted for their emphasis on inclusivity, transparency, and accountability in internet governance processes.


Challenges and Opportunities for Improvement in the IGF


Participants identified several areas for improvement in the IGF. Isabelle Lois argued that IGF messages should be better utilized in other forums and decision-making processes, highlighting the need to increase the impact of these discussions. Bruna Martins emphasized the importance of selecting host countries that ensure safety and inclusivity for all participants. Flavia Alves stressed the need for mapping evolving issues to keep multi-stakeholder processes relevant, addressing the need for adaptability in these approaches.


An audience member raised an important question about the participation of small-scale organizations in the IGF, highlighting the challenges faced by smaller entities in engaging with global internet governance processes. This sparked a discussion on the need for more inclusive and accessible participation mechanisms.


Relationship Between Multi-stakeholder and Multilateral Processes


A key point of discussion was the tension between multilateralism and multi-stakeholderism, particularly in processes like the Global Digital Compact. Bruna Martins highlighted this tension, while Anriette Esterhuysen noted that governments are more comfortable with multi-stakeholder approaches nationally than globally. An audience member pointed out the gradual opening up of previously closed governmental processes, as seen in ICANN, suggesting potential for progress in this area.


Manal Ismail and Isabelle Lois emphasized the crucial role of governments in multi-stakeholder processes, noting that their involvement is essential for implementing outcomes and ensuring inclusive participation.


Thought-Provoking Insights


Several comments during the discussion challenged conventional thinking and added nuance to the conversation. Anriette Esterhuysen cautioned against romanticizing past multi-stakeholder processes, reminding participants of the difficulties in reaching consensus even within stakeholder groups. This insight highlighted the complexity of these processes and the need for coherent internal processes within stakeholder groups.


Lina from Search for Common Ground raised a provocative question about the honesty of discussions regarding the relationships between governments, the UN, civil society, and big tech. She suggested that litigation, regulations threatening fines, or extreme reputational damage were the primary drivers of change, challenging the effectiveness of multi-stakeholder forums.


Isabelle Lois offered an important perspective on inclusion, emphasizing that including more stakeholders does not remove power from those already involved. This reframing of inclusion as a non-zero-sum game could potentially make the concept more palatable to those resistant to change.


Unresolved Issues and Future Directions


Despite the productive discussion, several issues remained unresolved. These include balancing multilateral and multi-stakeholder approaches in global internet governance, ensuring authentic multi-stakeholder processes rather than just rhetoric, addressing power imbalances between different stakeholder groups, making multi-stakeholder processes more resource-efficient while maintaining effectiveness, and better integrating perspectives from developing countries and smaller organizations in global processes.


The discussion generated several suggestions for future action, including using the Sao Paulo multi-stakeholder guidelines as an evaluation tool, better utilizing IGF messages in other forums, reviewing and refining the IGF’s intersessional work models, and starting early preparations for the upcoming review of the IGF mandate.


Conclusion


The discussion underscored the complexity of multi-stakeholder internet governance and the ongoing need to refine and improve collaborative approaches to address evolving digital policy challenges. While there was broad agreement on the importance of multi-stakeholder processes, the conversation revealed nuanced perspectives on their implementation and effectiveness. Moving forward, the challenge lies in translating these insights into concrete improvements in multi-stakeholder engagement, ensuring that these processes remain relevant, inclusive, and impactful in shaping the future of internet governance.


Session Transcript

Anriette Esterhuysen: We are starting two minutes late, which is unacceptable, but I hope you forgive us. It’s day three of the idea, so I think we are fading a little bit. My name is Anriet Esterhosen. I’m chaired by the Global Network Initiative, along with other partners. None of them are here, but I hope we do them justice. The topic that we’ll be discussing is probably a topic that I’ve certainly been in multiple sessions at this IGF, to talk about multi-stakeholder best practices, but particularly how we can understand multi-stakeholder best practices in the context of the NetMundial Plus10, which took place in May this year and which produced this document. It’s not an official document produced by the UN, but it’s a document that was created in a bottom-up way, which is, I think, really owned by all the people that were part of that process. I’m going to ask some of our panellists. I’m going to stray a little bit from the script. I hope I have your permission. And then also the Global Digital Compact, this new process, which became formalised at the Summit of the Future in September this year in New York, which gives very strong endorsement for the multi-stakeholder approach. And then, of course, the World Summit on the Information Society, the UN process that I think consolidated and mainstreamed the idea of multi-stakeholder collaboration as being, I think, generally a good idea. But in the case of the Information Society, the Internet and digitalisation, it’s kind of non-negotiable. You can’t actually do anything really effectively in development and digital and human rights and inclusion if you do not have effective collaboration and participation from the private sector, the technical community, governments and civil society. And I know we don’t always have them as a separate group, but the academic and research community, think tanks and researchers all around the world. So what we want to achieve, I want to check if Ramsha from JNI is online yet. Ramsha, I’m looking for you online. I don’t see you yet. But just to emphasize what the goal of this workshop is, we really want to look at where are the gaps and what are the key challenges and what are the opportunities that we can take from all these processes that I’ve just mentioned to really strengthen multi-stakeholder engagement using the IGF and coordinating and synergizing how we strengthen multi-stakeholder engagement. I think what the NetModial guidelines told us is that it’s not just in this multi-stakeholder arena that we need to strengthen our processes. Oh, fantastic, Flavia, welcome. It’s also in the multilateral space. But just to get us started and I think also to make sure there’s a level playing field. And I think everyone, I want you to, I’m not going to, I will open the audience, but I want people to raise their hands. I think we are on day three of the IGF. You’ve discussed many of these issues. So I’m going to ask people in the room to interrupt. If you want to say something at any point, put up your hand. As long as you’re brief, it’s absolutely fine. and so that we can have very dynamic interaction between the panelists and the room, assuming that’s okay. But I think let’s just, I want to start and ask Bruna. I said earlier that the NetMundial plus 10 and the Sao Paolo guidelines has very strong ownership from those who created it. It might also have had gaps, but can you just tell everyone in very brief terms, what is, what was NetMundial plus 10? What are the Sao Paolo guidelines? And why is there both on the one hand, strong ownership, but on the other hand, also feeling that it’s not official enough?


Bruna Martins: Thanks, Henrietta, and thanks for the invitation. I think that looking back at NetMundial plus 10, it was a community oriented and steered process, right? NetMundial was multisakeholder from its very beginning. It’s a initiative that was shifted towards by the Brazilians, Nick.br, but in the previous edition by the government, this year we had a huge support from the government yet. In terms of the ownership, I think it’s because it’s set for itself the challenge of addressing all of the gaps we perceived from the process and attempted to improving that. The Sao Paolo guidelines, they are a set of principles and process steps as a how-to for effective implementation of multisakeholder in internet governance and digital policy, right? And the goal, and in doing so, we must also look forward to implementing openness, inclusiveness, and agility in internet governance, as well as the need for all stakeholders to be well-informed. So I think, in very brief words, I think the ownership comes from that. It is a community bottom-up initiative. It lies a lot on the success of the first edition of the initiative. And last but not least, it aims in addressing all of the gaps that we saw throughout the GDC process. Thanks.


Anriette Esterhuysen: Thanks very much for that, Bruna. And I didn’t introduce my panel. I was waiting for Flavia, but just that Bruna is from civil society. She can tell you more about herself. She’s a member of the IGF Multi-Stakeholder Advisory Group. So one of the people that organized this event and that we are part of and also was on the, what was the body? The High-Level Executive Committee. The High-Level Executive Committee of NetMundial Plus 10. So just to jump into the kind of substance of multi-stakeholder collaboration. I mean, Isabel, it’s become quite a buzzword. We talk about it. Some governments are more explicit about how they support it, how they use it. Some governments are more cautious about how and when they use it. But from your perspective, Isabel is with Ofcom, the Swiss government’s office on communications. So very much right inside government, but very active in multiple multi-stakeholder processes. But what do you think governments should do to deliver on this promise and potential of multi-stakeholder processes? And what do you think they are, certainly in your experience, what are they doing well? And what do you think they’re not doing well?


Isabelle Lois: Thank you, Henriette. That is a great question and a very difficult question to answer. I think government can and should play a vital role in assuring that we have inclusive and open processes. I think the main point and our main responsibility as governments is to ensure that all of the relevant voices are being heard and are being listened to and are being taken into account. But the really difficult part is how can we do that and where is our capacity to do that highlighted. And I think governments have often a very strong convening power. We can sometimes set details on who will be included in a room in a discussion, and this is a power that we should use to make sure that everybody can be part of the conversation. So that means, on one part, being present in multi-stakeholder spaces, for example, at the IGF, be that the international IGF, the regional ones, the national ones, so being very active in those processes, making sure that governments are also there, and it’s not just other stakeholders talking in between themselves, being part of the conversation, but also making sure within other structures and other for us that where there is a space and a need to include stakeholders, we make sure that they are all in the room with us. So I think that would be, for me, the main role and the main possibility for governments to do, and of course this is much easier said than done, and there is a way forward. I think at least Switzerland tries its best to include all stakeholders in our discussions, in our conversations, make sure that if we are planning a panel, we are not just inviting governments to speak, and we are using our convening power as best as we can. There is more that we can do, and I think we should push for that and include that in all processes. I think that is the main point I would say here.


Anriette Esterhuysen: I want to ask you just a follow-up question, and anyone else is welcome to respond as well. What is the difference between a government facilitating multi-stakeholder cooperation and a government living up to its constitutional obligations for public participation and policymaking?


Isabelle Lois: I think that is a very good question. I think a lot there enters in how does government include this perspective nationally, and how do they make sure that this is also included? internationally. And I think this is a distinction that is not always very easy to navigate, I mean, for governments within their country and then internationally. In Switzerland we have a very strong will and we have a lot of public participation at the national level. We do a lot of consultations, we have a semi-direct democracy, so we vote on many issues and we have everyone sort of being part of the conversation at the national level. This is something we strive for and work for. And then at the international plane, this is where it becomes a bit more complicated. Because, I mean, first we have to find an agreement between governments and between stakeholders, who should be included, how should we include them, what is meaningful, how do we make sure everybody’s in the room. And this is where we think that the Sao Paulo Multi-Stakeholder Guidelines is a very useful tool to not just talk the talk, but actually walk the walk. It’s a way for us to see what are the main questions we should ask ourselves, how do we make sure that these principles that we find valuable and necessary, how can we actually use them. So what should we include, how should we include, how do we make sure that if there is a power imbalance, we have thought about it and try to mitigate it as best as possible. Of course, it will never be perfect, but we can do better. And now we have a sort of roadmap on how we can do better. And I think this is very useful. This is why it’s such an important document to read and to include in our processes.


Anriette Esterhuysen: Thanks very much for that response, Isabel. I think, I mean, speaking from civil society and someone who does a lot of, not all, but a lot of my work in Africa, what you said actually, I think, mirrors, I think many governments who have some reservations about the multi-stakeholder approach don’t really have it about working at national level. They’re much more comfortable, they work very closely with the private sector at national level. Civil society sometimes thinks they work too. too closely with the private sector at national level. And they also collaborate with civil society and grassroots organization. It’s when you get to a global forum that there’s more caution about that multi-stakeholder approach. And I think that’s exacerbated by the fact that many developing country governments already feel fairly disempowered in global arenas. And when they feel they have to not just be effective and influential in relation to countries that are much more powerful and rich and influential that they are, but also deal with the multi-stakeholder community, it is quite challenging. But Bruna, as a member of the MAG, what are the lessons? And the MAG is a multi-stakeholder advisory group. It’s supposed to be perfect. And it’s been going for a long time. This is the, how many of IGF? 19th IGF. What do you think we can learn from the IGF? And in applying the multi-stakeholder approach, what are we doing wrong? What are we doing right?


Bruna Martins: Maybe I’ll start by saying that I think 2024 has been one of those inflection points, right? Or years where the internet governance space was all sorts of crazy or dynamic in that sense. Everyone was talking about with the CDC pact for future, what is gonna happen with the IGF, how ICANN is gonna react to those spaces, what happens with the ITF and many of those things and how all of those missions or questions would be integrated, right? We had a lot of meaningful processes. We had a lot of all of them taking place at the same time. And I think that some of those spaces like the GDC, in my personal opinion, they have presented a rather serious risk for the way we do things at the IGF, which is bottom-up multi-stakeholder and ensuring everyone has a say and has a microphone above all. right? And coming back to the IGF, I would say that this is the main value of this space that everyone is here, gets to come here or gets to join sessions remotely, given that the remote participation is working. And at the same time, this is a space that relies a lot on the diversity of perspectives and not just in terms of the difference of opinions, but the difference in terms of backgrounds and expertise. This is a space where you hear to people from the Pacific, from Brazil like me, or Tanzania, talking about different aspects around internet governance. And to me, that’s one of the core aspects. So the big diversity around stakeholders, perspectives, backgrounds, and so on. And I would say that this is what makes the IGF one of the primary spaces for internet governance and digital cooperation related issues. Because over the course of almost 20 years, the space has been leveraging on all of these vast community experts and expertises in order to move forward and to evolve its model. Back to the challenges, I would say, to put it more bluntly, I would say that the tension to what the GDC seems to have catered between multilateralism and multi-stakeholderism is one of the challenges, right? And that is because there has always been somehow a clear ask for some member states for more silo discussions or exclusive mechanisms. And the point is that, right, we need to balance those two expectations. It’s something that the São Paulo multi-stakeholder guidelines try to do by making some waves or some, you know, signalization to the multilateral spaces, but one should not overcome the other. We should achieve for balanced spaces in that sense. So maybe I’ll stop here, Rietz. Thanks. Thanks very much. Flavia, let’s go to you. Flavia is from, oh,


Anriette Esterhuysen: did you want to? Oh, sorry. Thanks a lot. Flavia is from META and META has really invested in time and people in participating in many of these spaces. Picking up from what Rune said about this being a year of, I guess you use the term inflection point, we had the culmination of the summit of the future, the global digital compact, and we have the WSIS plus 20 process well underway now, and of course we have the IGF. How have you participated as a company in this process and what, from your perspective, what works and what doesn’t work?


Flavia Alves: Sure. Thanks. Hi, everyone. I’m Flavia Alves, Director of International Organizations for META, and I have been doing an internet governance and multi-stakeholderism processes for the past 20 years. From the 19 IGFs, I think I have been on half of them, so maybe a good chunk of IGFs. As META, we believe it’s important to have a level playing field that all stakeholders should be part of processes that deal with internet governance. Historically, we are in the pick, and I agree completely with Rune, we are in the pick of diverse, of multilateral and multi-stakeholder processes. dealing with issues related to internet governance. If we go back to WSUS 2006, or WSUS Plus 10, or before WSUS Plus 10, there was a clear division between multistakeholderism and multilateralism, and what do we do in multistakeholderism as opposed to multilateralism. One of them was internet governance, and that’s why the IGF was created. Internet governance issues were supposed to be treated on a level playing field where all tech communities, civil society, private sector, governments, and international organizations would have a voice. Right now, we are dealing with processes where private sector might not have had voices, including tech communities and civil society. We are looking forward to see what the UN Global Digital Compact implementation is going to be, but the reality is that through the process, which is multilateral processes, which I should have said in the beginning, there is a difference between multilateral processes that once you have input from other stakeholders through multistakeholder processes. In the case of multilateral processes, I believe that the Global Digital Compact could have got a little bit more of participation from civil society, private sector, and tech community, and then be transparent on how those comments, or et cetera, were intaked and uptaken by the Global Digital Compact. We participated at the WSUS Plus 10 in the past, and there was an opening processes for consultations. There were consultations that were taken into account into a final document, and then a final document for folks to have comments on. Everything is still on the UN website. We have dates and meetings. We were in the room during WSUS Plus 10. I would have hoped that the Global Digital Compact was similar, and then I’m hoping the WSUS Plus 20 will be similar. We are now, we have an opportunity, and I think META is going to try as much as possible to be together with tech community and civil society to be part of the WSUS Plus 20 process, to work with the co-facility. to make sure there are consultations, that we are in the room and that we are providing comments through the several documents that are going to come. And I think we have those opportunities through the several other mood stakeholder process that we have in Sudan. There is the ICANN meeting, there will be other conference, but there is also IGF in Norway before the WSIS. And so I would invite this community for us to work closely together to see how and what do we want from WSIS plus 2020. There is also the discussion are we going to be able to build upon what WSIS plus 2010 has agreed on that resolution? Are we open for new comments? And then we need to map up with issues that we wanted to address as a community, as a multi-stakeholder community. What are the issues that are there that we should be re-addressing now? There are issues that unfortunately I think we’re gonna have a lot of challenging, challenged conversations should be able to agree on. To say the least, one on internet governance or another on human rights. However, there is the renewal of the IGF and I think we obviously we all want to renew the IGF, I would assume. The question here for this group is also how do we make the IGF more, even more relevant for others so that we can have more governments present, more civil society and even more colleagues of ours and other tech peers, private sector community present here. META is committed to the IGF and then through the years as the IGF changed location etc, we increased our participation back again after COVID. This year we had presence like a global diverse delegation from all over the regions but Europe and so I’m at Asia, LATAM, NORAM as well as content experts. So we had safety, privacy and I talking to stakeholders in every little corner here because we believe it’s important for us to exchange and the power of IGF among all is also the convening power that it brings. So I hope that we can continue that spirit and we can continue to invest on those processes but together. If we go in silos just as Bruno just said, governments sometimes want to work in silos, I think the other communities we should come together. The tech communities, civil society, private sector just as we do at the ITU, a multilateral firm where we all have a seat. I guess I’ll stop there


Anriette Esterhuysen: otherwise I could go for it for years. I mean it all makes sense but I think what I don’t hear is what does it mean to not work in silos? What does it mean to all come together? Do we all just come to the IGF? It’s a multi-stakeholder space. We all sit together, we talk together but does it make it, do we do we do we really effectively are we effectively able to engage about this is where there’s a common interest, this is where there’s a divergent interest, and do we come together as sectors or do we come together as individual companies, individual governments, you know individual civil society organizations. You know I think we sometimes we romanticize the past of the WSIS and the wonderful WSIS multi-stakeholder process. What we forget, those of us who were there at the time, like Tijani and myself, is that we had bureaus. We had a civil society bureau, we had a private sector bureau, and governments of course have to negotiate with one another. And we had within civil society before every opportunity to give an input on an item of the agenda, we had to reach internal consensus, and it was very very difficult. But we had to, and we were given the space by the WSIS process to meet, and we were forced to reach consensus, and then our consensus statement was given to governments, and governments took our consensus statements quite seriously. The same thing with the private sector, you did not have individual companies submitting their views, businesses had to work together and decide these are our priorities. And I think we sometimes forget that, that to have effective multi-stakeholder collaboration, you need coherent stakeholder processes within those stakeholder groups. And I think the same applies for regional multi-stakeholder processes. For Africa to have a strong voice in the global IGF or in the WSIS, Africa has to have a strong regional multi-stakeholder process, but it also needs a strong regional multilateral process. So I’m trying to unpack a little bit, how do we, I think we all believe in this modality, we believe in the multi-stakeholder approach, but I think we recognize that it needs to be better. I think NetMundial, Sao Paulo Guidelines is trying to make us do that. And I guess my final challenge, and I want you all to react to this, is that it takes resources. I think sometimes we look at the multi-stakeholder approach as a more cost-effective approach, because we put everyone in the same space, but is effective multi-stakeholder processes not also actually quite resource and time-intensive? But I’ve now challenged the panel, but I want to open it to the room and also online. If there’s anyone who wants to ask a question, or make a comment, and then we’ll go back to our panel. And Tijani, please, can we have a mic? Can we ask one of our… Excuse me, the volunteer on her cell phone in the back of the room. Sorry, can you help us with the microphone, please? Thank you so much, and very much… Tijani, just introduce yourself and be brief.


Audience: Okay, my name is Tijani Benjama. I am from Tunisia, civil society, from the beginning. And yet, I really thank you for asking how, what does it mean working in silos? We know that governments want to work in silos, but what about the other stakeholders? Are there any kind or any aspect of multi-stakeholder in their work? Do they consult with civil society? Do they consult with governments? This is a very important point. When we speak about multi-stakeholder model, we speak about it for all the stakeholders, not only for the governments. Thank you.


Anriette Esterhuysen: Any other comments? Any other comments from the question? Please, go ahead, from the floor. Is that working?


Audience: Hi, my name is Lina. I’m with Search for Common Ground, a peace-building organization, and the Council on Tech and Social Cohesion. I wonder whether or not we’re being honest enough about the relationship between governments, the UN, civil society, and big tech. Because it feels like the only things that are actually making things move is litigation, certain regulations that threaten fines, or extreme reputational damage. And sometimes I’m just not sure that these kinds of forums are really raising the issue. And I think it is changed, right? We have billions of dollars of lobby funds going to countries that are trying to move the needle around certain regulations so that those regulations don’t happen. And I’m not seeing necessarily that big tech is wanting a coherence from a regulatory standpoint. And just to give an illustration of what I’m talking about, we’ve seen that when it comes to online safety, protection of women and children, the kinds of things that are on many panels here, that this information has been known by the companies for a long time. And yet, they’re waiting until regulations in Europe force them to do different things. Meanwhile, the global south is not benefiting from any of those changes and protections. So I’m really trying to just see whether or not the multi-stakeholder model isn’t being threatened by this. And are we being honest about that? Thank you. Hello, Dana Kramer for The Record, representing my youth, R.I. Youth IGF Canada. I’m curious about the panel. Can you hear me? Okay, sorry. It seems to be cutting in and out of my own. I’m curious if the panel can speak to implementation of the GDC and where it could be implemented. So kind of building off of the last question about IGFs, are we seeing that practical element? And I’m wondering if the panel can maybe speak to points of, would the IGF be the best place to implement the GDC so that there’s an action-oriented outcome within it for some of those principles, within the document for more safe internet? Sorry, I’m just building off of your question there. But where we can see some of this impact for multi-stakeholderism, because as mentioned earlier with the resource constraints, it’s easy when we’re all kind of coming together that this would be the most appropriate venue. Thank you. Thanks, Dana. And we have one more comment from online, and then I’ll ask you to respond. Manal Ismail, let me just see if I can unmute you. I can unmute you. So please go ahead and introduce yourself. Manal is someone with a very deep track record in the multi-stakeholder process. No, I can’t hear you. Can anyone else hear Manal? We have a remote speaker that’s trying to speak, Manal Ismail. We can’t hear her. I have unmuted her. Manal, try now. Sometimes the audio is going to the table but not to the speakers neither here So if you guys can change it and Manal you can type your comment and I will read it. Okay, I Will look in the chat in the meantime Let’s just oh she’s speaking but we can’t hear you Let’s see if I can unmute you again last try And we have one more question in the room and and and Manal just type your comment, I’m so sorry we can’t hear you The the remote participants can hear you but those of us in the room can’t hear you Whose hand was it Aziz? Please go ahead. Yes. I am Aziz Hilary from Morocco. I just want to add one quick question about What mechanism or criteria? can ensure that multi-stakeholders Is authentic And not just a rhetoric and just words What mechanism and criteria we can apply to ensure that multi-stakeholders is authentic?


Anriette Esterhuysen: They’re really tough that’s a tough question and It’s okay I think Manal says we should go ahead But I really do ask our technical team to try and make sure that our remote participants can participate and Here’s quite a wide range of Reactions there and challenging questions Who wants to go first?


Bruna Martins: Guess I’ll go to Lina’s point about the multi-stakeholderism and and whether or not it works or it’s been implemented Brazil is one of those countries, right that has been championing the multi-stakeholder model into policymaking into law enforcement and in some of those things But again, we must not conflate the issues right the IGF is not a regulatory body the IGF the IJF is a convening space for the discussion of ideas and it doesn’t have the interesting that they are muting everyone now. Can we please cut off the, but just to say that the IJF has it like, it’s a, it’s general, like general or initial idea of being a convening space for different thoughts and different approaches, but in any sense, can we please stop the interference on the microphone? But in any case, Brazil has been implementing that and just to quote two examples, we have the civil rights framework for the internet and also our Data Protection Act, which were all discussed and co-written by a group of stakeholders that were convened by the rapporteurs in the parliament and which main idea was to make sure everyone had its position. And the point that I always kind of mention when there is this kind of tension between big tech and the rest put at the table is that when we talk about policy processes, governments talk to business because they have the financial interest. Governments talk to business, to other stakeholders, technical community, because of different interests, but there is nothing that makes them talk to civil society depending on where you’re coming from and what country you’re going to. Obviously, if you master participation mechanisms and so on, that’s one thing, but there’s literally nothing that obliges governments to go to end users and the multi-stakeholder model serves its purpose that is bringing civil society to the table and making sure that here, it’s not a financial interest that’s at play, but the needs for including everyone above all. So maybe that’s kind of what I go,


Anriette Esterhuysen: but yeah, I’ll stop. Flavia. Hi, thank you.


Flavia Alves: There are a couple of issues that I would like to address. Just picking up on this last one just so that we get it first. First of all, we as Meta, and I won’t speak for other tech companies, are highly supportive. or harmonization and interoperable approach of key critical issues. We also comply with regulation globally around the world, and we appreciate processes that are either interoperable or harmonized. In a sense, right now, you might be very well familiar with the Global Digital GDPR, also EU AI Act, or Digital Services Act, or Digital Market Access Act, and so there are DMA, DSI, there are all of these processes that we were part of the process as they were developing, and now we are working together with governments to try to implement it as much as possible. So I would say we are supportive of regulatory making processes that are open, and then provide processes for us to provide our comments and together develop documents or regulations, as we said. META has always been proactive, supportive of regulation, particularly because we don’t want to be the ones trying to and having to determine what we should or should not have in the internet. On the safety, here at IGF, we have a child safety group that has been here for years, I think right now might have closed, and from that, we developed a community that today have continued working together on safety matters and have several different groups addressing online safety, particularly child safety issues. I can send you some of those details, my digital safety person is around here too, but that’s something that I wanted to make sure you understand. This convening helps us develop in the community to understand what are the issues and how can we address it together with tech companies. We have, well, several other groups as well. From the IGF perspective, and then coming back to Henriette and your comment, I do agree that sectors need to, together, come up with consensus. For me, I cannot picture, because I wasn’t at WSIS, but I cannot picture another process that worked as best as NET Mundial, the first and the second. So it was NET Mundial 2014 and NET World 2014. Mundial plus 10. I remember perfectly most of us there, having like the head of civil society, have everyone on the same level, and even on the negotiations, we each had a room, we each had processes, and then we had to come up with a consensus on a single document, with governments on the same level. Now, the document exists, and then we review it like last year. I think we should use this base for other processes, and perhaps that’s where we want to go. In fact, an interesting thing about NetMundial is that, imagine a room like this, but it’s very, very big, and then you have different microphones, one for civil society, one for government, one for business, one for the technical community, and they have to line up, but of course there are only so many speakers, civil society usually has loads, and as a result, we as civil society had to negotiate. We had a WhatsApp group, we had a Google doc, so that we could prioritize what, you know, we only had three opportunities to speak, so we had to prioritize. So I think in a way, it did capture that combination of stakeholders having to collaborate, as well as be on a level playing field. But you know,


Anriette Esterhuysen: just I’m going to give the mic to Manal, but I also want to say, isn’t the true test to respond to the comment from Search for Common Ground, isn’t the true test of an effective multi-stakeholder process that it should survive, even if there is serious disagreement on how to regulate, and what to regulate, by whom? Isn’t that ultimately what shows us that our multi-stakeholder processes has matured, that we don’t abandon them when we reach points of conflict? Same with governments that have different perceptions, different understandings of human rights, and compliance of human rights. Should we stop working with them? Because we disagree, but that’s another challenge. And Manal, please, you can, I think it works now. the team has sorted the problem, so please go ahead and share your experience. Hello everyone, can you hear me now, Andriette? We can.


Audience: Excellent, thank you. So, just very quickly, I was triggered by your comment that governments partner with civil society. Please introduce yourself, sorry, I didn’t introduce you yet. Sure, sure, I’m sorry. This is Manal Ismail. I work at the National Telecom Regulatory Authority of Egypt, and I’ve been participating to almost all the IGF meetings. And in ICANN, I participated to the represented Egypt on the governmental advisory committee in different capacities, last of which was chairing the committee. And I just wanted to share the experience of government’s participation to the governmental advisory committee of ICANN, and as said, I was triggered by your comment, Andriette, that governments collaborate and partnership with civil society and private sector at the national level, but they are more cautious globally. And I think this could be attributed to, if I’m participating in an individual capacity, it’s more easy and more flexible to just speak my mind. When someone is participating on behalf of the country, it is more difficult to speak up without really being prepared and consulted at the national level. We started the very first meeting I tried to participate to. I found the room was closed with a key, so it was a really closed government meeting. But over the years, we started opening up gradually. We opened up certain sessions, then all the sessions except the communicate drafting. But now all the meetings are open, including the communicate drafting. And I think a few things that help is, for example, sharing the topics and everything in advance so people can prepare and consult at the national level before they come so they can speak more freely in public. And also availing in real time interpretation also helped because sometimes the language is a barrier and people are very cautious. careful in choosing each and every word, because it’s going to be attributed to their governments and their country. I’m cautious of time, I leave it at this, but just wanted to share with you that after we had very close meeting with AKI, now we’re having all the meetings open, and we are also engaging with other stakeholder groups at ICANN, and thus benefiting from the multi-stakeholder nature of the organization. Previously, we were meeting in silos, all the stakeholders, but not in one meeting. I leave it at this. Thank you, Andrea, for the opportunity.


Anriette Esterhuysen: Thanks very much for sharing that, Manal, and I think it’s a very good example of how one learns incrementally from processes. We have about 15 minutes left, and we want to look at what role the IJF can play in capturing this learning, capturing best practices, and applying them. I think we should also reflect, there’s been a lot of talk about the Global Digital Compact, but I’ve also heard many people say that it’s one of the most inclusive and collaborative processes that has been run from within the UN General Assembly. I think I felt frustrated by it, but I also sometimes would talk to the co-facilitators and see how much additional work they had from normally just facilitating a negotiation between members of the UN General Assembly. They also tried to get in all this other stakeholder input. It was imperfect, but there was a serious attempt. How do you think we can make these processes better? How do we use the Sao Paulo guidelines? What do you see the IJF doing concretely? Maybe it demonstrates, but maybe it can also innovate in making us get away from the happy, wonderful, multi-stakeholder community. to actually having deeper engagement that produces more concrete policy outcomes that might not always be consensus-based, but that are serving the broader public interest in the best possible way and the internet. So yeah, I know it’s a very long question, but I know you’ve been thinking about this and you are in these spaces. So let’s start with you, Isabel.


Isabelle Lois: Thank you. Okay, thank you for the question. Very, very long and there’s a lot of things. I just wanted to add one point on what we’ve said before that I think is very important. When we talk about inclusion, inclusion does not remove the power from the people who are already in the room, opening up to more stakeholders, more people, is not removing from those who are already there. And I think this is something that we need to be aware of. And I think this is something we have to remember and underline in all of the processes. So this was just my first little point that I think is important to highlight. On what we can do and how we can use what has been, we have 20 years of experience of trying to be as multi-stakeholder as possible and try to be better. We now have some guidelines on how we can make it effective. And I think this is something we should use to not have moments where we believe a process or is multi-stakeholder just by name because we name it, but without maybe actually being it, it’s sort of a whitewashing just because we’re using this buzz term and not actually living it up. And I think this is something that we now have some sort of litmus test that we can use with the Sopaldo guidelines of checking, is this truly multi-stakeholder or is this just called a multi-stakeholder process? So I think this is one of the points that we can emphasis on. And for the IGF specifically, it is difficult to see what it could do. And I think there are probably many, many ideas. But one of the points I would like to highlight is on the messages. We have messages at the end of every IGF. And, of course, these are not adopted by consensus. It’s a sort of summary of what has been discussed. But it gives us a very good knowledge of what has been shared, what are the issues that are raised, what are the opinions that have been raised in the different rooms. And I think we could do much better in using those messages in other forums, bringing them, highlighting them, saying, okay, this was discussed at the IGF, these issues were identified, and then bringing them in the other conversation, in the other rooms where there might be regulation or decision-making. Because the IGF is not a regulatory body. This is very important to highlight. But we are coming up with new ideas that might then just be lost in a document that is not read as much. So I think this is something where we can actively, we have the opinions of the different stakeholders, they are concretely written down, and we should use this more. This would be my little point. I’m happy to give it to you.


Bruna Martins: I think I’ll start with the idea that for the upcoming hosts, we should also make sure that the host country selection process takes into account safety of participants. You know, comfort around participants and whether or not the community, or whether or not the selection will result in one part of the community being less present, right? My stakeholder group is one that’s not present this year, or present in much smaller numbers, and it’s one of the main stakeholders within the IGF space. To anyone that’s here for the first time, this space is much more lively. I do miss my colleagues from Latin America. society and many other spaces in this broader conversation. So maybe looking at the IJF, making sure that the host country selection takes into account new aspects around safety of participants and so on is one thing. And lastly, I would just echo some of the guidelines, not the guidelines, but the suggestions issued by the MAG Working Group Strategy, because we just issued a vision document for the IJF looking into 2025. And a couple of the recommendations, they go around making sure the next year’s events takes a lot of makes a couple of discussions on how to improve the IJF mandates, a couple more things on making sure that the IJF has a track for GDC follow up and implementation and brings into a lot of the GDC follow up and implementation discussions through the workshops, main sessions and everything that takes place. But also working on the development of relationships between the IJF and some of the WSIS partner institutions and also continue some of the MAG discussions on NetMundial plus 10 alignment and last but not least, review and refine the intersessional work models. I’ll just wrap this by saying that if we don’t have every single, you know, group and stakeholder group at the table, this doesn’t work. And this goes both ways for civil society and private sector for academia. And, you know, many other parts of government, many other parts of this community, there is not a multistakeholder model where we don’t have one or where we don’t like one of them. And that’s the kind of the perk of it all and the joy of the IJF space.


Anriette Esterhuysen: Thanks. Thanks. You want to make a comment? I just want to react quickly to what Bruna said. I think, yes, all stakeholders at the table, but I think what the NetMundial Sao Paulo guidelines tells us, scope the issue that’s being discussed. And based on that issue, identity, to identify who’s affected by that particular discussion, and then you bring the stakeholders. If it’s about meta and content regulation and gender-based violence, you bring together meta, you bring together feminist organizations, you bring together data brokers, regulators, and you bring together freedom of expression people because any kind of content regulation might impact on freedom. So you have to, I think, be quite focused and targeted as well. And I think Ned Montiel gives us steps to help do that. I said you could interrupt us, so you can, but you’re gonna have to get up and come and fetch your own microphone because if I get up, I’m gonna drop something. Thanks, Flavia. Yeah. Hello.


Audience: Thank you so much for the mic. Good afternoon, everyone. My name is Arjun Singh Vizoria. I am from India. Here I am representing a civil society organization called the Vizoria Foundation that is founded by me in 2016. We are working in India in rural sector for the digital literacy. Now my question is here, how a small-scale organization can work with the IGF and is there any space to work with the IGF in India for the small-scale organization? And the second question is my to the matter. Sorry for the direct question. And I just want to know that how the matter is dealing with the cyber bullying. Sometimes I’m using the Facebook or certain platform. I see certain messages that are not relevant to me. It’s just like a direct message. Somebody is targeting. So how you are dealing with that?


Anriette Esterhuysen: Yeah. So that and then make your final remarks as well. And then you can answer the question about participating in national IGFs. Let’s talk on the side on this.


Flavia Alves: I am not an expert on cyber bullying. Sure, sure. So we are addressing it and we have our digital, head of safety here, so we can discuss with you. I guess most of the points I was going to make, particularly with regard to multistakeholderism and IGF, were made by my excellent co-panelists. Particularly, I think one thing that I keep hearing is we need to map the issues. We need to say, what are the issues? And this needs to be an evolving process. There is not a rule set in stone. Issues that we were discussing 10 years ago are different from the issues we were discussing now. At that time, we wouldn’t discuss content. Today, we have a whole DSA in content regulation issues. We have the UNESCO information integrity. We have the UN information integrity matters. And so, I think we should take this into account as we prepare for the upcoming review of the IGF mandate. I would love for the IGF next year in Norway for us to start early the process, particularly the country with the hosts, to make sure we bring stakeholders from all groups. My group is also not too present here. And so, I think we could partner in trying to make sure we bring others to the place. And with Norway, we agreed also to try to make available the list of participants earlier, so others have more incentive to be present, but also try to bring small business, small organizations, small developing countries, and making sure the remote participation is there as well. So, I would stop there. I know we are out of time.


Anriette Esterhuysen: Thanks very much, Flavia. And Bruna, maybe you can talk to Bruna about how to participate in the national. And absolutely, national IGFs are completely open to any organization of any size. Well, I think, you know, it feels again like we’re almost scratching the surface of this, but I think that we should take this experience of the Global Digital Compact. I think it’s an important experience where a multilateral institution tried hard to be consultative. The results might not be what we are used to or expect from the multistakeholder space. That doesn’t mean that there wasn’t good intention. I think it demonstrates how difficult it is. So, I think let’s look at that. that process, work with multilateral processes to make these processes that originate from within the United Nations system more inclusive. I think you’ve outlined very clearly how the IGF can become more effective. And my closing, this workshop was convened amongst others by Global Network Initiative. And I wanna quote Rebecca McKinnon, she’s not here, but she’s the person that was the founder of the Global Network Initiative. And she always says it takes different types of initiative. There’s no one fix to all of this. There’s no one perfect process. And if we look at how we’re making progress in using the multi-stakeholder approach to have more accountable, democratic, inclusive, digital and internet governance, it takes all these different types of processes. And it’s kind of the imperfections of all of them together, I think sometimes that really makes us be more effective and more inclusive. So thanks everyone for joining us and thanks to our panel. Thanks to the remote participants, sorry about the tech issues. And thanks very much Manel for your contribution as well. Thank you. Thank you. Thank you. You You You


B

Bruna Martins

Speech speed

161 words per minute

Speech length

1395 words

Speech time

517 seconds

Multi-stakeholder processes bring diverse perspectives together

Explanation

Bruna Martins emphasizes that the IGF’s value lies in its ability to bring together diverse perspectives from different backgrounds and expertises. This diversity of stakeholders, opinions, and experiences is what makes the IGF a primary space for internet governance discussions.


Evidence

The IGF allows people from various regions like the Pacific, Brazil, and Tanzania to discuss different aspects of internet governance.


Major Discussion Point

The role and effectiveness of multi-stakeholder processes in internet governance


Agreed with

Isabelle Lois


Flavia Alves


Anriette Esterhuysen


Agreed on

Importance of multi-stakeholder processes in internet governance


Multi-stakeholder model serves to bring civil society voices to the table

Explanation

Bruna Martins argues that the multi-stakeholder model is crucial for ensuring civil society participation in policy processes. She points out that while governments often engage with businesses due to financial interests, there’s no inherent obligation for them to consult with civil society or end users.


Evidence

Brazil’s implementation of the civil rights framework for the internet and the Data Protection Act, which were co-written by various stakeholders convened by parliament rapporteurs.


Major Discussion Point

The role and effectiveness of multi-stakeholder processes in internet governance


NetMundial Plus 10 and Sao Paulo guidelines provide a roadmap for effective implementation

Explanation

Bruna Martins highlights the importance of the NetMundial Plus 10 initiative and the Sao Paulo guidelines. She explains that these documents provide principles and process steps for effectively implementing multi-stakeholder approaches in internet governance and digital policy.


Evidence

The Sao Paulo guidelines aim to address gaps perceived in previous processes and improve implementation of openness, inclusiveness, and agility in internet governance.


Major Discussion Point

Challenges and opportunities for improving multi-stakeholder engagement


All stakeholder groups need to be present for multi-stakeholder processes to work

Explanation

Bruna Martins emphasizes the importance of having all stakeholder groups present for the multi-stakeholder model to function effectively. She argues that the absence of any group, whether it’s civil society, private sector, academia, or government, undermines the process.


Evidence

Bruna notes the reduced presence of her stakeholder group (likely civil society) at the current IGF, affecting the liveliness and diversity of discussions.


Major Discussion Point

The role and effectiveness of multi-stakeholder processes in internet governance


Host country selection for IGF should consider safety and inclusivity of all participants

Explanation

Bruna Martins suggests that the IGF host country selection process should take into account the safety and comfort of all participants. She emphasizes the importance of ensuring that the selection doesn’t result in underrepresentation of any part of the community.


Evidence

Bruna mentions the reduced presence of her stakeholder group and colleagues from Latin America at the current IGF.


Major Discussion Point

Challenges and opportunities for improving multi-stakeholder engagement


Agreed with

Isabelle Lois


Flavia Alves


Agreed on

Need for improvement in multi-stakeholder engagement


I

Isabelle Lois

Speech speed

182 words per minute

Speech length

1148 words

Speech time

378 seconds

Governments should use their convening power to ensure inclusive processes

Explanation

Isabelle Lois argues that governments have a responsibility to use their convening power to ensure inclusive and open processes. She emphasizes that governments should ensure all relevant voices are heard, listened to, and taken into account in discussions.


Evidence

Lois mentions Switzerland’s efforts to include all stakeholders in discussions and use its convening power to ensure diverse participation in panels and conversations.


Major Discussion Point

The role and effectiveness of multi-stakeholder processes in internet governance


Agreed with

Bruna Martins


Flavia Alves


Anriette Esterhuysen


Agreed on

Importance of multi-stakeholder processes in internet governance


IGF messages should be better utilized in other forums and decision-making processes

Explanation

Isabelle Lois suggests that the messages produced at the end of each IGF should be better utilized in other forums and decision-making processes. She argues that these messages provide valuable insights into the issues discussed and opinions raised during the IGF.


Evidence

Lois points out that the IGF messages, while not adopted by consensus, provide a summary of what has been discussed and shared during the forum.


Major Discussion Point

Challenges and opportunities for improving multi-stakeholder engagement


Agreed with

Bruna Martins


Flavia Alves


Agreed on

Need for improvement in multi-stakeholder engagement


F

Flavia Alves

Speech speed

169 words per minute

Speech length

1735 words

Speech time

613 seconds

Global Digital Compact process could have had more participation from non-governmental stakeholders

Explanation

Flavia Alves expresses that the Global Digital Compact process, while attempting to be inclusive, could have benefited from greater participation from civil society, private sector, and the tech community. She suggests that the process could have been more transparent about how stakeholder input was incorporated.


Evidence

Alves compares the Global Digital Compact process to previous processes like WSIS+10, which had more open consultations and transparent incorporation of stakeholder input.


Major Discussion Point

Challenges and opportunities for improving multi-stakeholder engagement


Agreed with

Bruna Martins


Isabelle Lois


Anriette Esterhuysen


Agreed on

Importance of multi-stakeholder processes in internet governance


Differed with

Anriette Esterhuysen


Differed on

Effectiveness of the Global Digital Compact process


Mapping of evolving issues is needed to keep multi-stakeholder processes relevant

Explanation

Flavia Alves emphasizes the need to continually map and update the issues being discussed in multi-stakeholder processes. She points out that the topics of discussion have evolved over time and that this evolution needs to be taken into account in preparing for future IGF mandates.


Evidence

Alves gives examples of how discussion topics have changed, such as the emergence of content regulation issues and information integrity matters that weren’t prominent 10 years ago.


Major Discussion Point

Challenges and opportunities for improving multi-stakeholder engagement


Agreed with

Bruna Martins


Isabelle Lois


Agreed on

Need for improvement in multi-stakeholder engagement


A

Anriette Esterhuysen

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Governments are more comfortable with multi-stakeholder approaches nationally than globally

Explanation

Anriette Esterhuysen observes that many governments are more comfortable with multi-stakeholder approaches at the national level than in global forums. She suggests that this discomfort at the global level is exacerbated by the power imbalances between developed and developing countries.


Evidence

Esterhuysen notes that governments often work closely with the private sector and civil society organizations at the national level, but are more cautious about multi-stakeholder approaches in global arenas.


Major Discussion Point

The relationship between multi-stakeholder and multilateral processes


Agreed with

Bruna Martins


Isabelle Lois


Flavia Alves


Agreed on

Importance of multi-stakeholder processes in internet governance


Multi-stakeholder processes need to survive even when there is serious disagreement

Explanation

Anriette Esterhuysen argues that the true test of an effective multi-stakeholder process is its ability to survive and continue even when there are serious disagreements among participants. She suggests that mature multi-stakeholder processes should be able to handle conflicts and divergent interests.


Major Discussion Point

The role and effectiveness of multi-stakeholder processes in internet governance


Multilateral institutions like the UN are trying to be more consultative, as seen in the Global Digital Compact process

Explanation

Anriette Esterhuysen acknowledges that multilateral institutions like the UN are making efforts to be more consultative, as demonstrated by the Global Digital Compact process. She suggests that while the results might not meet all expectations, there was a genuine attempt to be more inclusive.


Evidence

Esterhuysen mentions that the co-facilitators of the Global Digital Compact had to do additional work beyond their usual role of facilitating negotiations between UN General Assembly members to incorporate stakeholder input.


Major Discussion Point

The relationship between multi-stakeholder and multilateral processes


Differed with

Flavia Alves


Differed on

Effectiveness of the Global Digital Compact process


A

Audience

Speech speed

137 words per minute

Speech length

1280 words

Speech time

559 seconds

Gradual opening up of previously closed governmental processes is possible, as seen in ICANN

Explanation

An audience member (Manal Ismail) shares the experience of government participation in ICANN’s Governmental Advisory Committee. She describes how the process has gradually opened up over the years, moving from closed meetings to fully open sessions, including communiqué drafting.


Evidence

The speaker mentions that ICANN meetings started with closed government meetings, but over time opened up certain sessions, then all sessions except communiqué drafting, and finally all meetings including communiqué drafting.


Major Discussion Point

The relationship between multi-stakeholder and multilateral processes


Agreements

Agreement Points

Importance of multi-stakeholder processes in internet governance

speakers

Bruna Martins


Isabelle Lois


Flavia Alves


Anriette Esterhuysen


arguments

Multi-stakeholder processes bring diverse perspectives together


Governments should use their convening power to ensure inclusive processes


Global Digital Compact process could have had more participation from non-governmental stakeholders


Governments are more comfortable with multi-stakeholder approaches nationally than globally


summary

All speakers emphasized the importance of multi-stakeholder processes in internet governance, highlighting the need for diverse perspectives and inclusive participation.


Need for improvement in multi-stakeholder engagement

speakers

Bruna Martins


Isabelle Lois


Flavia Alves


arguments

Host country selection for IGF should consider safety and inclusivity of all participants


IGF messages should be better utilized in other forums and decision-making processes


Mapping of evolving issues is needed to keep multi-stakeholder processes relevant


summary

Speakers agreed on the need to improve multi-stakeholder engagement through various means, including better host country selection, utilization of IGF messages, and continuous mapping of evolving issues.


Similar Viewpoints

Both speakers emphasized the importance of civil society participation in multi-stakeholder processes, particularly at the global level where governments may be more hesitant.

speakers

Bruna Martins


Anriette Esterhuysen


arguments

Multi-stakeholder model serves to bring civil society voices to the table


Governments are more comfortable with multi-stakeholder approaches nationally than globally


Unexpected Consensus

Recognition of efforts by multilateral institutions to be more inclusive

speakers

Flavia Alves


Anriette Esterhuysen


arguments

Global Digital Compact process could have had more participation from non-governmental stakeholders


Multilateral institutions like the UN are trying to be more consultative, as seen in the Global Digital Compact process


explanation

Despite criticism of the Global Digital Compact process, both speakers acknowledged the efforts made by multilateral institutions to be more inclusive, which is an unexpected area of consensus given the typical divide between multi-stakeholder and multilateral approaches.


Overall Assessment

Summary

The main areas of agreement centered around the importance of multi-stakeholder processes in internet governance, the need for improvement in multi-stakeholder engagement, and the recognition of efforts by multilateral institutions to be more inclusive.


Consensus level

There was a moderate level of consensus among the speakers on the importance and challenges of multi-stakeholder processes. This consensus suggests a shared understanding of the value of diverse participation in internet governance, but also highlights the ongoing challenges in implementing effective multi-stakeholder approaches, particularly at the global level. The implications of this consensus point towards a continued push for more inclusive and effective multi-stakeholder processes in internet governance, while also recognizing the need for improvement and adaptation to evolving issues.


Differences

Different Viewpoints

Effectiveness of the Global Digital Compact process

speakers

Flavia Alves


Anriette Esterhuysen


arguments

Global Digital Compact process could have had more participation from non-governmental stakeholders


Multilateral institutions like the UN are trying to be more consultative, as seen in the Global Digital Compact process


summary

While Flavia Alves critiques the Global Digital Compact process for insufficient non-governmental participation, Anriette Esterhuysen views it as a genuine attempt by multilateral institutions to be more inclusive, despite imperfections.


Unexpected Differences

Overall Assessment

summary

The main areas of disagreement revolve around the effectiveness of current multi-stakeholder processes, particularly the Global Digital Compact, and the specific roles different actors should play in improving these processes.


difference_level

The level of disagreement among the speakers is relatively low. While there are some differences in emphasis and perspective, there is a general consensus on the importance of multi-stakeholder processes and the need for their improvement. These minor disagreements are constructive and contribute to a more nuanced understanding of the challenges and opportunities in implementing effective multi-stakeholder approaches in internet governance.


Partial Agreements

Partial Agreements

All speakers agree on the importance of inclusive multi-stakeholder processes, but they emphasize different aspects: Bruna Martins focuses on the presence of all stakeholder groups, Isabelle Lois highlights the role of governments in ensuring inclusivity, and Flavia Alves stresses the need for continual updating of discussion topics.

speakers

Bruna Martins


Isabelle Lois


Flavia Alves


arguments

All stakeholder groups need to be present for multi-stakeholder processes to work


Governments should use their convening power to ensure inclusive processes


Mapping of evolving issues is needed to keep multi-stakeholder processes relevant


Similar Viewpoints

Both speakers emphasized the importance of civil society participation in multi-stakeholder processes, particularly at the global level where governments may be more hesitant.

speakers

Bruna Martins


Anriette Esterhuysen


arguments

Multi-stakeholder model serves to bring civil society voices to the table


Governments are more comfortable with multi-stakeholder approaches nationally than globally


Takeaways

Key Takeaways

Multi-stakeholder processes are valuable for bringing diverse perspectives together in internet governance, but need improvement to be truly effective


The NetMundial Plus 10 and Sao Paulo guidelines provide a roadmap for more effective implementation of multi-stakeholder approaches


There is tension between multilateral and multi-stakeholder processes that needs to be navigated carefully


The IGF plays an important role in facilitating multi-stakeholder dialogue, but could improve in translating discussions into concrete outcomes


Inclusivity and safety of all stakeholders is crucial for effective multi-stakeholder processes


Resolutions and Action Items

Use the Sao Paulo multi-stakeholder guidelines as a ‘litmus test’ to evaluate if processes are truly multi-stakeholder


Better utilize IGF messages in other forums and decision-making processes


Review and refine the IGF’s intersessional work models


Start early preparations for the upcoming review of the IGF mandate


Make participant lists available earlier for IGF events to encourage broader participation


Unresolved Issues

How to balance multilateral and multi-stakeholder approaches in global internet governance


How to ensure authentic multi-stakeholder processes rather than just rhetoric


How to address power imbalances between different stakeholder groups


How to make multi-stakeholder processes more resource-efficient while maintaining effectiveness


How to better integrate perspectives from developing countries and smaller organizations in global processes


Suggested Compromises

Combine strong regional multi-stakeholder processes with regional multilateral processes to strengthen voices in global forums


Use targeted, issue-specific multi-stakeholder engagement rather than always trying to include all stakeholders in every discussion


Balance the need for inclusive processes with the need for concrete outcomes and decision-making


Work with multilateral institutions to make their processes more consultative and inclusive, while recognizing their distinct nature


Thought Provoking Comments

I think sometimes we romanticize the past of the WSIS and the wonderful WSIS multi-stakeholder process. What we forget, those of us who were there at the time, like Tijani and myself, is that we had bureaus. We had a civil society bureau, we had a private sector bureau, and governments of course have to negotiate with one another. And we had within civil society before every opportunity to give an input on an item of the agenda, we had to reach internal consensus, and it was very very difficult.

speaker

Anriette Esterhuysen


reason

This comment challenges the idealized view of past multi-stakeholder processes and introduces nuance about the difficulties of reaching consensus within stakeholder groups.


impact

It shifted the conversation to consider the internal dynamics within stakeholder groups and the need for coherent processes within those groups for effective multi-stakeholder collaboration.


I wonder whether or not we’re being honest enough about the relationship between governments, the UN, civil society, and big tech. Because it feels like the only things that are actually making things move is litigation, certain regulations that threaten fines, or extreme reputational damage.

speaker

Lina from Search for Common Ground


reason

This comment challenges the effectiveness of multi-stakeholder forums and raises critical questions about power dynamics and motivations for change.


impact

It prompted panelists to address the role of regulation and litigation in driving change, and to defend the value of multi-stakeholder processes while acknowledging their limitations.


When we talk about inclusion, inclusion does not remove the power from the people who are already in the room, opening up to more stakeholders, more people, is not removing from those who are already there.

speaker

Isabelle Lois


reason

This insight highlights an important aspect of inclusion that is often overlooked – that it’s not a zero-sum game.


impact

It reframed the discussion on inclusion to focus on expanding participation without threatening existing stakeholders, potentially making the concept more palatable to those who might resist change.


Isn’t the true test of an effective multi-stakeholder process that it should survive, even if there is serious disagreement on how to regulate, and what to regulate, by whom?

speaker

Anriette Esterhuysen


reason

This question redefines the measure of success for multi-stakeholder processes, emphasizing resilience in the face of disagreement rather than just consensus.


impact

It prompted reflection on the maturity and robustness of multi-stakeholder processes, shifting the focus from achieving agreement to maintaining dialogue despite differences.


Overall Assessment

These key comments shaped the discussion by challenging idealized views of multi-stakeholder processes, introducing critical perspectives on power dynamics, and reframing concepts of inclusion and success. They moved the conversation beyond surface-level agreement on the value of multi-stakeholder approaches to grapple with the complexities and challenges of implementing them effectively. The discussion became more nuanced, acknowledging both the potential and limitations of these processes, and considering how they might evolve to better address power imbalances and maintain relevance in the face of disagreement.


Follow-up Questions

How can we make multi-stakeholder processes more effective and resource-efficient?

speaker

Anriette Esterhuysen


explanation

This question addresses the challenge of implementing multi-stakeholder approaches in a cost-effective and time-efficient manner, which is crucial for their sustainability and widespread adoption.


What mechanism or criteria can ensure that multi-stakeholder processes are authentic and not just rhetoric?

speaker

Aziz Hilary


explanation

This question highlights the need for concrete measures to evaluate the genuineness and effectiveness of multi-stakeholder processes, which is important for maintaining trust and credibility in these approaches.


How can the implementation of the Global Digital Compact (GDC) be improved, and where could it be implemented?

speaker

Dana Kramer


explanation

This question addresses the practical aspects of implementing the GDC and suggests exploring the IGF as a potential venue, which is important for turning principles into action.


How can we ensure that all stakeholder groups, including those less represented this year, are present and actively participating in future IGFs?

speaker

Bruna Martins


explanation

This question addresses the need for inclusive participation in the IGF, which is crucial for maintaining its multi-stakeholder nature and effectiveness.


How can the IGF better utilize its messages and outcomes in other forums and decision-making processes?

speaker

Isabelle Lois


explanation

This question explores ways to increase the impact and relevance of IGF discussions in other policy-making arenas, which is important for the IGF’s influence and effectiveness.


How can small-scale organizations work with the IGF, particularly at the national level?

speaker

Arjun Singh Vizoria


explanation

This question addresses the need for inclusivity of smaller organizations in IGF processes, which is important for diverse representation and grassroots participation.


How can we better map and address evolving issues in internet governance through multi-stakeholder processes?

speaker

Flavia Alves


explanation

This question highlights the need for adaptability in multi-stakeholder processes to address new and emerging issues in internet governance, which is crucial for maintaining relevance and effectiveness.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

DC-IoT & DC-CRIDE: Age aware IoT – Better IoT

DC-IoT & DC-CRIDE: Age aware IoT – Better IoT

Session at a Glance

Summary

This discussion focused on age-aware Internet of Things (IoT) and how to create better IoT systems that respect children’s rights and safety. Participants explored various aspects of this topic, including data governance, age assurance technologies, AI’s role, and capacity building.


The conversation highlighted the importance of considering children’s evolving capacities when designing IoT systems and policies. Experts emphasized the need for a more nuanced approach to age verification that goes beyond simple chronological age limits. They discussed the challenges of balancing children’s protection with their rights to privacy, access to information, and participation.


The role of AI in IoT systems was examined, with participants noting both its potential benefits for personalized learning and its risks in terms of data collection and user profiling. The discussion touched on the need for ethical AI development that considers children’s best interests.


Labeling and certification of IoT devices were proposed as ways to empower users and parents to make informed choices. Participants stressed the importance of global standards and the potential role of public procurement in driving adoption of child-friendly IoT practices.


The conversation also addressed the need for capacity building among various stakeholders, including parents, educators, and industry professionals. Experts called for more inclusive discussions that involve children and young people in the development of IoT policies and technologies.


Throughout the discussion, participants emphasized the shared responsibility of industry, governments, and civil society in creating a safer and more empowering digital environment for children. They concluded by highlighting the importance of ongoing dialogue and the need for practical, enforceable solutions to protect children’s rights in the evolving IoT landscape.


Keypoints

Major discussion points:


– The importance of age-aware IoT and developing good practices to protect children while allowing them to benefit from technology


– The need for better data governance, labeling, and certification of IoT devices to empower users and protect privacy


– The role of AI in adapting IoT environments to users’ abilities and needs


– The importance of capacity building and education for children, parents, and other stakeholders about IoT


– The tension between innovation, regulation, and corporate responsibility in developing safe IoT for children


Overall purpose:


The goal of this discussion was to explore how to develop age-appropriate and safe Internet of Things (IoT) technologies that serve people, especially children, while addressing potential risks and ethical concerns.


Tone:


The tone was collaborative and solution-oriented, with experts from different fields sharing insights and building on each other’s ideas. There was a sense of urgency about addressing these issues, but also optimism about finding ways to harness technology for good. Towards the end, the tone became more pointed about the need for corporate accountability and including children’s voices in future discussions.


Speakers

– Maarten Botterman: Chair of the Dynamic Coalition on Internet of Things (DC IoT)


– Sonia Livingstone: Professor at London School of Economics, expert on children’s rights in digital environments


– Jonathan Cave: Senior teaching fellow at University of Warwick, Turing Fellow at Alan Turing Institute, former economist member of British Regulatory Policy Committee


– Jutta Croll: Representative of Dynamic Coalition on Children’s Rights in the Digital Environment


– Torsten Krause: Role not specified, helped moderate online comments


– Pratishtha Arora: Expert on AI and children’s engagement with technology


– Abhilash Nair: Legal expert on age assurance and online regulation


– Sabrina Vorbau: Representative of Better Internet for Kids initiative


Additional speakers:


– Helen Mason: Representative from Child Helpline International


– Musa Adam Turai: Audience member who asked a question


Full session report

Age-Aware Internet of Things: Protecting Children’s Rights in a Connected World


This comprehensive discussion explored the challenges and opportunities of creating age-aware Internet of Things (IoT) systems that respect children’s rights and safety. Experts from various fields, including child rights, economics, law, and technology, convened to address the complexities of developing IoT technologies that serve people, especially children, while mitigating potential risks and ethical concerns.


Key Themes and Discussions


1. Data Governance and Age-Aware IoT


The conversation highlighted the nuanced nature of data collection in IoT systems, recognising that it can be both beneficial and harmful to children. Jonathan Cave emphasised that static age limits may not be appropriate given the evolving capacities of children. Sonia Livingstone stressed the need to consider broader child rights beyond just privacy and safety, arguing for a more holistic approach to children’s rights in digital environments. She emphasised the importance of consulting children in the design of technologies and policies.


2. Labelling and Certification of IoT Devices


Several speakers agreed on the importance of labelling and certification for IoT devices as a means of empowering users and protecting privacy. Maarten Botterman suggested that such measures could enable users to make informed choices about the technologies they adopt. Jutta Croll proposed leveraging public procurement to drive the adoption of standards, while Abhilash Nair noted that certification could help mitigate literacy issues for parents and caregivers.


Jonathan Cave expanded on this idea, suggesting that public procurement could serve as a complement to self-regulation or formal regulation, potentially incentivising industry compliance with safety standards. This approach was seen as a novel policy tool for promoting child-safe technologies.


3. The Role of AI in Age-Aware IoT


The discussion explored the dual nature of AI in IoT systems, acknowledging its potential to both facilitate and potentially distort children’s development. Jonathan Cave highlighted this duality, while Pratishtha Arora emphasised the importance of developing age-appropriate AI models and interfaces. Arora also raised the crucial point of considering impacts on children who may not be direct users of IoT devices but are nonetheless affected by them.


4. Capacity Building and Awareness


Participants stressed the need for translating research into user-friendly guidance for parents and educators. Sabrina Vorbau discussed the Better Internet for Kids initiative, which aims to create a safer and better internet for children and young people. She emphasised the importance of involving children and youth in developing these resources. Jutta Croll mentioned the EU ID wallet as a potential tool for age verification in digital environments.


Helen Mason from Child Helpline International advocated for including civil society and frontline responders in discussions, noting that data from child helplines could provide valuable insights into children’s experiences with online technologies.


5. Corporate Responsibility and Regulation


A significant portion of the discussion focused on the need to place more responsibility on industry rather than users for ensuring child safety in IoT environments. Sonia Livingstone argued strongly for this shift, while Jonathan Cave suggested that personal liability for executives might drive more attention to child safety issues. Abhilash Nair supported this idea, noting that it could lead to more proactive measures from companies.


The conversation also touched on the tension between free speech rights and child protection, particularly in conservative societies, as raised by audience member Musa Adam Turai. This highlighted the need for nuanced approaches that balance various rights and cultural contexts.


Thought-Provoking Insights


Several comments sparked deeper reflection and shifted the discussion:


1. Jonathan Cave’s challenge to static age-based protection measures, encouraging more nuanced approaches based on digital maturity.


2. Sonia Livingstone’s emphasis on considering the full spectrum of children’s rights, not just safety and privacy.


3. An audience member’s suggestion to focus more on media information literacy rather than access restrictions.


4. Livingstone’s critique of the term “user” and how it can lead to overlooking children’s specific needs and rights in technology development and policy discussions.


Conclusion and Future Directions


The discussion concluded with a call for ongoing dialogue and practical, enforceable solutions to protect children’s rights in the evolving IoT landscape. Participants emphasised the shared responsibility of industry, governments, and civil society in creating a safer and more empowering digital environment for children.


Key takeaways included the need to consider children’s evolving capacities in IoT design, the potential of labelling and certification to empower users, the importance of involving children in technology development processes, and the need for greater industry responsibility.


Moving forward, participants suggested involving children and young people in future IGF sessions on this topic, developing more user-friendly guidance on age assurance, and considering the use of public procurement to drive adoption of child safety standards in IoT. Jutta Croll noted the upcoming high-level session on children’s rights in the digital environment at the UN, highlighting the growing importance of this topic on the global stage.


Session Transcript

Maarten Botterman: Oh, you cannot unmute. Jonathan, you should be able to…


Jonathan Cave: Oh, now I can. Yes, I am allowed to unmute. I can’t turn on my camera, but at least I can speak.


Maarten Botterman: Okay, that’s excellent. Thank you. Sonja will check you to…


Sonia Livingstone: Hello. Yes, I can speak now. Thank you. And it would be lovely to have my camera on, if that’s possible.


Maarten Botterman: We’re checking. Thank you. Can we put on the camera for those speakers? Can we put on the camera for Sonja and Jonathan? Yes. Oh, excellent.


Sonia Livingstone: Thank you.


Maarten Botterman: And Jonathan Case, right? And Jonathan Case. Gentlemen, Jonathan Case as well. So, unmute your camera. Yes, there I am. Great. Thank you. Jonathan and Sonja, you can unmute and unmute yourself. So, if you’re not speaking, maybe best to mute yourself.


Sonia Livingstone: Okay, sounds good.


Maarten Botterman: You’re now both co-hosts. Shall we begin? Can we begin? Okay, good morning, everybody. Welcome to the session from DCIoT and DCCRIDE, which is focused on age-aware IoT and better IoT. Can you hear me well in the room? Good. So, this session will take us through the landscape of evolving technology and how it relates to how we deal with people socially in age, how we can make sure technology serves the people with specific focus on age. So, there’s so many opportunities to make everyday life more convenient, safer and more efficient. But there’s also threats that come with that. And we want to get the best out of it. This is why dynamic coalitions throughout the year explore how to develop good practice in the best possible way and address risk processing, provision of information that may be inappropriate or even harmful to individuals or the initiation of processes that are based on false assumptions. And one of the ways to counter these risks is by categorizing the users. If the devices in the surroundings can categorize users, the Internet of Things can adequately adapt according to the needs and the specific measures to serve that user. So, this is why Jutta and I discussed coming together with the two dynamic coalitions and focus on how this will look like. A little bit on the reasons for the DCIAT on CRITE. Can you share that, Jutta?


Jutta Croll: Yes, of course I can share that. You gave me the perfect segue to that when you mentioned evolving technologies because the dynamic coalitions are talking about the evolving capacities of children, which is one of the general principles of the UN Convention on the Rights of the Child. I do think both dynamic coalitions started their work very early in the process of the Internet governance from children’s rights than was called the Dynamic Coalition on Child Online Safety. And that started in 2007. I do think IoT started the same year as well?


Maarten Botterman: 2008.


Jutta Croll: 2008. So, very long-standing collaboration between these two dynamic coalitions in a certain way. And several years later, when the General Command Number 25 came out, which is dedicated to children’s rights in the online environment, we renamed the Dynamic Coalition to Children’s Rights, as Martin has already said. And we found some similarities in the work that we are doing and also in the objectives that we want to achieve because we know that children are the early adopters of new technology, of emerging technologies, and that’s always the case where we have to look whether their rights are ensured and whether they can benefit from these technologies. And IoT is one area that can help people, that can help children to benefit from digital environment. And that’s, having said that, I hand over to you, Martin, again.


Maarten Botterman: Thank you so much. So, basically, the DC IoT 2008, HydroBot was the first time, has been talking over time, so how can IoT serve people? What global good guidelines, what should be a good global good practice guidelines should be adopted by people around the world because this technology is used everywhere. So, like the Internet, the Internet of Things doesn’t stop at the border. Very practically, because products come from all over the world, but also because, for instance, the more mobile IoT devices, like in cars, in planes, or what you’re carrying with you when you travel, crosses borders all the time as well. So, an understanding of global good practice would also help governments to take it into account when they develop legislation, being more aware of what consequences could be, what to think of. It could take business, global business, could take it into account, design and development of devices and systems. By doing that from the outset, innovation can be guided by some insights, even when it’s not lost yet. So, the Internet of Things good practice aims at developing IoT systems, products and services, taking ethical considerations into account from the outset, development phase, deployment phase, use phase, and waste phase of the life cycle, or sustainable way forward. It’s using IoT to help to create a free, secure, and enabling rights-based environment with a minimal ecological footprint, and that for a future we want for us and future generations. And when we talk about an Internet we want in which IoT, as we want, is to be developed, it’s crucial that we really get that clear what that means for us, and that we do take action to make something happen there, because otherwise, still remembering very much what Vint Cerf, the chair of the high-level panel here in Kobe, if we don’t make the Internet we want, we may get the Internet we deserve, and we may not like that. So, with that, I really look forward to discussions today for which we have a number of excellent speakers in the room and online, and we will talk first about the data governance aspects that underline this, then we go into labeling and certification of IoT devices as that helps in transparency of these devices and what they can do, and empowers users to be more informed in their choices. Every session so far, I think I’ve heard the word AI, so let me be the first one to mention it here. Of course, it’s important that IoT environments work and also how selections can be made to adapt that to abilities of people. And then last but not least, in all this, the kind of horizontal layer is, how do we develop capacity? Because IoT may be developed all over the world, but to apply it locally, you need to have local knowledge. So where does that come together? How can we work on that? I’m very, very happy to have also Sabrina here to talk more about that. With that, I’d love to give the word to Jonathan Cave, who is a senior teaching fellow, who used to be a senior teaching fellow at economics in the University of Warwick, and a Turing Fellow at the Alan Turing Institute, well known for its work on ethics in the digital space. He was also a former economist member of the British Regulatory Policy Committee. Jonathan, can you dive into the data governance aspect and why this is so important and what we need to do about it? You will be followed by Sonia.


Jonathan Cave: Yes, thank you, Martin, and thank you, everyone, for showing up. This is a very important topic, and I’m going to largely limit these first remarks to matters dealing with data. But one thing I want to point out is that this idea of evolving capacity applies not only to the technologies which are changing and collecting more and more data, but also applies to the evolving capacities of the individuals involved, in particular children.


Maarten Botterman: Jonathan, can you improve your microphone? Not really. Okay, you’re understandable, but just not great. If you don’t have an easy trick, let’s continue. Sorry about that.


Jonathan Cave: Okay, let me just try.


Maarten Botterman: Closer to the device may help. Okay, well, let me try.


Jutta Croll: Now that you’re so close to the device, it’s better if you just go close.


Jonathan Cave: No, actually, the device is attached to my ears, so there’s no way of going closer without changing my face geometry. But I’ve switched to another microphone on my camera. Is that better?


Jutta Croll: Yes. Yes, it’s much better.


Jonathan Cave: Okay, thank you. One can never tell with these technologies. Yes, I think it’s very interesting is that much of our laws and much of our policy prescriptions around child safety is predicated around the idea of chronological age, that people below a certain age should be kept safe and people above that age lose that protection. But of course, particularly as children have more and more experience of online environments, and the people making the rules have less and less experience of the new technologies, that static perspective of protecting people on the basis of age may not be the most appropriate, and we need to stay aware of that. First of all, I think it’s interesting to remember the data governance issues. One element of this is that the data themselves can be a source of either safety or risk and harm to young people, and the reason we care about that is both in terms of the immediate harm, but in terms of the collective, progressive, or ongoing harm that early exposure to inappropriate content, which includes manipulation of individuals by priming and profiling, can expose people, which then change the way they think as individuals or as groups. Now, in that respect, the question becomes, which data should people have available to them? One particular element of this is that we have a lot of privacy laws, and many of these privacy laws set age limits for people’s exposure to or ability to consent to certain kinds of data collection or processing. Mostly, these are predicated around what we would consider sensitive data, but in the online environment, particularly social environments or gaming environments, many more data are collected whose implications we only dimly understand, and this is where AI comes in. It’s not obvious which data may be harmful. So, instead of imposing rules and asking industries simply to comply with those rules, we may need, and increasingly in areas like online harms, we’re moving in the direction of a sort of duty of care where we make businesses and providers responsible for measurable improvements and not for following static codes. So, it’s like harm reduction rather than compliance. So, there’s the question about which data are collected. There’s also a more minor issue. We are exempt from certain kinds of data collection, but those may be the same data needed to assess either what their true chronological age is or their level of digital maturity. So, it may be that some of the rules we have in place make it difficult to do too many things to keep pace with the evolving technologies. Okay. I think, probably, rather than going on, I should turn over to Sonia at this point. Any comments as to when things come back up?


Maarten Botterman: Fantastic. Thank you. Yes, Sonia, please go on.


Sonia Livingstone: Okay. Brilliant. Thank you very much, and thank you for the preceding remarks, which kind of set the scene. So, I did want to begin by acknowledging what an interesting conversation this promises to be because we’re bringing together two constituencies, those concerned with Internet of Things and those concerned with children’s rights, that haven’t historically talked together. It’s really valuable that we’re having this conversation now. In a kind of Venn diagram of child rights and IoT experts, I think the overlap currently is relatively modest, and I hope we can widen the domain of mutual understanding. I think it’s even there in some of the… and age assurance is a brilliant topic to illustrate some of the both overlaps but also differences. So, I guess, from a child rights perspective, a starting point is to say that it’s very hard to respect children’s rights online or in relation to IoT if the digital providers don’t know which user is a child. And so, having that knowledge seems a prerequisite to respecting children’s rights. And yet, as some of us have been investigating so far, it is far from clear that age assurance technologies as they currently exist do themselves respect children’s rights. So, the many is that we might bring in a non-child rights respecting technology to solve a serious child rights problem. So, and I think this has amplified this challenge in relation to the Internet of Things because now we’re talking about technologies that are embedded, that are ambient, and that the users may not even know are operating or collecting their data, processing… but also introducing risks. So, a child rights landscape always seeks to be holistic. So, privacy is key, as has already been said. Safety is key, as has already been said. But the concern about some of the age assurance solutions is that, as Jonathan just said, they introduce age limits. And so, there are also costs, potentially, to children’s rights, as well as benefits. …perspective that is crucial. So, it’s always important that we think about privacy, safety, if you like, hygiene factors. How do we stop the technologies introducing problems? But we need to think about those also in relation to the rest of children’s rights. What am I thinking of? I’m thinking of consulting children in the design and making of both the technologies and also the policies that regulate them. I’m thinking of children’s right to access information, right to participation, and rather than excluded through age limits, or perhaps through delegating the responsibility of children to parents, which means parents might exclude children. We’re seeing a lot of this in various parts of the world at the moment. As Jutta said, I’m thinking about evolving capacities. This is not just a matter of age limits. Underneath, children are excluded, and above, they are placed at risk, as it were. This is a terrible binary, if that’s where we’re heading. But we’re also thinking of appropriate provision for children who are users or may be impacted by technology. They may not be the named user. They may not be signed up to the profile or signed up to the service or paying for the service, but they may be in the room, in the car, in the street, in the workplace, in the school. They may be impacted by the technology. I’m thinking also about best interests, that overall, the balance should always be in the child’s best interest. interests. That’s what every state in the world, except America, has signed up to when it ratified the convention. And I’m thinking of child-friendly remedy so that when something goes wrong, children themselves, not necessarily through adult remedy. So I think a child rights approach brings a broader perspective but also one that is already embedded in and encoded in established laws policies and obligations on institutions and states to ensure that these new areas of business respect children’s rights during and as part of their innovative process. And so I’ll end with the mention of child rights by design if you like to give a broader focus to questions of privacy by design.


Maarten Botterman: Thank you. I see no… Oh, in the chat. I will buy the host. No, there’s no comment. Torsten?


Torsten Krause: Yeah, we have one comment in a chat. I would read it out, please, as it was written by Godzway Kubi. I hope I pronounced it correctly. And Godzway Kubi wrote, two ways Age-Aware IoT can contribute to a better IoT ecosystem for children is by prioritizing private devices designed for children, such as smart toys and learning tools, and using age-appropriate interfaces and content filters to enhance usability and safety.


Maarten Botterman: Okay, thank you for that very appropriate remark. By the way, Sonja, the first togetherness on age was actually on children’s toys and IoTs six years ago. So… Jonathan, please.


Jonathan Cave: Yeah, just a small follow-up. Those are extremely useful remarks. There’s one thing I wanted to say about the use of technologies to identify children’s ages, whether they’re chronological or let’s call them digital ages, which is that, of course, these, like all other age verification technologies, can be bypassed. And one particular concern we should have is that when these bypass approaches become to children or by groups of children, there may be an adverse selection in the sense that those most likely to bypass those protections may be those most at risk, either as individuals or in the groups through which these practices share, disseminate. I remember when I was growing up and our county was 21 for drinking alcohol, the neighboring county was 18, and fake IDs were in widespread circulation among certain social networks. And so there is this issue about whether, although the solution may be very good, the path to the solution may be more harmful than where we start up now. And so, yeah.


Maarten Botterman: I think that’s a very good point and one of the big later on. So with that, I would like to move on towards the second. Oh, Sonja, do you want to respond to that?


Sonia Livingstone: Well, I was just going to say, thank you. I was just going to say very briefly, in response to Jonathan, and thinking of a paper that I worked on as part of the EU consent project also with Abhilash, when we consult children and families, they actually value those kinds of workarounds. It provides a little flexibility. And I might say, I don’t think a five year ID to drink, but perhaps a 19 year old could. And it’s that little bit of flexibility around hard kind of age limits, that many families and children say is important to them in providing just that bit of flexibility for where they know their child is a bit less mature or a bit more mature. And so encoding everything in legislation and technology can in itself take away some agency from families. And I think that’s a challenge to consider.


Jonathan Cave: I’d also say that it takes away some agency from the children themselves, who have to learn how to navigate. There is this tension between asking whether the environment can be made safe. And if not, whether denying access, as for example, in Australia is the appropriate approach, or whether some more active form of engaged filtration that goes, but then you have to move away from the binary, of course. Yeah, because you do have to not only gauge how mature children are, but to provide them with curated experiences, perhaps under parental or peer control, that enable them to become capable of safely navigating these environments.


Maarten Botterman: So I think there’s the legal hard, legal coded limits. And I think online tools can be more useful and practical limits than with online code limits, unless you involve obligatory registration, including at the UKIP.


Jutta Croll: Yes, so far, what we know, for example, from the GDPR is a hard set limit of, it’s between 13 or 16. But in every country in Europe, it’s a hard limit, either 13, 14, 15, or 16. And what the European limits is more about age brackets, which would mean a certain age range, saying between 13 and 15, or 16 to 18, or something like that. So that you have a diverse, a bit of a range. And there comes in the issue of maturity. So you might have a 13 year old that is mature, like a 15 year old, and a 15 year old that is only like someone who’s 14, or 11, or 12, something like that. So when we are not talking about an exact age threshold, but about age brackets, we get some of this flexibility. And it also pays into the concept of maturity, so that it makes it more flexible. Thank you.


Maarten Botterman: Yes, thank you very much. And I see another one from Doherty in the audience. I don’t know about this, but let’s call on Doherty, please.


Torsten Krause: Doherty Gordon wrote in a comment. It’s repeating a little bit what Jonathan said. She wrote, denying access will always encourage teens to look at work around, to not engage in dangerous behavior because they have no guidance. Why do we not put more emphasis on media information literacy, so that users understand how to protect themselves?


Maarten Botterman: Yes, thanks for that. And also, the brackets, can you come into that as well? Please.


Pratishtha Arora: Yeah, so I want to speak on the point. Looking into all aspects of children over there, it’s equally important over here that when we’re defining age on the accessibility of technology for children, what category of children that we’re talking about, because it might just differ as per the point that, you know, of their sense of understanding about technology, and also their sense of understanding about using the technology.


Maarten Botterman: Yes, thanks. And this was Batista Aurora. If you speak, please, we will be speaking shortly on AI as well. Thank you for that. So how can we help to help ensure that parents, children environments know how I think devices can serve them. And that is the next topic we would like to dive into. So basically having all these goods from all over the world, having different capacities and having found in the past that, for instance, the security of these devices was sometimes limited to or to being set with default passwords like admin is not useful. Also, we found that in the past some devices, for instance, send data back to the factory or wherever without users being aware of that or asks for more of that clarity. At the same time, legislation is per country. And so how can we together get a good tool here that helps us to understand what we actually buy, what we actually start using? So from the IoT perspective, we had a big discussion last year in Kobe where it was made clear that labeling of devices and of services is crucial. Needs to be even dynamic because with upgrades of software and things, the label may change. So a label that can be linked to online repository would be crucial. And certification of that, of course, is important because one could say anything there. And how do we know that it’s true? So some certainty needs to be built in. And there’s different certification schemes. This is not a session to go deeper, very deep into the differences of those. At this moment, these labeling and certification schemes are also discussed around the world and put in place. Framework has been put in place that has also identified this in Singapore. Action has been taking place in other places. What is now, and in Europe as well, of course, as part of the Digital Services Act. And what we see is that now the diplomatic go around is beginning. These countries are talking to each other about how can we do it in such a way that we can recognize each other’s certifications, that the labels of other countries are useful for us as well. And this is the beginning. But we’re not there yet. The deeper intent of labeling and certification, so labeling is what is it about the certification is how do I know that this is correct, is basically that it empowers users to make smarter choices. So next to security, we discussed last year, it should be also about data security. So where is data streams going? And I think what came up over the years since is more and more emphasis also, so how much energy does it use, like IEEE is already doing for electronic devices. So all this together is, I think, in the name of IoT devices can do and can offer. And I’m very interested to hear your perspective, Abhilash, on what this can do for age awareness and appropriateness.


Abhilash Nair: Thank you. I want to talk a little bit about why age assurance matters from a legal perspective. As a starting point, we know that requires some form of age assurance for various content services and sale of goods online. And some of these laws have explicit requirements of age verification or age assurance. Some of them are implied. But in practice, there is very little out there. For decades, really, we’ve had laws that have not been enforced properly because they have not been complemented with appropriate age assurance tools. And the notable exception for that is probably online gambling, where the law seems to have worked in some jurisdiction under the Gambling Commission. But part of the reason why it was successful is because it’s not just about age assurance. It’s also about identity verification, so where people need to be identified so that they can be offered support against support for problem gambling, so on and so forth. For almost all of the cases, when we looked at the EU Consent Project that Sonia mentioned earlier, when we looked at all member states in the EU plus the UK, we found that there was very little out there in terms of AV tools, age verification tools, that can actually help implement legislation. And content was the most problematic of all of us, not least because there are cultural variations even within Europe as to what is acceptable content, even for children in under the sub-18 category of people. But there was also a wider problem there. There was a disconnect between the principle of self-regulation that the EU has advocated, especially for content for people of Italian, on the one hand, with legislation that suggests the adoption of age assurance tools on the other. So, not led to a happy situation that, you know, the law on the one hand requires age assurance to protect children, but the practical reality is, you know, there hasn’t been any useful means of enforcing that legislation in practice. There is a legal principle which suggests that if a law cannot be enforced as unenforceable, it’s unlikely to succeed. It’s unlikely to command the respect of people who are bound by that legislation. You can see a good example in copyright infringement online. It’s not because people’s copyright is unlawful that people infringe copyrights, it’s because they can do it without consequence. Unfortunately, that is the case with most laws that require age assurance, most not to play with content, as I’ve already mentioned. So, what I’m trying to say here is it’s important that age assurance, or effective age assurance, complements legislation for the legislation to work, and that is a starting point. Jonathan already mentioned that we’ve got too many rules. The solution is not to have more legislation. The real starting point is to make sure that what we have is enforceable and practical. Now, things are changing lately with more specific legislation coming into the books, especially specifically mandating age assurance and posting specific obligations on platforms and websites for non-compliance. But it’s not without problems. The problem with age assurance, in my view, the fundamental problem there is, essentially, it has been a debate about children out and adults in, and that’s not, you know, that’s not how it should be. Sonya’s already talked about evolving capacity, so children in the UK cannot just classify everyone under the age of 18 into one category and use age assurance on them and have adults on the other side. But there’s also the other binary debate between adults who will feel strongly about privacy and their ability to access internet without any restrictions, without any restrictions, and keeping children safe. Now, that balance also has not been struck appropriately thus far. So, and there’s also the other issue of, you know, the age threshold for children for accessing content services or purchasing goods can also vary across nations, across cultures, even within the same country. There are different cultural variations in that. But the law does not always factor in evolving capacities of children. To take the example of the EU Audiovisual Media Services Directive, which refers to a notion of risk of harm, to be the basis for adopting appropriate safeguards and measures for age assurance is a good example because, in principle, it recognises evolving capacities of children, you know, every child, say, for example, an 11-year-old is very different to a 16-year-old. But not all legislation does that. I’ve already talked about the cultural variations as to age thresholds, even for what is generally perceived as harmful content for children. Even within Europe, there are variations as to the age threshold for accessing porn, even under the sub-18 category. So, that’s where I stand on age assurance laws. I think we have toyed with the idea of self-regulation, especially in the online space, for more than three decades, and it hasn’t worked. And I think we need, what I’m saying is we don’t need new age assurance laws, we already have age assurance laws. What we need is workable things. legislation. And like you said, labelling and certification can be mandated by law or it could be a voluntary thing, but they obviously have to go hand in hand and complement legislation because I do believe that measures like certification and labelling gives more consumer choices, gives more parental or caregiver autonomy, also children autonomy, but that cannot be a substitute for it is how I feel about it.


Maarten Botterman: Thank you for that. Comments online on this subject, Torsten?


Torsten Krause: There are discussions online,


Maarten Botterman: but it’s skipped Jonathan and Sonja because they raised their hand. Other comments? There are comments, but not to this. Jonathan, please. Your thoughts on this.


Jonathan Cave: Okay. Thank you. Thank you very much. And thanks for that.


Maarten Botterman: I can’t hear you right now. Am I still inaudible? One moment. Yes. Jonathan? Yes. No.


Jonathan Cave: No. I’m still inaudible.


Maarten Botterman: I see the technical section working on it. Yeah. It will be the same issue for her because it’s settings here in the room. Can you say something? Something? Something? Say something. We can’t hear the online speakers. We can’t hear the online speakers. Okay. One moment. Yes. We can hear you now. You’re back.


Jonathan Cave: Okay. I’ll be very brief because of the technical delays. I think one of the things that we learned with age verification in relation to pornography is that the very existence of the single market or the global use of these technologies makes it very difficult to make differences. Even attempts to tackle the problem by regulating payment platforms because you couldn’t regulate content users on the platforms sort of failed because the content was coming from outside the jurisdiction, and the fact that it was banned within the jurisdiction merely increased the profitability or the price of external supplies of this kind of potentially harmful content. Another thing is that I completely agree that some mixture of self and co-regulation and formal regulation backed by a concept of a duty of care or harm-based regulation, something more accenting, is required to keep pace, not just with the evolution of technology and people’s understanding, but with how it reacts to existing, let’s say, bans and protections of regulations. And the final point was to say that we should probably also be aware of the fact that certification schemes and other forms of high-profile regulation can convey a false sense of security, but by the same token, it may be the case that some of the harms against which we regulate are really harmful because people have evolved away from the point where they’re vulnerable to them. And in that case, in that sense, I just point out that in relation to disinformation, misinformation, and malinformation, there’s evidence that a lot of the younger generations, Gen Alpha, Gen Z, in particular, are less vulnerable to these harms than their unrestricted, unregulated adult counterparts. And so that it may be that some of the harms we worry about cease to be harms or are no longer appropriate to be tackled by legal means. Okay, those are my comments.


Maarten Botterman: Thank you for that, Jonathan. Sonia, please.


Sonia Livingstone: Thank you. I wanted to acknowledge the conversation in the Zoom chat for the meeting, identifying the range of stakeholders involved. And so, of course, maybe we should have said at the very beginning that in facing this new challenge of age assurance in relation to IOT, a whole host of actors are crucial, they play a role, and there are difficult questions of balance which will vary by culture. So yes, we need to empower children and make sure that these services are child friendly, they speak to them, they’re understandable by them, we need to address parents, we need to address educators, and involve media literacy initiatives in exactly this domain. But there is, I wanted to make two points there. And one is, in that we can only educate people, the public, in school and so forth, insofar as the technologies are legible, insofar as people can reasonably be expected to. What are the harms, where the harms might come from, and then what are the levers, what are the kind of available resources for the public, the users, to address those. And we’re not there yet. And so on the question of balance, I think the spotlight for IOT is rightly on the industry, and on the role of the state, as Abhilash said, in bringing legislation. And on that point, we’ve been doing some work trying to make the idea of child rights by design real and actionable. We’ve been doing some work with the industry, the stakeholder group that is kind of most … And so I just want to open up the black box of industry a little bit. And because what we’re finding is that from the CEO, through the legal department, to the marketing department, the design department, the developers, the engineers, you know, all the different professionals and all the different experts that make up the development of a technology, for the most part, most are not aware of the child user who may or may not be at the end of the process. Most of them have a different kind of individual, not a family, that might share passwords and share technologies, and by and large is a relatively invulnerable or resilient user, rather than one with a range of vulnerabilities, including children that we’ve thought about. So I think let’s break up this, you know, look into this notion of the industry and think about where are ethical principles, our duty of care, our legal requirements and our child rights expectations, where will they land within a company, whether it’s a small startup that is completely hard-pressed and has no idea of these concerns, all the way through to an enormous company that has, you know, a lot of kind of trust and safety focus at a relatively unheard level in the organization, and a lot of engineers and innovators who are pushing forward without the kind of knowledge and the kind of awareness that we’re discussing today. So pointing to industry and governments regulating industry just, you know, opens up the next set of challenges about who and how to address these issues.


Maarten Botterman: Very well said, and also an excellent segue towards our next sections, because basically this is about maybe education of the equipment through AI, and for sure the capacity building to parents, children and environments, to which we will talk in the last session. I’d like to invite Patricia Arora to initiate explaining the role for AI that you see in this interaction. Thank you.


Pratishtha Arora: Yes, thank you for that. I think the impact of AI is kind of, you know, putting a lot of emphasis on children and the engagement on the devices. In terms of impact, I see both the positive and the negative in terms of positive, because it’s also a learning platform for many children who are maybe slow developers, you know, watching through videos and learning and learning and building their own capacities. It’s a good contrary when I talk about technology and the advancement of it and the impact on children. That is also a big challenge for them in terms of that children are being given all the rights to use that device without. What speaker children are using to call their parents, voice out what they feel like, engage with that device, which is also giving them that expression of freedom to learn on the aspect that, okay, the device is answering them back from the point that a child is asking a question. But there, the speaker or the device is unable to understand that if, what is the age of the child over there? So age assurance as a point over there that an old child asking the question or a 13-year-old or a 15-year-old, so that gap is not being identified as of now. So where I feel that, you know, these technological advancements is playing a very big role. On the point that how it is leading to a negative impact is that, you know, there is over dependence on these technological tools as well. Because for every small thing that we are going up to the device and asking a problem to solve, to solve that problem, and that is also leading to somewhere the development of the skills in terms of the physical development of a child, the mental development of a child as well. Because we are totally dependent upon what the device is responding to us. In terms of standards, I feel that, you know, we need to have more defined standards where children have access to devices and where the engagement of parents have a big role in terms of what PowerPoint and also from the point that we need to have other stakeholders involved from the industry perspective that they follow these rules when any device is being designed from a child perspective. Because developers, like what Sonia mentioned, that, you know, developers are not able to figure out that, you know, this device has been designed from a child’s perspective. So that thought to be ingrained to the point that any technology which is being designed needs to be child friendly as we are advancing with technology now. Of course, reinforcing on the point again and again that safety by design is a key concept. And in the future of particular point that all these aspects are taken into consideration that any child and every child is looked upon irrespective of their age, irrespective of their own skills and their own learning capabilities as well. So coming from India, I have experienced a varied range of, you know, observations of how technology is being used by children. And also how it is misleading them in terms of the engagement in the online spaces as well. While it is in the space of online gaming or it is in the space of, you know, social media interactions. Because somewhere or the other, the Internet of things or the devices, when we see that there is a development on one aspect with technology, there is the flip side of it on the misuse of technology. So we need to keep the right checks and balances. I think that’s what has been coming again and again in the conversation as well. We need to have the right checks and balances. Also because Internet of things is quite an alien term when we talk about it in India. It’s sad. We need to somewhere or the other, we need to also break down this concept of Internet of things to people to simplify the understanding of what exactly it means. Like when we talk about trust and safety, it is again an alien term. Somewhere we are advancing with the technology in the technological tools only with certain sector of people who are involved into this whole game of designing and developing tools. But for the larger or for the masses, it is still an alien concept. What is? How the standards need to be defined. That is why I think that’s the missing gap when we talk about child friendly or safety by design as a concept. Also now because somewhere or the other, technology has been knocking everybody’s door. So as a smart device or a device in terms of a phone is in everybody’s house. But in terms of having other gadgets is again more about what section of society engage. But larger is more about the difference in the economic backgrounds of the families. That it is more with the privileged section that they are encountering this problem and challenge about devices, the engagement over there. While the phone, a smart phone is in everybody’s house. I feel there, this is also global attention that a phone device which is in a capacity that any child can use because there has been an emphasis about that we need to have devices as well. So I think I’ll stop there. And with the last point that how data governance is playing a very big role over there. Because whenever you’re setting up any device and you’re giving out the data, you’re also ending up giving your child data. So there, what is the governance about data, the privacy aspect of children’s data over there?


Maarten Botterman: Okay. Thank you very much. Some good points made. Jonathan, I’ll leave the floor to you.


Jonathan Cave: Okay. Thank you. And thank you for that discussion. I just have a few points that are I think other points I’d like to introduce. I’m an educator. One of the things that AI does is that it not only facilitates education and children’s development in the ways that we normally understood it, but it also preempts or distorts it. It has an influence on the way people think. One of the aspects of this is that the AI devices that children use learn about the child, but in the process they also, as it were, to program the child and they teach the child. Now, one of the things they teach them is to rely on the system for certain kinds of things, outsourced our memory to our AI devices, and we will ask for things that in the past we would have thought about. When a child searches for information online or asks a question, in the past they would have had to read a book, for example. They would have read things that they weren’t specifically looking for, and they would have to think about them to develop an answer to the question. If the AI gets very good at simply answering the question that was asked, the educational aspect of that is somewhat lost, and the child on the device becomes in a certain sense deeper. The child becomes an interface that sets the device in motion. Now, this is something we have to deal with. We might say what we need to do is to prevent it, but my students say to me, with respect to the use of AI to write essays and so on, that it is a transferable skill that the world into which they will grow will make use of these technologies, and to learn how to use the technologies may be more important than learning to use the technologies to do the things that we used to ask them to do. So there is a question here, a deep question, I think, of experience is best for children to help them become the kind of adults who can successfully work in this environment. And there are some technical things we can do along the way, like developing specific or stratified large-language modules or even small-language models for specific children to use, or to use synthetic data or digital twins to put a sandbox around children’s experience of using the technologies. But I think that the general lesson is that these technologies used in this way to serve people, that if they’re oriented towards solving past problems, and developers often tend to do that, they need to be required to think consequences of what it is that they’re doing. And that requires a continuous conversation involving children, developers, parents, and the rest of society that doesn’t just stop when the device is released into use. And a final point on games. It’s certainly true that games, particularly immersive online games, have a kind of reality or salience to a person that is even greater than the salience of real experience. It can cut through to the way in which we think in ways that normal contact doesn’t always do that. We know that from neuroscience experiments as well as normal experience, which suggests that these games could be used actually to help people navigate this new world, to promote ethical development instead of sort of attenuating ethical and moral sensitivities among children. And then the final thing is that there is a difference, and this was very compellingly brought out by the experience from India. Of the technologies which are designed for or used for elites, whether they are privileged elites in the sense of money or trained elites who can navigate, understand the risks and benefits, and those same technologies when used by the drop, for example, and the uses become different, evolve away from those that the developers originally intended. So I think that is a fundamental issue that needs to be dealt with at the development and deployment level. So oh, then the final thing is to say that, of course, one of the things that AI can do is it can police the problems that AI creates. And so one of the things that one would expect a machine learning model, a deep neural net model to do is to keep track of how these technologies are changing our children and to respond. It’s, I don’t know, the solution to the problems created by AI is more AI. I would hesitate to actually endorse that because then we do give up our human agency. But those are sort of my concluding thoughts on AI in this respect.


Maarten Botterman: Thank you very much for that. Yeah. That play that is ongoing and is forming us with a clear call for being aware of dependency that may grow with AI, the safety for design by kids, from the outset, right, by design. The warning, the human agency warning, like, we may want to keep that in some way or another. Jonathan?


Jonathan Cave: Yeah. The other thing I would just add to that is Piaget’s dictum that play is a child’s work, that that sense of engaging with these technologies for play allows us to develop in ways that doing them in anger or for serious reasons do not. And there’s some really interesting work going on at Oxford on the difference on play as a state of mind when we engage with technologies. OK.


Maarten Botterman: Thank you so much. Person from the room?


Torsten Krause: No comment to the current block, but there was a discussion or a hint that it’s not only necessary that children understand how IoT works, what’s the functionality behind it, but also parents must know how it works and how it could influence their children. But maybe that’s an aspect we can add to the next block, too.


Maarten Botterman: Yes. We will come back to that in the end, because it’s not only about children and parents, but also the babysitters, but also social environments. So we will take that back to the end. Jutta, you want to? Other specific questions on the AI impact? I mean, it’s clear, right? We’re learning with AI, we’re learning from AI, but also AI is learning while we do it. And let’s make sure that it learns it in the right ways, taking into account the values that we share around the world. And that may be not all values that you’ve been getting from your parents, like the inclusiveness, the recognition and valuing of diversity, and human agency and privacy. I think these are some of the clear examples of values that we share. And for new tools, new developments to be taken into account is one thing. Let’s also keep in our mind, so what with all the old stuff that is already out there? And evolve that. Now, on standards, we heard about legislation, we heard about industry practice. And in a way, if we look to, for instance, electronic shocks, we’ve got the IEEE standards. It’s global standards. For internet standards, we’ve got the IETF standards that set certain rules. But they are voluntary, but they’re industry standards and they’re adopted and at least agreed and discussed around the world. And more of this is likely to come. Jutta, please.


Jutta Croll: Yes, if I may come in, since you got back to the standards, I was considering that Abelis was saying labeling certification schemes should be mandated. That would be the first step to go further with IoT, when we have that mandatory certification scheme or labeling. But then it also needs to be accepted. And one example that we know for about 20 years now is Section 508, that at that time demanded or obliged the US state’s administration to have accessibility as a precondition in procurement. So from that time on, any product that was bought by administration in the United States needed to be accessible for people with disabilities. Of course, the whole process of having a broad range of products that were certified to be accessible, and also it brought the prices down. The products got affordable because the administration was obliged to only buy those products that are accessible. And if we could come to that next stage, not only labeling and certification, but having it like a procurement precondition, that I do think would really help to bring forward the labeled IoT. Do you understand what I mean? It’s come to my mind that it’s a really good example that we could follow up with.


Maarten Botterman: Yes, I see Jonathan clap his hands because we talked a lot about this. And basically, we’ve got standards, we’ve got legislation. The problem about legislation is it’s per jurisdiction. And then you can start to harmonize across jurisdictions and it takes time. But at least if there’s principles that are globally recognized, you have something to look for. And an organization like IEEE, ITF, ISO, there’s a role in that. So very much, I know Jonathan is much more an expert in this than I am. And I see he raised his hands, Jonathan.


Jonathan Cave: Yes, thank you very much. That’s a brilliant point. The idea of using public procurement as a tool, as a sort of complement either to self-regulation or to formal regulation is, I think, one that’s worked in a number of areas. One of the things it can do, as Jotun mentioned, is to set a floor under the kind of capability that we wish to provide a stimulus, an economic stimulus to. The things have to be accessible. They have abilities. But it can also create a kind of direction of competition. So when you specify a procurement, part of it is the requirements that you put in that the proposed solutions have to provide. The other part are the things on which you provide advanced scores. And so procurement tenders written appropriately can also stimulate innovation to come up with better and more effective solutions. So there’s that aspect that puts money into developments that might not yet have a market home, that might not yet be profitable for people to provide, but which with certain development or certain economies of scale might become profitable. And you can do that without putting governments in the position of saying, this is what we need, because governments are particularly bad at picking winners and specifying solutions. But what they can do is to move the whole industry in the direction of providing these things. So that also happens, and the final point on this, when it comes to the adoption of standards within the procurement. European standards, although not developed by the EU, are often incorporated into public procurement tenders, with the idea being that you have to show compliance with the standard or equivalent performance. And that introduces into this market-based alternative to regulation, something which looks like an outcome-based or principles-based criterion as to what’s acceptable and what isn’t. So in other words, you either have to do the thing which is there in the code, you have to comply, or better still, you have to show that you can do better. And if you do that, you harness the inability to give the customer some say in the matter, and not just a negotiation or a procurement officer within a government bureaucracy. So I think it’s really profitable.


Maarten Botterman: Yes. A clear example on that is that, for instance, for internet security, there are standards that back up the flaws in the current system, like DNSSEC, like RPKI. These are standards that can be implemented. adopted by service providers. Now, these standards are, again, global, but they’re voluntary. For instance, in the Netherlands, the Dutch administration does include it in its standarding for services. And with that, they ensure that their service providers, that I as a citizen can also go to for those services, because the services get the basis. So that’s one of the examples. And yes, if government isn’t sure, at least they can help with the direction. So Utah, thanks for raising it. Jonathan, thanks for bringing it home. And you wanted to compliment on that.


Abhilash Nair: Yes, thank you. Just wanted to follow up on what you just said about laws mandating, said laws could rather than, you know, must always. I recognize there are some instances where it’s not possible. One thing, one other thing just to add to that is it might also help mitigate the literacy problem of parents or caregivers, because often policymakers assume that parents and caregivers are always educated and every child comes from a two-parent middle-class household. And that doesn’t happen, especially in countries with varying literacy rates, let alone digital literacy rates, with that kind of certification level. They might for children.


Jutta Croll: You gave the perfect segue to handing over to Sabrina, I would say. Yes. Start to go on.


Maarten Botterman: And yes, we also come with the remark from Dorothy online that says, there’s so many people who are not online yet. And how do we provide, make sure that they don’t miss the boat? So with the focus on, well, after what we can have technology do and develop with AI and standards, in the end, it’s about the people. And how can we make sure that people use it well? Sabrina, please.


Sabrina Vorbau: Yeah, thank you. Good afternoon, everyone. I kept quiet for the moment because I think it makes sense for me to come in at the very end, just to compliment on the various aspects that have been, so how we can indeed build this bridge of the information and the knowledge we have to the end users, which are, of course, in primary children and young people, but not exclusively also parents, caregivers, teachers, but also not to forget other stakeholders, the policymakers in the industry. I want to come in with a concrete example, representing the Better Internet for Kids initiative, which is funded by the European Commission under the Digital Europe program. The initiative aims to create a safer and more empowering online environment for children and young people. The goal is in the European Union, we have the Better Internet for Kids plus strategy that is based on three core pillars, child protection, child participation, and child empowerment. So also what was mentioned already, to try to empower children and young people to become agents of change, but in order for them to do this, they need us as an adult responsible space. And that, of course, is to promote, it’s the goal of the Better Internet for Kids initiative, to promote responsible use of the internet, protecting minors from online risks such as harmful and inappropriate content, but also to provide resources for parents, for educators, and other stakeholders to better support on aspects such as online safety and digital literacy. And, of course, Better Internet for Kids also addresses the very prominent topic of age assurance, to ensure that children and young people engage with age-appropriate content and are protected from harmful content. And to give some concrete examples of materials that you will find on the Better Internet for Kids portal, just earlier this year, together with University of Leiden in the Netherlands, we published a mapping of age assurance typologies and by a comprehensive report that gives an overview of different approaches of age assurance and the associated legal, ethical, and also technical considerations that were picked up by my fellow panelists. And just to scratch on some key areas, first of all, a diverse approach to age assurance, really the approach of there is no one-size-fits-all solution, the crucial importance of privacy and data protection concerns that were also already highlighted by Jonathan, and the balance and act of effectiveness and also user experience. As said, this is a very comprehensive research report, and as I mentioned with existing laws and policies, we need to ensure that this knowledge, this information is translated in user-friendly guidance, so we also transmit this expertise and the knowledge we have to educators, to parents that are really, really crucial in this process, and also how can we build capacity and make sure also this is properly implemented at a local level, and that’s why also on the Better Internet for Kids portal that you can find on betterinternetforkids.eu, we very much put age assurance in the spotlight, specifically focusing on two stakeholder groups. First of all, the educators and families really to provide resources to help proper awareness raising, but also knowledge sharing to foster digital literacy. I think this was also a comment that was given in the chat earlier, really how can we ensure proper media literacy education, and that’s why we developed an age assurance toolkit that includes age assurance explainers, and just to give you some examples of what users can find in the school toolkit, first of all explaining what is age assurance in the first place, and I think it was also mentioned other examples were given before age assurance might be a typology or a term that is not so accessible for many people. That’s why we also in this toolkit provide concrete examples of when age assurance comes. Why is it so important and how it actually can protect children? I think that’s also important for carers and parents to understand why is this topic so important? How can it protect my child? And in addition to this, so this is a resource, and as I said, I have a printed copy here, so you can see it’s a much lighter report. We also designed this together with children and young people. That’s also what we really try to work on these resources also together with the end users, because ultimately it’s for them, so it’s also very important to involve them in this process, and I think it’s always very eye-opening, because I think we are very much used to certain terminologies that is quite self-explanatory for us, but for some people it is not. It is not so accessible, so that’s also important to really follow this co-creation process, and then touched up on the group already on the black box of the industry. Of course, the Better Internet for Kids initiative, we really try to bridge the conversation and also have industry and policymakers around the table when we discuss certain topics. So on the website, we also have resources aimed at digital services providers to check their own compliance, and this has been done in the form of the self-assessment tool manual and questionnaire that we also developed in the Netherlands, and really the aim is here for the service provider to critically reflect on their services and how these may intersect with the protection of children and young people, but what is important here to note is, of course, that it only provides guidance, so it’s not a legal compliance mechanism, and here again, and it was mentioned also before, it’s not a one-size-fits-all when we talk about online service providers. We talked about the gate search engines, so I think we also need to acknowledge their diversity, and then maybe just to conclude, also to highlight on behalf of the European Commission that, of course, a lot of focus and work is done here in this space, and this is also complemented by the work the European Commission is doing on age verification. For example, following a risk-based principle, the European Commission, together with the EU member states, is also developing a European approach to age verification in the commission, is preparing for an EU-wide interpropyl and privacy-preserving short-term age verification solution before the European Digital Identity Wallet is offered as of 2026 in the European Union. So I think conversations like today are really, really important, really trying to pull the different strings and bringing different stakeholders together to work together and then hopefully in the future settings of the IGF we will have children and young people but also educators participating in such conversations because we really need them, we really need to understand what are their needs and for us then to act properly and as I said really build this bridge to share the knowledge we are having on the different aspects and really make sure this is translated properly at national and local level.


Maarten Botterman: Thank you very much, Sabrina. I know this is diverse by definition because it’s 29 member states that are all finding their way in this. Globally, this may be very good input. For me, experience that we have of capacity building in general is that we got examples from all over the world of good practice. We got legislatory examples, we got teaching examples, we got practical examples. But how to apply it best in your region is for the people in the region. This is why capacity building isn’t only on using the same guide around the world but it’s also about understanding the why’s and the how’s and make sure that it’s applied for the Indian region, for the different regions in Africa. Africa isn’t one region either and Latin America, et cetera. So you can adapt to that and learn from that. Same with the relationship with children, with parents, around the world. And we need to recognize that we can’t set one standard for all but we can have some principles that are valid for all. So with that, I see, Jutta, you grabbed the microphone.


Jutta Croll: Yes, I grabbed the microphone because I saw a comment or a question in the chat that I would like to address. And that was the question. Sabrina mentioned the EU ID wallet that has to come into place in 24-month time. But already the European Commission has acknowledged more time. And the EU ID wallet is an instrument for ID verification. So it’s more than age verification. But the wallet is also foreseen to make possible age verification only. So it needs to have an option that you can use it only to verify your age, not to give away your identity. And that is very important in regard of the privacy aspects, the data protection aspect that were already mentioned by Jonathan Cave. The question was whether the Commission is developing their own age verification tool. I would not say the Commission will develop it, but they have issued a tender in October this year for the development of an age verification instrument that should be white-labeled so that whatever age verification instrument is available in any country, Euro-global as well, it should have an open interface to that white-labeling tool that the Commission has issued a tender for. And they did so because they gave priority to age assurance. And they would not wait these 24 months for the EU ID wallet to come. So that also shows how important and how topical the thing is that we are talking about here. Age assurance is very topical, not only for the Commission. We’ve heard several sessions talking about that already here at the Internet Governance Forum. And we are pretty sure that train is put on rails, I would say.


Maarten Botterman: Thank you very much for that. So with that, I think we’ve had a pretty good cycle and I’d like to ask people also in the Zoom room if you have any final questions, or here in the room. If you have any final questions, raise it and then we’ll do a final round. Yes, I was looking at… Dorothy has been very active in the chat and Fabrice has been very active in the chat. Any of you wants to talk as well? Otherwise, we go to Sonja. As you’re not raising your hand. Sonja, please.


Sonia Livingstone: Thank you. I just wanted to make a point that hasn’t been perhaps as a political point. I’m very struck by how much the industry innovates in relation to complex and challenging technologies and then introduces them into the marketplace. We’re seeing this with AI now. Suddenly it pops up in all of our search and our social media in ways that were not necessarily asked for. And the same, of course, will happen with IoT. And then the worthy groups like us sit around and say parent leaders must do that. Of course, we want them to. But this is a major shift of innovation in commerce, placing an obligation on ordinary people and on the public sector. And it is, you know, I think this is why the conversations about regulation, certification, standards and obligations on industry are really so crucial because otherwise the burden of making a profit on one side really does fall on those who are already extremely hard pressed. So let’s keep up the pressure on the industry without in any way undermining the argument that of course media literacy and public information and awareness are crucial.


Maarten Botterman: Yes, thank you very much for that. Of course, regulatory innovation is also a point that approaches within Europe. I’m European too. Where the European Commission, for instance, with the AI Act, without immediately going to regulate, first invites the industry to come with, so what should we talk about? What should we regulate? How should good practice look like? And regulation doesn’t only need to be from the countries, but it can also be from industry and self-regulation. Jonathan, yes, please. I learned this from Jonathan. Please, Jonathan.


Jonathan Cave: I just wanted to applaud Sonja’s point really because a lot of these things, there is a transfer of responsibility from industry, well, from the developers of the tech part of the industry to the service providers and the comms providers and the others who are already regulated and from them to us. To a certain extent, society is being used as a kind of beta tester or alpha tester for these technologies. They’re spat out, and the ones that succeed, succeed, and the ones that don’t, don’t. Maybe they grow a regulatory structure around them to make them more robust, but the irreversible changes that take place will nonetheless have taken place and cannot be undone, even if we later come to regret them. And so some element of, A, a precautionary principle, and B, an appropriate placement of responsibility should be important. And when I say appropriate placement, these things are uncertain. So where responsibility lies should be some mixture of being able to understand the uncertainties, being able to do something about them, and being able, in particular financially, evive the disruption involved in getting from where we are now to a solution that we can not only live with, but can sort of accept and understand. And I think simply provide and protect or responding to industry by shoring up the crash barriers and so on encourages industry to take less and less responsibility for the consequences of what it is that they do or to define them in narrower and narrower and more technocratic terms and to say this is safe because in lab tests it works out safely. We saw this with medicine. This is why real-world evidence in the use of drugs is so important. They may survive a randomized clinical trial, but put them in the real world and they don’t work like that. So there needs to be some way of joining this up so that industry at all levels, people and government are partners in something and not people sitting on a predefined responsibility. So anyway, thanks for making it political as it may have been.


Maarten Botterman: Thank you very much for that. We’ve got a lady. Can you introduce yourself in the room?


Helen Mason: Thank you. And thank you for a very interesting session, which I unfortunately came a little late to. But nevertheless, I’m picking up on a few.


Maarten Botterman: What is her name?


Helen Mason: My name is Helen Mason. I’m from Child Helpline International in the Netherlands. We work in 132 countries to provide child helplines. 24-7 to children and young people via a variety of channels and I think two points really, I would say that we must include civil society and the first-line responders in these kinds of discussions because people that are actually talking to children and young people and dealing with reports of harms that have happened online and building the capacity of those frontline responders is absolutely crucial in being able to respond adequately and report and know where to report to to have proper alliances and referral protocols with law enforcement for example with regulators so our work at Child Helpline International is really advancing this particular aspect to make sure that all of our members are well equipped to respond to all kinds of incidences that might occur online and we have much data that shows an increase for example in areas like extortion children and young people not knowing where they should report is there a crime committed what should they do next should they delete the the evidence etc so having those frontline responders you know capacitated to be able to respond adequately is really vital for us one more point I want to make as well is that the data that is generated by the child helplines themselves as a result of the conversations they have with children young people is really a unique resource so I would really encourage all stakeholders to have a look at that information that we collect it’s around prevalence it’s around help-seeking behavior it’s around trends around and just a case material we have it has a lot of information about the actual experiences of children and young people of course it’s all done very safely and anonymized and working together with people like Sonia we can really use this information to feed back into policy and I’d really encourage all of the stakeholders to take a look at the information that we’re publishing online thank you


Maarten Botterman: thank you so much for your remark with that please


Abhilash Nair: thank you thanks for those comments very useful indeed thank you I just wanted to follow up on what Sonia said about corporate liability or imposing obligations on the industry we discussed this at a different panel yesterday I wasn’t on the panel and I’ll hear about to what extent should that liability extend and who should be held accountable for should it just be financial penalties or should executives be sent to prison for draws negligence and other lack of action and I wondered if you had any thoughts on that Sonia because I think I don’t think it is for want of obligations on websites or platforms or providers that things haven’t worked so far financial penalties are sometimes too little even if they sound like a lot of money for the average person in the street for the large company tech company in particular it’s not a lot of money would introducing criminal sanctions for corporate executives make a difference it’s just a thought rather than a question really


Maarten Botterman: yes thank you thank you for that I think with this all and this is one of the reasons several remarks throughout the session on companies will behave when they know it’s paid attention to I think I dare to hope I’m an optimist that some companies really care and do it from the outset and I know there are companies who do and I think these will be the companies from the long run not from the short run profit there’s so much but accountability in that is key what to be accountable for is the thing that we need to be clearer on we’re not gonna mandate from this little group what a parent may or may not say to his child what an industry may or may not put on the market what a child may or may not do but we can help by finding making clear what what to be taken into account and and the capacity development around the world is important in that we discussed from early on that well if you protect children by not allowing to use any of it at some point they will be allowed to use and then we dived in the dip and this is the same problem we see for instance of internet access in Africa the biggest challenge in Africa is to get online but as soon as you’re online you’re in trouble and have a lot of opportunities so you need to be aware before that same is true for children capacity development is also for all stakeholders legislators administrations let’s let’s companies I mean also for another example there is the what in Europe is called the corporate sustainability reporting directive it’s to make companies aware of the ecological footprint of what they do and with that move slowly towards more responsible behavior this is something that here should be an obvious thing to certain there is legislation you cannot harm children let’s make sure that understood what does it means in the context of the new digital world as well and last but not least of course it’s also the ability to act is something that need to be brought in and that need to be brought in together in reasonable ways so this is from the IOT perspective also a very important part of how you deal with things users etc but I really appreciate it to again after a couple of years work together with Utah and you all on something where this comes together because in the end technology is also for the people and to serve the people is what I and my colleague Jonathan tend to believe go to Utah and then to Jonathan and Sonia for the last word and Musa okay and you’re still lost so so after my attempt to round off we will now open up again and then Utah will round off Musa I’m unmuting you please go ahead yes we’re listening and we’re even hearing you


Musa Adam Turai: okay my question is sorry I can’t very let you the problem my question is regarding the okay what can defendants of free expression in these regions address the tension between the protection cultural and religious burden of holding the universal of holding the universal holding the universal rights and to free speech particularly in deeply conservative societies I listening


Maarten Botterman: yes I’m listening I’m I’m trying to comprehend the question that is behind this remark okay how can be how can you get it Jonathan you got it yeah please answer and then continue with your your final statement


Jonathan Cave: okay yes if I understand the question well there is a tension also between free speech rights and in particular the exercise of those rights by children and the need to protect children not only in their own rights of self-expression but from the harmful consequences of this self-expression of other in societies where freedom of speech is heavily restricted that you have freedom of speech but only in certain directions and certainly yeah and certainly in surveillance I mean take the right to be forgotten for example when I was very young I said very many intemperate politically intemperate things later on in my life I went through a period where I was very glad that those things had been forgotten and then later on still I came to a point where it was very important to me in my image of myself that it be those things but fortunately the consequences for me were sort of non-existent or minimal but we have seen that the consequences can be very great and what that says to me is that when we talk about child safety and child protection it is not just protection from the content that they see online but from the social legal political terrorist whatever consequences of using those online platform it’s an issue there where the safety goes beyond the safety within the online environment so I I get that point it’s a hard question I don’t have an answer for it of course what I wanted to say by way of rounding off was really just on this last point about how we make corporations and actually governments pay serious and sustained attention to these issues now I remember that in the antitrust environment when the u.s. passed the Sherman antitrust act the liability on a company that broke competition was only on the company as economic person. It was only with the Clayton Act, where personal liability was brought in, that the big trusts began to sit up and pay attention and change their behavior. So that personal liability does make a difference. In the Guardian today, there was a call for companies to be held responsible. This is the second aspect of this, not just for the harms that they have done in the past, but also for producing improvements into the future. What we’re seeing today with things like the Grenfell Inquiry or the Post Office Inquiry is that when something goes wrong, people are held to account. They’re supposed to stand by themselves and say, we’re sorry, we’ve learned lessons, and so on, until the next thing comes along. This doesn’t really help. It doesn’t really help when the problems are systemic and when the problems cannot be remedied by somebody saying, I’m sorry, or paying an amount of money to somebody for something. We need something that is more continuously engaging. And finally, it is commonly the case, as we saw with Paula Ventles in the Post Office Inquiry, that the people who are supposed to bear the responsibility evade that responsibility or that they didn’t have the information. Now, in many criminal contexts, this has been this concept of what I knew when I took the action is replaced by something which said, you were sitting in this position of responsibility. You had certain privileges, like a universal service obligation. This is what you knew or what you should have known. And if you don’t, if you’re not aware of these things, that by itself is a black mark against you. And it’s only the fact that these things went wrong that caused the light of day to shine upon that. So I think that we should take the issue more seriously. And with politicians, this happens. They come into office. They say things about children and the online risk of their office. And the box has been ticked as far as the newspapers are concerned, but it doesn’t become part of the culture that the safety of children doesn’t become the kind of cultural value on which we act. It actually changes what we do when we have new decisions to make. Okay, so that’s my call to arms. And now I’ll shut up. Thank you.


Maarten Botterman: Thank you very much. As a certified board director myself, I must say, I’ve seen that ongoing trend. And I know I’m personally liable for not doing the right things, not asking the right questions within reason. If I exert my fiduciary duties in the right way, then I can make mistakes too. But I fully appreciate your point. And that attention, that’s a crucial point. Also, call for boards to be aware of what they tell their CEO to do. Make more money or make sure that you do it in the right way. So I see a lot of nodding heads here. And I even see Sonja’s smile. So Sonja, to you, and then the last word to Jutta, please.


Sonia Livingstone: Brilliant. Thank you very much. Lots of really great things said. I wanted to say something, come back to the question of children, the way in which their rights can be heard and acknowledged. So users, the word user is a really problematic word. And I think if we talk about users, we can quickly forget there are children. So by and large, in relation to IoT and other innovations, by and large, children are not the customer. They don’t pay. They don’t always set up the profile, especially for IoT. They don’t seek remedy unless we scaffold that. They don’t bring lawsuits. They don’t get to speak at the IGF. You know, they are kind of uniquely dispossessed in relation to these debates. And yet they are one in three of the world’s Internet users. One in three also of the world’s population. If I continue my statistics, you know, one in three of the users are children, one in three are women and one in three are men. And we have to rethink who the user is and recognise their diversity. I think my last word might be to mention, as hasn’t yet been mentioned, that in General Comment 25, the UN Committee on the Rights of the Child has set out how the Convention on the Rights of the Child applies to the digital environment, including to IoT, to the different technologies, including to a whole range of digital innovations. And in so doing, it maps out and tries to look within the industry and all of those who provide the checks and balances around the industry, as well as speaking to the state. So, I think what we’re saying when we want companies to be aware or board members to tell their CEO or perhaps executives to get arrested when they land at Heathrow, wherever, I think we’re trying to recognise that there are people within this sector and very many agents who can be part of the process of making things better. And I would include those in the engineering schools who are training tomorrow’s engineers. And the data scientists who think they’re just processing anonymised data. It has nothing to do with them. And the marketers who are creating a certain vision of the user and how it might be used when they promote the technology. And so on and so on. You know, great that we’ve talked about procurement, which I think is really critical. And so, I would like the next session at the IGF, if I may be so bold on this topic, to include representation and the voices of children and young people in the room. And to begin with a more disaggregated vision, both of children, but also of the actors who are shaping this technology of the future. Thank you.


Maarten Botterman: Thank you for very beautifully said.


Jutta Croll: Thank you for giving me the last word. I don’t think I need to do any more wrapping up because everything has already been said. Just to put real what Sonja said at the beginning, that children are being impacted by Internet of Things, even though they might not be the users as we understand users so far. But if developers have that in mind, I do think that is very important. And to reflect on Jonathan saying what he was saying about the politicians, I just need to mention that yesterday, we had for the first time ever in 19 years of Internet for Children’s Rights. And we only have five high level sessions set by the United Nations. And one of these five sessions was set for children’s rights in the digital environment. So, awareness is raised. We have come a long way and we have a long way to go. But still, these are steps that are milestones, I do think. And people will remember that and we will bring it forward to Norway next year. Thank you so much for being here, for listening and for taking part. Thank you.


Maarten Botterman: Yes, thank you very much. I just want to applaud this too. Thank you so much, everybody, for attending and for contributing in any way, shape or form. And really appreciate the session, not only as a DCIoT person, but a father and even a grandfather. So, see you around. Yeah. That was good. We can refer to it. We can refer to it. This may be the future of dynamic emissions. I mean, we plan to go for another 30 flights next year. So, that’s what I would need to know. I’m comfortable. Well, if I’m the current principal of the Dynamic Emissions Bureau, we know so much about it, but we haven’t heard any more of it. It’s true, but we saw good deploying. Next year we saw more jamming. Hopefully this will help with that today. For the next two or three months, we will be exclusively the Dynamic Emissions Bureau, and no spayed applications for Dynamic Emissions projects. Most recently, we adished the management of our small business. This year we started out with a new startup company and Dynamic Emissions came out with the idea of raising the speed of our businesses.


J

Jonathan Cave

Speech speed

148 words per minute

Speech length

3517 words

Speech time

1421 seconds

Data collection can be both beneficial and harmful to children

Explanation

Jonathan Cave points out that data collected from children can be a source of both safety and potential harm. The immediate and long-term effects of exposure to inappropriate content or manipulation through profiling are concerns.


Evidence

Example of privacy laws setting age limits for data collection and processing.


Major Discussion Point

Age-aware IoT and data governance


Static age limits may not be appropriate given evolving capacities of children

Explanation

Cave suggests that using chronological age as the sole basis for protecting children online may not be the most appropriate approach. Children’s digital maturity and experience with online environments should be considered.


Major Discussion Point

Age-aware IoT and data governance


Agreed with

Sonia Livingstone


Pratishtha Arora


Agreed on

Need for age-appropriate design in IoT and AI


Differed with

Sonia Livingstone


Differed on

Approach to age verification and assurance


AI can both facilitate and potentially distort children’s development

Explanation

Cave discusses how AI can aid in children’s education and development, but also potentially distort it. He points out that AI devices not only learn about the child but also ‘program’ the child in certain ways.


Evidence

Example of how AI answering questions directly may reduce the educational aspect of children searching for information themselves.


Major Discussion Point

Role of AI in age-aware IoT


S

Sonia Livingstone

Speech speed

145 words per minute

Speech length

2061 words

Speech time

847 seconds

Need to consider broader child rights beyond just privacy and safety

Explanation

Livingstone emphasizes that a child rights approach should consider more than just privacy and safety. She argues for a holistic approach that includes rights such as access to information, participation, and appropriate provision.


Evidence

Mentions the UN Convention on the Rights of the Child and the concept of best interests of the child.


Major Discussion Point

Age-aware IoT and data governance


Differed with

Jonathan Cave


Differed on

Approach to age verification and assurance


Importance of consulting children in design of technologies and policies

Explanation

Livingstone stresses the importance of involving children in the design and development of technologies and policies that affect them. This ensures that children’s perspectives and needs are taken into account.


Major Discussion Point

Age-aware IoT and data governance


Agreed with

Pratishtha Arora


Jonathan Cave


Agreed on

Need for age-appropriate design in IoT and AI


Need to place more responsibility on industry rather than users

Explanation

Livingstone argues that the burden of ensuring safety and appropriate use of technology should not primarily fall on users, especially children and parents. She emphasizes the need for industry to take more responsibility in this area.


Major Discussion Point

Corporate responsibility and regulation


Differed with

Sabrina Vorbau


Differed on

Focus of responsibility in ensuring child safety online


Children are often overlooked as stakeholders in tech development

Explanation

Livingstone points out that children are often not considered as primary stakeholders in technology development, despite being one-third of internet users globally. She argues for greater recognition of children’s diverse needs and experiences in tech development.


Evidence

Statistic that one in three of the world’s Internet users are children.


Major Discussion Point

Corporate responsibility and regulation


M

Maarten Botterman

Speech speed

129 words per minute

Speech length

3732 words

Speech time

1727 seconds

Labeling and certification can empower users to make informed choices

Explanation

Botterman argues that labeling and certification of IoT devices can help users understand what they are buying and using. This transparency enables users to make more informed decisions about the technology they adopt.


Evidence

Examples of past issues with IoT devices, such as default passwords and undisclosed data sharing.


Major Discussion Point

Labeling and certification of IoT devices


Agreed with

Jutta Croll


Abhilash Nair


Agreed on

Importance of labeling and certification for IoT devices


J

Jutta Croll

Speech speed

129 words per minute

Speech length

1120 words

Speech time

518 seconds

Public procurement can be used to drive adoption of standards

Explanation

Croll suggests that using public procurement as a tool can encourage the adoption of standards for IoT devices. By making certain standards a requirement for government purchases, it can stimulate the market for compliant products.


Evidence

Example of Section 508 in the US, which required accessibility features in products purchased by the government.


Major Discussion Point

Labeling and certification of IoT devices


Agreed with

Maarten Botterman


Abhilash Nair


Agreed on

Importance of labeling and certification for IoT devices


A

Abhilash Nair

Speech speed

143 words per minute

Speech length

1200 words

Speech time

502 seconds

Certification could help mitigate literacy issues for parents/caregivers

Explanation

Nair suggests that certification of IoT devices could help address literacy issues among parents and caregivers. This would make it easier for them to understand and manage the technology their children are using, regardless of their educational background.


Major Discussion Point

Labeling and certification of IoT devices


Agreed with

Maarten Botterman


Jutta Croll


Agreed on

Importance of labeling and certification for IoT devices


P

Pratishtha Arora

Speech speed

137 words per minute

Speech length

1038 words

Speech time

453 seconds

Need to consider impacts on children who may not be direct users of IoT devices

Explanation

Arora points out that IoT devices can impact children even when they are not the primary users. This includes situations where children are in environments with IoT devices, such as smart homes or connected cars.


Major Discussion Point

Role of AI in age-aware IoT


Importance of developing age-appropriate AI models and interfaces

Explanation

Arora emphasizes the need for AI models and interfaces that are appropriate for different age groups. This involves considering children’s varying levels of understanding and maturity when designing AI-powered IoT devices.


Major Discussion Point

Role of AI in age-aware IoT


Agreed with

Sonia Livingstone


Jonathan Cave


Agreed on

Need for age-appropriate design in IoT and AI


S

Sabrina Vorbau

Speech speed

137 words per minute

Speech length

1178 words

Speech time

513 seconds

Need to translate research into user-friendly guidance for parents/educators

Explanation

Vorbau stresses the importance of making research findings accessible to parents and educators. This involves creating user-friendly resources that help adults understand and navigate the complexities of children’s online experiences.


Evidence

Example of the Better Internet for Kids initiative developing toolkits and resources for educators and families.


Major Discussion Point

Capacity building and awareness


Differed with

Sonia Livingstone


Differed on

Focus of responsibility in ensuring child safety online


Importance of involving children/youth in developing resources

Explanation

Vorbau highlights the value of involving children and young people in the creation of resources about online safety and digital literacy. This ensures that the materials are relevant and understandable to their target audience.


Evidence

Mention of co-creation process with children and young people in developing resources for the Better Internet for Kids initiative.


Major Discussion Point

Capacity building and awareness


H

Helen Mason

Speech speed

175 words per minute

Speech length

375 words

Speech time

128 seconds

Civil society and frontline responders should be included in discussions

Explanation

Mason argues for the inclusion of civil society organizations and frontline responders in discussions about children’s online safety. These stakeholders have direct experience with children’s issues and can provide valuable insights.


Evidence

Example of Child Helpline International’s work in 132 countries providing support to children.


Major Discussion Point

Capacity building and awareness


Data from child helplines is a valuable resource on children’s experiences

Explanation

Mason points out that data collected by child helplines can provide unique insights into children’s online experiences and challenges. This information can be valuable for policymakers and researchers.


Evidence

Mention of increasing reports of online extortion and children not knowing where to report issues.


Major Discussion Point

Capacity building and awareness


M

Musa Adam Turai

Speech speed

84 words per minute

Speech length

60 words

Speech time

42 seconds

Tension between free speech rights and child protection in some societies

Explanation

Turai raises the issue of balancing free speech rights with child protection, particularly in conservative societies. This highlights the cultural and societal differences in approaching online safety for children.


Major Discussion Point

Corporate responsibility and regulation


Agreements

Agreement Points

Need for age-appropriate design in IoT and AI

speakers

Sonia Livingstone


Pratishtha Arora


Jonathan Cave


arguments

Importance of consulting children in design of technologies and policies


Importance of developing age-appropriate AI models and interfaces


Static age limits may not be appropriate given evolving capacities of children


summary

The speakers agree on the importance of considering children’s evolving capacities and involving them in the design process to ensure age-appropriate IoT and AI technologies.


Importance of labeling and certification for IoT devices

speakers

Maarten Botterman


Jutta Croll


Abhilash Nair


arguments

Labeling and certification can empower users to make informed choices


Public procurement can be used to drive adoption of standards


Certification could help mitigate literacy issues for parents/caregivers


summary

The speakers agree that labeling and certification of IoT devices can empower users, drive adoption of standards, and help address literacy issues for parents and caregivers.


Similar Viewpoints

Both speakers emphasize the need for a more nuanced approach to children’s rights and protection online, considering their evolving capacities rather than relying solely on static age limits.

speakers

Sonia Livingstone


Jonathan Cave


arguments

Need to consider broader child rights beyond just privacy and safety


Static age limits may not be appropriate given evolving capacities of children


Both speakers advocate for including diverse stakeholders, particularly children and those working directly with them, in discussions and decision-making processes related to online safety and technology design.

speakers

Sonia Livingstone


Helen Mason


arguments

Importance of consulting children in design of technologies and policies


Civil society and frontline responders should be included in discussions


Unexpected Consensus

Corporate responsibility in technology development

speakers

Sonia Livingstone


Jonathan Cave


Abhilash Nair


arguments

Need to place more responsibility on industry rather than users


AI can both facilitate and potentially distort children’s development


Certification could help mitigate literacy issues for parents/caregivers


explanation

Despite coming from different perspectives, these speakers unexpectedly converged on the idea that the tech industry should bear more responsibility for ensuring safe and appropriate technology for children, rather than placing the burden primarily on users or parents.


Overall Assessment

Summary

The main areas of agreement include the need for age-appropriate design in IoT and AI, the importance of labeling and certification for IoT devices, and the necessity of involving diverse stakeholders in discussions and decision-making processes.


Consensus level

There is a moderate to high level of consensus among the speakers on key issues. This consensus suggests a growing recognition of the complexities surrounding children’s rights in the digital environment and the need for multi-stakeholder approaches to address these challenges. The implications of this consensus could lead to more collaborative efforts in developing age-aware IoT solutions and more comprehensive policies that consider children’s evolving capacities and rights.


Differences

Different Viewpoints

Approach to age verification and assurance

speakers

Jonathan Cave


Sonia Livingstone


arguments

Static age limits may not be appropriate given evolving capacities of children


Need to consider broader child rights beyond just privacy and safety


summary

While Cave emphasizes the limitations of static age limits, Livingstone advocates for a more holistic approach considering various child rights beyond age verification.


Focus of responsibility in ensuring child safety online

speakers

Sonia Livingstone


Sabrina Vorbau


arguments

Need to place more responsibility on industry rather than users


Need to translate research into user-friendly guidance for parents/educators


summary

Livingstone emphasizes industry responsibility, while Vorbau focuses on empowering parents and educators with user-friendly guidance.


Unexpected Differences

Role of AI in children’s development

speakers

Jonathan Cave


Pratishtha Arora


arguments

AI can both facilitate and potentially distort children’s development


Importance of developing age-appropriate AI models and interfaces


explanation

While both speakers discuss AI’s impact on children, Cave unexpectedly highlights potential negative effects on development, whereas Arora focuses more on the need for age-appropriate design without explicitly addressing potential distortions.


Overall Assessment

summary

The main areas of disagreement revolve around the approach to age verification, the distribution of responsibility between industry and users, and the role of AI in children’s online experiences.


difference_level

The level of disagreement among speakers is moderate. While there is general consensus on the importance of protecting children online, speakers differ in their proposed approaches and emphasis. These differences reflect the complexity of the issue and suggest that a multifaceted approach, incorporating various perspectives, may be necessary to effectively address age-aware IoT and child protection online.


Partial Agreements

Partial Agreements

All speakers agree on the need for more nuanced approaches to child protection online, but differ in their proposed solutions: Cave suggests moving away from static age limits, Livingstone advocates for a broader rights-based approach, and Botterman proposes labeling and certification as tools for informed decision-making.

speakers

Jonathan Cave


Sonia Livingstone


Maarten Botterman


arguments

Static age limits may not be appropriate given evolving capacities of children


Need to consider broader child rights beyond just privacy and safety


Labeling and certification can empower users to make informed choices


Similar Viewpoints

Both speakers emphasize the need for a more nuanced approach to children’s rights and protection online, considering their evolving capacities rather than relying solely on static age limits.

speakers

Sonia Livingstone


Jonathan Cave


arguments

Need to consider broader child rights beyond just privacy and safety


Static age limits may not be appropriate given evolving capacities of children


Both speakers advocate for including diverse stakeholders, particularly children and those working directly with them, in discussions and decision-making processes related to online safety and technology design.

speakers

Sonia Livingstone


Helen Mason


arguments

Importance of consulting children in design of technologies and policies


Civil society and frontline responders should be included in discussions


Takeaways

Key Takeaways

Age-aware IoT needs to consider children’s evolving capacities rather than using static age limits


Labeling and certification of IoT devices can empower users to make informed choices


AI in IoT can both facilitate and potentially distort children’s development


Capacity building and awareness efforts should involve children/youth and translate research into user-friendly guidance


There is a need to place more responsibility on industry rather than users for child safety in IoT


Children are often overlooked as stakeholders in tech development despite being 1 in 3 internet users


Resolutions and Action Items

Involve children and young people in future IGF sessions on this topic


Develop more user-friendly guidance on age assurance for parents and educators


Consider using public procurement to drive adoption of child safety standards in IoT


Unresolved Issues

How to balance free speech rights with child protection, especially in conservative societies


Extent of corporate liability and accountability for child safety issues in IoT


How to effectively implement age assurance across different cultural contexts


How to ensure IoT benefits reach children who are not yet online


Suggested Compromises

Use age brackets rather than hard age limits to allow for flexibility in maturity levels


Develop ‘white-labeled’ age verification tools that can interface with different systems


Balance precautionary principle with allowing children to learn to navigate online risks


Thought Provoking Comments

We need to stay aware that the static perspective of protecting people on the basis of age may not be the most appropriate, and we need to stay aware of that.

speaker

Jonathan Cave


reason

This challenges the conventional approach of using chronological age as the sole basis for online protection measures.


impact

It shifted the discussion towards considering more nuanced, evolving approaches to protecting children online based on their digital maturity rather than just age.


A child rights landscape always seeks to be holistic. So, privacy is key, as has already been said. Safety is key, as has already been said. But the concern about some of the age assurance solutions is that, as Jonathan just said, they introduce age limits. And so, there are also costs, potentially, to children’s rights, as well as benefits.

speaker

Sonia Livingstone


reason

This comment broadens the perspective beyond just safety and privacy to consider the full spectrum of children’s rights.


impact

It prompted a more comprehensive discussion of the trade-offs involved in age assurance technologies and their potential impacts on children’s rights and development.


Denying access will always encourage teens to look at work around, to not engage in dangerous behavior because they have no guidance. Why do we not put more emphasis on media information literacy, so that users understand how to protect themselves?

speaker

Doherty Gordon


reason

This comment challenges the effectiveness of access restriction approaches and suggests an alternative focus on education.


impact

It sparked discussion about the importance of digital literacy and education as complementary or alternative approaches to technical solutions for online safety.


The idea of using public procurement as a tool, as a sort of complement either to self-regulation or to formal regulation is, I think, one that’s worked in a number of areas.

speaker

Jonathan Cave


reason

This introduces a novel policy approach to incentivizing industry compliance with safety standards.


impact

It shifted the conversation towards considering economic incentives and government purchasing power as tools for promoting child-safe technologies.


Users, the word user is a really problematic word. And I think if we talk about users, we can quickly forget there are children. So by and large, in relation to IoT and other innovations, by and large, children are not the customer. They don’t pay. They don’t always set up the profile, especially for IoT. They don’t seek remedy unless we scaffold that. They don’t bring lawsuits. They don’t get to speak at the IGF.

speaker

Sonia Livingstone


reason

This comment highlights how children are often overlooked in discussions about technology users and policy.


impact

It prompted reflection on the need to explicitly consider children’s perspectives and interests in technology development and policy discussions.


Overall Assessment

These key comments shaped the discussion by broadening its scope beyond simple age-based protections to consider more holistic approaches to children’s rights in the digital world. They challenged participants to think about the complexities of balancing protection with other rights, the role of education and literacy, economic incentives for industry compliance, and the importance of explicitly considering children’s perspectives in technology development and policy. The discussion evolved from focusing on technical solutions to exploring a multi-faceted approach involving education, policy, industry incentives, and children’s participation.


Follow-up Questions

How can we ensure proper media literacy education to help users understand how to protect themselves online?

speaker

Doherty Gordon (audience member)


explanation

This is important to empower users, especially children and teens, to navigate online risks safely rather than relying solely on access restrictions.


How can we develop age assurance technologies that themselves respect children’s rights?

speaker

Sonia Livingstone


explanation

This is crucial to ensure that solutions meant to protect children’s rights online do not inadvertently violate those rights in the process.


How can we develop more flexible approaches to age verification that account for children’s evolving capacities rather than relying on strict age limits?

speaker

Sonia Livingstone and Jonathan Cave


explanation

This is important to create more nuanced and effective protections that align with children’s actual developmental stages rather than arbitrary age cutoffs.


How can we ensure that age assurance and online safety measures account for children who may not be the primary user or customer of a service but are still impacted by it?

speaker

Sonia Livingstone


explanation

This is crucial to protect children who may be indirectly affected by IoT and other technologies, even if they are not the intended users.


How can we better incorporate the perspectives and experiences of children and young people into discussions and policymaking around online safety and IoT?

speaker

Sonia Livingstone


explanation

This is important to ensure that policies and technologies are truly responsive to children’s needs and experiences.


How can we address the tension between protecting children online and upholding rights to free expression, particularly in conservative societies?

speaker

Musa Adam Turai (audience member)


explanation

This is important to balance child protection with other fundamental rights across different cultural contexts.


How can we create more effective corporate accountability measures for online child safety that go beyond financial penalties?

speaker

Abhilash Nair and Jonathan Cave


explanation

This is crucial to ensure that companies take their responsibilities towards child safety seriously and make it a core part of their operations.


How can we better integrate civil society organizations and frontline responders into discussions and policymaking around online child safety?

speaker

Helen Mason (Child Helpline International)


explanation

This is important to ensure that policies and technologies are informed by real-world experiences and data from those directly working with affected children.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #6 Bridging Digital Gaps in Agriculture & trade Transformation

WS #6 Bridging Digital Gaps in Agriculture & trade Transformation

Session at a Glance

Summary

This discussion focused on the use of innovative technology, specifically the Internet Backpack, to provide internet connectivity to rural and underserved communities in Africa. The panel, comprising experts in internet governance, policy, and technology, explored how this device could support agricultural development, education, and economic growth across the continent.

Dr. Lee McKnight introduced the Internet Backpack, explaining its ability to provide connectivity through cellular, satellite, and mesh networks, along with sustainable power solutions. The device was designed to connect up to 25 users simultaneously, though field tests have shown it can support over 35 users. Panelists emphasized the importance of community ownership and involvement in implementing such technologies.

The discussion highlighted the potential of the Internet Backpack to support various sectors, including agriculture, education, and healthcare. Kwaku Antwi noted how improved connectivity could enhance agricultural value chains and market access for farmers. Mama Mary stressed the importance of digital skills development and community networks in rural areas.

Panelists also addressed regulatory and policy considerations, with Dr. Jimson Olufuye emphasizing the role of the private sector in expanding connectivity. The potential use of universal service funds to support such initiatives was discussed. The conversation touched on the African Continental Free Trade Agreement and how improved connectivity could facilitate its implementation.

Throughout the discussion, speakers emphasized the need for collaborative, multistakeholder approaches to expanding internet access in Africa. They highlighted the importance of building local capacity and ensuring that technologies like the Internet Backpack align with community needs and contexts. The panel concluded by calling for continued efforts to bridge the digital divide and leverage internet connectivity for sustainable development across Africa.

Keypoints

Major discussion points:

– The Internet Backpack as a solution for providing connectivity to rural and underserved communities, especially in Africa

– The importance of internet access for agriculture, education, healthcare, and economic development in rural areas

– The role of community networks and multi-stakeholder collaboration in expanding internet access

– Technical capabilities and use cases of the Internet Backpack technology

– The vision for local manufacturing and capacity building for internet technologies in Africa

Overall purpose/goal:

The discussion aimed to explore how innovative technologies like the Internet Backpack can help expand internet connectivity in rural Africa, supporting agricultural development, education, and economic growth. The speakers sought to highlight the importance of community-driven approaches and multi-stakeholder collaboration in bridging the digital divide.

Tone:

The tone was largely optimistic and solution-oriented. Speakers were enthusiastic about the potential of the Internet Backpack and similar technologies to make a positive impact. There was also a collaborative spirit, with participants building on each other’s points and emphasizing the need for partnership. Towards the end, the tone became more inspirational as speakers called for collective action to build “the internet we want” and “the Africa we want.”

Speakers

– Yusuf Abdul-Qadir: Moderator

– Lee McKnight: Co-inventor of the Internet Backpack technology, Professor

– Jane Asantewaa Appiah-Okyere: Researcher on Internet Backpack use in Ghana

– Jimson Olufuye: Chair of the Advisory Council of Africa ICT Alliance, Principal consultant at Contemporary Consulting

– Mary Uduma: Leader in West Africa IGF

– Kwaku Antwi: Regulatory and policy expert based in Ghana

Additional speakers:

– Rob Loud: Representative from Imcon International, the company that manufactures the Internet Backpack

– Jarell James: Works on internet resilience research

– Poncelet O. Ileleji: Member of the Association of Progressive Communications board

Full session report

Internet Connectivity for Rural Africa: Exploring Innovative Solutions

This comprehensive discussion, held as part of the Internet Governance Forum (IGF), focused on the use of innovative technology, specifically the Internet Backpack, to provide internet connectivity to rural and underserved communities in Africa. The panel, comprising experts in internet governance, policy, and technology, explored how this device could support agricultural development, education, and economic growth across the continent, aligning with the UN Sustainable Development Goals and the African Union’s Agenda 2063.

Technology Specifications and Capabilities

Dr. Lee McKnight, co-inventor of the Internet Backpack technology, introduced the device, explaining its ability to provide connectivity through cellular, satellite, and mesh networks, along with sustainable power solutions. Key features include:

– Battery capacity allowing 16 to 20 hours of runtime

– Newer models featuring a double battery design for extended use

– Local cloud capability for data storage and access

– Ability to create a micro cell for cell phone connectivity in areas without cellular signal

– Support for over 35 simultaneous users, surpassing initial expectations

Rob Loud, a representative from Imcon International, the manufacturer of the Internet Backpack, highlighted its versatility, mentioning its use in rescue work during hurricanes and potential applications in monitoring CO2 levels near volcanoes.

Applications and Benefits

The Internet Backpack shows promise in various sectors:

1. Agriculture: Enhancing value chains and market access for farmers

2. Education: Supporting e-learning initiatives

3. Healthcare: Enabling e-health services in remote areas

4. Economic Development: Facilitating participation in the digital economy and e-commerce

5. Disaster Response: Providing connectivity during emergencies

Speakers emphasized that improved internet access could also support the implementation of the African Continental Free Trade Agreement.

Implementation and Sustainability

The discussion highlighted several key points for successful implementation:

– Need for collaborative, multistakeholder approaches

– Importance of community ownership and engagement

– Potential use of universal service funds for deployment

– Partnerships between government, private sector, and communities

– Focus on digital skills development alongside connectivity provision

Regulatory and Policy Considerations

Speakers addressed the need for regulatory reforms to enable and support community networks, though specific details were not elaborated upon. The discussion also touched on the importance of aligning technology deployment with local regulatory frameworks.

Long-term Vision and Challenges

Yusuf Abdul-Qadir articulated a vision for African ownership and development of connectivity technologies, emphasizing:

– Building community development and African-based infrastructural capacity

– Local manufacturing and distribution of technologies

– Addressing e-waste management

– Job creation and economic value generation

Challenges were acknowledged, including the significant digital divide (only 27% of African rural communities currently have internet access) and the need for sustained capacity building.

Conclusion

The panel concluded by calling for continued efforts to bridge the digital divide and leverage internet connectivity for sustainable development across Africa. They emphasized the importance of building local capacity and ensuring that technologies like the Internet Backpack align with community needs and contexts.

The discussion demonstrated a high level of consensus on the importance of internet connectivity for rural development and the potential of technologies like the Internet Backpack to address connectivity challenges in remote areas. This shared vision, rooted in the African concept of Ubuntu (interconnectedness), provides a strong foundation for implementing and scaling such solutions across rural Africa.

For those interested in joining this work, more information can be found at agcip.org or africaglobalcommunityinternetprogram.org. The Africa Open Data Internet Research Foundation was also mentioned as a relevant organization in this field.

The Internet Backpack has been deployed or tested in several countries, including Ghana, Costa Rica, and Liberia, demonstrating its potential for wider application across the African continent and beyond.

Session Transcript

Yusuf Abdul-Qadir: What would you call it? Wisdom, are you ready? No sound. Okay, we’re good. Yeah, we’re on channel three. Yeah, we’re on channel three. Yes. Hard to see the screen behind me. Thank you for being here. This is a really important discussion that we’re going to be having today with a number of really, you know, experts who are going to be sharing a bit about the development and the need for internet connectivity, and the impact that the internet connectivity will have on communities across the continent of Africa and beyond. I want to provide some background, and for those who are not familiar, that we’ve been doing at Syracuse University in collaboration with the Africa Open Data Internet Research Foundation, and a nonprofit that we founded collectively, Internet Program. We’ll see in front here for those who are online, you will see this device in front of us. We’ll provide a little background here, though. The Africa We Want vision seeks to make Africa global, global equal, an integrated economy with accessible digital services for government, businesses, and citizens. It emphasizes e-commerce, e-government, and participation in the fourth industrial revolution, particularly for countries across the continent. However, challenges like limited infrastructure, low internet access, where only 27% of African rural communities are applying, technology gaps remain. Strain from climate change, land degradation, water shortages, Ike is really not working too well. Maybe we’ll try the hand mic. No, I need the mic. This is not working. Is this better? Perfect. Alright. Sorry, folks. We are at the Internet Governance Forum, but for some reason we have not figured out how mic technology works, but I think we’ll start over. I want to first welcome folks to this discussion. We’re going to start over as if nothing happened. We’re in Riyadh, Saudi Arabia, a country and a city that I come to often in a place that I think is opening itself up to the world and I want to first thank the hosts for inviting us here and welcoming us to both this beautiful facility, but also to a country that is doing rapid change and rapidly expanding its access and connections to the world. For those who haven’t been here before, you’re witnessing a fascinating transformation, and I think we’d be remiss if we didn’t mention that and thank the hosts for inviting us and welcoming us to this space. So the conversation we’re going to have here today… is largely about what happens when communities don’t have access to Internet, how those communities are able to then rebound and be able to participate in the global economy, what this means for sustainable development, and in particular with a focus on the agricultural sector. We have in front of us this technology called the Internet Backpack, which we’ve presented here at IGF over the last three years in Addis, in Kyoto, and now here in Riyadh, where we’re able to connect communities, rural communities in particular, which is our area of focus, the 27% of whom don’t have access to Internet, don’t have access to the ability to interact and correspond with people around the world, don’t have access to sharing their crops on the global markets, don’t have access to being able to develop their creativity and share it with the world, don’t have access to health care, and don’t have access to basic educational opportunities. We think that’s a fundamental problem here, and the Backpack is one solution that can help facilitate for solving that problem. It can facilitate for the implementation of the UN Sustainable Development Goals, and it can facilitate the African Union’s own Agenda 2063. We have an amazing group of folks here and online. As you can see, Professor Lee McKnight, Kwaku Antwi, myself, Youssef Abdelkader, who is the moderator, Mama Mary, who is a legend in the Internet governance space across Africa. And we really, really, really, sorry, my brother here from Nigeria, who I’ve gotten to have lunch with this afternoon. And so we’re really here to talk about the ways with which we can try to facilitate for providing connectivity to communities. But I think it’s important for us to ground us in the technology, what its use is, how we’ve developed it over time, and then bring it to the panel to begin to talk about this use case for agriculture. I’m going to open the floor to Dr. Lee McKnight, who is the inventor of this technology, to talk about the Backpack, and then we’ll begin to open it up for the panel discussion. I know some of the panelists have other panels to speak at, so we’ll try to to make sure to get them before they go. And then we’ll open it up to some discussants and for the room to begin the conversation. Without further ado, I’m sure you’re tired of hearing me talk, though you will get used to it for much of the discussion. I’m gonna hand it over to Dr. McKnight. Lee.

Lee McKnight: Thank you, Yusuf, and thank you all for participating and for this opportunity to discuss further our work with the Internet Backpack. And I wanna make clear, I’m not the inventor, I’m a co-inventor. There’s been a team involved over years in refining the technology, developing the technology, and bringing it to the point that it’s at as you’re seeing there in the room. So first, just the quick background. What were the origins? What was the purpose? It was to bring connectivity anywhere, no matter what. Meaning with or without a local power supply, with or without cell infrastructure, we wanted to make it possible to connect. Originally, this was a National Science Foundation supported research effort. Eventually, this combination of technologies, and I’ll get to that in just a second, was urgently requested by the Goma Volcano Observatory in the Democratic Republic of Congo when we were literally had just exhibited it as a proof of concept or prototype. And they were saying they needed it for real. And we’re saying, we’re not ready. And say, no, no, we don’t care. We want it right now. And we brought it out of the lab. That was seven years ago. And over the last seven years, the technology has been refined. And from what I’m aware of, it’s in at least a dozen countries being used for a variety of purposes. Well, first, the background on the technology, it’s not just a connectivity tool. It’s also designed. as a mini microgrid, meaning sustainable power and energy is a key part of the design. Since we can wait for out in rural Africa or if we’re in a disaster zone, we can’t count on access to electricity or a utility grid. So in the pack itself, there’s a foldable solar panel, there’s, depending on the model, one or two batteries, and there’s a number of different devices then that can be with the solar panel, you can recharge everything, maintain connectivity indefinitely without access to a utility grid. Of course, it’s easier to plug it in and recharge everything if power is available. So you can connect via cellular network to about 10 kilometers away from the nearest cell tower. So if you’re even in a location where your phone says there’s no bars and there’s no access, the pack has a more powerful device inside it called a cradle point router, which can pick up a signal and then act like a booster and now create a Wi-Fi hotspot around the pack itself. There’s also a satellite router, satellite internet access, so you can create a shared connectivity. And there’s also a mesh network, radio network device called the Beartooth that can create, even if there’s no access to cellular network, even if you’re not using the satellite, you can essentially make, do a point-to-point connection and enable like a walkie-talkie, people to talk over several kilometers, perhaps again in its circumstances in which there could be emergency circumstances. So the idea is satellite, cellular. Wi-Fi, off-grid mesh networking with energy sustainably. There’s one more aspect to mention here, which is software. And that’s what was patented in 2022 for essentially reducing the amount of bandwidth used. Again, obviously, if we’re out in remote Africa, all the bandwidth is precious, right? It’s a precious resource. So we don’t want to waste it. So we don’t waste it. We use it more efficiently with something we called edgeware from our years of research. And that’s packaged together is what the backpack constitutes. And I should mention one more aspect. What we’re mainly doing is connecting to cloud services from, again, anywhere on the planet, including remote Africa. And the pack itself is then managed from those cloud services. So you don’t need any special skills to operate it. If you can, again, operate a cell phone, you can become an internet backpack operator. That’s another design consideration. We wanted to make this easy and simple to use, portable, fast to set up within minutes. You can get this up and going anywhere across Africa. And now we get to the use cases where really that’s the discussion today here in whether it’s for, I said, like emergency monitoring, CO2 monitoring of the volcanoes or so. Essentially, now if we take those sensors, not for CO2 monitoring, but monitoring crops and fields or internet of things, IOT type resources, that’s where the backpack can act essentially as a connectivity hub to support farmers in Africa and other communities. in providing access and monitoring to agricultural resources over time. Again, I don’t want to take up too much time. I think I’ve sort of laid out what the combination of technologies does and where it fits in as, again, as a connectivity hub for digital transformation of agriculture and trade across Africa, you know, going forward, potentially. It’s not magic. It’s just a backpack. But it has a role to play where connectivity is limited. And that includes many communities, unfortunately, across Africa. Thank you.

Yusuf Abdul-Qadir: Thank you, Dr. McNight. I’m going to jump to our esteemed colleague here to my right, only because I know that you have another panel. Mama Mary, you’re going to be quickly following after, because you’re, as we would say, the young folks would say you’re an OG. You’re an original. You are the matriarch of West Africa IGF. And so I’m going to quickly follow with you after. So just be prepared. But Dr. Olufi, I got you. Don’t worry, brother. I remember my Nigerian brother from Abuja. You know, your work has really been centered around making sure that the private sector has been engaged both in the discussions on governance, but also the responsibilities for facilitating for sustainable development. At our quick lunch break, which, by the way, we didn’t actually meet and know that we were going to be on this panel at lunch. So fate would have it that we were sitting together. At your lunch break, you raised the prospects of the Africa Free Trade Agreement and the way with which connectivity could help to facilitate for this really important endeavor to really facilitate for the development of an interconnected Africa. What role do you see the private sector playing in this and how does internet connectivity fare in this as a kind of facilitator for that trade system that you were talking about at lunch?

Jimson Olufuye: Thank you very much, Professor Yusuf. I want to also thank my friend, the president of the Open Data Foundation. Wisdom, thank you for the invitation. And I also say plus one to your gracious commendation of the government of Saudi Arabia for hosting us. It’s really great. And of course to thank Dr. McKnight for that wonderful presentation. My name again is Jameson Olufoye. I happen to be the chair of the Advisory Council of Africa ICT Alliance. It’s an alliance of more than 40 countries in Africa made up of ICT associations, companies, and individual stakeholders. And we do advocacy, engaging with government to fast-tracking the issue of connectivity, the issue of prosperity, and the achievement of the sustainable development goals. In my private life, I happen to be the principal consultant on to contemporary consulting. So we’re into digitalization. We build data centers and we are in the forefront of cyber security awareness and assessment and research. So this presentation is so germane, so appropriate in that because when I read the concept note, it talks about it relates to the sustainable development goal, removing hunger, enabling prosperity, and ensuring there’s inclusive participation in the global digital economy, and in particular agriculture. So I was really thrilled to be here to participate in the panel. You know, the United Nations Commission for Science and Technology, sorry, United Nations Economic Commission for Africa, UNECA, they commissioned a report That report reflected a connection between internet penetration and GDP per capita. So a 10% increase in internet penetration using devices like these will enable up to 8.2% increase in GDP per capita. So it is substantive that we boost our internet connectivity, it’s very, very important. And right now, even in Nigeria, we just about, we are 75% connected, you know, by the time we did SIM number validation, it reduced to 61%. So we see quite a chunk of our people not yet connected or active. So they are in the rural areas, okay. So we need to come up with innovation on how to reach them. And especially our farmlands, our farmers are in the interior, basically. So this kind of device is one of those game changers, okay, that could help. So from the private sector perspective, based on our advocacy, because we believe there should be increased collaboration, okay, and for government, private sector, civil society, academic and technical community to come together to see how they can fulfill or achieve the goal of 100% connectivity. And we can recall in September, the United Nations signed this pact for the future. And it has global data compact right there. And that is talking about bridging the digital divide, okay. One of the objectives, one objective is bridging the digital divides. Number two is to ensure that everybody, everyone in the world benefits from digital economy, okay. And number three, human rights, four, talking about data governance, and number five, talking about using of AI, okay, for good. So we in the private sector, we bought into this, we are part of it, and we committed to pushing forward tools like this, okay, innovative products like these, maybe through pilot activity, through proof of concept, and through direct intervention, so that more people can be absorbed, more people can participate, especially the farmers. Now we have the African Continental Free Trade Zone, which is a powerful agreement by government. It is a beautiful one, African government have done very well, really. But we now need to ensure our people benefit from this agreement, and these are part of the framework or tools that could make that possible. And we need to do a lot of capacitation, capacity development need to be pushed forward, and this is part of capacity development anyway, which is commendable, and I believe the recording can be sent around to many other stakeholders so that they can get to be capacitated about what is available. And so, apart from the capacity development, there has to be regulatory framework in terms of policies to say, okay, let us all agree that we need to push things like this, and connect with the underserved areas. And the number of groups, in that regard, not only private sector, other groups need to be brought in. We must not neglect any group at all. In fact, we have this principle called the Net Mundial Principle, which was agreed to in April this year, and it talks about meaningful participation. It also talks about meaningful connectivity. It also talks about getting everybody to be involved. Okay, so we believe in. The private sector, we believe so much in that. And that is why I’m here today. That’s why we believe that we must continue to engage. We engage in Nigeria, we engage across Africa, and we will continue to engage worldwide. I will continue to engage because it’s something good. Once our people are connected, it will also benefit the private sector, seriously. Because our interest is that there is more markets, more demand for our product. Because once people are connected, there will be demand for our services, for our apps, and what have you. So I hope I’ll be able to talk about those, or answer your question.

Yusuf Abdul-Qadir: You did it actually quite comprehensively, as I expected you would. And so I appreciate you both laying out the way with which collaborative ecosystems help to facilitate for really important partnerships that will drive the growth and development. And you really beautifully connected to the way with which the free trade zone is really helping to kind of be a next stage that in the role that the internet will have to be an accelerant of that. You know, we have continued to talk about internet governance and centered the backpack as a part of that conversation. And in West Africa IGF, there’s always been this culture of recognizing that no one’s left behind, especially rural communities. I think as Dr. McKnight said earlier, we’ve initially begun as like an emergency response process, this backpack that has evolved to other use cases, including education. And in the educational use cases that we’ve seen in Ghana and Costa Rica, in Liberia and elsewhere, we’ve seen a significant increase for our research in the involvement of girls and boys in schools. I mean, in Ghana, we’ve been able to identify that for a period of time, girls weren’t participating in school. And once we brought the backpack through the really important research of a, I guess, Dr. Jane, who is no longer a doctor. a student, but now Dr. Jane, who is a part of the African Community Internet Program, we’ve been able to see that bringing the backpack to her village in Ghana, her rural village in Ghana, was first able to provide the ability for teachers to learn how to use the Internet, how to access the Internet, then teach their students how to do research on the Internet. But the interesting effect was the increased involvement of boys and girls. So given the large work that you’ve had in building the future generation of Internet users across Africa, but particularly in West Africa, and giving the roles that you’ve played in helping to make sure that no one’s left behind, why is this so important at this moment and what’s the effect that this will have on rural communities beyond just education and emergency response and inclusive of agriculture?

Jane Asantewaa Appiah-Okyere: Okay, thank you very much for giving me the floor, and thank you Wisdom for inviting me. And I want to lend my voice to thanking the government of Saudi Arabia for such a great outing for us, and we can’t say thank you enough for them. For me, this is a passion, a passion in the sense that my advocacy is for digital justice for my community, whether it’s in Zambia, I mean in Gambia, or in Burkina Faso, or in Nigeria. We want the communities to be connected, and I think there’s a solution here that I’m looking at. The fact that I may not need electricity to get my people connected is a plus, and something we should embrace with all our hearts and all our minds. I know that the community needs, or each of the communities, they need e-learning. You know what happened during the COVID and some were cut off from education because they didn’t have connectivity. So with this, I think we would be able to surmount that challenge we had during the COVID. We also look at the e-health would also be part of the services or the connectivity that will help the community. And now we have food crisis in West Africa. Since after the COVID, we have been having food crisis and the terrorism in West Africa has not helped. So we believe that with this, with agriculture, using this device would also help us in agriculture, distributing of seedlings to our rural communities who are the mainstay of the agri sector of our economy. So I know that it will help us develop policies. When we develop agri policies, we’ll be able to disseminate into the interior so that they can also benefit from what the government is doing. It is also going to help in the e-government. E-government, we want to know what our government is saying. If government is saying, is rolling out new policies, our people in the rural area will be able to benefit and know about it and key into it and then be part of the process. Dr. Jameson said something about the digital economy. Digital economy would also include the agri-economy. And when our agri-economy is digitalized, we’ll find out that we’ll be able to also contribute to the digital economy. GDP of the country. So for me, and I belong to a group in Nigeria, the advisory group for community networks. And this is a big solution for us. Log and play, drop anywhere, any community. The first time I mentioned this to my group, some of them were willing to, oh, let’s snap it up and get, how much is it going to cost? We want to drop it in our community. We want our community to be connected. We want our people to know, to benefit from what internet provides. So for us, it’s something that we are very passionate about. And we think that it can bring quick solution for community networks. And so working with other stakeholders that are community network stakeholders, for instance, the ISOC of this world, the government, whether a regulator or policymaker, that is interested in community networks. Yes, in 20, I think it was 2019 or, Nigeria, you could think that Nigeria is covered because we have operators that will give us connectivity. But it was spotted that there are so many communities in Nigeria where you don’t have the dial tone. So you don’t have the dial tone, and you don’t have connectivity. You don’t have power. So many of them, there were over 300 communities yet to be covered. So this could be a game changer to drop this in those communities. And they will receive their own digital justice by getting connected to have their e-learning, to have their e-health, to have their e-agriculture, to have the e-government, and to have e-regulation. that there are regulations coming out, then they will have meaningful connectivity, just as he mentioned. It could also bring meaningful connectivity to our communities. So with this, would be able to leverage on, you know, sustainability of the community networks. You know, sometimes you need to build the, what’s it called? The mesh or the technology around connecting. But this one is that you is on your backpack and you get to know where in the desert, in the Niger Delta, where there are, there are, what’s it called? Where you have the water log areas that you don’t have land to even connect. There are some of our communities, you have to, this is water and water, you just put bench or bridge to walk into your house. You have to build your house to a point, then you, I mean, raise it up for you to be able to put a shelter over yourself. So it is doable, is something that will bring connectivity to such areas. And very difficult areas to access, where the commercial operators will not reach, this will reach, would look at strategies to make it happen. It’s either PPP with the business people with the afficted, okay, because they are the public sector, I mean, private sector people, alliance with our community themselves. Because if you bring this to a community and show them, just like we have testified of Ghana, you will see the enthusiasm and the excitement that will come, oh, we can access the internet with this. And there’s hunger for data. rural area as well, so they could also connect and access the internet, do even research and do education. One of the women that is in the advisory committee said that she refused to go to the university. She’s now a doctor, she did everything online. So if somebody in my village or in my community could get internet access, could do his education or higher education and it’s cheaper for instance. And so those are things that we know that this revolutionary equipment can bring to our community. So we are passionate about it and we are interested in making sure that our community gets connected. Thank you.

Yusuf Abdul-Qadir: That was beautiful, Dr. Jimson.

Jimson Olufuye: Yes, so maybe after this quick statement I’ll have to join another session. What I want to, I’ve mentioned it briefly before, what I want to conclude with is that it’s very important we get the community to own this. It’s going to go into underserved areas. We consult with, we need to engage them thoroughly in terms of sustainability of the product and safety and security. And then not only that, in terms of regulation, we need to be consulted. Because at the local level, when it comes to multi-stakeholder engagement, it’s bottom-up. So this will help consultation at the local level. So I believe wisdom is good at that, bottom-up engagement process. So this will be good. The community needs to be so… So that even IGF, they’ll be watching IGF in the future through this. So we need to demonstrate that use cases that, look, this is what we’re talking about. Those in the remote, far-flung areas, they are connected because of this. So see that feedback at the next IGF in Norway. Thank you very much.

Yusuf Abdul-Qadir: Thank you. Thank you. Two comments I want to make, and then we’ll go to Dr. Bignay for a question, because I think Mama Mary and Dr. Jimson, and thank you for being here. We appreciate you. We’ll keep in touch for sure. Thank you. Mama Mary, really, it’s as if you were, like, in my brain. I don’t know if there’s a connectivity happening there. There’s a different kind of connectivity. But you really did speak to a number of things that we are actively having conversations about in ACIP. And so what we’ve observed in our research, where we deployed the backpack through grant funding in Costa Rica, and a rural community in the rainforest, when the community, when the funding dried, and, you know, we, because we were able to do the research, excuse me. I’ll get to that in a second. When the funding dried, and, again, we were able to deploy the backpack, but, you know, the backpack will use either, through a high-powered antenna, will connect to a cellular tower, or it connect to satellite, and, obviously, satellite data is more expensive. The community then pooled their resources, which was quite beautiful. Talking to the ownership, the community said, hey, like, we want to make sure our kids and our communities are still able to access the Internet. And because the backpack, when it connects to cellular, is able to use SIM cards, you can imagine now that maybe one person can’t afford a SIM data plan, but when it’s shared across 100 families, that becomes affordable and sustainable, and we didn’t expect that communities would then pool those resources together. We didn’t expect that the community would then pool those resources together. Communities will both find value in this and then say, hey, let’s collectively work to ensure that we can continue to provide this to our communities. And that’s the kind of interesting things that we’ve begun to observe. On the regulatory framework side, a part of the way with which we’ve had discussions about funding these sorts of experiences beyond grants are important and aid funding is necessary but it’s not sustainable necessarily, and they’re helpful to begin to go beyond proof of concept but to begin to kind of build the frameworks to establish this as a proper intermediary. But as we talked about, governments have laws that develop regulations of which typically internet service providers have to provide a percentage of the funds that they would be able to enable communities who are not commercially viable to provide access to. Those are typically called universal access funds or universal service funds. Many countries have universal service and universal access funds and it’s not economically feasible often to be able to still develop infrastructure. When a government has to choose between do I provide water sanitation or do I provide internet access, it’s not a very difficult choice to make. And so what we found is an untapped resource are universal access funds. Kenya, for instance, has $100 million in universal access funds. It is not feasible for Kenya to build fiber optic cables and internet infrastructure because that will be tens, in fact, hundreds of millions of dollars but it may be feasible to leverage this technology. Dr. McKnight, can you speak to that a little bit if you’re still here about the ways with which and then incorporate the cost of the backpack for a question here in the auditorium about the way with which we’ve begun to advocate for the utilization of universal service and universal access funds. What are those and why are they necessary and important?

Lee McKnight: Thank you very much. And again, thank you for including me in this important conversation. So first, in general, universal service programs have been around, you know, going for quite some time in North America, in Western Europe, for building out rural connectivity many, many decades ago. Now, if we take those same kinds of programs, as you mentioned, Yusuf, in African nations, where they exist, the challenges for the per capita income in a rural area, where the government would have, say, Kenya, let’s focus on Kenya for a moment, traditional telecommunications infrastructure, we’re not talking hundreds of millions, we’re talking billions of dollars for bringing full cell tower fiber optics into all parts of rural Africa. So it’s just not practical or feasible. So those funds sit there, essentially, not fully utilized, because the government has to look at it honestly, and say, okay, even if we put this cell tower out here, it’s going to sort of rot and fall apart, because the local income levels are not going to be able to sustain the maintenance and operation of this infrastructure over time. So instead, now, let’s look at the backpack. In the case where we are using cellular networks, essentially, we’re sharing the same amount of bandwidth as one cell phone, now across a community. So we’re subdividing, like you do the arithmetic, now it’s 35 people or 100 people, it’s a community that’s now sharing one cellular network. And for connectivity, just over time, with this clever bandwidth management capabilities and I’ll give a shout out to Rob Loud and Tim Kelly of Imcon International here who are listening in on the Zoom, the firm that makes the pack and that filed the patent on it, on that capability. So now we do the math. Whatever it costs for your data plan, now you divide it by 30 and that’s the cost for a community. The design parameter we were thinking of originally was like 25 people, but in the case of Ghana that we talked about, we had up to 35 people simultaneously using one, what’s essentially like a one cell line to maintain decent internet connectivity. So the economics work out and we’re still at an early stages. I know the firm, you could bring the cost of the pack down, but the first level, the real issue is the cost of the data over time and to maintain connectivity as you gave that example, Yusuf, of the community in Costa Rica that said, hey, we’re not going to wait for government programs to come along and help figure out how to help us. We’re going to help ourselves. So there’s some aspect of that where it could be self-help of communities. It could be a mixture where there is some business of some scale in the area and they’re saying we’ll pitch in some amount because this is going to benefit us. And now we get to the universal service programs themselves, I think, and community networks that Mama Mary spoke of. That’s really the connection there. If we now have government programs that support and permit community networks, this is essentially a special case of a kind of starter community network that maybe perhaps could build out more additional infrastructure over time beyond the backpack itself. So regulatory reform to enable or permit community networks is a key step. Now, going back to the economics itself, there’s different models of the PAC. The latest model comes with something called the Starlink Mini, which is more expensive than a cell line, but it’s much cheaper satellite data than the current model, the model that you have there in the room, which does work everywhere. But if you have to use a satellite, I think it’s prohibitively expensive without some government support in the most remote communities. The PAC itself, again, I’m not the professor, I’m not the sales guy, I’m not trying to sell anybody anything other than I agree, it’s important for connecting these communities. But the PACs can cost, right now, one model called the Light Internet Backpack is $12,000 US, the other model is $20,000, which is the fully equipped one with warranty and a year data plan, and so on. This is not affordable for one rural community on its own currently, but I should also mention there’s lease financing, so you wouldn’t have to come up with that money upfront. Essentially, once this got to some scale, banks, finance companies would support and essentially provide the PAC, and then they would have to pay over a three-year time period. Let’s say that in the case of the $12,000, you have to have 4,000, then 4,000, and then $4,000 over three years and amortize it over that period. Or in the case of the fully equipped model, that would be $6,600 roughly per year for three years to pay for the cost of the infrastructure. without having to pay for that all up front. So that’s the kind of business model where it’s for, what’s interesting to me was how quickly a lease finance company said, yeah, we could send, you can send this pack anywhere in Africa and we’ll provide the financings for it. So you don’t have to come up with that money up front. I say, really, you would do that? And I think Rob on the Zoom could say, yeah, they really would, which is pretty amazing to me. So one aspect to then mention here is, okay, what about if something breaks? And again, we’re being small and portable and going anywhere. The whole thing is kind of amazingly robust and reliable from our experience over the last six years. But still, if something does go wrong, we’ve had this actually in the Ghana case that the cradle point router I mentioned went bad. Okay, well, guess what? The cradle point took it back, shipped in another one to that remote community in Ghana. They say, all right, great. Now, final aspect to mention here is sort of lifecycle and e-waste, which we’ve had from discussions in other IGFs with parliamentarians. Our rural Africa doesn’t wanna be left with all the junk. So that’s something to be thinking about is the ultimate recycling and as equipment wears out. And that’s kind of the beyond the scope of what I’m ready to say much more about other than to say we recognize that and that can be accounted for as well. Hope that helps and was what you were looking for Yusuf.

Yusuf Abdul-Qadir: Yeah, that was very helpful. We’ll take two more of the folks here so that we can begin to open up the conversation to the folks on the floor. Kwaku Antwi, who is a brilliant regulatory policy expert based in Ghana, a true Pan-Africanist in his heart and in his practice and in his ethos is here with us as well as a major leader here in the Africa. Really, we’ve gone from Africa community out of the global community internet program. I would be remiss if we didn’t mention one of the developers of the software, a gentleman, Elil Matai, who is himself from the Democratic Republic of Congo. And then the development that expanded from DRC to Haiti, again, in a Pan-African spirit that continues. to be developed. You know, as a Pan-African, I think it would be remiss again if I didn’t mention that we are constantly looking at the ways with which we can build interconnectedness across our communities, both in the diaspora and on the continent. And I’m personally a part of this work because of that kind of long-term goal. But Kwaku, I wanted to pull you in here, given your both regulatory and policy expertise. Excuse me, given your regulatory and policy expertise, I wanted to bring you in here to talk a little bit about the rural communities in particular, the agricultural kind of aspects here that I think Dr. Mignogna alluded to earlier on, that would help deal with the IOT aspects, that would help facilitate for the ability for sensors and the regulatory regimes that are necessary to bring that online. Kwaku, would you please share with us a little bit of your perspective here? And you’ve got to unmute yourself. If you’re all back there, I can unmute Kwaku onto you.

Kwaku Antwi: All right. Thanks. Thanks. Yusuf, can you hear me? Yes, you’re coming in. Yes. All right. Thank you. Thank you so much. And thank you so much for hosting me, and I’m glad to be part of the session. So yes, I’m building up on the point in terms of agriculture and the rural IOT and other allied services. From the previous discussions, we’ve seen that the connectivity is key, but another aspect which is key that we’ve seen so far, and this is across Africa, is in terms of being able to utilize the technology for monitoring and in terms of the produce itself, the agriculture produce itself in terms of climate, in terms of temperatures, and in terms of agriculture planting information. But most importantly, after having produced the agriculture produce. The value chain marketing, as well as the agricultural business value chain is one thing which is quite important that the internet connectivity is spawning. So quite recently, most across Africa, we are seeing that not just the traditional agricultural produce itself is being promoted, but also the value chain in terms of business. And I know that most African communities where now the social media is prevalent, we see a lot more people marketing their produce online through social media applications, as well as through intermediate businesses and also personal to personal. But why is this so important? We see that the underlying technologies of internet connectivity of the backpack, which is climate compliant, as well as community integrative, helps the communities themselves to operate, as well as the farmers themselves to be able to be in touch with what happens most of the times with what we call agricultural extension services. And this helps both the government in terms of the regulatory aspect in being able to help the farmers to monitor through agricultural extension services, when to apply fertilizers, when time is up for veterinary services, and to be able to have a kind of a cyclical application of communication updates. And this is also helps another aspect, which is so important in the whole ecosystem in terms of data. And not just for the data farmers themselves are able to keep, but they’re also able to get feedback. and even help research and develop the community. So I think one other aspect that we have always championed is the skills aspect. Farmers are skilled in being able to plant, being able to reap, being able to harvest, but this skills aspect also comes with the technology equipping them with digital skills. And these digital skills, an aspect of the internet backpack is being able to have a local cloud, which now becomes a kind of a repository of information where not just the farmers are able to access, but also the communities and themselves, other people to be able to learn. So I think this important aspect in connecting from the planting, from the marketing, from the value chain, not just for individual African countries, but also across the continent. Today, I’ll give an example where in Ghana, or in the West African space, one of our major consumption produces tomatoes. And we market a lot, and especially in Ghana, we get most of our tomatoes from neighboring Burkina Faso. And whenever there are shortages or the discussions and even marketing aspect, there are people who will tell you, you can get it from so-and-so market or you can get it from so-and-so market. So, I mean, in helping the African free trade area that we’ve agreed to in Africa, these are the aspects which broadens the discussions based on the technologies and also boosting from the agriculture to the final consumer, how we’ll be able to help and also grow the policy space in the regulatory aspect. Thank you, Yusef.

Yusuf Abdul-Qadir: And I just want to drop the mic in the co-op. That was just brilliant in the way that she connected issues together and was able to facilitate for both understanding the legal image. Uh, the regulatory frameworks will help to these mics, uh, not just to be able to, uh, you know, share their crop or kind of, uh, commercialize their product to trade on global markets, but also to help in data collection and kind of making sure that, uh, we’re better able to produce more efficiently, more effectively in a way that’s climate resilient. I want to bring in our colleague Ponslet here online. Um, who’s from Jocolo labs. Um, and to kind of share here and, and, and add to the discussion and then we’ll open it up to the floor for folks to kind of engage in the conversation. We, we really are, um, uh, a firm believer. And in fact, our white paper is called digitalizing the grassroots where we’re trying to build from the ground up, uh, and not from the top bottom. And so that a part of that is in this community here. So we want to make sure that we are collaborating and having open dialogue with not just the experts and the folks here, but your expertise and, and making sure we facilitate it for that discussion. Uh, so Ponslet, please join us, um, and, and add to your intervention for this discussion.

AUDIENCE: Yeah, thank you. Thank you very much. Sure. Um, one thing I will say, um, um, good morning. Good afternoon folks is, um, I’m part of a network organization called the association of progressive communication, APC.org of which I sit currently on the board, and one of the things we promote is community networks, I believe in increasing access, um, what African farmers need is access, you know, to be able to explore the various value chain of agriculture that exists. Um, we are, we are still very low on broadband. But one way to go about it is creation of community networks, and we have not really embraced it well. We have some parts of the continent doing it. We have some other parts of the continent not doing it very properly. And I believe that working with our telcos, we have to operate within the spirit of the multistakeholderism of internet governments, whereby all partners have to be on board, whether they are regulatory, whether they are academia, and say, OK, we are coming together to see how, through our various universal access policies, whether they exist or they don’t exist, or through engagement with telcos and broadband providers, we are able to set up community networks where our rural-based farmers can be able to benefit from these community networks and empower them with digital skills. Because they need these digital skills to be able to use technology to sell their products. It’s not about just processing stuff. We have also seen the impact artificial intelligence is playing in mitigating climatic problems. And that is something farmers can benefit if they have the required digital skills. So I’m a big advocate of community networks, in line with what the Association of Progressive Communications have been trying to do with local networks, and I will encourage us in our own various communities to try to work within a multistakeholder framework to set up community networks in rural areas. It’s easy saying that, OK, there’s no broadband here, there’s no broadband there. Over 60% of the continent is still not on broadband. But what are we doing about it? And it has to be a collective approach. It has to be bringing the community together. Joko means togetherness, so we always believe in this philosophy, you know, and getting the required digital literacy skills to be given to people. I’ll give you an example to end with what we did in a rural community in Gambia, in the North Bank region. We worked with horticultural women, horticultural farmers to get them online. People were saying they were not educated, but all of them had smartphones. So we invited these women with their daughters. We did the training over the weekend. We invited these women with their daughters to attend the training and then their daughters were helping support them in using social media and using online platforms. And with no time, daughters and mothers working together, all these women were able to start selling their vegetables online, you know. So we have to know that all girls are going to school. All these girls have mothers. Most of them work in the rural areas. Thank you very much.

Yusuf Abdul-Qadir: No, thank you. Again, why we feel so passionate about the role that internet connectivity can have in achieving the UN’s Sustainable Development Goals and achieving African Union’s future that we want, Agenda 2063, for making sure that there’s gender parity and gender equity, for making sure that there’s economic mobility. It’s not a magic bullet. It’s not a magic wand. It’s not going to solve all the problems and all the crises, but as an accelerant, as a facilitator, it helps to drive in that direction. I promised that I would open the floor. We have about half an hour for discussion. I really, really want us to have a discussion. I could continue. I’m a professor, so I could probably talk all day long. I’d be happy to do that, but I really think it’s valuable and most valuable for us to engage in discussion amongst and with each other. There’s a mic here, so please introduce yourself, give a little background on who you are, and we’ll get into discussion.

Jarell James: Hey. My name is Jarell James. I work on internet resilience research myself. I already know of your guys’ work because I already know you guys. Nice to see you again, Dr. Lee, Dr. McKnight, and Yousef. I really enjoyed this discussion. I had not gotten to see your hardware. You know, last year you guys spoke about the hardware, but I didn’t get to see it. And it seems like a lot of development has been done on it. I actually have a couple questions for Mr. McKnight and then one for both of you that kind of ties together. What is the watt-hour capacity of the battery that powers it? And so, to say, does it go live for 40 hours? And what’s the draw from the antennas and the connectivity hardware itself? Is it 12 volts type stuff? And then from your sides of the perspective here, is there much to be done when it comes to teaching these communities themselves how to build battery-based connectivity hardware? I mean, I myself work on battery-based connectivity hardware, and this has been kind of a question in my mind is, how can we not have to always, like, ship the…

Yusuf Abdul-Qadir: Before you jump in, I want to just answer that question quickly, and Mama Mary, I’ll let you jump in, and then we’ll get to Lee. Our vision is not for us to continue to… I love Rob, and I love the folks at Incon, and I’m happy that they’re doing great humanitarian work. Really, oftentimes, no profit value for them. Like, this is a technology of the very many technologies that they offer as a company. But really, we do want to have a community development, African-based infrastructural capacity building institute to be able to build these technologies, to distribute them across the continent, to deal with the e-waste and recycling of that, to develop the capacity, the jobs, the economic value. viability. The vision is to go in that direction. We’re not there yet today, but that is the long-term, the middle to long-term vision of what we’re trying to endeavor and to achieve. I’m not a fan, and I don’t think any of us are a fan of like, you know, the way with which the current global development aid system operates, and we’re not trying to be a facilitator for that kind of neocolonialist approach. I’m just going to call a spade a spade. I’m not going to jump around it. I’m just going to be straight up with you. I can’t do anything but talk in particular to talk, because we’re already rooted in understanding pan-Africanism. We’ll run 16 to 20 hours on a battery load.

Rob Loud: Dr. McKnight didn’t hit on the battery that was one of the things we really planned on was that it does have AC and DC power, both input and output, but depending where you’re at, what type of situation you are, you can not only power our devices, you could power other devices with it. For instance, we were in a country in Liberia several years ago doing some things, and the building that was hosting us ran out of power, so the PAC was able to run our equipment, their monitors and computers, as well as fans for the room. There was thought put into this to really make it a useful solution, not just to power one device. Like I said, I would give it all day, and Dr. Lee also mentioned that some of the newer PACs have a double battery design, which was done on purpose as opposed to giving one battery a larger charge. This way, one battery can be in use and the secondary battery can always be charged, so that way you could have a continuous 24-hour setup if necessary.

Yusuf Abdul-Qadir: Thank you very much for up there. Kwaku, we want to make sure to bring you in here to talk about, well, I mean, you can talk about anything you want to talk about at this stage, but please jump in, Kwaku.

Kwaku Antwi: Thanks. Thanks, Yusuf. So, yes, I think the ultimate aim that we have all talked about is, one, being able to deploy the device to the community, because the benefits are enormous. Two, the skills development, which my brother constantly talked about, that we see that is evident, that the skills development and even the cascading effect of the skills development is something that is spurred by having the technology itself. And the issue about local context, which is key in Africa, is something that we’ve seen the effects happening. Over the last few years that we’ve seen the improvement in internet connectivity or the access to internet connectivity, and also the dual effect of having the applications and also the devices, we’ve seen a lot more of African content being uploaded, a lot more African content being used across platforms because of the power of internet connectivity. And not just for entertainment purposes, but it is cascading across the various industries from the start of the COVID pandemic, and this has gone in ways which I think, if we continue, will go. I’ll give you a good example with the Ghanaian one. So, the Ghanaian one initially started from a library when Dr. James’ research started, and we’re talking about use cases and also the cascading effect. What happened was that at a point in time, it was being, for safety purposes, the device was being taken to a police station. The police at the station also now were able to even operate it or connect. Then there was a nursing training institution, which was across from where the police station was. And they also were talking about internet connectivity, and they were also able to connect to the device. What am I talking about? Having the internet connected, we’re talking about it all day long, but at the end of the day, we want to be able to see that the device and the connection is there. The content which we talk about can be both on the local cloud, as we were talking about, because the internet backpack has that capacity to have a local cloud, which you don’t necessarily need to connect to the internet, but just connect to a local cloud to be able to use it. And people will now create content to be able to be accessed. The use cases are so enormous that, I mean, at the end of the day, our people will be able to benefit. And then that multi-stakeholder collaboration and all of us who are in the communities who are looking at the grounds up or the grassroots building up will be able to improve the internet connectivity

Yusuf Abdul-Qadir: and help us. So I think what we are also aiming at totally is that it’s more of a collaboration where we are seeing that we’re complementing what is already existing. We are having the various alliances formed, but most importantly, connection to the internet is what is going to drive what we are looking at. And we pray for all the support that we need so that we can be able to move this forward where the Africa We Want 2063 agenda is being able to be achieved. And the commerce or the economic aspect of the African free continental travel area. we’re able to improve that so that true internet connectivity, trade, content, improvement of our communities will be achieved.

Kwaku Antwi: Thank you. You’re safe.

Yusuf Abdul-Qadir: Thank you. Any other questions from folks here in the auditorium? Please.

AUDIENCE: My name is Keeks, also working with Jarrell on the research firm. More of a technical question, but I was curious the number of unique cell phones that can be either connecting or passing traffic between each of the packs, and then the gross, I guess, traffic. Like if you hit that max capacity, let’s say it’s 50 or 100 cell phones, like how fast is the traffic per cell phone? Like is it five megabits per second that each phone can access? Is there anything that can be, I guess, built or handled on the software side to introduce like throttling? Basically, just like understanding a little bit without going too much on the technical details around how many cell phones can be connected, how fast of a connection each cell phone would experience. Thank you.

Yusuf Abdul-Qadir: Let me add to that question, if Lee and Rob can talk a bit about what it was designed to do and what we’re seeing happening in the field. Because what it is designed to do, we’ve actually surpassed that substantially in the field in Ghana, and the research that we’ve done there shows that. Lee, if you want to take that and Rob as well.

Lee McKnight: Sure. Thanks. Thanks for the question. The design parameter was we were looking to have 25 simultaneous users and up to 250 devices. As Yusuf just mentioned, in the field in Ghana, we have records. from Dr. Jane’s doctoral thesis work of hitting over 35 simultaneous users connecting with again adequate bandwidth. What’s the definition of adequate? I can’t say, but 35 teachers we can say we’re able to connect, we’re able to do their work and learn together about digital skills in Ghana. Now the throttling you mentioned is kind of you know we would might call it bandwidth management but that’s what I’m talking about for the patent is this what was awarded in 2022 is this way to sort of actively manage the bandwidth so more simultaneous users. So again up to 250 devices up to 25 people was what we set out for and what we’ve experienced and documented is over 35 and Rob do you have something to add but about the question?

Rob Loud: Here the current modeling with and we’ll go with if we have a larger amount of bandwidth versus a small satellite it can handle up to about a hundred and twenty connections simultaneously. Now when you have one ingress point it it can go a lot of different ways. What I try to remind people when they’re using the internet or so on and I’m sure all of you are aware of this you know just because I’m looking at my device and reading a website doesn’t mean I’m actively accessing at that second and so what our near part of patent is called a narrow bandwidth utilization and what we do is allow the traffic to reprioritize so if we want to make sure emails are the most prominent or a specific website that has that ability and whoever is managing the pack has the ability to set for all those users say you want the video feeds to only be a at a 720 or maybe less, as opposed to a high definition feed. So it can allow that. It can take audio and change it from 128 bit down to 64, 32. And that happens up in the cloud on our side. But it allows the individual user or the pack manager to decide those things to better utilize it. Now, to answer the question what we’ve seen, it all depends on the amount of bandwidth available. So if you’re on the satellite, for instance, you’re not going to get the same throughput that you would with an active cellular connection or a star link. But with the star links and the 5G networks, we’ve seen anywhere total bandwidth running up over 200 megabits a second. Per user, they were still pulling 10 and 12. We’ve had multiple concurrent video sessions going on at once. So it’s kind of a large question to answer, because it depends on the location of what the ingress point of the internet is. And so that’s what we try to do, is make sure that that user experience is actually the best that it can be, given what access may be available at that particular moment. That answers it nicely.

Yusuf Abdul-Qadir: And thank you for that, Rob and Lee. And please let me, based on Dr. Jane’s gradual thesis, she talked about surpassing the number of devices. What was that number or that threshold that she surpassed, Lee?

Lee McKnight: The number of users was the one that I know she surpassed, which was we designed it to meet 25. That was the original parameter. Obviously, Rob’s with star link, that’s gone further. But she had up to 35 simultaneous users connected. And again, as Rob noted, if we were talking about internet of things, like if the agricultural use case when we’re talking about sensor networks. we we expect it to be able to connect up to 250 devices simultaneously um for which is we haven’t actually hit that number in the field for real i think right rob uh just yet um but but if in principle if you’re talking about a low bandwidth sensor network connectivity for agricultural use it could be hundreds of farmers many hundreds of farmers could be connected simultaneously within a community so we haven’t hit that parameter yet because we haven’t had that particular use case come into k into it but but we can do it

Rob Loud: robin just to add on to that lee um for the use of what we’re talking about in this discussion for agriculture there we do have models of the backpack that are iot specific so it actually enables the traffic for iot sensors to work much better and as lee has pointed out we built the backpack with an initial idea of what it was for and i’ll kind of compare it to the ipad we never knew what people would end up using it for we’ve seen it used for telemedicine we’ve seen it used for agriculture lots of education we’ve seen it for rescue work a great example was a couple of months ago when we had a hurricane hit here in the u.s up in the mountains there was no communications roads were washed out so nobody could get there the cellular mobile companies couldn’t get there so rescue workers that were on mules took our pack up into the mountains and allowed people not only to have internet access we were able with a partnership with one of the carriers here to create a micro cell so even though there was no power no cellular signal in this area because we were connecting via starlink at that moment we were able to provide internet but also everybody within that area where we were was able to connect their cell phones as well as make text messages so that’s one of the things that i really love about this pack is it’s not just a static idea or a static product. It’s an ever-evolving and based upon use cases, we always try our best to modify it or build a pack that’s specific to what somebody needs. Because what Lee may need or Kwaku may be completely different. And I won’t take up much more, but the other thing that I would like to mention here is we do have a variation that has a small, I’ll call it a computer server in it. And this kind of goes back to the bandwidth question. So in educational situations, we have partners that have taken an entire school’s worth of data and built a web interface. And they run it locally off of this little server in the backpack that then feeds all of the children for their education, all the different grades, collects the information, then once a day can upload it to the ministry of education. So they can run it through their systems for grading this, that, and the other. So there’s lots of ways to make the bandwidth work really well. It’s not just managing it. It’s, can we do things at the local level to create as much content there based upon the use case that makes the use of that bandwidth even more affordable in the long run.

Yusuf Abdul-Qadir: Thank you very much. I’m gonna give Mama Mary the last word. You’ve got about one minute and then we’ll close.

Mary Uduma: The great thing here is that it’s for everybody. Specify what you want to use it for and there it will be for you. So we are hoping that the developers would also put into context why scaling up our peculiar needs in our environment. If it’s emergency, let it be emergency. If it’s education, let it serve us in the education system. If it is e-health. or telehealth, let it serve the telehealth. And if it’s agriculture, let the agricultural sector benefit from what this device can provide. Thank you.

Yusuf Abdul-Qadir: Thank you, Mama Mary. I wanna give everyone here a round of applause. Thank you for listening to us in this discussion. Thank you for being here. Please clap for yourselves. Excuse me, I’d like to thank our speakers and the folks who engage in interventions, too many to name. And so you see them in the program and I wanna thank all of them for doing so. I wanna thank Wisdom for being visionary and bringing this panel together. I wanna thank the IGF Secretariat for allowing us to have this conversation and welcoming us back to IGF for a third year in a row. I wanna thank the Kingdom of Saudi Arabia and the Saudi people for welcoming us to their beautiful country and this amazing moment that they’re in this beautiful transformation. And it’s something to behold and to witness and it’s fascinating to watch it live. And last but not least, I would be remiss if I didn’t invite you to join us. We cannot continue to do this work if we don’t collectively build the grassroots. If you go to agcip.org, africaglobalcommunityinternetprogram.org, you’ll be able to see our work, contribute to our work. We need to build this together. In the African tradition, Ubuntu is central to who we are and very much deeply embedded in the philosophy of the work that we do. And so in the spirit of Ubuntu, please join us. It is essential that we all work together to make this happen. The internet that we want, the future that we want, the Africa that we want will not happen if we don’t work together to build it. Without further ado, thank you very much and we’ll close the session. Thank you.

L

Lee McKnight

Speech speed

138 words per minute

Speech length

2274 words

Speech time

986 seconds

Backpack technology provides internet access in remote areas

Explanation

The Internet Backpack is designed to bring connectivity anywhere, even without local power or cell infrastructure. It combines multiple technologies to create a portable, easy-to-use solution for remote internet access.

Evidence

The backpack includes solar panels, batteries, cellular and satellite connectivity, and mesh networking capabilities.

Major Discussion Point

Internet Connectivity for Rural Communities

Agreed with

Jimson Olufuye

Jane Asantewaa Appiah-Okyere

Kwaku Antwi

Mary Uduma

Yusuf Abdul-Qadir

Agreed on

Internet connectivity is crucial for rural development

Backpack includes sustainable power and energy capabilities

Explanation

The Internet Backpack is designed as a mini microgrid with sustainable power and energy. It includes foldable solar panels and batteries to maintain connectivity indefinitely without access to a utility grid.

Evidence

The backpack can operate for 16-20 hours on a battery load and newer models have a double battery design for continuous 24-hour operation.

Major Discussion Point

Technical Aspects of the Internet Backpack

J

Jimson Olufuye

Speech speed

133 words per minute

Speech length

1063 words

Speech time

478 seconds

Connectivity enables agricultural development and market access

Explanation

Internet connectivity facilitates agricultural development by enabling farmers to access information, monitor crops, and participate in e-commerce. It allows rural communities to benefit from the African Continental Free Trade Zone agreement.

Evidence

A 10% increase in internet penetration can lead to an 8.2% increase in GDP per capita, according to a UNECA report.

Major Discussion Point

Economic and Social Benefits of Connectivity

Agreed with

Lee McKnight

Jane Asantewaa Appiah-Okyere

Kwaku Antwi

Mary Uduma

Yusuf Abdul-Qadir

Agreed on

Internet connectivity is crucial for rural development

Internet access facilitates participation in digital economy

Explanation

Increased internet connectivity allows more people to participate in and benefit from the digital economy. This is in line with the UN’s Global Digital Compact, which aims to bridge digital divides and ensure everyone benefits from the digital economy.

Evidence

The speaker mentions the UN’s Global Digital Compact and its objectives of bridging digital divides and ensuring everyone benefits from the digital economy.

Major Discussion Point

Economic and Social Benefits of Connectivity

Community ownership is important for sustainability

Explanation

For the Internet Backpack to be sustainable in underserved areas, it’s crucial to engage with the community and ensure their ownership of the technology. This involves thorough consultation and bottom-up engagement processes.

Major Discussion Point

Implementation and Sustainability

Agreed with

AUDIENCE

Jane Asantewaa Appiah-Okyere

Yusuf Abdul-Qadir

Agreed on

Collaboration between stakeholders is necessary for successful implementation

A

AUDIENCE

Speech speed

137 words per minute

Speech length

675 words

Speech time

294 seconds

Community networks can increase rural internet access

Explanation

Community networks are an effective way to increase internet access in rural areas. These networks can be set up through collaboration between various stakeholders, including telcos, regulators, and academia.

Evidence

The speaker mentions successful implementation of community networks in parts of Africa and suggests working within a multistakeholder framework to set them up.

Major Discussion Point

Internet Connectivity for Rural Communities

Agreed with

Jimson Olufuye

Jane Asantewaa Appiah-Okyere

Yusuf Abdul-Qadir

Agreed on

Collaboration between stakeholders is necessary for successful implementation

J

Jane Asantewaa Appiah-Okyere

Speech speed

124 words per minute

Speech length

1025 words

Speech time

492 seconds

Internet access supports education and skills development

Explanation

Internet connectivity in rural areas enables e-learning and digital skills development. This is crucial for empowering communities and allowing them to participate in the digital economy.

Evidence

The speaker mentions examples of women and their daughters in rural Gambia learning to use social media and online platforms for selling vegetables.

Major Discussion Point

Internet Connectivity for Rural Communities

Agreed with

Lee McKnight

Jimson Olufuye

Kwaku Antwi

Mary Uduma

Yusuf Abdul-Qadir

Agreed on

Internet connectivity is crucial for rural development

Partnerships needed between government, private sector and communities

Explanation

Successful implementation of internet connectivity solutions requires collaboration between various stakeholders. This includes government, private sector, civil society, and the communities themselves.

Major Discussion Point

Implementation and Sustainability

Agreed with

Jimson Olufuye

AUDIENCE

Yusuf Abdul-Qadir

Agreed on

Collaboration between stakeholders is necessary for successful implementation

K

Kwaku Antwi

Speech speed

139 words per minute

Speech length

1092 words

Speech time

468 seconds

Connectivity enables e-commerce for rural farmers

Explanation

Internet connectivity allows rural farmers to market and sell their produce online. This opens up new business opportunities and expands their market reach.

Evidence

The speaker mentions examples of people marketing their agricultural produce online through social media applications and intermediate businesses.

Major Discussion Point

Economic and Social Benefits of Connectivity

Agreed with

Lee McKnight

Jimson Olufuye

Jane Asantewaa Appiah-Okyere

Mary Uduma

Yusuf Abdul-Qadir

Agreed on

Internet connectivity is crucial for rural development

M

Mary Uduma

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Internet supports e-learning and e-health services

Explanation

Internet connectivity enables access to e-learning and e-health services in rural areas. This improves education and healthcare outcomes for remote communities.

Major Discussion Point

Economic and Social Benefits of Connectivity

Agreed with

Lee McKnight

Jimson Olufuye

Jane Asantewaa Appiah-Okyere

Kwaku Antwi

Yusuf Abdul-Qadir

Agreed on

Internet connectivity is crucial for rural development

Y

Yusuf Abdul-Qadir

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Access to information improves agricultural practices

Explanation

Internet connectivity provides farmers with access to crucial information that can improve their agricultural practices. This includes data on climate, temperatures, and planting information.

Major Discussion Point

Economic and Social Benefits of Connectivity

Agreed with

Lee McKnight

Jimson Olufuye

Jane Asantewaa Appiah-Okyere

Kwaku Antwi

Mary Uduma

Agreed on

Internet connectivity is crucial for rural development

Universal service funds could support backpack deployment

Explanation

Universal service and universal access funds, which are available in many countries, could be used to support the deployment of Internet Backpacks in rural areas. This provides a potential funding source for expanding connectivity.

Evidence

The speaker mentions that Kenya has $100 million in universal access funds that could potentially be used for such initiatives.

Major Discussion Point

Implementation and Sustainability

Local manufacturing and capacity building should be long-term goal

Explanation

The long-term vision is to develop local capacity for manufacturing and distributing Internet Backpacks across Africa. This includes building the infrastructure for recycling and e-waste management.

Major Discussion Point

Implementation and Sustainability

Agreed with

Jimson Olufuye

AUDIENCE

Jane Asantewaa Appiah-Okyere

Agreed on

Collaboration between stakeholders is necessary for successful implementation

R

Rob Loud

Speech speed

170 words per minute

Speech length

1032 words

Speech time

363 seconds

Device can support 35+ simultaneous users

Explanation

The Internet Backpack can support over 35 simultaneous users with adequate bandwidth. The actual capacity depends on the available bandwidth and type of connection.

Evidence

In field tests in Ghana, the device supported over 35 teachers connecting simultaneously with adequate bandwidth for their work.

Major Discussion Point

Technical Aspects of the Internet Backpack

Software manages bandwidth efficiently for multiple users

Explanation

The Internet Backpack uses software to manage bandwidth efficiently, allowing more simultaneous users. This includes the ability to prioritize certain types of traffic and adjust video and audio quality to optimize bandwidth usage.

Evidence

The speaker mentions a patent for ‘narrow bandwidth utilization’ that allows traffic reprioritization and quality adjustments for better bandwidth utilization.

Major Discussion Point

Technical Aspects of the Internet Backpack

Agreements

Agreement Points

Internet connectivity is crucial for rural development

Lee McKnight

Jimson Olufuye

Jane Asantewaa Appiah-Okyere

Kwaku Antwi

Mary Uduma

Yusuf Abdul-Qadir

Backpack technology provides internet access in remote areas

Connectivity enables agricultural development and market access

Internet access supports education and skills development

Connectivity enables e-commerce for rural farmers

Internet supports e-learning and e-health services

Access to information improves agricultural practices

All speakers agreed that internet connectivity is essential for various aspects of rural development, including agriculture, education, healthcare, and economic growth.

Collaboration between stakeholders is necessary for successful implementation

Jimson Olufuye

AUDIENCE

Jane Asantewaa Appiah-Okyere

Yusuf Abdul-Qadir

Community ownership is important for sustainability

Community networks can increase rural internet access

Partnerships needed between government, private sector and communities

Local manufacturing and capacity building should be long-term goal

Multiple speakers emphasized the importance of collaboration between various stakeholders, including government, private sector, and local communities, for successful implementation and sustainability of internet connectivity solutions.

Similar Viewpoints

Both speakers highlighted the technical capabilities of the Internet Backpack, emphasizing its ability to provide sustainable power and efficient bandwidth management for multiple users in remote areas.

Lee McKnight

Rob Loud

Backpack includes sustainable power and energy capabilities

Device can support 35+ simultaneous users

Software manages bandwidth efficiently for multiple users

These speakers shared the view that internet connectivity enables rural communities, particularly farmers, to participate in the digital economy and improve their agricultural practices through access to information and e-commerce opportunities.

Jimson Olufuye

Kwaku Antwi

Yusuf Abdul-Qadir

Internet access facilitates participation in digital economy

Connectivity enables e-commerce for rural farmers

Access to information improves agricultural practices

Unexpected Consensus

Use of universal service funds for Internet Backpack deployment

Yusuf Abdul-Qadir

Universal service funds could support backpack deployment

While not explicitly mentioned by other speakers, the suggestion to use universal service funds for Internet Backpack deployment represents an unexpected but potentially significant approach to funding connectivity initiatives. This could be a novel way to leverage existing resources for rural internet access.

Overall Assessment

Summary

The speakers demonstrated strong agreement on the importance of internet connectivity for rural development, the need for collaborative approaches, and the potential of technologies like the Internet Backpack to address connectivity challenges in remote areas.

Consensus level

High level of consensus among speakers, with agreement on core issues related to rural connectivity and its benefits. This consensus suggests a shared vision for addressing digital divides and promoting inclusive development through innovative connectivity solutions.

Differences

Different Viewpoints

No significant disagreements identified

The speakers generally agreed on the benefits and potential of the Internet Backpack technology for rural connectivity and development.

Unexpected Differences

Overall Assessment

summary

There were no significant disagreements among the speakers. The discussion was largely collaborative and focused on the potential benefits of the Internet Backpack technology.

difference_level

Low level of disagreement. The speakers generally supported the same goals of improving rural connectivity and leveraging technology for development. This consensus suggests a strong foundation for implementing and scaling the Internet Backpack technology in rural areas.

Partial Agreements

Partial Agreements

Both speakers agree on the importance of local involvement, but Yusuf emphasizes long-term manufacturing capacity, while Jimson focuses on community ownership and engagement.

Yusuf Abdul-Qadir

Jimson Olufuye

Local manufacturing and capacity building should be long-term goal

Community ownership is important for sustainability

Similar Viewpoints

Both speakers highlighted the technical capabilities of the Internet Backpack, emphasizing its ability to provide sustainable power and efficient bandwidth management for multiple users in remote areas.

Lee McKnight

Rob Loud

Backpack includes sustainable power and energy capabilities

Device can support 35+ simultaneous users

Software manages bandwidth efficiently for multiple users

These speakers shared the view that internet connectivity enables rural communities, particularly farmers, to participate in the digital economy and improve their agricultural practices through access to information and e-commerce opportunities.

Jimson Olufuye

Kwaku Antwi

Yusuf Abdul-Qadir

Internet access facilitates participation in digital economy

Connectivity enables e-commerce for rural farmers

Access to information improves agricultural practices

Takeaways

Key Takeaways

The Internet Backpack technology provides connectivity to rural and remote areas that lack traditional infrastructure

Internet access enables agricultural development, market access, education, and healthcare services in underserved communities

The backpack supports multiple connectivity options (cellular, satellite, mesh) and can serve 35+ simultaneous users

Community ownership and multi-stakeholder partnerships are important for sustainable implementation

Connectivity facilitates participation in the digital economy and supports the African Continental Free Trade Agreement

Resolutions and Action Items

Explore using universal service funds to support Internet Backpack deployment

Develop local manufacturing and capacity building as a long-term goal

Engage communities to ensure ownership and sustainability of connectivity solutions

Promote the Internet Backpack as a tool for achieving UN Sustainable Development Goals and African Union Agenda 2063

Unresolved Issues

Specific funding mechanisms for large-scale deployment of the technology

Regulatory frameworks needed to support community networks in different countries

Long-term plans for e-waste management and recycling of devices

Strategies for scaling up manufacturing and distribution across Africa

Suggested Compromises

Use lease financing models to make the technology more affordable for communities

Develop variations of the backpack tailored to specific use cases (e.g. agriculture, education, emergency response)

Combine local content servers with internet connectivity to optimize bandwidth usage

Partner with existing telecom providers to complement rather than compete with current infrastructure

Thought Provoking Comments

The Africa We Want vision seeks to make Africa global, global equal, an integrated economy with accessible digital services for government, businesses, and citizens. It emphasizes e-commerce, e-government, and participation in the fourth industrial revolution, particularly for countries across the continent. However, challenges like limited infrastructure, low internet access, where only 27% of African rural communities are applying, technology gaps remain.

speaker

Yusuf Abdul-Qadir

reason

This comment sets the stage for the entire discussion by outlining both the vision and challenges for digital connectivity in Africa. It frames the conversation around a specific goal and identifies key obstacles.

impact

This framing guided the rest of the discussion, with subsequent speakers addressing various aspects of achieving this vision and overcoming the identified challenges.

What were the origins? What was the purpose? It was to bring connectivity anywhere, no matter what. Meaning with or without a local power supply, with or without cell infrastructure, we wanted to make it possible to connect.

speaker

Lee McKnight

reason

This comment provides crucial context about the Internet Backpack technology, highlighting its versatility and potential to address connectivity challenges in diverse environments.

impact

It shifted the conversation to focus on practical solutions and sparked discussion about various use cases for the technology in different African contexts.

So from the private sector perspective, based on our advocacy, because we believe there should be increased collaboration, okay, and for government, private sector, civil society, academic and technical community to come together to see how they can fulfill or achieve the goal of 100% connectivity.

speaker

Jimson Olufuye

reason

This comment emphasizes the importance of multi-stakeholder collaboration in achieving connectivity goals, introducing a key theme for the discussion.

impact

It broadened the conversation beyond just technology to include governance and partnership aspects, influencing subsequent comments about regulatory frameworks and community involvement.

For me, and I belong to a group in Nigeria, the advisory group for community networks. And this is a big solution for us. Log and play, drop anywhere, any community.

speaker

Jane Asantewaa Appiah-Okyere

reason

This comment connects the technology directly to community networks, highlighting its potential impact at a grassroots level.

impact

It shifted the discussion towards more specific, on-the-ground applications and sparked conversation about community ownership and sustainability.

Our vision is not for us to continue to… I love Rob, and I love the folks at Incon, and I’m happy that they’re doing great humanitarian work. Really, oftentimes, no profit value for them. Like, this is a technology of the very many technologies that they offer as a company. But really, we do want to have a community development, African-based infrastructural capacity building institute to be able to build these technologies, to distribute them across the continent, to deal with the e-waste and recycling of that, to develop the capacity, the jobs, the economic value.

speaker

Yusuf Abdul-Qadir

reason

This comment articulates a long-term vision for African ownership and development of the technology, addressing concerns about sustainability and economic impact.

impact

It deepened the conversation by introducing considerations of local capacity building, economic development, and environmental sustainability.

Overall Assessment

These key comments shaped the discussion by progressively expanding its scope from the initial focus on the technology itself to broader considerations of implementation, collaboration, community impact, and long-term sustainability. They helped to create a comprehensive dialogue that addressed not just the technical aspects of connectivity, but also its social, economic, and developmental implications for Africa. The discussion evolved from explaining the technology to exploring its potential applications, considering necessary partnerships, and envisioning long-term African ownership and development of such solutions.

Follow-up Questions

How can we develop local capacity to build and maintain internet connectivity hardware in African communities?

speaker

Jarell James

explanation

This is important for sustainability and reducing reliance on imported technology.

What are the specific regulatory reforms needed to enable and support community networks?

speaker

Lee McKnight

explanation

Regulatory changes are key to allowing community-driven internet solutions to flourish.

How can universal service funds be effectively leveraged to support internet connectivity initiatives like the Internet Backpack?

speaker

Yusuf Abdul-Qadir

explanation

Utilizing existing funds could provide sustainable financing for expanding internet access.

What are the best practices for engaging local communities in the deployment and maintenance of internet connectivity solutions?

speaker

Poncelet O. Ileleji

explanation

Community involvement is crucial for the long-term success and adoption of internet technologies.

How can we measure and quantify the economic impact of increased internet connectivity on rural agricultural communities?

speaker

Kwaku Antwi

explanation

Understanding the economic benefits could help justify further investment in connectivity solutions.

What are the most effective ways to provide digital skills training to rural communities, particularly farmers?

speaker

Poncelet O. Ileleji

explanation

Digital literacy is essential for communities to fully benefit from internet connectivity.

How can we address e-waste and ensure sustainable lifecycle management of internet connectivity hardware in rural areas?

speaker

Lee McKnight

explanation

Proper disposal and recycling of technology is important for environmental sustainability.

What are the specific connectivity needs and use cases for different sectors (e.g., agriculture, education, healthcare) in rural African communities?

speaker

Mary Uduma

explanation

Understanding sector-specific needs can help tailor connectivity solutions more effectively.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #65 Gender Prioritization through Responsible Digital Governance

WS #65 Gender Prioritization through Responsible Digital Governance

Session at a Glance

Summary

This discussion focused on digital gender inclusion and responsible digital governance, particularly in low and middle-income countries. The panel explored barriers to digital inclusion for women and strategies to overcome them. A case study from Pakistan highlighted a structured national strategy to address the digital gender divide through multi-stakeholder collaboration and targeted working groups. Key barriers identified included lack of affordable devices and connectivity, social and cultural norms, digital literacy gaps, and economic constraints.

Panelists emphasized the importance of creating safe online environments, providing digital skills training, and ensuring meaningful connectivity. The role of community networks in empowering women in underserved areas was discussed, along with the need to extend such initiatives to urban settings. The importance of gender-disaggregated data for informed policymaking was stressed. Private sector initiatives, such as Meta’s programs for women’s digital empowerment, were presented as examples of industry efforts.

The discussion highlighted the need for explicit policy frameworks, multi-stakeholder approaches, and financing mechanisms to bridge the digital gender divide. Panelists agreed that closing this gap requires addressing not just access issues, but also quality of connectivity and online safety concerns. The session concluded by emphasizing the urgency of action and the availability of funding opportunities, such as the Women in Digital Economy Fund, to support initiatives aimed at digital gender inclusion.

Keypoints

Major discussion points:

– The digital gender divide and barriers to digital inclusion for women, especially in low and middle income countries

– Pakistan’s Digital Gender Inclusion Strategy as a case study of a national policy approach

– The role of private sector companies like Meta in promoting digital inclusion

– Community networks as a solution for connectivity in underserved areas

– Financing and policy mechanisms needed to support digital inclusion efforts

The overall purpose of the discussion was to examine challenges and solutions for bridging the digital gender divide and promoting responsible digital governance, with a focus on low and middle income countries.

The tone of the discussion was informative and solution-oriented. Speakers shared examples of initiatives and policy approaches in a collaborative manner, with an emphasis on multi-stakeholder efforts. The tone remained consistent throughout, maintaining a focus on practical steps to address the issues raised.

Speakers

– Waqas Hassan: Asia Lead for Policy and Advocacy at the Global Digital Inclusion Partnership

– Malahat Obaid: Director of Communications at the Pakistan Telecom Authority, member of the team that developed the PTA Gender Inclusion Strategy, digital gender specialist for the Central Bank of Pakistan’s Initiative of Women’s Financial Inclusion

– Onica Makwakwa: Executive Director at Global Digital Inclusion Partnership, Executive Managing Director for Women in Digital Economy Fund

– Cagatay Pekyorur: META’s Head of Community Engagement and Advocacy for Africa, Middle East and Turkey

– Josephine Miliza: Policy and Regulation Lead on Local Networks Initiative at the Association of Progressive Communications, advocate for digital equality based in Nairobi, Kenya, co-chair of the African Community Networks Summit

Additional speakers:

– Audience member from Colombia: Works with an NGO called Colnodo on community networks

– Audience member asking about PTA’s strategy for rural areas in Pakistan

Full session report

Digital Gender Inclusion and Responsible Digital Governance: A Comprehensive Discussion

This panel discussion focused on the critical issues of digital gender inclusion and responsible digital governance, with a particular emphasis on low and middle-income countries. The conversation brought together experts from various sectors to explore the barriers to digital inclusion for women and strategies to overcome them.

Introduction

The discussion highlighted the complex and multifaceted nature of the digital gender divide, emphasizing the need for comprehensive, multi-stakeholder approaches to address it. Panelists explored various aspects of digital inclusion, from policy frameworks to community-driven solutions, addressing fundamental barriers such as digital literacy and online safety.

Key Themes and Discussion Points

1. The Digital Gender Divide: Barriers and Challenges

Onica Makwakwa, Executive Director at Global Digital Inclusion Partnership, identified several significant barriers to digital inclusion for women:

– Lack of access to affordable devices and internet connectivity

– Social and cultural norms that limit women’s access to technology

– Digital literacy gaps and lack of foundational digital skills

– Economic constraints, including limited financial resources and time

– Lack of relevant content in local languages

– Online safety concerns and cyber violence

2. Policy Approaches and Strategies

The discussion highlighted the importance of structured policy approaches with clear implementation plans. Malahat Obaid, Director of Communications at the Pakistan Telecom Authority, presented a case study of Pakistan’s Digital Gender Inclusion Strategy, outlining a three-phase approach:

1. Development of strategy pillars

2. Implementation planning

3. Setting targets, goals, and outcomes with a three-year action plan

Obaid detailed the strategy’s six working groups focusing on:

– Access and connectivity

– Affordability

– Digital skills and literacy

– Content and services

– Safety and security

– Research and development

This structured approach was seen as a model for other countries to follow, demonstrating the value of clear, actionable strategies in addressing digital gender inclusion.

The panelists agreed on the critical need for gender-disaggregated data to inform effective policies. Malahat Obaid and Onica Makwakwa both stressed this point, emphasizing its importance in understanding the depth of the problem and identifying areas for intervention.

3. Multi-stakeholder Collaboration and Private Sector Involvement

The discussion emphasized the importance of collaboration between government, industry, and civil society in promoting digital inclusion. Cagatay Pekyorur, META’s Head of Community Engagement and Advocacy for Africa, Middle East and Turkey, highlighted META’s approach to digital inclusion, focusing on:

– Creating a safe online environment

– Supporting access to digital tools

– Maintaining inclusive stakeholder engagement

Pekioror stressed the need for official policy frameworks and action plans to incentivize private sector involvement.

4. Community Networks and Locally-driven Solutions

Josephine Meliza from the Association of Progressive Communications introduced the concept of community networks as a solution for connectivity in underserved areas. She explained how these small-scale, locally owned infrastructure providers can effectively address digital inclusion by providing tailored solutions that understand and address local context and needs.

Meliza highlighted the impact of community networks on women’s empowerment, particularly in underserved areas, noting their potential to provide economic opportunities and enhance digital skills.

An audience member shared an example of a community networks project in Colombia, demonstrating the potential of women-led initiatives in digital inclusion.

5. Online Safety and Security for Women

The panelists agreed on the critical importance of ensuring women’s safety in online spaces. This included discussions on:

– Developing policies and frameworks for online safety

– Creating support groups to address online gender-based violence

– Implementing gender-responsive laws and legal frameworks

6. Capacity Building and Skills Training

The discussion highlighted the importance of digital literacy and skills training programs for women. Onica Makwakwa emphasized how the lack of foundational digital skills puts women at a disadvantage in accessing opportunities in digital technologies.

7. Financing Mechanisms and Funding Opportunities

Waqas Hassan, Asia Lead for Policy and Advocacy at the Global Digital Inclusion Partnership, discussed the need for financing mechanisms to support digital inclusion initiatives. The Women in Digital Economy Fund (WIDEF) was mentioned as a specific opportunity, with new funding rounds planned for India in 2024 and globally in March 2025.

Challenges and Solutions

While the discussion provided comprehensive insights into digital gender inclusion strategies, several challenges were identified:

1. Engaging women from conservative rural areas where mobile phone use is taboo

2. Balancing the commercial viability of community networks with serving hard-to-reach areas

3. Developing effective methods for collecting comprehensive gender-disaggregated data

4. Changing negative perceptions about women’s use of technology in conservative societies

An audience member raised a question about PTA’s strategy for involving women from rural areas in Pakistan, particularly in the Khyber Pakhtunkhwa region, highlighting the need for targeted approaches in challenging contexts.

Conclusion

The discussion underscored the urgency of action in promoting digital gender inclusion, framing it as both a social and economic imperative. As digital technologies continue to shape global economies and societies, bridging the digital gender divide remains a critical challenge that requires sustained effort, innovation, and collaboration across sectors and stakeholders.

In his closing remarks, Waqas Hassan referenced “The Time Is Now” report, emphasizing the timeliness and importance of addressing digital gender inclusion. The conversation provided a holistic view of the challenges and potential solutions, setting the stage for continued work and innovation in this crucial area.

Session Transcript

Waqas Hassan: And I am the Asia Lead for Policy and Advocacy at the Global Digital Inclusion Partnership. We are a policy advocacy organization working on connectivity and digital inclusion. And one of the consortium members to manage the Women in Digital Economy Fund, which has been launched last year by the White House. Today, our session is about two things. One is, of course, digital gender inclusion, digital gender equality, but we’re going to link it with responsible digital governance. If you see what is digital gender divide, it simply refers to the inequality between resources. When men and women try to access and use the internet, are there equal opportunities for both of them? Or is one gender more disadvantaged than the other? I would include all of the genders in there as well. That is where we see that there is a gender gap. There is a digital gender gap in terms of when we’re speaking about it. Now, when we talk about responsible digital governance, we talk about creating and enforcing policies or frameworks and practices that ensure that we have ethical and inclusive and equitable use of digital technologies. So with that in our mind, and also I would like to remind you that this discussion is mainly focused on the low middle income countries. So that is going to be the focus of our discussion today. What right now, if we see and if we look at a few numbers from, let’s say, from ITU, there are 244 fewer women that are online than men. So gap, this is a huge gap, as you can see. So for example, women are just the 20% women who use internet in LMICs, but as compared to 34% men. And at the same time, because of this digital inequality, there is a huge economic loss which is associated with it. According to estimates by GDIP and others, the countries have almost lost one trillion dollars just by not being able to bridge the digital gender divide. So it is not just a social issue or a social empowerment issue, it is actually now an economical issue as well. So in that sense of the matter, if the countries are to utilize this opportunity, and they must, they can add about half a billion dollars over the next five years. So with this concept in mind, we thought of organizing this session. And we would like to have our fantastic panelists with us who will speak about different areas as per their expertise, I’ll introduce them a bit later on. I’ll just explain the session flow for all of you. What we’re going to do is that first of all, there would be a presentation on a policy best practice, or a good practice, I would say, from Pakistan, which is Pakistan Digital Gender Inclusion Strategy. We have with us Ms. Malahat Vaid, who is the Director of Communications at the Pakistan Telecom Authority, and the member of an all-female team that developed the PTA Gender Inclusion Strategy. And she also serves as a digital gender specialist for the Central Bank of Pakistan’s Initiative of Women’s Financial Inclusion. So once Malahat presents that strategy as a case study, we will then move on towards our rest of the panelists. One of them is Onika, who is the Executive Director at Global Digital Inclusion Partnership, and one of the Executive Managing Directors for Women in Digital Economy Fund. After Anika, we will hear from Kegatay Pekioror, he serves as META’s Head of Community Engagement and Advocacy for Africa, Middle East and Turkey. He has a law background and he has been spearheading META’s policy, public policy and programs prior to this role. And Kegatay prefers to be addressed as they or them. Next up we have Josephine Meliza, who is the Policy and Regulation Lead on Local Networks Initiative at the Association of Progressive Communications. She is a leading advocate of digital equality based out of Nairobi, Kenya and also co-chairs the African Community Networks Summit. So with this context in mind, let’s start with the session and I would request then Malahat to please present and talk about PTH, Digital Gender Inclusion Strategy and what kind of opportunities and challenges were there and how they actually started this process and then made a strategy and is now in the implementation phase of that strategy. So Malahat, I will hand it over to you, if you could please share your screen. You also need to unmute your mic, Malahat.

Malahat Obaid: For the generous introduction, I am grateful to the IGF and the hosts for having this workshop on such a critical subject and considering Pakistan’s Digital Gender Inclusion Strategy as a best practice. So without further delay, I believe I have a very short time to present my strategy. This is the flow of the presentation, I will be giving you some statistics and the gender gaps that exist in Pakistan, the formulation of strategy and the collaborations that we have during the strategy and the methodology we have adopted. Then the consultative… process that we followed, out of which we came out with challenges and barriers, and also the solutions to those barriers to overcome the inclusion issues that we have here in Pakistan. Towards the end, I will be telling you about the three years action plan that has been set out in the gender inclusion strategy, the working groups that we have created, and the impacts they will be creating once the strategy is implemented, and of course, towards the end, the achievements so far while we are implementing. Just to tell you, Pakistan is the fifth most populous country in the world with over 250 million population, mainly the population has around 50% as female population. Of course, almost 91% of our population has access to telecom services, but of course, the subscriber base stays at 196 million. Of course, the literacy rate, like any other Asian economies, is low, but relatively it’s low for Pakistan, it is around 63%. One of the main reasons why digital gender inclusion is not kicking off really in Pakistan, this female ownership of SIM is quite low, we have out of 196 million, only 47 million SIMs that are on female’s names or CNICs. These are the digital gender gaps that Pakistan is grappling with. If you can see, from 2018 to 2024, we have made a lot of improvement, however, when it comes to absolute terms, yes, we are improving, but in relative terms, Pakistan is one country in Asia which has lowest gender gaps. technology. We have made good progress when it comes to awareness about internet but when it comes to ownership and the use of internet of course the gaps are wider. Even in social media we have around 70 million social media users but the gap is, when we started off it was over 71% today it is 59% as it shows on the graph but this is for YouTube there is a you know a different number when it comes to Instagram and it is very you know forthcoming that the gap in usage of Instagram is less it is around 41% so I believe that the younger generation the younger females are using Instagram more frequently than the other the social media applications. For branchless banking we have reduced the gap to 54% and today there are around 35 million mobile wallet accounts that are being used by women and owned by women. Going to the digital gender inclusion strategy these gaps that I have just mentioned and then the rankings that were coming out for Pakistan like GSMA mobile connectivity index or for that matter inclusive internet index or the digital inclusion index Pakistan is not doing well so the government decided to address the issue with structured approach and in this regard a PTA took the lead and started the initiative of gender inclusion. We set up a committee which started working in the month of February in 2022 and following a structured approach we decided to go for the strategy first. In this regard UNESCO one of the UN’s main organization gave us the technical support, and of course, the Ministry of IT and the GSMA were there to support us in building up the strategy. The former Alliance for Affordable Internet also came forward and pitched in their share while we were developing the strategy. Of course, our operators were there to help us out. So when, you know, all this started off, the objective was to create a government platform based on all of the society approach, with members from all stakeholders identifying the challenges, make policy interventions and implement them across the sector, or rather all the sectors, for bringing the change and growth that is required for filling up all these gaps that we’ve been talking about. This is the methodology that we adapted. Phase one was to identify the problem for which we did a very extensive consultation process across the country. In phase two, we did the problem analysis, the areas where there was a requirement to go further deep and see where the problem is lying and how can we address. We came up with the strategy pillars and how to implement it. In the third phase, we set the targets and the goals and what would be the outcomes. Of course, and we came out with the action plan, which is a very, you know, it is although tough, but it’s a three-year action plan with specific targets and goals, and we are hoping that we will be, inshallah, able to manage it. The consultative process that we followed, as I’ve already explained, was quite an extensive one. We did a public perception survey, an IVR survey, then we had, you know, multi-stakeholder workshops that we conducted across Pakistan, and we did some expert interviews of the gender experts, not only in Pakistan, but internationally. as well to understand how to address the issues that have been coming up while we are going through this consultation process. And of course we did an online survey as well which was for all the sectors, females and even males, to participate and come up with their point of view of addressing the digital gender gap in Pakistan. We tried to, for the perception survey and the IVR survey, we tried, specifically for perception survey, we tried to access those areas in Pakistan which do not have the connectivity so that those people can really tell exactly the females where the problem is. Of course, accessibility is one problem, but then there are social and economic problems that came up as well. And the IVR survey that we ran across Pakistan is one of the largest survey for assessing the digital gaps or the digital inclusion state in Pakistan. It was around 100,000 sample size that we had and then there were multiple questions that we ran through that IVR survey. Of course, this was done with the help of our, the licensees, the mobile operators, they did this to assist PTA in running the survey. Of course, GSMA’s consumer surveys that are carried out regularly for Pakistan to assess the gender inclusion that they do every year, we also had their assistance and their contribution in the consultation process. Just to give you a couple of outcomes that we had from these surveys. In this survey, we mainly asked women and men both, if they have a mobile phone and have a SIM, and if they have, are they using it? And are they using the mobile? the internet or not. So the interesting fact that came out was that more women are using mobile than the ones who are owning it. Which means that there is an urge, there is a requirement by the women but they don’t have the phone so they make use of the phones that the family has and they use it. But the good thing was those females who have the mobile or for that matter the same, are actually making use of it and using the internet. So this was a good thing that they were not only having the phone but they were actually meaningfully using it. The perception survey that was ran in the areas of the country which were not connected we saw and we asked the women why are they using, what do you think would they do if they have internet and they were more inclined towards having better economic opportunities to help their families and to have a good communication with the family members who are out of the area where they are living. So with this consultative process we came out with the barriers and challenges that the females of Pakistan are currently facing. One of the major when we started off and then of course through the consultation process we realised that we do not have the gender desegregated data, whatever we have is not good enough for making a strategy or for building up any case for reducing the gender divide. So this was one of the major challenges or the barrier that we feel Pakistan has of non-availability of desegregation. data. Of course then the digital literacy and the availability of local content was not there and it was for the women who were already on internet also felt the need for having the local content. Affordability came out to be one of the major reasons for this digital gap. Women either do not have the capacity or the economic ability to buy or of course there are family concerns and disapprovals which do not allow them to have either the mobile or the same or for that matter having the package, internet package. Yes, infrastructure and accessibility was also one of the major issues where there are a number of areas in Pakistan where the terrain is difficult for even the operators to, the commercial operators to go there and provide the service. Then people, as I have already told you, people also have negative perception about using internet or having a mobile handset for their females. And then of this perception they thought that safety is one of the major issues while they allow their females or the girls to have mobile in their hand or using the internet. With this in mind, the three years action plan that was rolled out in the strategy was on a bigger platform that we started with the steering committee which is headed by the minister for IT and telecom in Pakistan and the secretarial support is given by PTA. Under this, we have identified these six areas of which we made working groups. One was affordability. Then it was accessibility, which was covering the infrastructure requirements. And then there was safety and security, how we can ensure that women should feel safe while they are online. Then we have to create digital literacy for those who are educated or who are literate, but still do not have digital literacy. So we have identified this as a pillar, as a core working group where we have to address this issue. Inclusion is one area where we need to change the perception of the general public and the masses that internet can be used for better purposes, for the economic and social well-being of a female or a family member. And of course, we have this working group on research and data that is going to be very helpful, of course, while we are doing the policy changes and setting up targets and goals for reducing this gender gap. All these working groups are being led by the top government agencies in the country, of course, according to the specific areas. Between the working group and the steering committee exists a technical advisory committee. These are all organizations out there who are actually helping reduce the gender divide or the digital divide across the globe. So they have the visibility, they have the capacity and the capability to guide not only our working groups, but but also help the steering committee identify and take the areas out where there is a possibility, immediate possibility of improving the situation in the country. With this, working groups and the steering committee, we have kicked off this digital gender inclusion strategy in August 2024, and almost all of the working groups are now live and they have started working, they are revising their TORs, although the TORs are already there in the strategy identified, but we always thought that it is good to give them a chance to reassess the situation and see if they can improve it and then will start implementing in their own areas. So with affordability Pakistan, the affordability working group, the impact that we are expecting is to have 25% more women in Pakistan who can now afford after this implementation and the projects that we are going to go through with these working groups, we will be able to increase this number. Similarly, we are expecting that by end of three years action plan, we will be having 20% more women have access to digital services and SIMS will be on their CNIC. So currently, as I told you, it is around 47 million women who own a SIM on their CNIC. We are expecting it to be 20% more towards the end of the next three years. Safety and Security Group is headed by the Human Rights Commission of Pakistan and we are trying to have… gender-responsive laws and legal frameworks that ensure that women are safe while they are online. So the digital literacy group is headed by the Ministry of Education and we expect that the 60% of adult women population will acquire digital skills with the implementation of this strategy and certainly would want this strategy to play its role while we turn around the negative perception of women use of technology through this strategy. And we are working with the Pakistan Bureau of Statistics, the federal commission which provides statistics for the country, they are also leading this working group and are in the process of devising the indicators that are required to assess the digital females participation in the digital arena. With this we have, you know, since August and even before while we were devising the strategy we started having the collaboration. So these are the organisations that we have partnered with and they have rolled out the programmes in digital skills, awareness and of course the reports that are going to come up and the awareness spread programmes that we are having. We are committed to the strategy implementation under all of the stakeholders approach and thank you and over to you Vakas.

Waqas Hassan: Thank you. Sorry for rushing you but we have, as you can see this strategy developed by PTA is why is it presented as a case study because is you can see a structured approach, and you can see a clear plan of implementation. So if you have more questions about the strategy, how it was developed, anything else, you can ask during the Q&A. Or you can reach out to Dr. Khabar, who is the member of Compliance and Enforcement at PTA, and here in the room at the very front. So with this, I will now quickly move to Anika. And Anika, with your extensive experience working with underserved communities and for digital gender equality, what would you say are the key barriers and challenges in low, middle, and income countries?

Onica Makwakwa: Thanks, Warkus. And thank you so much, Malahat, for that presentation. It just really helps us see the picture at the national level when a country really commits to understanding the gender-digital divide, and actually not just adopting policies, but a commitment to implementing for change. So a lot of these that I’m identifying, I think will resonate very much with the presentation that we’ve just had. And I’m going to base this on two particular publications we published this year. One is the Connected Resilience, which looks at gendered experiences of women through meaningful connectivity. And the other one is The Time Is Now, which is a policy impact report that we published through the YDEF initiative to actually really look at policy frameworks that are successful in advancing our efforts to close the gender-digital divide. And I would say that the biggest barriers that we are identifying in most of this report, and a lot of the work that’s been done by many other organizations, is the lack of access to affordable devices and internet connectivity. Having reliable digital information. infrastructure, especially for women in rural areas, is a major barrier that actually keeps them away from being able to enjoy and utilize, you know, digital services as well as be part of a digital economy. So I’m going to go through this very fast, because I know you don’t have a lot of time, but we want to have a little bit more discussion later on. The second one, key one, is social cultural barriers and gender norms. And this really is no surprise for many of us, but we have to continuously work on these on the digital side. They don’t just simply go away simply because we are working on technology. These are issues that exist within our society in terms of restrictions on girls and women’s mobility, also therefore has an impact on their ability to access services such as public Wi-Fi, as an example. The lack of digital literacy and skills, you know, foundational digital skills, really put women at a disadvantage in terms of even being able to acquire the necessary opportunities that exist in digital technologies. And the fourth one is economic barriers, and this one is just not so much a lack of having the financial resources, but it’s also a lack of time as an economic value, right? Because women are predominantly the ones that we expect to fulfill the unpaid care labor in most societies. And so it also means that, yes, they lack the financial resources to buy these devices that are unaffordable, but they also lack the time to be able to dedicate towards the skills and training and developing themselves for utilization of digital technologies. And lastly, maybe not lastly, I’ll just mention two more. One more is the lack of legal and policy frameworks that are very explicit about closing the inequalities. You know, these things are not going to just happen on their own. and we need to be intentional in making so, including safety online for women, having laws and policies that are explicit about giving that protection for them. And lastly, I won’t elaborate on it because that’s something that Malahat spoke a lot about, and that is the lack of gender data gaps. You know, we know what we know now, but we know that it may be quite inadequate because we are not collecting gender-segregated data to be able to really understand how deep the problem is and where the interventions are most needed. So I will pause there for now, and thank you so much for this opportunity.

Waqas Hassan: Thank you, Anika. Thank you for identifying the barriers which are most prevalent in the low-middle-income countries. And I’ll come back to you with a couple of things, but now I’ll move to Kagete. Kagete, first of all, thank you for joining us. And coming from Meta, you know, as a big platform, you know, one of the big techs, what do you think, what are the ideal ways in which the community and the industry and the platforms, you know, can help and overcome these barriers for digital gender inclusion? And how can we influence positive governance practices on this issue? Thank you so much, Vakas.

Speaker 1: I think I can try to answer this question by first talking like in Meta, like it’s Meta, how we are seeing the problem and how we are trying to overcome it. I’m not sure if you can hear me properly, but okay. And then like I may try to come up with a more proper answer to the question itself, maybe, by also including like my take on the issue. But first, I would like to start with by saying that like at Meta, we believe women should have equal access to the economic, educational, and social opportunities that the internet provides. that’s for sure. And we try to take a multifaceted and also multi-stakeholder approach in ensuring that our services are accessible and inclusive for women through all our platforms and products and policies. If I can try to put this in a structure to explain it a bit further, I think for us the first priority here is creating a safe online environment for all genders, but of course like for the women in the context of this panel. And the second pillar would be supporting access to the digital tools and the digital opportunities that our platforms also enable all our users. And the third bucket in a way connected to the first one as well, but also like there’s an independent side of it too that I can explain, that’s maintaining an inclusive stakeholder engagement in relation to our innovation, like when we are innovating a new product, and also our integrity related efforts, like when we are trying to understand the risks. By that what I mean is when we are innovating a new product to make sure that such product is not biased and it reflects the characteristics of all genders, we believe that we should be in consistent engagement with women and group representatives of other genders. And also when it comes to our risk understanding, our risk assessments, again those should be inclusive of the experiences of these user groups. And in doing this, like all three different buckets of work, I would like to say that our approach most of the time requires us to work closely with civil society organizations, like this is what I meant by my multi-stakeholder approach that we have, and also in some instances like we are in partnership with the governments. I will try to keep it as… brief as possible because I know that we want to open it for Q&A, but very briefly, when it comes to creating a safe online environment, it is of course mostly related to our own community standards and our policies which governs which content we allow and which content we don’t allow on our platforms. And we of course do have policies that are specifically designed to protect high-risk users, vulnerable groups including women, such as our hate speech policy, sexual exploitation of adults policy, bullying and harassment policy. They have elements that are specifically designed to protect women, such as from revenge porn or sex torture. And we have a safety center which includes useful information for people who may not feel safe in our platforms or in general online. And there is a specific safety hub that is focused on women’s safety itself too. On creating and supporting access to digital opportunities, especially when we think about low- and medium-income countries, I would like to mention one specific program that we have which is called She Means Business. This program is actually a training program to empower women with the tools that may enable them to benefit the digital economy in a more meaningful way for them. And it goes beyond just like teaching about our own tools, but also it includes information about business resilience, financial literacy and cyber security, because we are seeing that these are actually required to create success there. And in Turkey, we conducted this program in collaboration with the government and also civil society organization. And since it’s launched in 2017, 7,000 women have been trained on this program. Also in Africa, like continents, we focused in Nigeria, Kenya, South Africa and Senegal for this program. And again, thousands of women in these countries have been trained. Another example, like from this country, Saudi Arabia, when we think about metaverses, like more innovative like products that we have. We realized especially for our region, Africa, Middle East and Turkey region, readiness is the key issue regardless of the gender, regardless of the background. Hence like we come up with like some programmatic activities to make sure that the main youth is actually ready with their technical capabilities for these upcoming technologies. And we started Metaverse Academy in this country again in partnership with the government and also the university here. And I am very happy to say that the significant majority of the participants were women in this program and this was one of the goals for us as well. Also for the government. I can definitely articulate more on our stakeholder approach, inclusive but like I also want to be very mindful of the time like for the other panelists. Just before closing I want to say three more things very briefly because like the question is like what are the ideal ways and like it’s of course speculative, it will be speculative of me. But in my experience like when I look at all these projects that I was also involved in, I think we definitely benefit from official policy frameworks and action plans that prioritizes overcoming barriers to digital inclusion of women because they create an incentive for private companies to focus on this area and like come up with programmatic efforts. Again like this is my take. And I think as a second thing there is a huge benefit in facilitating direct engagements with civil society organizations and private again like platforms because like it helps us to as I mentioned like develop a better understanding of the actual situation. But also it allows civil society organizations and their representatives to have a deeper influence on the product development and also the projects that these companies do have. And again in relation to the civil society, I would also like to recognize the value of the advocacy efforts of these groups in keeping both platforms and also governments accountable. When we miss something or like when there’s an area that requires more investment or more government support, it’s always the civil society that puts it under a spotlight and definitely it plays a big role in keeping us accountable and come up with a better governance. Thank you.

Waqas Hassan: Thank you. Thanks, Kegade, for sharing Meta’s approach towards digital inclusion and online safety. And you mentioned She Means Business. That program was also launched in Pakistan. One of the organizations that was implementing that program has now actually been selected as a winner of Round One of Women in Digital Economy Fund. So they are going to be funded and they’re going to conduct these digital literacy trainings across Pakistan, which is great for the country of course and to bridge the gender digital gap. Josephine, I’m going to come towards you now and your experience building community networks and you know doing policy advocacy for that and being deeply connected with those communities on ground. When you take these kind of innovative solutions like community networks to these areas, what kind of impact do you see on women in those areas and how does this work for gender empowerment in those underserved areas? Thank you, Akash, and also for all

Speaker 2: the great panelists who have gone before me. I think a lot of what they’ve shared really resonates with the work that we are doing. And for those in the room who are not familiar with community networks, it’s just essentially small-scale or locally owned infrastructure providers that traditionally are based in places where commercial operators are not going because of profitability issues. And one of my reflections or learnings or just one of the things I’ve seen around the impact of locally driven solutions is really understanding the local context. And an example being in the sense of how they look at gender empowerment or women empowerment and inclusion is that when it comes to traditional operators, you find that they really do not integrate into issues such as distance. How long does a woman need to walk in terms of getting to maybe a cyber cafe where they can access internet as well as the devices, affordability, the other roles that they play at home. And so, what community networks do is being able to hold spaces, which are women’s circles where you get to demystify first what technologies are, but also just develop a program so that it is capacity building, whether it is the service provisioning, that really understand the different concepts around the women need. So we are seeing a lot of changes and a lot of impact in terms of skill building and addressing some of the issues, not just affordability. Right now, with the online space, there’s gender online based violence or technology facilitated violence, which impacts women. Some, yes, are able to get online, but get scared and now leave online spaces. And so, the essence of having community networks is also having not just online support groups, but also in-person support groups that are able to support these efforts. I’m not getting another chance, so I just also wanted to bring in a reflection on how we can be able to collaborate moving forward. I really appreciate the work that the partners such as GDIP have been doing in terms of highlighting where the gaps are and also bringing strong recommendations, whether it is on promoting digital policies that look at this issue, as well as financing with projects such as the YDEF, because a key gap that we’re seeing is when it comes to access to devices, there’s a lot of initiatives that are going towards capacity building, but very limited efforts in terms of ensuring that devices are affordable, as well as there’s actually affordable access and infrastructure. So, financing is a key aspect and digital policies that address this issue. issues, so that even when we are doing allocations for funds, such as the universal service access funds, we can be able to incorporate some of the aspects around inclusion at community levels. Thank you.

Waqas Hassan: Thank you. Thank you, Josephine. I mean, we can all agree that it’s not just that you take a brilliant, innovative solution to a community, it has to be a meaningful connectivity that you eventually take there. And having this financing mechanism out there, there was a session earlier in the morning on financing mechanism. I think it was a wonderful session where the panelists also shared about how those kind of financing mechanisms could be there. We are a bit short of time. I would now like to turn towards the audience. If anybody online or in the room would like to share their experience, or ask a question to the panelists, or if you have any insights, any good policy practice that you see and you’d like to share with us, please just raise your hand, or we’ll give you a mic, and take your views on this. I have a question for Josephine, if there’s none on the floor. There’s one on the floor, and then I’ll come back to you, Annika. Sure.

Audience: OK, thank you. Well, thanks for all the ideas and things that you shared today. I just want to tell you that I’m from Colombia, from an NGO called Colnodo, and we also work with community networks in our country. And the last years, these kind of networks has been related with women, because we have a project that is. is financed by Google, in order to implement 10 community networks in different communities in Colombia. But with the participation of women. Then one of the things that we do in our methodology is create a group we call head stories. I don’t know how to say in English. Head stories like managers of the community networks. But the most of them are women. And they receive capacity in technical issues about how to implement, install, and then sustain the infrastructure for the community network. And other group of women receive capacity also in how to create contents for the community network. And also other group in financial and administrative issues for the sustainability of the network. And additionally, another group that have training, for example, in enterprise, beginning an enterprise, or using technology for, yes, for their own interest. This is because we have been working with Meta also in Colombia in bringing to some women that kind of capacity in using the platforms for their own business. Then just to share with you, this kind of initiative is done where we can work with women in different activities and different contexts. And trying to find also what is the interest of the women. Because not all want to participate in all the things. But we can bring to them the opportunity to have capacities in different topics in accordance. of their interests. Thank you.

Waqas Hassan: Thank you, thank you so much. I hope you can hear me from this, okay. So, thank you so much for sharing this example with us from Colombia. I think what we can see is that we also see a structured approach when we see what is happening in Colombia around CNs and how women-led CNs and women-centric CNs can make a huge difference. We have, Kaketay, you wanna say something? Yeah, sure, sure.

Speaker 1: Thank you so much for sharing this. I mean, it’s so nice to see that, you know, the community found these efforts useful. I just wanted to note, like, while I also believe the value and necessity of, like, organizing this capacity building efforts, I want to share that, like, we are also, like, benefiting so much from another type of a working group. We have it, like, in our region, like, for SSA Sub-Saharan Africa region, we have, like, women’s working group, what we call. And it brings women rights activists and also digital rights activists together and helps us to better understand the issues that they have in online platforms. And, like, thanks to those engagements that we had in those groups with them, we were able to, like, better understand the issues around, like, online gender-based violence, like, feminist rapid response services, and we were able to support them. And we were also able to go beyond just, like, women cause, but, like, together with them, we were also able to address the issues of the LGBTQI plus communities in online, like, in the issues that they have in the online platforms. So I, like, capacity building, definitely, but on top of that, I think, like, there is, like, when we think about a woman, like, there is a value in also investing in tech feminism and tech law and governance space, too. I just wanted to add that.

Waqas Hassan: Thank you. Thanks. for the intervention at KKT makes a lot of sense. Anika, I’m gonna take your question and then we have one from the audience.

Onica Makwakwa: Yeah, sure, well, thank you so much. So I actually have a question for Josephine. You know, we have an emerging divide amongst those who are actually already connected, right? So we’ve got the connected and the unconnected, but amongst those who are connected, we’ve got an emerging divide that’s really centered around the quality of the connection. And so it seems, I’ve sort of observed that whenever we talk about community networks as an infrastructure project to bridge some of these gaps, for the Global South communities, we tend to confine them to rural areas only, right? You know, so I mean, I think in Africa, for the most part, the model has been, it’s only in places where it has been deemed commercially not viable for the operators to provide connectivity. However, the very same mobile operators discriminate against users, especially in urban, viper, urban areas, because they tend to focus more on business clients as opposed to the huge prepaid market that pays extensively high rates to connect. My question is why, is there scope and opportunity to consider community networks beyond rural areas? And I’ll just kind of give you an example that New York City public Wi-Fi is the largest community-owned network that I know of, but I just kind of find it very curious that when it comes to Africa, Asia, and maybe even Latin America, we are told that the only way to have a community network is rural areas, so that competition that comes from community-owned networks is not allowed in urban areas, and it’s unfortunate because what it looks like is that yes, we need competition in terms of digital technologies, but we also need competition. in terms of a financial model that can, you know, service the diversity and inequalities that exist within urban sectors as well. And I just would love to hear your thoughts on that and if, you know, this would be a pipe long-term or not.

Speaker 2: It’s not a pipe demand. Thank you, Onika, for bringing that up because in our conversations with many regulators, there’s usually the issue that there’s a lot of pushback from mainstream operators who see community networks as competitors. And because of the power and the finances and the control that they have over most of the state, it sort of becomes a difficult conversation to have, whereas community networks are really small-scale operators that do not have the financial muscle to push back. And so in a way to appease, I will say in a way to appease the big operators, it’s usually that then regulators say, why don’t you go to underserved areas where they are not operational or where there’s no connectivity so that it doesn’t seem like you’re here to compete with the big operators. But then in the same breath, the expectation is that you will go to the hard-to-reach areas but still become commercially viable because whenever community networks are in the room, there’s always the question of, are you sustainable? Are you sustainable? But then even the large commercial operators are not going to these areas because they are not commercially viable. But what we are seeing is that it’s not just an issue of no access, but it’s also quality as you’re saying. A lot of the opportunities now for people who live in urban areas is digital work and that means that it’s very expensive to access. And we are seeing not just for community networks who are non-for-profit but. even the small scale ISPs, really growing and becoming, you know, forced to reckon with in many of the areas in terms of not just affordable, but also good quality service and the ability to provide good customer care service. So it’s definitely a time to really relook our regulatory frameworks, not just for non-for-profit entities and we are saying even for small scale ISPs, because there is room to serve people, they are locally available, but it’s just that regulation is still tight and it’s because other players are competing fairly. Thank you.

Waqas Hassan: Thank you, Anika. It’s a great insight and I’d just like to mention here that the gender inclusion strategy that PTA has, one of the working groups for access, it does talk about community networks and providing support to community networks. So in a way, this is another good example where it is not, I mean community networks may have been discussed by licensing or other departments, but it is part of the gender strategy which gives it more impotence that you know it is going to be women-centric empowerment technology. I’ve been told that we have four minutes, but I’ll take one last question before we close. Please.

Audience: My question is from Mem Malahat, who is representing PTA, that how PTA is working to plan a strategy for involving women from the rural areas in Pakistan, or specifically in the newly emerged district in Khyber Pakhtunkhwa, even where phones with the women is a taboo. So how will you bring them to the internet, although in Pakistan we have some 49% of the population from women. So how do you see the women inclusion on the internet, specifically in the rural areas, especially in the tribal districts of Khyber Pakhtunkhwa?

Malahat Obaid: Thank you very much for your question. Just to give you a background, when we started off with this strategy development process, Khyber Pakhtunkhwa, for my audience as well, is one of the provinces of Pakistan in which there are, you know, some social barriers are more as compared to the rest of the country. So yes, we included, while we were doing the consultative process, there was extensive consultation undertaken while we were developing the strategy and we considered the viewpoint of the local communities as well. We had them on board while we were discussing the issue, the females, the organizations that are working in that area were also on board. You can go through the consultation process that is already available, the outcomes of the process that is already available on our website, PTA website. There are organizations that I have just mentioned we are collaborating with and they are working in that specific area which you are talking of, the Khyber Pakhtunkhwa region. They will be working on providing the connectivity as well as, you know, conducting programs for digital literacy. So the strategy has a very holistic approach towards all the locations and the areas that are already connected and that still needs some access issues to be resolved. So with time, of course, the accessibility group which is being led by PTA will be considered, will be considering, you know, taking into account if there are still some areas or issues that are left or for that matter are not covered in our TORs and you are most welcome to follow. the process of implementation of the strategy. Thank you.

Waqas Hassan: Thank you. Thank you so much, Malahat, and thank you for your question. We are like almost out of time. I wanted to have a closing statement from each of the panellists, but I think probably on behalf of the panel, Anika, would you like to just close with the closing statement to just represent the panel?

Onica Makwakwa: Yes, certainly. Thank you so much. So I will just close by saying that there’s a lot of initiatives that are taking place to help close the gender digital divide, and I’m just very pleased to share with you that one such initiative is the Women in the Digital Economy Fund, which was launched earlier this year, an $80 million fund that is strictly focused on supporting and funding the scale-up of solutions that are focused on women-led and women-focused initiatives to close the gender digital divide. We currently have a round that is open for India only, so please go to YDEF, W-I-D-E-F dot global. I will put it in the chat as well for those who are online, and see it will be closing soon. We will have another global round that will open in March of 2025. I really hope to see many exciting applications, including from community networks, especially women-led, women-focused, so that we have an opportunity to help close the gender digital divide in the global majority world. Thank you. Thank you so much. And if you have any questions, Wakas is the regional lead for Asia, so please, if you are in the room, bombard him. And if you of course want to know

Waqas Hassan: more about policy recommendations and how to go about bridging the digital gender divide, there is a report that we have out here which very amply says that the time is now, right, so the time is now that we make all efforts. And it is absolutely possible, and it is absolutely necessary to make a… a meaningful difference in the situation of digital gender divide through inclusive policy making, through stakeholder consultation and by processes which are community centric. So with that note, I thank you all for being here, thank you to my panelists, thank you for people who joined us online and have a safe day. Thank you. Take care. Anika and Malat, would you stay on the screen for a minute so that we can take a picture of the panel. We can probably huddle around the screen. Okay, so can you look at the front, thank you. Thank you so much, Malat, Anika, thank you, take care. Thank you, thanks everyone. Thank you, Vakas, thank you, thank you Anika, bye bye. Bye. As-salamu alaykum. Walaykum as-salam. As-salamu alaykum. As-salamu alaykum. As-salamu alaykum. As-salamu alaykum. As-salamu alaykum. . . . . . . . . . . . . . .

M

Malahat Obaid

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Structured approach with clear implementation plan

Explanation

The Pakistan Digital Gender Inclusion Strategy was developed using a structured approach with a clear implementation plan. This includes working groups, a steering committee, and a three-year action plan with specific targets and goals.

Evidence

The strategy has six working groups, a steering committee headed by the Minister for IT and Telecom, and a three-year action plan with specific targets.

Major Discussion Point

Digital Gender Inclusion Strategies and Policies

Need for gender-disaggregated data to inform policies

Explanation

The lack of gender-disaggregated data was identified as a major challenge in developing effective policies for digital gender inclusion. This data is crucial for understanding the extent of the gender gap and informing targeted interventions.

Evidence

A working group on research and data was established as part of the strategy to address this issue.

Major Discussion Point

Digital Gender Inclusion Strategies and Policies

Agreed with

Onica Makwakwa

Agreed on

Need for gender-disaggregated data

Need for gender-responsive laws and legal frameworks

Explanation

The strategy emphasizes the importance of developing gender-responsive laws and legal frameworks to ensure women’s safety online. This is part of the broader effort to create a safe and inclusive digital environment for women.

Evidence

The Safety and Security Group, headed by the Human Rights Commission of Pakistan, is working on developing gender-responsive laws and legal frameworks.

Major Discussion Point

Online Safety and Security for Women

Agreed with

Cagatay Pekyorur

Josephine Meliza

Agreed on

Need for policies and frameworks to ensure women’s online safety

O

Onica Makwakwa

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Lack of access to affordable devices and internet connectivity

Explanation

One of the biggest barriers to digital inclusion for women in low and middle-income countries is the lack of access to affordable devices and internet connectivity. This limits women’s ability to participate in the digital economy and access online services.

Evidence

This finding is based on two publications: ‘Connected Resilience’ and ‘The Time Is Now’.

Major Discussion Point

Barriers to Digital Inclusion for Women

Social and cultural barriers limiting women’s access

Explanation

Social and cultural norms often restrict women’s mobility and access to digital technologies. These barriers persist even in the context of digital technologies and need to be continuously addressed.

Evidence

Examples include restrictions on girls’ and women’s mobility, which affects their ability to access services like public Wi-Fi.

Major Discussion Point

Barriers to Digital Inclusion for Women

Agreed with

Josephine Meliza

Agreed on

Importance of addressing social and cultural barriers

Lack of digital literacy and skills

Explanation

Many women in low and middle-income countries lack basic digital literacy and skills. This puts them at a disadvantage in terms of accessing digital opportunities and participating in the digital economy.

Major Discussion Point

Barriers to Digital Inclusion for Women

Economic barriers including lack of financial resources and time

Explanation

Women often face economic barriers to digital inclusion, including lack of financial resources to purchase devices and internet access. Additionally, the burden of unpaid care work limits the time women can dedicate to developing digital skills.

Evidence

Women are predominantly expected to fulfill unpaid care labor in most societies, limiting their time for digital skill development.

Major Discussion Point

Barriers to Digital Inclusion for Women

S

Cagatay Pekyorur

Speech speed

149 words per minute

Speech length

1356 words

Speech time

544 seconds

Importance of official policy frameworks and action plans

Explanation

Official policy frameworks and action plans that prioritize overcoming barriers to digital inclusion of women are crucial. These create incentives for private companies to focus on this area and develop programmatic efforts.

Evidence

Meta’s experience with various projects shows the benefit of such frameworks in encouraging private sector involvement.

Major Discussion Point

Digital Gender Inclusion Strategies and Policies

Policies and frameworks to ensure women’s online safety

Explanation

Meta has implemented policies and frameworks specifically designed to protect high-risk users, including women, on their platforms. These include policies on hate speech, sexual exploitation, and bullying and harassment.

Evidence

Meta has a safety center with a specific safety hub focused on women’s safety, and policies designed to protect women from issues like revenge porn.

Major Discussion Point

Online Safety and Security for Women

Agreed with

Malahat Obaid

Josephine Meliza

Agreed on

Need for policies and frameworks to ensure women’s online safety

S

Josephine Meliza

Speech speed

141 words per minute

Speech length

829 words

Speech time

352 seconds

Community networks as locally-driven solutions

Explanation

Community networks are small-scale, locally owned infrastructure providers that can effectively address digital inclusion in areas underserved by commercial operators. They can provide tailored solutions that understand and address local context and needs.

Evidence

Community networks have been successful in creating women’s circles for capacity building and addressing issues like online gender-based violence through in-person support groups.

Major Discussion Point

Approaches to Promote Digital Inclusion

Agreed with

Onica Makwakwa

Agreed on

Importance of addressing social and cultural barriers

Support groups to address online gender-based violence

Explanation

Community networks provide not just online support groups but also in-person support groups to address issues of online gender-based violence. This helps women who may be scared to use online spaces due to such violence.

Major Discussion Point

Online Safety and Security for Women

Agreed with

Malahat Obaid

Cagatay Pekyorur

Agreed on

Need for policies and frameworks to ensure women’s online safety

Value of digital policies that address inclusion at community levels

Explanation

Digital policies should address inclusion at the community level, particularly when it comes to allocating funds such as universal service access funds. This ensures that community-level needs and contexts are considered in digital inclusion efforts.

Major Discussion Point

Digital Gender Inclusion Strategies and Policies

W

Waqas Hassan

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Financing mechanisms to support inclusion initiatives

Explanation

Financing mechanisms are crucial to support digital inclusion initiatives, particularly for providing affordable access to devices. There is a need for more efforts in ensuring that devices are affordable, in addition to capacity building initiatives.

Evidence

The Women in Digital Economy Fund, an $80 million fund focused on supporting and funding the scale-up of women-led and women-focused initiatives to close the gender digital divide.

Major Discussion Point

Approaches to Promote Digital Inclusion

A

Audience

Speech speed

118 words per minute

Speech length

388 words

Speech time

197 seconds

Capacity building and skills training programs for women

Explanation

Capacity building and skills training programs are effective in promoting digital inclusion for women. These programs can cover various aspects including technical skills, content creation, and business skills.

Evidence

An example from Colombia where women receive training in technical issues, content creation, and financial and administrative skills for community networks.

Major Discussion Point

Approaches to Promote Digital Inclusion

Agreements

Agreement Points

Need for gender-disaggregated data

Malahat Obaid

Onica Makwakwa

Need for gender-disaggregated data to inform policies

Lack of gender data gaps

Both speakers emphasized the importance of collecting gender-disaggregated data to understand the extent of the digital gender gap and inform effective policies.

Importance of addressing social and cultural barriers

Onica Makwakwa

Josephine Meliza

Social and cultural barriers limiting women’s access

Community networks as locally-driven solutions

Both speakers highlighted the need to address social and cultural barriers that limit women’s access to digital technologies, with community networks seen as a potential solution.

Need for policies and frameworks to ensure women’s online safety

Malahat Obaid

Cagatay Pekyorur

Josephine Meliza

Need for gender-responsive laws and legal frameworks

Policies and frameworks to ensure women’s online safety

Support groups to address online gender-based violence

Multiple speakers stressed the importance of developing policies, frameworks, and support systems to ensure women’s safety in online spaces.

Similar Viewpoints

Both speakers emphasized the importance of structured, official policy frameworks and action plans to address digital gender inclusion.

Malahat Obaid

Cagatay Pekyorur

Structured approach with clear implementation plan

Importance of official policy frameworks and action plans

Both speakers highlighted the need for financing mechanisms to address the lack of access to affordable devices and internet connectivity for women.

Onica Makwakwa

Waqas Hassan

Lack of access to affordable devices and internet connectivity

Financing mechanisms to support inclusion initiatives

Unexpected Consensus

Community networks as a solution for urban areas

Onica Makwakwa

Josephine Meliza

Community networks as locally-driven solutions

While community networks are often seen as solutions for rural areas, there was an unexpected consensus on their potential value in urban areas to address quality of connection issues and provide affordable alternatives.

Overall Assessment

Summary

The main areas of agreement included the need for gender-disaggregated data, addressing social and cultural barriers, ensuring women’s online safety, structured policy frameworks, and financing mechanisms for digital inclusion.

Consensus level

There was a high level of consensus among the speakers on the key challenges and potential solutions for digital gender inclusion. This consensus suggests a shared understanding of the issues and a common direction for addressing the digital gender divide, which could facilitate more coordinated and effective efforts in policy-making and implementation.

Differences

Different Viewpoints

Scope of community networks

Onica Makwakwa

Josephine Meliza

My question is why, is there scope and opportunity to consider community networks beyond rural areas?

Community networks are small-scale, locally owned infrastructure providers that can effectively address digital inclusion in areas underserved by commercial operators. They can provide tailored solutions that understand and address local context and needs.

Onica Makwakwa questions the limitation of community networks to rural areas, suggesting they could be valuable in urban settings too. Speaker 2 focuses on community networks as solutions for underserved areas, implying a more rural focus.

Unexpected Differences

Overall Assessment

summary

The main areas of disagreement were limited, with most speakers generally aligned on the importance of addressing digital gender inclusion through various means such as data collection, policy frameworks, and community-based solutions.

difference_level

The level of disagreement among the speakers was relatively low. Most differences were in emphasis or approach rather than fundamental disagreements. This suggests a general consensus on the importance of digital gender inclusion and the need for multi-faceted approaches to address it, which is positive for advancing the topic.

Partial Agreements

Partial Agreements

All speakers agree on the importance of data and policy frameworks for addressing digital gender inclusion. However, they emphasize different aspects: Malahat Obaid focuses on gender-disaggregated data, Onica Makwakwa highlights the inadequacy of current data collection, and Cagatay Pekyorur stresses the importance of official policy frameworks to incentivize private sector involvement.

Malahat Obaid

Onica Makwakwa

Cagatay Pekyorur

The lack of gender-disaggregated data was identified as a major challenge in developing effective policies for digital gender inclusion. This data is crucial for understanding the extent of the gender gap and informing targeted interventions.

Lack of gender data gaps. You know, we know what we know now, but we know that it may be quite inadequate because we are not collecting gender-segregated data to be able to really understand how deep the problem is and where the interventions are most needed.

Importance of official policy frameworks and action plans

Similar Viewpoints

Both speakers emphasized the importance of structured, official policy frameworks and action plans to address digital gender inclusion.

Malahat Obaid

Cagatay Pekyorur

Structured approach with clear implementation plan

Importance of official policy frameworks and action plans

Both speakers highlighted the need for financing mechanisms to address the lack of access to affordable devices and internet connectivity for women.

Onica Makwakwa

Waqas Hassan

Lack of access to affordable devices and internet connectivity

Financing mechanisms to support inclusion initiatives

Takeaways

Key Takeaways

Digital gender inclusion requires structured policy approaches with clear implementation plans

Major barriers for women include lack of affordable access, social/cultural norms, digital skills gaps, and economic constraints

Multi-stakeholder collaboration between government, industry, and civil society is crucial for promoting digital inclusion

Community networks and locally-driven solutions can help bridge connectivity gaps, especially in underserved areas

Online safety and security measures are essential to ensure women’s meaningful participation in digital spaces

Resolutions and Action Items

Pakistan Telecom Authority to implement 3-year action plan for digital gender inclusion strategy

Women in Digital Economy Fund to open new funding round for India in 2024 and global round in March 2025

Unresolved Issues

How to expand community networks beyond just rural areas to also serve urban populations

How to effectively engage women from conservative rural areas where mobile phone use is taboo

How to balance commercial viability of community networks with serving hard-to-reach areas

Suggested Compromises

Allowing community networks to operate in both rural and urban areas to increase competition and service quality

Integrating support for community networks into national gender inclusion strategies

Thought Provoking Comments

According to estimates by GDIP and others, the countries have almost lost one trillion dollars just by not being able to bridge the digital gender divide. So it is not just a social issue or a social empowerment issue, it is actually now an economical issue as well.

speaker

Waqas Hassan

reason

This comment reframes the digital gender divide as an economic issue rather than just a social one, highlighting the massive financial impact.

impact

It set the tone for the discussion by emphasizing the economic urgency of addressing the digital gender divide, leading to more focus on policy and implementation strategies.

We came out with the strategy pillars and how to implement it. In the third phase, we set the targets and the goals and what would be the outcomes. Of course, and we came out with the action plan, which is a very, you know, it is although tough, but it’s a three-year action plan with specific targets and goals, and we are hoping that we will be, inshallah, able to manage it.

speaker

Malahat Obaid

reason

This comment outlines a structured, actionable approach to addressing the digital gender divide, moving beyond theoretical discussion to practical implementation.

impact

It shifted the conversation towards concrete strategies and timelines, prompting other speakers to discuss specific initiatives and programs.

The lack of digital literacy and skills, you know, foundational digital skills, really put women at a disadvantage in terms of even being able to acquire the necessary opportunities that exist in digital technologies.

speaker

Onica Makwakwa

reason

This comment highlights a fundamental barrier to digital inclusion that goes beyond just access to technology.

impact

It broadened the discussion to include the importance of education and skill development, leading to conversations about training programs and capacity building initiatives.

What community networks do is being able to hold spaces, which are women’s circles where you get to demystify first what technologies are, but also just develop a program so that it is capacity building, whether it is the service provisioning, that really understand the different concepts around the women need.

speaker

Josephine Meliza

reason

This comment introduces the concept of community networks as a grassroots solution to digital inclusion, emphasizing the importance of local context and women-centric approaches.

impact

It shifted the discussion towards more localized, community-based solutions, prompting questions about the applicability of community networks in different contexts.

We have an emerging divide amongst those who are actually already connected, right? So we’ve got the connected and the unconnected, but amongst those who are connected, we’ve got an emerging divide that’s really centered around the quality of the connection.

speaker

Onica Makwakwa

reason

This comment introduces a nuanced perspective on digital inequality, highlighting that access alone is not sufficient for true digital inclusion.

impact

It deepened the conversation by introducing the concept of quality of connection, leading to a discussion about the need for community networks in urban areas and not just rural ones.

Overall Assessment

These key comments shaped the discussion by progressively deepening the analysis of the digital gender divide. The conversation evolved from highlighting the economic importance of the issue to discussing specific policy strategies, then to addressing fundamental barriers like digital literacy. It further progressed to exploring grassroots solutions like community networks, and finally to examining nuanced aspects of digital inequality even among those with access. This progression led to a comprehensive exploration of the issue, covering economic, policy, educational, and community-based dimensions of digital gender inclusion.

Follow-up Questions

How can community networks be implemented beyond rural areas in developing countries?

speaker

Onica Makwakwa

explanation

This explores the potential for community networks to address connectivity issues in urban areas, challenging the current focus on rural deployment only.

How is PTA working to involve women from rural areas, especially in newly emerged districts of Khyber Pakhtunkhwa, where phone ownership by women is taboo?

speaker

Audience member

explanation

This addresses the specific challenges of digital inclusion for women in conservative rural areas of Pakistan.

How can we facilitate direct engagements between civil society organizations and private platforms to improve digital inclusion efforts?

speaker

Cagatay Pekyorur

explanation

This explores ways to enhance collaboration between tech companies and civil society to better address digital inclusion challenges.

What are effective ways to collect gender-disaggregated data to better understand and address the digital gender divide?

speaker

Onica Makwakwa

explanation

This highlights the need for more comprehensive data to inform policy and interventions aimed at closing the digital gender gap.

How can we develop and implement gender-responsive laws and legal frameworks to ensure women’s safety online?

speaker

Malahat Obaid

explanation

This addresses the need for specific legal protections to make the online environment safer for women.

What strategies can be employed to change negative perceptions about women’s use of technology in conservative societies?

speaker

Malahat Obaid

explanation

This explores ways to address social and cultural barriers to women’s digital inclusion.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

DCAD & DC-OER: Building Barrier-Free Emerging Tech through Open Solutions

DCAD & DC-OER: Building Barrier-Free Emerging Tech through Open Solutions

Session at a Glance

Summary

This discussion focused on building barrier-free emerging technologies through open solutions to enhance digital accessibility and inclusion for persons with disabilities. Speakers from various organizations highlighted the challenges and potential solutions in this area. UNESCO’s vision for digital accessibility and open content was presented, emphasizing the need for ethical and responsible use of AI and emerging technologies. The importance of involving persons with disabilities in the development of technologies was stressed to ensure their needs are met.

Speakers discussed the role of regulatory authorities in advancing digital inclusion, noting that while regulators may have limited powers, they can advocate for accessibility and advise governments on policy directions. The need for comprehensive competency frameworks and training for educators on inclusive digital education was highlighted. Learning Equality presented their Kolibri platform as a case study of an offline-first, open-source solution designed to provide accessible learning experiences in areas with limited internet connectivity.

The discussion emphasized the importance of a multi-stakeholder approach, involving policymakers, educators, developers, and persons with disabilities in creating inclusive digital environments. Challenges such as the lack of accessible open educational resources and the need for capacity building among teachers were addressed. Participants also stressed the importance of considering cultural, political, psychological, institutional, and professional aspects when implementing educational interventions for learners with disabilities.

Key takeaways included the crucial role of regulators in promoting accessibility, the importance of openness in digital solutions, and the need to consider both digital and non-digital factors in ensuring equitable access to education and information. The discussion concluded by emphasizing the importance of inclusive design and development of technologies to serve their intended purpose for all users.

Keypoints

Major discussion points:

– The importance of making emerging technologies and digital platforms accessible to people with disabilities

– The role of regulators and policymakers in advancing digital inclusion and accessibility

– The need for capacity building and training for educators on inclusive education practices

– The potential of open educational resources and platforms to support inclusive learning

– The challenges of implementing accessible technologies in developing countries

Overall purpose:

The goal of this discussion was to explore ways to build barrier-free emerging technologies through open solutions, with a focus on improving digital accessibility and inclusive education for people with disabilities.

Tone:

The tone was largely informative and collaborative, with speakers sharing insights from their work and research in digital accessibility. There was a sense of urgency around addressing accessibility gaps, but also optimism about potential solutions. The tone became slightly more critical when discussing implementation challenges, particularly in developing countries, but remained constructive overall.

Speakers

– Muhammad Shabbir: Coordinator of IDF’s Dynamic Coalition on Accessibility and Disability

– Tawfik Jelassi: Assistant Director General Communication and Information Sector, UNESCO

– Amela Odobasic: Director of Broadcasting Bosnia

– Mohammed Khribi: Digital Accessibility Services Acting Manager, MADA of Arab states

– Revanth Voothaluru: Global Implementation Project Manager of Learning Equality

– Zeynep Varoglu: Senior Specialist in the Information and Communication Center of UNESCO

– Judith Hellerstein: Co-coordinator at the IGF Dynamic Coalition on Accessibility and Disability

Additional speakers:

– Nicodemus Nyakundi: Fellow for the Dynamic Coalition on Accessibility and Disability

– Itzel: TICET fellow

Full session report

Building Barrier-Free Emerging Technologies: A Comprehensive Discussion on Digital Accessibility and Inclusion

This discussion brought together experts from various organizations to explore ways of building barrier-free emerging technologies through open solutions, with a focus on improving digital accessibility and inclusive education for people with disabilities. The conversation was informative and collaborative, with speakers sharing insights from their work and research in digital accessibility.

Key Challenges in Digital Accessibility

Muhammad Shabbir, Coordinator of IDF’s Dynamic Coalition on Accessibility and Disability, emphasized the lack of consideration for accessibility in technology development and stressed the need for universal design principles. He shared a personal example of encountering inaccessible VR technology, highlighting the challenges faced by people with disabilities in emerging tech environments. Shabbir also stressed the importance of dialogue between regulators and persons with disabilities to address these issues effectively.

Tawfik Jelassi, Assistant Director General Communication and Information Sector at UNESCO, pointed out the specific challenges in making AI and language models accessible. He presented UNESCO’s vision for digital accessibility and open content, emphasizing the organization’s efforts to advance inclusive education through open solutions. These efforts include developing guidelines for inclusive digital learning and promoting open educational resources.

The Role of Regulators and Policymakers

Amela Odobasic, Director of Broadcasting Bosnia, brought attention to the importance of involving persons with disabilities in technology development. She argued that while regulators may have limited powers, they can play a crucial role by advocating for accessibility to policymakers and implementing accessibility provisions within existing frameworks. Odobasic also noted the need for legal mandates to address new technologies like AI.

Open Educational Resources and Platforms

Mohammed Khribi, Digital Accessibility Services Acting Manager at MADA of Arab states, discussed MADA’s work in Qatar, including their digital accessibility services and training programs. He presented the development of an ICT accessibility competency framework and emphasized the importance of the DARE Index (Digital Accessibility Rights Evaluation Index) in assessing countries’ progress in digital accessibility.

Revanth Voothaluru, Global Implementation Project Manager of Learning Equality, presented their Kolibri platform as a case study of an offline-first, open-source solution designed to provide accessible learning experiences in areas with limited internet connectivity. He detailed the platform’s features, including its ability to work offline, support multiple languages, and provide a range of educational content. Voothaluru also discussed the implementation of Kolibri in various contexts, highlighting its potential to bridge accessibility gaps, particularly in developing countries.

Teacher Training and Capacity Building

The discussion revealed a significant need for capacity building and training for educators on inclusive education practices. Khribi stressed the need to integrate accessibility courses in teacher education curricula and emphasized the importance of continuous training for in-service teachers. Voothaluru highlighted the potential of using technology to support differentiation and personalization in education, particularly in large classrooms where individual attention is challenging.

Systemic Approach to Inclusive Education

Voothaluru stressed the need to consider cultural, political, psychological, institutional, and professional aspects when implementing educational interventions for learners with disabilities. He referenced Dr. Fernando Rimas’ framework for a systemic approach to inclusive education, emphasizing the importance of addressing multiple factors beyond just technology.

Audience Engagement and Unresolved Issues

The audience raised important points, including the need to engage open-source developers in creating accessible solutions and ensuring basic education access for students with disabilities before focusing on technology integration. These comments highlighted the complexity of implementing inclusive education and the need for a multi-faceted approach that considers various perspectives and local contexts.

Key Takeaways and Conclusion

Zeynep Varoglu summarized key takeaways, emphasizing the importance of multi-stakeholder collaboration, the need for continuous capacity building, and the potential of open educational resources in advancing digital accessibility. Judith Hellerstein, co-coordinator of the IGF Dynamic Coalition on Accessibility and Disability, reiterated key points from the audience questions, including the importance of basic education access and teacher training.

The discussion concluded with several thought-provoking comments that shaped the conversation. Jelassi’s framing of information, openness, and accessibility as public goods deserving of public support provided a compelling rationale for government involvement. Odobasic’s quote from a representative of persons with disabilities challenged the perception of accessibility as a ‘special needs’ issue, reframing it as a matter of equal rights and universal design.

In conclusion, the discussion provided a rich, multifaceted exploration of the challenges and potential solutions in building barrier-free emerging technologies. It highlighted the crucial role of regulators in promoting accessibility, the importance of openness in digital solutions, and the need to consider both digital and non-digital factors in ensuring equitable access to education and information. The conversation underscored the complexity of the issues at hand and the need for continued dialogue and collaboration among various stakeholders to create truly inclusive digital environments.

Session Transcript

Muhammad Shabbir: Hello, and good morning, everyone. I am Muhammad Shabbir, the coordinator of IDF’s Dynamic Coalition on Accessibility and Disability. And I welcome you in the first session of this webinar. I am Dr. Mohammad Shabir, the coordinator of IDF’s Dynamic Coalition on Accessibility and Disability. And I welcome you in the session, Building Barriers, Free Emerging Tech Through Open Solutions, jointly organized by the Dynamic Coalition on Accessibility and Disability and the Dynamic Coalition on Open Educational Resources. I am thankful to the Internet Governance Forum for the opportunity and also to the team who have worked with me to organize this session. Just a couple of housekeeping rules. There aren’t many. First, we have amongst us some speakers. We will talk about different issues. Each speaker would have about 10 to 12 minutes for their early or initial intervention. Then we would come to the hall and online for the participants if there are any questions. People can address the question to a specific speaker or make general interventions as well. And then we would have the wrap-up by the moderators. and we would have a wrap-up, and in the end, there would be a vote of thanks. So to start with, I would invite our first speaker, Dr. Tawfik Jelassi, Assistant Director, Journal Communication and Information, Sector UNESCO. Dr. Jalassi will speak about overview of UNESCO’s vision for digital accessibility and open content. Dr. Jalassi, over to you. No, it’s okay, okay, okay, thank you.

Tawfik Jelassi: Good morning to all of you. Thank you for coming to this session, and let me also thank our moderator, Dr. Shabbir, but also Dr. Henrich Stein of the Dynamic Coalition on Accessibility and Disability for co-organizing this important session in the context of IGF 2024. I’m very pleased with the topic that was selected, Building Barrier-Free Emerging Technologies Through Open Solutions. Clearly, this is a very timely topic, especially in the context of IGF, the multi-stakeholder approach that characterizes this global forum, but also I think it’s a timely topic in today’s digital environment. I’m sure that the session will explore open solutions and technologies, in particular, to help barriers and foster inclusive digital. spaces. We all know that emerging technologies such as artificial intelligence and generative AI are drastically impacting the way we approach education. When we pair open educational resources with these technologies, the impact can only be transformative. What’s us? What type of learning value can we deliver to pupils and to students? And here I can say something from my more than three decades being a professor and dean and then minister of higher education. I think the new technologies of today with open educational resources give us the opportunity, maybe unique opportunity, to deliver personalized learning value so learners can adapt to the content, to their pace, to their style, sometimes even choosing their preferred language to get access to knowledge. The second key element beyond personalized learning is enhancing accessibility. We can here think as an example of visually impaired students. How can they navigate textbooks? I can help translate text speech and facilitate graphics. This is very important in terms of accessibility, especially for persons with disabilities. The third, I think, major transformative dimension is how to expand localization. And when we talk about today’s world, of course, it’s borderless, it’s global, but also I think technology can help us have relevant value-added content that is culturally relevant. And this is very important nowadays. However, this technology, especially when I say artificial intelligence, tomorrow quantum computing, they have to be used in a responsible and ethical manner. And I want here to mention the landmark recommendation by UNESCO back in 2021 on the ethics of artificial intelligence, a recommendation being currently implemented by more than 60 countries worldwide. So the ethical use of AI is a major challenge for everybody. We need to combat existing biases. If I take the gender-related biases, you know that in some of these gen AI large language models, there are many biases that depict women in domestic roles, and there is an association of women with family, with children, with households, while men related to men are more linked to business career. So obviously what we have seen from our studies at UNESCO is that large language models in generative AI not only replicate online the gender biases that exist offline, but they even amplify them. It’s obviously a very dangerous. Second dimension besides the gender bias is that we were reminded two days ago at the opening of this activity, and therefore they don’t benefit from any digital literacy. This is obviously a major challenge as well. The third is representation in AI systems. 40% of the world’s population lacks access to education in their native language, and therefore they are being excluded. very important and we need to tackle it as well. So these are challenges but of course we have to take stock of some accomplishments, whether it is in terms of enhanced accessibility, whether it is in terms of access to open educational resources which are universally available through digital platforms, and here I want to mention a major outcome of the UNESCO Third World Congress on Open Educational Resources that took place last month in Dubai, and the Dubai Declaration which was endorsed at the end of this major event very much calls for a commitment to advancing inclusive education through open solutions, and this is very important I think to take stock of. This is in line with the 2019 UNESCO recommendation on Open Educational Resources by 193. So a pledge was made in Dubai last month to increase the reach of inclusive education platforms by 25% by 2030, so this is a ambitious goal but hopefully through collective efforts we will achieve it. And let me mention here also the UNESCO revised guidelines for people with disabilities in online and distance learning. These revised guidelines offer a comprehensive roadmap to create open educational resources and digital platforms that can serve diverse needs of learners. Let me try to conclude here by saying that this session this morning is for a range of ideas. ideas, hopefully it is a springboard for change. It’s all in this context. Us as educators, we can advocate for open education resources that are tailored to local needs. Policy makers can ensure internet connectivity, bridging the digital divide, especially in areas. And thirdly, developers can design technologies from the outset. Let’s recall Joseph Stiglitz, who said, information is a public good. And as a public good, information needs to receive public support. I think the same is true for openness and accessibility. And this requires, obviously, a collective commitment by all. Let me assure you that UNESCO is unwavering in its mission to ensure that no one is left behind in the digital age. We should, together, seize this moment, not just to envision change, but hopefully to make it happen. Thank you.

Muhammad Shabbir: Thank you, Dr. Tawfik Jelassi, for these welcoming remarks, as well as the enlightening vision of UNESCO and how UNESCO is contributing in making digital environments accessible through open resources. Our next speaker, Lydia Best, was supposed to be online. But unfortunately, we received a message from her this morning that she fell ill, so she cannot contribute. So we wish her best and good recovery. Next, I will be speaking about challenges and solutions into addressing accessibility barriers in emerging technologies. And this is a topic when, as I, myself, a person with disability, encounter and interact with emerging technologies, there are a number of challenges that come in the way. And it is really unfortunate that the technologies that are coming up these days, they carry a huge potential to facilitate persons with disabilities. But due to certain barriers in the development, in the planning or execution of those solutions, those emerging technological solutions, that when they come to people with disabilities, they encounter certain barriers. And before I move forward, I would like to give a personal example. Some years back, I happened to encounter a wonderful VR solution, headset with some specific solutions. But when I tried to use that, it was, we found that it was only applicable or activatable through vision or touch. And it did not have any visual features. I’m not sure if the latest VR or AR systems, they do come with these kinds of assistive technologies or solutions. But it was about two or three years back. I believe it was in the end of 2021. So we might have those solutions. But unfortunately, some sort of developers, when they start developing solutions, they either forget, either are unaware of, or sometimes they feel it convenient. disregard the accessibility considerations. There comes a lot of sessions like this, and this session is a remembrance that disability, as UNCRPD, the United Nations Convention on the Rights of Persons with Disabilities, says that disability occurs when impairments in persons interact with the societal barriers. And this way, I would say disability is not specific to me or any specific group. It is cross-gender, cross-geographic boundaries, cross-race, cross-religion, and cross the boundaries of developed and developing world. So, any accident, any natural or man-made disaster, or any illness, just by passing off time as we age, this disability can catch us. So, whether you are a policymaker, whether you are a developer, where decisions are being made, you need to ensure that the technologies that are being developed are developed inclusively, and inclusively following the design of universal access or universal design, so that when, if today, we need it tomorrow, it may happen, we won’t push it on anyone, but it may happen that you may need it, and you may find that technology was inaccessible, and the time has passed to take the decisions. There are AI technologies in the system where we are using the technologies, but persons with disabilities are found neglected in the development of those technologies. JGBT and other LMS systems, they certainly provide certain accessibility issues. I would not talk about their biases against disabilities. That’s another topic and not the subject of this session, but we need to consider how they interact with people with different kinds of disabilities. There aren’t any sign language interpretation, for instance, coming with these kinds of solutions or platforms. Similarly, when different banks and different financial institutions, they develop their applications. They develop in a way that makes them secure. They make the websites and the apps inaccessible for people with disabilities. Same is the case with the learning management systems, LMS. I have encountered a number of LMS in Pakistan that are provided through international providers and the local providers as well. That when the students and teachers with disabilities, when they interact with those kinds of technologies and LMS, we found that those technologies were developed without considerations of people with disabilities in mind. So what is the solution then? The question comes to mind. What is the solution? The solution is definitely, number one, the developers need to be aware of the standards that are internationally available to make the solutions and platforms accessible for people with disabilities. such as Web Content Accessibility Guidelines, the stable version is 2.1. And more as we move along and more and more technologies are coming out, these standards are also being evolved. So the developers need to know this. The policy makers need to consider the policies that the development phase, the research phase and the execution phase. All phases include persons with lived experience of disabilities and testing them that these developments and technologies are being developed, accessible and inclusive for everyone. I will stop here and give the cue to the next speaker who is online again. And we will keep discussing this and more topics related to accessibility. But my next speaker is Amila Odubesic, Director of Broadcasting Bosnia. And Amila shall be speaking about regulatory frameworks and policies for accessible digital technologies. Amila, the floor is yours.

Amela Odobasic: Thank you very much, Dr. Shabir. Greetings to all of you from Bosnia and Herzegovina. I would have preferred to be in Riyadh with you, but unfortunately I was not able to. So let me first say that I’m extremely sorry that Lydia Best could not join us because she is such an expert in this field. But however, Dr. Shabir made an excellent introduction into the topic. Let me first say that I am not going to talk only from the perspective of the regulatory authority, but I’m also a co-rapporteur for the question on ICT accessibility for persons with disabilities that is being discussed within the International Telecommunication Union. And I have been involved in the topic for the last, well, since 2014, so it’s quite a long time. So I will definitely touch upon and tell you on what are the biggest challenges when it comes to policy makers and their efficiency or lack of efficiency in the area of creating policies, legal and regulatory framework. But I will also, if you allow me, refer to some of the global practices and perhaps convey what are the biggest challenges that members of the International Telecommunication Union are facing. So, as was previously pointed out, there is no doubt that in our contemporary era, as the digital revolution continues to gain momentum, the profound global impact of information and communication technologies is undeniable across all sectors. And this is something that within the question that I just mentioned, at the International Telecommunication Union, we always stress that the topic on ICT accessibility. disability cannot be singled out. We cannot look at this topic in silos. It should be looked at within a holistic approach. So, Dr Shabir already touched upon the challenges that persons with disabilities are facing. And they’re numerous, believe me, whichever country you look at. So, at the same time, you see, it’s quite interesting to see when, for example, during our meetings at the ITU, we have in the same room at the meetings, representatives of persons with disabilities as well as the representatives of the policy makers, the ministries, the regulatory authorities, other governmental topics. We also have industry. We also have a disabled society. We also have academia, etc. And it’s really, it’s quite, it’s not easy for us who are coming from the policy makers’ area, from the government’s area, really, to face all the challenges that persons with disabilities are facing. And as I said, I mean, there are many. First of all, the biggest challenge that persons with disabilities are facing in the, especially even, I mean, 10 years ago, that the barriers were even more solid and bigger. And that has considerably changed within this period. Nowadays, persons, even persons with disabilities, they became, the associations, I mean, they became more organized, more up front. For example, in Bosnia and Herzegovina, in particular, we encouraged associations of persons with disabilities to be a little bit more, a little bit more, you know, more organized, to be a little bit more, you know, more, you know, loud in advocating for their goals and in advocating for the, you know, going towards the government and really demanding that their needs have been made. And I will just say that during one meeting, you see, we always quite like to refer to this topic that persons with disabilities have their specific needs. And I will always remember that one of the representative or one of the associations of persons with disabilities said once, he said, well, look, our needs are the same as yours, okay, who do not sort of like fall into the category of the persons with disabilities. So we do not have specific needs, our needs are the same to have access to information, access to communication, et cetera. And this is the only thing that we are really asking for. So the first step, persons with disabilities, as I said, I mean, the government, not only the persons with disabilities, but even the government should be open to have a dialogue, to listen to the problems that persons with disabilities do need and the problems that they have and try to do their best in order to accommodate them. However, you see, we always think that the governments are very sort of like closed bodies, they’re closed authorities that are not allowing access to external parties. For instance, we at the Regulatory Authority of Bosnia and Herzegovina, I personally, because I gained all the knowledge at this working group of the International Telecommunication Union from the another lady, Andrea Sachs, and many, many other experts.

Muhammad Shabbir: So, I was personally very adamant that the government, first me as the regulatory authority and then the policy maker, should listen to persons with disabilities. So, the most important thing is not only to have that dialogue, but the most important thing is really to implement, to fulfill all the preconditions in order to make services accessible. Available and affordable, most importantly, to persons with disabilities. So, my personal view is that I always insisted that the implementation of the guidelines, of the standards, of good practices, and there are plenty of them and they’re all outlined in the report that are available on the ITU website under the Bureau of Development, in the Bureau of Development section. So, there are so many good practices that can be replicated, that can be adjusted to our environment, you see. So, there is the governments, for example, the policy maker, we create regulatory framework, excuse me, but what are the governments usually going to say? For example, now we are talking about artificial intelligence. The government is going to say, well, sorry, we can’t really, for example, in Europe, there is a body, there is the European Union, countries, members, Bosnia and Herzegovina and other six countries all together from our region, we are not members of the European Union. However, at the level of the European Union, there is a very distinct legislation that is being developed and implemented in the countries of the European Union. So, in the countries that are not members of the European Union, Union the governments will always sort of like find excuses and they will say well look we are not members so therefore we cannot implement because it’s not obligatory in the sense that let’s say European Union is going to follow and monitor our work and this is basically the role of the regulators authority that is crucial because regulatory authorities although they government linked to the government but we are expert bodies okay our job is really to follow what is happening internationally to see what are the good practices to talk to all the interested parties in this case with persons with disabilities and to try to do our best to through the development of the regulatory framework okay to implement the provisions and to impose certain obligations okay I mentioned in the light of the artificial intelligence at the level of the European Union for instance there is the the law on artificial intelligence has been adopted and put in force as of May this year so inevitably we as the regulator we can follow what are the provisions that are referring to the digital inclusion all together because we are also I mean in this as we said in this global digital revolution we are moving away from singling out only ICT accessibility but putting it in the context of the digital inclusion because that’s the only way you know that that we somehow respond adequately to this cross-cutting topic okay so for example the in our country and in many others like in many other countries in Europe so what we can do for now we can follow the development we can follow the good practices and see if there are any provisions that we can already put in our regulatory framework. Inevitably for that, the regulatory authorities should have legal mandate to do that. However, in most of the countries we do not have legal mandate because even artificial intelligence is such a brand new topic for the regulatory authorities to deal with. However, what the regulators could do, they could follow the topic, they could see what are the good practices and then they could perhaps develop some recommendations

Amela Odobasic: or guidelines for our licenses wherever relevant and try to sort of like impose that as a non-obligatory measure somehow, which can maybe a little bit contradictory. But in that way, what the regulators could do, they educate, they raise awareness, they share knowledge and then at the same time they encourage, let’s say, their licenses to get more involved in the topic. However, at the same time, the regulatory authorities could establish a dialogue with policy makers and then they could advocate that certain provisions should be put or the government should create the laws and certain provisions should be put in laws and encourage the government that in the process of public consultations, persons with disabilities and other interested parties are involved in this process, that their comments are very clear and to make sure that they are implemented in the best possible way. So, this as we could see, this is all. process. Okay. However, it may look, it may look as a very complicated process. Okay. But still, it is possible to do a very specific, to have a very specific results in practice. Okay, I’ll tell you when I first, when I was first, I started to get involved with this topic, when I came back to Bosnia and to my, to the regulatory authority where I work, I was, I was literally confused. And I was sort of like thinking, how can I make the first step? Okay. And a few years later, we managed to have the full, for example, I, we try to identify what is the biggest challenge. Yes, you need to wrap up, please. Okay. So I will cut across with the, I will cut, cut out this practical example. So let me just wrap up and to say that, yes, this, the topic is very challenging for the, for the government. However, we, I believe that all stakeholders, as, as Dr. Shabir already pointed out, should join in their efforts. Okay. Advocacy efforts are extremely important. We should look at it as a cross-cutting topic, and we should really advocate towards the policy makers in this sort of like joint way. So I’ll stop here. If there are any questions later on, I will be available. Thank you.

Muhammad Shabbir: Yes. Hello. hear me? Okay. Thank you very much, Amila, for your great intervention. Surely, our audience and I do have some questions to ask you. But the unfortunate responsibility as a moderator includes to cutting across speakers when they are exceeding their time limit. Our next speaker is from Qatar, Dr. Mohammed Khribi. He is the Digital Accessibility Services Acting Manager, MADA of Arab states. And he will be talking about innovations in accessibility services in the Arab states region. Dr. Mohamed, the floor is yours.

Mohammed Khribi: Okay. Thank you so much. Good morning. Hello, everyone. It’s a pleasure to be here at the internet in the IGF forum 2024. You know, 20 years after I first participated in the OASIS summit in Tunis back in 2005. You know, I participated in the summit representing at that time my university, the virtual university of Tunisia. Today, I’m honored and pleased to represent and privileged to represent the organization to which I belong, MADA, or Qatar Assistive Technology Center, where I work as acting director of the Digital Accessibility Services. But you know, I’d like to like define myself informally, just saying that I’m a passionate advocate for open and inclusive digital education for all. Initially, I have prepared the presentation to shed light on MADA’s contributions to bridging the digital accessibility gaps. But you know, the setting, but it doesn’t, you know, fit to the setting to our panel yesterday. So I’m going to just talk rapidly about my organization and shed light on, you know, some of the MEDA flagship projects in terms of digital accessibility, with the ultimate goal to empower people with disabilities accessing technology in order to live independently and participate in all aspects of life. But let me first get back to what has been said by Dr. Mohamed, actually when he explained it and he focused on how we need to address disability and how to focus on the accessibility barriers that prevent people with disabilities to access technology. So you know disability is often misunderstood, or let me say it’s not only defined from the medical or the charity or the special needs perspectives. In my view, I think we need to focus more on the interactions between persons with disabilities and the barriers preventing them to, you know, avail all digital services and opportunities. And our work is to enhance access for them, is to, I will not say remove these barriers, but at least reduce these barriers. So this is what we are doing in our organization at Qatar Assistive Technology. We are trying to enhance ICT accessibility in Qatar and beyond. Let me rapidly say a few words about Mada. So, Mada, it’s a non-profit organization founded in 2013. At that time, under the Ministry of Communication and Information Technology, now we shifted like two years ago to the Ministry of Social Development and Family. As I’ve previously said, we are focusing on enhancing ICT accessibility for persons with disabilities in Qatar and beyond. And we are working closely with all the stakeholders involved in the field of digital accessibility in order to innovate and to create and develop and offer innovative digital solutions for all. We are offering in Mada a wide range of digital accessibility services, as well as programs and activities. We are conducting a research agenda dealing with ICT accessibility and assistive technology. We are leveraging emerging technologies, like artificial intelligence, in order to develop digital solutions for people with disabilities. Let me talk a little bit about the digital accessibility services that we are offering. Basically, the services are around three pillars. The first pillar is the ICT accessibility services. So, we are partnering with local entities, whether governmental or from the private sector, to enhance the accessibility of their existing digital platforms. websites, web-based applications, kiosks and ATMs, mobile applications, etc. So we are counseling, like consultation sessions with these users, in order to let them make their digital platforms accessible. We are preparing auditing reports in order to check these websites and help them to make these websites accessible. We are offering accreditation services to these users, so that we can make sure that their solutions are fully accessible for persons with disabilities. We are offering also assistive technology services. We have assistive technology assessors that are making assistive technology assessments for persons with disabilities in order to identify which assistive technology solutions or devices that fit better their specific needs. And based upon the assessment, we ensure the provision of assistive technology devices and solutions for persons with disabilities. Based upon our internal policy of AT provision, and also based upon key priorities, like the areas, key strategic sectors that we are focusing on, for example, the education sector, the employment sector and the community sector. We are offering also a one-on-one training session for persons with disabilities to help them use the assistive technology that we have provided them with. We also offer continuous support. for them in order to make sure that the assistive technology devices and solutions are kept to their needs. And last but not least, we are offering also training and capacitive services, not only for persons with disabilities, but for all the stakeholders in order to foster the ecosystem. So we are delivering sessions for, like, teachers from the education sector, from the Ministry of Education, from universities, and also for web development in order to make sure that we are developing and designing digital solutions that are fully accessible and aligned with the standards of accessibility. Our training services are delivered, like, in different training modalities. We have face-to-face training workshops, we have online training courses, and we also conduct, like, blended learning experiences through our mega-academy initiative, based on, like you said, Dr. Muhammad, the learning management system that we’ve developed in order to even cater to the needs of everyone, including persons with disabilities. The second part, I’d like to shed light a little bit on the academy projects that are happening in the U.S. The first pillar is dealing with… The second is the… accessible, open and accessible training materials. So first of all, I would like to take this opportunity to record that those of my studies that I have been involved in, the key findings of this study are that there is a lack and there is a lack in terms of ICT accessibility and also there is a lack in terms of accessible open educational resources and there is like no existing common competency framework that covers all the required competencies around the topics of ICT accessibility and input design. My mic is not working. There is an interruption, right? Okay. But let me first maybe recall the main motives that drive us, you know, working on this, on this, let me say, proposals. You know, the Data Index, the Digital Accessibility Right Evaluation Index, it’s a benchmarking tool developed by G3 ICT organisation, you know, initiative of inclusive ICT organisation, that aims at tracing the progress of countries in terms of offering accessible, digital accessible services. In the Data Index, the edition 2020, Qatar has been ranked first. with a score of 89 out of 100. However, there are many domains in the Derandex that needs more endeavor and more work in order to enhance access for people with disabilities to these different sectors, like the ICT and education from oil sector. And the key findings of the Derandex report is that there is a lack of ICT accessibility competencies and expertise all over the world. There is also a lack in ICT accessibility courses. This means that students basically, in the field of the major, in the discipline of computer science or IT, continue to graduate without having any competencies or skills in the field of digital accessibility. Also, employees that want to build their capacity in the field of digital accessibility, visibility and inclusion, they cannot find professional training or education services to learn more about these topics. Okay, and the most important thing is that the colleges of education want to include in their curriculum topics around inclusive digital education or digital accessibility to let them be able to create and develop accessible digital content. So, based on these key findings, we proposed the ICT App Competency Framework, which is a comprehensive competency framework covering all the required competencies in the field of ICT accessibility. So, there are six competency domains in this competency framework, dealing with how to create accessible digital content, how to create accessible web content, how to become familiar with visibility and accessibility, and other competencies. So, from the one to the six, there are six competency domains. Then, we developed also a common repository hosted on the OER Commons platform. It is called ICT App OER Competency Framework, in which we are gathering all the accessible open educational resources around the themes of ICT accessibility and inclusive design. And we are using these open educational resources to conduct training workshops and to provide continuous online learning. experiences. OK, I got to stop now, I think.

Muhammad Shabbir: Yes, thank you very much, Doctor. And it is indeed a pleasure listening to your work and the kind of activities that you have been doing. It’s Mada Arab organization is doing a lot of wonders in the region. At least I was not aware of this work, so it was really enlightening listening to you. So the next speaker, ladies and gentlemen, is Revanth Voothaluru. I’m sorry if I’m pronouncing the name wrong, so please forgive me for that. Revanath is Global Implementation Project Manager of Learning Equality and shall be speaking about Open Content Platforms for Inclusive Education, the case study and insights. Revanath, the floor is yours. Revanath is online.

Revanth Voothaluru: Thank you so much, everyone. It’s wonderful to be a part of this. And sorry I couldn’t join in person. I’ll very quickly start sharing my screen. Wonderful. Good afternoon to everyone once again. And my name is Revanth Voothaluru, and I am joining from Bangalore, India today. I’ll be talking to you about how my organization, Learning Equality, is creating barrier-free emerging technologies through open solutions with a specific focus on equity, inclusion, and accessibility. The colleagues who have spoken before me have extensively covered about the challenges that exist and have discussed many different ideas. I’ll be talking about how some of this specifically comes alive in the work that we do at Learning Equality. Yeah. So I think in the world today, there are 2.6 billion people who remain offline and are unable to participate in the digital learning revolution. And more than 70% of the learners are unable to read. even a basic text. And this paints a stark picture of the global learning crisis, right? And unfortunately, the learners who are most affected by the crisis are often the ones who also lack access to digital resources. And what that does is it further widens the gap in learning opportunities for these learners. While we know that tech is not a silver bullet solution, we believe that it’s a strong means for addressing some of these gaps. And I think, yeah, and we build at Learning Equality, we build and maintain Colibri, an open source software solution designed to provide offline first teaching and learning experiences. Colibri is free to use and openly licensed and is equipped with over 200,000 open educational resources or OERs as a part of our library, which covers a wide range of subjects and learning needs. And for those of you who may not know what open educational resources are, these are openly licensed materials that can be reused, redistributed, and even repurposed depending on the license. And Colibri serves as a platform that hosts such resources. We also provide support to educators through our platform to differentiate learning and personalize learning. And there’s also features to collect granular data from the learner’s performance, which will further help in facilitating for differentiation. And all of this is enabled by a comprehensive do-it-yourself toolkit with detailed guidance materials to empower individuals and organizations to implement Colibri independently without relying on Learning Equality support. And Colibri is versatile and adaptable, working with a wide range of hardware models, all the way from older and low-cost devices like Raspberry Pis. And it also supports diverse pedagogical approaches, including self-learning, group-based learning, whole-class instruction, while blending technology into the learning environment. Aligned with a focus on equity, we also ensure that our products are compliant for people with disabilities. We work to continually improve the user experience for everyone while adhering to the relevant accessibility standards, and we keep adding new features consistently. The Colibri Learning Platform is partially conformant with WCAG 2.1 Level AA. While the platform is accessible, many OERs still do not adhere to accessibility standards, and I think Dr. Shabbiri spoke about this when he was presenting. Most of the OERs do not adhere to accessibility standards, which prevents them from being useful for learners with disabilities. As one of my colleagues says, it’s like you build a big door, but nobody can get in from there. That’s what it feels like when you work with Colibri and make it friendly for users with disabilities. But the content that is inside is not thoughtfully designed, and that’s a challenge that we often face. And in addition to that, Colibri also supports use of assistive devices, but many of these devices are cost-prohibitive in underserved communities where we work in. And some of these challenges continue to hinder our efforts in equitable access to learning materials, but we keep doing everything that we can to make this more and more accessible for learners with disabilities. And some examples of what this looks like is to ensure accessibility, we focus on multiple features, such as making all text functional, screen reader compatible, and resizable up to like 200%, while some videos also include sign language and captions. And links that are included in Colibri are designed clearly to indicate the purpose that they serve. And all of these features make Colibri beneficial for a range of contexts and learners around the world. And through our organic adoption and strategic partnerships, learning equality reached over 10 million learners across 220 countries and territories. And Colibri also adapts to various implementation models, depending on the unique context and needs of learners, right? Colibri can be implemented in several ways. For self-paced learning through an application. for group settings, enabling collaborative learning, and hybrid learning models as well where learning occurs across multiple different locations. For example, learners can visit a central location connected to a Colibri server, receive lessons and quizzes chosen by an educator, and then continue learning independently at home. And when they’re back at the central location, their data seamlessly syncs, so an educator can make informed decisions for learning journey, all without the internet. And this flexibility showcases the power of thoughtfully designed technologies, ensuring that the products are tailored to the real-world challenges faced by learners and educators in underserved communities. And additionally, to touch upon a little bit around our work with emerging technologies, to ensure that the technology can be used meaningfully, we believe that it’s crucial to make the relevant quality materials available as well. And we have been leaning into a new process that leverages advancements in generative AI and machine learning algorithms based on years of data collected through manually organizing digital content to curricular standards. As a result, we have developed a new holistic process that creates sets of curriculum aligned to digital resources by dramatically reducing the resources needed for an otherwise labor-intensive and knowledge-intensive process. And we’ve been piloting this across three countries and two languages. And in one of the projects that we did, we successfully mapped 6,500 content items to over 2,000 learning objectives and reduced the time spent for this process from months to a couple of days. And to quickly, before wrapping up, I wanted to share an example of what Colibri usage can look like in a school setting. So let’s say there’s a student, imagine that there’s a student called Angela, a student at a school where internet access is limited or unavailable. Angela’s school administrator receives a USB key preloaded with Colibri and digital learning resources. And they install that on a school’s existing laptop, which is then used as a class server. And Angela accesses the pre-curated content on a tablet, exploring lessons aligned to her curriculum. And as she progresses, her teacher can view a detailed report that highlights Angela’s strengths and areas where she’s struggling. And this enables the teachers to provide targeted support, recommend additional resources, and help Angela overcome challenges to succeed. And meanwhile, Angela’s school data is recorded locally, and when possible, it gets synced centrally. This allows the program administrators who are remotely located as well to analyze reports and make iterative improvements in the program. And here’s the most powerful part, which is everything happens seamlessly even without internet connectivity. And this is how Colibri brings impactful learning to students like Angela, who may not have internet accessible. And to close out, I think at Learning Equality, just like all of you, we also similarly want the world to be connected, but we know that the process is stagnating, right? Like even when there is connectivity, it may not be consistent or reliable enough to support classroom instruction. And hence, based on the work that we do, there are a couple of calls to action that I wanna invite you to hold on to. And in your work, I encourage you to consider the tech tools that you’re using and their reliance on connectivity. Who is left out when the internet is not available? That’s a question that I want us to think about. And if you’re advocating for use of emerging technologies like AI, is it being equitably used? And how can it be used as a tool for backend processes that enhance teaching and learning for everyone? Because not everybody can be able to afford technologies that support personalized learning through AI, right? And if you’re developing content, the question is how are you ensuring that the contents created are accessible for all? And I hope that you will consider equity in a new way as a result of this presentation and the examples that we shared through the work that we do. And I invite you to connect with me to discuss more about how we can strive to strive for equity in learning, enabled by edtech, even when internet. is limited. Thank you so much for this opportunity and I’d be happy to answer any questions that might yeah that that come up.

Muhammad Shabbir: Thank you very much for your very insightful comments, case studies and then wrapping up in promptly in time. So thank you very much once again. So before I open the list of my own questions, I would like to see if there are any online questions or someone wants to interact or ask questions from the participants presented here.

Audience: We have an online question. Can you hear me? Is this working? online question and the question is how many other in many countries in Africa persons with disabilities have disabilities. They may be slower learners or they may require different types of learning environment and oftentimes the teachers are not as knowledgeable about how to teach persons with disabilities differently. I know with people with autism and people with dyslexia there’s other different ways of teaching and what can we is there something that we can do that can help provide more training, more capacity building.

Muhammad Shabbir: Is the question directed to a specific speaker? We’ll direct it to either Dr. Shabir or Mohammed. Or learning ecology or to you the last speaker. Dr. Mohamed, or Revant, do you want to take a chance or should I? I’m happy to take, go ahead. I think I can give chance to Dr. Mohamed and then if Revant wants to add in.

Mohammed Khribi: Yes, thank you. Thank you for the question, it’s certainly very crucial. As I previously said in my speech that, here, you hear me all right? Okay. It’s okay, yeah. When it comes to offer like inclusive learning experience, there is, you know, lack in terms of, you know, capacity building for teachers. Their knowledge and their competencies around the topic of inclusive education, how to deal with learners with disabilities, how to prepare like digital education content, which is like fully accessible for people with disabilities. So this was also part of the key findings of, as I’ve mentioned previously, of the Derenbeck’s report. So how to tackle that? I think there is a need that universities, especially colleges of education, there is a need that they integrate in their curriculum courses dealing with accessibility, disability, inclusive education for all teachers, not only for those who are, you know, registered in the special education discipline. For all teachers, there is a real need. to build their capacities in terms of digital accessibility and inclusive education. One way to do that is to invest in the continuous training of teachers in service training. So, we need to collaborate. We need a multi-stakeholder approach. We need to collaborate with all involved parties in order to build the capacities of in-service teachers in terms of how to deal with learning disabilities in an inclusive education perspective. We at MEDAR are doing a lot in this perspective as we are collaborating internally with the Ministry of Education and with local universities and we are offering many training workshops around the topics of digital accessibility and inclusive education.

Muhammad Shabbir: Thank you, Dr. Maman. And instead of going to Vivant, I think I need to ask this question to Dr. Taufiq Jilasi. Since UNESCO is… Okay, so I think Dr. Jilasi left due to his own commitments. So, Revanth, do you want to take a shot on this?

Revanth Voothaluru: Absolutely. Thank you so much, Dr. Shabbir. I think one of the things that I often think about to build off of the response that was already shared is when you talk about supporting learners with disabilities, teacher capacity building is definitely one of the big ways to go about it. And I think to support it, specifically strategies like differentiation and personalization play a crucial role. But when we are talking about developing countries or global south, the classroom sizes are huge. So that’s where I think technology needs to be effectively leveraged in terms of… just getting that data about learner performance and differentiating support using that data so that each learner gets their own, you know, like learning materials that they can engage with. And the teacher is playing a role of like correcting their misunderstandings, clarifying and all of that. And learners can engage with learning at their own pace, which is what is useful for learners who sometimes struggle with certain aspects of learning processes, right? And I think the other point that I also want to add is when I look at problems like this, it’s important to approach it from a more systemic lens. And there is this particular framework that Dr. Fernando Rimas from Harvard recommends. He says, you need to look at any education intervention through five different perspectives. It needs to be cultural, political, psychological, institutional, and professional. I think even for something as simple as providing teachers with the tools to cater to learners with disabilities, there are all of these five things that need to come together so that it’s effectively delivered. I think those would be the two ways in which I would respond to the answer. It’s not an easy solution, but it’s a solution that can be thoughtfully implemented is what I would say.

Muhammad Shabbir: Exactly, I understand. We understand that there aren’t any easy solutions and search for easy solutions is not always the good one. So no matter how difficult we have to adopt these solutions, if we truly want inclusion and participation of all. So Judith, if we have a question online and if they are available, if they can ask the question by themselves. No, we cannot do that, sorry.

Audience: Okay. This question is directed to Amelia and it should be the role of regulatory authorities in advancing digital inclusions for persons with disabilities.

Amela Odobasic: Thank you, Judith. In my intervention, I already, I think, responded to that part of what is the specific role that regulatory authorities should do. However, I would just like to repeat again. that regulators cannot take steps and go ahead in front of the policy makers and governments. However, as expert authorities, as expert bodies, regulators should gain necessary knowledge on the particular topic and then they should really do as much as they can in advocacy effort towards the government in order to achieve the goal. However, there was one activity that I did not mention in my intervention and it’s also equally relevant for the regulators and that is to tailor the activities that they implement under the umbrella of media information literacy, in particular in light of digital literacy, producing researches, mapping the challenges, gathering all these stakeholders together and trying to achieve as much as possible, especially about what was being said when it comes to education. You see, for example, in Bosnia and Herzegovina, the education system is extremely complex and it’s usually also like any changes in that particular area, they take a long time. However, considering the target that the children and minors should be really, should receive necessary support in as much as possible, I mean, so that they can progress in their development without boundaries, the regulators should focus also their activities on media and information literacy and touching, discussing on accessibility and digital inclusion, particularly, I mean, that is something that we do in Bosnia and Herzegovina, but again, that would be another topic.

Muhammad Shabbir: Thank you, Amela. I think I would just want to add a couple of points into your intervention, really the great one, but with a little bit of differentiation to the point that regulators do not have the sort of powers or authority of policymakers. Yes, they do not, I understand that, but what the regulators can do, and this we have been doing in Pakistan as well, with both the telecom regulator and the banking regulator on both sides in terms of digital accessibility, we have been feeding them information about and we are training them about the activities, abilities, and requirements of persons with disabilities. And in turn, because one role of the regulator is they are the specialist in their own area and they feed the policy input to their governments. So what they can do is they can advise the governments to guide the governmental policies in the right direction. And when the policies are made, then it’s the job of the regulator to ensure that they are implemented. So that’s the crucial point where the role of regulator comes in.

Amela Odobasic: I absolutely agree with you. However, you see, I was only sort of like looking at the role of the regulator as a link between, you know, towards the governments. However, the regulators can and they definitely must do as much as they can in order to implement the provisions in their regulatory framework, you see. For example, that is what we did in Bosnia and Herzegovina. We detected that when it comes to TV accessibility, for example, that the percentage was extremely low. So what we did, we completely changed the provisions within the regulatory framework. We did not need a government, a new law for that. And we introduced quotas and considerably improved that area. So, yes, I absolutely agree with you. The regulators can do a lot. And this was just one example.

Muhammad Shabbir: Yes. Thank you very much for that. Thank you. Any in-person participants want to contribute?

Audience: My name is Nicodemus Nyokundi. I’m a fellow for the Dynamic Coalition on Accessibility and Disability. First is to Nela. I think I’ll comment that what she said on that disability needs are not specific needs. They are specific needs like any other person. And so in tackling such, we should be aware that anybody may be in need of such accessibility needs to engage. Accessibility and inclusion, in this case in the education sector. I wanted to add on the learning equality platform that the approach also should much focus on the trainers. Because I remember, I’m from Kenya, and I remember a previous government administration had a digital program for the education sector where they issued laptops. They ended up this way. So it was implemented. I feel like I passed on. Okay, sorry, sorry. Okay, I’ll You can actually focus on educating the and offering assistance to so that they are.

Muhammad Shabbir: Thank you, Nicodemus. I think there isn’t any question. Unless we have any questions, we can move to. OK. From the audience, so yes, and quickly. Very quickly.

Audience: It was more of a comment, actually, to the moderator. What attracted me here was the wide open solution. And you talked a lot about developers, and I’m one. And that is what I think is very important in this talk. Because it’s the developers who are going to build all these solutions. We are talking about what should be default, how it should really work. So I’d like us to always engage this open community, because that is how we can easily embed all these beautiful softwares and concepts into each and every digital platform. That was my comment. Thank you. My name is Itzel. I’m one of the TICET fellows. I don’t have a question. It’s more like a comment. I think that the education for people with disabilities is a common challenge in all the countries. And probably before we think about technology and access to the internet, we should make sure that all students with disabilities have access to basic education. Because in case of Mexico, that’s something that it’s not happening. And how can we think about inclusive education when the system and the government? governments refuse to invest in the development of people with disabilities. Because that’s what’s happening in Mexico. We only think, they speak a lot about inclusion, but it’s just what they say. It’s not happening in the reality.

Muhammad Shabbir: Thank you very much for the comments, both by Nicodemus and Incel. I am sorry to have this unfortunate duty performed once again. We cannot take any more questions because it’s the time to close and wind up the session. So for the key takeaways from the speakers and the discussion, I would like to request Ms. Zeynep Varoglu, who is the senior specialist in the Information and Communication Center of UNESCO. So I would request that you give us a brief summary of the key points from the speakers. Ms. Zainab.

Zeynep Varoglu: Okay. I think the key takeaways we can walk away with are that, first of all, we talked about the importance of governance and the role of regulators in ensuring that there is equality and equity in access to information and to learning. And second of all, the role of openness has been underscored and the role of stakeholders in terms of governments and in terms of educational institutions has been underscored. And a third point that we’d like to just underscore is the fact that what we’re talking about here is not the – it’s not the – there are other factors that are involved in terms of education and in terms of access to information and to learning that has to be taken into account, which are not digital, and that play a very big role in ensuring ensuring that there is actually movement in this way, but technology is an important factor, and when technology is done, it’s important that it’s done inclusively, it’s done with persons with disabilities also guiding the way and making sure that they are part of the process of the development of these technologies to ensure that they are actually serving the purpose that they’re supposed to serve. If I may, I’d like to just give the floor back, Shabir, but to thank also the speakers on behalf of UNESCO also and also underscore the importance that we thought of having a joint cooperation of the two DCs in this meeting because of the complementarity and the richness that comes from the discussion. So with that, I give the floor back to you, Dr. Shabir.

Muhammad Shabbir: Thank you very much, Zainab, and last but not the least, I will now pass on the floor to Judith Hellestein. She is my co-coordinator at the IGF Dynamic Coalition on Accessibility and Disability for the summary of key takeaways from the discussion that we just had after the speakers and the vote of thanks. Judith.

Judith Hellerstein: Thank you so much, Dr. Shabir. Yes, it was very fruitful and meaningful discussion, and some of the key takeaways from the questions are the importance of the regulator and what the regulator can do in helping to advance technical issues for persons with disabilities, whether it is through subsidies for universal access for smart devices for persons with disabilities, whether it is other type of subsidies there, what is their role as policymakers and regulators in ensuring that technology is available to all participants and also underscores the importance of the education department and the work on there in making sure that education is open and available to all persons and not giving the full thought to only one type of education. As we saw from one of our speakers, there are multiple options of learning platforms that can be adjusted and that can work for different persons with disabilities. So we want to make sure that we take into account all types of issues. And thank you so much because we are ending our session on time because the captioners have to leave for lunch. Thank you. And thanks for everyone coming here.

Muhammad Shabbir: Yes, thank you very much, Judith. Thank you very much, Zainab and the DCOER for collaborating with us on this session. I would also like to thank the participants who came here to attend this session. And last but not the least, a profound thanks to all the teams who assisted us, including the technical team, the captioners, the sign language interpreters, and others who made this session a possibility. Thank you once again.

M

Muhammad Shabbir

Speech speed

121 words per minute

Speech length

2613 words

Speech time

1292 seconds

Lack of consideration for accessibility in development

Explanation

Muhammad Shabbir points out that emerging technologies often fail to consider accessibility needs during development. This oversight creates barriers for persons with disabilities when trying to use these technologies.

Evidence

Personal example of encountering a VR headset that could only be activated through vision or touch, lacking audio features.

Major Discussion Point

Accessibility Challenges in Emerging Technologies

Agreed with

Tawfik Jelassi

Amela Odobasic

Agreed on

Importance of accessibility in emerging technologies

Need for universal design in technology development

Explanation

Shabbir emphasizes the importance of universal design in technology development. This approach ensures that technologies are accessible to all users, including those with disabilities, from the outset.

Major Discussion Point

Accessibility Challenges in Emerging Technologies

Agreed with

Tawfik Jelassi

Amela Odobasic

Agreed on

Importance of accessibility in emerging technologies

Importance of dialogue between regulators and persons with disabilities

Explanation

Shabbir emphasizes the importance of dialogue between regulators and persons with disabilities. This dialogue helps regulators understand the needs and challenges faced by persons with disabilities in using digital technologies.

Evidence

Example from Pakistan where telecom and banking regulators are being trained about the requirements of persons with disabilities.

Major Discussion Point

Role of Regulators and Policymakers

Agreed with

Amela Odobasic

Agreed on

Role of regulators in promoting accessibility

Differed with

Amela Odobasic

Differed on

Role of regulators in advancing accessibility

T

Tawfik Jelassi

Speech speed

112 words per minute

Speech length

918 words

Speech time

487 seconds

Challenges in making AI and language models accessible

Explanation

Jelassi discusses the challenges in making AI and language models accessible and unbiased. He points out that these technologies often replicate and amplify existing biases, particularly gender biases.

Evidence

Example of gender biases in large language models, where women are associated with domestic roles while men are linked to business careers.

Major Discussion Point

Accessibility Challenges in Emerging Technologies

Agreed with

Muhammad Shabbir

Amela Odobasic

Agreed on

Importance of accessibility in emerging technologies

UNESCO’s efforts to advance inclusive education through open solutions

Explanation

Jelassi discusses UNESCO’s efforts to advance inclusive education through open solutions. He highlights the potential of open educational resources paired with emerging technologies to transform education.

Evidence

Mention of the UNESCO recommendation on Open Educational Resources endorsed by 193 countries and the Dubai Declaration calling for a commitment to advancing inclusive education through open solutions.

Major Discussion Point

Open Educational Resources and Platforms

Agreed with

Mohammed Khribi

Revanth Voothaluru

Agreed on

Need for accessible open educational resources

A

Amela Odobasic

Speech speed

124 words per minute

Speech length

1558 words

Speech time

751 seconds

Importance of involving persons with disabilities in technology development

Explanation

Odobasic stresses the need to involve persons with disabilities in the development of technologies. This ensures that their needs and perspectives are considered from the beginning of the development process.

Major Discussion Point

Accessibility Challenges in Emerging Technologies

Agreed with

Muhammad Shabbir

Tawfik Jelassi

Agreed on

Importance of accessibility in emerging technologies

Regulators should advocate for accessibility to policymakers

Explanation

Odobasic argues that regulators should advocate for accessibility to policymakers. While regulators may not have direct policymaking power, they can influence government decisions by providing expert input on accessibility needs.

Major Discussion Point

Role of Regulators and Policymakers

Agreed with

Muhammad Shabbir

Agreed on

Role of regulators in promoting accessibility

Differed with

Muhammad Shabbir

Differed on

Role of regulators in advancing accessibility

Regulators can implement accessibility provisions within existing frameworks

Explanation

Odobasic points out that regulators can implement accessibility provisions within existing regulatory frameworks. This allows for improvements in accessibility without necessarily requiring new legislation.

Evidence

Example from Bosnia and Herzegovina where regulators changed provisions within the regulatory framework to improve TV accessibility without needing a new law.

Major Discussion Point

Role of Regulators and Policymakers

Agreed with

Muhammad Shabbir

Agreed on

Role of regulators in promoting accessibility

Need for legal mandates to address new technologies like AI

Explanation

Odobasic highlights the need for legal mandates to address new technologies like AI. She points out that many regulatory authorities currently lack the legal mandate to regulate these emerging technologies.

Major Discussion Point

Role of Regulators and Policymakers

M

Mohammed Khribi

Speech speed

102 words per minute

Speech length

1712 words

Speech time

999 seconds

Development of ICT accessibility competency framework

Explanation

Khribi discusses the development of an ICT accessibility competency framework. This framework covers all required competencies in the field of ICT accessibility to address the lack of expertise in this area.

Evidence

Mention of six competency domains in the framework, including creating accessible digital content and web content.

Major Discussion Point

Open Educational Resources and Platforms

Need for accessible open educational resources

Explanation

Khribi highlights the need for accessible open educational resources. He points out the lack of ICT accessibility and accessible open educational resources in many educational settings.

Evidence

Mention of creating a common repository hosted on the OER Commons platform to gather accessible open educational resources.

Major Discussion Point

Open Educational Resources and Platforms

Agreed with

Tawfik Jelassi

Revanth Voothaluru

Agreed on

Need for accessible open educational resources

Need to integrate accessibility courses in teacher education curriculum

Explanation

Khribi emphasizes the need to integrate accessibility courses in teacher education curriculum. This would help address the lack of knowledge and skills among teachers in dealing with learners with disabilities.

Major Discussion Point

Teacher Training and Capacity Building

Importance of continuous training for in-service teachers

Explanation

Khribi stresses the importance of continuous training for in-service teachers. This ongoing professional development helps teachers stay updated on inclusive education practices and digital accessibility.

Evidence

Mention of MADA’s collaboration with the Ministry of Education and local universities to offer training workshops on digital accessibility and inclusive education.

Major Discussion Point

Teacher Training and Capacity Building

R

Revanth Voothaluru

Speech speed

167 words per minute

Speech length

1820 words

Speech time

651 seconds

Creation of offline learning platform with open educational resources

Explanation

Voothaluru discusses the creation of an offline learning platform called Colibri, which includes open educational resources. This platform is designed to provide offline-first teaching and learning experiences, addressing the needs of learners without internet access.

Evidence

Mention of Colibri having over 200,000 open educational resources covering a wide range of subjects and learning needs.

Major Discussion Point

Open Educational Resources and Platforms

Agreed with

Tawfik Jelassi

Mohammed Khribi

Agreed on

Need for accessible open educational resources

Using technology to support differentiation and personalization

Explanation

Voothaluru discusses using technology to support differentiation and personalization in education. This approach helps address the diverse needs of learners, including those with disabilities, in large classroom settings.

Evidence

Mention of using data about learner performance to differentiate support and provide personalized learning materials.

Major Discussion Point

Teacher Training and Capacity Building

Need to consider cultural, political, psychological, institutional and professional aspects

Explanation

Voothaluru emphasizes the need for a systemic approach to inclusive education. He argues that addressing accessibility in education requires considering cultural, political, psychological, institutional, and professional aspects.

Evidence

Reference to Dr. Fernando Rimas’ framework from Harvard recommending five perspectives for education interventions.

Major Discussion Point

Systemic Approach to Inclusive Education

U

Unknown speaker

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Lack of teacher knowledge on teaching students with disabilities

Explanation

An audience member points out the lack of teacher knowledge on how to teach students with disabilities. This highlights a gap in teacher education and training regarding inclusive education practices.

Major Discussion Point

Teacher Training and Capacity Building

Importance of basic education access before focusing on technology

Explanation

An audience member emphasizes the importance of ensuring access to basic education for students with disabilities before focusing on technology. This highlights the need to address fundamental educational inequalities.

Evidence

Example from Mexico where access to basic education for people with disabilities is still a challenge.

Major Discussion Point

Systemic Approach to Inclusive Education

Role of governments in investing in education for people with disabilities

Explanation

An audience member highlights the crucial role of governments in investing in education for people with disabilities. They argue that without proper government investment, inclusive education remains a distant goal.

Evidence

Example from Mexico where there’s a perceived lack of government investment in the development of people with disabilities.

Major Discussion Point

Systemic Approach to Inclusive Education

Engaging open source developer communities

Explanation

An audience member emphasizes the importance of engaging open source developer communities in creating accessible solutions. They argue that involving developers is crucial for embedding accessibility features into digital platforms.

Major Discussion Point

Systemic Approach to Inclusive Education

Agreements

Agreement Points

Importance of accessibility in emerging technologies

Muhammad Shabbir

Tawfik Jelassi

Amela Odobasic

Lack of consideration for accessibility in development

Need for universal design in technology development

Challenges in making AI and language models accessible

Importance of involving persons with disabilities in technology development

The speakers agree that emerging technologies often fail to consider accessibility needs, and there is a need for universal design and involvement of persons with disabilities in the development process.

Role of regulators in promoting accessibility

Amela Odobasic

Muhammad Shabbir

Regulators should advocate for accessibility to policymakers

Regulators can implement accessibility provisions within existing frameworks

Importance of dialogue between regulators and persons with disabilities

The speakers agree that regulators play a crucial role in promoting accessibility by advocating to policymakers, implementing provisions within existing frameworks, and engaging in dialogue with persons with disabilities.

Need for accessible open educational resources

Tawfik Jelassi

Mohammed Khribi

Revanth Voothaluru

UNESCO’s efforts to advance inclusive education through open solutions

Need for accessible open educational resources

Creation of offline learning platform with open educational resources

The speakers agree on the importance of developing and promoting accessible open educational resources to advance inclusive education.

Similar Viewpoints

Both speakers emphasize the importance of teacher training and capacity building in digital accessibility and inclusive education practices.

Mohammed Khribi

Revanth Voothaluru

Need to integrate accessibility courses in teacher education curriculum

Importance of continuous training for in-service teachers

Using technology to support differentiation and personalization

Unexpected Consensus

Systemic approach to inclusive education

Revanth Voothaluru

Unknown speaker

Need to consider cultural, political, psychological, institutional and professional aspects

Importance of basic education access before focusing on technology

Role of governments in investing in education for people with disabilities

Despite coming from different perspectives (technology developer and audience member), both emphasize the need for a holistic approach to inclusive education, considering various factors beyond just technology.

Overall Assessment

Summary

The main areas of agreement include the importance of accessibility in emerging technologies, the role of regulators in promoting accessibility, and the need for accessible open educational resources. There is also consensus on the importance of teacher training and capacity building in digital accessibility.

Consensus level

There is a moderate to high level of consensus among the speakers on the key issues discussed. This consensus suggests a shared understanding of the challenges and potential solutions in making digital technologies and education more accessible and inclusive. The implications of this consensus are that it provides a strong foundation for collaborative efforts to address these challenges across different sectors and stakeholders.

Differences

Different Viewpoints

Role of regulators in advancing accessibility

Amela Odobasic

Muhammad Shabbir

Regulators should advocate for accessibility to policymakers

Importance of dialogue between regulators and persons with disabilities

While both speakers emphasize the importance of regulators in advancing accessibility, they differ in their approach. Odobasic focuses on regulators advocating to policymakers, while Shabbir emphasizes direct dialogue between regulators and persons with disabilities.

Unexpected Differences

Priority of basic education access vs. technology integration

Revanth Voothaluru

Unknown speaker

Creation of offline learning platform with open educational resources

Importance of basic education access before focusing on technology

While most speakers focused on technological solutions, an audience member unexpectedly emphasized the need to prioritize basic education access for students with disabilities before focusing on technology integration. This highlights a fundamental difference in approach to inclusive education.

Overall Assessment

summary

The main areas of disagreement revolve around the role of regulators, approaches to teacher training, and the prioritization of basic education access versus technology integration.

difference_level

The level of disagreement among speakers is moderate. While there is general consensus on the importance of accessibility and inclusive education, speakers differ in their proposed approaches and priorities. These differences highlight the complexity of implementing inclusive education and the need for a multi-faceted approach that considers various perspectives and local contexts.

Partial Agreements

Partial Agreements

Both speakers agree on the need to improve teacher capacity for inclusive education, but they propose different approaches. Khribi emphasizes integrating accessibility courses in teacher education, while Voothaluru focuses on using technology for differentiation and personalization.

Mohammed Khribi

Revanth Voothaluru

Need to integrate accessibility courses in teacher education curriculum

Using technology to support differentiation and personalization

Similar Viewpoints

Both speakers emphasize the importance of teacher training and capacity building in digital accessibility and inclusive education practices.

Mohammed Khribi

Revanth Voothaluru

Need to integrate accessibility courses in teacher education curriculum

Importance of continuous training for in-service teachers

Using technology to support differentiation and personalization

Takeaways

Key Takeaways

Emerging technologies like AI have great potential but often lack accessibility considerations for people with disabilities

Regulators and policymakers play a crucial role in advancing digital inclusion and accessibility

Open educational resources and platforms can help make education more accessible and inclusive

There is a significant need for teacher training and capacity building on inclusive education and teaching students with disabilities

A systemic, multi-stakeholder approach is needed to truly achieve inclusive education

Resolutions and Action Items

UNESCO pledged to increase the reach of inclusive education platforms by 25% by 2030

Learning Equality developed an offline learning platform (Colibri) with open educational resources to improve access

MADA is offering training workshops on digital accessibility and inclusive education in Qatar

Unresolved Issues

How to effectively implement teacher training on inclusive education at scale

How to ensure governments invest adequately in education for people with disabilities

How to make AI and language models fully accessible and unbiased

How to provide basic education access for all students with disabilities before focusing on technology

Suggested Compromises

Using technology to support differentiation and personalization in large classrooms where individual attention is difficult

Regulators implementing accessibility provisions within existing frameworks when new laws are not possible

Thought Provoking Comments

Information is a public good. And as a public good, information needs to receive public support. I think the same is true for openness and accessibility.

speaker

Tawfik Jelassi

reason

This comment frames accessibility and openness as public goods deserving of public support, which provides a compelling rationale for government involvement and investment in these areas.

impact

It set the tone for discussing accessibility as a societal responsibility rather than just an individual or private sector concern. Subsequent speakers built on this idea of collective commitment and multi-stakeholder approaches.

Our needs are the same as yours, okay, who do not sort of like fall into the category of the persons with disabilities. So we do not have specific needs, our needs are the same to have access to information, access to communication, et cetera.

speaker

Amela Odobasic (quoting a representative of persons with disabilities)

reason

This reframes the discussion of accessibility from a ‘special needs’ perspective to one of equal rights and universal design. It challenges the common perception of accessibility as an add-on for a specific group.

impact

It shifted the conversation towards viewing accessibility as a universal benefit and influenced later comments about inclusive design and development practices.

We need to collaborate. We need a multi-stakeholder approach. We need to collaborate with all involved parties in order to build the capacities of in-service teachers in terms of how to deal with learning disabilities in an inclusive education perspective.

speaker

Mohammed Khribi

reason

This comment emphasizes the need for collaboration across different sectors to address accessibility in education, highlighting the complexity of the issue.

impact

It broadened the discussion from focusing solely on technology solutions to considering the importance of capacity building and systemic approaches in education.

When you talk about supporting learners with disabilities, teacher capacity building is definitely one of the big ways to go about it. And I think to support it, specifically strategies like differentiation and personalization play a crucial role.

speaker

Revanth Voothaluru

reason

This comment brings attention to the pedagogical aspects of accessibility, emphasizing that technology alone is not sufficient without proper teaching strategies.

impact

It led to a more nuanced discussion about the intersection of technology, pedagogy, and accessibility, highlighting the need for a holistic approach.

Overall Assessment

These key comments shaped the discussion by broadening the perspective on accessibility from a narrow focus on technology to a more comprehensive view that includes policy, education, and societal attitudes. They emphasized the need for collaboration across sectors, the importance of viewing accessibility as a universal benefit rather than a special accommodation, and the critical role of capacity building alongside technological solutions. This multifaceted approach led to a richer, more nuanced conversation about creating truly inclusive digital environments.

Follow-up Questions

How can we provide more training and capacity building for teachers to effectively teach persons with disabilities?

speaker

Online audience member

explanation

This is important because many teachers lack knowledge on how to teach persons with disabilities differently, especially for conditions like autism and dyslexia.

How can we effectively leverage technology to support differentiation and personalization for learners with disabilities in large classrooms?

speaker

Revanth Voothaluru

explanation

This is important for providing individualized support in developing countries with large class sizes.

How can we approach education interventions for learners with disabilities from a more systemic lens, considering cultural, political, psychological, institutional, and professional perspectives?

speaker

Revanth Voothaluru

explanation

This holistic approach is important for effectively implementing solutions to support learners with disabilities.

How can regulatory authorities better advance digital inclusion for persons with disabilities?

speaker

Online audience member

explanation

This is important to understand the specific role regulators can play in promoting accessibility.

How can we better engage the open source developer community in embedding accessibility features into digital platforms?

speaker

Audience member (developer)

explanation

This is important because developers play a crucial role in building accessible solutions.

How can we ensure basic education access for students with disabilities before focusing on technology and internet access?

speaker

Itzel (TICET fellow)

explanation

This is important because in some countries like Mexico, basic education access for people with disabilities is still a challenge.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #31 Cybersecurity in AI: balancing innovation and risks

WS #31 Cybersecurity in AI: balancing innovation and risks

Session at a Glance

Summary

This discussion focused on the cybersecurity challenges and ethical considerations surrounding artificial intelligence (AI) systems. Experts from various fields explored the need for trust, transparency, and responsible deployment of AI technologies. They emphasized that while AI adoption is rapidly increasing across industries, concerns about security vulnerabilities and ethical implications remain.

The panelists highlighted the importance of developing comprehensive cybersecurity measures specifically tailored for AI systems. They discussed the need for guidelines and standards to help organizations implement AI securely, addressing issues like data poisoning, model security, and supply chain vulnerabilities. The experts also stressed the significance of AI literacy and education for professionals and the general public to foster responsible AI use.

The discussion touched on the challenges of harmonizing AI regulations across different jurisdictions, with some panelists suggesting that complete harmonization may not be feasible due to cultural and regional differences. However, they emphasized the importance of interoperability and common frameworks for AI governance.

Ethical considerations were a key topic, with panelists exploring the complexities of defining and implementing ethical AI practices across diverse cultural contexts. They discussed the need for balancing innovation with responsible AI development, considering factors such as fairness, transparency, and societal impact.

The experts also addressed the future of work in the context of AI, suggesting that while AI may change job roles, it is likely to create new opportunities rather than eliminate human involvement entirely. The discussion concluded by acknowledging the ongoing challenges in AI security and ethics, emphasizing the need for continued collaboration and adaptive strategies to address emerging threats and ethical dilemmas in the rapidly evolving field of AI.

Keypoints

Major discussion points:

– The importance of trust and transparency in AI systems

– Cybersecurity challenges and vulnerabilities specific to AI

– The need for AI literacy and education across society

– Ethical considerations and cultural differences in AI development and use

– Regulatory approaches and challenges in harmonizing AI governance globally

The overall purpose of the discussion was to explore key security and trust issues related to the widespread adoption of AI technologies, and to discuss potential approaches for addressing these challenges through education, guidelines, and governance frameworks.

The tone of the discussion was largely analytical and solution-oriented. Speakers approached the complex issues with a mix of caution about risks and optimism about potential benefits of AI. There was an emphasis on the need for multi-stakeholder collaboration and nuanced approaches that consider cultural and regional differences. The tone became slightly more urgent when discussing the rapid pace of AI adoption and the need to quickly develop appropriate safeguards and literacy.

Speakers

– Gladys Yiadom: Moderator

– Dr. Allison Wylde: Member of UNIGF Policy Network of Artificial Intelligence team, senior lecturer at UNIGF, assistant professor at GCU London

– Yuliya Shlychkova: Vice President of Public Affairs at Kaspersky

– Sergio Mayo Macias: Coordinator of European Digital Innovation Hub, member of IGF Policy Network of Artificial Intelligence

– Melodena Stephens: Professor of innovation and technology at Mohammed bin Rashid School of Government in Dubai, UAE

Additional speakers:

– Jochen Michels: Online moderator

– Charbel Chbeir: President of Lebanese ISOC

– Christelle Onana: Works for EODNEPAD (developing agency of the African Union)

– Francis Sitati: From Communications Authority of Kenya (regulator for ICT sector)

Full session report

Expanded Summary of AI Cybersecurity and Ethics Discussion

Introduction

This discussion, moderated by Gladys Yiadom, brought together experts from various fields to explore the cybersecurity challenges and ethical considerations surrounding artificial intelligence (AI) systems. The panel included Dr. Allison Wylde, a member of the UNIGF Policy Network of Artificial Intelligence team; Yuliya Shlychkova, Vice President of Public Affairs at Kaspersky; Sergio Mayo Macias, Coordinator of European Digital Innovation Hub; and Melodena Stephens, Professor of innovation and technology at Mohammed bin Rashid School of Government in Dubai, UAE. Additional contributors included Johan as online moderator, Charbel Chbeir from Lebanese ISOC, Christelle Onana from EODNEPAD, and Francis Sitati from the Communications Authority of Kenya.

The discussion focused on several key areas: trust and transparency in AI systems, cybersecurity challenges specific to AI, the need for AI literacy and education, ethical considerations in AI development and use, and regulatory approaches to AI governance. The overall tone was analytical and solution-oriented, with speakers balancing caution about risks with optimism about AI’s potential benefits.

Trust and AI Adoption

A central theme of the discussion was the complex nature of trust in AI systems. Allison Wylde emphasised that trust is subjective and culturally dependent, challenging the notion of universal trust standards for AI. She highlighted the difficulties in defining and measuring trust in AI systems, noting that trust varies across different contexts and cultures. Gladys Yiadom referenced a Kaspersky study indicating that over 50% of infrastructure companies have implemented AI despite trust concerns, highlighting the tension between rapid adoption and lingering scepticism.

Yuliya Shlychkova pointed out that AI, being fundamentally software, cannot be considered 100% safe, which leads to ongoing cybersecurity concerns. To address these issues, she suggested that education efforts could help build trust and harmonisation in AI adoption. This multifaceted view of trust underscored the need for nuanced approaches to fostering confidence in AI technologies.

AI Security Challenges

The discussion delved into specific cybersecurity challenges posed by AI systems. Yuliya Shlychkova highlighted vulnerabilities such as data poisoning, prompt injection, and attacks on various components of the AI development chain. She presented Kaspersky’s guidelines for AI security, which address issues like model security, supply chain vulnerabilities, and best practices for secure AI development and deployment.

Melodena Stephens raised concerns about the lack of algorithmic transparency, which makes it difficult to audit AI systems effectively. The potential security risks associated with open-source AI models were also discussed. The experts stressed the importance of developing guidelines and standards to help organisations implement AI securely, addressing issues like model security and supply chain vulnerabilities.

AI Regulation and Governance

The challenge of harmonising AI regulations globally emerged as a significant point of discussion. Dr. Alison highlighted the difficulties in achieving global harmonisation due to cultural differences, while Sergio Mayo Macias pointed to the EU AI Act as a potential model for regional AI governance. Melodena Stephens suggested that Africa has an opportunity to develop its own AI strategy and standards, reflecting the need for context-specific approaches. This was further supported by Christelle Onana, who mentioned the African Union’s continental AI strategy.

Yuliya Shlychkova emphasised the importance of self-imposed ethical standards by companies, alongside formal regulations. The discussion also touched on the role of private sector companies in AI governance and regulation. This multi-layered approach to governance reflected the complex landscape of AI development and deployment across different jurisdictions and cultural contexts.

Ethical Considerations in AI

The panel explored the complexities of defining and implementing ethical AI practices. Melodena Stephens noted that while AI ethics guidelines exist, they are often difficult to operationalise. Sergio Mayo Macias highlighted the crucial yet challenging task of ensuring algorithmic fairness and the importance of data quality in AI development. Allison Wylde emphasised how cultural norms influence the interpretation and application of ethics in AI contexts.

The discussion also touched on AI’s impact on the workforce, with Yuliya Shlychkova stressing the need for careful consideration of human-AI collaboration and the potential displacement of certain job roles. This highlighted the broader societal implications of AI adoption and the importance of balancing innovation with responsible development.

AI Education and Literacy

A consensus emerged around the critical need for increased AI literacy among professionals and the general public. Gladys Yiadom emphasised this point, while Allison Wylde highlighted the importance of youth mobilisation and education for responsible AI adoption. Melodena Stephens suggested that AI literacy efforts should distinguish between general digital skills and AI-specific knowledge, adding nuance to the discussion on education strategies.

Yuliya Shlychkova stressed the necessity of continuous training on AI risks and best practices within organisations. This focus on ongoing education reflected the rapidly evolving nature of AI technologies and the need for adaptive learning approaches.

AI in Cybersecurity

The potential use of AI in cybersecurity was discussed, with experts noting both the opportunities and challenges. While AI can enhance threat detection and response capabilities, concerns were raised about the potential for AI systems to be exploited by malicious actors. The need for robust security measures in AI-powered cybersecurity tools was emphasised.

Conclusion

The discussion concluded by acknowledging the ongoing challenges in AI security and ethics, emphasising the need for continued collaboration and adaptive strategies. Key takeaways included the subjective nature of trust in AI, the significant cybersecurity challenges faced by AI systems, the difficulties in harmonising global AI regulations, the importance of operationalising ethical guidelines, and the critical role of AI literacy.

Unresolved issues highlighted by the discussion included effective methods for harmonising AI regulations across different jurisdictions, practical implementation of AI ethics guidelines, balancing innovation with security concerns, the long-term impact of AI on the workforce, and ensuring algorithmic fairness and transparency.

The experts proposed several action items, including the use of Kaspersky’s guidelines for AI security, the development of self-imposed ethical standards by companies, and the adoption of risk-based approaches to AI regulation. The discussion underscored the complexity of global AI governance and the need for flexible, context-specific solutions that consider cultural, regional, and ethical dimensions while promoting responsible AI development and use. The importance of a multi-stakeholder approach in developing AI standards and regulations was emphasised as crucial for addressing the multifaceted challenges posed by AI technologies.

Session Transcript

Gladys Yiadom: have as recently witnessed the emergence of AI-enabled system at an incredible scale. Despite various regulatory in- Between- Yes, can you hear me now? Okay. Very good, thank you. Thank you. So I was saying a gap between the general frameworks and concrete implementation remains. We are here today with our distinguished speakers to explore which requirements should be considered and how a multi-stakeholder approach should be adopted to produce new standards for AI system. Organizations like NIST or ISO are actively developing cybersecurity standards for AI-specific threats. However, the standards mostly cover AI foundation models development or overall management of risk associated with AI. This has created a gap in AI-specific protection for organization that implements applied AI system based on existing model. My first question will be to you, Alison, but let me please first share some of your bio. Dr. Alison Wild is a member of UNIGF Policy Network of Artificial Intelligence team. In this capacity, she contributes on interoperability among AI standards, tool and practice. Previously an international commissioner on security standards, she co-shared the first standards to integrate physical and cyber security. Alison is also a senior lecturer at UNIGF. assistant professor at GCU London. She also intervenes at Cardiff University and more. My question to you, Alison, is this one. The use of AI has increased significantly worldwide in recent years. A Kaspersky studies has revealed that more than 50% of infrastructure with a third company have implemented AI and IoT in their infrastructure with a further 33% planning to adopt these interconnected technologies within two years. Does this widespread acceptance of AI mean that the issue of trust is no longer a concern for users and organizations?

Dr. Alison: Thank you, it’s a fascinating question and we’re back to trust. So thank you for inviting us here to IGF 2024. I’m delighted to be here. And I think this question of trust really follows on from earlier talks in the plenary the other day, there was Dr. Abdullah Ben-Sharif Al-Gamadi from SADIA who was talking about trust. And he said, we need to enhance trust in AI products and also to have transparency and trust. And I think this really resonates with your question. So we have the issue of people saying we want trust but the question for us is, well, what do we mean? How do we define trust? Trust is subjective. So maybe I trust you. I think I probably do. I don’t really know you too well, but I trust you. I’m a human. And so our human behavior is naturally to trust. Children trust their parents without thinking about it. And I think that’s one of the issues in business. People see a new technology and they want to be with the top technology, with the new technology. And of course they want to use it really without thinking. And I think that’s part of the issue. And of course, there’s lots more I can say about this. You know, stop me when you’ve heard enough. But I think if we look at basically how are we understanding trust? How are we defining trust? What’s our conceptual framework for trust? What’s your trust in your culture? Are you a high trusting nation or not, depending on where you are in the world? So we need to really look at this as a subjective issue and start with that. So I can come back again, but maybe if I can, a few more things. So I think because trust is subjective, we can’t use statistics. We can’t use regression. We can’t go with central tendency. This is not something we can run a regression model and look at, I don’t know, cultural trust measures and look across the world. We can’t do that because it’s subjective. So we need to have something more sophisticated if we’re going to really try and get the conception right and then ideally get towards some sorts of measurements. So if prominent members are calling for trust, then well, what do they mean? And how are we going to have a conceptual framework for that and how are we going to measure it and how are we going to implement it if we don’t know what we’re talking about? Now, thank you. I’ll hand over. Thank you.

Gladys Yiadom: Thank you very much, Alison, for those points as your highlighted trust is a key element here. So I’ll hand it over now to Yulia, but before asking my question, Yulia Shishkova serves as Vice President of Public Affairs at Kaspersky. She leads the company relation with government agencies, international organization and other stakeholders. She oversees Kaspersky participation in public consultation at regional and national level on key topics such as artificial intelligence, everything related to AI ethics and also governance. My question to you, Yulia is, if there are still concerns regarding the trustworthiness of AI, what are the main reasons for this mistrust? Could you give us a brief overview of the current cyber threat landscape in relation to AI?

Yuliya Shlychkova: Sure. So I am represented in a cybersecurity company and our experts do research on threats. And we actually see that AI is still software and software is not 100% safe and protected. Therefore, there are already registered cases of AI being used by cyber criminals in designing their attacks and also AI has been attacked. So that’s why people with understanding of the matters do have concerns. And this is also only cyber security angle because AI also brings a lot of sociological, social concerns, ESG concerns. But if we back to cyber security area. So we actually see that more and more cyber criminals trying to automate their routine tasks using AI. So there are a lot of talks on the dark webs, them sharing like how to automate this and that. Also on the dark web, they are trying to sell hacked chat GPT accounts and those are trading very high. So we are also being attacked. Some of the examples of attacks include data poisoning, like open source data sets used to train models. We saw backdoors and vulnerabilities there. Also, so such attacks in the wild as prompt injection when attack is targeting the algorithm, how AI model works and trying to impact the output of the model. And what’s happening like because so many organizations like to play with AI. And Gladys mentioned this way Kaspersky did, but those people who were answering how many organizations using AI, they don’t even know the scale of shadow AI use in the organizations because a lot of employees. are reaching chat GPT to do their regular work quickly. So there is an absence of knowledge like how many of these services are used. And what is happening is that employees are sharing confidential business information, financial information with AI models and those models can be impacted and this information can get into wrong hands. So just to summarize that we almost see in the wild attacks on every component of AI development chain. Therefore, cybersecurity should be addressed. We need to talk about this and help not to stop AI usage but to do it safely and have basis for this trust in for AI use in the organization.

Gladys Yiadom: Thank you. Thank you, Yulia for this comment mentioning the use of AI and the idea that we need to be careful in terms of models. It leads me to the question that I will now address to Sergio, but before my question, Sergio Mayo has more than 20 years of innovation program and information system management in various fields such as finance, telecommunication, health and more. He cooperate with IGF as a member of the Policy Network of Artificial Intelligence as a member of it since 2023. He focuses on the social impact of AI and data technologies and digital ethnography. He currently coordinates the European Digital Innovation Hub. So Sergio, thank you very much for being with us today online. My question to you, given that the internet contains a wealth of information, sometimes contradictory or even fake, can one rely on the datasets utilized to train AI models?

Sergio Mayo Macias: Good morning. Good morning. Thank you. Thank you, Gladys. you to the organization for inviting me to this workshop. Well, actually, I think that trusting the data used to train AI models is part trusting the technology and part trusting in the human creating or operating that technology. And that’s a philosophy question. I will not go deeper in this, but going deeper regarding the data issues for trusting or not trusting in data used for training AI, there are an amount of problems really, really big. And I will mention some of them. First of all, and the most important one that comes to our mind is data bias. Data bias, of course, when the training data used to develop AI models is not representative for or of the real world scenario that it is intended to model. And if the data is skewed in terms of gender, ethnicity, location, or any other attributes, the AI model will inherit and amplify these biases. And this can result in unfair predictions, discrimination, and so on. But also, even though we have the data quality issues, like poor quality data, which includes incomplete or outdated information, and it also can severely undermine the reliability of AI models. But at the end of the day, even if we have a good data set, we have a human using this data, and a human creating an algorithm and a model. So going beyond the good or bad data that we used for training this model, we have to put the focus on the algorithmic fairness. And the algorithmic fairness is is an issue that is directly pointed at the human using the data. So the human using the data must be aware of the quality of this data, must be avoid the data bias, the data privacy concerns, for instance, and so on, the data manipulation, the insufficient data representation. But at the end of the day, he’s able to produce a fair algorithm with this data. So I think this is the key point for this question.

Gladys Yiadom: Thank you. Thank you, Sergio, for your comments. So now I will turn over Melodina. Melodina is a professor of innovation and technology at Mohammed bin Rashid School of Government in Dubai, UAE. She has three decades of senior leadership international experience and consult with organizations such as Agile Nation, Council of Europe, and the Dubai Future Foundation. So we were previously addressing regulatory issues. My question to you, to maintain the balance between the progress and the security, it is assumed that the emergence of new technology should be accompanied by the development of a corresponding regulatory base. Can we say that the current governance of AI is adequate? Are existing standards such as ISO or NIST sufficient for the security of AI? Or do we need specific regulations?

Melodena Stephens: So thank you for the question. I think it’s a complex one. So let me start from the top. If you look at how many policies are there for cybersecurity, I think there are more than 100 countries which have policies. While some of them are on security and they’re looking at algorithmic security, we see recently over the last two years maybe more focusing on critical infrastructure. And there’s two things driving it. One is we’re moving away from individual security. or corporate security or industry security to national security. So this becomes an interesting trend, right? And I think the main thing, the challenge we have is fragmentation. AI is global. If you just look at the supply chain of AI, it is impossible to nationalize it. So how can you maintain even national security or individual security or corporate security when AI is global? So that’s the first thing, fragmented regulations. Anu Bradford has written an interesting book that’s called Atlas of AI, and she divides the world into three. On one end, she looks at US as a very market-focused leadership. So you see private tech actually leading and dominating. If you look at US and its allies, I think we’re talking about 27 countries if you’d look at NATO alliance. Then she looks at the EU, which she says is driven by human rights and rule of law and democracy. Again, 27 countries if you look at it. And then she talks about state-driven national strategies, and you’re looking at countries like China. If I just take the BRI project, you’re talking about approximately 140 countries. So then you’ve got a good idea of how this fragmentation and how alliances will be created across the world. So it’s very geopolitical. If I look at the strategies that are currently, or the frameworks that you mentioned, the ISO and the NIST, so there are a couple of challenges with it. One, the scope and context is decided by the organization itself. So it’s not really taking the wider perspective. And we see in strategies like this, we need whole of society, whole of government, and whole of industry perspectives, which are missing, right? And I think also the focus on risks is a challenge itself. Because when you come to a place like cybersecurity, you’re looking at a public value domain space. And it’s really about decisions on trade-offs. Do I put national security ahead of individual privacy? That’s a trade-off. Do I invest in today’s technology knowing that a data center costs billions, right? And I know that it will create an environmental footprint and a sustainability issue later. That’s a trade-off. Do I connect everything through the internet of things, which is great, but that means I am creating vulnerabilities because of all of these connections because no one company has the technology stack from bottom to the end. So that’s a trade-off. I do not think when we talk of risks, we talk enough about trade-offs and that’s one of my concerns.

Gladys Yiadom: Absolutely right, Melodina. And I think we’ll also dive into it a bit later in the session. I also invite afterwards participants to share any question that they would have. So now moving to that, this workshop is also the opportunity to display some of the guidelines that has been produced with Kaspersky team, but also the speakers that are here among us. So I’ll kindly ask the team to share the slides. Yeah, can we please share the slide and the floor will be yours, Julia.

Yuliya Shlychkova: So while we are waiting for the slides. Thank you. Okay, so as Melodina said, a lot of focus is on critical use of AI and on developers of large language models on like national competitiveness in the area of AI. And we see that there is this gap because adoption of AI is happening on the mass. scale and it’s skyrocketing. And these users, these organizations who are fine-tuning existing models and using it also need some sort of guidance. Maybe not regulation, not compliance, not requirements, but at least some guidance. Do these 10 things and you will be at least 80% more secure. And this is what we have put our thoughts into and produced these guidelines. Just a little bit to illustrate the scale of adoption, that more than a million models are available in the public repository. And like developers at GitHub are already saying that the majority of them, they are using AI at some point and industries. So in a few years, I think there will be no one not using this. Attacks I already covered in my short intervention, but again, we see almost every point in AI supply chain can be vulnerable to attacks. In public, we see more than 500 records of vulnerabilities in AI and their accounting. So we asked in our survey, professionals working in organizations, do they estimate the rise or decrease of incidents within their organization? And the majority, more than 70% reported they see a rise in such incidents. But interesting thing that 46% out of these believe that these attacks were with AI use in that way or another. And also the same professionals also reported that they believe they are not equipped enough to address these challenges. They have lack of training, lack of qualified staff, insufficient IT team size. So these problems already here, they already exist. And when we add AI usage, especially shadow usage, so it’s like with immune system, every person has, right? So it breaks under pressure. So that’s why we believe some guidance, some basic requirements are of help to organizations adopting AI. So our guidelines cover four main pillars, key security foundations, infrastructure and data protection requirements, how can resilience achieve through validation and testing, and also adherence to governance and compliance. So talking about AI security foundations, we believe that first of all, leadership organization has to know about what AI services are used and whether they open new threats or not and how those are mitigated. Team has to be trained. IT professionals has to be trained on AI usage and risk associated, and also regular users who can use AI in their work also needs to have this awareness about risks and what to do and what not to do. And these courses has to be regularly updated. There needs to be field exercises and it should be continuous exercise. Also, the response of organization has to be proportional to the use. So each organization is advised to have threats modeling about what, check, check, what threats of non-using AI can be, what threats of misusing AI can be, and how those different threats can be addressed. So to have individual threats modeling is very recommended. Talking about infrastructure security, a lot of organizations are relying on cloud-based services, hence traditional approach to infrastructure security is also relevant here. That access to AI services has to be very, has to be locked, has to be limited only to those employees who need to have this access. They have to be two-factor authentication, there has to be segmentation like data models in one place, weights in another place. So it’s all mentioned in our guidelines and I will provide you link further just mentioning highlights of this. Then talking about supply chain, in a lot of regions some AI models, popular models are not available. That’s why a lot of organizations turn into proxies, some third parties and some of them can be reliable and some not. That’s why it’s very important to check from which source information coming and to have this audit of supply chain. Because of this, a lot of organizations also choose to have localized data models within the organization and if you choose this approach, there is also importance to follow requirements such as login access, keeping and backing up your assets. Then if your use is very wide of AI within an organization, you need to be prepared against machine learning specific attacks and there are already best practices how to do this. You see fancy words like distillation techniques, train models with adversarial examples. Like for policy people like this, it might sound as rocket science but IT people would know what this means. and we provide more details in our guidelines. Then also Sergio mentioned that if you’re using a model from a third party, this model was trained on specific examples, specific data sets. So before releasing it to public, you need to train this on the real life scenarios, on your industry benchmarks in real life. So testing and validation is really important, and you need to be ready to back up to the previous version if testing goes wrong. And also general cyber security requirements. Please ensure to have regular security updates when you monitor in public sources information about vulnerabilities. Have internal audits regularly to test and update based on this test. And of course, vulnerability and bias reporting. As an organization, you need to have information available to public so that users and your clients using your AI services have an opportunity to contact you if they notice vulnerability or bias, and you have an opportunity to fine tune this. And we also as an organization, very advocate for public bug bounties programs to include AI in your bug bounty programs to have more and more community engaged. Check, check. I’m speaking too long. So vulnerability reporting is important. And of course, since regulatory space is very, very active, it’s important to keep an eye and ensure that what you are using is adhering to the standards and regulation. And I think the last slide is the most important So the full text can be accessible upon this link It’s over 10 pages We really did our best and a big thank you to Alison, Melodyne and Sergio in reviewing it and contributing And the idea of these basic standards actually come from cyber security A lot of nations like UK, Germany and ministries of communications and technologies are trying to raise awareness of these basic cyber security standards and publish this information on their website So we believe that it would be a good idea if nations worldwide can also maybe take a look at what we have produced develop, fine tune it and to promote it on national and international level so that mass usage of AI can happen in a more secure way Thank you for the opportunity

Gladys Yiadom: Thank you very much, Julia, for sharing the guidelines Again, do not hesitate also to pass the Kaspersky book if you don’t get the chance to download it here So now, moving to another set of questions Julia, you were mentioning somehow AI trainings, literacy and my question will be to you, Melodyne In such cases, how best to address the issue of increasing AI literacy among professionals but also the wider population?

Melodena Stephens: Thank you First of all, I want to mention that digital literacy is not the same thing as AI literacy So I was having a conversation Some key places people think it falls under but right now most of what passes for digital literacy is actually digital skills training, and I don’t think it’s the same thing. So we need to be very mindful of that. AI is a much more complicated topic. And I think the challenge that we’re really facing is we need societal education, we need education of industry, we need education of policy makers. I have met engineers, I work with IEEE for example, even engineers struggle when you look at AI and you look at some of how it’s being deployed or what implications it has. So this becomes a challenge. And when you look at some of the policies, I just wanna take an example. If I look at NIST, there’s 108 subcategories. If I look at ISO, for example, we’re talking about 93 controls. And what people are doing is making them 93 policies. I don’t know about you, I don’t know who reads 93 policies, but the problem is actually operationalizing it and implementing it. So the way we’re delivering knowledge, the current method is not working. An audit system, the policies put over there, we don’t know how to translate it, we don’t know what it means for me. So we need to be able to translate this for different people based on their level of expertise. And I’ll just give you one example. I heard the word, you mentioned transparency. How can we get algorithmic transparency? If I look at what Google has just released in the last week, which is Willow, it does a calculation in five minutes, which according to them, a supercomputer will take 10 raised to 25 years. That’s 10 septillion years to do. Which human being can go and look at this and trace everything? It is impossible at the speed at which technology is doing. Just another example, if you’re talking about 175 billion parameters, we’re talking about 10 million queries per day. How many people do you have to employ to go and audit 10 million queries per day? So what we’re doing right now is taking a rough sample and we’re auditing it and then we’re reporting error rates and we’re only reporting sometimes one. one type, not false negatives, not false positives. Both are important. So there’s a lot of things that are missing currently right now in the way we’re evaluating AI. And I wanna also highlight something like this because they talk about, let’s have human in the loop. If anyone has read the foreign policy article on Lavender, Project Lavender, which was a facial recognition drone technology, they did have humans in the loop to decide who or what to target. The amount of time they spent, 20 seconds for review. I don’t know about you, but my brain does not think in 20 seconds of review. We’re not computers. So the first thing is I’m not a machine, I’m a human being. My skills are different from a machine. We need to understand both of that. And I think AI literacy is kind of understanding what a machine can do, what a machine cannot do. And I’ll take the last example, which was in 2021, Facebook had an outage. It was a BGP, Border Control Gateway, Border Gateway Protocol issue. Now, what was interesting is they’re very high tech. So their systems are all on facial recognition and authentication. So they should have been able to enter in to fix the issue. Unfortunately, what happened is they got locked out of their own offices. So you have backups and we’re depending on technology for those backups, but at the end, it’s the human being. So you’ve got to have a backup, which is a human being. And my worry right now is the knowledge those human beings are having are becoming obsolete because we’re not valuing it enough.

Gladys Yiadom: Thank you. Thank you, Melodina, for this comment. My next question to you, Alison, how can a zero trust approach be integrated into the development and use of AI?

Dr. Alison: Thank you, I’m just checking. That’s great, thank you. So just very quick, Zero Trust 101, I’m sure you’re all familiar, but for those of you that are not. So as I mentioned before, we’re humans, so we’re predisposed to. presumptive trust, to trust someone without validating. I think my Russian’s really bad. So trust but verify and of course now we don’t trust, we have to verify first. So zero trust, non-presumptive trust, we have to verify an identity whether it’s an individual, a person, a data user, a technology and so on or an application. We have to verify that before we can grant trust. So we have continuous monitoring. So in a process like artificial intelligence where we’re looking across a very complex dynamic ecosystem, we’ve got all of the moving parts all moving at the same moment, the humans taking decisions, the prompts going in, the black box doing its thing with the model we’re not sure where it’s come from, the data we’re using to train the input, the outputs coming out. So we’re saying operate zero trust throughout this ecosystem to give us a chance to verify before things come out the other side and before they’re implemented and as we’ve said, colleagues have said, companies are just doing this without thinking, just like a new technology, just like driving a car before people had a driving license, jump in the car and drive in and people don’t know what they’re doing. Same in industry at the moment. Industry’s adopting this at pace and at scale without, I think the word is guardrails and zero trust can be one of the guardrails. I’m happy to come back in more depth and questions later on. I think interoperability I think is the other thing for zero trust because we’ve got everything happening at the same time at scale with no common frameworks from whether it’s our friends in ESO or NIST or wherever in the corporate world, using technology, developing standards with no interoperability across those different domains. So it’s a very complicated systems-based ecosystem.

Gladys Yiadom: And basically what you’re saying is about how to use it responsibly. So it will lead me to my next question to you, Sergio. Given your experience as a coordinator of a regional European digital innovation hub, could you please tell us more about blueprints of best practices for the responsible deployment of AI in Europe?

Sergio Mayo Macias: Yes, thank you. Thank you, Gladys. Well, actually, the AI environment in Europe is known and has been, let’s say, labelled as a regulation-focused environment. This is because the AI Act and the DATA Act, among many others, as the main European general outcomes or the known reference frameworks, but this is only partly true. The bottom-up work has been going on for a long time. I always put the same example. We don’t have a Boeing company in Europe, we don’t have this US big company, but we have Airbus, which is not a big company, but a consortia of really, really small companies. So the way we are working in Europe is this way, the cooperation, the consortium, and so on. For instance, from 2099, there is a group called the High-Level Expert Group on Artificial Intelligence, established by the European Commission, and they, in 2019, they provided the ethics and guidelines for trustworthy AI. These guidelines emphasise the need for AI systems to be lawful, ethical, and robust, and they are producing, year after year, new drafts regarding this regulation. But we also have the AI office. supporting the development and the use of trustworthy AI. And this is only from the top, but we are also working from the bottom, from small companies and organization and RTOs. And for instance, in January, 2024, the commission launched an AI innovation package called the Gen AI for EU Initiative, which is a really easy reading package to support the startups and SMEs in developing trustworthy AI that complies with EU values. So all these islands are intended and are developed for providing security by default. Let’s say for SMEs and citizens not being able to be aware of the law, to be aware of the AI Act and so on. Another initiative is the Data Spaces Support Center. This Data Spaces Support Center was launched for contributing to the creation of common data spaces. Data spaces are a safe space for collectively create a data sovereignty, interoperable and trustworthy data sharing environment. And they are directly related to the AI deployment. They point to the core issue, the creation of trust. As Alison said, if you can create an environment where data is safe, reliable and secure, you are enhancing trust. And from there, you can go a step farther and use this data for training AI models. And also the network of European Digital Innovation Hubs, I am the coordinator of the one in Aragon region in Spain. We are close to the city. We are producing guidelines, blueprints and a lot of help for this key issue to create security and trust by default and letting people using AI not being aware of big documents or big frameworks or the act or the data act.

Gladys Yiadom: Thank you. Thank you, Sergio. Mentioning regulations and just coming back also to what you said, Alison, about interoperability. Is there a need to harmonize AI regulation from different jurisdictions? If so, is it possible to ensure such interoperability?

Dr. Alison: Thank you. So two parts. The first is, is it a requirement and is there a need? Is that correct? Sorry. Sorry.

Gladys Yiadom: Yes. Let me repeat that question. So is there a need to harmonize AI regulations from different jurisdictions? And if so, is it possible to ensure such interoperability?

Dr. Alison: Okay. Thank you. So I speak from a personal perspective here. So I don’t know if, realistically, I don’t know if harmonization’s possible because we’re looking across the world, across multi-stakeholder groups, private sector, governments, state actors, individuals. And it’s really difficult because there’s different cultures in play. And I think it’s right that individuals should have their culture and should have their way of being. So I really, I think that’s really hard. I think for cybersecurity and risk management standards, we do see some global take-up of the big standards there. So maybe we can look to what’s happened with those iso20s. 27,000, 27,001 family, or even the 9,001, the kind of quality management standards there. And look at what’s happened there as a guide to what might happen in the future. But I think there will always be difference. Differences across the globe, across private sector, different sectors. So I don’t actually know, this is my personal view, if harmonization is possible. Is it desirable in an ideal world, we would have interoperability across tools, across standards and frameworks, across all of those different factors, that would be the ideal. Whether it’s possible, I don’t know. But I certainly think guidelines are really a helpful stepping stone forward. So if everyone has the same framework to work from, and a common understanding, I think that’s a really big step in trying to achieve a future where we all understand where we’re going. If that, I hope that answers your question.

Gladys Yiadom: Thank you. Absolutely, Alice, and thank you. Thank you very much. My next question will be to Yulia and Sergio. Yulia, you mentioned how important it was to address it from a cybersecurity perspective. Why is the issue of cybersecurity crucial for AI systems? What would be state-of-the-art security for AI system look like?

Yuliya Shlychkova: So we believe that’s… It’s not working. Check, check. No. Check, check, check. Check, check, check. Yes, working. Now it’s working. Okay. We can hear you. So with AI is a new thing. So every technology first developing, and then people have this afterthought, oh, I had to put more thoughts about security there. So with AI, we have this opportunity to think about security by design. The same as regulation. Like regulation is always catching up. With AI, hopefully, there is a chance not to be a decade late That’s why it’s important to keep on par and think about cybersecurity Not only about how technologically to protect this, but also to spread awareness about issues so that regular users are not feeding AI with their personal data without necessity. Employees don’t share confidential information, etc. Sergio, do you want to add?

Sergio Mayo Macias: Yes, I agree with you. I think that for AI, we cannot push people to install the antivirus That is not realistic. We need to provide cybersecurity by default We cannot send the elephant in the room to final users. We have to define safe spaces for using the AI systems and we cannot expect final users to do it. For instance, I was mentioning before the data spaces pursue that goal. To create this framework, a space with legal governance and also technical issues are developed and deployed by default, just to be used So, we have the AI Act in the background, but we have to define these spaces for letting users use AI without concerning any other issue

Gladys Yiadom: Thank you, Sergio. Perhaps, turning to the audience to check if there are any questions Yes, we do have one question here. Sir, can we ask you to come to the middle and ask your question So, please share your name, organization and who you address the question to

AUDIENCE: Yeah, so my question is actually from Yulia, as she mentioned, you know, so there has been very difference in the conventional security and the AI security. For example, in conventional security, if you send certain requests, you get the same responses. In AI, it’s very different. So I mean, how do you see the security if every time the response generated is different? I mean, if even you train your model, you cannot expect if it will provide the same answer next time. You know, so like we are actually a security firm and we work heavily in the AI security right now. So we have faced these problems, like I mean, the security options which we provide to our clients. Even if you try after sometimes, the same errors, the same vulnerabilities arises again. You cannot handle it properly. So number one, how do you see that? As far as the vulnerability disclosure program you mentioned, I mean, companies are not taking it seriously. For example, if you report biasness as a vulnerability or as an issue, they’re not accepting. Even if you see the bug bounty program of the open AI and the bug crowd, they have clearly mentioned we are not accepting like biasness or racial or unethical responses in the report. So how do you see that? I would love to see the response on that if you, yeah, thank you.

Yuliya Shlychkova: So I think I like your comments and so I think they’re more like comments than questions. Thank you for sharing your experiences and for bug bounty, it took years for big companies to start doing bug bounties and vulnerability reporting. So I think that we, you, us, we just need to push for it and do this awareness. I’m sorry, we are human beings. It takes a while for us to accept the problem and start moving to the solution. As for the issue with the AI security being different, we also see this, we are using machine learning in our solutions for ages. And again, you need to ensure that you have representative data sets to train your model Then you’re dealing with these false positives, false negatives Like trying to find the bar where the performance is okay and acceptable But still, we have this human control on the top Because 100% confidence is not there That’s why we have human experts who are analysing the output and can interfere So what we call it, it’s multi-layered protection So we are trying to use different models, they’re checking on each other And at the end of the pyramid, there is human factor

Gladys Yiadom: Thank you, Yulia, for your response I’ll just take one online question and then I’ll hand it over to you So I believe, Johan, we do have one question online

Lufunu Chikalanga: More than one question Actually, there are three questions First of all, it was valued very much that the report was shared And also, there was positive feedback to Alan’s remarks with regard to trust must verify And having that transparency aspect with regard to cyber security and artificial intelligence One question to Yulia Please excuse if I misspell your name Lufunu Chikalanga from Osis Orisur Consulting He is interested to get some information about the role of open source and artificial intelligence And in particular, he raised the question whether it is enhancing security or increasing vulnerabilities

Yuliya Shlychkova: It’s a very good question So, on one side, we advocate for open source and it’s great that community being built around AI, models being shared, data sets being shared because it’s not, it’s limited innovation if it’s only proprietary models. So, and especially for regions like Africa and others, I think it gives opportunity to leverage innovation, like this openness, availability for open source information. On the other side, those who are deploying the models needs to own responsibility of security for the things they are using and to check, to audit, to do not admit that if someone developed this for you, it’s like 100% ideal. So, this would be my answer. Please, our panelists, add on to this.

Gladys Yiadom: Do you have any other comments from our panelists, perhaps on this topic?

AUDIENCE: Yeah, I like open source, but I would jump in and say, I think there is a role for closed source. I think it’s perfectly valid, for example, if you’re using AI for cybersecurity and that goes back to a question over here. I think it’s really good to have transparency, to know what you’re using as the training data. But yes, there’s the issue of innovation. I’m sure in the future, there’ll be a way beyond this. So, having a closed system that’s off the cloud, that’s proprietary, that’s able to learn and has that security badge.

Yuliya Shlychkova: I want to have something in the middle because we as a company, we do have transparency centers where in the secure environment, we are sharing the models we’re using, our data processing principles. So, this can be shared, but in a secured environment. Yeah, good point.

Gladys Yiadom: Thank you, Yulia. So, perhaps before taking another question online, Johan, we have one question in the audience here. Can we ask the person to answer?

AUDIENCE: Thank you. Sorry, do you hear me? Yes, we can hear you perfectly. Thank you very much for the panel. It’s very interesting. But I have a question maybe for Julian. I mean, the issue, I mean, when we speak about AI and security, okay, we have AI that could be used for enhancing security. We have the normal security issue about platform infrastructure data, data center, blah, blah, blah. And then we have the data security. I mean, when we speak, if there is any other dimension that we miss, I mean, there is in the algorithmic, in addition to these ones, I mean, because whenever, I mean, I have the feeling that it’s more data security and infrastructure security at large. But there is anything related to, let us say, machine learning process or algorithmic process that we have to consider according to your knowledge on this regard? I’m not sure it’s clear, but I have the feeling that we mix AI security with data security and infrastructure security. Is there is any other dimension? Model?

Yuliya Shlychkova: I have this headset. That’s why I feel that it’s also working as a mic. I believe that you’re right, that model security is also, should be considered in the holistic picture because this is black box and we can be in classic programming to be sure that the code will perform as intended. Therefore, it’s very important to test model. And we already saw adversarial attacks trying to impact the way how model functions, maybe to add noise and invisible for AI and let model misperform. So model security is also in the question, the algorithm. Definitely.

AUDIENCE: So I was just going to add today about, if you look at the traffic on the internet, 70 to 80% is API calls, which basically means it’s code talking to code. And each one of that is a vulnerability. So it’s not just data and critical infrastructure. I think it is also because we’re looking at algorithms which are made with different languages and we’re trying to map them together with interoperability and it is not working. So one update is happening. We’re not updating in real time. And I saw a piece of research that says it takes about 200 days on an average to find a security vulnerability. That’s 200 days for a hacker to access your data. So just think of all of us. We’re here at a conference. How many of you have ensured that your data and your devices are updated? And that’s the challenge, right? Yeah.

Dr. Alison: Thank you. I’ll jump in really quickly. I think some developers are like chefs. They have their cuisine and they use their process for the model and your mother’s process is probably different from mine. So I think there’s probably a lack of, what’s the word? Replicability in the model of who’s designed it and passing the steps to the next person. And once the model starts going, then we don’t know what’s happening and there’s no record. Thanks.

Gladys Yiadom: Sergio, do you have any comments?

Sergio Mayo Macias: Yes, please. Yes, indeed, indeed. I’m really happy to hear this question and I totally agree with Melody and Alison’s comments. And let’s say that we have an ideal world with no data problems and we have fair data, secure data, reliability data, and so on, so on. And data is not problem anymore. This is an ideal world. This is impossible at all, but let’s think about that. Afterwards. As you said, there is a programmer. We have the black box. We have the algorithm. And we have the human being there, using fair data, good data, data with no problems, with no bias, and so on. And what do we do with the black box? It is the same that happened, if you remember, with COVID crisis, with the vaccine. We have the chemistry and so on. The chemistry is data. The components. But afterwards, we have the people working with those components. Let’s say the programmers here with the black box. Do we trust them? And as I already said, at the end of the day, trust is not about data. It’s trust about human beings. So we have going beyond trusting data. We have to go beyond trusting the black box. We have to think about if we are ready to trust in human beings and developing their models.

Gladys Yiadom: Thank you, Sergio. Almost a philosophical question, right? At the end of the day. Yes. Indeed. The key in this. Thank you. Johan, do we have another question online, please?

Jochen Michels: Yes, we have. Some of them were partly answered by Sergio, for example. But I will first share the question. One question is by Max Kevin Belly there. He would like to know what is the relationship between regional legislation and limitations with regard to artificial intelligence and also on… on the level of different states, and whether that is a hurdle to try to find harmonized rules and harmonized, global harmonized regulations in that regard. So some standards, that is a question perhaps to Melodina and Sergio, and there is one further question by Maha Ahmad, and that was also particularly answered by Sergio, it’s about classification of AI technology, and Sergio already referred to the European AI Act and the risk-based approach, but perhaps Alison or Melodina, perhaps you can share examples from other regions, whether there is the same approach or whether there is another approach regarding high-risk AI, low-risk AI, and so forth. Thank you. That were the questions here from the online attendees.

Gladys Yiadom: Thank you, Johan. So perhaps Melodina.

Melodena Stephens: Okay, so the first question. The first question was on AI regulations and regionalization. Okay, so the EU is the only one that I would look at it currently right now that has…

Gladys Yiadom: No, it’s good. It’s working.

Melodena Stephens: Harmonized across its 27 countries, but we also see that it is in implementation, right? So it will take some time, and right now what we don’t have is time. With the rest of the world, what I’m seeing is a strong trend towards bilateral agreements, and part of it is on defense, part of it is on data sharing, and another big one on knowledge and talent. So we’re seeing a slightly, so much more polarized world where it’s focusing on bilateral ties, and this becomes very interesting. If you want to take a step further, is it about governments? Is it about tech firms? I think that is a far more interesting discussion for me. If I look at the 500 cables that are undersea that are transmitting about 99… 99% of the data, most of them have private ownership. If I see data centers, most of them are again private. So I think there’s a whole other discussion which we are not taking into place in policy regulations, which is the role of private sector, many of which these companies are having revenues and market capitalizations much larger than countries. So you can see a power asymmetry coming in over there. I think the second question was on… Classification right here. So I know this is an interesting one. So besides risk, I’m gonna move away from risk. There’s been a lot of debate on whether we should look at it as AI technologies or AI for industry regulations. And this is a hard one because what we’re seeing right now, if I ask you a question, is Tesla a car with software or is it software disguised as a car? What do you think it is? And therefore, how should it be regulated? And the very fact, if we don’t have an answer tells… But the fact it’s… Sorry, he says software.

Charbel Shbir: Hello. Yes, it is. Hello, my name is Charbel Shbir. I’m president of Lebanese ISOC. Regarding your question, I think it’s a software developed by a person or a developer engineer. So therefore, the regulation must be… He has liability regarding the software that he developed. This is my answer. It’s not about the car as it’s a car, because it is autonomous and it’s worked by itself. Reason why he should hold the responsibility because he developed the software. But I have another intervention.

Gladys Yiadom: But I just wanna add one point. You’re right, but when it is registered, how is it registered?

Charbel Shbir: It will be registered as a car. It is registered as a car, but the responsibility is who’s driving the car.

Melodena Stephens: That is why there are challenges. So think of your health app. Apple, is it a watch or is it a health app, right? And I think this is where we’re gonna have these interesting discussions on jurisdiction that AI will move across industries and we don’t have oversight. So the purpose with which it was developed for one purpose allows it to scale into a totally different industry for another purpose and we don’t have transparency on weights, why were those weights developed? It was developed for health, but now it’s being used in X case. And I think that’s the challenge. So thank you, thank you for that answer.

Gladys Yiadom: Thank you, and Melodina, perhaps ask Sergio if you have any further comments regarding the first question that was asked and then I will hand it over to Alison.

Sergio Mayo Macias: Well, actually, yes, it’s just more or less repeating the same that you said, but also I agree with Melodina that regarding data, being able to establish contracts for ensuring trust is the key issue now. Now with data spaces in European Union, we are trying to skip that problem for SMEs and for citizens and to establish this safe space with no need of contracts, with no need of agreements for sharing data. And actually I am aware that this model is being, is let’s say also used in some countries in Latin America. They are consulting us on why we are doing these data spaces and how they work. And they are trying to do more or less the same in South America for sharing data without the need of establishing this type of one contract or one agreement for each time that we share data.

Gladys Yiadom: Thank you, Sergio. Alison, please.

Dr. Alison: Thank you, just to jump back in. So the question of high risk contexts. So I was at Warwick University a couple of weeks ago with some of the MSc students coming in from industry, from all different sectors, critical national infrastructure, nuclear, I mean, everything. can imagine. And everyone wants to use AI for cybersecurity, because of course, we’re just human. But once we have the developers, that was a really interesting point over here about the developer bearing liability. But once the model starts modeling, then it’s gone from the developer. It’s gone from their hands. It’s not in their control anymore. So there was a conversation. And again, from another security institute, the Cognitive Security Institute, really interesting discussion there. And we are human. So a parental relationship, 80% of people we can train, but the other 20%, you know, it doesn’t matter how smart they are, or what, you know, whether they’re on the board, but these are the people that will always click on the link. We know that because that’s human psychology. So do we implement some security and say, okay, we’re going to just implement the security to stop that happening. So let’s secure the system and take out the 20%. I don’t mean that, you know, let’s secure the system so that that can’t happen. And that’s one of those trade-offs that Meledina was speaking about earlier. So maybe the company says, yeah, we’ll have zero trust, we’ll have best practice. But in the end, let’s put some baseline security in just to take away some of that baseline risk. Maybe that’s how we deal with this high risk. And, you know, to get back to our issue of innovation earlier on, it’s a really difficult space, but we can see this unimaginable innovation out there in the future. And it’s just really trying to navigate this difficult space at the moment, so that we can reap, hopefully get to the benefits. Thank you.

Gladys Yiadom: Thank you, Alison. So we’ll take another question from the audience. There’s one lady and then we’ll introduce her.

Christelle Onana: Good morning. My name is Christelle Onana. I work for EODNEPAD, which is the developing agency of the African Union. So my question goes to maybe Meledina and Alison. We discussed earlier, and you say that harmonization happen ideally. So then my question goes of, should we, so last July, the African Union adopted a continental AI strategy. There is quite a lot that needs done on the continent. The countries have different labels of policies and regulation defined. So if there is a continental strategy that has been adopted, it should be implemented sooner or later nationally. Should we then not talk about harmonization because we talk about a system that is global and is difficult to, to may put it to geo-localize to, you know what I mean? That’s one. And what will be your recommendation about then implementing the strategy that has been defined, going about it nationally, engaging with the countries for the development agency that we represent? Thank you.

Melodena Stephens: So you have a mic perhaps, yes. Okay, so I’ll start. I was very pleased to see the strategy, 55 countries, massive, massive, massive. I think we underestimate Africa as a continent and there is a chance now to be actually in the forefront. Now, there are a couple of things that are important to realize between the US private sector model, which is on market capitalization and the European Union model. There are two different things that Africa will have to decide. Are we in it for just the profits for the economy, this thing, or is it also about lifestyle? Because if you look at EU, I remember one of the discussions that was happening in Germany was, Why don’t you list on the stock market? Why don’t you want to be a trillion dollar company? And one of the founders actually said, well, I’m happy with the amount of money that I’ve earned. I can take care of the families. Why do I need to grow? It provides enough. And that’s very different from the other mindsets. That’s one thing that Africa would have to figure out because you’ve got a lot of societal values. Family is important. Society is important. What do you want to focus on? The second thing that I think is important is just to understand what are the assets within Africa. So we know that Cobalt, for example, DRC is a major provider. If we could go across the 55 countries and find unique assets that you could tie in, I think there is a win-win situation for all 55 countries that’s there. This is really important in the future because we see across the world, a lot of countries have assets, but they are sold as commodity products, not value added. And again, I like the EU model because you look at the trade, intra-trade within the EU model, it’s 60 to 70%, which I think is huge. So there is enough for everyone in Africa to benefit if you’re focusing on intra-trade. Non-harmonization, what would come? I think that’s important as standards for interoperability, right? So all of us with USBC, thank you European Union for that. But I think interoperability will be key on how you would want to make it work and even deciding who would be your key markets because who you would sell to will also decide whether you want to align your standards with them. And I think that’s things that you would have to decide at a strategic level.

Dr. Alison: Thanks for that, Melodina. I think I have to come back from an education piece and talk about the ideal world would be something like mobilizing the youth. And there’s all of the IGF youth ambassadors here from different countries. One of a young guy I’ve worked with from Ghana, IGF youth movement there and this vitality. and young people, and really think even going younger and younger, going into schools and doing an education piece that makes sense. So your parents’ business, what happens to your parents’ business, really at that level, so it’s really, it’s understandable the risks that are involved, so that people can embrace the risks and young people particularly can mobilise and get involved and take the actions that they need to, that will help families and help businesses locally. So maybe from the education piece, I mean maybe Kaspersky has something to say on an education piece.

Yuliya Shlychkova: I was just listening to you. Education is indeed important and I think education helps harmonisation. It’s like when people are connected with their minds, it automatically motivates more harmonisation. And I believe that education efforts should also be shared responsibilities that not only governments, but also private sector, university, parents, so that it’s also like a common goal and as a private company ready to contribute.

Gladys Yiadom: Thank you Julia, Melodina and Alison for your comments. Perhaps, Johan, do we have another question online?

Jochen Michels: Currently, we do not have questions online. There is a little bit of discussion between the attendees, but no direct questions to speakers.

Gladys Yiadom: Thank you Johan. We have a question on the audience. Can you ask you, sorry to come by ask your question. So please share your name, organisation and who you address your question to please.

AUDIENCE: Hi, I’m Odas. I’m from… Digital Uganda. We’re based in Kigali, Rwanda. And I want to ask Yulia regarding what you mentioned around data poisoning and open source datasets. So my question is around, have you seen some of these instances where there’s data poisoning and open source datasets and are there tools in the preparatory open source that can be used in security audits of such open source datasets?

Yuliya Shlychkova: We did see data poisoning, unfortunately. Because I’m not a technical expert, so I think I would not be able to move further. But even at the hugging phase, there were some backdoors and so ready to exchange business cards with you and connect with our experts who can provide more information. In terms of AI audit, we also see that this is raising trends. And in Europe, already more companies who are providing audits, adding AI audits in their portfolio. And I was able to chat with some of them. And what they’re saying is that they’re also developing methodology. Their first clients, it’s also their like pilots, pilots are grounded. They’re testing this methodology. So I believe we will see more and more of this.

Gladys Yiadom: Thank you, Yulia. We have another question from the audience.

Francis Sitati: Thank you very much. My name is Francis Sitati from Communications Authority of Kenya, which is a regulator for the ICT sector. My question is about the ethical considerations of AI. When you talk about innovation in AI, you can’t miss to talk about the ethical issues, especially with regard to the psychological effects of developing the data models. We’ve seen big tech companies using proxies, to, you know, leverage the affordable labor or cheaper labor within developing countries. So what do you think are some of the considerations in terms of AI practices, to promote AI practices with respect to the ethical use of AI?

Melodena Stephens: So this is a tough one, right? Because when I look at ethics, I think ethics are great. The line between good and bad is a difficult one. So on one hand I go, I want to increase the level of income. So I come and I choose cheap labor, but I’m also willing to close when I find another cheaper labor source. And this is the challenge we have to face, right? Or I want to introduce AI, but I don’t have any implications on the consequences to environment as an example, water consumption, electricity, e-waste recycling. E-waste is far more toxic than carbon dioxide, but we don’t have enough e-waste recycling centers. So with ethics, I think we need to, and there are many standards. I think the UNESCO has put up one recently at UNGA. They all agreed on certain standards. The problem is again, operationalizing it. So there are guidelines. And I think it’s for us to figure out what does that mean for our country and our people? And I always like it to be people-centric. So if I’m saying transparency, why do I want transparency for my people? And it could be because I want a cultural, I want it to be culturally sensitive. If I think in my culture, a child or someone to the year of 16 or 18, not necessarily 12, then I want it also to be aligned for my culture. Family is important. And I, maybe in my culture, it’s. collective family, it’s uncles, aunts, extended family. So I think translation is the difficulty which we don’t have alignment worldwide. So we have all of these things, we don’t know how to operationalize it, and we don’t know how to go and implement it. So right now at this point, because AI is being perceived as the in thing and because of national security issues, there’s a huge investment in AI. I wanted to mention this, the current tech debt is around 40 to 50%. That means if you put in 1 million into a project, you need to keep half a million for upgrading the system, retraining the system for cybersecurity. We are not considering that and that is leading to a lot of failure. Currently, right now, the AI failure rates is around 50 to 80%. So I just want to share this data set with you. 1.5 million apps on Google and Apple has not been updated for two years. 1.5 million apps. That’s a data vulnerability point. That’s a cybersecurity issue. And in 2022, Apple removed something like a half a million apps. So we’re seeing that we’re starting businesses using AI and the first question is why? What is the benefit for the human being? And the second thing is we’ve not considered we can’t sustain the business. So it becomes a cybersecurity issue. So yes, I think AI ethics, I mean, I’m happy to sit with you separately. IEEE also has a policy on a couple of these things, but they’re all guidelines. We aren’t able to implement it because there are cultural nuances and interpretation.

Gladys Yiadom: Thank you very much, Melodina, for highlighting this. Perhaps Sergio, Julia, any comments? Oh, Alison. No, please. Sergio, please go ahead first and then Alison.

Sergio Mayo Macias: No, no problem. I totally agree that ethics is a grey field. It is difficult to mandate ethics. Let’s say, for instance, if you are hiring people, you are a recruiter, or you are using AI for helping in your recruitment, is it fair, for instance, if you want, let’s say, a German native speaker to develop a system promoting CVs received from Germany? Are you avoiding to use CVs received from other countries? Are you going to read everything in the CV for filtering before calling people to interview? So they are difficult questions. So it is ethic or it is not ethic to define, to develop this type of algorithmics. I was mentioning before the algorithmic fairness. This is something that we have to have in mind, of course, fairness, but fairness is different than ethics. So we should think before developing an AI system if we want to use it for a personal use or for including other people being involved in the use of the AI system.

Gladys Yiadom: Thank you, Sergio. Alison, please.

Dr. Alison: Yes, thanks. This is probably outside of my domain, but I think we discussed earlier that ethics is probably something that’s a cultural norm. So I think maybe ethics for you are slightly different for ethics for different people from around the world. So maybe it’s something you’ve probably already done all of this and thought of this, but maybe something bottom up. What does ethics mean to you? Where does it come from? What are the norms of ethics? And this is probably an education piece at local schools, schools getting involved in consultations and helping you develop those. I’m sure you’ve probably done all of this. and then leveraging, I think, as Melodina was saying earlier, your unique assets, your unique resources with those tech companies, because the tech companies, we know who they are, some of them, well, they’re not, actually, I don’t see any of the exhibition stands, actually, but it’s quite interesting, they’ve got so much weight in the world, but I think if you can look at your assets and say, well, these are our unique assets, and maybe leverage that in this really imbalanced world with those tech companies, maybe, I don’t know, I hope that helps. Thank you.

Gladys Yiadom: Thank you, Alison. Yulia, perhaps, as Kaspersky developed last year, the ethical principles.

Yuliya Shlychkova: Yes, we believe that ethics is important, transparency is important, and also, in addition to mandatory regulation, self-imposed standards also is vital in the whole ecosystem, and we, as a company, developed our own principles, ethical principles, we mandatory declared to adhere to, and I think this is a good practice, and more and more companies, they joined in different pledges, showing their principles, so this has already happened, this is good, but I also wanted to comment that we even internally had this discussion, whether the usage of AI can influence the workforce, because right now, in Kaspersky, we have, like, 5,000 engineers, and our top, top-notch researchers, and we’re really proud of our research teams, because they’re able to discover very advanced cyber-Spanish campaigns, and our researchers are part of the community, which are, like, 100, 300 in the world, so it’s very unique talents, but they all started being regular virus analysts, investigating very simple viruses before they grew up to that level, so we were thinking whether introducing AI to do more simple tasks will kill this journey, maturity journey and actually we ended up with positive thinking because we believe with more AI being used to automate skills like the professional will shift from doing things manually from maybe being more operator of AI model so skills will be a little bit different but still the journey will be there and humans will be required. So at least internally we hope that it will affect human employment but still will introduce more opportunities and different job profiles.

Gladys Yiadom: Absolutely, I think this also has been one of the key questions that we’ve hear in international forum is about the future of work in the context of AI. Thank you very much for sharing that, Julia. We can also take one or two other question. Are there any question from the onsite audience? I don’t see any. Johan, do we have one or two last question? Oh, see, we have one, sorry.

AUDIENCE: Hello, can you hear me?

Gladys Yiadom: Yes, we can.

AUDIENCE: Okay, my name is Paula. I am from GIZ African Union. I think in the presentation, you showed that there was some cyber incidents that had happened based off of AI. But do we have any case studies on cybersecurity incidents based off of AI that have destabilized the nation? For instance, any sort of use of autonomous weapons to attack a particular nation and that.

Jochen Michels: We cannot hear.

Yuliya Shlychkova: But we also started to see more advanced use, used by advanced actors But it can happen in a very persistent manner For example, there is a collection of malware samples for all cyber security companies to refer to And we see that for some time, a malicious actor was sending samples With specific logic So that all cyber security engines later, trained on these samples Would recognize or not recognize this thing I’m trying to explain this in simple words But definitely we see that more advanced attackers are trying also to use these And let’s say to affect machine learning algorithms which are working in cyber security software So that later when they release their highly capable cyber-espionage campaigns The defense technologists would not see it or would act something So unfortunately we will see this more, but this is a race and we’re used to this in cyber security Attackers come in with new technology, we come in with new defense And in defense we also, in layers responsible for anomaly detection We also use a very highly efficient AI which can detect anomalies So we are good, we are on par, so there is hope

Gladys Yiadom: Thank you very much Julia, I think this leads us to the end of the session I would like to first thank our speakers for joining us today Online moderators and participants online and on-site We are available to continue this conversation Please do not hesitate to reach out to us and we’ll be happy to follow up with that The guidelines will be available online, so please also do not hesitate hesitate to check them. Thank you very much.

D

Dr. Allison Wylde

Speech speed

170 words per minute

Speech length

1832 words

Speech time

645 seconds

Trust in AI is subjective and culturally dependent

Explanation

Allison Wylde argues that trust in AI is not a universal concept but varies based on individual perceptions and cultural backgrounds. This subjectivity makes it challenging to measure or quantify trust in AI systems.

Evidence

Allison Wylde mentions that trust is naturally given by humans, such as children trusting their parents without thinking.

Major Discussion Point

Trust and AI Adoption

Zero trust approaches should be integrated into AI development

Explanation

Allison Wylde suggests implementing zero trust principles throughout the AI ecosystem. This approach requires continuous verification of identities and permissions before granting access or trust.

Evidence

Allison Wylde mentions the complex, dynamic ecosystem of AI with multiple moving parts that need continuous monitoring.

Major Discussion Point

AI Security Challenges

Agreed with

Yuliya Shlychkova

Agreed on

AI security challenges

G

Gladys Yiadom

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Over 50% of infrastructure companies have implemented AI despite trust concerns

Explanation

Gladys Yiadom presents data showing widespread adoption of AI in infrastructure companies. This suggests that organizations are implementing AI technologies despite ongoing concerns about trust and security.

Evidence

Kaspersky study revealing that more than 50% of infrastructure companies have implemented AI and IoT in their infrastructure.

Major Discussion Point

Trust and AI Adoption

Y

Yuliya Shlychkova

Speech speed

123 words per minute

Speech length

2812 words

Speech time

1363 seconds

AI is still software and not 100% safe, leading to cybersecurity concerns

Explanation

Yuliya Shlychkova emphasizes that AI systems are fundamentally software and thus inherently vulnerable to security risks. This leads to ongoing cybersecurity concerns as AI adoption increases.

Evidence

Yuliya mentions registered cases of AI being used by cybercriminals and AI systems being attacked.

Major Discussion Point

AI Security Challenges

Agreed with

Allison Wylde

Agreed on

AI security challenges

AI models can be vulnerable to data poisoning and adversarial attacks

Explanation

Yuliya Shlychkova highlights specific vulnerabilities in AI models, including data poisoning and adversarial attacks. These vulnerabilities can compromise the integrity and performance of AI systems.

Evidence

Examples of attacks include data poisoning of open source datasets, backdoors, and prompt injection targeting AI algorithms.

Major Discussion Point

AI Security Challenges

Agreed with

Allison Wylde

Agreed on

AI security challenges

Open source AI models may introduce new security vulnerabilities

Explanation

Yuliya Shlychkova discusses the potential security risks associated with open source AI models. While beneficial for innovation, these models can also introduce vulnerabilities if not properly audited and secured.

Evidence

Mention of backdoors and vulnerabilities found in open source datasets used to train models.

Major Discussion Point

AI Security Challenges

Education efforts can help build trust and harmonization in AI adoption

Explanation

Yuliya Shlychkova emphasizes the importance of education in fostering trust and harmonization in AI adoption. She suggests that shared educational efforts can lead to better understanding and alignment in AI implementation.

Evidence

Yuliya mentions that education helps harmonization by connecting people’s minds and motivating more alignment.

Major Discussion Point

Trust and AI Adoption

Agreed with

Melodena Stephens

Agreed on

Importance of AI education and literacy

Continuous training on AI risks and best practices is necessary for organizations

Explanation

Yuliya Shlychkova stresses the need for ongoing training within organizations on AI risks and best practices. This continuous education helps maintain awareness and preparedness for evolving AI-related challenges.

Evidence

Yuliya mentions the importance of regular updates to training courses and conducting field exercises.

Major Discussion Point

AI Education and Literacy

Agreed with

Melodena Stephens

Agreed on

Importance of AI education and literacy

S

Sergio Mayo Macias

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

The EU AI Act provides a model for regional AI governance

Explanation

Sergio Mayo Macias discusses the EU AI Act as an example of regional AI governance. He suggests that this model could be adapted or considered by other regions developing their own AI regulations.

Evidence

Sergio mentions that some Latin American countries are consulting on the EU’s data spaces model for potential implementation.

Major Discussion Point

AI Regulation and Governance

Differed with

Melodena Stephens

Differed on

Approach to AI regulation and governance

Algorithmic fairness is crucial but challenging to define and implement

Explanation

Sergio Mayo Macias highlights the importance of algorithmic fairness in AI systems. However, he notes that defining and implementing fairness in algorithms is complex and can vary based on context and use case.

Evidence

Sergio provides an example of AI use in recruitment, questioning whether filtering CVs based on language proficiency is fair or ethical.

Major Discussion Point

Ethical Considerations in AI

M

Melodena Stephens

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Lack of algorithmic transparency makes it difficult to audit AI systems

Explanation

Melodena Stephens points out that the lack of transparency in AI algorithms poses challenges for auditing these systems. This opacity can make it difficult to identify and address potential biases or errors in AI decision-making.

Evidence

Melodena mentions the example of Google’s Willow, which performs calculations in minutes that would take supercomputers septillions of years, making it practically impossible for humans to trace or audit.

Major Discussion Point

AI Security Challenges

Africa has an opportunity to develop its own AI strategy and standards

Explanation

Melodena Stephens discusses the potential for Africa to take a leading role in AI development by creating its own strategy and standards. She suggests that Africa can leverage its unique assets and cultural values in shaping its approach to AI.

Evidence

Melodena mentions the recent adoption of a continental AI strategy by the African Union, covering 55 countries.

Major Discussion Point

AI Regulation and Governance

Differed with

Sergio Mayo Macias

Differed on

Approach to AI regulation and governance

AI ethics guidelines exist but are difficult to operationalize

Explanation

Melodena Stephens acknowledges the existence of AI ethics guidelines but points out the challenges in implementing them practically. She highlights the difficulty in translating broad ethical principles into concrete actions and decisions in AI development and use.

Evidence

Melodena mentions various ethical standards, including those from UNESCO, but notes the problem of operationalizing these guidelines in different cultural contexts.

Major Discussion Point

Ethical Considerations in AI

AI literacy should distinguish between digital skills and AI-specific knowledge

Explanation

Melodena Stephens emphasizes the need to differentiate between general digital literacy and AI-specific literacy. She argues that understanding AI requires a more specialized set of knowledge and skills beyond basic digital competence.

Evidence

Melodena points out that current digital literacy often focuses on digital skills training, which is not equivalent to AI literacy.

Major Discussion Point

AI Education and Literacy

Agreed with

Yuliya Shlychkova

Agreed on

Importance of AI education and literacy

U

Unknown speaker

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

There is a need to increase AI literacy among professionals and the general public

Explanation

This argument emphasizes the importance of improving AI literacy across society. It suggests that both professionals and the general public need a better understanding of AI technologies and their implications.

Major Discussion Point

AI Education and Literacy

Youth mobilization and education are key to responsible AI adoption

Explanation

This argument highlights the role of young people in shaping the future of AI adoption. It suggests that educating and engaging youth is crucial for ensuring responsible and ethical use of AI technologies.

Major Discussion Point

AI Education and Literacy

Self-imposed ethical standards by companies are important alongside regulation

Explanation

This argument emphasizes the value of companies developing their own ethical standards for AI use. It suggests that these self-imposed guidelines can complement formal regulations in promoting responsible AI practices.

Major Discussion Point

AI Regulation and Governance

AI’s impact on the workforce requires careful consideration of human-AI collaboration

Explanation

This argument addresses the potential effects of AI on employment and work processes. It suggests that organizations need to thoughtfully plan for how humans and AI systems can work together effectively.

Major Discussion Point

Ethical Considerations in AI

Agreements

Agreement Points

AI security challenges

Allison Wylde

Yuliya Shlychkova

Zero trust approaches should be integrated into AI development

AI is still software and not 100% safe, leading to cybersecurity concerns

AI models can be vulnerable to data poisoning and adversarial attacks

Both speakers emphasize the need for robust security measures in AI development and implementation, highlighting various vulnerabilities and the importance of continuous verification.

Importance of AI education and literacy

Yuliya Shlychkova

Melodena Stephens

Education efforts can help build trust and harmonization in AI adoption

Continuous training on AI risks and best practices is necessary for organizations

AI literacy should distinguish between digital skills and AI-specific knowledge

The speakers agree on the critical role of education in fostering responsible AI adoption, emphasizing the need for specialized AI literacy and continuous training.

Similar Viewpoints

Both speakers highlight the complexity of implementing ethical guidelines and fairness in AI systems, acknowledging the challenges in translating broad principles into practical applications.

Sergio Mayo Macias

Melodena Stephens

Algorithmic fairness is crucial but challenging to define and implement

AI ethics guidelines exist but are difficult to operationalize

Unexpected Consensus

Regional approach to AI governance

Sergio Mayo Macias

Melodena Stephens

The EU AI Act provides a model for regional AI governance

Africa has an opportunity to develop its own AI strategy and standards

Despite representing different regions, both speakers advocate for regional approaches to AI governance, suggesting that tailored strategies can be more effective than global one-size-fits-all solutions.

Overall Assessment

Summary

The main areas of agreement include the need for robust AI security measures, the importance of AI-specific education and literacy, and the challenges in implementing ethical guidelines and fairness in AI systems.

Consensus level

Moderate consensus exists among the speakers on key issues, particularly regarding security challenges and the importance of education. This level of agreement suggests a shared recognition of critical areas that need addressing in AI development and implementation, which could potentially guide future policy and industry practices.

Differences

Different Viewpoints

Approach to AI regulation and governance

Melodena Stephens

Sergio Mayo Macias

Africa has an opportunity to develop its own AI strategy and standards

The EU AI Act provides a model for regional AI governance

While Melodena Stephens emphasizes the potential for Africa to develop its own unique AI strategy, Sergio Mayo Macias highlights the EU AI Act as a model for regional governance. This suggests different approaches to AI regulation in different regions.

Unexpected Differences

Focus of AI literacy

Melodena Stephens

Unknown speaker

AI literacy should distinguish between digital skills and AI-specific knowledge

There is a need to increase AI literacy among professionals and the general public

While both speakers agree on the importance of AI literacy, Melodena Stephens unexpectedly emphasizes the need to differentiate between general digital skills and AI-specific knowledge, which adds a layer of complexity to the discussion on AI education.

Overall Assessment

summary

The main areas of disagreement revolve around approaches to AI regulation, methods of building trust in AI, implementation of ethical guidelines, and the focus of AI literacy efforts.

difference_level

The level of disagreement among the speakers is moderate. While there are differing perspectives on specific approaches and implementations, there is a general consensus on the importance of addressing AI security, ethics, and education. These differences highlight the complexity of global AI governance and the need for flexible, context-specific solutions.

Partial Agreements

Partial Agreements

Both speakers agree on the importance of trust in AI adoption, but they propose different approaches. Allison Wylde emphasizes the subjective nature of trust, while Yuliya Shlychkova suggests education as a means to build trust and harmonization.

Allison Wylde

Yuliya Shlychkova

Trust in AI is subjective and culturally dependent

Education efforts can help build trust and harmonization in AI adoption

Both speakers recognize the importance of ethical guidelines for AI, but they differ in their approach. Melodena Stephens highlights the challenges in operationalizing existing guidelines, while Yuliya Shlychkova emphasizes the role of self-imposed company standards.

Melodena Stephens

Yuliya Shlychkova

AI ethics guidelines exist but are difficult to operationalize

Self-imposed ethical standards by companies are important alongside regulation

Similar Viewpoints

Both speakers highlight the complexity of implementing ethical guidelines and fairness in AI systems, acknowledging the challenges in translating broad principles into practical applications.

Sergio Mayo Macias

Melodena Stephens

Algorithmic fairness is crucial but challenging to define and implement

AI ethics guidelines exist but are difficult to operationalize

Takeaways

Key Takeaways

Trust in AI is subjective and culturally dependent, making it challenging to establish universal standards

AI systems face significant cybersecurity challenges, including data poisoning and adversarial attacks

Harmonizing AI regulations globally is difficult due to cultural and regional differences

Ethical considerations in AI development and deployment are crucial but challenging to operationalize

Increasing AI literacy among professionals and the general public is essential for responsible AI adoption

Resolutions and Action Items

Kaspersky has developed guidelines for AI security that organizations can use to improve their AI systems’ security

Companies should consider developing and adhering to self-imposed ethical standards for AI use

Unresolved Issues

How to effectively harmonize AI regulations across different jurisdictions and cultures

How to operationalize AI ethics guidelines in practical implementations

How to balance innovation with security concerns in AI development

The long-term impact of AI on the workforce and job markets

How to ensure algorithmic fairness and transparency in AI systems

Suggested Compromises

Adopting a risk-based approach to AI regulation, similar to the EU AI Act, to balance innovation and security

Focusing on interoperability standards rather than full harmonization of AI regulations

Leveraging unique regional assets and cultural values in AI development strategies

Implementing multi-layered protection in AI systems, combining automated AI security with human oversight

Thought Provoking Comments

Trust is subjective. So maybe I trust you. I think I probably do. I don’t really know you too well, but I trust you. I’m a human. And so our human behavior is naturally to trust. Children trust their parents without thinking about it. And I think that’s one of the issues in business. People see a new technology and they want to be with the top technology, with the new technology. And of course they want to use it really without thinking.

speaker

Allison Wylde

reason

This comment challenges the assumption that trust in AI is a simple yes/no question. It introduces the complexity of human psychology and how it relates to trust in technology.

impact

This shifted the discussion from a technical focus to considering human factors and psychology in AI adoption and trust. It led to further exploration of how to define and measure trust in AI contexts.

We almost see in the wild attacks on every component of AI development chain. Therefore, cybersecurity should be addressed. We need to talk about this and help not to stop AI usage but to do it safely and have basis for this trust in for AI use in the organization.

speaker

Yuliya Shlychkova

reason

This comment provides a comprehensive view of the cybersecurity challenges in AI, emphasizing the need for a holistic approach to security.

impact

It broadened the discussion from general trust issues to specific cybersecurity concerns across the AI development chain. This led to more detailed conversations about security measures and best practices.

If you look at how many policies are there for cybersecurity, I think there are more than 100 countries which have policies. While some of them are on security and they’re looking at algorithmic security, we see recently over the last two years maybe more focusing on critical infrastructure. And there’s two things driving it. One is we’re moving away from individual security or corporate security or industry security to national security.

speaker

Melodena Stephens

reason

This comment highlights the evolving nature of AI security policies and their increasing focus on national security, introducing a geopolitical dimension to the discussion.

impact

It shifted the conversation towards considering the broader implications of AI security at a national and international level, leading to discussions about the need for global cooperation and standards.

We need to provide cybersecurity by default. We cannot send the elephant in the room to final users. We have to define safe spaces for using the AI systems and we cannot expect final users to do it.

speaker

Sergio Mayo Macias

reason

This comment challenges the current approach to AI security by emphasizing the need for built-in security measures rather than relying on end-users.

impact

It sparked a discussion about the responsibilities of AI developers and providers in ensuring security, leading to conversations about potential regulatory approaches and industry standards.

Currently, right now, the AI failure rates is around 50 to 80%. So I just want to share this data set with you. 1.5 million apps on Google and Apple has not been updated for two years. 1.5 million apps. That’s a data vulnerability point. That’s a cybersecurity issue.

speaker

Melodena Stephens

reason

This comment provides concrete data on AI failures and vulnerabilities, highlighting the scale of the cybersecurity challenge in AI applications.

impact

It brought a sense of urgency to the discussion and led to more focused conversations about practical steps needed to address these vulnerabilities and improve AI reliability.

Overall Assessment

These key comments shaped the discussion by broadening its scope from initial considerations of trust to encompass complex issues of human psychology, cybersecurity across the AI development chain, national security implications, the need for built-in security measures, and the urgent challenges posed by current AI vulnerabilities. The discussion evolved from theoretical considerations to practical concerns and potential solutions, emphasizing the multifaceted nature of AI security and the need for collaborative, proactive approaches across various stakeholders.

Follow-up Questions

How can we develop a conceptual framework for trust in AI?

speaker

Allison Wylde

explanation

Trust is subjective and can’t be measured with traditional statistical methods. A conceptual framework is needed to define, measure, and implement trust in AI systems.

How can we address the issue of shadow AI use in organizations?

speaker

Yuliya Shlychkova

explanation

Many employees are using AI tools without organizational oversight, potentially exposing confidential information. Understanding the scale of shadow AI use is crucial for security.

How can we ensure algorithmic fairness in AI systems?

speaker

Sergio Mayo Macias

explanation

Even with good data, the human creating the algorithm must ensure fairness. This is a key point in addressing bias and ethical concerns in AI.

How can we balance national security concerns with individual privacy in AI regulations?

speaker

Melodena Stephens

explanation

This trade-off is crucial in developing AI policies and regulations that protect both national interests and individual rights.

How can we address the challenges of AI security given that AI responses can be different each time?

speaker

Audience member

explanation

Traditional security measures may not be effective for AI systems that produce variable outputs, creating new challenges for vulnerability detection and mitigation.

How can we develop and implement AI-specific protection standards for organizations using applied AI systems?

speaker

Gladys Yiadom

explanation

Current standards mostly cover AI foundation models, leaving a gap in protection for organizations implementing applied AI systems based on existing models.

How can we effectively harmonize AI regulations across different jurisdictions, particularly in Africa?

speaker

Christelle Onana

explanation

With the adoption of a continental AI strategy in Africa, there’s a need to understand how to implement it nationally while considering the global nature of AI systems.

What are the ethical considerations in AI development, particularly regarding the use of cheaper labor in developing countries?

speaker

Francis Sitati

explanation

There’s a need to explore ethical AI practices that balance innovation with fair labor practices and cultural sensitivities.

Are there case studies on AI-based cybersecurity incidents that have destabilized nations?

speaker

Paula from GIZ African Union

explanation

Understanding the real-world impact of AI in cyber warfare and national security is crucial for developing appropriate defenses and policies.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #103 Aligning strategies, protecting critical infrastructure

WS #103 Aligning strategies, protecting critical infrastructure

Session at a Glance

Summary

This discussion focused on strategies for protecting critical infrastructure cybersecurity through international cooperation and multistakeholder approaches. Participants emphasized the need for a holistic approach to address the growing cyber threats to interconnected critical infrastructure systems. Key points included the importance of developing common definitions and standards across jurisdictions to reduce fragmentation, which was identified as a major security risk. Speakers highlighted the crucial role of public-private partnerships and information sharing, while noting challenges around trust and incentives for collaboration.

The discussion explored how capacity building, especially for under-resourced countries, is essential to improve global cybersecurity. Participants stressed the need for policies that enable rather than restrict private sector cybersecurity efforts, particularly around data flows and encryption. The intersection of critical infrastructure with commercial technologies was noted as an important consideration for future-focused policies. Speakers also addressed the role of international norms and agreements in combating cybercrime and promoting responsible state behavior in cyberspace.

There was broad agreement on the importance of multistakeholder collaboration to address the complex challenges of critical infrastructure protection. Participants emphasized that this collaboration must be meaningful and inclusive, ensuring diverse perspectives are incorporated. The discussion concluded with calls for more concrete action to address growing cyber threats, noting the massive economic impact of cybercrime and the urgency of improving global cybersecurity resilience.

Keypoints

Major discussion points:

– The need for a holistic, coordinated approach to protecting critical infrastructure cybersecurity across sectors and borders

– The importance of international cooperation, standards, and capacity building to address cybersecurity challenges

– The role of public-private partnerships and multi-stakeholder collaboration in improving critical infrastructure protection

– The impact of broader technology policies (e.g. on encryption, data flows) on critical infrastructure cybersecurity

– The need to move from high-level discussions to concrete, actionable measures

Overall purpose:

The goal of this discussion was to examine strategies for aligning efforts to protect critical infrastructure cybersecurity across sectors and countries, and to identify key challenges and opportunities for improving critical infrastructure protection through policy, partnerships, and international cooperation.

Tone:

The tone was largely collaborative and solution-oriented. Speakers built on each other’s points and emphasized the need for coordination and joint action. There was a sense of urgency about addressing growing cybersecurity threats, but also optimism about the potential for progress through multi-stakeholder efforts. The tone became more action-focused towards the end, with calls to move beyond conversation to concrete measures.

Speakers

– Timea Suto: Global digital policy lead at the International Chamber of Commerce, moderator of the session

– Rene Summer: Director for Government and Industry Relations at the Ericsson Group, Chair of the ICC Global Digital Economy Commission

– Francesca Bosco: Chief Strategy and Partnerships Officer at the Cyber Peace Institute

– Julia Rodriguez: From the Permanent Mission of El Salvador to the United Nations

– Mr Wouter Kobes: Standardization Advisor for Netherlands at the Standardization Forum

– Mr Chris Buckridge: Senior Strategy Advisor at the Global Forum on Cyber Expertise

– Ms Robyn Greene: Director for Privacy and Public Policy at META

Full session report

Expanded Summary of Critical Infrastructure Cybersecurity Discussion

This discussion, moderated by Timea Suto from the International Chamber of Commerce (ICC), focused on strategies for protecting critical infrastructure cybersecurity through international cooperation and multistakeholder approaches. The session brought together experts from various sectors to address growing cyber threats to interconnected critical infrastructure systems.

ICC Paper and Key Challenges

Timea Suto introduced an ICC paper on critical infrastructure protection, which outlines key challenges and recommendations for a multistakeholder approach. The discussion highlighted significant challenges, including:

1. Fragmentation and complexity in security (Rene Summer, Ericsson Group)

2. Lack of consensus on defining critical infrastructure (Julia Rodriguez)

3. Misalignment of definitions across jurisdictions (Wouter Kobes)

4. Rapid evolution of cyber threats (Francesca Bosco, Cyber Peace Institute)

5. Intersectionality of the technological landscape complicating policy approaches (Robyn Greene, META)

Multistakeholder Collaboration and International Cooperation

Participants agreed on the crucial importance of multistakeholder collaboration and international cooperation. Key points included:

1. Need for a holistic approach involving all stakeholders (Rene Summer)

2. Importance of public-private partnerships and information sharing (Julia Rodriguez)

3. Multistakeholder input for developing effective frameworks (Francesca Bosco)

4. Challenges around trust and incentives for collaboration

5. Need for regulatory interoperability across jurisdictions (Robyn Greene)

6. Essential role of capacity building, especially for the Global South (Chris Buckridge)

Global Forum on Cyber Expertise (GFCE)

Chris Buckridge highlighted the work of the GFCE in coordinating cyber capacity building efforts globally. He mentioned:

1. GFCE’s role in bringing together governments, private sector, and civil society

2. Focus on practical, coordinated approaches to cyber capacity building

3. The Women in Cyber fellowships program to promote diversity in the field

Standards, Policies, and Regulatory Approaches

The discussion emphasized the importance of standards and policies in addressing cybersecurity challenges:

1. Standards help address misaligned definitions across jurisdictions (Wouter Kobes)

2. Policies should be compatible with internet infrastructure and values (Robyn Greene)

3. Importance of encryption and data flows for cybersecurity

4. Need for privacy-by-design concepts in normative frameworks (Julia Rodriguez)

5. Resistance to unnecessary data retention mandates (Robyn Greene)

Emerging Technologies and Future Threats

Participants explored challenges posed by emerging technologies and future threats:

1. AI-enabled attacks as a growing concern (Chris Buckridge, Francesca Bosco)

2. Potential for fully autonomous cyber attacks (Francesca Bosco)

3. Importance of forecasting future technological needs and threats (Robyn Greene)

4. Need for responsible deployment of emerging technologies

Societal Impact and Broader Policy Implications

The discussion broadened to consider wider implications of cybersecurity failures:

1. Understanding societal impact of cyber attacks on critical infrastructure (Francesca Bosco)

2. Ensuring non-cybersecurity policies are compatible with cybersecurity best practices (Robyn Greene)

3. Cyber Peace Institute’s work on analyzing how cyber threats harm society and impact critical infrastructure

International Initiatives and Tools

Several international initiatives and tools were mentioned:

1. UN Cyber Crime Convention (Robyn Greene)

2. Global Cyber Capacity Building Conference in May in Geneva (Francesca Bosco)

3. internet.nl tool for measuring security standard adoption (Wouter Kobes)

Moving from Discussion to Action

The session emphasized the need for concrete, actionable measures:

1. Encouraging use of tools like internet.nl to measure security standard adoption

2. Need for awareness-raising and knowledge-building on international processes

3. Making it easier for companies to work with and share information with governments (Robyn Greene)

Unresolved Issues

Key unresolved issues included:

1. Achieving consensus on defining critical infrastructure across jurisdictions

2. Balancing security needs with privacy and human rights concerns

3. Addressing residual risks that industry cannot defend against alone

4. Preparing for potential future threats like fully autonomous cyber attacks

Conclusion

The discussion concluded with a sense of urgency about addressing growing cyber threats, balanced with optimism about the potential for progress through multi-stakeholder efforts. The ICC paper presented at the beginning and end of the session provided a framework for ongoing discussions and actions in this critical area of cybersecurity.

Session Transcript

Timea Suto: one second, and then we will start. Okay. Now it’s working. Okay. Welcome, everyone. I think we are ready to start. For those of you who might wonder if you are in the right room, this is workshop number 103 at the Internet Governance Forum on aligning strategies protecting critical infrastructure. This is a workshop that we’ve convened with the International Chamber of Commerce and our partners. My name is Timea Suto. I’m a global digital policy lead at the International Chamber of Commerce, and I will be moderating this session today. So why have we chosen to put this topic forward for the IGF? We’ve chosen it because we feel strongly that digital transformation is now part of every country’s development that creates enormous opportunities and enables basically everything from distance learning to economic advances, manufacturing, agriculture, all societal divisions in all sectors of the economy, and that cyber security is central to making this space work. But as we see the cyber space evolving, and it’s the centrality that it has to our everyday lives, it also poses a number of risks, and it needs us all to work together to ensure trust in the digital economy through the protection of the availability, integrity, confidentiality of these most essential infrastructures that make the Internet and digital technologies work and the services that they provide so that they are truly resilient. So that’s all I really wanted to say about the importance of the discussion that will happen today. My role here will be the easy one. I’m just going to ask the questions, but I have a number of experts. both here in the room and online, who will do the hard job in trying to provide some answers to why we need to talk about this, where we are at, and where we’re heading towards. So just for a quick introduction before I hand over, we will have with us, and in the order of which they will be speaking in, Mr. René Sommer, online, who is Director for Government and Industry Relations at the Ericsson Group, and also the Chair of the ICC Global Digital Economy Commission, who will be our keynote speaker today. And then we’ll have a panel of conversation with Ms. Julia Rodriguez-Acosta, online as well. Hello, Julia. From the Permanent Mission of El Salvador to the United Nations. Mr. Wouter Corbes, Standardization Advisor for Netherlands at the Standardization Forum. Mr. Chris Buckridge, to my left, Senior Strategy Advisor at the Global Forum on Cyber Expertise. Ms. Francesca Bosco, online, who is Chief Strategy and Partnerships Officer at the Cyber Peace Institute. And last but not least, Ms. Robyn Greene, sitting in front of me, who is Director for Privacy and Public Policy at META. So without further ado, I think we’re ready to jump in and hear from René, a bit of a keynote address to kick us off and discuss a little bit about what is the current state of play in protecting critical infrastructures and their supply chains and what has ICC done about all this in the recent past. So René, I’m passing it over to you. I hope you can hear us and you’re ready for your keynote. Do we have René online? It seems like his screen might be frozen here. René, can you hear us? Can we try and connect? Hello? René, are you with us? Can you hear me? Can you try and speak? Can you message him, please, to try and speak?

Rene Summer: Can you hear me?

Timea Suto: Yes, we can.

Rene Summer: I can’t hear you in case you’re talking to me.

Timea Suto: Yes, I’m talking to you, but we can’t. Can we make sure that Rene has audio? I’m not apologies for the technical confusion. Or just while we’re trying to figure out, maybe if somebody can put in the chat that he can start his keynote. My apologies for the technical difficulty here.

Francesca Bosco: May I? Can you hear me?

Timea Suto: Yes, I can hear you, Francesca.

Francesca Bosco: I sent him a message. Let’s see if he can see.

Timea Suto: Thank you.

Rene Summer: Yes, I can hear now. Yes, perfect. OK, so Rene, I was just saying we’re ready for your keynote. Great, thank you very much. And great to see that we always have the challenges of technology today as well. So I guess that’s blame on the technology companies. Thank you very much for inviting me today. I’m Rene Summer with the ICC, the Global Digital Commission. And for those who don’t know us, we are representing 45 million companies from about 170 countries. And we are advocating for solutions and for policy recommendations and bring a wealth of experts in our network. So we do take a lot of effort and bringing a lot of expertise to make solid and insightful contributions. So with this in mind, we took some steps and reflected on what is it that we see unfolding, happening in our world today, and what is really at stake before we get into the more of the details of this discussion. And if we move to the next slide, please, I think what we are really concerned with is the current development in our cyberspace. And this is really putting new challenges and risks to our companies, but also goes beyond our companies and has significant impact both on public safety, economic stability and security and national security. And this of course means that more and more focus and emphasis is also put by national policymakers and regulators on the issue of cyber resilience and cybersecurity. So this of course motivated us looking at the next slide then to think more harder on what does this really mean when we not only have this broader picture and context of deteriorating cyberspace, but also that we see increased sophistication in cyber threats. So that means that we see more and more novel threat vectors and actors coming in to that play. And that is coupled with increased interconnectedness between what is ICT and other critical infrastructures. So we see also an expanded threat surface through this dependency. And that of course also means that there will be more and severe consequences if cyber attacks are successful. So this means that with this development and the increased emphasis by policymakers on cybersecurity and resilience of critical infrastructures and the supply chain, there is of course more pressure also on the industry to do more. And this is in many aspects rightly to take place, but it also means that we are facing a number of challenges, not only from a growing burden of compliance. taking off particularly the operators of critical infrastructures, but also these initiatives create challenges in terms of policy and regulation. And this is why we also want to be part of this discussion. So this really brings me to the purpose of why we took the steps we took and the details that we are presenting here today. And if we move to the next slide, please. This is the contribution that we are making here today and share the insights from our working paper on protecting cybersecurity of critical infrastructures and their supply chains. And at this really highest level, what we really want to convey as a key message is that there is a need for a holistic approach. And I will delve into what that means in more detail, but also that we need all stakeholders to be involved and particularly, of course, the governments that have to fulfil their roles as well. We many times hear that cybersecurity is a team sport, which is of course largely true, but there is also distinct roles and responsibilities that each stakeholders need to take and that also includes governments and policymakers. So if we can then move to the next slide and think about what are the dilemmas that we as, on one hand side, industry, but also other stakeholders and governments face in terms of doing more, we have in our paper identified some of the key dilemmas that at least from our end, we see limiting the effectiveness of what can be done more and better to increase the resilience of critical infrastructures. And of course, starting from a policy perspective… One of the challenges we see is that many jurisdictions that have developed critical infrastructure frameworks, which is far from all countries, have taken quite different approaches in terms of definitions and so on and so forth. And this creates at least two challenges. One is the question of policy targeting. If policy targeting differs between jurisdictions, of course, that means different objectives are ultimately being pursued. But secondly, as these frameworks then bend into also obligations and requirements, this brings complexity and fragmentation. And I would really like to undermine all of us here to think about that fragmentation and complexity is the number one enemy of security. This is not just a trade argument. This is really a security argument that fragmentation and complexity are the number one enemy of security posture. Then, of course, some jurisdictions have moved beyond the question of critical infrastructures only and speak of actually the essential services that these critical infrastructures deliver and bring to the public sector, or to the consumers, or to other industries. And I think that is another important element, is that ultimately we are not only protecting the critical infrastructures as policymakers, but the essential services that these render. And it is worthwhile to undermine that distinction as governments and nations move on to develop further frameworks. I think also something which we are trying to address in this paper is the increase interdependency between what has been typically that are seen as the telecom sector or the digital sector, and when those get interlinked, what previously were seen as separate industries being the energy grid, power distribution, and so on and so forth. This interdependency also creates additional risks and threats that need to be considered and addressed. Because of global supply chains and the suppliers that supply the equipment and solutions into these sectors, we also need to think about the global interconnectness and the impacts that may come from these dependencies. So we don’t only see a cascading risks or effects between different national critical sectors, but also from the national arena into the international space when we have also international supply chain. And as all of you well know, cybersecurity does not know any borders. So this, of course, brings additional challenges. I think it is also important to highlight the aspect of third-party suppliers in the supply chain that have been also increasingly targeted by threat actors and become an entry point into impacting critical infrastructures. And here, of course, a number of challenges that we will talk about later need to be addressed, but also important to keep in mind that there are, of course, different type of suppliers and they have different level of maturity. And making sure that we have sufficient capacities and capabilities in the supply chain to address these risks and exposures is extremely important. Which brings me then to, well, how do we move beyond dilemmas? And if we go to the next slide, please. We, of course, took good care… time and effort to think about what are the best industry best practices and what do we see on policy and regulatory side. And by no means this is a unique insight by ICC and its members because of course there is a lot of good work done by others and we have definitely stolen with pride where there are other entities or stakeholders that have put a lot of effort and thought into these questions. And here you can see a number of examples that we have addressed in our paper what we think is important to take on board and how we can also make use of these best practices when I talk more about public-private partnership. And some of these examples here of course such as having comprehensive security measures or strong data backups and so on are fundamental considerations that we believe the industry needs to lead with and it’s necessary part of the solution. But again we also have policy and regulatory approaches that we need to take care about and consider how they impact the culture of critical infrastructures and the operators of those. And it is of course so that as any other industry we talk about the operators of critical infrastructures they also face a number of constraints and that means that when we look at the different regulations and approaches it is important to think about how we make sure that those are effective, targeted and achieve their objectives so that we can work with the scope of trade in the most effective way and achieve the outcomes we I think all are looking forward to achieve which is secure critical infrastructures and the supply chain. So moving beyond this generic state. If we go to the next slide, please. We do, of course, think that there is more to be done and we believe from the industry side there are a number of priorities, thinking about the constraints again. For instance, start with the baseline security requirements first and make sure what needs to be done first is really in place. You don’t need to start with the perfection on day one, but really make sure that the bare minimum is in place and work from there rather than trying to fix everything at the same time. Secondly, I think what is important because of the dependencies I touched upon earlier, it is also important to think about what are the third parties, the supply chain actors doing in terms of contributing to or actually decreasing the security posture of critical infrastructures. So please do keep that in mind. And of course, from a more commercial point of view, partnerships between the critical infrastructure operators and suppliers is key. And that’s something which, of course, needs to be incentivized, but also there needs to be frameworks in place that make sure that, again, the bare minimum at least is achieved. On the policy side, I would say that there are a few things which we already see being developed in several jurisdictions. We see that there are now requirements on suppliers and third parties on how a secure software development process should look like. This is something which I think should be expected. And as we see that more and more sectors are becoming more software driven and software rich, this is definitely an important aspect of security. Speaking also of the supply chain. and where not only resilience, security, but also trust is important, diversification is key. And this is another element of policymaking that we see is developing, that you want to make sure that you have a resilient, secure and trusted supply chain. And then lastly, I think also, or we think that there is an essential aspect of policy to make sure that on one hand side, there are clear roles and responsibilities, but also that cooperation and coordination between the stakeholders is encouraged because we don’t want to see an environment when risk averse behavior stops the behavior of sharing information, being proactive and sometimes even taking risks, especially when we talk about in the heat of the moment when incidents and threats are unfolding and measures need to be taken. So with that, if we can move to the next slide, if we put some of these examples in a kind of a broader or bigger picture, what is it that we are really looking for? And this I think needs repetition and repetition because it takes time from staying this and seeing this being implemented into policy action. But number one, again, there is no single silver bullet here and that’s why we are advocating for a holistic policy that is both well balanced, but also well targeted to make sure that the critical infrastructures and essential services providers and their supply chain are working together towards a set of goals. In the context of collaboration, it is also important that we see that there is both emphasis on enforcement, but also on incentivizing appropriate behavior. And this, I think, is particularly important to keep in mind because cybersecurity is not an end in a sense that we come to a situation where everything is cybersecure. It is a continuous journey. It’s a state that is always on the move. So we will never be done. There is no final checkpoint. And that’s why it’s important to also have incentives for appropriate behavior. Then I think, and this comes back to my initial call, that also governments have a real important role. And while cybersecurity is a team sport, but there is also a clear role for governments. And there are residual risks, even if you develop an appropriate security regulation framework and you take appropriate mitigating measures on board, there will always be residual risks. And this is where governments in particular have a very important role to play. And you see some examples on measures how you can actually address these residual risks. This is something which the industry will not be able to fix. And there are no insurances for this to be taken. And even if so, it doesn’t mean that the negative consequences will not happen just because you have an insurance. So please do think about those as well, how to tackle the residual risk, which is very, very important. If we move to the next slide then, and here I think we have three more slides to kind of go a little bit more into some of the recommendations we have in this paper. From a policymaking perspective, it is absolutely necessary that nations do have an independent, competent cybersecurity agency. This is a competence area that needs to be developed and needs to be present, because as policy makers, you’re not only developing laws, but you’re actually also protecting in real time and take action to deal with incidents. Just having regulation and secure products doesn’t mean that threats will go away. And when developing these national frameworks, the reasons why we speak about holistic approach and a coordination between national cybersecurity agencies and policies is also because one thing is about having a clear framework that we as industry understand what is expected of us. But again, cybersecurity is also something that is happening in real time. We talk about incidents, vulnerabilities, mitigation, and so on, and it is absolutely necessary that there is a clear understanding of who is doing what and when. So that we can also take actions when actually attacks are successful, and we need to recover quickly and get back into operations with minimum damage and consequences. And this of course requires collaboration. So it is important to think about in the regulation that yes, we need enforcement, we need clear rules, but we also need good collaboration between the private sector and the national agencies. And lastly, when we talk about supply chains, again, I think looking at national fragmentation of requirements that breeds complexity, which is the enemy number one of security, international technical standard is a necessary feature of good security posture. If we move then to the next slide, please, which brings me to the international cooperation. Again, it is so that what happens at national level will not be bound by cyber incidents and cyber events from a national perspective. So, to address the issue, for example, of response or the complexity challenges through fragmentation, it is essential that governments do what is achievable in terms of working with their peers and strive to take action internationally and globally to make sure that we can have as much harmonization from the rules, requirements, and the standards so we can create a common platform for addressing challenges, but also work with the complexity and reduce the complexity through fragmentation. Coming back to the residual risks, this is where, of course, governments and nation states play an enormously important role. And this is coming back to the question, how do we address the residual risk? And this is where the international norms centric against 10% sponsored cyber attacks is very important. That may include things like thinking through more, how can we make sure that there is public attribution following incidents, that there is an implementation of robust deterrent measures for cyber attacks, and that we promote collaboration. If we move to the next slide, then, this is really to emphasize, and maybe not to dwell so much on, that, of course, industry collaborating with national stakeholders and international stakeholders is key. You see some examples of that mentioned here, but doesn’t really bring anything new. So in a matter of time, maybe we can skip this slide and then just finish off with that. I hope you find this information of interest and value. We do have a paper available. You have the links, both in English, Spanish, and in Mandarin. We hope that this is going to be an interesting read. If you have any further questions or interest in this information, please feel free to also reach out to the Secretariat of ICC, where we can schedule more interactions. I really hope that this intervention has inspired some of you and I look forward to the discussions that are to follow after my speech. Thank you again for the opportunity and I hope you have fruitful discussions. Thank you very much. Over to you, Timea.

Timea Suto: This was quite a comprehensive introduction and I do hope that it gives food for thought for the conversation that we have planned going forward. Of course, a little advertisement here for the ICC paper. If you come to our booth just outside this room here, we have a QR code from where you can easily download not only this one, but all the other publications that ICC has on cyber issues. But coming back to the conversation and picking up one of the last points that you’ve mentioned here, Rene, the need for collaboration around the protection of cyber security of critical infrastructures, and especially the collaboration in the international space. I’d like to turn to Julia and ask a little bit about how is this going and how are we seeing any barriers that might impede some cross-border collaboration and also what opportunities do you see in aligning national responses to security challenges with international and transnational agreements that we already have in place or we are developing. So, over to you, Julia.

Julia Rodriguez: Thank you so much. Can you all hear me okay?

Timea Suto: Yes, we can.

Julia Rodriguez: Beautiful. Good morning, good afternoon and good evening to all. Thank you so much, Rene, for the thought-provoking presentation. It is a pleasure to join this important conversation from New York very early in the morning. I extend my gratitude to the International Chamber of Commerce for organizing such a time… family and significant discussion, it is truly an honor to share views with such a group of speakers. To set the tone for today’s discussions and in response to the main questions, I would like to begin by highlighting the work that we have been doing at the United Nations regarding the protection of critical infrastructure. This issue is well-developed within the framework of responsible state behavior, which lay out voluntary norms for expected conduct in cyberspace, and the norm related to critical infrastructure emphasize two key principles, the current framework that we have today, more on like kind of positive obligations, the protection of critical infrastructure, and more kind of restrictive obligations in reference to what was just exposed by Rene, kind of like refrain from actions that damage or disrupt such infrastructure, particularly when they impact availability and integrity. And this normative framework is crucial, especially for those infrastructures that provide essential services, including the general availability of the internet itself. So it is worth noting that the importance of protecting critical infrastructure has long been recognized within the United Nations system. For over 20 years ago, this discussion began primarily from a development perspective, but in recent years, it has evolved into a core element of international security. And discussions now recognize that the protection of critical infrastructure is central to maintaining international peace and security, particularly in our interconnected world, where societal well-being cannot be separated from societal, economic, and human rights. humanitarian consideration. So bringing this discussion to the present, right now the UN, the United Nations Open-Ended Working Group on ICT and Security, has made significant progress in advancing this agenda. And one of the recent developments is in the just-published annual progress report in the critical infrastructure sectors that require protection, and now we have an inclusion on sectors that range from healthcare, maritime, aviation, financial services, and energy. And I think that the sectoral approach is a significant step forward, because it acknowledged that protecting critical infrastructure involves cross-border challenges with global implications, and second, because adopting a sector-specific risk-based approach allows for the development of target operational measures that reflect the unique characteristics and vulnerability of each sector. However, we also must acknowledge those barriers that impede cross-border collaboration in cybersecurity, as it was meant from one key challenge lies in the lack of aligned definitions and standards among nations. While the UN’s voluntary norms on responsible state behavior provide a clear framework, differences in national interpretations and legal frameworks often hinder operational coordination. Additionally, of course, there is gaps in trust, misaligned priorities, and the absence of unified approaches to identifying and responding to threats, and this border complicates these efforts that we’re trying to do at the multilateral level. Yet these challenges also… present with opportunities, aligning national responses with international agreement, for example, and not only at the international level, but also at the regional level, the creation of shared understanding on trends and coordinated responses, and of course, that by fostering trust and promoting partnerships, both public, private, and multilateral, we can enhance our collective ability to address the global risk facing critical infrastructure. So this directly addresses the first policy question on cross-border challenge that hinder operability and coordination. And for us, the role of public-private partnership in strengthening safety and security is key. So El Salvador has actively engaged in all the multilateral arena to advocate for concrete implementation measures. And we have emphasized the importance of partnerships and collaboration with service providers, for example, as these are essential to ensure the protection of critical infrastructure. While the understanding of the need for multi-stakeholder collaboration is well-established, we still are facing challenges at the UN for interslating this broad principle into actionable policy-oriented recommendation. So I will stop here, and particularly those colleagues that represent other stakeholders to share current best industry practice. I think that Rene presented some very well recommendations for enhancing cyber resilience, and I remain eager to engage further during the Q&A session and comments. And I thank you so very much.

Timea Suto: Thank you, Julia. We’re gonna return to the room here from the online world, and I’m gonna turn to Vautier here in front of me. We’ve mentioned the role of norms, we mentioned the role of regulations, but I wanted to ask you about standards and protocols that also need to work with jurisdictions, sorry, I think I’m losing my microphone, to make sure that the systems we put in place are actually operational. and we don’t have the fragmentation that Rene was talking about in the beginning. So how do you see that from the point of view of where you’re sitting with the standardization organization in the Netherlands? Yes, thank you very much.

Mr Wouter Kobes: So as part of the Dutch government, we are using standards as a vessel to achieve various goals. One of them is interoperability within government, but also strategic independence from large vendors. And specifically on those standards that address cybersecurity, of course, the security of the government as well. And we actually see that when we are pushing for adoption of these standards, the result is that also other parts of critical infrastructure are positively affected by this, because they start implementing certain standards as well. And I think the interesting connection to the keynote of Rene is that the holistic approach to cybersecurity is also seen through security standards. You have really organizational standards. The well-known are, of course, the ISO 27K1 and 2 standards, which give your organization basically a guideline to implement cybersecurity measures at an organizational level. Then moving on, there are technical standards that, well, each of these standards really serve a goal in actually protecting your organization better or addressing a design flaw of the internet itself in terms of cybersecurity. And I think the benefit of those standards is that it’s quite easy to measure if a standard is adopted or not. And when all that fails, then there are also standardized methods to share information, for instance, between CSIRTs, SOCs, and vulnerable organizations. Think about indicators of compromise, vulnerabilities that have been found within systems. And in recent years, even a standard has been developed where you basically can publish in a standardized way, contact information which can be used by security researchers or ethical hackers to contact you in case if they find you vulnerable. find a security issue in your system organization which was not found in any of your previous efforts to improve cybersecurity. So this is very nice, these are very nice standards to have but of course a standard needs to be adopted before it becomes effective and this is where our main challenges lie and I think in our experience one of the the best methods to actually increase adoption is to show how well standards are adopted within the Dutch government and we we have developed also a tool, a measuring tool for this purpose that actually can report for every website, for every email domain how well the standards are adopted within a certain government organization and throughout presenting these measure results regularly we see over time these important security standards which in return will not solve all the challenges that Rene have laid out as cybersecurity because I completely you are never done with cybersecurity but it it has in fact benefited the security of the Dutch government in that sense and it’s really nice also to have published this measuring tool as an open source project for well basically everyone in the world to to use and to to measure their adoption of these important security standards. So that was my contribution, thank you.

Timea Suto: So I’m going to turn to Chris here on my left because we’re talking here about a holistic approach making sure that things work across borders making sure that we share information, we don’t lose sight, that all takes me into thinking about perhaps we need some capacity building to really enable this whole of society approach that we need to cybersecurity and to mainstream the conversations that we’re having on the cybersecurity critical infrastructures into the general thinking around digital transformation so how do you how does the GFC see that and where do you see it from where

Mr Chris Buckridge: you’re sitting? Yeah so thank you very much Tamea and I mean I so Chris Buckridge I’m here as a senior strategy advisor with the Global Forum on cyber expertise and based on what I already had based on listening to Rene’s keynote there which was wonderful I should apologize in advance because I’m going to go into full marketing mode for the GFC here but I mean I think it’s all it is really relevant and that idea of capacity building is so central to a lot of this. I think Rene’s comment that really resonated me about fragmentation and complexity are the enemy of security is really it’s at the kernel of what the GFC is about and it’s sort of flipping that and saying coordination and clarity are really the fundamentals of security and so the GFC is an organization it’s the platform for international cooperation on strengthening cyber capacity building and expertise globally and it was established in 2015. It’s a multi-stakeholder organization we have around 250 members and partners 88 of those states organize the state nation states 16 international organizations and then the remainder are private sector academia NGOs so it is really quite a broad community a lot of diverse expertise and awareness there and and working together in really a number of ways to try and facilitate essentially that that cyber capacity building CCB and make sure that’s happening in the best way so we do that by connecting sort of the network of implementers donors and those who are in need making sure that they finding each other in the global sense it’s about identifying in developing best practices so there are certain approaches that we know work well and there are other approaches that we try out from time to time and they maybe don’t work as well and so that that’s a really important community activity finding that out learning together and then also I mean highlighting the importance of cyber capacity building it was there in Renee’s presentation as well it that building capacity and building it at the global level not just in you know the global north but also looking to the global south because that the cyber security threats are global is really essential and so I can speak to a few of the different activities that the the GFC has been involved in in in sort of some different aspects different ways in which we’re doing it the first one I’ll mention and Valter spoke about standards there so I won’t say too much about this but the triple I initiative the internet infrastructure initiative is something that GFC has been doing for the last few years or facilitating for the last few years and it’s very much in line with that with promoting and educating about standards like IPv6 DNS sec TLS RPKI DKIM and DMARC so really looking at lots of different elements in the technological stack and standards and how they can be usefully employed and deployed for for better security turning to a slightly different aspect it would be in in terms of thinking about policy frameworks the sort of alignment in in what we’re trying to achieve and I think something useful to highlight there would be the the Accra call which came out in 2023 was an output of the the global conference on cyber capacity building with the first one of those was in Ghana in Accra in 2023 GC 3b we call it which we have yeah regularly get wrong wrong order there so I’m not sure if we’ve made it easier by calling it that but and we have another of those the second GC 3b is going to take place in Geneva in May next year. But that’s really about, again, this sort of coordination. It’s connecting the cyber security and cyber capacity building communities with the development community, with what’s going on in international development. And it’s got really four voluntary, non-binding, but direction-setting actions that people can sign on and commit to and then report on, strengthening the role of cyber resilience as an enabler for sustainable development, advancing the demand-driven, effective, and sustainable cyber capacity building, fostering stronger partnerships and better coordination, very important, and then the last one, which is equally and perhaps even more important than any of them, unlocking the financial resources and implementation modalities. So that’s always the struggle here. I mean, there is governments, private sector, any of these stakeholders have priorities, have limited resources. So making the case that investing in cyber security, investing in capacity building is essential, is a really fundamental element in all of this. And that’s, I think, where the ACRA call is important. The last, I’ll just say one more point here, and it’s kind of tying into what Julia was talking about as well, and particularly in the international cyber diplomacy scene and what’s going on in the open-ended working group. One of the projects that the GFC has been really thrilled to be involved in and coordinating is the Women in Cyber fellowships. And that’s been working with donors, donor states from around the world. At the most recent OEWG meeting, which was just a couple of weeks ago in New York, we actually had 47 fellows from different Global South member states taking part, traveling to New York, taking part in training, but also taking part actively in those OEWG negotiations. And so, obviously, this is wonderful in terms of taking some steps towards gender balance, which is important. But I think also, and really importantly here, is that without that funding, without that project, a lot of what you would have had there in those New York negotiations particularly from Global South countries would not be bringing in subject matter experts. They’d be using their staff in New York. They’d be using their permanent representations, which is great, but to be able to have the subject matter experts there in the room, enriching the negotiation and the discussion around the OEWG is almost, to my mind, the bigger achievement, the bigger important thing that we’re doing there. And then having that expertise filter back to the national level. When they go back to capital, when they go back to their governments. So that sort of level of coordination and capacity building is, I think, really fundamental in achieving, again, what Renee spoke about, the need for some coordination of approach and across different jurisdictions. So I’ll stop there. Thanks, Timea.

Timea Suto: Thank you so much, Chris. A lot in a very short time from what the GFC is doing, and we know that there’s more. But what you told me, the last point, I think it was the most striking. Because if we enable the participation of those who might otherwise not be at the table, it is really the way through which we benefit and can make sure that the policies that we’re thinking about actually work in practice on the ground and they’re actually implementable. And I want to stick with that idea as I turn to Francesca online. Do you talk a little bit about what the Cyber Peace Institute is doing and also how you see the role of stakeholders in these conversations? Especially when we turn to multilateral discussions, we see quite a gap there, but we are here in the heart of multistakeholder at the IGF. So how do we bring those two elements together? Thank you so much. Can you hear me well? Yes, we can. Okay, perfect.

Francesca Bosco: Thank you so much. And thanks a lot for the invite. And it’s an honor to speak today. I’m very sorry not to be able to be there in person. Maybe just a quick remark on who the Cyber Peace Institute is and what we are doing. The Cyber Peace Institute is an international non-profit organization. We are based in Geneva, but the mandate is global. I would say that at the backbone of the expertise of the Institute to analyze how evolving cyber threats are harming society and notably impacting critical infrastructure, specifically in the civilian domain. We provide direct cybersecurity assistance and capacity building, and we advocate for responsible behavior in cyberspace, providing policymakers with data-driven insights. So thank you so much for the opportunity to intervene in this discussion. It’s difficult, I would say, to come after an excellent previous intervention. So I would just maybe share a couple of thoughts when it comes to which are the challenges that we see when it comes to the international approach to protecting critical infrastructures and maybe sharing also a couple of potential ideas on how to address this. Indeed, as René very well highlighted in his remarks, but also Giulia mentioning specifically the UN processes and specifically the open-ended working group discussions, I would say that a couple of significant obstacles that we see are the lack of consensus among the states when defining critical infrastructure. Indeed, the great sectors have been identified, notably the healthcare sector. But, I mean, clearly, then there needs to be also one of the elements that René mentioned, which is moving, let’s say, from policies into action. So first of all, the definition of the… of the critical infrastructure. And the second part that I would like to mention is also the rapid evolution of cyber threats that adds to these challenges. It was hinted by Rene in his initial remarks. But indeed, a practical example that comes to mind is the ransomware attacks on health care systems during the COVID-19 pandemic that exposed the technical vulnerabilities, but also the lack of preparedness, basically, to ensure the service continuity. I’m mentioning specifically the health care sector because I think it is a good example, according to your question, Timea, in where the multistakeholder community can really bring an added value. Because I think that the progress that we saw at the open-ended working group level, so integrating the inputs and the voices of the multistakeholder communities brought to this, basically. And I can tell you from a very practical standpoint, what we did at the Cyber Peace Institute. So the Cyber Peace Institute was launched at the end of 2019, well on time, basically, to start during the pandemic. Which was, on one hand, we transformed it, in a way, into an opportunity. Because the mission of the Institute is to protect the most vulnerable in cyberspace. At that time, the most vulnerable in cyberspace was the health care sector, basically, widely identified from hospitals, to labs, to civil society organizations that were working. For example, when it comes to developing countries, we’re working, basically, to provide essential services. So we took this comprehensive approach. And we tried to understand, OK, how the critical infrastructure are, this critical sector is impacted by cyber attacks. Not so much from the angle of, let’s say, simply, allow me to say, collecting information about the damages, the cost, how many devices were infected. but try to understand what it really means for society. So what is the real impact and the real harm that these attacks are causing to society? Practical example is how many ambulances redirected, how many people could not get the vaccine, and showing this both with, as I mentioned before, a very strong technical analysis to highlight the modus operandi of the malicious actors to identify, let’s say, the critical sectors that are targeted, in which countries, and so on and so forth, but also highlighting this harm aspect and how international laws and norms were violated. So having this all-encompassing view coming from a neutrally independent civil society actors is one of the examples of how we can advance multi-stakeholder cooperation in a very concrete way. And I mean, the platform that we develop is publicly available. We use the same capability to develop the platform to monitor the attacks against the civilian infrastructure in the context of the Ukraine conflict. And I mean, the platform is developed by the Institute, but not in silo, meaning that we’ve been working on this with other civil society partners, with academic partners, with the private sector that is providing key data infrastructure services and expert views. We’ve been socializing this and extensively worked on this also via our engagement at the open-ended working group level. So I think it’s a very concrete example of how the multi-stakeholder collaboration can work. Allow me maybe to just to mention a couple of things when it comes to what we need to do, let’s say, with some some sort of like actions that we can take when it comes to the challenges that we see in international cooperation sectors. Building on the excellent remarks that Julia made, I think there is one point which is, again, as Rene was saying, not just having the norms but operationalize them. And we truly believe that transparency is the way to go. And again, we need to have concrete, actionable measures. And so, for example, we’ve been consistently advocating for voluntary state reporting on what constitutes a critical infrastructure within national frameworks, but also basically to enhance predictability and enable collaborative risk management across borders. Measuring the harms. I mentioned that, for example, in our work regarding the healthcare sector, regarding the civilian infrastructure in the context of the Ukraine conflict, we always add the harms dimension. We develop a specific methodology. And this is really key to understand how the impact is going beyond, let’s say, the pure, I would say, financial monetary damages. But you really need to understand the impact of cyberattacks on society, especially those cyberattacks that are obviously targeting the critical services that are making our societies running. Just a couple of points in terms of key actions. Rene mentioned the emerging technologies. Allow me to say, indeed, it’s a critical area where, obviously, I’m thinking about artificial intelligence, quantum, are bringing amazing opportunities. But at the same time, improper deployment could create new vulnerabilities, especially when we think about a critical infrastructure because still important to remember that many critical infrastructure that we are still seeing today are running on legacy systems, meaning that they were not conceived of basically to be connected just to start with. So this is extremely important to have a sort of like responsible approach in deploying emerging technologies. And then, I mean, I was smiling when Chris was talking about the GC3B because indeed one of my key points was to definitely scalable capacity building specifically for under-resourced communities. And I really appreciated that Julia also mentioned the, let’s say the connection between the, I would say the evolution between the understanding that cybersecurity is a key component of development as well. And to this end, I was encouraging basically the audience as well to build on existing initiatives like the excellent work done by the GFC and the opportunity that we have with the Global Cyber Capacity Building Conference that is upcoming in May in Geneva, really to bridge this gap between cybersecurity and development communities and critical infrastructure protection is one of the, I would say the key pillars. And maybe just to finish, we talk about multi-stakeholder collaborations. I gave some practical examples and I’m happy to, I mean, to dig into this more if I may. And it’s a sort of like a personal mantra. It needs to be meaningful. I mean, multi-stakeholder collaboration means nothing if it’s, I mean, if it’s just on paper or if it’s just to tick the box. And I really like what Renee was mentioning at the very beginning in terms of like partnerships are working where, when basically each partner is providing, let’s say, his or her best, let’s say expertise. to create basically the best solution possible, but according, let’s say, to what they can bring at the table and not simply because they want to be sitting at the table. So I think we need to see multistakeholder collaboration starting valuing much more, which is the impact of a multistakeholder collaboration instead of just having it as a nice to have.

Timea Suto: Thank you so much, Francesca. So we’ve covered quite a bit of ground that Renée started. We’ve heard on the importance of international norms and their implementation. We’ve heard about standards, capacity building, multistakeholder partnerships. So I have one more element that I would like to throw at Robyn and hear a bit of an insight on that, which is what is the role of policies, national policies in this? How do we make sure that policies are responsive to everything that we’ve heard here? What is it that’s out there that is helpful? What is it that we still need? And how do we move towards perhaps a bit more interoperability or harmonization of what’s happening in national context, going back to the initial thought of fragmentation being so harmful to cybersecurity? So a short little question there for you, if you can cover that.

Ms Robyn Greene: Sure, I will do my best. Thank you so much for having me here. I’m really excited to talk about this critical issue. One of the things that I think you’re going to see throughout my comments is how the things that I’m going to be recommending are not only applicable when you’re thinking about critical infrastructure and cybersecurity. I think when we get into the policy space, we really have to confront the fact that critical infrastructure is no longer just critical infrastructure. It is something that intersects with commercial technologies, with everyday sort of technologies and with the people who use those technologies. And as a result of that, one of the first things that we need to do from a policy perspective is to really take a holistic assessment of the technological landscape, as well as the threat landscape, so that we can understand things like what are the kinds of devices that interact with what we consider to be core critical infrastructure. This is especially important as private sector services are increasingly intersecting with or actually building and providing that core critical infrastructure. In addition to that, we need to make sure that policies around cybersecurity for critical infrastructure include security requirements that are technically compatible with the internet infrastructure and consistent with the values of an open, interoperable and secure internet. As I’m going to discuss in more detail a little bit later in my comments, this includes things like not mandating any legal or regulatory threats to key security tools like encryption, such as requiring things of the private sector like building key escrow or other so-called backdoors into encrypted products and services, content scanning and labeling requirements or traceability requirements that undermine encryption. This also includes resisting implementing mandates around private sector data localization and restrictions on private sector data transfers. The other thing that we really need to do is look to the future. What does the future of technology look like? What will future technologies require and how will they intersect with our critical infrastructure? How will the next generation of technologies even potentially replace today’s critical infrastructure? Partnerships with the private sector can be uniquely impactful in helping governments to do this kind of, you know, looking into the crystal ball, if you will. And since private sector entities, technology companies in particular, but also academia and other multi-stakeholder experts are really at the vanguard of these technological advancements and can be uniquely helpful in doing that kind of forecasting so that we can make sure that cybersecurity protections for critical infrastructure aren’t only responding to the threats of yesterday and today, but also preparing for the threats of tomorrow. In addition to that, and this is one of the most important things, and I think one of the greatest challenges that we see in the policy landscape, make it easy for companies to want to work with and share information with governments, cyber threat indicators, that is, and make sure that those relationships with companies are not, you know, the big don’t is don’t establish relationships with private sector on the basis of regulatory threats or threats to services, to their license to operate. Legal frameworks that promote human rights norms, rule of law, and legal predictability, not only in the context of cybersecurity, but also in the context of other policy spaces are the ones that will promote willing collaborations and do ensure that relationships are reciprocal. At the end of the day, the willing collaboration is one of the most important things for private sector partnership with the public sector in critical infrastructure protection, because, of course, you don’t want companies in the position where they’re only focused on checking boxes, and they’re, you know, only doing what they’re absolutely obligated to do. You want companies that are really looking at the holistic cybersecurity and threat landscape and proactively sharing information with governments that they think will really lift all boats, if you will. And so this makes this one of the most important elements of encouraging this willing collaboration beyond not having it be sort of like a mandatory or fear-based mechanism is making sure that these relationships are reciprocal, making sure you’re sharing, governments are sharing information back with the private sector early and often. This not only helps to lift all boats by enabling companies to better protect their clients and users, but it also builds trust and incentivizes these companies to come to the table in the first place. Beyond just reciprocal information sharing, I think the other types of sort of reciprocal partnerships can also include skill building and reciprocal access to technological tools and new technologies. The next thing that I think is going to be really important in having a better policy space that is providing more robust protection for critical infrastructure is actually starting to track the broader policy landscape. And this is something that I sort of touched upon a little earlier in my comments, but we need to really start to internalize the fact that regulatory debates and proposals that are not directly about cybersecurity or about critical infrastructure will inherently affect our ability to protect critical infrastructure in particular. And so, as I mentioned, you know, resisting the impulse to pursue policies that require data localization is, I think, you know, one of the more important things that we can do. At the end of the day, data localization is actually one of the more harmful policies for cybersecurity, not only in terms of like private sector protection of information and things like that, but also in terms of protection of critical infrastructure. This is because it increases costs for companies and for the government, in many cases, to actually apply state-of-the-art cybersecurity solutions. It restricts access to and deployment of those state-of-the-art cybersecurity services and measures, and it limits and disincentivizes regular system updates. It also limits resilience measures, like storing backups of systems in multiple locations. In addition to that, it’s critically important to restrict to, excuse me, resist restrictions on international data transfers for the private sector. When we’re thinking about protecting cybersecurity, information is absolutely essential. And because of how the private sector intersects with critical infrastructure so much, and as I mentioned, in many cases, actually operates or owns critical infrastructure. infrastructure, it’s really important that companies be able to have that global visibility into what the threat landscape is and be able to access information as quickly as possible. One of the most limiting factors to that is restricting the flow of information because that is inherently going to limit your view to the domestic threat landscape rather than the global threat landscape. So encouraging data flows is actually encouraging cybersecurity in many ways. And then finally, resisting the adoption, resisting the impulse to undermine or chill the adoption of end-to-end encryption and quantum resistant encryption. Encryption is by far the most effective tool that we have to protect privacy and security of communications. This applies not only to private communications, but also to government communications and data. And ultimately, any time you see policies or regulations that mandate weaknesses in encryption, even if they are meant only to apply to private sector tools and systems, they inherently wind up intersecting with government and critical infrastructure systems. And so what you wind up doing is actually lowering the global security level of anything, you know, that’s going to be touching those systems. We actually have a very sort of like current, if you will, example that’s also a very stark example, to be honest, of how important encryption is to protecting cybersecurity and critical infrastructure in particular. As folks may be aware of Salt Typhoon, this is a major story in the U.S., but I imagine it’s being followed throughout the world where, you know, foreign spies have essentially taken advantage of vulnerabilities in telecommunications and ISP systems in order to infiltrate those systems. And, you know, while they may have access to be targeting lots of different people’s communications and private data, they are, in fact, targeting government officials. And so this is, you know, one of those examples of how we see the private sector intersecting with critical infrastructure and the government and the need for encryption. The last thing is resisting the impulse to mandate data retention beyond what is necessary. You’re just keeping data that could be useful to, you know, cyber criminals and other malicious actors unnecessarily if you’re imposing data retention mandates that go beyond, for example, what’s necessary for business purposes or what’s necessary for operational purposes, depending on the kind of entity that’s subject to these requirements. The next issue that I think is really important when it comes to the, and this is the last issue, when it comes to the policy environment and protecting critical infrastructure and cyber security of critical infrastructure is international cooperation. This is certainly not surprising, as we’ve heard this many times throughout the panel already, but ultimately this does not just include the sort of traditional types of cooperation around cyber threat information sharing and securing supply chains. It also includes things like regulatory interoperability. Make sure that not only cyber security regulations are interoperable with other regulations from other, like cyber security regulations from other governments, but make sure that non-cyber domestic and foreign regulations that implicate cyber security are compatible with current cyber security best practices. Too often we see, you know, regulatory proposals that are meant to address social concerns like, you know, online safety and things like that, which are critically, critically important, but that would actually wind up doing things like undermining encryption, and this is, of course, incompatible with cyber security and critical infrastructure, cyber security best practices. And so I think, you know, as a global community, it’s incumbent upon us not only to look at the policy landscape through the lens of what is directly affecting critical infrastructure because it is literally regulating critical infrastructure, but what are the secondary and tertiary policies that we’re considering and applying to government and the private sector that could actually still have significant ramifications for critical infrastructure and cyber security globally. In addition to that, addressing cyber crime safe haven jurisdictions is critically important. You know, we need to make it harder and more risky for malicious actors, whether they’re working independently, for criminal organizations, or directly or indirectly for nation states to attack critical infrastructure, particularly as, you know, we see the growing closeness between critical infrastructure and private sector technologies and stakeholders. The U.N. Cyber Crime Convention was originally proposed and promoted by several of these safe haven states, and that’s somewhat ironic, perhaps, but we are sort of in a place now where the negotiation has completed, and parties are going to move to negotiating the modalities for the protocol discussions and the protocols themselves and adoption and ratification of the treaty. Rule of law governments need to prioritize ensuring that the protocols are not only providing for specific procedural and human rights safeguards that weren’t included in the convention text, but also accountability mechanisms to ensure that all parties play by the same rules and that they work cooperatively towards investigating and preventing global cybercrime, not only when it serves their specific geopolitical or national interests. Finally, capacity building is another really important element of international cooperation and private sector, public sector collaboration. This is something the Cyber Crime Convention has a lot of potential to improve, not just as it applies to cybercrime investigation, but also to technically advance the technical skills and practices of other parties to the convention. The technically advanced and well-resourced governments can and should provide material support and technical training to augment the cyber security capabilities and practices of the less resourced and technically advanced nation states that are a party to the convention. I think there’s just, the policy landscape is something that we often think of as being very specific to critical infrastructure or to supply chain or something like that, but one of the things that I think we should really start to focus on as we think about cyber security and critical infrastructure is how the broader policy landscape and how relationships between governments and private sector entities can really impact that space too. Thank you.

Timea Suto: Thank you so much, Robyn. Quite a lot of information in that as well and also exploring in this extra element of cyber security, but actively fighting cybercrime, which we know in the U.N. it’s two separate processes, but in real practice they are, they go hand in hand. We had a second round of questions prepared, but I don’t think we will have time for that. We are 15 minutes away from the end of the session and I do want to turn to the audience as well and hear a little bit if you have any questions, if you have any remarks on what we’ve heard from the speakers before I give them the last word. So anybody, if you have comments online, please put your hand up, we can turn to you or here in the room. Likewise, put your hand up physically and we’ll get you a microphone. So are there any questions or comments? Think you were very comprehensive or very exhaustive, either or the other? Well, if there are. audience has no questions or input, then I think I’ll do a round-robin and then in the very end I’ll get to Rene on the account of first and last words. So in the order that I’ve called you previously, perhaps I can turn to Julia and ask what are your takeaways from this discussion and what is the one element that you think we should take forward as a message from this session for the IGF and for the global multi-stakeholder community

Julia Rodriguez: to ponder upon or perhaps act upon? Yes, thank you. Thank you so much for a great conversation. It has been really interesting. I think that the panel is a proof of why a stakeholder collaboration is crucial because I think that each one of the speakers has contributed with its insights on their competencies. I think it is impossible to summarize, but one of the main things that stuck with me, the importance of standards. From my perspective, these can directly address these mass-aligned definitions that make operationalization a challenge. Capacity building is key across the technical aspects as network security, encryption, incident response, but also from the more social and economic and humanitarian perspective and of course the impact of this cyber diplomacy that we’re trying to develop and great comments on data minimization. I think that we need to incorporate more privacy by design concepts into the normative framework of the United Nations and I think that many of these cyber intrusions at the end affect individuals. So I think this has been very well highlighted by the cyber peace institutes harms methodology. I think it is a great, great takeaway and my one sentence takeaway, it will be multi-stakeholder collaboration is essential to protect And I will stop there. Thank you.

Timea Suto: Thank you so much, Julia. Wouter?

Mr Wouter Kobes: Yes, thank you. Well, hinting on the words Francesca said, I think the Network Information Security Directive, version 2, we have in the EU, does a really nice attempt in defining critical infrastructure. And I think the point of Robyn, where you have to involve your commercial sector as well, is also captured there, because it extends towards the supply chain of this critical infrastructure. So I think that’s a nice attempt, at least by the EC, to define that critical infrastructure. Yeah, I think my giveaway to the audience and the panelists is also to lead by example in adopting internet security standards. So I invite you all to, after this session, navigate to our security adoption tool, internet.nl, and measure your own organization and see where you have room to improve in leading by example in these internet standards. So with that, I would like to thank you all for this very interesting discussion.

Timea Suto: Thank you, Wouter. Chris?

Mr Chris Buckridge: Yeah, thanks, Timea. And thanks for organizing this session and for a really interesting discussion and set of interventions. I was happy not to be the first person here to mention AI. I think too often the conversation seemed to be turning to that. But it is a really interesting and significant point. And I mean, at the OEWG the other week, really every meeting of the OEWG, more member states are highlighting AI as an area of real concern for them. And it makes sense. I know ISC2, I think, did a survey late last year and more than half of CISOs, security professionals, are anticipating AI-enabled attacks or AI-enhanced attacks to be part of what they have to defend against. Now, it’s not entirely clear what’s actually happening and to what extent that’s happening in real life at this stage. And I think there were some states that also made that point. But, I mean, Unity has done a study which sort of really highlights that sort of arms race we’re in where AI is enhancing the abilities of attackers, it’s enhancing the ability of defenders. But that really centers back to the need. for capacity building, it centers back to, that’s great that the defense is sort of continuing to ratchet up along with the attack, but if you’re in the global south, and if you’re not really on that sort of, in on that arms race, you’re becoming increasingly vulnerable to these attacks. So this is not something where we can leave people behind. If you get left behind, you’re going to be a vulnerability, and that’s going to be a vulnerability for the entire system. So we need to be ready to, sorry, invest, sorry, in cyber capacity building. And I mean, to use another very overused term, we need to be agile about that. We need, and I think Robyn mentioned, the changing landscape, the sort of ever-moving landscape that we have in terms of security. That cyber capacity building activity also needs to reflect that. It needs to be ready to engage with what the latest threats are, the latest vulnerabilities are, and to be ready to mitigate that. So it is, as Wouter, I think, also said, a constant. It’s not something where we can say one and done. It’s something we need to keep evolving and working on as time goes on. Thanks.

Timea Suto: Thank you, Chris. Francesca?

Francesca Bosco: Thank you so much. So what was tried me, I mean, I think is, the clear articulation. So thank you so much for the excellent discussion, because I think there was a very good segue among the different speakers. And I think we all reiterated the fact that the dependency is not only from a technical standpoint, but there is a dear need to understand the complexity of the ecosystem when we talk about a critical infrastructure. And I really appreciated the last comment from Robyn, the last remarks from Robyn, specifically on this, how the policies are kind of like intertwined. I also very much appreciated that the, one of the things that I’ve been, I mean, I spent all my life in the country in cybercrime, cybersecurity, misuse of technology, and so on and so forth. And one of the key challenges is always information sharing. It’s doable, but it needs to go both ways. And I think Robyn very well highlighted the fact that it cannot be just, let’s say, private sector vis-a-vis government, for example. But I mean, again, we need to create the ecosystem for the information sharing. So I think this is super important to be stressed. We mentioned several times building capacity, I would say, as very well Chris was mentioning, for now and for the future. Interestingly, I mean, in these days, I’m working on the potential risk of fully autonomous cyber attacks, impacting, for example, critical infrastructure. And indeed, the idea is not only to conceive it, but also to potentially build the capacity for being able to respond. And let me finish maybe with some of the remarks that building on what Rene and Julia also were saying. And again, going back to the idea of the meaningful multi-stakeholder collaboration standards. The standards are key. Or for example, international processes are key. But let’s be honest, not all the actors that should be involved in the multi-stakeholder approach have the means, have the resources, have the understanding on even how to engage. I’m thinking about the civil society difficulties in engaging with the standards bodies, for example. Or I’m thinking about many companies that would like to engage, for example, in the open-ended working group and similar processes, but they didn’t even know where to start, basically. So kudos to the ICC for organizing these panels because I think it’s also, I mean, helping in this direction. But I would say that more awareness-raising and really knowledge-building needs to be done in this sense.

Timea Suto: Thank you, Francesca.

Ms Robyn Greene: We just got the five-minute warning. So I’m gonna be extremely brief, especially since I wasn’t very brief in my initial comments. And I think I’ll just sum everything up with three thoughts. One, keep in mind how intersectional the technological landscape is, and therefore how intersectional we need to think about the policy landscape, and how that will impact the ability for the private sector to partner with government in the protection of critical infrastructure. Two, never ever underestimate the impact of encryption on cybersecurity. the importance of ensuring that all policies protect and promote the adoption of encryption rather than undermining it. And then three, also never, ever, ever underestimate the importance of data flows and the risks of data localization mandates, especially as applied to private sector entities and how that will ultimately lead to ramifications for critical infrastructure cybersecurity. Thank you so much, this has been a great panel.

Timea Suto: Thank you, Robyn. Rene, I give you the first word, I’m going to give you the last word as well. From your keynote speech after hearing all our speakers, what has changed from what you said or what would you like to highlight to build on what you said?

Rene Summer: Thank you, Timea, thank you all. Well, I mean, a lot has been said, so maybe on the margin of what has been already mentioned, I was thinking about what to say and then Elvis Presley’s song came to mind, a little less conversation, a little more action. And I think it falls down that we see the need for more actionable progress. I would really like to stress that many of the threats we see growing are stemming from those residual risks where industry will not be able to defend itself and how to address the residual risks, I think is very, very important. And the cost of inaction here is growing day by day. I think we have seen numbers that the global cost of cyber today is about 11 trillion US dollars that correspond to three G7 countries’ nominal GDP from 2022, I think, meaning Germany, UK and Japan. And we need to change the tide of this development.

Timea Suto: Concise as always, thank you, Rene, but it’s quite powerful as well. As a last word to take away. That only leaves me with one job, is to thanking you all for being here, for accepting ISIS’s invitation for this conversation and for sharing all your expertise and insight with us and with the audience here in the room and online. There will be a report of this session on the IGF website, so we will be. coming to you with that. And, of course, the ICC website is always there, so please take a look at our publications, not only on cyber security, but as Robyn highlighted, we also need to look into what we have done on data issues, especially on government access issues to data. So I’ll leave you with that. Huge thanks to my panelists, and a huge round of applause to all of you who’ve been here. Thank you. Thank you very much. Bye-bye.

R

Rene Summer

Speech speed

139 words per minute

Speech length

3123 words

Speech time

1342 seconds

Fragmentation and complexity hinder security efforts

Explanation

Rene Summer argues that fragmentation and complexity are the main enemies of security. He emphasizes that different approaches and definitions across jurisdictions create challenges for policy targeting and implementation.

Evidence

Rene mentions that many jurisdictions have developed different critical infrastructure frameworks, leading to complexity and fragmentation.

Major Discussion Point

Challenges in protecting critical infrastructure

Agreed with

Mr Wouter Kobes

Ms Robyn Greene

Agreed on

Importance of addressing fragmentation and complexity

Need for holistic approach involving all stakeholders

Explanation

Rene Summer advocates for a holistic policy approach that is well-balanced and targeted. He stresses the importance of involving all stakeholders, including governments, in addressing cybersecurity challenges.

Evidence

Rene mentions the need for clear roles and responsibilities, as well as cooperation and coordination between stakeholders.

Major Discussion Point

International cooperation and multistakeholder collaboration

Agreed with

Mr Chris Buckridge

Francesca Bosco

Agreed on

Need for capacity building, especially in the Global South

J

Julia Rodriguez

Speech speed

119 words per minute

Speech length

957 words

Speech time

480 seconds

Lack of consensus on defining critical infrastructure

Explanation

Julia Rodriguez points out that there is a lack of consensus among states when defining critical infrastructure. This lack of agreement creates challenges in developing and implementing effective protection measures.

Evidence

Julia mentions that while some sectors like healthcare have been identified, there is still a need to move from policies into action.

Major Discussion Point

Challenges in protecting critical infrastructure

Differed with

Mr Wouter Kobes

Differed on

Approach to defining critical infrastructure

Importance of public-private partnerships and information sharing

Explanation

Julia Rodriguez emphasizes the crucial role of public-private partnerships in strengthening safety and security. She highlights the need for collaboration with service providers to ensure the protection of critical infrastructure.

Evidence

Julia mentions El Salvador’s active engagement in multilateral arenas to advocate for concrete implementation measures and partnerships.

Major Discussion Point

International cooperation and multistakeholder collaboration

Agreed with

Rene Summer

Francesca Bosco

Ms Robyn Greene

Agreed on

Need for multistakeholder collaboration

Need to incorporate privacy-by-design concepts in normative frameworks

Explanation

Julia Rodriguez suggests that privacy-by-design concepts should be incorporated into the normative framework of the United Nations. This approach would help address privacy concerns in cybersecurity efforts.

Major Discussion Point

Role of standards and policies

M

Mr Wouter Kobes

Speech speed

131 words per minute

Speech length

648 words

Speech time

294 seconds

Misaligned definitions make operationalization challenging

Explanation

Mr Wouter Kobes points out that misaligned definitions of critical infrastructure across jurisdictions create challenges in operationalizing protection measures. This misalignment hinders effective implementation of security strategies.

Major Discussion Point

Challenges in protecting critical infrastructure

Agreed with

Rene Summer

Ms Robyn Greene

Agreed on

Importance of addressing fragmentation and complexity

Standards help address misaligned definitions across jurisdictions

Explanation

Mr Wouter Kobes argues that standards play a crucial role in addressing misaligned definitions of critical infrastructure across jurisdictions. He suggests that standards can provide a common framework for understanding and protecting critical infrastructure.

Evidence

Wouter mentions the Network Information Security Directive version 2 in the EU as an attempt to define critical infrastructure.

Major Discussion Point

Role of standards and policies

Differed with

Julia Rodriguez

Differed on

Approach to defining critical infrastructure

Standards adoption demonstrates leadership in internet security

Explanation

Mr Wouter Kobes emphasizes the importance of leading by example in adopting internet security standards. He suggests that organizations should measure their own security adoption to identify areas for improvement.

Evidence

Wouter invites the audience to use their security adoption tool, internet.nl, to measure their organization’s security standards adoption.

Major Discussion Point

Role of standards and policies

M

Mr Chris Buckridge

Speech speed

156 words per minute

Speech length

1496 words

Speech time

575 seconds

Capacity building essential, especially for Global South

Explanation

Mr Chris Buckridge emphasizes the critical importance of capacity building, particularly for countries in the Global South. He argues that leaving countries behind in cybersecurity capabilities creates vulnerabilities for the entire global system.

Evidence

Chris mentions the increasing vulnerability of countries not involved in the AI ‘arms race’ between attackers and defenders.

Major Discussion Point

International cooperation and multistakeholder collaboration

Agreed with

Rene Summer

Francesca Bosco

Agreed on

Need for capacity building, especially in the Global South

AI-enabled attacks anticipated as growing concern

Explanation

Mr Chris Buckridge highlights the growing concern about AI-enabled or AI-enhanced attacks. He notes that many security professionals are anticipating these types of attacks as part of what they need to defend against in the future.

Evidence

Chris cites an ISC2 survey indicating that more than half of CISOs and security professionals anticipate AI-enabled attacks.

Major Discussion Point

Emerging technologies and future threats

Capacity building must evolve to address latest threats

Explanation

Mr Chris Buckridge argues that capacity building efforts need to be agile and evolve to address the latest threats and vulnerabilities. He emphasizes the need for continuous adaptation in cybersecurity practices.

Major Discussion Point

Emerging technologies and future threats

F

Francesca Bosco

Speech speed

141 words per minute

Speech length

1879 words

Speech time

796 seconds

Rapid evolution of cyber threats exposes vulnerabilities

Explanation

Francesca Bosco points out that the rapid evolution of cyber threats exposes vulnerabilities in critical infrastructure. She emphasizes the need to understand and address these evolving threats to ensure better protection.

Evidence

Francesca mentions ransomware attacks on healthcare systems during the COVID-19 pandemic as an example of exposing technical vulnerabilities and lack of preparedness.

Major Discussion Point

Challenges in protecting critical infrastructure

Multistakeholder input crucial for developing effective frameworks

Explanation

Francesca Bosco emphasizes the importance of meaningful multistakeholder collaboration in developing effective cybersecurity frameworks. She argues that diverse expertise and perspectives are necessary to address complex cybersecurity challenges.

Evidence

Francesca mentions the Cyber Peace Institute’s work on monitoring attacks against civilian infrastructure in the Ukraine conflict as an example of multistakeholder collaboration.

Major Discussion Point

International cooperation and multistakeholder collaboration

Agreed with

Rene Summer

Mr Chris Buckridge

Agreed on

Need for capacity building, especially in the Global South

Need to prepare for potential fully autonomous cyber attacks

Explanation

Francesca Bosco highlights the need to prepare for potential fully autonomous cyber attacks that could impact critical infrastructure. She emphasizes the importance of building capacity to respond to these future threats.

Evidence

Francesca mentions her current work on assessing the potential risks of fully autonomous cyber attacks on critical infrastructure.

Major Discussion Point

Emerging technologies and future threats

Responsible approach needed in deploying emerging technologies

Explanation

Francesca Bosco argues for a responsible approach in deploying emerging technologies, particularly in critical infrastructure. She emphasizes the need to consider potential vulnerabilities, especially in legacy systems not designed for connectivity.

Major Discussion Point

Emerging technologies and future threats

M

Ms Robyn Greene

Speech speed

154 words per minute

Speech length

2223 words

Speech time

862 seconds

Intersectionality of technological landscape complicates policy approaches

Explanation

Ms Robyn Greene emphasizes the intersectionality of the technological landscape and its impact on policy approaches. She argues that critical infrastructure now intersects with commercial technologies and everyday systems, requiring a more holistic policy approach.

Major Discussion Point

Challenges in protecting critical infrastructure

Agreed with

Rene Summer

Julia Rodriguez

Francesca Bosco

Agreed on

Need for multistakeholder collaboration

Policies should be compatible with internet infrastructure and values

Explanation

Ms Robyn Greene argues that policies around cybersecurity for critical infrastructure should be technically compatible with internet infrastructure and consistent with the values of an open, interoperable, and secure internet. She emphasizes the importance of not undermining key security tools like encryption.

Evidence

Robyn mentions examples of policies to avoid, such as mandating key escrow, backdoors, content scanning, or traceability requirements that undermine encryption.

Major Discussion Point

Role of standards and policies

Regulatory interoperability needed across jurisdictions

Explanation

Ms Robyn Greene emphasizes the need for regulatory interoperability across jurisdictions. She argues that not only cybersecurity regulations should be interoperable, but also non-cyber domestic and foreign regulations that implicate cybersecurity should be compatible with current best practices.

Major Discussion Point

International cooperation and multistakeholder collaboration

Agreed with

Rene Summer

Mr Wouter Kobes

Agreed on

Importance of addressing fragmentation and complexity

Importance of forecasting future technological needs and threats

Explanation

Ms Robyn Greene highlights the importance of looking to the future and forecasting technological needs and threats. She argues that partnerships with the private sector can be uniquely impactful in helping governments anticipate future challenges.

Major Discussion Point

Emerging technologies and future threats

Policies must consider broader technological landscape impacts

Explanation

Ms Robyn Greene argues that policies must consider the broader technological landscape and its impacts on critical infrastructure protection. She emphasizes the need to track regulatory debates and proposals that are not directly about cybersecurity but can affect the ability to protect critical infrastructure.

Evidence

Robyn mentions examples such as data localization policies and restrictions on international data transfers, which can harm cybersecurity efforts.

Major Discussion Point

Role of standards and policies

Agreements

Agreement Points

Need for multistakeholder collaboration

Rene Summer

Julia Rodriguez

Francesca Bosco

Ms Robyn Greene

Need for holistic approach involving all stakeholders

Importance of public-private partnerships and information sharing

Multistakeholder input crucial for developing effective frameworks

Intersectionality of technological landscape complicates policy approaches

Speakers agreed on the critical importance of involving all stakeholders, including governments, private sector, and civil society, in addressing cybersecurity challenges and developing effective frameworks.

Importance of addressing fragmentation and complexity

Rene Summer

Mr Wouter Kobes

Ms Robyn Greene

Fragmentation and complexity hinder security efforts

Misaligned definitions make operationalization challenging

Regulatory interoperability needed across jurisdictions

Speakers emphasized that fragmentation in approaches, definitions, and regulations across jurisdictions creates complexity and hinders effective cybersecurity efforts. They stressed the need for alignment and interoperability.

Need for capacity building, especially in the Global South

Rene Summer

Mr Chris Buckridge

Francesca Bosco

Need for holistic approach involving all stakeholders

Capacity building essential, especially for Global South

Multistakeholder input crucial for developing effective frameworks

Speakers agreed on the importance of capacity building, particularly for countries in the Global South, to ensure a more secure global cybersecurity ecosystem.

Similar Viewpoints

Both speakers emphasized the importance of standards and policies that are compatible with internet infrastructure and values, and can help address misalignments across jurisdictions.

Mr Wouter Kobes

Ms Robyn Greene

Standards help address misaligned definitions across jurisdictions

Policies should be compatible with internet infrastructure and values

Both speakers highlighted the need to prepare for future threats, particularly those involving AI and autonomous systems, in the context of critical infrastructure protection.

Mr Chris Buckridge

Francesca Bosco

AI-enabled attacks anticipated as growing concern

Need to prepare for potential fully autonomous cyber attacks

Unexpected Consensus

Importance of encryption for cybersecurity

Ms Robyn Greene

Julia Rodriguez

Policies should be compatible with internet infrastructure and values

Need to incorporate privacy-by-design concepts in normative frameworks

While coming from different perspectives (private sector and government), both speakers emphasized the importance of protecting encryption and incorporating privacy-by-design concepts in cybersecurity frameworks, showing an unexpected alignment on this issue.

Overall Assessment

Summary

The main areas of agreement included the need for multistakeholder collaboration, addressing fragmentation and complexity in cybersecurity approaches, the importance of capacity building (especially in the Global South), and the need to prepare for future threats like AI-enabled attacks.

Consensus level

There was a high level of consensus among the speakers on the major challenges and necessary approaches to protecting critical infrastructure. This consensus suggests a growing recognition of the complexity of the issue and the need for collaborative, holistic solutions. However, specific implementation details and prioritization of actions may still require further discussion and negotiation among stakeholders.

Differences

Different Viewpoints

Approach to defining critical infrastructure

Julia Rodriguez

Mr Wouter Kobes

Lack of consensus on defining critical infrastructure

Standards help address misaligned definitions across jurisdictions

While Julia Rodriguez highlights the lack of consensus in defining critical infrastructure as a challenge, Mr Wouter Kobes suggests that standards can help address these misaligned definitions.

Unexpected Differences

Emphasis on encryption

Ms Robyn Greene

Other speakers

Policies should be compatible with internet infrastructure and values

While most speakers focused on broader cybersecurity issues, Ms Robyn Greene placed a strong emphasis on the importance of encryption, which was not as prominently discussed by other speakers. This unexpected focus highlights the potential tension between security measures and privacy concerns.

Overall Assessment

summary

The main areas of disagreement centered around the definition of critical infrastructure, the role of standards, and the emphasis on specific technical aspects like encryption.

difference_level

The level of disagreement among speakers was relatively low, with most differences being more about emphasis and approach rather than fundamental disagreements. This suggests a general consensus on the importance of protecting critical infrastructure, but varying perspectives on how to achieve this goal effectively.

Partial Agreements

Partial Agreements

Both speakers agree on the need for a comprehensive approach to cybersecurity, but they differ in their focus. Rene Summer emphasizes stakeholder involvement, while Robyn Greene highlights the complexity of the technological landscape and its impact on policy.

Rene Summer

Ms Robyn Greene

Need for holistic approach involving all stakeholders

Intersectionality of technological landscape complicates policy approaches

Similar Viewpoints

Both speakers emphasized the importance of standards and policies that are compatible with internet infrastructure and values, and can help address misalignments across jurisdictions.

Mr Wouter Kobes

Ms Robyn Greene

Standards help address misaligned definitions across jurisdictions

Policies should be compatible with internet infrastructure and values

Both speakers highlighted the need to prepare for future threats, particularly those involving AI and autonomous systems, in the context of critical infrastructure protection.

Mr Chris Buckridge

Francesca Bosco

AI-enabled attacks anticipated as growing concern

Need to prepare for potential fully autonomous cyber attacks

Takeaways

Key Takeaways

A holistic, multistakeholder approach is needed to protect critical infrastructure cybersecurity

International cooperation and alignment of policies/standards is crucial

Capacity building, especially for less-resourced countries, is essential

The broader policy landscape beyond just cybersecurity impacts critical infrastructure protection

Emerging technologies like AI present new challenges and opportunities

Encryption and data flows are vital for cybersecurity and should not be undermined

Public-private partnerships and information sharing are key, but need to be reciprocal

Resolutions and Action Items

Participants encouraged to use the internet.nl tool to measure their organization’s security standard adoption

More awareness-raising and knowledge-building needed on how to engage in international processes

Need to operationalize existing norms and move from conversation to action

Unresolved Issues

How to achieve consensus on defining critical infrastructure across jurisdictions

How to balance security needs with privacy and human rights concerns in policy approaches

How to effectively address residual risks that industry cannot defend against alone

How to prepare for potential future threats like fully autonomous cyber attacks

Suggested Compromises

Balancing enforcement of security requirements with incentives for appropriate behavior

Finding ways for less-resourced stakeholders to meaningfully participate in standards development and policy processes

Considering both cybersecurity and development needs in capacity building efforts

Thought Provoking Comments

Fragmentation and complexity are the number one enemy of security.

speaker

Rene Summer

reason

This succinctly captures a key challenge in cybersecurity, emphasizing the need for coordination and simplicity.

impact

Set the tone for subsequent discussions on international cooperation and standardization.

We need to really start to internalize the fact that regulatory debates and proposals that are not directly about cybersecurity or about critical infrastructure will inherently affect our ability to protect critical infrastructure in particular.

speaker

Robyn Greene

reason

Highlights the interconnected nature of policies and their unintended consequences on cybersecurity.

impact

Broadened the conversation to consider wider policy implications beyond direct cybersecurity measures.

We tried to understand, OK, how the critical infrastructure are, this critical sector is impacted by cyber attacks. Not so much from the angle of, let’s say, simply, allow me to say, collecting information about the damages, the cost, how many devices were infected, but try to understand what it really means for society.

speaker

Francesca Bosco

reason

Shifts focus from technical impacts to societal consequences, providing a more holistic view of cybersecurity.

impact

Encouraged consideration of broader societal impacts in cybersecurity discussions.

Make sure that not only cyber security regulations are interoperable with other regulations from other, like cyber security regulations from other governments, but make sure that non-cyber domestic and foreign regulations that implicate cyber security are compatible with current cyber security best practices.

speaker

Robyn Greene

reason

Emphasizes the need for regulatory coherence across different domains and jurisdictions.

impact

Highlighted the complexity of policy-making in cybersecurity and the need for a more integrated approach.

We need to be agile about that. We need, and I think Robyn mentioned, the changing landscape, the sort of ever-moving landscape that we have in terms of security. That cyber capacity building activity also needs to reflect that.

speaker

Chris Buckridge

reason

Emphasizes the dynamic nature of cybersecurity threats and the need for adaptable capacity building.

impact

Shifted the discussion towards the importance of ongoing, flexible approaches to cybersecurity.

Overall Assessment

These key comments shaped the discussion by emphasizing the complex, interconnected nature of cybersecurity challenges. They broadened the conversation from technical specifics to include wider policy implications, societal impacts, and the need for international cooperation. The discussion evolved from identifying problems to exploring holistic, adaptable solutions that consider the rapidly changing technological landscape and the need for coherent, cross-sector approaches to cybersecurity policy and practice.

Follow-up Questions

How can we operationalize international norms on cybersecurity and critical infrastructure protection?

speaker

Francesca Bosco

explanation

Moving from policies into action is crucial for effective implementation of cybersecurity measures.

How can we measure and understand the real societal impact and harm caused by cyberattacks on critical infrastructure?

speaker

Francesca Bosco

explanation

Understanding the full scope of harm beyond just technical or financial damages is important for developing appropriate responses and protections.

How can we responsibly deploy emerging technologies like AI and quantum computing in critical infrastructure while addressing potential vulnerabilities?

speaker

Francesca Bosco

explanation

Emerging technologies offer opportunities but could also create new vulnerabilities, especially when interacting with legacy systems in critical infrastructure.

How can we improve engagement and participation of civil society and smaller companies in international cybersecurity processes and standards development?

speaker

Francesca Bosco

explanation

Many stakeholders lack the resources or knowledge to effectively engage in important cybersecurity discussions and standard-setting processes.

How can we address the challenges of cyber crime safe haven jurisdictions?

speaker

Robyn Greene

explanation

Safe havens for cybercriminals pose significant risks to global cybersecurity efforts and critical infrastructure protection.

How can we ensure that non-cybersecurity policies and regulations are compatible with cybersecurity best practices?

speaker

Robyn Greene

explanation

Policies in other areas can inadvertently impact cybersecurity, so a holistic approach to policy-making is necessary.

How can we better prepare for and defend against potential AI-enabled cyberattacks on critical infrastructure?

speaker

Chris Buckridge

explanation

AI-enhanced attacks are an emerging concern for cybersecurity professionals and require proactive preparation and defense strategies.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.