WS #260 The paradox of inclusion in Internet governance

Session at a Glance

Summary

This panel discussion focused on the paradox of inclusion in Internet governance, exploring the challenges of creating truly inclusive processes in international cybersecurity and digital policy forums. The speakers highlighted how efforts to increase participation, such as proliferating initiatives and multi-stakeholder forums, can paradoxically create barriers due to the high resource demands of engaging in numerous processes.

Key themes included the need for better coordination between national and international levels, the importance of interdisciplinary teams in government delegations, and the challenge of balancing political control with meaningful inclusion of diverse stakeholders. Speakers discussed examples like the UN Open-Ended Working Group on cybersecurity and the Pall Mall Process on commercial cyber intrusion tools to illustrate these dynamics.

The discussion emphasized structural inequalities that persist despite inclusive processes, such as developing countries lacking resources to participate effectively in multiple forums. Participants noted the importance of national-level coordination mechanisms and capacity building to enable more diverse and substantive engagement internationally. The need to include marginalized communities and identities in digital governance was also raised.

Speakers proposed some best practices, including creating ownership through early stakeholder consultations, fostering interdisciplinary teams within governments, and calibrating political risk to allow for more distributed leadership of initiatives. Overall, the discussion highlighted the complex challenges of achieving meaningful inclusion in Internet governance while maintaining effective processes and outcomes.

Key points

Major discussion points:

– The paradox of inclusion in internet governance: efforts to be more inclusive can create barriers to meaningful participation due to the proliferation of forums and initiatives

– Challenges of coordinating between different government agencies and stakeholders at both national and international levels on cyber/internet governance issues

– The need for multidisciplinary teams and better knowledge transfer between different internet governance forums and processes

– Balancing political control with genuine inclusivity and openness to diverse perspectives

– Ensuring representation of minority and underrepresented groups in internet governance processes

Overall purpose:

The goal was to explore the “paradox of inclusion” in internet governance – how efforts to be more inclusive can paradoxically create new barriers to participation – and discuss potential solutions or best practices to address this challenge.

Tone:

The tone was collaborative and constructive throughout. Panelists and participants shared insights and experiences in a collegial manner, building on each other’s points. There was a sense of shared purpose in trying to tackle a complex challenge. The tone became more solution-oriented towards the end as participants reflected on key takeaways and potential next steps.

Speakers

– James Shires: Co-director of Virtual Routes, a UK-based NGO working on cybersecurity and Internet governance research, education, and public engagement

– Yasmine Idrissi Azzouzi: Cybersecurity program officer at the ITU (International Telecommunication Union)

– Louise Marie Hurel: Associate Fellow with Virtual Routes, works in the cyber program at RUSI (Royal United Services Institute)

– Corinne Casha: Representative from Malta’s Ministry of Foreign Affairs

Additional speakers:

– Audience member: Julia Eberl from the Austrian Foreign Ministry, working at the mission in Geneva

– Audience member: Akriti Bopanna from Global Partners Digital, previously worked in India’s foreign ministry for G20

– Audience member: Natasha Nagle from the University of Prince Edward Island

Full session report

Expanded Summary: The Paradox of Inclusion in Internet Governance

This panel discussion, featuring experts from various backgrounds in cybersecurity and internet governance, explored the complex challenges of creating truly inclusive processes in international cybersecurity and digital policy forums. The central theme was the “paradox of inclusion” in internet governance, a concept introduced by James Shires, co-director of Virtual Routes.

1. The Paradox of Inclusion

The discussion began with Shires explaining that efforts to increase participation in internet governance, such as proliferating initiatives and multi-stakeholder forums, can paradoxically create barriers due to the high resource demands of engaging in numerous processes. This proliferation of initiatives makes it difficult for stakeholders, especially those with limited resources, to participate meaningfully across all forums.

Louise Marie Hurel, Associate Fellow with Virtual Routes, expanded on this concept by highlighting how the specialisation of debates leads to fragmentation of discussions. She also raised the provocative point that inclusion efforts can be weaponised for political purposes, with the proliferation of initiatives sometimes serving as a political strategy to control the scope of debates and who participates in them.

2. Challenges of Coordination

A significant portion of the discussion focused on the challenges of coordinating between different government agencies and stakeholders at both national and international levels on cyber and internet governance issues. Yasmine Idrissi Azzouzi, Cybersecurity program officer at the ITU, emphasised the importance of national-level coordination for effective international participation. She highlighted the need for creating ownership at the national level across different expertises, including various ministries and critical infrastructure providers.

Corinne Casha, representing Malta’s Ministry of Foreign Affairs, echoed this sentiment, discussing the establishment of national cybersecurity committees for better coordination. She also pointed out the difficulty in maintaining consistent representation across multiple forums, underscoring the challenge of balancing specialisation with comprehensive engagement.

The lack of communication between different UN processes, particularly those based in Geneva and New York, was raised as a concern by an audience member from the Austrian Foreign Ministry. This highlighted the need for better coordination not just within nations, but also between international organisations and processes.

3. The UN Open-Ended Working Group (OEWG) on Cybersecurity

Louise Marie Hurel provided significant details about the UN OEWG on cybersecurity, highlighting it as an example of the proliferation of international forums. She discussed the challenges associated with this process, including the difficulty of meaningful participation for smaller states and non-state actors, and the potential for forum shopping by more powerful actors.

4. The Pall Mall Process and Multi-stakeholder Initiatives

Corinne Casha discussed the Pall Mall Process on commercial cyber intrusion tools as an example of an initiative attempting to address inclusivity challenges. This process aims to develop guidelines for the responsible development, transfer, and use of cyber intrusion tools through multi-stakeholder consultations. It illustrates efforts to balance political control with diverse participation in addressing complex cyber issues.

5. Strategies for Improving Inclusion and Representation

The speakers proposed several strategies to address the challenges of inclusion:

a) Interdisciplinary Approaches: Yasmine Idrissi Azzouzi stressed the need for interdisciplinary teams to engage in various processes, combining technical, diplomatic, and policy expertise to address complex digital issues effectively.

b) Multi-stakeholder Consultations: Azzouzi and Casha both emphasised the importance of creating ownership through multi-stakeholder consultations, involving diverse stakeholders in the decision-making process.

c) Capacity Building: Casha mentioned funding initiatives to support participation from developing countries, addressing the resource imbalance that often hinders inclusive participation. Specific examples included the Women in Cyber Fellowship and the Global Conference on Cyber Capacity Building.

d) Fostering Productive Disagreement: Hurel highlighted the importance of fostering dialogue that allows for productive disagreement, suggesting that true inclusion requires openness to challenging perspectives.

6. The Role of Ministries of Foreign Affairs

Corinne Casha and other speakers discussed the crucial role of Ministries of Foreign Affairs in coordinating cyber issues at both national and international levels. They emphasised the need for these ministries to act as bridges between various domestic stakeholders and international forums, ensuring coherent national positions and effective representation in global discussions.

7. Persistent Challenges and Unresolved Issues

Despite these proposed strategies, the discussion highlighted several persistent challenges:

a) Structural Inequalities: Hurel pointed out that structural inequalities persist despite efforts at inclusion, particularly affecting developing countries and their ability to participate effectively in multiple forums.

b) Balancing Political Control and Inclusion: There was a recognition of the tension between maintaining political control and achieving genuine inclusivity, with Casha noting that relinquishing some control is necessary for true inclusion.

c) Representation of Minority Identities: An audience member, Natasha Nagle from the University of Prince Edward Island, raised the important question of how to ensure representation of minoritised identities in digital governance spaces.

d) Circumvention of National Legislation: Akriti Bopanna from India provided an example of how international forums can be used to circumvent national legislation, illustrating another aspect of the paradox of inclusion where global processes might undermine local democratic decisions.

8. Conclusion and Future Directions

The discussion concluded with Hurel summarising three key paradoxes: meaningful leadership, meaningful coordination, and meaningful dialogue. These encapsulate the ongoing challenges in achieving true inclusion in internet governance.

Corinne Casha suggested a follow-up session or report to further explore the issues raised during the panel. The discussion also touched on the Global Partnership for Responsible Cyber Behavior as a potential framework for addressing some of the challenges discussed.

The speakers agreed that addressing the paradox of inclusion requires careful balancing of specialisation and comprehensive engagement, political control and diverse participation, and national coordination and international representation. The ongoing nature of these challenges underscores the need for continued dialogue and innovation in approaches to internet governance.

Session Transcript

James Shires: Yes. Hi, Louise. Testing. Can we hear and see you? Can you hear and see us? Yes, all good. Can you hear me okay? Very well. Let’s get started. So, hi, everybody here in the room, and welcome, everybody, online. We’re very happy to be hosting this panel on the paradox of inclusion in Internet governance. My name is James Shires. I’m co-director of Virtual Routes. Virtual Routes is a UK-based NGO that works in cybersecurity and Internet governance research, education, and public engagement. So, we have a fantastic lineup of speakers today. We have Yasmine Azzouzi, to my right, in person. We have Louise Marie Hurel online. And we have Corinne Casha, who, unfortunately, is in a taxi coming from a very similarly named conference center that she was accidentally taken to and will arrive soon. These things happen. So, I’ll just say a little bit about the purpose of the panel overall, and then I’ll hand over to our speakers. I will start with Louise online. Then I’ll go to Yasmine. I’ll talk a little bit about my perspective on the paradox of inclusion. And hopefully, by then, Corinne will have sorted out her travel issues. We’ll then open the floor to questions and discussion for everyone, both in person and online. We’re very much looking forward to the discussion. And thank you all for being here early on a Thursday morning. So, we put together this panel because we felt that there was a real issue with internet governance and inclusion. And we call this the paradox of inclusion. The idea here is that we see a proliferation of efforts to bring in different actors in internet governance, whether these are multi-stakeholder forums, whether these are efforts to include developing countries and smaller states or states with fewer resources, and there’s lots of different efforts to do these, through different conferences, initiatives, meetings, and so on.
In fact, there’s so many of these different efforts that actually keeping up with them all, keeping track of them all, and participating meaningfully in them all, is itself a high resource burden. And that’s what we term the paradox of inclusion. Internet governance recognizes that it has to be inclusive. It has to bring in multiple stakeholders. But few are really able to track the full range of internet governance forums, from this one to those of the UN, such as the OEWG on cybersecurity, through to the Global Digital Compact, through to the Cybercrime Convention, through to multi-stakeholder initiatives such as the Paris Call, et cetera, et cetera. So this is what we want to talk about today. The starting point is recognizing that inclusion matters, that there are genuine and very well-developed efforts to make internet governance inclusive, but that sometimes these efforts, as we would say in English, for want of a better phrase, shoot themselves in the foot. They actually bring up barriers to participation by requiring such a thin spread of attention and resources across the internet governance portfolio. That’s the idea behind the session. We’d love to hear your thoughts on this paradox of inclusion, but before we do so, I’ll turn to some short opening remarks from each of our speakers. Our first speaker is Louise Marie Hurel, who is an Associate Fellow with Virtual Routes and is also working at the Royal United Services Institute. Louise, I’ll leave you to do a much better introduction of your own work than I can, and the floor is yours.

Louise Marie Hurel: Thank you very much, James, and thank you all. I would love to be there with all of you, but sadly, and thankfully also, because since we’re talking about inclusion, I think just being able to connect remotely, the IGF has always been great in that sense. And I think James already kind of set a very interesting tone to our conversation here. I am Louise Marie Hurel. As James said, I work in the cyber program at RUSI. And we’ve been reflecting a lot on different elements related to that, but personally, I’ve been attending the IGF for 10 years now, which is kind of like baffling. And I think there’s no other better place to actually have this conversation, because being involved in the IGF throughout different cycles of maturity, and also other spaces such as ICANN, but also being increasingly involved in the cybersecurity discussions, which is the bit that I’m going to talk a little bit more about, I think you see those different communities of practice emerging and specializing. So when we look in particular at the proliferation of initiatives, especially when it comes to cybersecurity, if you look from 2015 or 2017, right, to today, it’s quite impressive to see how many initiatives, especially on cybersecurity, have emerged. Back then, you would have the Group of Governmental Experts at the UN as the one place, and it was a very kind of multilateral, I mean, it’s still a multilateral process, but you would have 30 governments or so discussing what is state responsibility and how international law applies to cyberspace. And back then also you would have the Global Forum on Cyber Expertise, which back then was also like just called the London Process, starting to mature and to become a bigger platform. And today these initiatives have consolidated quite a bit.
I mean, obviously the negotiations at the UN have been taking place for at least 20 years when it comes to state responsibility, but the traction that these dialogues have had is quite substantive. So just to frame this a bit, right now I think we see two movements. One is that we can look at the proliferation of these initiatives firstly as the specialization of the debate, right? I remember I used to attend the IGF, like, again, like 10 years ago, and I would look, where is the cyber community here? And you would have one or two panels talking about this from, let’s say, a more cyber diplomacy initiative. And you would see government representatives, and I remember in Geneva, trying to talk about the GGE at the IGF. But nowadays there’s so many other spaces. There’s the Counter Ransomware Initiative, which talks about, well, as the name says, you know, ransomware. The Pall Mall Process, which looks at commercial cyber proliferation. The OEWG, which is looking at state responsibility. And also, I mean, to some degree, the interdependence between state and non-state responsibility in cyberspace. We have the GFCE, which is looking at capacity building, cyber capacity building, and obviously Yasmine will definitely, you know, touch upon capacity building from the ITU’s perspective. We have the Tech Accord, which is an initiative that was spearheaded by Microsoft, but that tries to create this community of practice and thinking within the private sector, in different parts of the private sector, and how they see norms for their own, let’s say, sector when it comes to cybersecurity. And the Paris Call, as James already mentioned, which is kind of a mix of different stakeholder groups. So one way in which we can see that discussion, as I said, is the proliferation as the specialization of the debate, where we think that, you know, we cannot have this ethereal, broad conversation. We need to get to these smaller bits and spaces.
But obviously the other side of the coin is looking at the proliferation of the debate as also being a political strategy, which it is in many ways. So if you think about the Ad Hoc Committee on Cybercrime, that is the result of a long friction at the geopolitical level, of Russia pushing forward in some ways, not just Russia, but in that case the presentation of the resolution to have a legally binding instrument on cybercrime. And that, in a way, reflects the vision of many other countries that have not been involved in the Budapest Convention, or that don’t necessarily agree that they should just subscribe to something, but that they should be part of the development of it, which is, you know, an increasingly and very valid point from their standpoint. So you have those movements, such as the Ad Hoc Committee, which has ended right now, that become part of that, let’s say, political strategy. Another example of proliferation being a political strategy is precisely to specialize the debate, because then you can control a bit more what the scope is and who is involved in this conversation. So on the other hand, if we look at the Counter Ransomware Initiative, it started out as something that was very much State Department-led, right? The US spearheading that, but then it has grown throughout the last couple of years. And that requires, again, kind of, how do you create a platform for a particular dialogue, but ensure that you’re still open and flexible to bring others on board. And I’m sure James will talk more about the Pall Mall Process as something that’s quite interesting as well in terms of that proliferation as a political strategy. But I’d say, from an OEWG standpoint, and James, please flag to me if I’m speaking too much, but I just wanted to give a little bit of a glimpse of the OEWG as part of this paradox of inclusion, right? I think it comes as this proposed solution.
So back in 2019, when you had the start of two simultaneous processes, the last GGE and the OEWG, you had this narrative that the OEWG, as an open-ended working group, as a UN mechanism for a particular type of dialogue, would be more inclusive, firstly because it would include all member states. So it would shift the conversation to all of the General Assembly members. So from a composition standpoint, it seems that it would probably be more inclusive, and also that it would have some kind of participation from non-state actors. So the enabler is that we’re going from 30 to 193 countries. The challenge there is obviously that enabling effective participation of member states as part of this process is a whole different ball game, right? We’ve been working quite a lot at RUSI to facilitate workshops on responsible cyber behaviour and just working with other governments, like, let’s say, small island states, talking about ransomware and trying to really kind of democratize the access to some bits of the debate, or to go deeper into some elements of the OEWG agenda. There are structural elements that are just reproduced in these spaces, which is, normally, if it is a small UN mission, you have one diplomat, one person that’s there covering a myriad of themes, right? So even if you have a process that has gone to the 193 countries, is it actually effective participation? Because, again, many countries won’t see state responsibility in cyberspace as the first topic on their national priority list, and they only have one person at the UN covering these topics. And on top of that, they really don’t prioritize it, because that’s also challenging. And if they want to bring someone from the capital, right, to participate, that’s the cost of meaningful inclusion as part of that expansion of those that can participate.
So you have an enabler from a process standpoint, but that does not address the structural challenges there. Obviously, there are some solutions, such as the Women in Cyber Fellowship, which is an initiative funded by the US State Department, the UK government, Australia, and a couple of others, that seeks to bring women diplomats, or, let’s say, representatives of national cybersecurity agencies, to be the representatives at the OEWG. So again, you enable a process, but then how do you make sure that that process is actually inclusive at the end of the day? So it’s more of a walking-the-talk paradox that we’re thinking about here. The second logic of inclusion within the OEWG, after the state one we talked about, is non-state actor inclusion. Again, the process does enable non-state actors to participate, unlike the GGE, which, as I said, was 30 states that participated, and it was just them. No opportunity to look at what they were discussing, or even a webcast at the UN, and the OEWG does all of those things, which is great. But the disabler, I’d say, or, let’s say, the paradox of inclusion for non-state actors, is obviously that it has become a weaponized discussion. So since the start of this latest OEWG, the 2021 to 2025 one, what we’ve seen is, right at the start, for at least a year of the process, or more than half of the first year, there was a stalemate between states that wanted to promote effective modalities for stakeholder participation, so that stakeholders would be able to give a speech over at the UN, or to listen in through the UN webcast, or to be accredited, and other states that said, no, we don’t need to have those stakeholders. But they also said, well, if we have to have these stakeholders, we need to have a veto power over who gets to be in the room. So that has led to a stalemate for most of the first bit.
And after that, to the effective vetoing of different organizations, including my organization, and also, let’s say, really important technical community experts, such as the Forum of Incident Response and Security Teams (FIRST), which could effectively provide inputs into some of the conversations, but that were also vetoed by some member states. So you see that there is a process enabling participation, but that there are political challenges when it comes to the meaningful inclusion of non-state actors in these spaces. And just to finalize, because I’m sure that I’m almost done with my time, is looking at the third logic of inclusion. So we talked states, non-state actors, and I think the third one is thinking about the context where this dialogue is being held being more inclusive. So this is a first committee process, which means that usually it’s the highest level of the conversation on international peace and security when it comes to cyber. So obviously the chair of this process has a lot of responsibility to shape that inclusion meaningfully. So the enabler there, for thinking about this broader context of the dialogue, is the chair, for example, hosting online convenings. He organized the high-level round table on cyber capacity building, where organizations from different parts of the world could effectively share their experiences in implementing cyber capacity building. You see also in this process different proposals from developing countries gaining traction, such as Kenya suggesting and tabling a recommendation for a portal to look at threats, so that other states that might not have as much cyber threat intelligence, or that might have less access to information, can share. So you also have coalitions of different states coming together, developing cross-regional proposals, which is not something specific to the OEWG, but it shows that the process is enabling those types of interactions in spite of the geopolitical tension between the two poles that you see effectively happening in the room. You see, for example, El Salvador working with Estonia, working with Switzerland, to think about the applicability of international law in cyberspace and tabling, let’s say, documents for a further conversation. But the outcome, or, let’s say, the background tension in this third bit, the space of this dialogue, is really the question of what comes next. So I don’t know how many of you are familiar, but the OEWG is coming to an end in July 2025. And there is another proposal for a programme of action. So that is another way in which we could structure the dialogue at the UN, within, you know, a regular institutional dialogue for cyber. And there is this dichotomy between these two proposals. The OEWG was the result of a Russian-tabled proposal, which has effectively been, you know, successful in the past five years in actually pushing the conversation forward, at least maintaining that dialogue. But there is obviously a need for a more dynamic dialogue that can go deeper into different topics and that, you know, can more effectively include stakeholders. And that’s the programme of action. Not that one is better than the other, but there are different proposals for how that regular institutional dialogue should happen, and the member states will need to decide. So within the context of these three paradoxes, in many ways, how do you think about that going forward? And I think there is a very politicized tension between these two proposals. And I think right now is the moment for thinking about the design of the process, right?
And I don’t think that necessarily we’re always tackling those underlying inequalities. But in any case, I just wanted to stop there. I think there are other bits in terms of the relationship between the IGF and the OEWG, or the coexistence of different UN processes, especially on cyber, but I’m very happy to talk about that afterwards. I just wanted to maybe set the scene from an OEWG standpoint of what these different logics of inclusion are and what the challenges to these three logics of inclusion are.

James Shires: Louise, thank you so much. That is an incredibly rich introduction and overview of the paradox of inclusion. And I really appreciate you breaking it down into this question of inclusion between states, of non-state actors, and also these other modalities of inclusion as well. Given that you covered so much ground there, I do just want to give the people in the room and online the chance to respond or ask a few questions while it’s fresh in their mind. And then we will turn to our next panelist. So if there is anyone who would like to come in online, please do put your question in the chat. If you’d like to come in in person, obviously just raise your hand and we will bring the mic to you. So while you’re maybe thinking of that, I would just highlight one recent publication from Virtual Routes through our site, Binding Hook. Now, Binding Hook is a way to disseminate academic research in an accessible way to a wide audience. And there’s a new piece from last week on Pacific Island cybersecurity, how to co-design cybersecurity governance for and with Pacific Island states. So if you’re interested in that part of Louise’s remarks, please do check out that piece that has just come out on Binding Hook. If there are no questions in the chat, and everyone here seems very content, I will move on to our next panelist, who is Yasmine Azzouzi, and who is going to talk a little bit more about cybersecurity from the ITU’s perspective.
So Yasmine, again, please do introduce yourself a little bit more, and the floor is yours.

Yasmine Idrissi Azzouzi: Thank you very much, James. Thank you, Louise, for that incredible overview. Good morning, everyone. So my name is Yasmine. I’m a cybersecurity program officer at the ITU. The ITU, as many of you know, is the UN specialized agency for information and communication technologies, and it is a global organization. We are at a pivotal moment for Internet governance. Next year, we have the WSIS+20 review. We’re currently navigating the Global Digital Compact, as well as topic-specific processes like the Open-Ended Working Group on cybersecurity and the Ad Hoc Committee on Cybercrime, and we’re seeing in general, as Louise mentioned very nicely, that there is a proliferation of fora, which has obviously both opportunities and challenges. So I’m going to talk a little bit about what we’re doing, and how we’re doing it. We have a number of fora that are addressing overlapping Internet governance issues, and this is being compounded, of course, by duplication and silos at times. To give a specific example from the ITU: the agreements and resolutions that member states have voted on at ITU statutory meetings can sit apart from what those same member states are discussing under the Open-Ended Working Group’s agenda on cybersecurity. And this is partly due to member state representation at the ITU being mainly ministries of ICT, communication, or digitalization, while at the Open-Ended Working Group it is first committee diplomats at times, but also representatives of national cyber agencies. This shows, basically, a lack of coordination at the national level, which can prevent countries from being consistently represented across these processes.
It’s about financial resources, and at times it’s also the technical expertise, the ability to communicate, and the ability to navigate the interdisciplinary nature of digital policymaking. This, I think, is the core of why this paradox exists: the silos that are present at national level are being reflected internationally. In fact, digital issues touch upon multiple disciplines, spanning from national security to economic development to human rights to sociological change, and this very interdisciplinarity, while enriching, also contributes to the fragmentation. I’ll take cybersecurity as an example. As I mentioned, the Open-Ended Working Group addresses cybersecurity within the First Committee of the General Assembly, which focuses on peace and security, and the Ad Hoc Committee on Cybercrime operates under the Third Committee, yet cybersecurity also has implications for the Second Committee on economic development, where security plays a critical role. In parallel, at the ITU and in the WSIS processes, we are emphasizing technical cyber capacity building for the purpose of sustainable development and economic and social development, which is far removed from the peace and security aspects being discussed elsewhere. The Global Digital Compact sees cybersecurity more as an enabler of securing the digital space in general; it focuses very much on harms to privacy and calls for international cooperation in a more high-level way. This compartmentalization makes it challenging for stakeholders, particularly from low-resource nations, to align their priorities and to maintain continuity across these different sectors, which again causes silos and duplication.
Paradoxically, given the topic, the solution may actually lie in reducing this fragmentation at national level: improving, for example, inter-agency cooperation, and focusing on fostering interdisciplinary teams that are equipped to engage meaningfully. I think this approach can offer some key advantages. So first off, countries need to establish multidisciplinary teams that combine expertise from the technical, diplomatic, and policy-making communities. For example, representation of national cybersecurity agencies or national computer incident response teams at the Open-Ended Working Group often brings practical, context-specific experiences that are a bit different compared to, let’s say, those of traditional career diplomats. However, this of course requires a pipeline of trained multidisciplinary professionals who have expertise in technology but also in diplomacy, so being able to operate at that nexus, in a way. Second, capacity building with inclusivity in mind must be key. Initiatives must prioritize inclusive capacity building that can bridge technical and policy silos. So, for example, at the ITU we have programs that bring together those two communities at national level so that they become accustomed, let’s say, to working inter-agency in that manner. Programs should focus on enabling countries to engage in Internet governance fora in a holistic manner, so being equipped from both the technical and the policy side.
Third, while it is a very utopic goal, and probably neither realistic nor useful, to think of consolidating all Internet governance fora into one, what we can focus on is enhancing coordination and avoiding duplication: aligning mandates and creating, let’s say, better linkages between discussions, for example on cybersecurity capacity building, so that these agendas can be implemented in a more holistic way. I would like to conclude by saying that I think we need to be very careful about how we create these interdisciplinary teams, and this can also include having coordination mechanisms in place that regularly consult across disciplines, so that there is consistency when it comes to international negotiations and international fora. So, just to conclude, as we’re looking at, say, the future of Internet governance, we need to be very careful about whether there is a difference between…

James Shires: Thank you, Yasmin. And just to repeat my call from earlier, if anyone does have any questions for Yasmin or Louise at this stage, then please do bring them into the discussion. We would love to hear from you, whether you are in person or online. I am extremely pleased to have our third speaker here, Corinne. Corinne, please do come up to the table. You snuck in behind me and I didn’t even see you, so that is clearly operating in stealth mode. Corinne will hopefully follow on with her perspective on the paradox of inclusion, and maybe also the paradox of travel in Riyadh as well. Corinne, it is a pleasure to have you here. I will hand over the mic to you.

Corrine Casha: Yes, hi. Thank you. It is a pleasure to be with you here today. I don’t have much to add, actually, to what Yasmin already said, because I think she really encompassed the discussion very well, and I noticed that she really hit the nail right on the head about the paradox of inclusion. So I just really wanted to add on to what Yasmin said. I think it’s important to avoid the fragmentation of all these processes, partly by having the technical and the political level work very closely together. And the issue of resources was one that really struck me. I know this is one of the main issues, that there is a lack of resources. And from our perspective as a Ministry of Foreign Affairs, we are really working hard to provide resources where necessary. So if it’s, for example, a lack of representation, we fund fellows; we also fund diplomats from, let’s say, least developed countries, et cetera, so that they are able to be represented at the highest levels of decision-making. One other aspect that struck me was the need to harmonize the processes and also to enhance coordination. I think this is really key. And obviously, the Global Digital Compact was only adopted last September at the UN General Assembly, so we will see how it will fit in with the other processes. But that’s all from my end. I will add some closing remarks as well, but I really wanted to hear what participants think about this, and what their views are on how to promote more inclusion and avoid the fragmentation of the processes. Because this will be really key for us as governments in factoring in what needs to be done for this to be, let’s say, a more harmonized process.

James Shires: Thank you. Thank you, Corrine. And yes, we look forward to your concluding remarks as well. So before we turn to an open discussion, and at that point I will be asking everyone what your perspective is on the paradox of inclusion, so please do get some interventions ready, what I’m going to do is reflect a little bit on one example of the paradox of inclusion that I’ve been working on very closely. And I’ll use Louise’s framework, because I think it’s a very helpful one, of inclusion at a state level, inclusion at a non-state or multi-stakeholder level, and inclusion in terms of modalities as well, to illustrate how this paradox emerges through a particular process. And the one I’ll talk about isn’t the OEWG or the Global Digital Compact, these sort of very high-profile major ones. It’s a little bit more niche. It’s the Pall Mall Process. Now, can I get a quick show of hands in the room if you’ve heard of or know about the Pall Mall Process? If anyone does, then please put your hand up. Glad the panelists do as well. If you don’t, then I will give a quick overview of what it is. So recently, there’s been a recognition among many states that many offensive cyber tools, otherwise known as cyber intrusion capabilities, have both positive and negative uses. They have positive uses because they are necessary for cybersecurity. They help organizations test and improve their defenses through things like penetration testing, so asking someone external to try and hack into your networks, so you know where the holes are and you can fix them. They have negative uses when they are used by cybercriminals, for ransomware or other theft, and also when they are misused by state actors. This is where companies offer cyber intrusion capabilities commercially, often known as spyware, and states buy and then use them. This is, in many cases, perfectly legitimate activity, right?
States need such surveillance capabilities for reasons of national security, but often, in many cases, they have overstepped the line, right? They have used capabilities in ways that are not proportionate, that lead to significant human rights violations as well. This is a recognized issue in internet governance, and there have been many efforts to try and address it at both a national and a multi-stakeholder level. So, for example, you have the US, which is a major producer of these capabilities, right? It’s a major center for spyware development, research and sale. The US imposes sanctions on particular companies that it thinks have violated the norms or boundaries that it wants to impose. So there are very high-profile cases of US sanctions and indictments and other measures, such as restrictions on government procurement, so US government agencies can’t buy from certain companies, in order to shape this market. There are also efforts at the multi-stakeholder level. Some of you may have heard of the Cybersecurity Tech Accord, a group of tech and industry companies who came together to develop their own voice on internet governance. They produced principles for what they called curbing the cyber mercenary market, right? So they put a different frame on it. But again, they’re trying to intervene in this sphere. Now, again, that wasn’t especially effective. And so what happened last year is a new initiative was launched, called the Pall Mall Process. This aimed at bringing in industry, including the spyware industry itself, including the companies worried about it, such as the big tech companies, including the states buying and using spyware and commercial cyber intrusion capabilities, and those being affected by it. So in short, it was a big tent initiative, right? It wanted to get everyone together to find a solution to this complex problem.
And let’s unpack that initiative, which has now been running for just under a year, at the three levels that Louise mentioned. Firstly, at a state level, was it inclusive? Well, in one way, yes, right? Any state that wanted to sign up to the Pall Mall declaration published in February could do so, right? They could attend the conference, they can engage in discussions, and put their name at the bottom. And now the state interaction on these discussions is getting more detailed. But it was still not completely equal, right? The sponsors of this initiative, the funders and organizers, are the British and French governments, right? They are the ones running it. They are aiming to include as many other states as possible. But ultimately, the conferences are in Europe, they’re in the UK and France, and most of the attendees and organizers are from these states. So there is a tension there between having a very stereotypical, sort of European perspective on the issues and making it as wide an invitation as possible. So that’s at the state level. At the multi-stakeholder level, yes, there are efforts to include multi-stakeholders. So Virtual Routes is a multi-stakeholder participant in the Pall Mall Process, and multi-stakeholders have the opportunity to engage at these conferences. They can submit responses to a consultation that closed a couple of months ago on these issues, and they will continue to be able to feed into the process. But again, this is clearly a two-tier system. You have in each event a day reserved for multi-stakeholder discussion and then a day reserved for state-only discussion. So while the multi-stakeholders are able to input, they are not able to observe or have any understanding of what is going on in the state negotiations. So in a way, it’s a bit like the OEWG on one side and then the GGE on the other side, right, where it’s just the states in a closed forum. So multi-stakeholder, yes, but also two-tier.
And then finally, in terms of modalities, and this is where it is very interesting, the open consultation that ran for three months over the summer this year was on the declaration and the way forward for the Pall Mall Process. What was interesting was that a lot of industry coalitions and companies contributed to this consultation. Most of them were from the cybersecurity industry, right? They were contributing from the defensive side. But the aim was also to get the companies who build and make and sell the spyware to contribute, right? To make it a genuinely multi-stakeholder process. And it didn’t quite succeed in that, right? They were looking for more contributions from all parts of the industry, as well as from civil society. And so, again, when you go into each of these processes, and that’s just one example, you can unpack the layers in which efforts at inclusion are both very laudable, right? They do, in fact, increase inclusion. But on the other hand, they only go so far. And indeed, the barrier to entry to these processes, the amount of knowledge you have to have to enter, is far beyond maybe that of an embassy diplomat or a non-specialist, right? You need to really be engaged in these processes to contribute effectively. So with that example, I will now open the floor. And how we’d like to run this second half of the session is just to ask the participants, in person or online. If you’re online, then please do put it in the chat; I will read it out, and we can ask the panelists to reflect on your remarks. To ask you some very simple questions. Firstly, does the paradox of inclusion ring true to you? Is it something that you recognize in your own work? Or is it something that you think, nah, what are you talking about, why are we even here, right? So that’s question one. Question two is, where do you see it most relevant to your work? And how might you try and overcome it, right?
So maybe first, is it something that you recognize? And then where and how do you see it happening? I’ll pause. In the room, please do put your hand up if you’d like to contribute. Online, please do come in on the chat. Yes. Okay, I can hear myself. That’s good. Thank you so much. I’m Julia Abel from the

Audience: Austrian Foreign Ministry, currently working at the mission in Geneva, and I’ve been involved in some of the processes that you’ve talked about. So it was very enlightening to see kind of a full overview, because I can very much resonate with the questions that you’ve raised, that as diplomats we tend to work on a couple of these processes, but not all of them. So I was not aware of the Pall Mall declaration, for example, because I’ve been on a mission abroad, and not in capital working on the processes holistically. So that was very interesting. Thank you very much for bringing that up as a question. I wanted to make three points on everything that has been discussed. One was, from my point of view, we need different expertise in all these processes. And what you’ve brought up, for example, cybercrime and cybersecurity: when we look at cybercrime from a national point of view, we had a lot of our criminal law experts involved nationally, looking at actual criminal law provisions and how they would be applied for us. When we talk about cybersecurity, we get a lot of the defense side in, we get international law questions in. So this requires quite different expertise and different people, also at a national level. But I do hear the need for coordinating better nationally as well, so that is something for sure. And we also always try to bring national experts into our delegations when we have these discussions on different processes and negotiations. But Austria also funds other, like, developing country diplomats to come to certain processes. We did it, for example, in the cybercrime process: we wanted to get the experts from capitals, from developing countries, to be part of the process, because we don’t only want to talk among diplomats, we want to talk with the people that actually have to implement this at a national level.
So that is something we did, and I know the Council of Europe is also very active on capacity building when it comes to that. So there are initiatives, and there is a wish to bring in the people that actually work on these issues, at least when we talk about government participation. One thing that resonated with me is what Yasmin said about what the ITU does and then what the different committees in New York do. Having worked in both environments, I do see that there is a bit of a lack of communication between Geneva and New York on processes like that, and then, of course, you have the capitals in it as well. And that’s a bit of my question to the panellists: have you seen any best practices, or do you have any ideas of how to help, from a member’s perspective, from a stakeholder perspective, and also from a UN agency perspective? Like, how can we strengthen the communication between these processes to avoid duplication, which is a strain for all of us, really? Thank you so much for your intervention,

James Shires: and a combination of both very insightful points and also a good question to push us to identify best practices as well. Yasmin, given we had a little bit on the ITU, maybe I could turn to you first, and then, Louise, online, to give a little response. Over to you. Thank you very much, and thank you for the very insightful question.

Yasmine Idrissi Azzouzi: Indeed, I think it’s a decades-long problem, the lack of communication between Geneva and New York. But when it comes to good practices, one thing that comes to mind is the kind of work that we do at the ITU at national level in particular. To give you an example, when we are supporting developing countries in particular, in the development or establishment of something like national cybersecurity strategies, part of that agreed-upon methodology is to actually have consultation workshops beforehand that are inclusive of many different stakeholders. And at times we found ourselves in contexts where those same actors had never actually been in the same room together. But this is sort of a prerequisite that we put: if you want your strategy to be developed, these are the people that you need to have in the room. And that stems from, of course, an inclusion need, but also a very practical need. I mean, if a strategy, since it’s a living document, something that needs to be implemented, is developed by a small group of stakeholders and is then actually asked to be implemented by wider stakeholders, it makes it difficult. So creation of ownership at national level, across different expertises: ministries and national agencies, but also critical infrastructure providers. We’ve had in the same room central banks, energy representatives, but also ministries ranging from the MFA all the way to, of course, defence, interior and others, because it’s extremely interdisciplinary, and a national strategy also needs to take all of those elements into consideration. So this is, say, a model that we have seen being the start of better coordination at inter-agency level where you didn’t have it before. We’ve often felt a bit of resistance at times from, let’s say, the lead agency at national level due to, I guess, wanting to keep ownership over things.
But then, gradually, they have seen that shared ownership has actually yielded better results, in terms of coordination but also in terms of effective implementation. And having different perspectives on the same topic has actually taught different stakeholders a lot. So I think this could be one good practice. Having, again, it might seem like a leitmotif here, but having multi-stakeholder actors at the table prior to, let’s say, major negotiations or the establishment of major strategies or policies is part of the solution. It might sound obvious or simple or easy, but I think this is really where it starts.

James Shires: Thank you. Thank you, Yasmin. And Louise, over to you.

Hurel Louise Marie: Yes, thank you so much for that question. And I mean, I 100% agree with what Yasmin just said. But just to add on top of that, two points came to mind when you were asking your question. One of them is, I wonder whether we need to create a community that goes to these places, and I know that that is like spreading oneself thin. And I think that’s exactly where we started this conversation: do we have to follow so many processes that we can’t actually do that? So, from both a government and a non-governmental perspective, attending, for example, the OEWG and attending the IGF means that community being able to leave that privileged space. And, I mean, you could argue that there are some privileges to being able to actually attend the IGF. But having those communities go to other places means that, whenever there’s not an overlap, there’s a new set of community there, and you ensure that there is that knowledge transfer, or at least that the experience from that particular room is going to the other place. So I think the question here is how do we take these New York dynamics, or New York-centric dialogues, and how do we translate it, transpose it, and, let’s say, provide the space for those same dialogues, even if not in the same format, to happen in other places. And I think there are some answers, or let’s say best practices, that I’d like to highlight. The first one is, well, just a couple of days ago, and starting from that same productive discomfort, we, RUSI and Virtual Routes, organized the cyber policy dialogues over here at the IGF, which is a networking session, right? And the purpose is really to identify people that are doing that kind of similar research or that are engaging in those spaces or that are interested in those spaces, right?
So creating the space for us to have those kinds of New York-centric conversations in another geographic location is absolutely fundamental. And that is one way in which we can do that, from, let’s say, an IGF-OEWG kind of cross-pollination, but it could obviously apply to many, many other processes. Another example is that over at RUSI, we have launched the Global Partnership for Responsible Cyber Behavior, which is a platform for researchers from all different regions to come together and reflect on what responsible cyber behavior means from their, let’s say, perspective, from their geographic location. And we organized a discussion over in Singapore during Singapore International Cyber Week with researchers and government representatives, not just from Southeast Asian countries but from other regions as well that were attending SICW, to discuss norms of responsible state behavior and the practical behaviors that they see from their regional perspective. So we had, for example, small island countries saying that climate-related concerns and critical infrastructure protection are much more relevant for them, or that responsible state behavior actually means ensuring that the climate discussion is connected to the critical infrastructure discussion at the First Committee, right? Which is a very different interpretation, you could argue, but once you go to the regional level and do that cross-pollination, that can be quite useful.
So, I mean, that is another example of a best practice of how we take those very specific bubbles and, let’s say, expand those dialogues, and how we as non-governmental actors can do that, but also as governments. One other example that I would give here, from my other, let’s say, side: I’m part of the National Cybersecurity Committee in Brazil, which has been established as an outcome of the National Cybersecurity Policy. That committee is mostly government representatives, different parts of the public administration, but you do have three civil society representatives, three industry representatives, and three representatives from the technical community. And one of the things we’ve been discussing is how to make sure that the Ministry of Foreign Affairs can better coordinate, do more of the interagency coordination, to then take those inputs to the international fora, right? For some countries, you don’t need to have a formal mechanism, and for Brazil it hasn’t been a formal mechanism, but just having the conversation about how to do that better and how to calibrate, because the ministry has developed its own cyber division, right? How do you foster or facilitate that liaison role that the MFA crucially plays in collating those different views, even if you can’t have a more diverse delegation going with you to, let’s say, New York or Geneva? So my question, perhaps, back to the diplomats in the room is: are there best practices for the MFA to be better equipped, or best practices to think about how to facilitate that interagency coordination, or, even if you have a shortage of resources, how you can feed back into those places in a better setting? Do we need MFAs to be better equipped for this scenario where there’s really a spread of different cyber-related processes? Anyways, I’ll stop there.
Because as you’ve seen, I speak way too much, but hopefully, I’d like to hear from you all.

James Shires: Thank you very much, Louise. And just to add a short footnote to what Louise and Yasmin have said. I think Louise started by saying there are two reasons, right, for this proliferation of initiatives. One is specialization, and the other one is politicization, right? And Yasmin pointed out that one of the main challenges is giving participants, whether multi-stakeholder or states, a sense of ownership in each initiative, right? So you want to engage early, engage transparently, and set out a clear roadmap for how things will go ahead and when and how people should intervene. Now, the danger there, right, is if you engage very transparently and openly early on with people who do not share the objectives of the initiative, who maybe even would like to sabotage it, delay it, or see it not exist, right, then that gives them an opportunity to do so very easily. Because you say, okay, we will not move ahead until we have got full ownership from everyone in the room. And so someone says, oh, I don’t want this to happen, I’m not going to agree, and it doesn’t move ahead. So the best practice that I think of here is: be as open as possible, so do engage really transparently early on, but with clear deadlines and with clear suggestions for who will take forward action after those deadlines. So you have things like the Pall Mall consultation period. That is good because it invites broad interventions, it has a deadline, and it’s clear who will go on afterwards. So it’s very hard for those that don’t want to see it go ahead, or that want to push for a delaying tactic, to say there was no opportunity to be involved. But still, the real nub of the issue is: how do you identify those actors that will take it forward? Is it going to be the same people? Ideally not. But in the Pall Mall Process, in the absence of anyone else, they close the consultation and it’s the UK and France who then take it forward.
So you’re back to the original problem of inclusion. Ideally, you would have a different set of actors, but who will be nominated and ready and funded with the resources to take it forward? So that is just to highlight that politicization happens in those processes as well. We do have a question online. So I wonder if we could enable unmuting, and, Akriti, please do ask your question. Hi, thank you so much. I guess it’s more of a comment on Louise’s point than a question. And I’d love to be able to turn my video on if that’s possible, by the way; it can’t be. I think just to the point about politicization and what Louise was saying about the MFA.

Audience: So I’m Akriti, I work at Global Partners Digital, and before that I was working in India with the foreign ministry during the G20. Something that we noticed, especially when we were organising a conference on crime and security in the age of NFTs and AI, which was part of the G20 presidency, so it was technically under the aegis of the foreign ministry. But there are so many departments: for us, internal security is handled by a ministry called the Ministry of Home Affairs, then we have a Ministry of Information and Technology, and then we have the Ministry of Foreign Affairs. I was coordinating technically on the side of the MFA, but coming from tech policy I was kind of more aware than, say, most foreign policy people were about some of those discussions. And I think what I noticed was that a lot of the time, the positions that came up were something that at least my MFA was just checking to make sure that they didn’t go against, sorry, what we had said at the UN. It wasn’t so much that they were making the policy as that they were just checking that it wasn’t contradicting our international position on something else. So I just wondered, even if we did have best practices from the MFA, how much constructive input that would allow. Our challenge as civil society is to make sure that we have connections, or sort of a community, or interactions with the internal political machinery, which is the different ministries, because a lot of times you can come in at the very last stage, but we’re not really involved at the point where the policy is being discussed, so much as where it’s being vetted. And to me that’s a very clear delineation in how you participate.
So there’s, of course, the things that the MFA can do, but I just wonder how much of the onus of involving civil society should then fall on, say, the MFA, versus permeating that culture within the internal politics, where you invite opinion from civil society and different departments. And I guess there is going to be one ministry that needs to lead that, and in internet governance, which I guess a lot of the time happens in international discussions, whether that’s an MFA prerogative or whether it’s someone else. But I think that’s a huge challenge for us as civil society from a national point of view: how is our national engagement so strong that when we say something internationally, it really comes from kind of the local perspective, or that we’re heard at the very first layer that we can be heard at? Thank you.

James Shires: Akriti, thank you so much. And yeah, that again highlights the importance of coordination at both layers, right? You cannot have an inclusive and effective international layer without first working hard and solving problems at the national layer. We do have a representative from a ministry of foreign affairs on our panel, so I was wondering, Corinne, if you would say a little about your perspective on the role of a ministry of foreign affairs in these issues. How technical should it be? How can it rely on other technical communities and draw on those different parts of government?

Corinne Casha: First of all, I wanted to make a point on the Pall Mall Process. We received a formal invitation from the French and UK governments to participate. We thought about it, and we felt it was a very important process, particularly on the point of promoting inclusivity and of getting industry and other stakeholders on board. So we did participate; this was the first time for us. I was also aware of the different groups that participated: we had one representative from industry and one representative from a line ministry, and I was happy to see that they were included in the consultation process. For us, this is something we would like to encourage other states to sign up to, because it is very important not only in terms of including the groups that are not always included in decision-making, but also as a way of promoting coordination between different states. So that was the point on the Pall Mall Process. On the points raised, I very much share the thoughts of the Austrian colleague, also from a ministry of foreign affairs. There are a lot of different processes, and with each one you have to see which representatives are going to attend. Sometimes it is very difficult from our side as well. Particularly difficult, I think, was the cybercrime convention, because we could not have our delegates, our criminal lawyers, participate directly in New York; it was very difficult to get them to attend the negotiating sessions. So it was very challenging, because we had to rely on our delegates in New York and coordinate back with the line ministry.
That was very challenging, because another thing is that with the proliferation of different processes, even the line ministries themselves are having to keep pace with everything taking place, and sometimes you don’t have the necessary specialists. With cybercrime, for example, there are certain technicalities where we don’t have the necessary expertise. So it is very hard to keep pace with all these different processes and make sure you have all the specialists on board, and I would say the cybercrime convention was the most difficult for us, because we didn’t have specialists who could go back and forth between New York and Malta to negotiate. I was also very struck by what Louise said about the transfer of knowledge from New York to other centers of discussion. The fact that I am participating in the IGF for the first time is also a decision based on that: I come from the foreign ministry, but at the same time I am participating in a forum where it is not just the foreign ministry but also the technical community, civil society and others. I think it is important to share knowledge in that respect. There was another point, which I believe Louise raised, about the need for a harmonization process, also with respect to the Cyber Security Committee she mentioned in Brazil. We also have a Cyber Security Committee in Malta, and I am a representative on that committee. Again, it brings together all the players: all the line ministries and the representatives of industry.
But the foreign ministry really has a coordinating role in that, making sure that what the representatives or delegates say, as the online participant also mentioned, doesn’t run counter to what we say in New York or in other arenas. I think the establishment of this Cyber Security Committee was not only timely but very important. It has helped a lot, first of all, to bring to the table issues that maybe not all participants or delegates are aware of. I am thinking in particular of the application of international law in cyberspace. That was an area being discussed at the EU level, but not every delegate, from the police force to the critical infrastructure department, was aware of those discussions; they were aware in a general sense, but not in detail. It was very important that this was raised at the committee. So the committee has a very important role to play, because it brings together the different groups and enables information on certain issues to be exchanged. Without the committee, certain issues would maybe fall through the cracks, and line ministries or other entities would come to know about them much later. So it is a very important framework as well. For us it is a formal establishment under the Office of the Prime Minister, so there is a prime-ministerial lead over it, which gives it a certain influence and weight to take decisions.
But I believe this is also one aspect where harmonization comes into play and where we avoid the fragmentation of cyber issues, because at the end of the day there are so many processes and so many different ministries tackling different aspects of cyber. The committee brings them together, and for us nationally it has helped a lot. Thank you.

James Shires: Thank you very much, Corinne. And that’s a fantastic insight into how these national structures of coordination work as well. Akriti, I can see you’ve got your hand up to respond again, so come in, and then we’ll open the floor.

Audience: Yeah, thanks. Just to point out, because she mentioned the issue of the cybercrime convention: from our point of view, we had a law in India which was struck down by the Supreme Court, the highest court in the land, because its definition of offensive speech was extremely vague and its overreach was such that it was struck down. Eventually, India tried to bring the exact same prohibition, literally verbatim, to the cybercrime convention, trying to circumvent our national jurisdiction, so that once it became international law and was ratified it would come back into our national legislation. It was honestly a little bit shocking to us as civil society that that route was being used to try to legitimize the chilling effect on freedom of speech again. Another point is that because this was happening at the cybercrime convention, and in India the capacity to follow international conversations on internet governance is much more limited than for national ones,
the traction that we got on it was very little. If you weren’t specifically tracking the cybercrime convention, which was maybe one organization in India, if any, it didn’t get as much traction as when the national debate was happening. That was quite alarming for us: they were moving from the national forum to the international forum, and because harmonization works the way it does, they thought they could get away with it. It eventually didn’t pass, of course, so it’s not a reality today, but it was alarming to see that those kinds of actions also happen. The less harmonization we see, and the less civil society input or attention there is on internet governance spaces, the more what people consider an elitist internet governance level can really come back into our national legislation. So just to respond to that, thank you.

James Shires: Thank you. And yeah, that example of states trying to circumvent civil society or popular resistance to certain legislation by going through the international level is really fascinating. Is there anyone else in the room who would like to come in with their perspective on the paradox of inclusion? I see no hands. Would anyone online like to come in? Please do raise your hand or put something in the chat. If not, then I will offer the floor to Yasmin and Louise for their interventions. Ah, there is someone already, so Sasha, please do come in. And can we give Sasha some video as well? If you’d like to have video; if not, leave it off.

Audience: Hi, good day. Can you all hear me? All right. I can’t get the video, so that’s fine. So my name is Natasha Nagle, and I’m here from the University of Prince Edward Island. My perspective is a digital inclusion perspective, and my question concerns the way in which we fragment the internet, generally speaking, and the way in which we identify the subtext within the presentation of information. When it comes to internet governance, how do we consider the identities that are being put forward? When you look at the lateral transfer from physical inclusion to digital inclusion, what structures are in place to ensure that minority identities are presented in such a fashion that they are represented on the world stage? Just general comments on that particular space, because when it comes to intersectionality and internet identity, it’s important that we don’t lose sight of those minority identities within the digital space. Thank you.

James Shires: Thank you very much, Sasha. In which case, if there are no other comments in person, I will offer the floor to Yasmin and Louise to give some concluding remarks, but also specifically to address that question of ensuring the digital inclusion of minoritized identities and communities. Louise, would you like to go first?

Hurel Louise Marie: Sure, happy to do so, and thank you for that question, Sasha. And sorry if my voice is a bit robotic at the moment; I’m just recovering from a cold. From where I’m sitting, and from the example that I gave, I think there is a reflection to be made as to how we best include or recognize minorities in the context of these very high-level processes, right? I gave a very brief example of how some member states have been trying to do that, and I know that doesn’t respond very specifically to the physical-offline versus online kind of representation. But in that process specifically, the OEWG, you do have member states facilitating and supporting the Women in Cyber Fellowship, which I think creates a precedent not only for having more gender balance there, but for effectively having women and folks from different standpoints actually negotiating an official text. And I wouldn’t underestimate that, because there are different ways of approaching the negotiation of a text, and obviously the subjectivity of your background, where you’re coming from, be it geographically or in terms of your gender, really plays into the way you navigate the world, and that’s no different when you enter a UN windowless room, right? I think that’s actually very important. Seeing that specific fellowship taking place since at least 2019, with the start of the OEWG, you do see the same women attending these spaces, and I think that’s good, because it maintains a memory of having effective representatives there.
So hopefully that responds a bit to your question. My concluding remarks are almost a summary, because, as you know, James, I tried to cluster things and structure my thoughts. Going back to the notion of paradox, reflecting on it now, I think we have three paradoxes, if that’s the word. The first is the paradox of meaningful leadership. We talked about lots of different processes, and there is a tension here: there is value in spearheading certain initiatives, setting them into motion and structuring them. But there is a very important point, and I think that’s something you raised quite nicely, James: is there a moment for us to delegate some of that leadership? And if that leadership is delegated, how should that happen? It is also about calibrating political risk, because that’s what member states are usually doing: I don’t want to lose control over this process, but I want to indicate that it’s actually inclusive. Is there something about calibrating between spearheading, setting it in motion, and delegating? What does delegation look like? The Counter Ransomware Initiative has different working groups led by different countries, and I think also non-state actors sharing that role, but I might be wrong. The second paradox, after meaningful leadership, is meaningful coordination. Here, I think what we saw is calibrating between interagency mechanisms, developing those where relevant at the national level, be it a committee structure. Similarly to Malta, in Brazil it sits under the office of the president, so that provides some political capital domestically. But how do you also ensure that you are fielding a mixed delegation once you go external, right?
So how do you calibrate between interagency coordination and international projection? The third and final paradox is meaningful dialogue. This is a provocation, really: it’s very nice and easy to say we’re open to dialogue, to meaningful dialogue, we’re going to bring these stakeholders in, we’re going to do consultations, we’ll have a very nice timeline, and it will all look very structured. But are we actually open to productive disagreement? Even with member states funding participation, and there were lots of examples of funding for developing countries or underrepresented communities to attend the ad hoc committee, is there, in the end, an openness to a productively uncomfortable dialogue? That’s the word. And an openness to expert input from communities that we don’t know, or that we haven’t figured out yet? One example is the Global Conference on Cyber Capacity Building: there is this whole cyber capacity building community within the cyber world, but there is also a development community, so how do you bridge those? I’m not saying the GC3B is the example, but it is one case where you see this attempt to articulate that conversation, and it’s still gaining its own traction, right? I’m sure Yasmin will have some additional thoughts on that. So these are the three points I wanted to add. But thank you very much, all, for your contributions. This was great, especially from 6am to 8am here in London.

James Shires: Thank you, Louise, and impressively awake. I was really struck by this idea of calibrating political risk and maintaining control, right? Because at the end of the day, to have a more inclusive process and really spread ownership, there has to be some kind of letting go. States who are currently in charge of a process have to relinquish some control and be ready for the process to go in different directions. And that is a really uncomfortable place to be, especially if your whole mandate as a ministry of foreign affairs or other official is to steer and maintain control in that way. Yasmin, minoritized identities and communities, and then concluding remarks.

Yasmine Idrissi Azzouzi: Thank you. Sounds good. Thank you very much, James. I would definitely like to echo what has been said so far and really keep the focus on fostering ownership at the national level. The point on the delegation of leadership is definitely a challenging one, but through national processes I have seen lead agencies relinquish that lead role a little, to a certain extent, and see the usefulness in doing so. That, of course, doesn’t stop us from keeping the balance, and I think both approaches are necessary. One is definitely a focus on interdisciplinary teams that are equipped to engage meaningfully in different fora. In a perfect world, we would have something similar to what the Austrian representative here mentioned: multidisciplinary teams that bring different experts together at the international level as well. And to ensure continuity with these multidisciplinary teams, keeping the lead agency as the core is also necessary, so that the lead agency can keep tabs on the different processes and give an overview of the things that may be lacking at times, while in parallel delegating some of that power or leadership when it comes to specific, topic-focused processes. So apart from the national level, which I keep coming back to because I think it is really the key here, inclusivity at the international level is of course needed as well. Akriti’s very relevant example showed the need to go beyond multilateral or state-focused processes and keep civil society included at the international level too, but I think a lot of it actually needs to happen at the national level. Thank you.

James Shires: Thank you very much, Yasmin. And finally, I would turn to our third panellist, Corinne Casha, for your concluding remarks. We have had our five-minute warning, so we will wrap up after these remarks. Thank you, everyone, for your participation. Corinne, over to you.

Corinne Casha: Thanks, James. Well, not much to add; I think the two panellists before me wrapped everything up nicely. We’ve discussed a lot today, and I’ll definitely take home a few of the remarks that participants made, especially what the Austrian colleague mentioned about interdisciplinary teams, what Louise mentioned about the transfer of knowledge, and what Yasmin mentioned about the consultation process, which was one of the things that struck me most about best practices. So I think we have quite a checklist of things we have gathered here today, and they were all very valid remarks. I am also pleased to be here, not only because I shared some of my experiences, but because I took home a lot of points to consider. Maybe we can come up with a report from this session and circulate it to participants as well. We have all spoken very much about the need to reduce fragmentation, about the need for inclusivity, and, as you said, about calibrating political risk and relinquishing control. I personally think that what we discussed here today would be very relevant to take forward, and perhaps we can have another session to follow up on this. From my perspective, it ends there. We’ve discussed a lot today, and I’m very happy to have participated and to have listened to everybody’s take here. So thank you very much.

James Shires: Thank you, Corinne. And as a quick reminder before we close, please do check out RUSI’s Global Partnership for Responsible Cyber Behaviour, which is online; Louise is running that. And of course, do visit Virtual Routes, where we will be doing more activities in this space. We will be engaging more, so we would love to continue this conversation in future. Have a great last day of the IGF, and thank you, everyone. Thank you.

James Shires


Proliferation of initiatives creates barriers to meaningful participation

Explanation

The increasing number of internet governance initiatives makes it difficult for stakeholders to effectively engage in all of them. This creates a paradox where efforts to increase inclusion actually lead to exclusion due to resource constraints.

Evidence

Examples of various initiatives mentioned: UN OEWG on cybersecurity, Global Digital Compact, Cybercrime Convention, Paris Call

Major Discussion Point

The Paradox of Inclusion in Internet Governance

Agreed with

Hurel Louise Marie

Yasmine Idrissi Azzouzi

Corinne Casha

Agreed on

Proliferation of internet governance initiatives creates challenges for meaningful participation

Hurel Louise Marie

Speech speed

154 words per minute

Speech length

4192 words

Speech time

1629 seconds

Specialization of debates leads to fragmentation of discussions

Explanation

As internet governance discussions become more specialized, they split into separate forums and processes. This fragmentation makes it challenging to maintain a holistic view and coordinate across different areas.

Evidence

Examples of specialized initiatives: Counter Ransomware Initiative, Pall Mall Process, OEWG, GFCE, Tech Accord

Major Discussion Point

The Paradox of Inclusion in Internet Governance

Agreed with

James Shires

Yasmine Idrissi Azzouzi

Corinne Casha

Agreed on

Proliferation of internet governance initiatives creates challenges for meaningful participation

Differed with

Yasmine Idrissi Azzouzi

Differed on

Role of specialization in internet governance debates

Inclusion efforts can be weaponized for political purposes

Explanation

Some states use the creation of new inclusive processes as a political strategy to advance their interests. This can lead to the proliferation of initiatives that may not genuinely promote inclusivity.

Evidence

Example of Russia pushing for a legally binding instrument on cybercrime through the ad hoc committee

Major Discussion Point

The Paradox of Inclusion in Internet Governance

Structural inequalities persist despite efforts at inclusion

Explanation

Even when processes like the OEWG aim to be more inclusive by involving all UN member states, structural barriers still limit effective participation. Smaller states often lack the resources to engage meaningfully in all discussions.

Evidence

Example of small UN missions with limited staff covering multiple topics

Major Discussion Point

The Paradox of Inclusion in Internet Governance

Importance of knowledge transfer between different forums

Explanation

Facilitating knowledge transfer between various internet governance forums is crucial for maintaining coherence and continuity. This involves creating opportunities for participants to share experiences and insights across different processes.

Evidence

Example of organizing cyber policy dialogues at the IGF to discuss New York-centric conversations in a different geographic location

Major Discussion Point

Challenges of Coordination Across Different Forums

Fostering dialogue that allows for productive disagreement

Explanation

True inclusivity in internet governance processes requires openness to productive disagreement and uncomfortable dialogues. This involves going beyond superficial consultations and being willing to engage with diverse and potentially challenging perspectives.

Major Discussion Point

Strategies for Improving Inclusion and Representation

Yasmine Idrissi Azzouzi


National-level coordination is crucial for effective international participation

Explanation

Effective participation in international internet governance forums requires strong coordination at the national level. This involves bringing together various stakeholders and agencies to develop coherent positions and strategies.

Evidence

Example of ITU supporting developing countries in creating national cybersecurity strategies through inclusive consultation workshops

Major Discussion Point

Challenges of Coordination Across Different Forums

Agreed with

Corinne Casha

Hurel Louise Marie

Agreed on

Importance of national-level coordination for effective international participation

Creating ownership through multi-stakeholder consultations

Explanation

Engaging various stakeholders in consultations during the development of national strategies or policies can create a sense of ownership. This approach leads to better coordination and more effective implementation of internet governance initiatives.

Evidence

Example of ITU’s methodology for supporting the development of national cybersecurity strategies through inclusive consultation workshops

Major Discussion Point

Strategies for Improving Inclusion and Representation

Need for interdisciplinary teams to engage in various processes

Explanation

Effective participation in internet governance requires interdisciplinary teams that can engage meaningfully across different forums. These teams should combine expertise in technical, diplomatic, and policy-making areas to address the complex nature of digital issues.

Major Discussion Point

Challenges of Coordination Across Different Forums

Differed with

Hurel Louise Marie

Differed on

Role of specialization in internet governance debates

Corinne Casha

Speech speed

136 words per minute

Speech length

1619 words

Speech time

709 seconds

Relinquishing some control is necessary for true inclusion

Explanation

To achieve genuine inclusivity in internet governance processes, states and organizations leading initiatives must be willing to give up some control. This involves being open to different perspectives and allowing for outcomes that may diverge from initial expectations.

Major Discussion Point

The Paradox of Inclusion in Internet Governance

Difficulty in maintaining consistent representation across multiple forums

Explanation

The proliferation of internet governance forums makes it challenging for states to maintain consistent and expert representation across all processes. This is particularly difficult for smaller states with limited resources.

Evidence

Example of challenges in participating in the cybercrime convention negotiations due to lack of specialized expertise

Major Discussion Point

Challenges of Coordination Across Different Forums

Agreed with

James Shires

Hurel Louise Marie

Yasmine Idrissi Azzouzi

Agreed on

Proliferation of internet governance initiatives creates challenges for meaningful participation

Establishing national cybersecurity committees for better coordination

Explanation

Creating national-level cybersecurity committees can improve coordination among different government agencies and stakeholders. These committees can help ensure coherent positions across various international forums and facilitate knowledge sharing.

Evidence

Example of Malta’s Cyber Security Committee bringing together various ministries and industry representatives

Major Discussion Point

Strategies for Improving Inclusion and Representation

Agreed with

Yasmine Idrissi Azzouzi

Hurel Louise Marie

Agreed on

Importance of national-level coordination for effective international participation

Funding initiatives to support participation from developing countries

Explanation

Providing financial support for representatives from developing countries to attend international forums is crucial for improving inclusion. This helps ensure a more diverse range of perspectives in internet governance discussions.

Evidence

Mention of Austria funding developing country diplomats to participate in the cybercrime process

Major Discussion Point

Strategies for Improving Inclusion and Representation

Role of foreign ministries in coordinating national positions

Explanation

Foreign ministries play a crucial role in coordinating national positions across various internet governance forums. They need to ensure consistency in positions taken at different international venues while also facilitating input from various domestic stakeholders.

Evidence

Example of Malta’s foreign ministry coordinating with the national Cyber Security Committee to ensure coherent positions

Major Discussion Point

Challenges of Coordination Across Different Forums

Agreed with

Yasmine Idrissi Azzouzi

Hurel Louise Marie

Agreed on

Importance of national-level coordination for effective international participation

Audience

Speech speed

172 words per minute

Speech length

1562 words

Speech time

542 seconds

Lack of communication between Geneva and New York-based processes

Explanation

There is insufficient coordination between internet governance processes taking place in Geneva and New York. This leads to duplication of efforts and makes it difficult for stakeholders to engage effectively across all relevant forums.

Major Discussion Point

Challenges of Coordination Across Different Forums

Ensuring representation of minoritized identities in digital spaces

Explanation

It is important to consider how minoritized identities are represented in internet governance processes and outcomes. This includes addressing the transition from physical to digital inclusion and ensuring diverse perspectives are included.

Major Discussion Point

Strategies for Improving Inclusion and Representation

Agreements

Agreement Points

Proliferation of internet governance initiatives creates challenges for meaningful participation

James Shires

Hurel Louise Marie

Yasmine Idrissi Azzouzi

Corinne Casha

Proliferation of initiatives creates barriers to meaningful participation

Specialization of debates leads to fragmentation of discussions

Difficulty in maintaining consistent representation across multiple forums

All speakers agreed that the increasing number and specialization of internet governance initiatives make it difficult for stakeholders, especially those with limited resources, to participate effectively across all forums.

Importance of national-level coordination for effective international participation

Yasmine Idrissi Azzouzi

Corinne Casha

Hurel Louise Marie

National-level coordination is crucial for effective international participation

Establishing national cybersecurity committees for better coordination

Role of foreign ministries in coordinating national positions

Speakers emphasized the need for strong national-level coordination mechanisms, such as cybersecurity committees, to ensure coherent positions and effective participation in international forums.

Similar Viewpoints

Both speakers highlighted the tension between political control and genuine inclusion, suggesting that true inclusivity requires a willingness to relinquish some control and engage with diverse perspectives.

Louise Marie Hurel

Corinne Casha

Inclusion efforts can be weaponized for political purposes

Relinquishing some control is necessary for true inclusion

Both speakers emphasized the importance of meaningful engagement with diverse stakeholders, including being open to disagreement and challenging perspectives, to create genuine ownership and inclusivity in internet governance processes.

Yasmine Idrissi Azzouzi

Louise Marie Hurel

Creating ownership through multi-stakeholder consultations

Fostering dialogue that allows for productive disagreement

Unexpected Consensus

Importance of interdisciplinary approaches in internet governance

Yasmine Idrissi Azzouzi

Corinne Casha

Louise Marie Hurel

Need for interdisciplinary teams to engage in various processes

Difficulty in maintaining consistent representation across multiple forums

Importance of knowledge transfer between different forums

There was unexpected consensus on the need for interdisciplinary approaches to internet governance, with speakers from different backgrounds agreeing on the importance of combining technical, diplomatic, and policy expertise to address complex digital issues effectively.

Overall Assessment

Summary

The main areas of agreement included the challenges posed by the proliferation of internet governance initiatives, the importance of national-level coordination, the need for inclusive and diverse participation, and the value of interdisciplinary approaches.

Consensus level

There was a high level of consensus among the speakers on the key challenges and potential solutions for improving inclusion in internet governance. This consensus suggests a shared understanding of the complexities involved and a common desire to address the paradox of inclusion. The implications of this consensus are that future efforts in internet governance may focus on developing more coordinated and interdisciplinary approaches, both at national and international levels, to ensure more effective and inclusive participation across various forums.

Differences

Different Viewpoints

Role of specialization in internet governance debates

Louise Marie Hurel

Yasmine Idrissi Azzouzi

Specialization of debates leads to fragmentation of discussions

Need for interdisciplinary teams to engage in various processes

Louise Marie argues that specialization leads to fragmentation, while Yasmine emphasizes the need for interdisciplinary teams to address this fragmentation.

Overall Assessment

Summary

The main areas of disagreement revolve around the approach to specialization vs. interdisciplinary engagement, and the specific mechanisms for improving national and international coordination.

Difference level

The level of disagreement among the speakers is relatively low, with more emphasis on complementary perspectives rather than outright contradictions. This suggests a general consensus on the challenges of inclusion in internet governance, with differences primarily in the proposed solutions and areas of focus.

Partial Agreements

Partial Agreements

Both speakers agree on the challenges of inclusion, but Louise Marie focuses on structural inequalities, while Corinne emphasizes the practical difficulties of representation.

Louise Marie Hurel

Corinne Casha

Structural inequalities persist despite efforts at inclusion

Difficulty in maintaining consistent representation across multiple forums

Both speakers agree on the importance of national-level coordination, but propose different mechanisms to achieve it.

Yasmine Idrissi Azzouzi

Corinne Casha

National-level coordination is crucial for effective international participation

Establishing national cybersecurity committees for better coordination

Takeaways

Key Takeaways

The proliferation of internet governance initiatives creates barriers to meaningful participation, especially for actors with limited resources

There is a tension between specialization of debates and fragmentation of discussions across multiple forums

National-level coordination and capacity building are crucial for effective international participation

True inclusion requires relinquishing some control and being open to productive disagreement

Structural inequalities persist despite efforts at inclusion in internet governance processes

There is a lack of communication and coordination between different internet governance forums (e.g. Geneva vs New York-based)

Resolutions and Action Items

Consider creating a report summarizing the key points from this session to circulate to participants

Explore having a follow-up session to continue the discussion on inclusion in internet governance

Unresolved Issues

How to effectively balance specialization of debates with the need for coherent, coordinated governance

How to ensure meaningful inclusion of minoritized identities and communities in digital governance spaces

How to improve coordination between different internet governance forums and processes

How to address structural inequalities that persist despite inclusion efforts

Suggested Compromises

Creating interdisciplinary teams that can engage across multiple governance forums

Delegating some leadership/control to other stakeholders while maintaining overall coordination

Balancing state-led initiatives with meaningful multi-stakeholder consultation and civil society inclusion

Fostering national-level coordination mechanisms (e.g. cybersecurity committees) to inform international engagement

Thought Provoking Comments

The idea here is that we see a proliferation of efforts to bring in different actors in internet governance, whether these are multi-stakeholder forums, whether these are efforts to include developing countries and smaller states or states with fewer resources, and there’s lots of different efforts to do these, through different conferences, initiatives, meetings, and so on. In fact, there’s so many of these different efforts that actually keeping up with them all, keeping track of them all, and participating meaningfully in them all, is itself a high resource burden. And that’s what we term the paradox of inclusion.

speaker

James Shires

reason

This comment introduces the central concept of the ‘paradox of inclusion’ which frames the entire discussion. It highlights how efforts to be more inclusive can paradoxically create barriers to participation.

impact

This set the stage for the entire discussion, providing a framework for analyzing various internet governance initiatives and their inclusivity challenges.

So you have those movements, such as the ad hoc committee, which has ended right now, that becomes part of that, let’s say, political strategy. Another example of proliferation being a political strategy is precisely to specialize debate because then you can control a bit more or what the scope is, and who is involved in this conversation.

speaker

Louise Marie Hurel

reason

This comment introduces the idea that the proliferation of initiatives can be a deliberate political strategy, adding complexity to the discussion of inclusivity.

impact

It shifted the conversation to consider the political motivations behind the creation of new forums, deepening the analysis beyond just logistical challenges.

So creation of ownership, I think, at national level across different expertises, so ministries and national agencies, but also critical infrastructure providers. We’ve had in the same room central banks, energy representatives, but also ministries ranging from MFA all the way to, of course, defence, interior and others, because, of course, it’s extremely interdisciplinary and a national strategy also needs to have all of those elements be taken into consideration.

speaker

Yasmine Idrissi Azzouzi

reason

This comment highlights the importance of national-level coordination and inclusivity as a foundation for effective international participation.

impact

It broadened the discussion to consider how national-level processes impact international inclusivity, leading to a more holistic analysis of the challenges.

And I was happy to see that they were included in the consultation process. And I think for us, this was something that we would like to also encourage other states to sign up to, because it’s very important to not only in terms of, as I said, including the other, let’s say, factions that are not always included in the decision-making, but also as a way of promoting, let’s say, coordination between different states.

speaker

Corinne Casha

reason

This comment provides a concrete example of how a specific initiative (the Palma Process) is attempting to address inclusivity challenges.

impact

It moved the discussion from theoretical concepts to practical examples, allowing for a more grounded analysis of potential solutions.

And just inquiry considering the way in which we, we look at the way in which we fragment internet, generally speaking, and the way in which we identify the subtext within the presentation of information. And when it comes to internet government governance, how do we consider the identities that are being put forward?

speaker

Natasha Nagle

reason

This comment introduces a new dimension to the discussion by focusing on the representation of minority identities in digital spaces.

impact

It broadened the scope of the inclusivity discussion beyond just state and organizational representation to consider individual and community identities.

Overall Assessment

These key comments shaped the discussion by progressively expanding and deepening the analysis of inclusivity in internet governance. Starting with the introduction of the ‘paradox of inclusion’, the conversation moved through considerations of political motivations, national-level coordination, practical examples of inclusive initiatives, and finally to questions of identity representation. This progression allowed for a multifaceted examination of the challenges and potential solutions to achieving meaningful inclusivity in internet governance processes.

Follow-up Questions

How can we strengthen communication between different UN processes (e.g. Geneva and New York) to avoid duplication?

speaker

Austrian Foreign Ministry representative

explanation

This is important to improve coordination and efficiency in international cybersecurity discussions.

What are best practices for Ministries of Foreign Affairs to facilitate interagency coordination on cyber issues?

speaker

Louise Marie Hurel

explanation

This could help improve national-level coordination to better inform international positions.

How can we ensure meaningful inclusion of minority identities in internet governance processes?

speaker

Natasha Nagle

explanation

This is crucial for ensuring diverse perspectives are represented in digital policy discussions.

How can we calibrate between spearheading initiatives and delegating leadership in international processes?

speaker

Louise Marie Hurel

explanation

This is important for balancing control and inclusivity in multi-stakeholder initiatives.

How can we foster openness to productive disagreement in international dialogues?

speaker

Louise Marie Hurel

explanation

This is necessary for truly inclusive and meaningful discussions on complex issues.

How can we better bridge the cyber capacity building community with the broader development community?

speaker

Louise Marie Hurel

explanation

This could lead to more holistic and effective approaches to digital development.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

YCIG & DTC: Future of Education and Work with advancing tech & internet

Session at a Glance

Summary

This session, co-hosted by the Dynamic Team Coalition and Youth Coalition on Internet Governance, focused on addressing the evolving demands on education and workforce shaped by AI, quantum computing, blockchain, and robotics. Participants discussed the challenges and opportunities presented by these technologies, particularly in the context of the Global South and marginalized communities.

Key points included the need for innovative educational strategies that incorporate AI and emerging technologies into curricula, while also addressing digital divides and ensuring ethical use. Speakers emphasized the importance of adapting education systems to prepare students for a rapidly changing job market, with a focus on critical thinking, problem-solving, and ethical considerations in technology use.

The discussion highlighted the complexities of implementing AI in education, including concerns about over-reliance on AI tools and the need for proper guidance on their use. Participants stressed the importance of inclusive design in educational technology and the need to consider diverse perspectives, including those of indigenous communities.

Several speakers addressed the need for regulations and ethical guidelines for AI use in education and the workforce. They emphasized the importance of involving multiple stakeholders, including students, educators, and policymakers, in developing these frameworks.

The session also touched on the challenges of digital exclusion in the Global South and the need for policies that ensure both technological inclusion and high-quality education. Participants called for more research into local needs and contexts to inform policy development.

In conclusion, the discussion underscored the need for collaborative action among various stakeholders to navigate the complex landscape of AI and education, with a focus on creating inclusive, ethical, and effective strategies for preparing youth for the future workforce.

Keypoints

Major discussion points:

– The need to adapt education systems and curricula to prepare students for AI and emerging technologies

– Challenges of digital divides and unequal access to technology, especially in the Global South

– Importance of teaching ethical use of AI and technology, not just technical skills

– Balancing technology use with human interaction and social skills

– Need for regulations and policies to guide ethical AI use in education and workforce

Overall purpose:

The goal of this discussion was to explore how education systems and workforce development need to evolve to address the rapid advancement of AI and other emerging technologies, while considering challenges like digital divides and ethical concerns.

Tone:

The tone was largely collaborative and solution-oriented, with participants building on each other’s ideas. There was a sense of urgency about the need to adapt education systems quickly. The tone became more nuanced as the discussion progressed, with increased focus on challenges faced by marginalized groups and the Global South.

Speakers

– Marko Paloski: Moderator

– Ananda Gautam: Coordinator for Asia-Pacific region from the Youth Coalition on Internet Governance

– Mohammed Kamran: Advocate from Pakistan, co-coordinator

– Marcela Canto: Representative of the Youth Brazilian Delegation

– Ethan Chung: Youth Ambassador of the Mampa Foundation from Hong Kong

– Umut Pajaro: Teacher at university and high school

– Denise Leal: From the Youth Coalition, Latin American Caribbean region

Additional speakers:

– Nirvana Lima: Facilitator and participant of Youth Programme Brazil

– Sasha: Student at University of Prince Edward Island

Full session report

AI, Education, and the Future Workforce

Introduction

This session, co-hosted by the Dynamic Team Coalition and Youth Coalition on Internet Governance, addressed the evolving demands on education and workforce shaped by emerging technologies, particularly AI and robotics. The discussion brought together speakers from diverse backgrounds to explore challenges and opportunities presented by these technologies, with a focus on the Global South and marginalized communities.

Key Discussion Points

1. Adapting Education Systems to AI and Emerging Technologies

Speakers agreed on the urgent need to adapt education systems for a rapidly changing technological landscape. Umut Pajaro emphasized interdisciplinary and project-based learning approaches, as well as teaching critical thinking and problem-solving skills. The discussion highlighted innovative educational strategies incorporating AI into curricula while balancing technology use with human interaction.

2. Digital Divides and Unequal Access

Ananda Gautam raised concerns about the digital divide, particularly in the Global South. Marcela Canto provided a perspective on technological colonialism, stating, “We are now experiencing a new configuration of colonialism. While the global north has large technology companies that employ CEOs and software engineers, we in the south are left with the worst jobs.” Gautam also noted the progression from digital divide to AI divide and highlighted the importance of community networks in providing internet access to underserved regions.

3. Ethical Use of AI and Technology in Education

Mohammed Kamran and Ethan Chung emphasized the need for digital literacy and proper use of AI tools. Kamran advocated for government regulations, while Chung focused on the lack of ethical guidelines for AI use in education. Umut Pajaro suggested involving students in creating rules for AI use, highlighting the need for diverse perspectives in policy-making.

4. Language Accessibility and Cultural Sensitivity

Denise Leal discussed the importance of language accessibility in digital education, referencing the UFLAC IGF session. Umut Pajaro raised points about addressing language barriers for indigenous populations, emphasizing cultural sensitivity and self-determination in technology adoption.

5. Challenges in AI-Assisted Learning

Mohammed Kamran discussed the challenges of detecting AI-generated homework and the need for more experience-based assignments. Ethan Chung elaborated on the proper use of AI in education and assessments, suggesting a balance between AI-assisted research and original thought.

6. Technology Overuse and Digital Detox

Ananda Gautam raised concerns about the overuse of technology and the need for digital detox, emphasizing the importance of maintaining a balance between digital and non-digital experiences in education.

7. Future of Work and AI Impact

Marko Paloski highlighted the risk of job losses due to automation, underscoring the importance of preparing students for a rapidly changing job market with a focus on critical thinking and problem-solving skills.

Proposed Solutions and Action Items

1. Include more debates about education and involve educators in future IGF discussions

2. Create standards for appropriate use of AI and technology in education

3. Implement project-based assessments that require original thought alongside AI-assisted research

4. Develop guidelines for AI use in education alongside formal regulations

5. Balance technology integration with preservation of traditional teaching methods and social interaction

6. Connect different UN spaces and regulations related to technology and internet governance

7. Promote community networks to improve internet access in underserved regions

Conclusion

The discussion emphasized the need for collaborative action among stakeholders to navigate the complex landscape of AI and education. It highlighted the importance of creating inclusive, ethical, and effective strategies for preparing youth for the future workforce while addressing challenges such as digital divides, ethical concerns, and cultural sensitivities. The session stressed a holistic approach considering technical skills, critical thinking, and the preservation of diverse cultural perspectives in the evolving digital landscape.

The Youth Coalition election registration was briefly mentioned at the end of the session, encouraging youth participation in internet governance.

Session Transcript

Marko Paloski: So, let me, hello everyone and welcome to our session, the, just a second, to our session I think that also online participants can hear us, yes, so this session is co-hosted by the Dynamic Team Coalition and the Youth Coalition on Internet Governance with a will to address the evolving demands on the workflows, workforce shaped by the AI quantum computing and blockchain infrastructures and robotics. The focus will be on identifying innovative educational strategies and career paths that adapt to these, to adapt to nowadays technologies. Participants will discuss interdisciplinary learning, project-based models, micro-credentials, tech apprenticeships, lifelong learning platforms and language divide, also gender divide, aiming to align educational and professional development with the future technological landscapes. This session is a roundtable session, so we’ll guide the dialogue on educational systems and employment, also going through the digital divide and the existing gaps in the countries that have less access to digital literacy, empowerment and infrastructure. According to UNESCO, around 3.6 billion people worldwide still lack reliable internet access and in the developing countries, access to digital literacy and infrastructure is limited. By the ITU reports, only 19% of the individuals in the least developed countries use the internet, compared to 87% in the developed countries. There is no longer a clear pathway to success through education. A report by the McKinsey Global Institute suggests that by 2030, up to 800 million jobs worldwide could be lost to automation, representing one-fifth of the global workforce. As the Youth and the Dynamic Team Coalition on Internet Governance, we see it as a part of our responsibility to promote this dialogue on the perspective of education and future consideration the advancements of the technology and the internet. With this, I want to give the floor now to participants. 
First, I will go with the on-site participants, and then on the online participants. Give the floor to give introduction and why their statement is very important here in this panel, and also to give us answer on what are their personal views on the current situation of education and employment, especially on the topic of the youth. I mean, this is the panel for the youth, but we can also cover other areas. But here, this question is to the topic on the youth. I would give the first question to Ananda Gautam to introduce himself and try to answer his personal point of view.

Ananda Gautam: Thank you, Marco, for inviting me here. And I think I’m audible, right?

Marko Paloski: Yes.

Ananda Gautam: OK, cool. So my name is Ananda Gautam. I’m from Nepal. So I’m also coordinator for the Asia-Pacific region from the Youth Coalition on Internet Governance. I lead various other youth initiatives in global, regional, and national level. So my major engagement is capacity building of young people. So as Marco said, the challenges of education and young people, so we are in a very kind of situation where a lot of digitization has happened. I can reflect when I think if someone is from the 90s, like we can say, the Gen Z has been so much kind of like in the digitized world. We were not that much connected, because we grew up with the technology, I believe. In the 90s, the technology was just thriving on the public sector. It was the 70s when the technology got into its footsteps, but in the 90s it was in thriving situation and we got to experience the development of technology with the development of our livelihood. We saw the offline part of the world as well, but now if there is no internet, I think we barely use our electronic devices. But nowadays there used to be electronic devices without internet as well, and they were used for purposes, and then before that there were typewriters and other things. And coming today, now we are in the kind of like, we used to talk about digital divide and now we are in the age where we talk about AI divide. So this is kind of situation, but still there’s a very kind of situation where we need to separate global north and global south because we have existing traditional gaps as well. We have literacy gaps, we have digital gaps, and we have AI gap. You need to have first literacy and then you need to have digital access, and then you can have access to AI and other emerging technologies, but we have all the three gaps. But today’s young people have chances to thrive with all the technologies and they have literacy, they have access, they have access to AI technologies and other emerging technologies as well. 
So the point here is how do we actually segregate or how do we separate what is the maximum limit that we can use the technology, or are we leveraging technology for good, or are these technologies always useful, are there any threats while teens and young people are using the internet, along with the opportunities that it provides, we are seeing so much threats. that are now aided by AI and all kinds of things. Another thing relating to education is now we are using generative AI and we believe that some children and young people might believe that the content generated by the generative AI applications is legitimate. That is another issue because people don’t know what legitimate or what is the originality of the content, people might not know. So we need that kind of literacy now. It is not like we need to regulate or we need to ban those kind of users but we need to teach people how they can leverage this internet and other aided technologies like AI and there might be another technology sooner, they need to be taught or we need to have capacity building, digital literacy programs that enables them to properly utilize this technology. I’ll stop here for the first round and I think we’ll be definitely going for the second round. Thank you.

Marko Paloski: Thank you, Ananda. You point out very important topics. I mean, there is not just digital divide but only digital literacy. There is also other divide, gender gap, which are very important, especially in the youth, as you mentioned, we are coming or raising now with those kind of technologies every day but still it’s a good thing and also other points which we’re gonna come back later with the questions. Now I will give the floor to Mohammed Kamran to introduce himself and give some personal state on the current situation.

Mohammed Kamran: Thank you, Marco. It is not less than an honor for me. So hello guys, everyone. Mohammed Kamran from Pakistan. I’m advocate, I’m practicing in my own province right now. Also I’m with as a co-coordinator and apart from that I am with other organizations but no need to mention all of that. I agree with coming back to the point that how we see from Pakistan. Like, I’ll talk about my own country because I belong to Pakistan. So how we see all of this situation, how we see what is happening around, I totally agree with my brother, with my friend, that the shift from digital world, digital computers to the AI computers was too fast. We couldn’t see that coming. And all the teachers, students, and the parents, we are really confused how to deal with them and how to tackle the situation. Like, most of the teachers nowadays in the institutions that I know, they’re detecting the AI homeworks. Like, the students that have done homeworks with the AI, they are busy in detecting such homeworks. They do not know if it is useful if students are using AI. I’ll just give you, like, I won’t go to the explanations, which we’ll be talking in a while. Two, we lack literacy. And by literacy, I mean that students, they are trying to use AI, but they do not know the limits. And who are the ones who are teaching the limits? The adults again. So I think we have to start from the adults. And from adults, we will move to the youth because youth are the future of the country. So first, we have to have some regulations about the AI, that what are the limits to which they are not dangerous. Because if we use a lot of AI, I think that is also going to be a bit dangerous. So we’ll talk about this in detail, inshallah, in a while. That was a brief talk about this in a while. Thank you, guys.

Marko Paloski: Thank you. Thank you, Mohamed. Yes, I mean, the future is AI. So I think we’re going to more, more, and more use AI. But we’re going to see later. if there is any what we can do about in the future, yes. Now I will give the floor to Marcella Canto, which is coming from Brazil, to introduce and give some personal statement on the current situation.

Marcela Canto: Thank you so much. So I’m from Brazil. I’m here representing the Youth Brazilian Delegation. And I’m from Rio de Janeiro, but not the city of Rio. I live in the state of Rio. And my city is called São Gonçalo, which is the second most populous city of the state of Rio de Janeiro, yet it remains a peripheral city. And I think we need to take like two or three steps behind to talk about education and digital education specifically, thinking about inequalities specifically of the global south. And especially when we are talking about public policies who will make change and make difference. But I will talk about this later in my presentation, so I will pass the mic.

Marko Paloski: For the introduction and on answering the question, now I will give the floor to Ethan Chung for the last on-site participants to introduce themselves and give a short statement.

Ethan Chung: Hi, guys. I’m Ethan Chung, and I’m a Youth Ambassador of the Mampa Foundation. I’m from Hong Kong. And I’m not actually working for a job, so I can only tell you guys a perspective for education and a perspective of how we are educated. So I think in order to be related to technology, the education system is not enough because we’re doing paperwork right now, and we’re writing papers on normal writing and comprehension things on papers. But I think that we should educate something that is related to AI or maybe other technologies. And the education system should actually educate us how to use the AI and how to use these technologies much harder, not only just how to use it. Because we all know how to use it, right? ChatGPT, you just input the question and it gives you an answer. But how to use it correctly? How to use it better that it won’t be against the law? This is the key point and I think should be discussed later.

Marko Paloski: Yes. Thank you very much. Now I would go to the online participation. I would give the floor to Umut. Can we just, yeah.

Umut Pajaro: Hello everyone. Good morning from my side. Well, we see that these kind of technologies are a rapid advancement and have those like taking my surprise as someone you already say. And it’s actually revolutionizing the industry and redefining the skill that we need to try in the modern workplace and the education system. Well, this kind of transformation actually presents not only a challenge, but also opportunities for educational institution and individuals seeking to navigate this evolving evolution of the workplace. So in my personal opinion, I think that educational institution must adapt to these changes to adequately prepare the future workforce. And I hope that we explore a little more on that or during this conversation, especially in the educational strategies that we can adopt to adapt these technologies and examine the changes in career pathways or different workforce that we can have in the future.

Marko Paloski: Thank you, Umut. I will pass the floor to Denise.

Denise Leal: Hello everyone. I think you can hear me well. I am Denise Leal, a company CEO, from the Youth Coalition, and I am from the Latin American and Caribbean region. I'm here to talk with you about the future of education. When we first started to think about this session, this Dynamic Coalition session, we were in dialogue with the DTC, the Dynamic Teen Coalition, and we talked a lot about how the future is changing right now, right in front of our eyes. We have to think about work and education in a very innovative way. We can no longer think about education as traditionally as we always have. We need to adapt, because the future that is already here is asking for it. We discussed many things during the meetings leading up to this session, wanting a dialogue that considers not only the youth view, as in the 18-to-35-year-old view, but also children and teens. What is it like to be a student in a classroom studying things you won't use, when you will actually need to know about AI, about technology, about social media, and lots of other things that you are not learning in class? So here we are today, looking forward to discussions on these topics and willing to hear you all. I am super excited to talk about it. I will also bring in some aspects concerning marginalized people, the Global South, and traditional peoples, because we have to include them in the discussions, and also infrastructure, because if we are talking about education and work in the digital era, we also have to consider that there are people excluded, people who don't have proper access to the internet.
So we have a lot to talk about, and now that I have introduced myself, I will give the floor to you, Marko, so we can continue the flow.

Marko Paloski: Thank you, Denise. Yes, I want to point out one thing we discussed among ourselves in the Youth Coalition: the Youth Coalition covers ages 18 to 30 and we represent the youth, but youth is sometimes not just 18 to 30. That's why I think the Dynamic Teen Coalition is very much needed. I'm 29, for example, and when I was in primary school and high school I was already using technology, Facebook and others, but it's not the same as it is now with TikTok and much heavier usage. My nephew sometimes uses a tablet and knows it automatically, so it's a totally different viewpoint, and sometimes even we as youth cannot represent their, how can I say, issues, their challenges, or how they are experiencing the technology. So that's very important, and this Dynamic Teen Coalition should be established and get more and more involved, so their voice is also heard. Okay, I will go now to one of the questions. In the beginning, Ananda mentioned that there are further divides and gaps, and Kamran also mentioned that adults should be educated too, because they are the ones teaching the youth, and sometimes the youth may know more than the adults. And I think Umut mentioned that institutions should get more involved. So I would ask: with the current situation, we see AI and other advanced technologies like quantum computing and robotics evolving more and more, and there is a big demand for a workforce shaped by AI. Considering this, which innovative educational strategies could we use to adapt to these technologies? In the future, but maybe not only the future, maybe the present, because most of these technologies, depending on the country, are already taking over.
So what do you think is the innovative educational strategy we should go with, or how should we approach that adaptation? I don't know who would want to answer first. Okay, Kamran.

Mohammed Kamran: Yeah, as I mentioned earlier, the jump from digital machines to AI machines was too quick. I think we didn't see it coming, but it is what it is. So, since we're talking about strategies for how we can face all of this, there are three ways people respond, and it's mostly adults, because the youngsters, the youth, use AI more than adults do; it is their thing. The first is that they flee from the situation: they leave it, as if "we are not seeing it, so it's not happening." The second is that they fight against it: "you shouldn't be doing this." As I mentioned earlier, many adult teachers are busy detecting homework that has been done with AI. I think that is wrong, that is an injustice. If a kid is able to solve his homework with the help of an AI, it is just another tool that is here to help us. As our vehicles are here, as other technologies are here, I think AI is also here to help us, if we do not cross the dangerous limits. We'll talk about that as well. But that is the second response, and we shouldn't be fighting it either. So what should we do? I think we should adapt to it. Let's not flee from it, let's not fight it; let's adapt to it. Why? Because this is the future. We cannot say that AI is not happening and "I'm not going to allow it." Adults who say, "let's do it the old-fashioned way," I think that is also wrong, because it is as if we said, "let's not go to that place in our vehicle, let's not use aeroplanes, why not use horses?" It is the same to me. So yeah, using AI is not bad, but only up to a certain limit.
So I think the strategy we should adopt is adaptation to the situation, adaptation to AI, and there should also be some regulations about the limits, not only for AI but for all the other technologies mentioned earlier, of course. But AI is the boss, we know that, which is why I keep referring to AI again and again. So yeah, we should have some regulations, we should have some limits. Then you'll ask me: who's going to watch over a kid with an iPad? As Marko mentioned, our children, even babies, know how to turn on the Wi-Fi. They go to YouTube and search for the shark, Baby Shark; I think all of you know that one. So our kids do know about all these things. When we were kids, we didn't even know how to turn on a mobile phone; we'd ask our parents to please open the snake game for us, and we'd play snake. I don't know if you guys have done that. But nowadays, kids have adapted, and I think we shouldn't block their way, but we should teach them their limits. Because if they exceed their limits, their screen time increases, and that is going to be a bit dangerous, because we do not want machines in the future. We want humans. We want them to have feelings. And we'll talk about future strategies, inshallah; I have a point on that too, but all in good time. I'll get back to it with that question. For now we are talking about present strategies, and I think the biggest strategy, and to me the simplest one, in very simple words, is adaptation to the situation. We shouldn't run from it. We should adapt to it. It is like the reconstruction of a house, the beautification, let's say.
If we want to beautify this room, we might demolish one wall only to reconstruct it in a better way. So I think, instead of running from the situation, let's reconstruct it in a better way. Let's beautify it. If this is not working, let's do it another way, the modern way that is needed. So yeah, that's from my side. Thank you, Marko.

Marko Paloski: Thank you for your deep answer, I would say. And thank you for pointing out that there are many more things involved here; it's not just one thing. AI is part of everything I mentioned, robotics and every technology we are using now; as you said, AI is the main component, or the boss. I would give the floor to Umut now, if he can also share some experience on this question, and then we'll come back to someone on-site.

Umut Pajaro: Okay, yeah. Well, in my case, I actually think about this question every day, because I'm a teacher not only at university but also in high school, where I have teens from 13 to 17 using AI in my classroom. Besides thinking about how they are using it as a tool, I explain to them that many of the answers are probably not exactly correct. So I started to use the tool in a different way than everyone else. One of the things I do is put the tool to the test in the classroom, so they can know exactly what its limits are and understand the implications of using that technology through experience. That is one thing I think should be incorporated in the curriculum when it comes to emerging technologies: using the technology and learning its implications by experience. That way, students learn at a really early stage; in my case most of my students are 12 or 13 years old, so they are really young. From that age they are already starting to learn the implications and the limits of using these tools. Another thing I found in my practice as a school teacher is that beyond this hands-on experience and the real-world applications at their disposal, an interdisciplinary approach to these technologies is also really helpful, because it gives them a more realistic understanding of how these technologies can be used and how they can actually improve their daily lives. Another thing I try to do in my classroom is to foster critical thinking and problem solving: to use these different tools as a way to improve their analytical thinking and problem-solving skills, and to give them the ability to adapt to new challenges.
Another thing I try to do is not to demonize or be against these technologies in the classroom. I want them to understand that what is there can also help them increase their creativity, their design thinking, and their ability to generate solutions and new ideas. This could be the starting point for another solution to a problem they have. And one more reason I use these kinds of tools in the classroom, which I think is important besides everything I already said, is that they can play an important role in creating collaborative efforts inside and outside the classroom. If you use these different kinds of tools to solve a problem in a real-life context, you actually have to work together and collaborate with peers. That is an important ability, not only in our educational system but for life. When we start to work in some venture or entrepreneurship, we are going to work with people and collaborate. So we need to bring these technologies into the classroom while thinking about the future of the workforce. Yeah, that would be all from my side for now.

Marko Paloski: Thank you. Thank you very much, Umut, for sharing your experience from the other side, from the institution. Now I will give the floor to Marcella to add more on the Global South perspective.

Marcella Canto: To answer your question, I think it's crucial that digital skills be included in the curriculum of basic education. The changes people used to see as a "nice to have," I see as a must. There is a misconception that using digital devices will make the educational process more innovative. Some examples are using tablets in class or using AI to increase productivity, but that's a mistake. If education doesn't challenge the social structures that perpetuate inequality, discrimination, and oppression, then it's not really innovative, because it doesn't promote significant change. We are now experiencing a new configuration of colonialism. Migration and the international division of labor continue, but in a new guise. While the Global North has large technology companies that employ CEOs and software engineers, we in the South are left with the worst jobs, whether it's mining for processor production or removing objectionable content from the internet. Technology alone will not save us from our problems, because technology is not neutral; there is always a purpose and a bias. What I mean is that if the Global South maintains the colonial logic, the structure remains the same. Eurocentrism remains the same. The underemployment we are facing is the same underemployment found in all contexts of dependent capitalism. Our development will still be underdevelopment, because the colonial system still operates, where the South feeds the North in an unequal relationship, not only with raw materials but also with precarious labor. So, what I believe is that the curriculum in digital education needs to focus on training technology creators, not just average users with common digital competencies and non-technical know-how. We don't need only the competencies promoted, for example, in the UNESCO document on Global Citizenship Education.
We must have systematized, historically accumulated knowledge that enables critical thinking, because through this we will be able to criticize and transform reality. And I need to highlight the curriculum's power. The curriculum in basic education is a projection of society, and curricula produce ways of being and of interpreting the world. When a current problem in a region, or a need of a population group, is not included in the curriculum, that is also a political decision. If a technology curriculum does not address race, gender, and class, is that curriculum truly emancipatory and effective in changing the current scenario and combating online violence? Does it really manage significant change in a way that impacts the collectivity, or will just one or another individual be able to ascend economically and socially, apart from their community? Does it empower people who are not recognized as knowledge producers, whose very humanity has been denied to justify the subjugation of peoples in the colonial process, a process that ensured the exploitation of the Global South in every possible form and deepened the North-South divide? In addition, can we produce technology creators who will behave ethically if the curriculum does not proactively combat the diverse forms of discrimination? That said, I don't think it's possible for a single person, or even a single stakeholder group, to point out effective solutions to such a complex and profound issue. But I would like to use this space granted to me at the IGF to extend an invitation to those who are here, in person or online, especially our colleagues from the Global South. How can we articulate an emancipatory digital technology curriculum that truly combats the structures that propagate oppression? What are the needs of your country, your state, your city, and your community that should be addressed by educational policies? What are the latent concerns?
How do we encourage the different groups that are excluded to become technology producers? How can we organize this in a way that respects multiculturalism and regional diversity and allows for global cooperation? Thank you so much.

Marko Paloski: Thank you very much for bringing the whole perspective. I mean, it was more than an answer. No, no, it's good; you went deep and explained all the issues that are happening, and showed that it's not just one issue, they are all connected. I will now give the floor to Ananda to give his opinion on this question.

Ananda Gautam: Am I ready? OK. So, on this point, I think she has sparked a lot of thoughts, and coming late on a panel means you can build on whatever has already been shared; the challenge is that I have to bring something new. My point would be this: as we have been discussing how the landscape has changed for young people and teens, with that changing landscape we are now also talking about digital detox. Some people don't have access; some people have over-access, so much screen time, with the related health issues and the kind of social interaction we are missing in most urban parts of the world. That is one thing. The other thing is that we need to leverage these same technologies to provide equal opportunities to people in remote locations. These are the two fundamental things that need to be covered and balanced. Children today are very tech-savvy; I think Mohammed said they can find anything on a smartphone that I have never been able to. But they can be overusing it at times. Think about what we do: we get to the door of our house and send a text message, "can you open the door for me"; we have forgotten to knock or to call someone to open the door. If we need something, we just text someone in the kitchen instead of talking with our families. At the dinner table we are looking at our smartphones, and when friends meet at a restaurant, everyone is looking at social media. So that is the kind of essence we are losing on one side. And on the other, there are people who still need access to these things.
So the balance is that we need to have the right mechanisms set up. I will share a case study. We were doing a Youth IGF in Nepal and had a fellowship call for it, with about 150 applications for 15 spots. While reviewing the applications, we found out that 90% of the applicants had used AI to draft them. And the other interesting point is that the 10% who didn't use AI, it was not because they didn't want to, but because they didn't know how to use it. So sooner or later, it is not a question of whether we will use AI or generative AI; we have to use it. I do it every day, but how we use it is very important. So we need to find the ways and we need to teach them. Someone was talking about integrating it into the school syllabus; exactly, we need to integrate how people use digital technologies along with emerging technologies so that they are aware. We also need research, and I think there are not many resources yet, on the optimal time to spend with these gadgets, because we do it from morning to night. If I tell you my routine: I wake up in the morning, get to my desk, and start using the laptop, and it goes on until about midnight. That's almost my routine. We need enough research in place so we know the balance between having social interactions, spending time in real nature, and using these technologies to foster our work. Someone was asking how these things can be done in a rural context. Maybe we can use these technologies to empower education and everything else in rural contexts. But first, we need to know the nuances of how to do it; otherwise, we'll just be throwing technology at people. We can already see what happens. I don't know how popular TikTok is in other countries. In Nepal, people started making money out of TikTok, and then people started
In Nepal, people started making money out of TikTok. And then people started. throwing their clothes out you know like because that gave them money. They thought that is the good way of making money. People were like topless and then like and they found out in a while that getting a nipple will block their account. They started just blocking their nipples and getting topless in the TikTok life because they thought this is the legitimate way to make money. So throwing out technology without education will be kind of like massacre but we need to have technology in place. We need to provide access to them. Along with them empowerment is very important. We need to know what are the right set of skills they need to know before using that technology. I’ll stop here.

Marko Paloski: Thank you. Yeah, thank you very much, Ananda, for pointing out several issues here, especially the one about too much consumption and use of the internet while we still have places without even basic access to the internet. I will give the floor first to Denise and then come back to Ethan.

Denise Leal: Thank you. We are having some online participation, so before speaking I would like to mention that we have a friend from Africa saying that we need to consider the UNESCO guidelines on the ethical use of AI as a possible answer, which is pretty interesting. I was having a dialogue here in the online chat about how some countries don't have regulations or even basic guidance on the use of AI and how to monitor and supervise it. So what our online friends have been saying is that the ethical use of AI is a major point to consider in education implementation, but so is how we supervise it, and it is a challenge to comply with the principles of online safety. I wanted to bring that discussion in because our friends here are telling us it's important. And we have some questions I would like to address. A friend from Bangladesh IGF, Charmaine, has asked: how can remote work opportunities be made accessible in underserved regions? There is another question too, but I will address this one first. It is indeed a challenge and important to discuss. We do have places across Latin America and the Caribbean that don't have proper internet access. So we have some projects, some community networks, that bring the internet to these places, and they also work on empowering communities, teaching them how to use the internet and the network itself. It's really interesting. But that shows us that we don't only need to talk about this very futuristic thing of using AI and technology in schools; we also need to talk about bringing the internet to some places and empowering these people to properly use the internet, telephones, and mobile apps. Because it's very beautiful to say that we are implementing technology in our schools.
In Brazil, in some regions we have the so-called schools of the future, named that way because they implement programming classes as part of the education, and this is impactful. But we also have this discussion on how we implement education on the very basics of using the internet; we need to talk about internet literacy too. I heard some very important things about this topic at the Youth LAC IGF this year. We had a session on indigenous peoples where an organization called KIST told us that when it comes to internet education, we need to provide language-accessible internet education, so that people who come from different backgrounds, regions, and communities can also develop their technologies and create data and information on the internet. So, I will stop here and just mention some questions we have in the chat so the other speakers can answer them if they want. We have this question: how can we ensure the ethical use of AI and data in both education and workforce management? That is also from Bangladesh Women IGF. And we have some comments. Another question, from Gregory Duke: "I've heard that regulations could affect innovation negatively, and I like this new revolution. What should youth leaders consider in policy input?" And Sasha is asking: as youths, how do you envision AI being used in assessments in education? That's it, a lot of questions, a lot of discussion. Over to you. Thank you so much.

Marko Paloski: I will give the floor first to Ethan to answer the main question, and then we can jump into the questions that were asked. Maybe later we will open the floor to other questions from participants on-site and, of course, online.

Ethan Chung: Hey, okay, let me take this off. So first of all, before I start the pitch, I want you to keep in mind that AI is here not for us to rely on; AI should be assisting us, so we maximize its benefit to human society. Here's the thing: when we rely on AI, it leads to a lot of misbehavior, and that has negative effects. For example, from my own experience: last year I had an exam about coding, in JavaScript. What my friend did was open another website and use AI to generate whole working code. We both submitted, but he failed and I passed, because his AI-generated code was written in Python, not JavaScript. From this we can tell that relying on AI can have negative effects; he did not process it in his brain. So what we need to use AI for is this: we take the information provided by AI, we process it, and we make our own summary of it. For me, the education strategy I would want, so that I can adapt to these kinds of technologies, is some kind of project-based work. For example, when we're doing a project, the teacher should allow us to use AI instead of banning it, and by this I mean the teacher should guide us in how to use AI correctly. That way we learn how to actually process AI's information, not just rely on it to generate content we hand in, because that way we cannot improve. The improvement of the whole society will stop if we only rely on AI. So that's the kind of educational strategy I can offer.

Marko Paloski: Yes, thank you, Ethan. I mean, it's a good point of view on how we use AI and what we use it for: not just to rely on AI, but to use it to reach our maximum potential and free up time we can spend on something else. I will now go to the two questions that Denise mentioned from online, and after that open the floor on-site if there are questions. The first question was: how can remote work opportunities be made accessible in underserved regions? And the second one was: how can we ensure the ethical use of AI and data in both education and workforce management? I mean, we mentioned some of this already. I see Ananda has the mic.

Ananda Gautam: Yeah, it is about the rural context, so I think I can talk about it. It starts with access, of course, because without access we cannot talk about anything else. Where there are no networks, there are community networks, last-mile networks where the community manages its own network, and there are a lot of funding opportunities available for building community networks. There are also universal service funds available to take access into underserved regions. That would be a start. But having access to the internet alone is not enough. Like Denise mentioned, and I was about to mention it as well, if you take the example of Nepal, only 75% of the people are literate, and if people don't have enough literacy to access content, there can be a language barrier; in Nepal alone, more than 120 languages are spoken. So after we bring access to the internet, providing meaningful access is equally challenging and equally important. That's why TikTok thrived: you don't need much knowledge. You just record yourself in the language you know and post it, and people don't need to do anything, just swipe, and they get the knowledge, or the content they want. That is how TikTok became so popular in Nepal. So those kinds of platforms can sometimes be essential for maintaining access to meaningful content, but we need the right balance in what content is shared on those platforms. Another thing, which I think somebody asked about in the questions as well: what is the right balance against over-regulation, and how can we have ethical AI? There might be three instruments. One might be legislation, policy, or a code of conduct.
Among these three, I think codes of conduct are very important, because we know the digital world is just a mirror of the real world we live in. Umut mentioned some days back, in some way, that we have biases in society, data are made by society, and biased data lead to biased decisions by AI systems, which is obvious. It is similar with how we use technology: the unethical conduct, the misconduct we commit in society, we reflect in the digital world. So we need good codes of conduct so that we use these things decently, and the thin line between decency and indecency varies from place to place; it is very contextual and very local, but we need to identify what is decent and what is indecent. These are very basic, fundamental things. Of course we also need legislation, we need policies that promote these technologies and provide access to people in underserved areas, because there is often no profit in the rural context, but it is not always about profit. There are ways to do it; many agencies are doing it, and community networks are very good examples that have been thriving in Africa and other regions. I think we could discuss this all day, but I'll end it here. Thank you.

Marko Paloski: Thank you very much, especially for pointing out the uses of these technologies and how we can apply them. I would now give the floor to Umut, because he can also relate to these questions.

Umut Pajaro: Well, there are two questions that caught my attention. One is how we can ensure the ethical use of AI in education and the workforce. My answer to that may be quite obvious: it is to involve the stakeholders who are actually part of the education system. That implies including even students in making the rules, the key rules for using these kinds of tools in education, as clear as possible. But those rules cannot remain only recommendations or best practices or something like that. We also need governments to start regulating the use of these technologies inside educational institutions, because the two can complement each other, and somehow we can enforce, and make accountable, the ethical guidelines that we put into place through the discussions and consensus we reach as different stakeholders. In this case I include students as a stakeholder, which means including teens, children, and everyone who is part of the educational process. The other question is about innovation. My answer is that regulation doesn’t actually cut innovation. If you make good regulation based on human rights, the innovation that respects those laws is going to last longer. Putting innovation into a framework that respects human rights, and the characteristics and context of where those laws came from, actually makes both the innovation and the law last in time, because they respond to what people need.
And they respect what people need and what people want from society. So yeah, that’s pretty much what I wanted to say.

Marko Paloski: Thank you very much, Umut. I would now ask whether the audience here has some questions. I know there are a lot of questions online, but let’s first see if there are any here. We have one question. I will try to use this mic.

Nirvana Lima: Good morning, everyone. My name is Nirvana Lima. I’m a participant of the Youth Programme Brazil, I work as a facilitator, and I’m here with the delegation. First I would like to thank you for your panel; congratulations to all of you. I live in an indigenous territory in Brazil composed of 29 towns, in Ceará state. One of these towns is called Brejo Santo, and sited there is the third official indigenous school of Brazil. They have a curriculum based on their ancestral traditions, philosophies, and culture, and until now they don’t have access to the internet. I would like to ask you about the positive and negative aspects they may face when they are connected to the internet in the very near future. Thank you.

Marko Paloski: I would ask any of the panelists, also online, if they want to answer this question. Or should we take the second one as well? Okay, you can. Okay. Umut, let’s take the second question too, and then we’ll go to both. Okay.

Audience: Okay, thank you so much. My name is Mariana, and I’m from Youth Brazil as well. Thank you for the panel. I think everything you said is really important, but my question relates to the curriculum that was mentioned and some things about AI. I’ll be direct with my question: given the challenges faced by the global south, such as the digital exclusion of many young people due to unequal access to the internet and technology, as well as the biases built into technology systems, what public policies do you think can be implemented to ensure both the technological inclusion of these young people and high-quality technology education, while ethically protecting them as individuals in a labor market increasingly driven by technology skills? We need to understand that, like English today, technology skills are something you have to have to insert yourself into the market. So how do you think we can build these public policies for the future? Thank you so much.

Marko Paloski: Thank you. I would now give the floor to Umut, because he wanted to answer the first question.

Umut Pajaro: Yes, when it comes to indigenous populations wanting access to technology, one of the things I have learned in recent years is that they have to decide what they actually want to access on the internet and how they want to access it. They are going to face a lot of challenges, as far as I can see. Probably one of the main ones is the language barrier; Dennis already mentioned that in his speech. And there are a couple of cases in Latin America where indigenous people are actually using the internet to protect their languages. For example, in the north of Colombia we have an indigenous people called the Wayuu. They are trying to protect their language by creating a Wikipedia in their language. In a second stage, they are now trying not only to have Wikipedia in their language, but also to create specific capacity building on cybersecurity and other topics in that language. That is probably the next stage: not only facing the same problems as everyone else on the internet, but doing it in their own language and respecting the way they want to be involved in the internet, because that is the other aspect at stake here. Yeah.

Marko Paloski: Thank you very much, Umut, for answering the question. I would now ask the speakers who would like to take the second question, or perhaps add something on the first one.

Mohammed Kamran: Okay, so I think the second question was how we can regulate the positive use of AI and such things. I will go with what Ananda said, that it should come from the government, from the parliament. And I would like to add that it shouldn’t be a taboo anymore; let’s talk openly about what is right and what is wrong. Our parents tell us that this is right and this is wrong, but something in our heads says not to talk about these things with the youth. For example, we do not tell our young people not to go towards nudity; as was mentioned, on a lot of social media the youth are going that way in order to make more money, because they get more views and a lot of money for it. It’s just a small example; nudity is just one of the problems we are facing on social media, and there are a lot more. Sorry for straying from the topic, but these are the problems we are facing, so let’s not turn a deaf ear to them; let’s discuss them. But before that, it should come from the government. In Pakistan, for example, we have PECA, the Prevention of Electronic Crimes Act, promulgated in 2016, but it covers only the prevention of electronic crimes, like cybercrimes. We should have acts and regulations from the parliament that address these problems very specifically. All of them should be pointed out and promulgated by the government, from the government, for the public, and the government should enforce those acts. Only then, I think, will we move to the positive use of AI and get a fully positive impact from AI and all this technology.
So yeah, that’s from my side. Thank you.

Marko Paloski: Thank you very much.

Audience: I think the first step is to understand the needs, or what we need to combat; we need to research the needs of localities, states, and even countries so that we know what we are facing. That means doing the research, asking local communities and people, in a way that creates interaction between government, the private sector, and civil society. That is the first step, and only then can we face these questions. Education is not the only answer, but it is a really important step. We need to address gender issues, race issues, class issues, and any other type of violence in the curriculum of digital education. So I don’t think it’s the only way, but I think it’s really important.

Marko Paloski: Thank you. I will now go online to Denise to see if there are questions and comments to raise.

Denise Leal: Yes, there are a lot of questions and discussions here in the chat. There is also a participant who wants to speak, but before that I will address an important topic we have been discussing here. We were asked about regulations and innovation and what our considerations for policy input should be, and we started a discussion on how important it is to improve our regulations. We have very old innovation policies, and our intellectual property laws are not really adapted to the reality of the digital era, so we have to improve our regulations. We also need, and this is another comment I made here and have been making in other spaces, the United Nations spaces to connect with each other. We have the intellectual property spaces, and we have the biodiversity and climate change spaces; they are all creating treaties and impacting regulations, and they are all having discussions on AI, the internet, and other topics that encompass the digital era and digitalization. But why don’t we have these people and these United Nations spaces here at the IGF, or at the United Nations Data Forum, for example, when the themes are related? There is a mechanism being developed to regulate DSI, digital sequence information, but it is being developed in the UN biodiversity forum and is not connected to other spaces. How can that be, when it will impact the internet and, specifically, data? So for me, the point is that we need regulations to improve, to guarantee the safe use of the internet and innovations that are inclusive, and we need these internet regulations and the other regulations to be connected. These spaces need to talk to each other.
And a very important aspect: we cannot create regulation about the internet and technology without hearing the technical community, because we would create a law without effective impact. We can say “you have to supervise it,” but if we don’t say how and why, without hearing the technical community, it will probably not be a good regulation with effective results. But I see some comments here, and I would ask our IGF person in the chat if we can open Sasha’s mic so she can speak. She wants to make a comment. May I unmute her?

Marko Paloski: You will see now. I’m sorry.

Denise Leal: No?

Marko Paloski: I would check now, but I think let me just.

Denise Leal: Okay, so.

Marko Paloski: Now she should be able to unmute herself.

Denise Leal: Yes, thank you. We can see you.

Audience: Good day, everyone. A very lovely presentation and session so far, and I’m definitely glad to be here. If you can hear the accent in my voice, I’m from Trinidad and Tobago in the Caribbean, and this has been such a riveting conversation. I have just one or two comments based on it. When we consider inclusive education and inclusive design, we also consider, within the education spectrum, how we design technology to fit the user. The users, the students, are at the center of the process. It hasn’t always been that way, but it is definitely the direction in which we hope to take adaptive and inclusive technologies: designing with the person in mind. Just like the chairs you sit on when you think about human-centered design, that is the direction in which education and educators hope to take technology and AI. And that is the problem in itself: you have to be able to define the limitations of AI technology, and it is such a fast-moving field that those limits are difficult to pinpoint. So it is also important to consider, before we even put it into the curriculum to teach students, how we regulate it, how we guide that practice and move from governing policies to practices, to curriculum, to school content, and then to the classroom environment and the way we implement it in project- or problem-based learning. It is a very complicated issue that we are facing right now, and it is within my field of study here at the University of Prince Edward Island. So it is very, very complicated to really begin that process, and I would love to get a little more of your perspectives on that particular space and what you have been experiencing. From the point of view of Indigenous culture as well: in the transfer from physical space to digital space, what is being lost?
So many parts of their culture would be lost in that change from physical identity to digital identity, and you lose yourself in that process of digital exchange. So definitely, please, give me some more of your perspectives, from your own education, on how you see AI being used in the most proficient way when we can’t really define its parameters at this time. It’s such a fast-changing space. Thank you.

Marko Paloski: Thank you, Sasha. Whoever wants to answer this question may, but please keep it to one minute maximum, because we are close to the end of our session. So, okay.

Mohammed Kamran: Hello. Yeah. I would just add to the last point, about how we see the use of AI when there are no regulations, when we have nothing on the floor, and yet we are using AI. As I mentioned at the very start of the session, all of us are confused: the teachers, the parents, the students. We are all confused about how to make use of it. I keep giving the example of teachers because we are here to talk about the youth, but it is not specific to teachers. Some say it’s good to use; others are here only to stop us from using it. So I think everyone is confused, and I myself am unable to answer how we are doing with this, because we have nothing on the floor yet: no complete regulations. We have traffic rules, we have other rules, and we have general cybercrime rules and regulations, but nothing that very specifically pinpoints this issue. So I think, using our humanity and its limits, we should only use it in positive ways. We should teach our children and our youngsters positivity in everything, the way we use positivity in our own lives; we should do the same with this issue as well. So, thank you. That’s all from my side.

Marko Paloski: Thank you. Because we are getting close to our time, I will give every speaker a maximum of one minute to wrap up: what are the takeaways or action points that everybody should take into consideration, or actionable steps we should take, maybe by the next IGF or even sooner. So, who would like to start? Just one point, one minute maximum. Sorry.

Ananda Gautam: The major one would be, and it is a common one, collaborative action. It is not something one stakeholder can do alone; there is a role to play for every stakeholder. But navigating the landscape can be complex for people, so we need to make it easy. In some places people don’t know how to get started, so we need to speak up about how the internet and community networks work, how they can be deployed, and what the opportunities are to do that. And I think we need transparency in terms of the information available on how to do these things. I’ll stop here for the sake of time. Thank you. Thank you so much, everyone.

Marko Paloski: I think you should maybe turn on.

Denise Leal: I think at the next IGF we should have more debates about education, and more debates specifically inviting teachers and educators who want to work with the technical community, and we should try to analyze more of the public policies of each country and each continent, and have more of those debates. That’s what I think.

Marko Paloski: OK, thank you. It’s OK. Yeah.

Ethan Chung: All right, so for the next IGF, first of all, I think I won’t be here, because I have to go for an exam. But I’ll try my best to contribute. And for the next IGF: we are all here together, from different nations and different races, to create a standard so that people know what we are doing, and we want people to follow that standard so they know what is suitable and what is not. I think that’s what the IGF is doing now and what it is made for. Yes.

Mohammed Kamran: So, being the last speaker on-site, sorry, we keep getting confused between online and on-site. Being the last speaker on-site, I think it’s a responsibility for me, and I don’t think I’ll be able to wrap up as this deserves. But sticking to the topic: education should be more experience-based. For example, back in school or college, teachers should test students on their experience. If they give an assignment to write an essay on, let’s say, Riyadh, of course students are going to use AI for it, or the internet. So the assignment should be more experience-based: how was your experience in Riyadh? Of course, the AI doesn’t know that. They can also ask about the meaning of something to the students, say, what the internet means to them, so they contribute something of their own. Beyond that, they can ask about the application of something: if we have this mic, what is its application? What do you think we can use it for? Students will then put in a lot of effort from their own minds and their own hearts. And, being humans, I think we should also ask them about their feelings: okay, what do you feel about Saudi Arabia? What do you feel about IGF 2024 that happened in Riyadh? I’m sure each one of us could write maybe a 500-page essay on this. So yeah, I think our education should focus more on ourselves, instead of going for just a stereotypical style. Thank you so much for having us. Thank you, Marko. Thank you.

Marko Paloski: Thank you. I would just give a brief one minute to Umut.

Umut Pajaro: Well, in my case, it’s going to be fast: include educators and students, from all stages, in designing, implementing, and developing the technologies that actually affect their daily lives.

Denise Leal: Thanks, everyone, for being here. It was very interesting. I wanted to thank everyone, and especially to say that I really enjoyed hearing people from the Caribbean region. I am glad for it, because we in Latin America usually lack this contact, so it’s important for us to hear from the Caribbean. I also wanted to point out that this session was conceived and created with the DTC, the Dynamic Teen Coalition. Unfortunately they are not here; they need to receive more support in the coming years to be at the IGF. But I wanted to thank Ethan for being here representing the teens, because he is a teen with us, a very brilliant and intelligent one; you don’t even notice he’s so young. And I really appreciated our debate. Thanks to everyone who joined: our speakers, not only from the Youth Coalition but also from Pakistan and the projects there, and from the Brazilian Youth Program and other programs here. Thank you all for joining us, and we count on you for our next activities. A reminder that we are in our election registration period for the Youth Coalition, so if you are part of our mailing list, please register yourself. That’s it.

Marko Paloski: Thank you, Denise. I want to thank everybody: the panelists, the audience here, and everyone online. Thank you for coming and taking part in this discussion. And before we wrap up, perhaps we can take a group picture while you are still on screen. So yeah, thank you, everyone.

Ananda Gautam

Speech speed: 147 words per minute
Speech length: 2176 words
Speech time: 886 seconds

Digital divide and unequal access to technology

Explanation: Ananda Gautam highlights the existing gaps in digital access and literacy between the global north and south. He points out that there are multiple divides, including literacy gaps, digital gaps, and AI gaps.
Evidence: Gautam mentions that, according to UNESCO, around 3.6 billion people worldwide still lack reliable internet access.
Major Discussion Point: Challenges of AI and technology in education
Agreed with: Mohammed Kamran, Umut Pajaro, Ethan Chung
Agreed on: Need for digital literacy and proper use of AI tools

Balancing technology use with social interaction

Explanation: Gautam discusses the need to balance the use of technology with real-world social interactions. He points out the importance of teaching proper utilization of technology while maintaining human connections.
Evidence: He gives examples of people texting family members in the same house instead of talking, and friends looking at social media when meeting at restaurants.
Major Discussion Point: Ensuring ethical and inclusive use of technology

Mohammed Kamran

Speech speed: 165 words per minute
Speech length: 2163 words
Speech time: 784 seconds

Need for digital literacy and proper use of AI tools

Explanation: Mohammed Kamran emphasizes the importance of teaching students and adults how to use AI tools properly. He argues that there’s a need for regulations and education on the limits and ethical use of AI.
Evidence: Kamran gives an example of teachers detecting AI-generated homework and not knowing how to handle it.
Major Discussion Point: Challenges of AI and technology in education
Agreed with: Ananda Gautam, Umut Pajaro, Ethan Chung
Agreed on: Need for digital literacy and proper use of AI tools

Need for government regulations and policies

Explanation: Kamran argues for the need for government regulations and policies to guide the ethical use of AI and technology. He suggests that these regulations should be specific and address various issues related to technology use.
Evidence: He mentions Pakistan’s Prevention of Electronic Crimes Act 2016 as an example, but notes that more specific regulations are needed.
Major Discussion Point: Ensuring ethical and inclusive use of technology
Differed with: Umut Pajaro
Differed on: Approach to AI regulation in education

Umut Pajaro

Speech speed: 123 words per minute
Speech length: 1333 words
Speech time: 647 seconds

Importance of hands-on experience with AI in classrooms

Explanation: Umut Pajaro advocates for incorporating AI tools into classroom learning. He suggests that students should be taught how to use these tools effectively and understand their limitations through practical experience.
Evidence: Pajaro shares his experience of using AI tools in his classroom and testing them with students to demonstrate their limits.
Major Discussion Point: Challenges of AI and technology in education
Agreed with: Ananda Gautam, Mohammed Kamran, Ethan Chung
Agreed on: Need for digital literacy and proper use of AI tools

Need for interdisciplinary and project-based learning approaches

Explanation: Pajaro argues for the implementation of interdisciplinary and project-based learning approaches in education. He believes this provides a more realistic understanding of how technologies can be used in real-world situations.
Major Discussion Point: Adapting education systems to emerging technologies
Agreed with: Marcela Canto, Denise Leal
Agreed on: Adapting education systems to emerging technologies

Importance of teaching critical thinking and problem-solving skills

Explanation: Pajaro emphasizes the need to develop critical thinking and problem-solving skills in students. He suggests using AI tools to enhance these skills and prepare students for future challenges.
Major Discussion Point: Adapting education systems to emerging technologies
Agreed with: Marcela Canto, Denise Leal
Agreed on: Adapting education systems to emerging technologies

Importance of stakeholder involvement in creating guidelines

Explanation: Pajaro stresses the importance of involving all stakeholders, including students, in creating guidelines for AI use in education. He argues that this inclusive approach will lead to more effective and accountable ethical guidelines.
Major Discussion Point: Ensuring ethical and inclusive use of technology
Differed with: Mohammed Kamran
Differed on: Approach to AI regulation in education

Addressing language barriers for indigenous populations

Explanation: Pajaro discusses the challenges faced by indigenous populations in accessing digital content due to language barriers. He emphasizes the need for content in indigenous languages to ensure inclusive access to technology.
Evidence: He gives an example of indigenous people in Colombia creating Wikipedia in their language to protect and promote it.
Major Discussion Point: Ensuring ethical and inclusive use of technology

Rapid advancement of AI is redefining needed skills

Explanation: Pajaro points out that the rapid advancement of AI and other technologies is changing the skills required in the modern workplace. He argues that educational institutions must adapt to these changes to prepare the future workforce adequately.
Major Discussion Point: Future of work and education

Marcela Canto

Speech speed: 135 words per minute
Speech length: 781 words
Speech time: 345 seconds

Curriculum should address social issues like race and gender

Explanation: Marcela Canto argues that digital education curricula should address social issues such as race, gender, and class. She emphasizes that these issues are crucial for creating an emancipatory and effective educational system.
Major Discussion Point: Adapting education systems to emerging technologies
Agreed with: Umut Pajaro, Denise Leal
Agreed on: Adapting education systems to emerging technologies
Differed with: Ethan Chung
Differed on: Focus of digital education curriculum

Focus on training technology creators, not just users

Explanation: Canto emphasizes the need for education systems in the Global South to focus on training technology creators, not just users. She argues that this approach is necessary to combat inequality and change the current scenario of technological dependence.
Major Discussion Point: Adapting education systems to emerging technologies
Agreed with: Umut Pajaro, Denise Leal
Agreed on: Adapting education systems to emerging technologies

Ethan Chung

Speech speed: 168 words per minute
Speech length: 663 words
Speech time: 236 seconds

Lack of regulations and ethical guidelines for AI use in education

Explanation: Ethan Chung points out the lack of clear regulations and ethical guidelines for AI use in education. He emphasizes the need for education on how to use AI correctly and within legal boundaries.
Evidence: Chung gives an example of a friend who failed an exam by using AI-generated code without understanding it.
Major Discussion Point: Challenges of AI and technology in education
Agreed with: Ananda Gautam, Mohammed Kamran, Umut Pajaro
Agreed on: Need for digital literacy and proper use of AI tools
Differed with: Marcela Canto
Differed on: Focus of digital education curriculum

Marko Paloski

Speech speed: 155 words per minute
Speech length: 1908 words
Speech time: 737 seconds

Risk of job losses due to automation

Explanation: Marko Paloski highlights the potential risk of job losses due to automation. He points out that a significant portion of the global workforce could be affected by this trend in the near future.
Evidence: Paloski cites a report by the McKinsey Global Institute suggesting that by 2030, up to 800 million jobs worldwide could be lost to automation, representing one-fifth of the global workforce.
Major Discussion Point: Future of work and education

Denise Leal

Speech speed: 132 words per minute
Speech length: 1732 words
Speech time: 785 seconds

Need for lifelong learning and adaptability

Explanation: Denise Leal emphasizes the importance of lifelong learning and adaptability in the face of rapidly changing technology. She argues that traditional education approaches are no longer sufficient and that innovative thinking is needed.
Major Discussion Point: Future of work and education
Agreed with: Umut Pajaro, Marcela Canto
Agreed on: Adapting education systems to emerging technologies

Audience

Speech speed: 138 words per minute
Speech length: 791 words
Speech time: 342 seconds

Importance of human-centered design in educational technology

Explanation: An audience member emphasizes the importance of human-centered design in educational technology. They argue that technology should be designed to fit the user, with students at the center of the process.
Evidence: The speaker draws a parallel with the design of chairs, highlighting the need for human-centered design in technology.
Major Discussion Point: Future of work and education

Agreements

Agreement Points

Need for digital literacy and proper use of AI tools
Speakers: Ananda Gautam, Mohammed Kamran, Umut Pajaro, Ethan Chung
Arguments: Digital divide and unequal access to technology; Need for digital literacy and proper use of AI tools; Importance of hands-on experience with AI in classrooms; Lack of regulations and ethical guidelines for AI use in education
Summary: Speakers agreed on the importance of teaching students and adults how to use AI tools properly, addressing the digital divide, and providing hands-on experience with AI in classrooms.

Adapting education systems to emerging technologies
Speakers: Umut Pajaro, Marcela Canto, Denise Leal
Arguments: Need for interdisciplinary and project-based learning approaches; Importance of teaching critical thinking and problem-solving skills; Curriculum should address social issues like race and gender; Focus on training technology creators, not just users; Need for lifelong learning and adaptability
Summary: Speakers agreed on the need to adapt education systems to include interdisciplinary approaches, critical thinking skills, and addressing social issues, while focusing on creating technology creators and promoting lifelong learning.

Similar Viewpoints

Both speakers emphasized the importance of creating guidelines and regulations for AI use in education, with Kamran focusing on government involvement and Pajaro stressing the inclusion of all stakeholders, including students.
Speakers: Mohammed Kamran, Umut Pajaro
Arguments: Need for government regulations and policies; Importance of stakeholder involvement in creating guidelines

Both speakers highlighted the importance of considering cultural and social aspects when implementing technology in education, with Gautam focusing on maintaining human connections and Pajaro addressing language barriers for indigenous populations.
Speakers: Ananda Gautam, Umut Pajaro
Arguments: Balancing technology use with social interaction; Addressing language barriers for indigenous populations

Unexpected Consensus

Importance of human-centered design in educational technology
Speakers: Audience, Umut Pajaro, Ethan Chung
Arguments: Importance of human-centered design in educational technology; Importance of hands-on experience with AI in classrooms; Lack of regulations and ethical guidelines for AI use in education
Summary: There was an unexpected consensus on the importance of human-centered design in educational technology, with speakers from different backgrounds agreeing on the need to prioritize user experience and ethical considerations in AI implementation.

Overall Assessment

Summary: The main areas of agreement included the need for digital literacy, adapting education systems to emerging technologies, creating regulations and guidelines for AI use, and considering cultural and social aspects in technology implementation.
Consensus level: There was a moderate to high level of consensus among the speakers on the main issues discussed. This consensus suggests a shared understanding of the challenges and potential solutions in integrating AI and emerging technologies into education systems. The implications of this consensus include the potential for collaborative efforts in developing educational strategies and policies that address the identified challenges and opportunities.

Differences

Different Viewpoints

Approach to AI regulation in education

Mohammed Kamran

Umut Pajaro

Need for government regulations and policies

Importance of stakeholder involvement in creating guidelines

While Kamran emphasizes the need for government regulations to guide AI use, Pajaro advocates for a more inclusive approach involving all stakeholders, including students, in creating guidelines.

Focus of digital education curriculum

Marcela Canto

Ethan Chung

Curriculum should address social issues like race and gender

Lack of regulations and ethical guidelines for AI use in education

Canto argues for a curriculum that addresses broader social issues, while Chung focuses more specifically on the need for education on ethical AI use within legal boundaries.

Unexpected Differences

Perspective on AI’s role in education

Mohammed Kamran

Ethan Chung

Need for digital literacy and proper use of AI tools

Lack of regulations and ethical guidelines for AI use in education

While both speakers discuss AI in education, Kamran unexpectedly takes a more positive stance, viewing AI as a tool to be adapted to, while Chung focuses more on the potential risks and need for strict guidelines.

Overall Assessment

summary

The main areas of disagreement revolve around the approach to AI regulation in education, the focus of digital education curricula, and the balance between embracing AI and setting boundaries for its use.

difference_level

The level of disagreement among speakers is moderate. While there is general consensus on the importance of addressing AI in education, speakers differ in their specific approaches and priorities. These differences reflect the complexity of integrating AI into education systems and highlight the need for multifaceted solutions that address various concerns including regulation, curriculum design, and ethical considerations.

Partial Agreements

Partial Agreements

All speakers agree on the need for proper education on AI use, but they differ in their approaches. Gautam emphasizes balancing technology with social interaction, Kamran focuses on teaching limits and ethical use, while Pajaro advocates for hands-on experience in classrooms.

Ananda Gautam

Mohammed Kamran

Umut Pajaro

Need for digital literacy and proper use of AI tools

Importance of hands-on experience with AI in classrooms

Balancing technology use with social interaction

Takeaways

Key Takeaways

There is a need to adapt education systems to emerging technologies like AI, blockchain, and robotics

Digital divide and unequal access to technology remain major challenges, especially in developing countries

Ethical use of AI and data in education and workforce management is a key concern

Interdisciplinary, project-based learning approaches are needed to prepare students for future technological landscapes

Curriculum should address social issues like race, gender, and class alongside technical skills

Balancing technology use with social interaction and critical thinking skills is important

Regulations and policies need to be updated to address emerging technologies while fostering innovation

Resolutions and Action Items

Include more debates about education and involve educators in future IGF discussions

Analyze public policies on education and technology across different countries and continents

Create standards for appropriate use of AI and technology in education

Involve educators and students from all stages in designing and implementing educational technologies

Unresolved Issues

How to effectively regulate AI use in education when the technology is rapidly evolving

How to ensure ethical use of AI and data in both education and workforce management

How to make remote work opportunities accessible in underserved regions

How to balance innovation with regulation in technology and education policies

How to address the potential loss of indigenous cultures in the transition to digital spaces

Suggested Compromises

Use AI as a tool to assist learning rather than relying on it completely

Implement project-based assessments that require original thought alongside AI-assisted research

Develop code of conduct guidelines for AI use in education alongside formal regulations

Balance technology integration with preservation of traditional teaching methods and social interaction

Thought Provoking Comments

We are now experiencing a new configuration of colonialism. Migration and international division of labor continues, but in a new guise. While the global north has large technology companies that employ CEOs and software engineers, we in the south are left with the worst jobs, whether it’s mining for processor production or removing objectionable content from the internet.

speaker

Marcela Canto

reason

This comment provides a critical perspective on how technological advancement is perpetuating global inequalities, challenging the notion that technology inherently leads to progress for all.

impact

It shifted the conversation to consider the broader socioeconomic implications of technological advancement and the need for more equitable development.

We need to have mechanisms, right set up, like I will share a case study for you. We were doing a USIGF in Nepal, we had like a fellowship calls for that, and I think there were 150 applications for 15 spots. And then while going through the reviewing the application, we found out that 90% of the applicants use AI to draft their applications.

speaker

Ananda Gautam

reason

This real-world example illustrates the pervasive use of AI in unexpected areas and raises questions about authenticity and fairness in competitive processes.

impact

It led to a discussion about the need for guidelines on ethical AI use and the importance of teaching critical thinking skills alongside technological skills.

We need to integrate how do we people use digital technologies along with the emerging technologies so that they are aware. And also, we need to teach them. We need to know, or we need to have, I think there are not much resources done. What is the optimal time that they will be hanging up with their gadgets?

speaker

Ananda Gautam

reason

This comment highlights the need for a holistic approach to digital education that goes beyond just teaching technical skills to include digital wellness and time management.

impact

It broadened the discussion to include considerations of digital well-being and the importance of balancing technology use with other aspects of life.

When it comes to indigenous population, when they want to have access to technology, one of the things that I learned in the recent years is they have to decide what actually one to access on the internet and how they want to access on the internet.

speaker

Umut Pajaro

reason

This insight emphasizes the importance of cultural sensitivity and self-determination in technology adoption, particularly for indigenous communities.

impact

It led to a discussion about the need for culturally appropriate approaches to digital inclusion and the preservation of indigenous languages and knowledge in the digital space.

Overall Assessment

These key comments shaped the discussion by broadening its scope from purely technical considerations to include critical perspectives on global inequality, ethical concerns, digital well-being, and cultural sensitivity. They challenged participants to think more holistically about the societal impacts of technology and the need for inclusive, equitable approaches to digital education and development. The discussion evolved from a focus on skills and access to a more nuanced exploration of the complex interplay between technology, society, and culture.

Follow-up Questions

How can we articulate an emancipatory digital technology curriculum that truly combats, rather than propagates, oppression?

speaker

Marcela Canto

explanation

This question addresses the need for a curriculum that tackles discrimination and promotes equality in digital education, especially in the Global South context.

What are the needs of your country, state, city, and community that should be addressed by educational policies?

speaker

Marcela Canto

explanation

This highlights the importance of understanding local contexts when developing educational policies for digital literacy and technology.

How do we need to encourage different groups that are excluded to become technology producers?

speaker

Marcela Canto

explanation

This question addresses the need for inclusivity in technology production, especially for marginalized groups.

How can we organize a way that respects multiculturalism and the diversity of region and allows for global cooperation?

speaker

Marcela Canto

explanation

This question explores how to create inclusive and globally cooperative approaches to digital education and technology development.

How can remote work opportunities be made accessible in underserved regions?

speaker

Charmaine (online participant)

explanation

This question addresses the need to extend digital work opportunities to areas with limited access to technology and internet.

How can we ensure ethical use of AI and data in both education and workforce management?

speaker

Bangladesh Women IGF (online participant)

explanation

This question highlights the need for ethical guidelines in the use of AI and data across educational and professional contexts.

What should be considerations from youth leaders in policy input regarding regulations and innovation?

speaker

Gregory Duke (online participant)

explanation

This question explores how youth perspectives can be incorporated into policy-making for technology regulation and innovation.

As youths, how do you envision AI to be used in assessments in education?

speaker

Sasha (online participant)

explanation

This question addresses the potential applications and implications of AI in educational assessment from a youth perspective.

What are the positive and negative aspects that indigenous communities can face when they are connected to the internet in the very near future?

speaker

Nirvana Lima (audience member)

explanation

This question explores the potential impacts of internet access on indigenous communities, considering both benefits and challenges.

What public policies can be implemented to ensure both technology inclusion of young people and high-quality technology education, while ethically protecting these young people in a labor market increasingly driven by technology skills?

speaker

Mariana (audience member)

explanation

This question addresses the need for comprehensive policies that promote digital inclusion, education, and ethical protection for youth in the evolving job market.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #37 Her Data, Her Policies: Towards a Gender Inclusive Data Future

Session at a Glance

Summary

This discussion focused on creating gender-inclusive data policies and a more equitable data future in Africa. Panelists from various sectors explored the opportunities and challenges in achieving this goal. Key points included the need for representative data collection that considers intersecting identities, addressing biases in algorithms and data sets, and ensuring data privacy and security. Participants emphasized the importance of involving diverse communities, especially women and youth, in designing and implementing data initiatives.

The discussion highlighted the role of governments in developing inclusive policies, raising awareness about data rights and risks, and collaborating with multiple stakeholders. Tech companies were urged to prioritize inclusivity in product design and stakeholder engagement. The importance of capacity building, digital literacy, and education was stressed as crucial for empowering marginalized groups to understand and protect their data rights.

Challenges discussed included the implementation gap between policy creation and execution, data interoperability issues within Africa, and the need for greater transparency in data practices. Panelists agreed that progress is being made, with many African countries developing data protection frameworks, but emphasized that continued efforts are needed to build trust and improve policy communication.

The discussion concluded with calls for ongoing collaboration, education, and skill development to create a more inclusive data future in Africa. Participants recognized that while significant strides have been made, achieving a truly gender-inclusive data ecosystem requires sustained effort and engagement from all stakeholders.

Keypoints

Major discussion points:

– The importance of gender-inclusive data policies and practices in Africa

– Challenges and opportunities in implementing data protection laws and raising awareness

– The role of different stakeholders (government, tech companies, youth, civil society) in shaping inclusive data governance

– Strategies for engaging communities and building trust around data issues

– The need for collaboration, education, and transparency in data policy implementation

The overall purpose of the discussion was to explore how to create more gender-inclusive data policies and practices in Africa, with a focus on engaging different stakeholders and addressing challenges in implementation and awareness.

The tone of the discussion was generally constructive and solution-oriented. Panelists shared insights from their various perspectives and experiences. There was an emphasis on the progress being made, while also acknowledging ongoing challenges. The tone became more urgent when discussing the need for youth involvement and practical implementation of policies. Overall, the conversation maintained a hopeful outlook on achieving more inclusive data governance in Africa.

Speakers

– Christelle Onana: Senior Policy Analyst and lead of the digitalization unit at the African Union Development Agency

– Catherine Muya: Online moderator

– Suzanne El Akabaoui: ICT advisor to the ICT minister in Egypt, advisor for data governance

– Victor Asila: Data manager and lead data scientist at Safaricom (a telecommunications company in Kenya)

– Emilar Gandhi: Global Head of Stakeholder Engagement and Policy Development at META

– Bonnita Nyamwire: Head of the Research department at Pollicy Uganda

– Osei Keja: IGF Riyadh representative, public interest technologist

Additional speakers:

– Melody: Audience member

– Chris Odu: Audience member from Nigeria

– Peter King: Representative of Liberia Internet Governance Forum

Full session report

Gender-Inclusive Data Policies in Africa: Challenges and Opportunities

This discussion explored the creation of gender-inclusive data policies and a more equitable data future in Africa. Panelists from government, technology companies, and civil society organisations examined the opportunities and challenges in achieving this goal.

Key Themes and Insights

1. Importance of Gender-Inclusive Data

Bonita Nyamwire emphasised that truly inclusive data should represent all genders and their intersecting identities, including factors such as race, ethnicity, age, educational level, socioeconomic status, and geographical location. This comprehensive definition set the tone for a nuanced discussion about the complexity of achieving inclusive data practices.

Speakers highlighted the need to identify and address biases in data collection, algorithms, and technology design. Victor Asila from Safaricom specifically mentioned the importance of algorithmic audits to prevent bias.

2. Strategies for Achieving Gender-Inclusive Data

Panelists proposed various strategies to achieve more inclusive data practices:

a) Capacity Building and Education: Nyamwire advocated for transforming data collection processes through capacity building, while Suzanne El Akabaoui, ICT advisor to the Egyptian ICT minister, emphasised the need for broader digital literacy initiatives.

b) Community Engagement: Nyamwire stressed the importance of involving diverse communities in designing data initiatives. Emilar Gandhi from Meta echoed this sentiment, highlighting the value of stakeholder engagement and trust-building.

c) Transparency and Accountability: El Akabaoui emphasised the need for transparency and accountability in data practices, as well as the implementation of privacy-enhancing technologies.

d) Sharing Best Practices: Nyamwire suggested sharing good practices on collecting and reporting gender data across different regions and sectors.

3. Role of Technology Companies

Emilar Gandhi from Meta outlined several responsibilities for technology companies:

a) Ensuring inclusivity by design in products and policies

b) Hiring people from underrepresented groups

c) Engaging with stakeholders and building trust

Gandhi also highlighted Meta’s initiatives to support youth involvement, including their trusted partner program and efforts to engage with civil society organizations and academia.

4. Youth Involvement in Data Governance

Osei Keja, the IGF Riyadh representative, raised critical points about youth involvement:

a) Youth are often left out of policy conception and implementation

b) There’s a need for a shared vision and continuous learning to ensure youth buy-in from the beginning of policy development

c) Young people face challenges in accessing decision-making spaces and having their voices heard

d) The importance of creating opportunities for youth to participate in policy discussions and implementation

Christelle Onana from the African Union Development Agency also emphasised the value of youth perspectives in policy discussions, indicating broad agreement on this issue.

5. Government Roles and Responsibilities

Suzanne El Akabaoui outlined several key responsibilities for governments:

a) Developing inclusive policies and regulations

b) Implementing privacy-enhancing technologies

c) Promoting digital literacy initiatives

d) Ensuring transparency and accountability in data practices

El Akabaoui highlighted Egypt’s efforts in this area, including the establishment of the Personal Data Protection Authority and the implementation of data protection laws.

6. Challenges in Policy Implementation

While progress is being made in developing data protection frameworks across Africa, speakers identified several challenges in implementation:

a) Lack of public awareness about the importance of data protection

b) Need for improved transparency and collaboration in policy communication

c) Importance of contextualising approaches for different regions

d) Implementation gap between policy creation and execution

Specific Initiatives and Examples

1. Egypt’s Personal Data Protection Authority and data protection laws

2. Meta’s trusted partner program and engagement with civil society organizations

3. African Union Development Agency’s role in promoting youth involvement in policy discussions

4. Safaricom’s focus on algorithmic audits to prevent bias

Thought-Provoking Comments and Future Directions

1. Nyamwire’s comprehensive definition of gender-inclusive data, which broadened the conversation beyond simple gender binaries.

2. Keja’s call for a shared vision that includes youth from the conception stage of policy development, challenging typical top-down approaches.

3. Keja’s acknowledgement of male privilege in a patriarchal society, calling for men to be more engaged in supporting gender-inclusive policies.

Audience Questions and Responses

Audience members raised questions about:

1. The practical implementation of data protection policies

2. Strategies for improving data literacy among different population groups

3. The role of technology companies in supporting underrepresented groups in developing countries

Panelists emphasized the need for continued collaboration between governments, tech companies, and civil society organizations to address these challenges.

Conclusion

The discussion revealed a strong commitment to creating more gender-inclusive data policies and practices in Africa. While significant progress has been made, particularly in developing data protection frameworks, challenges remain in implementation, awareness-raising, and ensuring meaningful inclusion of diverse perspectives, especially those of youth and underrepresented groups.

Key next steps identified by panelists include:

1. Improving collaboration between governments and tech companies on data transparency

2. Developing sustainable programs to protect and empower underrepresented groups

3. Enhancing mechanisms for policy implementation

4. Exploring secure ways for African countries to share data among themselves

5. Continuing to promote digital literacy and awareness of data protection issues

Panelists’ closing “one word” summaries:

– Emilar Gandhi: “Collaboration”

– Suzanne El Akabaoui: “Awareness”

– Victor Asila: “Inclusivity”

– Osei Keja: “Action”

– Bonita Nyamwire: “Transformation”

These summaries underscore the multifaceted approach needed to achieve a more equitable and inclusive data future for Africa.

Session Transcript

Christelle Onana: On behalf of the African Union Development Agency, I am honoured to welcome you here today for this session, which is actually a continuation of a discussion we started at the African IGF in Addis Ababa. So the African Union Development Agency is mandated to support the socio-economic development of African countries. And as part of that, this year we have been working on supporting the domestication of the AU data policy framework, because we do support the implementation of policies and strategies defined at the African Union level. So we are committed to supporting the implementation, as we just said, of the African Union data policy framework that was adopted in 2022. We are working to help the member states to develop robust national strategies and national data policies and build capacity in general within data governance, and specifically with the data protection authorities. As we work to build data-driven economies across the continent, we must be acutely aware of the persistent gender digital divides we have been hearing about since the beginning of the forum on Sunday, and the gender gap that exists within the data governance landscape. These gaps may pose a significant barrier to what we are trying to achieve, to the full participation of African women and marginalized groups in the digital economy. So this session will explore the importance of a gendered approach to data and digital environments. We must ensure that the unique needs of women, girls, and marginalized communities are recognized and met. This requires the intentional application of a gender lens in the implementation of the AU data policy framework and the development of national data strategies and policies. So we have with us this morning a distinguished panel, on site and online, who will be sharing with us their expertise and insights on this crucial topic. So my name is Christelle Onana. I work for the African Union Development Agency.
I’m a senior policy analyst and I also lead the digitalization unit. So with me here this morning, we have online Mrs. Suzanne El Akabaoui, who is the advisor to the ICT minister in Egypt. Welcome, Suzanne, if you’re online, if you can hear us.

Suzanne El Akabaoui: Yes, I can hear you. Thank you so much.

Christelle Onana: We also have Mr. Victor Asila online, who’s a data manager at Safaricom. Welcome. Thank you, and good morning. On site, we have Madam Bonita Nyamwire, who’s the director of research at Policy. We do have Madam Emilar Gandhi, who is Global Head of Stakeholder Engagement and Policy Development at META. And to close the loop, we have Mr. Osei Kagea, who is IGF Riyadh representative. So welcome to all of you. I think we’ll start the discussion straight. I would like you, starting with the speakers online, to introduce yourself, share with us in two minutes, briefly, what you do that is relevant to our topic today. Starting with Mrs. Suzanne Akabawi. Thank you.

Suzanne El Akabaoui: Thank you. Good morning, esteemed panelists. My name is Suzanne El Akabaoui. I am advisor to the ICT minister for data governance. My main role when I joined the ministry was to establish the Personal Data Protection Authority of Egypt so that we can implement the personal data law that was issued back in 2020, as part of the creation of a legal context that is favorable for digital transformation.

Christelle Onana: Thank you, Suzanne. Victor?

Victor Asila: Thank you. Good morning. My name is Victor Asila. I work for Safaricom, a telecommunications company in Kenya, as a lead data scientist. So on a day-to-day basis, my work is to lead a team that builds data products using scientific methods that can be used for data protection and to give insights to the business so that the business can work effectively. It’s a pleasure being here, and I’m glad to be part of the panel.

Christelle Onana: Thank you, Victor. Bonita?

Bonnita Nyamwire: Good morning, everyone. My name is Bonnita Nyamwire, and I work for Pollicy. Pollicy is based in Kampala, Uganda, and at Pollicy we work at the intersection of data, design and technology to ensure that the experiences and needs of women are amplified in tech and data, and in digital technology overall, on the African continent. Thank you.

Osei Keja: Hello, good morning, good afternoon, depending on where you are joining us from the world. My name is Osei Keja from Ghana. I’m a public interest technologist working at the intersection of society and technology, and I’m also here as an African youth rep on this panel. The topic is a very nuanced one, and youth being the central core of this conversation, whether it’s forming or using the Internet, we hope to be part of the discussion where we get to contribute. I’m excited to be here and hope to learn more. Thank you very much.

Emilar Gandhi: Thank you so much, everyone. My name is Emilar Gandhi, and good morning to you all. I’m head of stakeholder engagement at Meta, and my role really is to ensure that we have strategies in place so that, you know, whenever we are building our products or our policies, we engage externally. We talk to people who use our products, we talk to experts, and we talk to people who are interested in the issues that we are dealing with. So that’s the team that I work on. And this is an important topic. And thank you so much for including us. I’m really looking forward to having this discussion and learning from everyone on the panel.

Christelle Onana: Thank you very much. I think now that we know who we are in the room, we can kick off with the discussion. So we’ll start with Bonita with the first question. So what, for you, is a gender-inclusive data future, specifically for Africa? And how can it be achieved?

Bonnita Nyamwire: Thank you so much, Christelle. So gender-inclusive data is data that is representative of all genders. It is also representative of their intersecting identities. By intersecting identities, I mean race, ethnicity, age, educational level, socioeconomic status, geographical location, so that everyone is captured and no one is left behind. Because intersection reveals injustices, inequalities, and so on and so forth. Then the other one on gender-inclusive data is that it actively identifies biases and then addresses them. There are several biases in data, but also in technology. For instance, there is bias in algorithms. I remember this was talked about on Monday in the plenary session. There is bias in data, which can make data skewed or unevenly distributed, which means that even the outcomes of such biased data will also be unevenly distributed. And so this also affects the other processes that come after the data, where such data will be used. For instance, in decision-making, it will also be uneven and so on and so forth. There's also bias in designing technologies, for instance, bias in the languages, you know, not supporting diverse languages, for instance, dialects on the African continent, which then limits accessibility. Then the other one about gender-inclusive data is that it ensures safety and privacy, generally protecting individuals from harm and exploitation, especially due to data misuse, but also due to the biases that come from the data. Then the other one is that gender-inclusive data should ensure agency and ownership, in terms of allowing individuals and communities to have control over their data, in a way that they control how the data is collected, how the data is stored, how the data is used. If there are any changes that need to be made to the data, for instance, they are involved.
So generally, citizens participating in the data, but especially on the gender side and other marginalized communities. And so how can this be achieved? One is to mainstream gender into national statistics, in planning, in research, because mainstreaming helps to assess gender data collection and identify gaps relating to missing gender-related indicators. So mainstreaming is one aspect that can be done to achieve gender-inclusive data and gender-inclusive data initiatives. The other is transforming data collection processes through capacity building. For instance, capacity building on designing data collection tools to be able to capture data across different gender diversities, and then training data collectors and researchers to understand what gender inclusivity is, because not everyone may be aware or have that kind of training. And then, again, under transforming data collection and capacity building, there is also equipping researchers and other stakeholders like policymakers with the skills to identify and mitigate the biases that I already talked about. The other is to involve and engage diverse communities in designing and implementing data initiatives, for instance, collaborating with women's and feminist organizations to align the goals and processes of initiatives. And then to share good practices on collecting and reporting gender data so as to shape the notions and impact of excellence. This is very good: sharing good practices, what the different stakeholders are doing in terms of gender-inclusive data initiatives, so that we can all learn from each other. And then also to connect gender data to gender equality agendas, because gender equality agendas ideally are based on evidence and facts. And the facts come from the data that is collected, whether it is text data, whether it is numbers.
So connecting the two, gender data and gender equality agendas, is very important. And then invest in research and innovation: funding interdisciplinary research focused on the intersection of gender, data, and technology is also very important. And then also sustaining this funding, in terms of the upskilling that I already talked about, funding to maintain collaboration, and so on. Yeah, thank you so much.

Christelle Onana: Maybe I should have interrupted you earlier, because you gave quite a lot of insight towards the answer we were expecting. You mentioned quite a lot to answer the question. What I noticed is you mentioned gender-inclusive data involving cultural representation within the design, the algorithms, safety and privacy, and the agency and ownership of communities and individuals. And then you went on to how it can be achieved: mostly involving the communities to collaborate on gender-inclusive data, good practice sharing, and investment into research. I think we'll leave it there. We need to digest it, and we'll move to the next speaker, Mrs. Suzanne El Akabaoui. So, what are our African governments doing currently to ensure that women and marginalized groups have control over their data in a way that respects privacy and agency?

Suzanne El Akabaoui: Thank you very much for allowing me to have a word about this matter. I think that mainly the work that is being done in African countries revolves around trying to have inclusive policy and regulations. It's important to develop gender-inclusive policies and enforce these policies so that they expressly address the needs and rights of women and marginalized groups. These policies should include ensuring that data protection laws are inclusive and consider the unique vulnerabilities of these groups. A very important principle in this case is the principle of transparency and accountability, whereby regulations should require companies to be transparent about their data practices and hold them accountable for any issues related to misuse of data. In this case, governments should provide for regular audits and impact assessments to ensure compliance with privacy standards. Another aspect is related to education and digital literacy, in which case providing education and training on digital literacy would empower women and marginalized groups to understand their rights and to exercise them, and to understand the implications of data sharing. This would include teaching them how to protect their personal data and personal information online, and of course encouraging women and marginalized groups to pursue education and careers in fields like science, technology, engineering, and mathematics. These types of education encourage critical thinking, creativity, and problem-solving skills. So, by having more people thinking critically, it will also help us implement the laws in a more efficient way. On the side of businesses, tech companies should prioritize the design and development of technologies that are inclusive and accessible. In order to mitigate the impacts of digital illiteracy in certain instances, having systems that instate privacy and personal data protection ex ante is important.
So it's important, during the design and testing phases, to include personal data protection principles. As mentioned earlier by my colleague, addressing bias is key. So it's important to implement measures to identify and mitigate bias in datasets, and obviously, recently, in AI systems as well. It's important to use diverse datasets and involve marginalized groups in the development process to ensure fair and equitable outcomes. Community engagement is another important pillar, whereby both governments and tech companies should actively engage with communities to understand their needs and concerns, and this can be done through consultations, focus groups, and partnerships with local organizations. Collaboration with civil society is important as well, because working with NGOs, advocacy groups, and other civil society organizations can help ensure that the voices of women and marginalized groups are heard and considered in policymaking and technology development. Finally, strengthening personal data protection through data protection laws that are robust and that provide strong protection for personal data, through the issuance of clear guidelines on how to obtain consent, how to minimize data, and how to guarantee the right to access and deletion of personal data where applicable. The implementation of privacy-enhancing technologies is becoming important: encryption, anonymization, and securing data storage to protect users' data from unauthorized access and misuse is an important aspect as well. To give a quick overview of where Egypt stands in this case, Egypt does recognize the importance of data protection, and it has introduced Law 151 of the year 2020, which aims at protecting personal data and penalizing the misuse of personal data.
So it is part of the strategic goals of Egypt Vision 2030, and we work on achieving and guaranteeing gender equality through the empowerment of women economically, socially, and politically. We do try to give women control over their data. The Personal Data Protection Center has an important project in this case, ensuring that their privacy and agency over their data are respected. So generally speaking, it is important that these visions emphasize the importance of creating inclusive digital societies where all citizens, especially women and marginalised groups, can benefit from digital transformation and protection initiatives. Thank you.
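[Editor's note] The privacy-enhancing measures Suzanne describes, pseudonymization plus data minimization applied before storage, can be sketched in a few lines of Python. This is only an illustrative sketch: the field names, the sample record, and the salted-hash approach are assumptions for the example, not part of Egypt's Law 151 or any Center methodology.

```python
import hashlib
import secrets

def pseudonymize(identifier: str, salt: bytes) -> str:
    """Replace a direct identifier with a salted SHA-256 digest."""
    return hashlib.sha256(salt + identifier.encode("utf-8")).hexdigest()

def minimize(record: dict, allowed_fields: set) -> dict:
    """Purpose limitation: keep only the fields the analysis needs."""
    return {k: v for k, v in record.items() if k in allowed_fields}

# In practice the salt would live in a key store, separate from the data.
salt = secrets.token_bytes(16)

# Hypothetical raw record containing direct identifiers.
raw = {"national_id": "29901011234567", "name": "A. Hassan",
       "region": "Giza", "age_band": "25-34"}

stored = minimize(raw, {"region", "age_band"})
stored["subject"] = pseudonymize(raw["national_id"], salt)

# Direct identifiers never reach storage, but the same person maps to
# the same pseudonym under the same salt, so longitudinal analysis works.
assert "national_id" not in stored and "name" not in stored
assert stored["subject"] == pseudonymize(raw["national_id"], salt)
```

The design point is the "ex ante" one from the discussion: protection is applied at collection time, before the data ever reaches an analyst, rather than bolted on afterwards.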

Christelle Onana: Thank you very much, Suzanne, for your very elaborate answer. Allow me to follow up with you on a couple of points that you mentioned. Definitely, our member states are working towards strengthening the data protection authorities and enforcing the data protection side for the communities overall, and I'm sure for the minorities, the marginalised groups, for the women, for our girls. But you talked about governments that need to work with companies to be transparent about their data processes. How does this practically happen? That's one. How is it enforced, I would like to ask? What do our states do? How do we track inclusive technologies? I mean, practically, how does that work at the national level? What do governments do with research, with academia? Because you mentioned the data protection authorities, you mentioned the companies, the commercial side of it, you mentioned civil society. What happens at the academia level? And practically, does the engagement with the communities happen in the countries? How often? How is it further enforced or implemented? How do we measure the impact? How can we evaluate the impact of such engagement? Thank you.

Suzanne El Akabaoui: Thank you for your question, that is very interesting. Actually, in Egypt, the personal data protection law has established the Personal Data Protection Authority and has given the authority a certain set of mandates, varying from building the capacity of personal data protection officers and personal data protection consultants, but it has also provided for controllers and processors to obtain licenses. In the case of Egypt, this has been an important part, because it is allowing the Personal Data Protection Center to review practices with the tech companies, and generally speaking with controllers and processors, on sound personal data management practices. The center instates methodologies on how to handle personal data in a more secure way and, through the licensing process, reviews the methodologies, the policies, and the collection, so that we can guarantee that the principles relevant to personal data protection are respected, such as minimization, purpose limitation, etc. So in practice, what happens is that through the review, through the license-granting process, we get to review with the various stakeholders their practices relevant to personal data protection, both from a legal perspective and from a technical perspective. On the other hand, the center has another mandate, which is to raise awareness. So we work very closely with civil society and with the private sector, and we try to raise awareness through various events. And this is another aspect of how we get to include all stakeholders. In the case of issuing policies and guidelines, and keeping them up to date with the fast-paced developments in technology, we also hold public consultations on these guidelines. This gives us an idea about the interests and how the different stakeholders see the implementation of the law, so that we can implement it in the most efficient ways.
Academia is heavily involved, because it trains the data protection officers who assist the Personal Data Protection Center in implementing the law in their respective organizations, be it a controller or a processor. We are trying to include curricula related to personal data protection in various disciplines through academia as well. So the center has a vast range of mandates that, working together and put together, should allow us to mitigate the impacts and have a more inclusive approach in the journey of digital transformation. Thank you very much. I hope I have answered your question.

Christelle Onana: Definitely. Thank you very much, Suzanne. Thank you. We'll now move to the technology side. We'll look at Victor, who works daily on big data and data science. So generative AI and big data analytics are shaping the future of information. You do a lot of computation, a lot of analysis, and we get insights from that. What opportunities and what risks do these technologies present for gender-inclusive data policies, looking specifically at the African context, for our women, mothers, ourselves, our girls, and the marginalized groups?

Victor Asila: Right. Thank you. So there are numerous opportunities. Victor, maybe before you start, I would like you to imagine that my grandmother was in the room and you were to explain that to her, so we can all understand. I will try. I will try to be as basic as possible. So, having said that, I think it is imperative that I try to define what generative AI and big data analytics mean for a basic person. I'll start with big data. We generally describe big data using what we famously call the three Vs. We describe it using the volume, that is, the amount of data we generate per unit time. We also define it by a second V, which we call velocity: how frequently do we generate this data per unit of time? Then thirdly, we define it in terms of variety: what different kinds of data do we generate within a specific unit of time? The different types of data generated could be classified as text, images, sound, video, and what have you. So I think, from a basic perspective, that's how we describe big data. Then analytics is just the tools and methodologies that we use to get insights from the big data that we have generated and collected. Now, what is generative AI? Generative AI is a type of AI that can generate new content, and the new content can be text, can be a word, can be a picture, can be a video, or whatever is possible within the technology. So I think, in that sense, my grandmother should be able to understand what generative AI and big data mean. Moving on to what potential these hold in terms of shaping gender-inclusive data policies, we'll start with opportunities. I think, from a technology point of view, having the ability to generate huge datasets at a faster rate, and having varied types of data being collected, we have an opportunity, number one, to ensure that we get granular insights.
And these insights are not just insights for the sake of insights, but insights that are related to gender disparities, insights that are going to help us identify these gender disparities and give us a glimpse into the areas that need intervention. So, by analyzing big data, we have an opportunity to uncover nuanced patterns and trends that relate to gender. That's the first part: we have an opportunity to collect gender-specific data, and we have an opportunity to analyze this gender-specific data to ensure that we uncover the patterns that relate to gender. Then, number two, as a practitioner, most of the time we help the business use data to tailor specific products that speak to the specific appetites of our customers. We can flip that and also use the same technologies to come up with solutions, using data, that address gender-specific issues. And in doing so, we are going to promote inclusivity and also promote equity. Now, the second opportunity that I look at from a practitioner perspective is that, as practitioners, when we build these models, we use algorithms, and partly these algorithms propagate the biases that are inherent in the data that we generate and collect as humans. By propagating these biases, we inadvertently perpetuate them in the algorithms. What we can do is what we call algorithmic audits. From a Safaricom perspective, we specifically come up with policies and practices that each data scientist who is building a model or an algorithm must adhere to. Part of the checks that we have is to ensure that the algorithms that we build do not perpetuate bias, and that they are fair and equitable. From a craft perspective, that's what we do at Safaricom. Also from a craft perspective, we try to ensure that the data that we use to build these models and algorithms is as diverse as possible.
One thing we usually do is ensure that the data is balanced. We encourage our data scientists to ensure that their data is balanced, that it's inclusive of all groups of interest, and, more so, that it does not negatively impact any group. A third opportunity that I see is policy development and implementation. Once we have these insights, we can make informed decisions, and therefore policymakers can leverage these insights to craft more effective and inclusive gender policies. There's another bit to that, which is monitoring and evaluation. Since we are collecting timely data, I think we have an opportunity to monitor on a near-real-time basis. I always believe that we cannot achieve real-time monitoring, but we can achieve near-real-time monitoring, where we continuously monitor the impacts of gender policies, providing feedback to our policymakers and enabling them to make adjustments where needed. So I'll quickly cover the risks. One risk that I see as a practitioner is around data privacy and security. Whenever you handle gender-specific information or data, that exposes you to information that can be used in a negative way, and therefore it can expose privacy issues of individuals by revealing their sensitive information. Any breach could have serious implications, and bad actors can use that as an opportunity to misuse the data. It could also be misinterpreted, inadvertently leading to policy that can harm rather than help the gender inclusivity agenda. The other one, which I think I've spoken about, is bias and discrimination. Then we can also run into ethical and legal challenges.
I've heard of cases where some companies have been penalized by the regulators because of the biases that their algorithms inherently carry, and, in doing so, they have failed to adhere to the regulatory compliance landscape around data usage and the complexity of AI. So I think, in a nutshell, those are the risks and opportunities that I see from a practice perspective. Thank you.
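[Editor's note] The balance check and algorithmic audit Victor describes can be illustrated with a minimal sketch. The group labels, toy data, helper names, and thresholds below are hypothetical examples, not Safaricom's actual audit policy.

```python
from collections import Counter

def balance_ratio(groups: list) -> float:
    """Smallest group count divided by largest; 1.0 means perfectly balanced."""
    counts = Counter(groups)
    return min(counts.values()) / max(counts.values())

def parity_gap(groups: list, approved: list) -> float:
    """Largest difference in approval rate between any two groups
    (a simple demographic-parity check on model decisions)."""
    rates = []
    for g in set(groups):
        members = [i for i, x in enumerate(groups) if x == g]
        rates.append(sum(approved[i] for i in members) / len(members))
    return max(rates) - min(rates)

# Toy sample: 3 women, 5 men, with a model's approve/deny decisions attached.
groups   = ["f", "f", "f", "m", "m", "m", "m", "m"]
approved = [True, False, False, True, True, True, False, True]

print(round(balance_ratio(groups), 2))          # 3/5 -> 0.6
print(round(parity_gap(groups, approved), 2))   # 0.8 - 1/3 -> 0.47
```

An audit policy of the kind described might then block deployment if, say, the balance ratio falls below 0.8 or the parity gap exceeds 0.2; in this toy sample both checks would flag the model for review.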

Christelle Onana: Thank you very much, Victor. Maybe Emilar has something to add there. Thank you so much. Adding to what he just said or in general? In general, to what he said.

Emilar Gandhi: Yeah, definitely. Thank you so much, Victor. I was writing notes. I know you asked how to describe this for, you know, our grandparents, but I was writing notes as he was talking, because we all benefited from that. Beyond just adding to what he said, I think obviously it's important to look at inclusion for products, to look at inclusivity by design and not just think about it as an afterthought. That's something that Safaricom is doing, and I think that's really, really important. But just adding on to what he said, I think for us as a tech company, when we think about what inclusion means for us, and just going back to what some of my colleagues have already said: for us, diversity and inclusion in data practices obviously starts well before you see the product out there. Inclusion for us is at the core of our mission as a company; it defines what we do. And by that, I mean, let's take a step back, because when we are thinking of just products and policies, we are forgetting that there are people behind them. And I've seen, I think, even some research by Pollicy as well, that it's important for tech companies, even in their hiring practices, to actually hire people who come from these underrepresented groups. I'm really gratified to see, even for Safaricom, that you have people like Victor who are leading this work, rather than having someone sitting somewhere trying to design products for a society that they are not in, because lived experiences, I think, are very, very important. You might read about something, you might learn about something from books, but actually having lived experiences is important.
So for us it matters; inclusion is at the core of our mission. We hire people with that in mind, but also, when you hire someone, we know professional development is important too, because you want to keep them, so making sure that they actually stay in the company. I've been at Meta for eight years now, so I think prioritising professional development is important. I'll also look at where I work, which is stakeholder engagement. And for us, stakeholder engagement is not just outreach. I think for some it's just focusing on outreach, but we know stakeholder engagement is about relationship building. And once we talk about relationships, particularly for us in these parts of the world, there is the issue of trust. And I'll be the first to acknowledge that there is a trust deficit, especially between us as tech companies and the people that use our platforms. How do we ensure that we build trust? It's not something where, by just being here at the IGF and having this conversation, we build trust. It's a marathon. One thing Madam Suzanne mentioned is that it's important for tech companies, and we do this as well, where there is a trust deficit, to work with local partners to ensure that they are an intermediary for us. We have a trusted partner program; I'm sure some of you have heard about that, where we have 400 organizations globally that we work with. And we'd be happy, if you're part of that program, to hear from you as well. But our inclusive stakeholder engagement strategy is anchored on three things. One is expertise. And by expertise, I know in this room we have some people who have PhDs, but for us, we're also looking at lived experience as a form of expertise, and I think that broadens the people that we talk to. I'm happy to see someone from the youth group, and I was saying to him, what's the limit of youth groups?
Because we know in some regions, it's as high as it can be. So expertise is one pillar that we look at. We also look at diversity. So when we are identifying the stakeholders that we are going to talk to about our policy development or product development, we go beyond geographical diversity, we go beyond gender diversity, or even language or expertise; we really look at it from a comprehensive perspective. I liked what Bonnita said about intersecting identities, because if we just talk about underrepresented groups, is it women, is it people with different needs? There are intersecting identities that we can look at when we are engaging. And the last one would be transparency. We can do all these things, look at people with lived experiences, people who have the expertise; we can be as inclusive as we think we want to be. But if we are not transparent about the work that we are doing, and we are not talking about it, responding to the questions that we receive, and sharing as much information as possible about the decisions that we are making, about who we are engaging, why we are engaging with them, and what we have heard from them, then it's a fruitless exercise. So being transparent about all these things, I think, is important. And I think I'll just end there, and let me know if there's anything else. I know it was just about "do you have anything to add?", and I've added a whole lot.

Christelle Onana: We'll come back to you. Thank you very much, Emilar. So now we'll turn to the young man in the room representing the youth voice. So, Osei, how do you think young people in our society can play a role in shaping a more inclusive data governance ecosystem? Where do we start? Can you share any initiative where you have had a voice and it successfully influenced a policymaker in the data governance landscape?

Osei Keja: Thank you very much. A lot has been said. While my able panelists were presenting, I picked up some words: representative, inclusive, transparent, lived experience, not just outreach, expertise. I think we need to start from the conception stage of policies, and then we talk about implementation. And also one word you used: afterthought. Oftentimes, young people are seen as an afterthought in these stakeholder engagements or in the formulation of policies. We are just like props for the occasion when everything has been done: hey, young people, come, and we just get added on at the end. But I'm very, very happy that in this discussion we have a young person on board. Throughout the whole process of designing frameworks, data frameworks, your voices are not there. I made this comment on this same topic in Ethiopia; Honorable Stanley Olagide mentioned that there was a youth forum, and one young person made a comment which changed the entire perspective of parliamentarians. Young people, especially in West Africa and also across Africa, and I will lead with Mariam as coordinator for the West Africa Youth IGF and also the Ghana IGF, have been doing amazing things. At the Ghana IGF, we did a virtual tech hackathon this year, and so many ideas were churned out. We had our report and we did push it to our policymakers. But as I said, from the conception stage to the implementation stage, there's that kind of big gap. Young people are left out. In the implementation process or, say, the legislative process, young people are left out. We have the expertise too. We are not saying we are a repository of all knowledge, or that we are a monolith of knowledge, or that we know better than our fathers or our mothers. No. But we need to be included. There's that big gap there. From the conception stage, we've not had anything. So at the West Africa Youth IGF, we had good engagement with our parliamentarians. There were outcomes.
But we don't know the end game of it. We've not been included in the whole process at the end. So young people have been doing the African Youth IGF. We came here last time in Ethiopia. We've done incredibly well. We've had our outcomes. But we don't know; it's just, okay, go, that's it. Young people have been at the forefront of advocacy, awareness, and also mobilization. We've been effective mobilizers. So Safaricom, Meta, AU-NEPAD, GIZ are doing amazing work. But I think we can be great conveyor belts in speaking to people with lived experience, bringing people with lived experience on board, conveying the message out there, and being part of the implementation process. I hope I've fairly answered your question.

Christelle Onana: I'm following up. So I understood that you have been voicing, you have been doing great things, and there has not been any positive outcome from your engagement or your handovers, which means there is no successful example of what you have worked on that has influenced a policymaker. Is that correct?

Osei Keja: On the granular level or, say, on a personal level, I personally may have worked on projects which have influenced things. But as a collective, it's like firefighting; it is very hard. Across the continent of Africa, a lot of young people feel dejected and unheard. It's like they are screaming but they are not heard, because they keep pointing out the same things. On a personal level, on a granular level, people may have, or I may have, some experiences here.

Christelle Onana: Okay, may I follow up by asking: what do you recommend, two to three practical recommendations, on how we can engage and make sure that your voice is not only heard but acted upon regarding the issues we're discussing now, by the policymakers, the private sector, the researchers, us as a development agency, and the partners?

Osei Keja: Yeah, vision. I would like to quote my favorite teacher from primary school. He said, if you know the road but you don't know where you are going, it will lead to nowhere. So, we need to have a shared vision. From the conception stage to the implementation stage, we know where we are going, so that young people are bought into the idea: we are going here, and this is how we are going. And also, systems thinking. We need to continuously think through things. There may be some faults from the conceptualization, or say in the frameworks, where we can piece things together, like a jigsaw, so that we know we are moving somewhere. Continuously thinking through things, through linear complexity or, say, diverse complexity. And also, continuous learning. Whether it's policymakers or big tech, whatever the obstacles, we need to continuously learn. That leads to personal mastery: continuously learning, benchmarking from other experiences. We need to robustly think through the framework, where we learn from other people and benchmark appropriately. So that is my answer to you.

Christelle Onana: Thank you very much. Thank you, Osei. So I would like to come back to Emilar before we open the floor for the first break for the participants. We heard you when you were complementing Victor's answer about how you consider inclusivity at Meta, how you're doing that. I would like to add: how is the inclusivity that you incorporate into your processes from the beginning tailored to different contexts, looking at different perspectives? Thank you.

Emilar Gandhi: Thank you so much. Thank you, that's a very important question. Before I even respond to that, I was just writing notes on what he was saying, and I think youth also need as much support as possible to get to where they are going once there's a shared vision. In terms of contextualizing our approaches, here's what we do, and it really depends on each context. What's important for us, first of all, is really to ensure that external stakeholders are involved right from the beginning when we are engaging. So it's not when we are now going out to them, but actually understanding what their issues are, what it is that we need to prioritize. We have the products, the Facebook, Instagram, all these platforms, but what are some of the issues that you are facing in terms of, you know, our community standards? So understanding that and bringing it internally, to ensure that when we look at our policies, we look at them from that lens. So that's number one: ensuring that we are actually prioritizing issues that we hear on the ground, that we hear from local participants. The second thing is actually devising our engagement approach with an understanding of our stakeholders. And by this I mean not all formats of engagement work with all stakeholders. There's Zoom fatigue, not the company Zoom, but, you know, just engaging virtually. For some, it doesn't work. Some people prefer face-to-face. Also understanding language, because we know, with different languages, what we might express in English is not what it is in isiZulu or in other languages. So really understanding who needs to be in the room as well. I might be the one working on the issue, but for him to understand, maybe I'm not the one who can talk to him about it.
Can we do it via policy or can we do it through other organizations? So understanding that to ensure that our engagement strategy speaks to that. So what I’m trying to say here is that there are processes that we have to put in place before we get to the destination, as we are saying. So many ways, I think, of slicing the cake.

Christelle Onana: You know, since you started talking, I was about to ask: what have you done in relation to the youth? Can you share with us practically?

Emilar Gandhi: So, what have we done with the youth? First of all, considering the point that he mentioned, being on board from the beginning, having the shared vision. Yes. So what have we done with the youth? Quite a number of initiatives. I can start with one around capacity building, because we also know that for youth to actually contribute meaningfully to our product and policy development, you have to understand the issues. Otherwise, just you and me talking will not be as useful. So one of the things that we have done is to put resources into capacity building initiatives like the African School on Internet Governance, which I think last took place in Addis. So making sure that initiatives like that are well supported and well resourced. Also supporting some of the local and youth IGFs as well; supporting in terms of resources, but also ensuring that some of our internal experts take part if you invite them to some of the events. The other thing is, we talked about recruitment, but also actually having programs where we have some young people working within the company and learning what is happening, through internships and placements as well, to ensure that they can bring in some of the things that they learn externally. We're also working with some universities to support their programs around tech degrees, apprenticeships, or courses. So there are quite a few multifaceted ways in which we are working with the youth. But we also know that it's not something that we can do by ourselves. So we need to work with governments, like Madam Suzanne's department, or other organizations who are already entrenched in the processes, and NEPAD as well, who are already doing quite a lot with Agenda 2063 and all these other things.

Christelle Onana: Thank you very much Emilar. We’d like to pause here for now to open the floor to the participants on site but also online if we do have questions before we continue. Maybe we’ll look online first with our online moderator Catherine. Do we have questions?

Catherine Muya: So we have one question from Gahar Nye, who says: greetings from Afghanistan. Could anyone share a sample of the strategy to draw on? So I think it's not particularly about the discussion, but maybe about the strategy, like the one we gave in the description, the AU Data Policy Framework, and the strategies we are developing. But I'm not sure if tech support can allow him to ask his question himself.

Christelle Onana: Maybe we can try to clarify with the participant online, and then come back to you with a more precise question, I would suggest. So we'll take a question on site. Yes, Melody.

Audience: Thank you. So mine is more of a contribution. I think one of the issues you raised was that we want something more practical, something that, if you were to explain it to your grandmother, she would understand. And I think this applies when we are talking about community engagement and capacity building. Can you hear me? I'm going to give an example. I don't work for META, but I'll give an example of WhatsApp. My family lives in rural Zimbabwe, so there is not any form of entertainment at all. So imagine you have been working the fields the whole day, you come home, there is no entertainment. But recently, when I was talking to my mother, she was telling me about a WhatsApp group she had joined. Every week they post a chapter of a novel, and then she sits with her daughter-in-law and they read the novel. So I was thinking that capacity building and community engagement should not be that difficult. It is about finding something that will facilitate engagement with your community. So if it means using a WhatsApp platform, for example, to reach out to so many people and talk about issues of privacy, issues of gender inclusion and access to data, that would be one way. I think something very practical, and a way of actually reaching out to the communities. Yes, I don't work for META, but I think it is quite relevant, and in my context I find it quite useful as well.

Christelle Onana: Thank you very much, Melody, for your contribution. Once more, I think it highlights the need to collaborate, because you were talking about the WhatsApp group, but this has to be initiated, maybe by the local community group, the NGO on the ground, which will have to work with people like us or with the company as such. Any other questions on site? The ladies, no questions? The men, no questions? We come back to our online participant's question. Has it been refined? Maybe we will open it to the panelists if they would like to contribute to it. And it has been drafted. Pete?

Catherine Muya: Greetings from Afghanistan. Could anyone share a sample of the strategy to draw on it? Thank you.

Christelle Onana: Is it an engagement strategy, or? I'd suggest answering it as you understand it, because it's quite vague.

Emilar Gandhi: OK, I will respond, also just drawing, I think, from what Melody just said. To draw up a strategy, you have to have a clearer picture of what success looks like, what it is that you want to do, and then you start working backwards. And the second thing is to know that a strategy, in my opinion, is not something that you finalize and then say, now let me go out there and do what I laid out. You might have a strategy, but you need to fine-tune it as you go. And I'll give you an example: sometimes we are working on a policy at Meta, and a few weeks ago we were working on something around eating disorders. And you think, let's talk to medical professionals who deal with this issue, or psychologists. But then you realize you actually need to talk to young people who might be affected by this, or creators who are creating content around having a certain body type, a certain body image, and all that. So once you do that, once you talk to a few people, you come back and say, you know what, actually, I need to re-look at my strategy. So just to answer him: a strategy, of course, will have a frame. Who is it that you want to talk to? So the identification. First of all, understanding the problem, identifying who it is that you want to talk to, then maybe laying out the different formats of the engagements. Is it going to be in person? Are you going to do virtual engagements? What resources do you need, what budget, or do you not need a budget? The people that you're talking to, are they willing to talk to you? Are they able to talk to you? Or are they willing but unable, because maybe they don't have the time to talk to you? So I think there are quite a few things that you might look at.
And the last thing, I think, is impact measurement: looking at how you measure the results of your engagement.

Christelle Onana: Thank you very much, Emilar. We will resume with the questions. Do you... oh, sorry.

Audience: Hi, can you hear me? Good, good. Good morning, or good afternoon for those online, in case they're tuning in from somewhere in the afternoon. I come from The Gambia, and there was an issue with the FGM bill that was called to be amended. And it caused a riot in the country. A lot of people from the local communities, and from the urban areas as well, came out, for about a week or two; the country was in uproar. They didn't want the bill amended; they wanted the amendment thrown out. And it goes to show how people, when they're concerned about something, when they understand what it is, actually push for it. In my country, for instance, we've had the data protection bill drafted since 2019, and now in 2024, it's been five years. And we don't have that kind of uproar, or that kind of concern from civil society, from students, young people, from academia or anything like that, not at the level there was with the FGM bill, even though this is an important issue as well. I think data is also an important issue. So the context is such that people don't understand data, its importance, why it needs to be protected, and what measures are to be put in place to ensure that there's the inclusive data policy, the inclusive data future, that we're talking about here. So what can the various stakeholders, the policymakers, the youth, the big tech companies, academia, and government do to ensure that people have a deeper and more concrete understanding of data, its importance, and things like that? What can they do, collaboratively or at individual levels as well, to build that understanding within the community?

Christelle Onana: Thank you very much for your question. I believe some of the answers to the question have been given, but I will open the floor again. I'll let the panelists, maybe starting with Suzanne, tell us what a country can do for citizens to be aware of, and sensitized to, such issues, in such a way that they feel concerned, or react when need be. Suzanne, please.

Suzanne El Akabaoui: I’m sorry, I won’t open my video because I have an internet issue. So I’m barely hearing most of what’s happening. If I understand well your question, you’re asking about what governments should do. Could you please repeat the question again?

Christelle Onana: Yeah, so the participant said it's challenging for the population to react to some issues if they are not aware of the subject, if they don't know what's going on. And we all understand, or we all know, that data is very important; data protection and data privacy are important. The question was: what can be done by the different stakeholders to ensure that the population, the citizens, are sensitized to, and aware of, the importance of data management, data protection, data privacy, and data security issues?

Suzanne El Akabaoui: Okay, thank you very much. Let me tell you that the issue with data protection is that we used to have relationships between human beings who see each other. And now, with digital transformation, we are sharing our data with people we don't know. And the fact that we don't know the risks associated with such sharing is the actual problem. Because if we understand the risks of misuse, and the value of personal data, how it has become a very valuable asset to the citizen and to businesses, it then becomes embedded in the culture and inherent to our day-to-day actions. So the role of governments would mainly be to raise awareness on various aspects: raise awareness among citizens about their rights, and raise awareness about the risks associated with the mishandling of personal data. Putting in place a proper taxonomy of risks is important. Having scenarios of the risks associated with the misuse of data, shared with citizens, so that they understand the importance of protecting their personal data, the value of their data, and how to ask what will be done with their data, is important. This is mainly done through education. It takes a long time, because there will be an important cultural shift associated with this. Most African countries are warm countries, and people feel closer when they share their data with each other. Now we are putting them in a context where most services are moving to a digital space, where they don't know what is happening and where this data will end up. So it's important to raise awareness. It's important also to give a lot of responsibility and accountability to the companies, controllers, and processors regarding the importance of properly handling data. Governments should emphasize the value of data as an asset that is worthy of protection, like any other asset a company would have, and that sound security of personal data gives a competitive edge.
People will be more encouraged to deal with those who have sound personal data practices. So cascading methodologies down to controllers and processors, on how to handle the data, how to secure it, and how to see and draw value out of it, will encourage them to implement the practices and internal policies that allow such protection. And in parallel, of course, raising awareness, including in curricula for school students and university students, about the importance of personal data and personal data protection. This is also a multi-stakeholder approach, and the involvement of all stakeholders, including youth, is very important. In Egypt, we work a lot with the Ministry of Youth in trying to find solutions so that they are interested in reading privacy policies and understanding their rights. So it's important that governments work on various pillars to achieve the target and purpose of raising awareness about the risks, the opportunities, and the value of data.

Christelle Onana: Thank you very much, Suzanne. Would any of the panelists like to add to the response? Yes.

Bonnita Nyamwire: Thank you. I would like to add to what Suzanne was explaining. So, raising awareness on risks and benefits, but also, government needs to maintain transparency throughout the whole process, because most of the time you find that citizens lack information, or some information is withheld for reasons that we do not know. So transparency is very key. Then the other one is collaboration: as government raises awareness, it needs to collaborate with other stakeholders. There's academia, there are civil society organizations that work with citizens a lot. So this collaboration is very important. They can raise awareness together with government; where government cannot reach, civil society will reach, academia will reach. Then the other one is that, as they do awareness raising, it should be done on platforms and channels that can reach all the citizens. For instance, I'll give an example of what government in Uganda did when they were introducing the digital ID. We didn't know much about the digital ID; we were just told, oh, you need to go and register, you need a national ID for you to be able to access services. But we were not told: what are the benefits? What are the risks? Why do we need to register? And many people misinterpreted it, because it is an exercise that came at a time when we were nearing elections. And again, they are going to do the same thing: they are going to renew our national IDs when we are nearing elections in 2026, and nothing is being done differently, just like the other time. So, explaining to the citizens why we are doing this. At that time, all of us misunderstood the exercise as wanting to track the voters within the country, because of multi-party politics. So people said, I am not registering. Others gave wrong information. And now people are suffering because of the wrong information they gave during the national ID registration exercise.
So, as I said, transparency is key, involving other stakeholders, but also the different channels. Because if you use a radio, what about my mother in the village who doesn't have a radio? Or if you are going around the city with megaphones, what about those people who are deep down in the village? How will they get the information? So, that's what I can add on that. Thanks.

Christelle Onana: Thank you very much, Bonnita. So we note transparency, collaboration with the different stakeholders, and making sure that the channels and platforms used can reach all the citizens. Anything else to add?

Osei Keja: A quick one. I think the topic is very, very interesting: data policies towards a gender-inclusive data future. I sit here as a man, and I would like to tell all the men here that we are in a position of privilege. The society we live in is deeply patriarchal, and we should not be dismissive, given the positions we find ourselves in in our offices, when these policies are brought to us. We should not be dismissive of what we are talking about. I think that part is often neglected because of how gender and societal norms are. That's what I would say.

Christelle Onana: Thank you very much. Yes, please, sir. May I also suggest that you introduce yourself before you ask the question, so we know who you are? Could you please introduce yourself briefly, and then ask your question? Thank you.

Audience: Okay, I’ll be very quick. I hope I’m audible. Yes. Good morning, everyone. My name is Chris Odu from Nigeria. It’s a very good thing I’m actually in this space listening to all what we’ve been saying in the conversations. And I think I’m here to learn and I want to know. We’ve been talking about these data policies and the rest. And I think over time what I found out is we’re not lacking policies. In fact, we have a good repository of policies, but we always have issues. And I’m speaking from my own primary constituency, which is Africa. We do have issues when it comes to this implementation. Are there actually mechanisms that we can start using? to actually improve how we implement these policies. Because you come up with a good policy, yes, you want to include women and all of that and everything, but two years, three years down the line, it’s the same result. So we’re still repeating the same thing, going around the same cycle. It’s just something we can start doing to improve how we implement these policies. That’s one. The second one, which I have an issue, is data interoperability within Africa. How are we sharing data? How secure is it? Can we even share data amongst ourselves within the African continent? It’s an issue, which I think I want to learn. I want to know more. How can you help with this so that I can take something back home? Thank you.

Christelle Onana: Thank you very much for your question. We’ll take the second one and then we’ll quickly have an attempt to answer them.

Audience: Okay, good morning to all, and good day wherever you are. My name is Peter King, and I am from Liberia. I represent the Liberia Internet Governance Forum. My question goes to the META lady. I heard you talk about the trusted partner program and an inclusive stakeholder strategy. My issue goes to this: with the data META, as one of the global brands, has at its disposal, what measures do you have? And I also want to commend you for the programs that you've sponsored or supported at the African School on Internet Governance. Beyond that, what program or project do you have in mind, in terms of sustainability, that looks at the issue of protection policy for underrepresented groups and underserved countries? I speak also for the Mano River Union region, that is, Liberia, Sierra Leone, Guinea, and Ivory Coast. We do not see a program that affects your users in terms of data, because if you look at it, there are so many things towards which people need to be capacitated. And when you talk about a gender-inclusive data future, how is that future protected, when a lot of the content that makes you the money comes from people who are not even seen by you? That is my issue. Thank you so much.

Christelle Onana: Thank you very much for your questions. So we will attempt to answer all of them before we get kicked out of the room. Yeah, we have exactly five minutes to finish the session. Oh, so do you want me to jump in quickly? Yes, please.

Emilar Gandhi: So we have five minutes, and we can always discuss later. Good to hear from you. We have a team that's responsible for Anglophone countries, and I'll be happy to introduce you to that team as well. Because, yes, local partnerships; the example that I gave is just one of the many things that we are working on. But as you say, I think it's so difficult, just saying these things in these big forums when you are not seeing something at the local level. And it will be great, I think, for you to meet with some of our local teams as well.

Christelle Onana: There was a question about the implementation of policies. One of the participants said he doesn't think that we lack policies, but that we lack their implementation. Would any of the panelists like to take that?

Osei Keja: Yeah, thank you very much. Oftentimes the importance of data protection frameworks, or laws, is misconstrued in some of the policy communication around them. I know for a fact that in 2022 or 2023, and I stand to be corrected, Nigeria passed a data protection act, which is very, very paramount, very, very necessary. But because of the policy communication around it, the average person sees it as: oh, these data frameworks are not necessary. But Africa has come a long way. So far, more than 30 countries have developed frameworks, and it's still a work in progress. And data interoperability is quite a big issue, but I think we are also making significant strides as a continent, Africa. The issue has to be trust. So we need to build on trust, and most importantly, on the policy communication around it. How do people and governments build their trust? How can we exchange data, and all that? But still, I think we are getting somewhere, compared to, say, five or ten years ago. Thank you very much.

Christelle Onana: Thank you, Osei, for your answer. Any other view to complement?

Bonnita Nyamwire: To add to what Osei has said, I think the AU is doing a great job of getting different African countries to comply on data protection, on privacy, and all these other issues. And GIZ is also doing a great job supporting the AU in all these aspects. So, like Osei has said, we are taking baby steps, but we have moved, we are somewhere, and we are continuing to move. Maybe in some 10 or 20 years, we'll be somewhere. Also, African countries are taking benchmarking into consideration and learning from each other. So Rwanda is doing well, and different countries are learning from it, and from the others as well.

Christelle Onana: Thank you very much. Just to re-emphasize what you just said: the work is in progress, we are taking baby steps, but eventually we'll be there. So indeed, we may not be lacking the policies and the regulation. And even in the way we develop them, we are now including implementation plans, which means that we have the intent to have them implemented, domesticated, and potentially enforced. That's one. This is the work that we're doing: we work for a development agency, and our work is to implement the policies that are defined at the Union level. So we are making progress. And just to repeat what has been said during the week: we talk about harmonization. It may be an ideal concept, but we are looking into aligning policies regionally, and aligning them continentally, so there is a projection to have the systems, the technology, all of that, communicate; let's put it this way. So, before we get kicked out of the room, I would like each of my distinguished panelists to say a word to sum up our conversation today. One word. We'll start with Suzanne and Victor online, and then we'll move back to the room.

Bonnita Nyamwire: Thank you. It’s really difficult to say just one word. But I think that the main word I like is education. I believe education is the key to understanding, to securing, to critically think. And it’s important that we keep raising awareness and educating people about their rights, their duties, and responsibilize them to act soundly. Thank you very much.

Christelle Onana: Thank you very much, Suzanne. Victor?

Victor Asila: Yeah, thank you. So I'll summarize it in a sentence or two. We have something we say: we train for the world. One word? Skills. Thank you.

Christelle Onana: Emilar? Collaboration. Thank you. Multi-stakeholder. Thank you. Inclusivity. Thank you. Thank you very much for today. Thank you to our distinguished panelists; thank you for taking the time to be with us for the conversation we had on the topic. This is also how we raise awareness: we talk about it, we discuss it, we say things we have sometimes heard a thousand times. But you know, in French we say, la répétition est la mère de la science: repetition is the mother of science. I would also like to thank all the participants on site. Thank you for your attention and for your participation in the conversation. Have a good day. Bye. I would like to invite the room to have a family picture before we get kicked out of the room. Thank you.

Bonnita Nyamwire
Speech speed: 123 words per minute
Speech length: 1325 words
Speech time: 645 seconds

Data should be representative of all genders and intersecting identities
Explanation: Gender-inclusive data should represent all genders and their intersecting identities such as race, ethnicity, age, education level, and socioeconomic status. This ensures that everyone is captured and no one is left behind in data collection and analysis.
Evidence: Intersectionality reveals injustices and inequalities.
Major Discussion Point: Importance of Gender-Inclusive Data
Agreed with: Suzanne El Akabaoui, Emilar Gandhi, Victor Asila
Agreed on: Importance of inclusive data policies and practices

Need to identify and address biases in data and algorithms
Explanation: Gender-inclusive data work actively identifies and addresses biases in data and algorithms. This is important because biases can lead to skewed or unevenly distributed data, affecting decision-making processes.
Evidence: Bias in algorithms was discussed in a previous plenary session.
Major Discussion Point: Importance of Gender-Inclusive Data
Agreed with: Victor Asila
Agreed on: Addressing bias in data and algorithms

Importance of ensuring data safety, privacy and individual agency
Explanation: Gender-inclusive data should ensure safety, privacy, and agency for individuals. This involves protecting people from harm and exploitation due to data misuse, and allowing individuals and communities to have control over their data.
Major Discussion Point: Importance of Gender-Inclusive Data

Transform data collection processes through capacity building
Explanation: To achieve gender-inclusive data, there is a need to transform data collection processes through capacity building. This includes training on designing data collection tools to capture diverse gender data, and equipping researchers with skills to identify and mitigate biases.
Major Discussion Point: Strategies for Achieving Gender-Inclusive Data
Agreed with: Suzanne El Akabaoui
Agreed on: Importance of education and capacity building
Differed with: Suzanne El Akabaoui
Differed on: Approach to achieving gender-inclusive data

Involve diverse communities in designing data initiatives
Explanation: Achieving gender-inclusive data requires involving and engaging diverse communities in designing and implementing data initiatives. This includes collaborating with women's and feminist organizations to align the goals and processes of initiatives.
Major Discussion Point: Strategies for Achieving Gender-Inclusive Data

Share good practices on collecting and reporting gender data
Explanation: Sharing good practices on collecting and reporting gender data is important, as it allows stakeholders to learn from each other's experiences in gender-inclusive data initiatives.
Major Discussion Point: Strategies for Achieving Gender-Inclusive Data

Suzanne El Akabaoui
Speech speed: 91 words per minute
Speech length: 1698 words
Speech time: 1112 seconds

Governments should develop inclusive policies and regulations
Explanation: Governments need to develop gender-inclusive policies and enforce regulations that address the needs and rights of women and marginalized groups. This includes ensuring that data protection laws are inclusive and consider the unique vulnerabilities of these groups.
Evidence: Egypt's Personal Data Protection Law (Law 151 of 2020) aims to protect personal data and penalize misuse.
Major Discussion Point: Importance of Gender-Inclusive Data
Agreed with: Bonnita Nyamwire, Emilar Gandhi, Victor Asila
Agreed on: Importance of inclusive data policies and practices

Need for education and digital literacy initiatives
Explanation: Governments should provide education and training on digital literacy to empower women and marginalized groups. This includes teaching them about their rights and how to protect personal information online, and encouraging the pursuit of STEM education.
Major Discussion Point: Importance of Gender-Inclusive Data
Agreed with: Bonnita Nyamwire
Agreed on: Importance of education and capacity building
Differed with: Bonnita Nyamwire
Differed on: Approach to achieving gender-inclusive data

Implement privacy-enhancing technologies
Explanation: There is a need to implement privacy-enhancing technologies such as encryption, anonymization, and secure data storage. These technologies protect users' data from unauthorized access and misuse.
Major Discussion Point: Strategies for Achieving Gender-Inclusive Data

Ensure transparency and accountability in data practices
Explanation: Regulations should require companies to be transparent about their data practices and hold them accountable for any misuse of data. This includes providing for regular audits and impact assessments to ensure compliance with privacy standards.
Major Discussion Point: Strategies for Achieving Gender-Inclusive Data

Emilar Gandhi
Speech speed: 162 words per minute
Speech length: 2374 words
Speech time: 875 seconds

Need to ensure inclusivity by design in products and policies
Explanation: Tech companies should prioritize inclusivity when designing products and policies. This means considering inclusion from the start of the development process, not as an afterthought.
Major Discussion Point: Role of Technology Companies
Agreed with: Bonnita Nyamwire, Suzanne El Akabaoui, Victor Asila
Agreed on: Importance of inclusive data policies and practices

Importance of hiring people from underrepresented groups
Explanation: Tech companies should hire people from underrepresented groups to ensure diverse perspectives in product and policy development. This is important because lived experiences are crucial in designing inclusive products and policies.
Evidence: Meta hires people with inclusion in mind and prioritizes professional development to retain diverse talent.
Major Discussion Point: Role of Technology Companies

Value of stakeholder engagement and trust-building
Explanation: Stakeholder engagement is crucial for tech companies, going beyond outreach to focus on relationship- and trust-building. This is particularly important in addressing the trust deficit between tech companies and users in certain parts of the world.
Evidence: Meta has a trusted partner program with 400 organizations globally.
Major Discussion Point: Role of Technology Companies

V

Victor Asila

Speech speed

116 words per minute

Speech length

1213 words

Speech time

622 seconds

Opportunity to use big data for gender-specific insights

Explanation

Big data analytics provide an opportunity to uncover nuanced patterns and trends related to gender. This can help identify gender disparities and areas that need intervention.

Evidence

At Safaricom, data is used to tailor products that address gender-specific issues

Major Discussion Point

Role of Technology Companies

Agreed with

Bonnita Nyamwire

Suzanne El Akabaoui

Emilar Gandhi

Agreed on

Importance of inclusive data policies and practices

Need for algorithmic audits to prevent bias

Explanation

There is a need for algorithmic audits to prevent bias in AI models and algorithms. This involves implementing policies and practices to ensure that algorithms are fair and equitable.

Evidence

Safaricom has policies requiring data scientists to conduct algorithmic audits to prevent bias

Major Discussion Point

Role of Technology Companies

Agreed with

Bonnita Nyamwire

Agreed on

Addressing bias in data and algorithms
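The algorithmic audits referenced here can start with a simple check of decision rates across groups. The sketch below is a hypothetical illustration of one common fairness metric (the demographic parity gap), not a description of Safaricom's actual audit process; the function name and sample data are invented for the example.

```python
# Hypothetical illustration of one check an algorithmic audit might run:
# the demographic parity gap, i.e. the largest difference in
# positive-decision rates between groups. Names and data are invented.
def demographic_parity_gap(decisions, groups):
    """Return the largest gap in positive-decision rates across groups.

    decisions: iterable of 0/1 model outcomes (e.g. loan approvals)
    groups:    iterable of group labels aligned with decisions
    """
    rates = {}
    for label in set(groups):
        outcomes = [d for d, g in zip(decisions, groups) if g == label]
        rates[label] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# Toy data: group A is approved 75% of the time, group B only 25%.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.5
```

An audit policy along these lines might flag any model whose gap exceeds a chosen threshold for human review before deployment.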

O

Osei Keja

Speech speed

158 words per minute

Speech length

1128 words

Speech time

427 seconds

Youth often left out of policy conception and implementation

Explanation

Young people are often excluded from the conception and implementation stages of policy development. They are often seen as an afterthought rather than being included from the beginning of the process.

Major Discussion Point

Youth Involvement in Data Governance

Need for shared vision and continuous learning

Explanation

There is a need for a shared vision and continuous learning in policy development and implementation. This involves system thinking and benchmarking from other experiences to improve policy outcomes.

Major Discussion Point

Youth Involvement in Data Governance

A

Audience

Speech speed

163 words per minute

Speech length

1048 words

Speech time

384 seconds

Lack of public awareness about data protection importance

Explanation

There is a lack of public awareness about the importance of data protection and privacy. This makes it challenging for the population to react to data-related issues or policies.

Evidence

Example of The Gambia where there was public uproar about an FGM bill but not about the data protection bill

Major Discussion Point

Challenges in Policy Implementation

U

Unknown speaker

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 second

Need for transparency and collaboration in policy communication

Explanation

There is a need for transparency and collaboration in communicating policies to the public. This involves working with various stakeholders and using diverse channels to reach all citizens.

Evidence

Example of Uganda’s digital ID implementation where lack of clear communication led to misunderstandings

Major Discussion Point

Challenges in Policy Implementation

Importance of contextualizing approaches for different regions

Explanation

It’s important to contextualize engagement approaches for different regions and stakeholders. This involves understanding local needs and preferences in communication and engagement strategies.

Major Discussion Point

Challenges in Policy Implementation

Progress being made but still work to be done on implementation

Explanation

While progress is being made in developing data protection frameworks in Africa, there is still work to be done on implementation. Trust-building and effective policy communication are key challenges.

Evidence

Over 30 African countries have developed data protection frameworks

Major Discussion Point

Challenges in Policy Implementation

Agreements

Agreement Points

Importance of inclusive data policies and practices

Bonnita Nyamwire

Suzanne El Akabaoui

Emilar Gandhi

Victor Asila

Data should be representative of all genders and intersecting identities

Governments should develop inclusive policies and regulations

Need to ensure inclusivity by design in products and policies

Opportunity to use big data for gender-specific insights

Speakers agreed on the need for inclusive data policies and practices that represent all genders and intersecting identities, from government regulations to product design in tech companies.

Addressing bias in data and algorithms

Bonnita Nyamwire

Victor Asila

Need to identify and address biases in data and algorithms

Need for algorithmic audits to prevent bias

Both speakers emphasized the importance of identifying and addressing biases in data and algorithms, with Victor Asila specifically mentioning algorithmic audits as a method to prevent bias.

Importance of education and capacity building

Bonnita Nyamwire

Suzanne El Akabaoui

Transform data collection processes through capacity building

Need for education and digital literacy initiatives

Both speakers highlighted the need for education and capacity building to improve data collection processes and empower marginalized groups in the digital space.

Similar Viewpoints

These speakers all emphasized the importance of engaging with diverse communities and stakeholders in the development of data policies and initiatives.

Bonnita Nyamwire

Suzanne El Akabaoui

Emilar Gandhi

Involve diverse communities in designing data initiatives

Need for education and digital literacy initiatives

Value of stakeholder engagement and trust-building

Unexpected Consensus

Recognition of progress in African data protection frameworks

Osei Keja

Bonnita Nyamwire

Progress being made but still work to be done on implementation

AU is doing a great job on getting different African countries to comply on data protection

Despite the focus on challenges, there was unexpected consensus on the progress being made in developing data protection frameworks in Africa, with both speakers acknowledging advancements while noting ongoing implementation challenges.

Overall Assessment

Summary

The main areas of agreement included the importance of inclusive data policies, addressing bias in data and algorithms, the need for education and capacity building, and the value of stakeholder engagement.

Consensus level

There was a moderate to high level of consensus among the speakers on the key issues discussed. This consensus suggests a shared understanding of the challenges and potential solutions in creating gender-inclusive data policies, which could facilitate more coordinated efforts in addressing these issues across different sectors and stakeholders.

Differences

Different Viewpoints

Approach to achieving gender-inclusive data

Bonnita Nyamwire

Suzanne El Akabaoui

Transform data collection processes through capacity building

Need for education and digital literacy initiatives

While both speakers emphasize education, Bonnita Nyamwire focuses on transforming data collection processes through capacity building, while Suzanne El Akabaoui emphasizes broader digital literacy initiatives.

Overall Assessment

Summary

The main areas of disagreement were subtle and primarily focused on different approaches to achieving similar goals in gender-inclusive data practices.

Difference level

The level of disagreement among speakers was relatively low. Most speakers shared similar views on the importance of gender-inclusive data and the need for education and awareness. The differences were mainly in the specific strategies and focus areas each speaker emphasized, which could be seen as complementary rather than contradictory approaches.

Partial Agreements

Both speakers agree on the importance of including diverse perspectives, but Bonnita Nyamwire focuses on community engagement in data initiatives, while Emilar Gandhi emphasizes hiring practices within tech companies.

Bonnita Nyamwire

Emilar Gandhi

Involve diverse communities in designing data initiatives

Importance of hiring people from underrepresented groups

Takeaways

Key Takeaways

Gender-inclusive data is crucial and should represent all genders and intersecting identities

There is a need to identify and address biases in data collection, algorithms, and technology design

Governments should develop inclusive policies and regulations while promoting digital literacy

Technology companies have a responsibility to ensure inclusivity by design in their products and policies

Youth involvement in data governance is important but often lacking in policy conception and implementation

Progress is being made on data protection policies in Africa, but implementation remains a challenge

Resolutions and Action Items

Transform data collection processes through capacity building

Involve diverse communities in designing data initiatives

Share good practices on collecting and reporting gender data

Implement privacy-enhancing technologies

Ensure transparency and accountability in data practices

Hire people from underrepresented groups in technology companies

Conduct algorithmic audits to prevent bias

Unresolved Issues

How to effectively implement existing data protection policies

How to improve data interoperability within Africa

How to ensure sustainable programs for underrepresented groups in different African regions

How to measure the impact of community engagement efforts

Suggested Compromises

Balancing the need for data collection with privacy concerns through education and transparency

Collaborating across different stakeholders (government, private sector, civil society, academia) to address data governance challenges

Contextualizing approaches for different regions while working towards continental alignment of policies

Thought Provoking Comments

A gender-inclusive data is one that is representative of all genders. It also is representative of their intersecting identities. By intersecting identities, I mean like race, ethnicity, their age, educational level, socioeconomic status, geographical location, so that everyone is captured and no one is left behind.

speaker

Bonnita Nyamwire

reason

This comment provides a comprehensive definition of gender-inclusive data that goes beyond just gender to include other important demographic factors. It highlights the complexity and intersectionality involved in truly inclusive data.

impact

This set the tone for a more nuanced discussion about what gender-inclusive data really means and the many factors that need to be considered. It broadened the conversation beyond just male/female to consider multiple dimensions of identity.

We need to have a shared vision. So, from the conception stage to the implementation stage, we know where we are going so that the young people may be bought into the idea.

speaker

Osei Keja

reason

This comment emphasizes the importance of including youth from the very beginning of policy development, rather than as an afterthought. It challenges the typical top-down approach.

impact

It shifted the discussion to focus more on how to meaningfully involve youth throughout the entire process of developing and implementing data policies. Other panelists began to discuss more concrete ways to engage young people.

Considering the point that he mentioned to be embarked from the beginning, having the vision as. Yes, yes. So what have we done with the youth? Quite a number of initiatives.

speaker

Emilar Gandhi

reason

This comment directly responds to and builds on the previous point about youth involvement, demonstrating active listening and engagement between panelists.

impact

It moved the conversation from theoretical ideas about youth involvement to concrete examples of initiatives, providing more practical insights. It also modeled how panelists could engage with and build on each other’s points.

I sit here as a man, and I would like to tell all the men here that we are in a position of privilege. In this society we live is deeply patriarchal and we should not be very dismissive in terms of the position we do find ourselves in our offices when these policies are brought to us.

speaker

Osei Keja

reason

This comment brings attention to the role of men in addressing gender inequality, acknowledging privilege and calling for men to be more engaged in gender-inclusive policies. It’s a powerful statement coming from a male panelist.

impact

This shifted the conversation to consider the role and responsibility of those in positions of privilege in creating more inclusive data policies. It added a layer of self-reflection to the discussion.

Overall Assessment

These key comments shaped the discussion by broadening the understanding of gender-inclusive data beyond simple gender binaries, emphasizing the importance of youth involvement from conception to implementation of policies, providing concrete examples of initiatives, and highlighting the role of those in positions of privilege. The discussion evolved from theoretical concepts to more practical considerations and self-reflection on the roles different stakeholders play in creating inclusive data policies. The interplay between panelists, building on each other’s points, led to a richer, more nuanced conversation that touched on multiple aspects of the complex issue of gender-inclusive data policies.

Follow-up Questions

How do governments practically work with companies to ensure transparency about their data processes?

speaker

Christelle Onana

explanation

This question addresses the practical implementation of data transparency policies, which is crucial for effective data governance.

How do we track inclusive technologies at the national level?

speaker

Christelle Onana

explanation

Understanding how to measure and monitor the inclusivity of technologies is important for ensuring equitable access and use of data.

What do governments do with research from academia regarding data policies?

speaker

Christelle Onana

explanation

This question explores the connection between academic research and policy implementation, which is vital for evidence-based policymaking.

How often do engagements with communities happen, and how is their impact measured?

speaker

Christelle Onana

explanation

Understanding the frequency and effectiveness of community engagements is crucial for ensuring that data policies are responsive to community needs.

What can the various stakeholders (policymakers, youth, big tech companies, academia, and government) do to ensure that people have a deeper and more concise understanding of data, its importance, and related issues?

speaker

Audience member from The Gambia

explanation

This question addresses the need for widespread data literacy, which is essential for informed public participation in data governance.

Are there mechanisms that can be used to improve the implementation of data policies?

speaker

Chris Odu from Nigeria

explanation

This question focuses on the critical issue of policy implementation, which is often a challenge in many African countries.

How are African countries sharing data among themselves, and how secure is this data sharing?

speaker

Chris Odu from Nigeria

explanation

This question addresses the important issue of data interoperability and security within the African continent.

What programs or projects does Meta have for sustainability that address the issue of protection policies for underrepresented groups and underserved countries?

speaker

Peter King from Liberia

explanation

This question explores the role of large tech companies in ensuring data protection for vulnerable populations in developing countries.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #18 World Economic Forum – Building Trustworthy Governance

Open Forum #18 World Economic Forum – Building Trustworthy Governance

Session at a Glance

Summary

This panel discussion focused on the future of the internet and the development of digital technologies, exploring regulatory, ethical, and practical considerations. Participants emphasized the importance of building a global infrastructure to support emerging technologies like AI and the metaverse. They discussed the need for adaptable, interoperable regulations that promote digital connectivity while respecting data privacy and security.

The conversation highlighted Greece’s digital transformation journey, showcasing how investment in digital public infrastructure can lead to economic growth and improved governance. Panelists stressed the importance of creating regulatory frameworks that are flexible enough to keep pace with rapidly evolving technologies while addressing cross-border challenges and accountability issues.

Ethical considerations for the private sector were explored, with emphasis on integrating ethical principles into product development and building user trust. The discussion touched on data stewardship and sovereignty, noting the tension between maintaining national digital sovereignty and preventing internet fragmentation. Participants agreed on the need for collaborative, multi-stakeholder approaches to governance that prioritize user privacy, security, and consent.

The panel also addressed the importance of cultural engagement in new digital spaces and the challenges posed by evolving hardware standards. They concluded by emphasizing that all stakeholders have an active role in shaping the future internet, and that a principled approach focusing on user needs and economic opportunities is essential for positive development.

Keypoints

Major discussion points:

– The importance of building trust, transparency and user control into emerging internet technologies and platforms

– The need for adaptable and interoperable regulatory frameworks that can keep pace with rapid technological change

– The role of digital public infrastructure in enabling economic growth and improved governance

– Balancing data sovereignty with the need for global data flows and interoperability

– Ethical considerations and accountability in AI and other emerging technologies

Overall purpose:

The discussion aimed to explore key considerations for shaping the future of the internet and digital technologies in a way that promotes trust, economic opportunity, and good governance while addressing potential risks and challenges.

Tone:

The tone was largely collaborative and optimistic, with panelists from different sectors sharing perspectives on how to responsibly develop emerging technologies. There was a sense of shared purpose in wanting to create a better internet future, even while acknowledging complexities and challenges. The tone became more action-oriented towards the end, with calls for active participation in shaping the future of the internet.

Speakers

– Judith Espinoza: Governance Specialist, World Economic Forum (Moderator)

– Hoda Al Khzaimi: Advisor to multiple industries and companies

– Brittan Heller: Senior Fellow of Technology and Democracy, Atlantic Council

– Robin Green: Representative from Meta

– Apostolos Papadopoulos: Chief Technology Officer, Hellenic Republic of Greece

Additional speakers:

– Audience: Representative from Digital Impact Alliance (DIAL)

Full session report

The Future of the Internet: Navigating Emerging Technologies and Governance Challenges

This panel discussion, moderated by Judith Espinoza, brought together experts from various sectors to explore the future of the internet and the development of digital technologies. The conversation focused on regulatory, ethical, and practical considerations for shaping a digital landscape that promotes trust, economic opportunity, and good governance while addressing potential risks and challenges.

Key Themes and Discussion Points

1. Emerging Technologies and Their Impact

The panelists emphasized that the future internet will be shaped by a constellation of emerging technologies, including artificial intelligence (AI), extended reality (XR), blockchain, and quantum computing. Judith Espinoza highlighted that AI should be viewed as an enabler for other technologies rather than a standalone product. This perspective underscores the interconnected nature of technological advancements and their collective impact on the digital landscape.

The discussion touched upon the need for a global infrastructure to support these emerging technologies, with particular emphasis on the development of the metaverse. Panelists agreed that building trust, transparency, and user control into these platforms is crucial for their successful integration into society.

2. Regulatory Frameworks and Governance

A significant portion of the discussion centered on the need for adaptable and interoperable regulatory frameworks that can keep pace with rapid technological change. Robin Green, representing Meta, stressed the importance of technology-neutral legal frameworks that can evolve alongside innovations. This view was echoed by Brittan Heller, who emphasized the need for cross-border regulation and coordination for effective internet governance.

The panel highlighted the challenges of balancing data sovereignty with the need for global data flows and interoperability. Robin Green argued for the importance of maintaining an open, interoperable internet while respecting national digital sovereignty concerns. Hoda Al Khzaimi emphasized the importance of respecting legal sovereignty rights when developing technology regulations across different jurisdictions.

The panelists agreed on the need for adaptable and flexible governance frameworks, with Hoda Al Khzaimi suggesting sandboxing approaches for developing regulations for emerging technologies.

3. Digital Public Infrastructure and Economic Growth

Apostolos Papadopoulos, representing the Greek government, shared insights from Greece’s digital transformation journey. He provided specific examples and statistics, such as the implementation of a national digital identity system, which led to a 25% increase in digital service adoption. The country also saw a significant reduction in bureaucratic processes, with 94% of public services now available online. This real-world example illustrated the potential benefits of embracing digital technologies at a national level.

The panel agreed that digital public infrastructure, including payment systems and digital identity, serves as a crucial pathway for connection and economic opportunity. Judith Espinoza emphasized the alignment of interests between users, human rights advocates, and economic development stakeholders in building a robust digital ecosystem.

4. Trust, Ethics, and User-Centric Design

Hoda Al Khzaimi stressed the importance of incorporating ethical considerations into product functionality from the outset. She advocated for transparency and accessibility in AI algorithms, as well as the implementation of user-centric dashboards that clearly show how personal data is being used and processed. Al Khzaimi also highlighted the need for a single source of truth in trust stack guidelines.

Robin Green echoed these sentiments, highlighting Meta’s commitment to responsible innovation principles that focus on user trust and safety. He provided practical examples of how these principles are applied, such as implementing privacy-by-design features and conducting regular human rights impact assessments. Green also emphasized the importance of accessibility in technology design.

5. Challenges and Opportunities in the Digital Age

The discussion touched upon potential risks associated with emerging technologies, including increased surveillance capabilities and the erosion of privacy. Brittan Heller raised concerns about accountability and transparency in automated systems, emphasizing the need for robust safeguards.

The panel explored the evolution of consent mechanisms for new computing platforms, recognizing that traditional models may not be sufficient in immersive or AI-driven environments. Brittan Heller highlighted the potential loss of cultural engagement spaces in the next iteration of the internet and stressed the importance of hardware floor considerations in emerging technologies like XR.

Hoda Al Khzaimi pointed out the potential of government technologies as a growing industry, suggesting opportunities for innovation in this sector.

6. Multi-stakeholder Approach to Governance

A key takeaway from the discussion was the importance of a collaborative, multi-stakeholder approach to internet governance. The panelists agreed that all stakeholders—including governments, private sector entities, civil society organizations, and users—have an active role in shaping the future internet.

The discussion also touched on the challenges faced by developing nations, with Ibrahim raising a question about how African countries can develop data governance frameworks.

Unresolved Issues and Future Directions

While the panel reached consensus on many points, several unresolved issues emerged:

1. Effectively balancing data sovereignty with cross-border data flows

2. Addressing potential increased surveillance and privacy erosion in new technologies

3. Resolving hardware floor issues in emerging technologies like XR

4. Evolving consent mechanisms for new computing platforms

5. Ensuring accessibility and inclusivity in the future internet across different regions and demographics

6. Developing appropriate data governance frameworks for developing nations

The discussion concluded with a call for continued dialogue and collaboration among stakeholders to address these challenges. The panelists emphasized the need for a principled approach that focuses on user needs, economic opportunities, and ethical considerations in shaping the future of the internet.

In summary, this thought-provoking discussion highlighted the complex interplay between technology, regulation, user rights, and societal values in the digital age. It underscored the need for adaptable frameworks, trust-building mechanisms, and the preservation of cultural spaces as we navigate the evolving landscape of the internet and emerging technologies.

Session Transcript

Robin Green: changing, but in order to make this happen, it’s going to be essential to have the global infrastructure that supports it. Data centers are a great example of some of the kinds of infrastructure that we’re going to need, but in order to really, I’m so sorry, I think some people online couldn’t hear me. In order to grow that infrastructure, it’s going to be really important that we have a regulatory and legal environment that supports it. This means having globally predictable, interoperable, and adaptable regulations that promote digital connectivity and really bridge the digital divide, and that promote data flows and secure communications like encryption of data in transit.

Judith Espinoza: I really appreciate what you said about AI always being part of these technologies, right? I think it’s easy, perhaps from a consumer perspective, to look at things as siloed developments, but as we move into the next phase of internet, we can see that none of this is developed on its own. These are things that have to go together. AI is an enabler for lots of these technologies, but it’s not a product on its own, so I think this is perfect. With that, I also want to share, part of the way that at the forum we’re envisioning the future of the internet is that these are digital intermediaries for connection, whether it be through social media, whether it be to commerce, whether it be to health, gen AI, you name it, right? And one of those ways, one of those pathways forward is through digital public infrastructure. So the way that people can connect to each other, also economic opportunity, growth. And with that, I want to turn to Apostolos, and I want to ask you, how has Greece advanced the next iteration of the internet experience through digital public infrastructure? How are you developing DPI in Greece, and what are some of the, maybe, the governance opportunities that that presents, right? DPI as an enabler of good governance, as a means of connection.

Apostolos Papadopoulos: Thank you very much for your question, and I’m excited to be here. So in the Greek context, I think, the digital transformation journey of the country is in two stages, in two phases. We’re currently in a stage where we are doing a lot more work in AI and working with emerging technologies, and you were talking about experiences, and I think the permeating word that would delineate this would be trust and directness and transparency. So citizens would like to interact with governments and to have a direct and easy way to do that. And a way to do that in a way. So currently, we’re doing a lot of work in AI. We are doing work in LLMs, where we created a government chatbot, so citizens can interact with the government portal and figure out easy ways to interact with every service and have access to digital services. We’re doing work in AI and education with digital tutoring and homework assignments. So in this phase, we’re investing a lot in new and emerging digital public infrastructure, emerging technologies. The first phase that allowed us to do that starts in 2019 with the creation of a digital ministry, digital transformation ministry, and that was because up to that point, some of that did not exist in Greece, and that created the baseline for the second phase to be able to be executed. So from 2019 to 2023, there’s been a digital tiger leap, as people have called it, in the sense that digital adoption was very low in Greece. 2018, we had 8 point something million digital transactions in total. Greece is a country of about 10 million people, so it’s a very low number of adoption. But 2023 ended with 1.4 billion. So if you chart that, it’s exponential growth, both in terms of supply as well as demand. So this stage, this first stage, created the regulatory framework, the engineering framework, the platforms for us to be able to go in the second phase and do more work with emerging technologies. 
And the regulatory framework, speaking of that, is a crucial layer of this stack. So you have to have common sense, light touch approaches, regulation, people can trade both internal and inside the government, as well as external partners. And overall, I would say, DPI in Greece currently is very much a given, and digital is considered something that is, you know, by default.

Judith Espinoza: People and businesses expect of the government. Thank you so much. I want to follow up with one more question for you. You’re talking about exponential growth and usership, and in following this model, do you think you see this as an essential way, I guess, for also for financial growth for the country, right? You’re connecting, it’s peer-to-peer, it’s also services-to-peer, and also, I guess, for businesses as well. How do you see this growth?

Apostolos Papadopoulos: Yes, very positively. One of the deliverables of this approach has been $2.5 billion in investment in FDI. We have, we are the, Greece is the only European member state, the only European member state, along with Poland, a major high-risk country. So, okay. Can you hear me better now? Perfect. Okay, great. All right, thank you. Sorry. So, FDI is a crucial part of this equation. Can you hear me better? Yes, that is fantastic. Okay. So, we had a microphone problem. So, I was just saying, FDI is crucial, and it is a direct byproduct of the strategy, and of the execution of the strategy. So, the Greek government has been working with international and local partners, and there has been a great synergy between all the stakeholders, and both in terms of job growth, as well as in terms of investments, has been a very positive story so far. Thank you so much.

Judith Espinoza: With that, there is an interesting narrative that we are starting to weave here: investment leads to growth, and growth leads to opportunity. And that builds good governance, right? This is an opportunity to build better governance and better trust among stakeholders. With that, I really want to pivot now to Brittan. We are talking about the Internet evolving, and as these technologies evolve, I wonder: what do you think are the core regulatory and policy obstacles that we must overcome to really make a better Internet? What have we done wrong? Where can we do things better? And are there any new risks that you think regulators should be paying attention to? Thank you.

Brittan Heller: Can you all hear me? Great. I teach international law and AI regulation, and I have worked in emerging technologies for about eight years now. So I’m going to give you the conclusion first. The conclusion is that emerging technologies are a constellation, and if your regulatory approach focuses on one aspect in lieu of the others, you’re going to miss the bigger picture. You have to think about the way that AI will interact with immersive technologies, with new payment systems like blockchain, and with the new petrol of the Internet, quantum computing, and see how all of those systems will feed off and interact with each other, and how existing law may not be a clean fit for these new technologies. There are four things that I think can be valuable when you’re trying to figure out this puzzle of whether your existing law will fit, and what needs to be addressed first in a regulatory regime. The first obstacle is ensuring that these regimes, which were designed primarily in the late 1990s and early 2000s, are adaptable enough to keep pace with the rapid evolution of these technologies. One example I work a lot on is virtual reality, or XR systems. We put on a conference at Stanford Law School last year called Existing Law and Extended Reality, because you can’t just take laws formulated for 2D computing, put them into 3D spaces, and expect that they’re going to work the same way. Think about the way you formulate jurisdiction, or the way privacy concerns operate in a technology that is different from your laptop because it has sensors that must reach out into the environment to calibrate your devices. Your privacy looks different when it’s not just based on the words going in and out of servers, when it’s actually location-based and based on your biometric data. So looking at that: how adaptable is your legal system?
Second is the question of cross-border regulation, and I know I sound like I’m coming straight at you from 2006, but it’s a very important issue. When you look at all of this, look at it, all puns intended, as a second bite at the apple. All of the things you wish could be different about the way internet governance works and manifests in your jurisdiction, in your company, in your stakeholder group: you have a chance to do it differently this time. Take that opportunity. So look at how data protection laws align with regulations in other parts of the world so we don’t create another fragmentary regulatory landscape, and ask how you create the coordination necessary to make this work across different countries. Third is the question of accountability and transparency. As we rely more on automated systems, the question of who is responsible when something inevitably goes wrong becomes much more complicated. So when I evaluate AI regulatory regimes, it’s not just the robustness or strength of the laws that I look at; it’s the actual enforceability of those regulations. Laws that are cut and pasted from one country into another legal context may not have the same impact on the ground and in the business sector, because of the way your corporate laws are structured. You can’t expect the same results by cutting and pasting. And finally, in terms of new risks, one of the most pressing concerns is the potential for increased surveillance and erosion of privacy. As these technologies evolve, they enable more granular tracking and profiling of individuals, often without knowledge or consent. And in new technologies where AI grows legs and walks out into the world amongst us, you need this type of information to calibrate the device.
So your conception of privacy, of consent, of freedom of information, all of these things need to shift from the kind of understanding you see embedded in earlier generations of laws. Overall, regulators need to think about these risks on a broad scale, focusing on fundamental rights while fostering innovation. And the nice thing about these new ecosystems is that what is good for users is also good for human rights. So they don’t have to develop at odds with each other when you’re creating these systems anew. Thank you.

Judith Espinoza: Thank you so much. And I think this is a perfect segue to you, Dr. Hoda. We’ve heard now what those policy gaps are. This is governance and policy, but from your perspective, you’ve advised multiple industries and multiple companies. What do you think the most important ethical considerations are for the private sector when developing these technologies? How can this be built in a trustworthy way? And then, we always talk about trust at the forum, right? We want to talk about how you build trust with users and with society at large. But what are the metrics, then, to know that something is trustworthy? We can all say that something is trustworthy, but how can we prove that trust is there, whether at the government level or at the product level?

Hoda Al Khzaimi: I think one of the most important challenges facing the private sector is how to bring the ethical stack, the trust component, into the functionality of the final product that you’re putting into the market. We have talked about several trust frameworks that exist internationally, with the OECD, with the UN, and with the World Economic Forum’s TASSI framework, which mostly addresses accountability, transparency, security, inclusivity, and interoperability. But when you look at the technology being produced in the market today, you don’t see that kind of holistic deployment of ethical components across the technology stack. So how we can encourage that at the algorithmic level is very important. And I think right now, in 2024, when my research group tries to publish at any top-tier AI conference, what I see as very positive is that they encourage you to make sure that your algorithm is accessible and that transparency is available in the system. That’s quite important, because then you start changing the system, and you don’t get access to publication unless you do that. I would like to see this kind of requirement not just at the algorithmic level but at the platform level as well. When we talk about current social media platforms, for example, we don’t see the same level of transparency. I’m not talking about annual or periodic reporting, but the kind of dynamic, at-your-fingertips ethical transparency that tells you who used your data and for what purpose, the kind of end-user dashboard that should exist for every user.
And in the research space, we do a lot to improve security, and a lot on privacy: zero-trust systems, homomorphic encryption, federated learning, these big tools that sometimes take us years to develop in order to bring trustworthiness, reliability, and security into the technology. But we do not always see them used or transformed into the product cycle. So that’s concerning on the holistic map. I think the period from 2025 to 2030 will be when we perfect this transitioning of ethical components into technology. That’s the first challenge that we see across the map. The second challenge is to understand that bringing ethical and trustworthy digital solutions into a platform is a multi-layer-stack kind of challenge. You’re not dealing only with the technology or with the ethical stack; you’re also dealing with regulatory aspects and with the harmonization efforts that exist across the globe. You’re dealing with how we should write these into policies and regulations that would bring data acts into action in different jurisdictions. Respecting the indigenous differences of those jurisdictions is very important because, as Brittan just said, activating laws in one jurisdiction is quite different from doing so in another. We should respect that, and we should allow those legal sovereignty rights, the right to develop one’s own technology law, to exist across different markets. So this is the second challenge that I see.
And it worries me at the moment that everybody is looking at, for example, the EU AI Act as the grand flagship regulation to be used across jurisdictions. It is not going to work the same everywhere, because it is a risk-oriented framework of legislation that might not work for Asian countries, for example, which are much more concerned with a value- and principle-based approach, and which want that approach translated into the platform. So the interpretation of ethics and legislation into the platform is very important. And your second question was: what do we have to include when we are talking about the trust stack across the board? If you ask a technology-oriented person, the answers will be different than if you ask a legal entity, or a stakeholder coming from the policy framework or from the industrial implementation framework. In my opinion, the first thing we should have is a single source of truth: a governance structure and best-practice guidelines that tell you what a trust stack should include at its different layers. To me, the first layer is the ability to give users accessibility to their data, visibility of the data transactions that exist across the map, and authentication of who actually accessed those data transactions at the different layers of the mapping. This is something we have had in conversation in research communities, and in industrial communities as well, since 2009, because we had this massive technological crisis where users woke up one day and realized that they want ownership of their own data. And, as you have said, and as our colleague from Greece just highlighted, accessibility to data and to the data market is considered an economy by itself today.
The work we’re seeing around DPIs and government technologies, the new wave of technologies we are going to see through 2030, is projected to amass to over 6 trillion US dollars. So it’s a huge industry being developed on the back of the data provided by citizens. How can we make sure that this first layer, providing accessibility to those data in a secure manner, is available to users? The second layer is the security stack, and this is what we already have; I think we’ve done quite rigorous work around it. We just have to perfect the adaptability of that security stack to different platforms, especially if we’re talking about the metaverse or about real-time transactions; then we need to make sure they are fast enough, and light enough in operation, to be computed on different devices. And the third layer is the layer of legislation and regulation because, and we have discussed this several times across the map, I just want to reiterate it for people who don’t understand: legislation takes time. Legislation takes a cycle of three years, or more than three years in certain jurisdictions, to take effect. And technology development is not waiting for legislation to be passed. We see new AI models being deployed and pushed across markets, so how can we protect users through legislation if we cannot produce something faster than the current cycle?
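Dr. Al Khzaimi’s first trust layer, giving users visibility into who accessed their data and for what purpose, can be sketched very simply. The following is an illustrative sketch only, with all names invented for the example (it is not any existing government or platform API): an append-only per-user access log backing the kind of end-user dashboard she describes.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AccessEvent:
    accessor: str       # which service or agency touched the data
    purpose: str        # declared purpose of the access
    data_category: str  # e.g. "identity", "health", "payments"
    timestamp: datetime

@dataclass
class UserDataLedger:
    """Append-only per-user log backing a 'who used my data' dashboard."""
    events: list[AccessEvent] = field(default_factory=list)

    def record(self, accessor: str, purpose: str, data_category: str) -> None:
        # Every access is logged; the log is never edited, only appended to.
        self.events.append(AccessEvent(accessor, purpose, data_category,
                                       datetime.now(timezone.utc)))

    def dashboard(self) -> list[str]:
        # Human-readable rows, newest first.
        return [f"{e.timestamp:%Y-%m-%d %H:%M} UTC: {e.accessor} "
                f"accessed {e.data_category} data for '{e.purpose}'"
                for e in reversed(self.events)]

ledger = UserDataLedger()
ledger.record("tax-authority", "annual filing pre-fill", "identity")
ledger.record("health-portal", "appointment booking", "health")
print("\n".join(ledger.dashboard()))
```

In a real deployment the log would of course be tamper-evident and queried through an authenticated portal; the point here is only the shape of the data a dynamic transparency dashboard needs.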

Judith Espinoza: It’s very important. I want to come back to a couple of your points, especially on data and open-source modeling, but in the interest of time, I want to open this up now to the audience. I want to see if anyone has any questions. We can pass the microphone around, and I’m also going to ask that we monitor the chat online to see if there are questions. But we have a question over here; I can pass you the mic. Please tell us your name and where you’re from, and please address your question. Sure, fantastic, thank you.

Audience: My name is Ibrahim, and I’m from the Digital Impact Alliance, DIAL. We work on supporting countries in Africa to develop data governance frameworks that are in line with, and up to the standard of, global best practices. Now, with that in mind, and Dr. Hoda, I’m looking at you for this question: Brittan said this legal framework development is a second bite at the apple, which I think is quite exciting. But for countries in Africa, which are latecomers to this digital governance space, and with the fast-paced development of technologies that consume and ingest, but at the same time produce, a whole lot of data, how do you advise them to deal with private sector actors at this point in time? How do they build enabling, supportive legal frameworks that do not stifle innovation, but at the same time create the ability to drive value out of engagement with the private sector?

Hoda Al Khzaimi: Ibrahim, right? Thank you so much for the question. The first thing I would say is that Africa is not a latecomer to this conversation, because Africa itself produced one of the first payment infrastructures. Within DPI infrastructure, you care about payment rails, you care about digital identity, you care about accessibility to healthcare services and other types of services on the platform, and about regulation. And Africa, as I said, is not a latecomer to this conversation. Examples like M-PESA in Kenya and its payment structure were pioneering in this space, even globally; I think it was one of the first one or two global payment systems of its kind. And Rwanda, at the moment, is building a lot of good stack when it comes to government tech, which is also pioneering at a government level. So I think there is a lot to learn from Africa’s mass deployment of those structures. When we talked, in 2023 I think, to the Minister of Technology and Infrastructure in Rwanda, they were also trying to pass this knowledge from Rwanda on to other African countries, which is great to see. My advice, when it comes to developing legislation or regulations for government technologies in general, touching emerging technologies and not just one aspect of technology, is to try to embody what we have already seen become the global de facto approach, which is sandboxing. A sandboxing approach is normally something we see mostly in the financial sector, because you’re trying to de-risk the threat that might come into the financial space from adopting a new technology, or from adopting new emerging aspects into the mass deployment of a system. So a sandboxing approach to those technologies, between the private sector and the public sector, is quite important.
And this is what we have tried to push for with the World Economic Forum in the UAE as well. We have established a kind of global regulatory structure where countries are encouraged to come and be onboarded, to understand how they can deploy specific technologies, like AI, into different domains, not just in government but in the public sector and in industry as well. So I think learning from those global examples and building your own niche, localized example is quite important, so that you understand the current pressing needs in your markets, keep that indigenous space of solution-making, and build your own jurisdiction of regulations and policies. This is something you should not, as Brittan said, and I agree 100%, copy-paste from a global structure. You should try to understand the nuances, the problems, and the challenges that you have on the ground and that you’re trying to solve for, because it’s part of the sovereignty aspects of technology, of data, and of the infrastructure that you will be developing for these types of technologies across the map.

Judith Espinoza: Do we have anyone else in the room with a question? If not, can we pull up the chat from the Zoom room so we can also look at that? Okay, while we wait for that to come in, I want to touch on something that came up here in this conversation. There seems to be a tension, in most bodies of research and work, between having sovereignty and making sure that the internet we develop isn’t fragmented. A large part of that is the data economy you touched on, and a lot of that is really global data stewardship. We’re talking about tech and platforms, whether decentralized or centralized, that span multiple physical jurisdictions: across countries, across nations, regionally. So I want to open this up, and first I want to direct it to Robin. How is Meta thinking about the data stewardship aspect of this technology, of this future internet? All of these technologies are changing the way users either produce data or interact with data. So how is Meta thinking about this, and how do you see it changing or affecting, again, building that user trust?

Robin Green: Thanks, that’s such an important question. And I think it applies not only when you’re thinking about the metaverse and AI and things like that, but really to the way that we are interacting with the internet in general. I think we really need to get crisp on what we mean by sovereignty, because there are a lot of different approaches to, and definitions of, digital sovereignty. For some, it can mean the sovereignty of government, and historically that has been very territorial and physical in nature; the internet shifts all of that. But there’s also the concept of personal digital sovereignty. So I think one of the most important things is to make sure that as we are creating different governance frameworks, we’re doing two things. One, making sure that they’re interoperable with one another, so that we are not creating incompatible frameworks under which you can’t offer more or less the same service in two separate jurisdictions at the same time. Essential to ensuring that is, as I mentioned earlier, promoting things that are foundational to an open, interoperable, and secure internet: in particular, the free flow of data across borders, digital security, and broader adoption of some of the best technologies and tools we have to augment digital security, like encryption of data in transit and data at rest. The second thing is we need to make sure that governance is adaptable. And that is a really hard needle to thread. I think we do this in every space of digital governance the best we can, but we’re still really trying to get to good. And the reason for that is because it’s really hard to know what the future is going to look like.
I think Brittan was absolutely hitting the nail on the head when she was talking about how the laws we’re often applying today, which were created in the 80s, 90s, and early aughts, don’t necessarily fit seamlessly with the technologies of today. So let’s take that as a cautionary tale, not only about making sure that we are not just copy-pasting and repeating the mistakes of yesterday, but also about making sure that as we’re creating legal frameworks, we’re building them with, sorry, this keeps going out on me. We’re building them with enough flexibility and adaptability, and in a way that in some sense is really technology-neutral, even though we’re still talking about tech governance, so that in 20 or 30 years we’re not in the same position, with a legal framework slower to develop than the technology being adopted and really not fit for purpose. To that end, I think governance has to be collaborative, cooperative, and multi-stakeholder. One of the most essential things, in how we think about not only product and service governance but also what we think policy and legal frameworks around the world should look like, is making sure that we’re collaborating with other private sector peers, not only within our sector but with companies in other sectors as well, and collaborating with government, civil society, academia, and users. That’s one of the great examples of why fora like the IGF are so critical: it gives us the opportunity to come together and really promote that kind of multi-stakeholderism. And then the last thing is that we have responsible innovation principles, and one of the things that’s really important about those principles is that we’ve developed them to be adaptable, in just the same way that I’m suggesting our legal frameworks need to be adaptable.
They’re high-level principles that we have to execute on in a way that users trust, and the way we know we’re doing that right is that users are happy with it. It’s exactly like Brittan said: what’s good for users is good for human rights, and frankly, what’s good for users and human rights is also good for economic development and digital transformation. So the first of our responsible innovation principles is never surprise people. A good example of that is on our smart glasses, the Meta Ray-Bans: if they’re turned on, you can see a little LED light, so people will know if a person in their vicinity is using the glasses to take pictures or to livestream. And if the user tries to cover up the LED, they’ll get a prompt that they have to uncover it in order to continue using the product as they want. In addition to that, we want to provide controls that matter. This is especially important as it applies to youth using our products: not only making sure that youth have those controls and that we’re starting with built-in privacy by default, but also making sure that parents have the kinds of controls they want, so that they can play a really active role in guiding the experiences their children are having online using these technologies. Our third principle is consider everybody, and it’s really meant to ensure accessibility, to ensure that this is an internet, and these are technologies, for everybody. An example of how we do that is adjustable height on our Meta Horizon operating system, which means that whether you are standing up or sitting down, you can have the same really comfortable experience in VR. We also have a put people first principle. This is all about privacy and security. Oh, I’m sorry, I’m not good at holding microphones.
You’d think I was a digital native, so some of this would be easier, but I’m not great with technology, although I guess this isn’t really digital technology. So anyway: put people first, privacy, security. I could go on about that for a very long time. In the context of VR in particular, well, VR, augmented reality, and XR, I think we think first and foremost about the youth experience and making sure that we’re building privacy and security into that, but the other aspect is making sure that adults have that same kind of control over their experiences, and autonomy. We implement this through a lot of different approaches, ranging from the kinds of user controls that we’ve talked about to privacy-enhancing techniques like processing data on device. We also try to minimize data collection as much as we can. And then we have safety and integrity as one of the major things; you’ll notice that safety and integrity principles are woven throughout some of our other principles, but it’s also its own standalone principle. We really try to live that, and make sure that our users can experience that principle, by fostering safe and healthy communities. We want to promote communities where people can gather with shared intent and incentives and establish positive norms to connect online. We want to empower people, developers, creators, and users with the tools to create the experiences they want, but we also need to make sure that people with bad intentions are not able to just do whatever they want on our services. So, with that in mind, we have a code of conduct for virtual experiences that prohibits behavior that promotes illegal activity, behavior that is abusive, and behavior that could actually lead to physical harm.
And then we’re also doing things to support admins and their ability to moderate their own spaces. So we just want to make sure that, as we’re thinking about these things, those high-level values and principles are really adapted into the governance structures that governments are considering, so that we can maximize voice, safety, authenticity, dignity, and privacy in the growing adoption of these new technologies.
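The "never surprise people" example above, capture is allowed only while the indicator LED is visible, and covering it triggers a prompt, boils down to a small piece of state logic. The sketch below is not Meta's actual firmware, just an illustrative model of the rule as Robin describes it; the names and API are invented.

```python
from enum import Enum, auto

class CaptureState(Enum):
    IDLE = auto()       # user is not trying to capture
    RECORDING = auto()  # capturing, with the LED visibly on
    BLOCKED = auto()    # LED obstructed: capture paused until uncovered

def next_state(wants_capture: bool, led_unobstructed: bool) -> tuple[CaptureState, str]:
    """Apply the bystander-notice rule: capture only runs while the
    external indicator LED can be seen by people nearby."""
    if not wants_capture:
        return CaptureState.IDLE, "Capture off."
    if led_unobstructed:
        return CaptureState.RECORDING, "Recording; indicator LED is on."
    # Covering the LED does not silently continue recording:
    # capture is paused and the user is prompted to uncover it.
    return CaptureState.BLOCKED, "Uncover the indicator LED to continue capturing."

# A user covers the LED mid-recording and is prompted to uncover it.
state, msg = next_state(wants_capture=True, led_unobstructed=False)
print(state.name, "-", msg)
```

The design point is that the bystander signal and the capture pipeline are coupled in one transition function, so there is no code path that records without the notice being visible.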

Judith Espinoza: Thank you, Robin. I think that was very comprehensive. And I want to touch on one thing that I think is really important, right? So when you’re developing these frameworks, right, you really do need a whole society approach, but there’s also something interesting here that I think we can all take away, which is there really is an alignment of interest, right? And it’s an alignment of interest for everyone because trust makes things work, right? When a user trusts a technology or trusts a platform or a service, that can expand, that can grow. That’s an opportunity for growth for everyone. And with that, I want to pass this on to Apostolos now. You’re sort of the example of what private-public cooperation can do. It’s kind of like the bread and butter that we do at the forum. So I want to ask you, how does Greece approach this, right, this issue of data? How do you approach data stewardship? How do you come up with these frameworks that work, that are trustworthy, that are interoperable, and that leverage all of these sort of new technological innovations so that people can have better access to opportunities through digital intermediaries? And then I’m going to pass on to Britton after that on a similar question, but I’ll let Apostolos go first. Please, go ahead.

Apostolos Papadopoulos: Thank you, Judith. Fantastic question. In the Greek context, trust, privacy, and data security are defining axioms and characteristics of the digital transformation strategy. Everything that was done, and is still being done, has always put users first, citizens first, and their data first, and everything happens with consent. My colleagues here mentioned a bunch of great words earlier; transparency and consent are important. Anytime a digital service, whether initiated by the citizen or by another government organization, has to access data, the citizen has to consent to that data processing. Beyond that, from an institutional perspective, when the Ministry of Digital Governance was created, the minister was by design endowed with a CIO role, let’s say. That means he had the unilateral power to connect any data sets he wanted. But I think connect is the operative keyword here, because it’s not about owning the data sets; it’s not about owning the data. It’s simply about connecting different registries with the intent of producing a digital service outcome for the citizen, one the citizen has explicitly asked for. So it’s not about the government going out on its own, processing data and creating new registries and things like that. It’s about creating the experience, and creating the trust culture, where people know: I want to do X, Y, Z; here’s how I do it; here’s the one platform to do it on; and it’s being done in a way that is transparent to me and to my understanding. So trust, openness, and trustworthiness are defining characteristics of the digital transformation strategy.
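The consent-gated "connect, don't own" model described above can be sketched as a guard in front of any cross-registry lookup. Everything in this sketch is hypothetical (the registry names and API are invented for illustration, not the actual Greek implementation): a request succeeds only if the citizen has explicitly consented to that registry being consulted on their behalf.

```python
class ConsentError(PermissionError):
    """Raised when a registry is queried without the citizen's consent."""

class ConsentStore:
    def __init__(self) -> None:
        # Each grant pairs a citizen with a specific registry.
        self._grants: set[tuple[str, str]] = set()

    def grant(self, citizen_id: str, registry: str) -> None:
        self._grants.add((citizen_id, registry))

    def allows(self, citizen_id: str, registry: str) -> bool:
        return (citizen_id, registry) in self._grants

def fetch_for_service(consents: ConsentStore, registries: dict,
                      citizen_id: str, registry: str) -> dict:
    """Connect registries only to fulfil a service the citizen asked for.

    The service reads from the source registry; it never copies the
    registry or builds a new one, mirroring the 'connect, not own' idea.
    """
    if not consents.allows(citizen_id, registry):
        raise ConsentError(f"{citizen_id} has not consented to a {registry} lookup")
    return registries[registry][citizen_id]

# Toy data: a tax registry keyed by citizen id, with an invented field name.
registries = {"tax": {"c1": {"tax_number": "123456789"}}}
consents = ConsentStore()
consents.grant("c1", "tax")
print(fetch_for_service(consents, registries, "c1", "tax"))
```

Without the `grant` call, the same lookup raises `ConsentError`, which is the whole point: the connection capability exists, but every use of it is gated on an explicit citizen decision.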

Judith Espinoza: Okay, thank you so much. You know, when we talk about traditional digital public infrastructure, the things that always come up are data exchange, online payment systems, and digital identity. And across the stage, we see how people approach that in different ways, whether you’re building soft digital identities and footprints through, say, a Meta account or your Google account or whatever it is. These all build on this aspect of connection. And I want to pass on to you now, Brittan. What do you think the gaps really are? We’re talking theoretically, and we see this alignment, a good alignment of incentives. But what is the gap, then, to take us there? You can talk about it from a regulatory standpoint, but what do you think the gaps are to make sure that we all align and take this work forward?

Brittan Heller: Three things. Number one, I think that if we are not deliberate about creating spaces for cultural engagement and education in the next iteration of the internet, we will not have them in the same way that we did in the first. When you look at the people who created the internet the first time, they were all professors who were trying to share information. They worked for government organizations; they got their funding from government organizations. With the next iteration receiving extensive private investment, it is not a natural evolution for a cultural space to emerge if civil society does not ask for it and if governments aren’t aware that that is a gap. You can see this in the metaverse, where certain countries started to create cultural properties. Barbados created an embassy in the metaverse. South Korea had a widespread presence. And if you look at Saudi Arabia, there are actually augmented reality aspects to their cultural tours when you go to some of their UNESCO World Heritage Sites. So you have to think about how the things that make people unique, the things that your people value, the things that make you special, translate into the new mediums of computing. The second is that you have to think about the hardware floor, because the hardware floor for some of these new technologies is not solidified yet. What this means is that we risk creating fragmentation by technical means when we may not intend for that to happen. The example here is that Magic Leap just announced that they are going to stop supporting the first edition of their XR headset. So all of the content that was created for it over the last eight years will no longer be accessible in a matter of weeks. This is happening again and again and again, and there are many industry groups and user groups within the XR community who are very, very concerned about the loss of their data and the loss of their creative energy, because the hardware floor is not settled.
We don’t know the format. There are groups working on that now that are just starting to emerge, like the Metaverse Standards Forum. Most people are very surprised to learn that it was just this year that the file format for 3D assets to actually move between worlds and function between worlds was created by Adobe, so the equivalent of a PDF-type format for digital assets. We’re really at that phase in some of these new computing platforms, and so you have to think about what that means and what will be lost if we don’t bring it along. I think the final piece is looking at ways that concepts like consent can be evolved with new computing platforms. I did a study that was published and presented at ISMAR, which is a big conference about spatial computing. It’s kind of strange for an international law professor to be there, but we were looking at different ways that the notice and consent mechanisms that you have in flat-screen traditional computing could be adapted to 3D computing, and whether the affordances of 3D technology meant you could do it differently. And we found that, yes, you could do it differently. Users liked the mechanism that we built that showed them that their eyes were being tracked and how the eye tracking was working. They responded really, really positively to that, and then they felt like they were able to consent to the use of their data in more meaningfully informed ways. That’s kind of anathema to what a lot of companies thought: that if you showed people that their eyes were being tracked, it might freak them out, to be honest. But they liked understanding what the data flows were. We visualized the data flows for them and explained to them how the device worked. That was the basis for meaningfully informed consent that you couldn’t do on a flat screen. You had to do it in 3D. I think those are the three pieces that might get overlooked if we’re just looking at it through a pure kind of platform policy or regulatory lens.

Judith Espinoza: That’s fantastic. Thank you.

Judith Espinoza: And we have now hit the three-minute warning mark, but I want to wrap up. And I think there are some good takeaways from this, right? First, when we think about the future internet, all of us are active participants in how we build that future together, right? None of us are, like, passive users of the internet or of digital intermediaries. We all have an active role in how we shape that. And I want us all to feel empowered and walk away knowing that what we do matters, right, from a user standpoint or through your own personal capacities in whatever way you join us. I see Jeff from Amazon Web Services here, and we’ll chat in a bit with him. But the second takeaway is, regardless of what the future internet looks like, we have to make sure that we’re taking a principled approach to how we build it, right? We want to make sure that users are at the center, and that digital public infrastructure really is a means to further economic opportunity and connectivity, whether it’s the metaverse or projects like the ones that Brittan mentioned. And there’s also, you know, there’s the Duaverse now, which the Dubai Electricity and Water Authority created, like…

Hoda Al Khzaimi: I mean, in the UAE, we have many. We have, as well, the one with MR and the land authority, where you can pay and actually co-pay for real estate assets on the spot. We have also developed a strategy that is applicable to a wide range of industries, and we are encouraging industry to build that kind of collaborative metaverse space that feeds back into the economy and different FDI structures. So I think it is about how the leadership of this space will happen. I mean, we have advocacy across the map from the leaders of the country, which translates into building economies, building companies, and building solutions. But this is exactly what we talked about, right?

Judith Espinoza: So, in these examples, we see how the metaverse or AI is being built into DPI, right? This is really pushing forward how people are going to experience the future of the internet. And lastly, all of our incentives align. No one advocates for, and no one wants, a bad future internet. So it’s important to all come together. To close, I want to thank the IGF for hosting us and allowing us to have this space. I want to thank all of you for being wonderful supporters of our work, but also really great collaborators in what we do. And, you know, the final takeaway is that this is an example of what we want moving forward, right? This is all of society represented on this panel and through the work that we’ve been doing here for the last couple of days. So I encourage you to take that with you and be active participants in the future internet that we want to create. It’s not static. It’s a product that keeps evolving, and we keep evolving with it. So, again, thank you so much. Thank you for spending the last day of the forum with us. We’re super grateful. And if you have questions and you want to hang around, please do so. We’ll be here for a couple more minutes. A round of applause for our wonderful panelists. Thank you.


Judith Espinoza

Speech speed

211 words per minute

Speech length

1761 words

Speech time

500 seconds

AI as an enabler for other technologies, not a standalone product

Explanation

Judith Espinoza argues that AI is not developed in isolation but is integrated with other technologies. She emphasizes that AI acts as an enabler for various technologies rather than being a standalone product.

Major Discussion Point

The Future of the Internet and Emerging Technologies

Digital public infrastructure as a pathway for connection and economic opportunity

Explanation

Judith Espinoza highlights the importance of digital public infrastructure in facilitating connections and creating economic opportunities. She views DPI as a crucial pathway for advancing digital connectivity and fostering growth.

Major Discussion Point

The Future of the Internet and Emerging Technologies

Alignment of interests between users, human rights, and economic development

Explanation

Judith Espinoza highlights the alignment of interests between users, human rights, and economic development in building the future internet. She emphasizes that trust is crucial for the growth and expansion of technologies and platforms.

Major Discussion Point

Building the Future Internet


Brittan Heller

Speech speed

147 words per minute

Speech length

1453 words

Speech time

591 seconds

Need for adaptable legal frameworks to keep pace with rapid technological evolution

Explanation

Brittan Heller emphasizes the importance of creating legal frameworks that can adapt to rapidly evolving technologies. She argues that current laws, often designed for earlier tech generations, may not fit seamlessly with new technologies.

Evidence

Example of laws from the 80s, 90s, and early 2000s not fitting well with current technologies

Major Discussion Point

The Future of the Internet and Emerging Technologies

Agreed with

Robin Green

Agreed on

Need for adaptable and interoperable legal frameworks

Importance of cross-border regulation and coordination for internet governance

Explanation

Brittan Heller stresses the need for coordination in cross-border regulation for effective internet governance. She highlights the importance of aligning data protection laws globally to avoid a fragmented regulatory landscape.

Major Discussion Point

The Future of the Internet and Emerging Technologies

Constellation of emerging technologies (AI, XR, blockchain, quantum computing) shaping the future internet

Explanation

Brittan Heller describes the future internet as being shaped by a constellation of emerging technologies. She emphasizes that focusing on one technology in isolation will miss the bigger picture of how these technologies interact and influence each other.

Evidence

Mentions AI, XR, blockchain, and quantum computing as examples of interconnected emerging technologies

Major Discussion Point

The Future of the Internet and Emerging Technologies

Potential for increased surveillance and erosion of privacy with new technologies

Explanation

Brittan Heller warns about the potential for increased surveillance and privacy erosion with new technologies. She points out that emerging technologies enable more granular tracking and profiling of individuals, often without their knowledge or consent.

Major Discussion Point

Challenges and Opportunities in Digital Transformation

Importance of accountability and transparency in automated systems

Explanation

Brittan Heller emphasizes the need for accountability and transparency in automated systems. She argues that as reliance on automated systems increases, it becomes more complex to determine responsibility when things go wrong.

Major Discussion Point

Challenges and Opportunities in Digital Transformation

Need for deliberate creation of cultural engagement spaces

Explanation

Brittan Heller stresses the importance of deliberately creating spaces for cultural engagement in the next iteration of the internet. She argues that without intentional effort, these spaces may not naturally emerge as they did in the first iteration of the internet.

Evidence

Examples of countries creating cultural properties in the metaverse, such as Barbados creating an embassy and Saudi Arabia using augmented reality for cultural tours

Major Discussion Point

Building the Future Internet

Importance of addressing hardware floor issues in new technologies

Explanation

Brittan Heller highlights the need to address hardware floor issues in new technologies to prevent unintended fragmentation. She warns that unsettled hardware standards can lead to loss of content and creative energy.

Evidence

Example of Magic Leap discontinuing support for their first edition XR headset, making years of content inaccessible

Major Discussion Point

Building the Future Internet

Evolution of consent mechanisms for new computing platforms

Explanation

Brittan Heller discusses the need to evolve consent mechanisms for new computing platforms. She argues that 3D computing environments offer new possibilities for obtaining meaningful informed consent from users.

Evidence

Study presented at ISMAR showing users responded positively to visualizations of eye tracking and data flows in 3D environments

Major Discussion Point

Building the Future Internet


Robin Green

Speech speed

152 words per minute

Speech length

1506 words

Speech time

591 seconds

Importance of interoperable governance frameworks to avoid fragmentation

Explanation

Robin Green emphasizes the need for interoperable governance frameworks to prevent fragmentation of the internet. She argues that frameworks should be compatible across jurisdictions to allow consistent service offerings.

Major Discussion Point

Data Governance and Digital Sovereignty

Agreed with

Brittan Heller

Agreed on

Need for adaptable and interoperable legal frameworks

Need for technology-neutral and adaptable legal frameworks

Explanation

Robin Green stresses the importance of creating legal frameworks that are technology-neutral and adaptable. She argues that this approach will ensure the frameworks remain relevant as technology evolves rapidly.

Major Discussion Point

Data Governance and Digital Sovereignty

Agreed with

Brittan Heller

Agreed on

Need for adaptable and interoperable legal frameworks

Balancing data sovereignty with an open, interoperable internet

Explanation

Robin Green discusses the challenge of balancing data sovereignty with maintaining an open and interoperable internet. She emphasizes the need to promote free flow of data across borders while ensuring digital security.

Major Discussion Point

Data Governance and Digital Sovereignty

Differed with

Hoda Al Khzaimi

Differed on

Approach to data sovereignty and internet governance

Need for cross-border data flows and digital security measures

Explanation

Robin Green highlights the importance of promoting cross-border data flows and implementing strong digital security measures. She specifically mentions the need for encryption of data in transit and at rest.

Major Discussion Point

Data Governance and Digital Sovereignty

Importance of regulatory frameworks supporting digital infrastructure

Explanation

Robin Green emphasizes the need for regulatory frameworks that support digital infrastructure development. She argues that such frameworks are essential for the growth of technologies like AI and the metaverse.

Evidence

Mentions data centers as an example of necessary infrastructure

Major Discussion Point

Challenges and Opportunities in Digital Transformation

Need for globally predictable, interoperable, and adaptable regulations

Explanation

Robin Green stresses the importance of creating globally predictable, interoperable, and adaptable regulations. She argues that such regulations are crucial for promoting digital connectivity and bridging the digital divide.

Major Discussion Point

Challenges and Opportunities in Digital Transformation

Responsible innovation principles focusing on user trust and safety

Explanation

Robin Green discusses Meta’s responsible innovation principles that prioritize user trust and safety. She emphasizes the importance of providing controls that matter and considering everyone in the development of new technologies.

Evidence

Example of LED light on Meta Ray Bans to indicate when they are in use for recording or livestreaming

Major Discussion Point

Trust and Ethics in Technology Development

Importance of privacy, security, and user controls in new technologies

Explanation

Robin Green highlights the importance of privacy, security, and user controls in new technologies, especially for youth. She emphasizes Meta’s approach of starting with built-in privacy by default and providing parental controls.

Evidence

Mentions privacy enhancing techniques like processing data on device and minimizing data collection

Major Discussion Point

Trust and Ethics in Technology Development

Agreed with

Apostolos Papadopoulos

Hoda Al Khzaimi

Agreed on

Importance of user privacy and consent in data processing

Multi-stakeholder approach to internet governance

Explanation

Robin Green advocates for a multi-stakeholder approach to internet governance. She emphasizes the importance of collaboration between private sector, government, civil society, academia, and users in shaping policy frameworks.

Evidence

Mentions the Internet Governance Forum (IGF) as an example of a platform for multi-stakeholder collaboration

Major Discussion Point

Building the Future Internet


Apostolos Papadopoulos

Speech speed

131 words per minute

Speech length

838 words

Speech time

381 seconds

Greece’s digital transformation journey and exponential growth in digital adoption

Explanation

Apostolos Papadopoulos describes Greece’s rapid digital transformation, which he calls a ‘digital tiger leap’. He highlights the exponential growth in digital transactions and adoption in the country since 2019.

Evidence

Increase from 8 million digital transactions in 2018 to 1.4 billion in 2023

Major Discussion Point

Challenges and Opportunities in Digital Transformation

Importance of user consent and transparency in data processing

Explanation

Apostolos Papadopoulos emphasizes the importance of user consent and transparency in data processing in Greece’s digital transformation strategy. He states that all data access and processing requires explicit citizen consent.

Evidence

Mentions that citizens must consent to data processing for any digital service

Major Discussion Point

Data Governance and Digital Sovereignty

Agreed with

Robin Green

Hoda Al Khzaimi

Agreed on

Importance of user privacy and consent in data processing


Hoda Al Khzaimi

Speech speed

154 words per minute

Speech length

1843 words

Speech time

714 seconds

Incorporating ethical considerations into product functionality

Explanation

Hoda Al Khzaimi emphasizes the importance of integrating ethical considerations into the core functionality of technology products. She argues that ethical components should be deployed across the entire technology stack.

Major Discussion Point

Trust and Ethics in Technology Development

Importance of transparency and accessibility in AI algorithms

Explanation

Hoda Al Khzaimi stresses the need for transparency and accessibility in AI algorithms. She highlights the positive trend in academic conferences encouraging researchers to make their algorithms accessible and transparent.

Evidence

Mentions the requirement in top-tier AI conferences for algorithm accessibility and transparency

Major Discussion Point

Trust and Ethics in Technology Development

Need for user-centric dashboards showing data usage

Explanation

Hoda Al Khzaimi advocates for user-centric dashboards that provide real-time information about data usage. She argues for a level of transparency that allows users to easily see who used their data and for what purpose.

Major Discussion Point

Trust and Ethics in Technology Development

Agreed with

Robin Green

Apostolos Papadopoulos

Agreed on

Importance of user privacy and consent in data processing

Differed with

Robin Green

Differed on

Approach to data sovereignty and internet governance

Agreements

Agreement Points

Need for adaptable and interoperable legal frameworks

Brittan Heller

Robin Green

Need for adaptable legal frameworks to keep pace with rapid technological evolution

Need for technology-neutral and adaptable legal frameworks

Importance of interoperable governance frameworks to avoid fragmentation

Both speakers emphasize the importance of creating legal frameworks that can adapt to rapidly evolving technologies and remain interoperable across jurisdictions to prevent fragmentation.

Importance of user privacy and consent in data processing

Robin Green

Apostolos Papadopoulos

Hoda Al Khzaimi

Importance of privacy, security, and user controls in new technologies

Importance of user consent and transparency in data processing

Need for user-centric dashboards showing data usage

These speakers agree on the critical importance of user privacy, consent, and transparency in data processing, emphasizing the need for clear user controls and information about data usage.

Similar Viewpoints

These speakers share the view that transparency and accountability are crucial in the development and deployment of AI and automated systems, emphasizing the need for responsible innovation that prioritizes user trust and safety.

Brittan Heller

Robin Green

Hoda Al Khzaimi

Importance of accountability and transparency in automated systems

Responsible innovation principles focusing on user trust and safety

Importance of transparency and accessibility in AI algorithms

Unexpected Consensus

Cultural engagement in the future internet

Brittan Heller

Judith Espinoza

Need for deliberate creation of cultural engagement spaces

Digital public infrastructure as a pathway for connection and economic opportunity

While not explicitly discussed by other speakers, both Brittan Heller and Judith Espinoza touch on the importance of cultural engagement and connection in the future internet, suggesting an unexpected consensus on the need for deliberate efforts to create spaces for cultural and social interaction in digital environments.

Overall Assessment

Summary

The speakers generally agree on the need for adaptable and interoperable legal frameworks, the importance of user privacy and consent, and the necessity of transparency and accountability in AI and automated systems. There is also a shared recognition of the interconnected nature of emerging technologies and their impact on the future internet.

Consensus level

There is a high level of consensus among the speakers on core principles such as user-centric approaches, the need for adaptable regulations, and the importance of transparency. This consensus suggests a shared vision for the future internet that prioritizes user rights, innovation, and responsible development of technologies. However, there are some variations in emphasis and specific approaches, particularly in how different countries or organizations are implementing these principles.

Differences

Different Viewpoints

Approach to data sovereignty and internet governance

Robin Green

Hoda Al Khzaimi

Balancing data sovereignty with an open, interoperable internet

Need for user-centric dashboards showing data usage

Robin Green emphasizes the need for interoperable governance frameworks and cross-border data flows, while Hoda Al Khzaimi focuses more on user-centric control and transparency in data usage.

Unexpected Differences

Cultural engagement in the future internet

Brittan Heller

Robin Green

Need for deliberate creation of cultural engagement spaces

Responsible innovation principles focusing on user trust and safety

While both speakers discuss the future of the Internet, Brittan Heller unexpectedly emphasizes the need for the deliberate creation of cultural spaces, which is not directly addressed by other speakers who focus more on technical and regulatory aspects.

Overall Assessment

Summary

The main areas of disagreement revolve around the balance between data sovereignty and internet openness, the approach to user data control and transparency, and the emphasis on cultural aspects in the future internet.

Difference level

The level of disagreement among the speakers is relatively low, with more emphasis on complementary perspectives rather than direct contradictions. This suggests a generally aligned view on the future of the internet, with differences mainly in specific focus areas and implementation strategies.

Partial Agreements


Both speakers agree on the need for adaptable legal frameworks, but Brittan Heller emphasizes the importance of considering the constellation of emerging technologies, while Robin Green focuses more on technology-neutral approaches.

Brittan Heller

Robin Green

Need for adaptable legal frameworks to keep pace with rapid technological evolution

Need for technology-neutral and adaptable legal frameworks


Takeaways

Key Takeaways

The future internet will be shaped by a constellation of emerging technologies including AI, XR, blockchain, and quantum computing.

There is a need for adaptable and interoperable legal frameworks to keep pace with rapid technological evolution.

Data governance and digital sovereignty must be balanced with maintaining an open, interoperable internet.

Incorporating ethical considerations and user trust is crucial in developing new technologies.

Digital public infrastructure and digital transformation offer significant opportunities for economic growth and improved governance.

A multi-stakeholder, collaborative approach is essential for effective internet governance.

Resolutions and Action Items

Develop governance frameworks that are interoperable across jurisdictions

Implement responsible innovation principles focusing on user trust and safety

Create user-centric dashboards showing data usage and processing

Establish sandboxing approaches for testing new technologies in regulatory environments

Deliberately create spaces for cultural engagement in new computing platforms

Unresolved Issues

How to effectively balance data sovereignty with cross-border data flows

Addressing potential increased surveillance and privacy erosion in new technologies

Resolving hardware floor issues in emerging technologies like XR

How to evolve consent mechanisms for new computing platforms

Ensuring accessibility and inclusivity in the future internet across different regions and demographics

Suggested Compromises

Adopting technology-neutral legal frameworks to allow for future adaptability

Balancing innovation with user protection through responsible development principles

Using sandboxing approaches to test new technologies within existing regulatory structures

Implementing privacy-enhancing techniques like on-device data processing to balance functionality with data protection

Thought Provoking Comments

The conclusion is that emerging technologies are a constellation, and if your regulatory approach focuses on one aspect in lieu of the others, you’re going to miss the bigger picture.

Speaker

Brittan Heller

Reason

This comment introduces a holistic perspective on regulating emerging technologies, emphasizing the interconnected nature of different innovations.

Impact

It shifted the discussion towards considering the broader ecosystem of technologies rather than isolated innovations, setting the stage for a more comprehensive analysis of regulatory challenges.

Overall, regulators need to think about these risks on a broad scale, focusing on fundamental rights while fostering innovation. And the nice thing about these new ecosystems is that what is good for users is also good for human rights.

Speaker

Brittan Heller

Reason

This insight aligns user interests with human rights, suggesting a win-win approach to regulation and innovation.

Impact

It reframed the discussion around finding solutions that benefit both users and broader societal interests, encouraging a more balanced approach to technology governance.

The first thing we should have is a single source of truth into this, like a governance structure that would tell you what a trust stack should include, and the best kind of guidelines include these different layers.

Speaker

Hoda Al Khzaimi

Reason

This comment proposes a concrete solution to the complex issue of building trust in digital systems across different jurisdictions.

Impact

It sparked a more detailed discussion about the specific components needed in a trust framework, moving the conversation from theoretical concerns to practical implementation.

Let’s take that as a cautionary tale, not only around making sure that we are not just copy-pasting and making the mistakes of yesterday, but also making sure that as we’re creating legal frameworks, we’re building them with enough flexibility and adaptability, and in a way that in some sense is really technology neutral.

Speaker

Robin Green

Reason

This insight highlights the need for flexible, future-proof regulatory approaches that can adapt to rapid technological change.

Impact

It encouraged participants to think more critically about long-term implications of current regulatory efforts and how to create more adaptable frameworks.

Number one, I think if we are not deliberate about creating spaces for cultural engagement and education in the next iteration of the internet, we will not have them in the same way that we did in the first.

Speaker

Brittan Heller

Reason

This comment brings attention to the often-overlooked cultural and educational aspects of internet development.

Impact

It broadened the scope of the discussion beyond technical and regulatory concerns to include cultural preservation and education in the digital age.

Overall Assessment

These key comments shaped the discussion by encouraging a more holistic, user-centric, and culturally aware approach to internet governance and emerging technologies. They moved the conversation from siloed thinking about individual technologies or regulations to considering the broader ecosystem and long-term implications. The discussion evolved to emphasize the importance of adaptable frameworks, trust-building mechanisms, and the preservation of cultural spaces in the digital realm. This comprehensive perspective highlighted the complex interplay between technology, regulation, user rights, and societal values in shaping the future of the internet.

Follow-up Questions

How can we ensure that governance frameworks for new technologies are interoperable across jurisdictions while still respecting local needs?

Speaker

Robin Green

Explanation

This is important to avoid creating incompatible frameworks that prevent offering consistent services across different jurisdictions.

How can we make governance frameworks for digital technologies more adaptable to keep pace with rapid technological change?

Speaker

Robin Green

Explanation

This is crucial to avoid the problem of outdated laws not fitting new technologies, as happened with laws from the 80s-00s being applied to current tech.

How can we create spaces for cultural engagement and education in the next iteration of the internet?

Speaker

Brittan Heller

Explanation

This is important to ensure cultural aspects are not overlooked in the development of new internet technologies, which are largely driven by private investment.

How can we address the issue of the unsettled hardware floor in new technologies like XR?

Speaker

Brittan Heller

Explanation

This is crucial to prevent the loss of content and creative work due to rapid obsolescence of hardware platforms.

How can concepts like consent be evolved for new computing platforms?

Speaker

Brittan Heller

Explanation

This is important to ensure users can provide meaningful informed consent in new technological environments like 3D computing.

How can African countries develop supportive legal frameworks for digital governance that enable innovation while creating value from private sector engagement?

Speaker

Audience member (Ibrahim)

Explanation

This is important for countries that are newer to digital governance to effectively manage rapid technological development and data issues.

What metrics can be used to prove that a technology or system is trustworthy?

Speaker

Judith Espinoza

Explanation

This is important for building and measuring trust with users and society at large in new technologies.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #208 Democratising Access to AI with Open Source LLMs


Session at a Glance

Summary

This discussion focused on democratizing access to AI through open-source large language models (LLMs). Panelists explored how open-sourcing can influence innovation rates in the AI industry and prevent monopolization by large entities. They highlighted the potential of open-source LLMs to foster collaboration, address local needs, and empower smaller economies and the Global South.

Key points included the importance of truly open-source models that allow free use, modification, and redistribution. Panelists discussed the challenges of building open-source AI infrastructure, particularly for developing countries, including the need for computing power, technical expertise, and high-quality data. The discussion touched on initiatives in countries like the Dominican Republic and Brazil to develop localized AI models that reflect cultural nuances and languages.

Participants debated the role of regulation versus open-source approaches in addressing monopolies and ensuring equitable AI development. Some argued for hard regulation to manage competition and protect data sovereignty, while others emphasized the potential of open collaboration and shared resources.

The conversation also covered the risks associated with open-sourcing, such as potential misuse and reduced incentives for large-scale investments. Panelists stressed the need for governance structures, ethical considerations, and investment in local capacity building to mitigate these risks. The discussion concluded with calls for trust, collaboration, and a focus on inclusive AI development that serves the public good and represents diverse populations.

Keypoints

Major discussion points:

– The role of open source in democratizing access to AI and large language models

– Challenges and opportunities for developing countries in leveraging open source AI

– The need for infrastructure, computing power, and data to support open source AI development

– Concerns about monopolization of AI by large tech companies and how open source can help address this

– Cultural and linguistic representation in AI models, especially for underrepresented regions

The overall purpose of the discussion was to explore how open source approaches to AI and large language models can promote more equitable access and development of these technologies, especially for developing countries and underrepresented groups. The panelists aimed to highlight both the potential benefits and challenges of open source AI.

The tone of the discussion was generally optimistic about the potential of open source AI to democratize access, but also realistic about the significant challenges involved, especially for developing countries. There was a mix of idealism about open collaboration and pragmatism about the resources required. Toward the end, some panelists expressed a more cautious view about the need for regulation in addition to open source approaches.

Speakers

– Ihita Gangavarapu: Coordinator of India Youth IGF, works in cybersecurity domain in India

– Daniele Turra: Private Sector, Western European and Others Group (WEOG)

– Melissa Muñoz Suro: Director of Innovation at the Government Office of ICTs in the Dominican Republic, GRULAC

– Bianca Kremer: Civil Society, GRULAC

– Abraham Fifi Selby: Technical Community, African Group

Additional speakers:

– Yug Desai: Online moderator from South Asian University

– Purnima Tiwari: Rapporteur for the session

– Audience

Full session report

Expanded Summary: Democratising Access to AI through Open-Source Large Language Models

This discussion explored the potential of open-source large language models (LLMs) to democratise access to artificial intelligence (AI), with a particular focus on fostering innovation and empowering smaller economies and the Global South. The panel, comprising experts from diverse backgrounds and regions, delved into the opportunities and challenges presented by open-source AI, as well as the implications for governance and regulation.

Understanding Open-Source AI

Daniele Turra of ISA Digital Consulting provided a foundational explanation of open-source software and its four freedoms: the freedom to use, study, modify, and redistribute the software. He emphasized that truly open-source AI models should adhere to these principles, allowing for free use, modification, and redistribution. This context set the stage for discussing the potential and challenges of open-source AI.

Benefits and Potential of Open-Source AI

The panelists broadly agreed on the positive impact of open-source AI on innovation and accessibility. Ihita Gangavarapu, coordinator of India Youth IGF, emphasized that open-source enables broader access and participation in AI development. Daniele Turra noted that open-source models can reduce costs and foster innovation. Melissa Muñoz Suro, Director of Innovation at the Government Office of ICTs in the Dominican Republic, highlighted the potential for customization to meet local needs and languages.

Abraham Fifi Selby, an expert in AI development in the Global South, argued that open-source approaches can level the playing field for regions with limited resources. He stressed the importance of multilingualism and local policy development in addressing African needs. Bianca Kremer, a researcher and activist from Brazil, added that open-source can help address biases in AI models, contributing to more inclusive and representative technologies.

Challenges and Limitations

Despite the optimism, significant challenges were acknowledged. Daniele Turra pointed out that substantial computing resources, such as GPU clusters, are still required to train large models, which can be a barrier for many organizations and regions. Melissa Muñoz Suro and Abraham Fifi Selby both highlighted the lack of infrastructure and expertise in developing countries as major hurdles.

Melissa Muñoz Suro drew attention to the ongoing costs of maintaining and scaling open-source systems. The need for high-quality local data to improve models was emphasized by Selby. These challenges underscore the complexity of implementing open-source AI solutions, especially in resource-constrained environments.

Governance and Regulation

The discussion revealed differing opinions on regulating open-source AI development. Daniele Turra stressed the need for clear definitions and licensing of truly open-source models. In contrast, Bianca Kremer called for hard regulation to address competition issues, suggesting a more interventionist approach.

Melissa Muñoz Suro emphasized the importance of data sovereignty and local control of AI systems. Abraham Fifi Selby proposed exploring public-private partnerships to support open AI development. Daniele Turra suggested a “computing tax” and partnerships with civil society organizations as potential governance structures.

Practical Applications and Cultural Context

Melissa Muñoz Suro shared insights about the development of ‘Taina’, an AI system in the Dominican Republic designed to reflect local culture and language. This project exemplifies the potential for open-source AI to be tailored to local needs while respecting cultural nuances. She detailed how Taina was developed using open-source tools and local data to create a culturally relevant AI assistant.

Bianca Kremer provided examples from Brazil, including the Tucano and Maritaca AI projects, which demonstrate successful open-source AI development in Portuguese. She also highlighted the issue of algorithmic racism, using an example of how ChatGPT associated the term “favela” with negative connotations, underscoring the importance of addressing bias in AI models.

Abraham Fifi Selby offered perspective on the African context, highlighting how open-source AI systems are enabling young innovators to develop solutions at a lower cost, despite challenges in funding and infrastructure.

Audience Engagement and Unresolved Issues

The discussion included audience questions, particularly regarding competition and monopolies in AI development. This led to a broader conversation about balancing open collaboration with the need for regulation in AI governance.

Other unresolved issues highlighted include:

1. Effectively distributing computing power for open-source AI development

2. Ensuring cultural nuances from underrepresented regions are included in AI models

3. Creating sustainable funding mechanisms for open-source AI in developing countries

Conclusion

In their final remarks, panelists reiterated the transformative potential of open-source AI in democratizing access to technology and fostering innovation, particularly in developing regions. They emphasized the need for continued collaboration, investment in local capacity building, and addressing both technical and socio-economic challenges to realize the full potential of open-source AI for inclusive global development.

Session Transcript

Ihita Gangavarapu: All right. Hi everyone. Good morning. Welcome to our session. I’m the coordinator of India Youth IGF and I also work in the cybersecurity domain back in India. Our session is titled Democratising Access to AI with Open Source LLMs, large language models. It’s a 60-minute session where we have ample time for audience interaction. Before we start: we have a few speakers here with us, and a few speakers online, including our online moderator. When we talk about democratizing access to AI, we are talking about making sure that artificial intelligence technologies and resources are accessible to a broad range of people, not just to large corporations, governments, or highly skilled participants. The goal is to ensure that everybody is empowered, even small businesses, educators, researchers, and organizations from all backgrounds and all economies, and that they benefit from AI. The development and dissemination of AI, particularly of large language models, are increasingly dominated by major technology companies right now, and that does raise certain critical issues around access, control, and equity. While proprietary models are accelerating innovation and economic gain for some, they are also risking a consolidation of power and limiting technological diversity. So when we speak of open-sourcing LLMs, we are looking at it creating a pathway to democratize AI, potentially reducing costs and fostering innovation by enabling more and more stakeholders to participate in the development of AI. So today’s discussion is going to focus on the strategic, economic, and social implications of open-sourcing AI, LLMs particularly, and the potential to counteract monopolistic controls and encourage a broader distribution of technological and economic benefits. So before I start, I’d like to introduce you to our panel.
I am Ihita, and I’m joined by Daniele, who is working with ISA Digital Consulting. I also have Abraham from Payag. Online, we have Yug Desai, who’s the online moderator, from the South Asian University. We have Purnima Tiwari, who is the rapporteur for our session, as well as our speaker, Melissa, joining us remotely, who works in the innovation cabinet of the Dominican Republic. Thank you all for joining us, and I now start off with our discussion. The very first policy question is to Daniele. How does open sourcing influence innovation rates within the AI industry? What are the long-term implications of open source AI on the structure of the tech industry itself?

Daniele Turra: Thank you so much, Ihita, for presenting me today. I’m so glad to be here to discuss this very important topic. Everybody right now is talking about AI. Open source has been around for a very, very long time, and the narrative is, in a way, of course, influenced by the large big tech giants that we have just mentioned. But open source has, in a way, a different history, especially free and open source software. So before getting into the specific industry implications, I would like to spend a minute to introduce, once again, the concept of open source. We can start by saying that open source was a philosophy first brought forward by Richard Stallman and other important scholars in the United States, who believed that open source should, of course, mean sharing the code, but they also tightly related it with the concept of freedom. So free, not as in beer, as they say, but free as in freedom. That is, freedom of speech, but especially the four freedoms that define the core ideas of open source. These are the freedom to use code, the freedom to study code, to redistribute it, and to modify it. So there should not be any large or small actor entitled to, in a way, own strong intellectual property on that code. And this, of course, is an idea that can benefit so many actors, from the smaller to the larger, and can enable others to join the industry as well. So when it comes to the AI context (I think I had a few slides; I don’t know if tech support can put them on), we are talking about specific solutions and software that are always created from two different parts. So there is not only a general idea of open source software: we are talking about models and weights. When you produce a model, there are mainly two files. One is the weights, and one is the model itself.
So based on this, the Open Source Initiative published its open source AI definition, which requires both the model and the weights to be fully open. So anybody who can access a truly open source LLM has access to both the model and the weights, which are also the result of training on the dataset used to train that model. Then on the other side of the spectrum, at the very opposite end, we have the idea of a fully closed model: you are maybe just accessing it through APIs or something like that, but it’s already in production; it’s not something that can be, in a way, inspected, modified, or redistributed. And in between are models that have been defined as open weights, where there is licensing under which you, as a researcher, are maybe allowed to explore either the model or the weights, but are not really entitled to use it for commercial purposes, and of course most often you are not even allowed to redistribute it. So this, of course, creates situations in which not everyone can actually benefit from those models. And again, I would like to stress that the only definition that is truly compliant with open source, with free and open source software as we know it, is the one that embodies all four freedoms as defined by the thinkers of the free and open source movement. I don’t know about my timing right now, but next slide, please. Here you can see the frameworks I was talking about: different licenses. Not all providers actually have the same models and the same licensing for the models they provide. So in this other slide, which is the AI-as-a-service stack, I would like to bring the focus again to the components that are needed to build an actual AI solution from end to end. A few scholars traced comparisons with the cloud computing model, and at each of these layers, open source software can always be employed.
So we should also ask ourselves, as private entities or public entities and so on, are we really entitled to have something that is truly closed source, even if we are employing so, so many community efforts coming from the entire open source community? In a way, this can be some food for thought when thinking about the different steps of actively implementing AI solutions. And when it comes to actual industry impacts, we can also think of all the open source software that goes into AI software services, into training and fine-tuning the models, down to the actual infrastructure that is needed to provide the computing power to build those solutions. Because in the end, as the last slide shows (please change to the last slide), there is a supply chain that starts from data collection and runs through data storage, data preparation, algorithm training, and application development. And at each of these stages, the entire idea of having community-supported solutions that are open source can be something that benefits the private sector as well. So again, this is an invitation to think in terms of who builds the software, all the different steps that are needed to get to it, and how technologies that are already out there can actively help in achieving truly open source models. Thank you.
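
The spectrum Daniele describes, fully open source, open weights, and closed, can be summarised in a short sketch. The function and category names below are the editor's illustration, not the official OSI Open Source AI Definition: they simply encode the idea that a release only qualifies as truly open source when all four freedoms hold, while an "open weights" release publishes artifacts but restricts their use.

```python
# Illustrative sketch only: a rough classifier for the licensing spectrum
# described above. The boolean criteria are a simplification, not the
# official OSI Open Source AI Definition.

def classify_release(code_public: bool, weights_public: bool,
                     commercial_use: bool, redistribution: bool) -> str:
    """Place an LLM release on the open-source / open-weights / closed spectrum.

    A release counts as "open source" only when all four conditions hold,
    mirroring the four freedoms (use, study, modify, redistribute).
    """
    if code_public and weights_public and commercial_use and redistribution:
        return "open source"
    if code_public or weights_public:
        return "open weights"
    return "closed"

# Hypothetical releases, not statements about real products:
print(classify_release(True, True, True, True))      # → open source
print(classify_release(True, True, False, False))    # → open weights
print(classify_release(False, False, False, False))  # → closed
```

Under this sketch, a hypothetical model whose weights are published for research-only use would land in the "open weights" bucket, which is the middle category Daniele contrasts with both fully open and fully closed models.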

Ihita Gangavarapu: Thank you so much for your points. I would actually bring in Melissa here, who’s joining us remotely, to answer the same question. How does open sourcing influence AI innovation in the entire industry? And what are the long-term implications? Melissa, over to you.

Melissa Muñoz Suro: OK, can you hear me well? Yes. Perfect. Good morning, everyone. So yeah, I’m going to start. My name is Melissa Muñoz and I work as the Director of Innovation at the Government Office of ICTs here in the Dominican Republic. Basically, what we do in OPTIC is use technology to improve lives and make everyday interactions with government more efficient, inclusive, and even more enjoyable. I wanted to answer this question by illustrating it with a case of what we are doing here in OPTIC. One of the most exciting ways we are doing this is through our national AI strategy, and a big part of that vision is Taina, which is basically an open source AI system that in the future will make government services faster, smarter, and even more personal. That’s what we are trying to do. Taina isn’t ready yet; right now we are focusing on laying the groundwork with a project called Ciudadanía. Open source technology plays a key role in this project because, in a word, it opens the door for more collaboration and innovation. We are building a strong foundation for Taina by collecting and organizing the data that we need to make it work. And how does it work? Well, we are collecting data from existing government systems like Punto GOB, that is, in-person service points, from the online service platform GOB.DO, and from the 462 phone line. These systems let us gather insights into how citizens interact with public services. We have also set up specific interaction points where people can actively contribute to the data: things like how they phrase requests and questions. And this isn’t about collecting personal information at all. It is about understanding the way Dominicans communicate, so the AI reflects our culture and our language. This is a collaboration between government, citizens, and local universities. The universities basically help us ensure that the data is accurate, well-structured, and aligned with privacy standards.
What is interesting about it is how open source doesn’t just fuel innovation itself; it also shapes the structure of the tech industry, especially in smaller economies like the DR. By using open source frameworks, what we are breaking is the dominance of the big tech companies. Instead of relying on their tools, we are creating solutions tied to the Dominican Republic’s specific needs. For us, that means building systems that understand Dominican Spanish, which is different from other varieties of Spanish, reflect our culture, and solve our local challenges. But the potential doesn’t stop there. Specifically, open source means that other Spanish-speaking countries can learn from what we’re doing, and that’s what we’re trying to do: to scale this regionally. Ciudadanía could inspire similar projects in the region, fostering cooperation and creating a shared path towards more inclusive AI development. Open source isn’t just about influencing innovation rates; it is about fundamentally reshaping how technology serves people. That’s what we think in the DR. Our current work with Ciudadanía and our vision for Taina show how open source principles can empower governments, engage citizens, and create opportunities for smaller economies to thrive. I truly believe that technology should make life simpler and, ultimately, happier, and open source is a key tool to achieve this vision and create a more inclusive, accessible, and people-centered tech industry. Thank you.

Ihita Gangavarapu: Thank you so much for your points, Melissa; you also highlighted certain initiatives. Before I hand over the question to Abraham, I would actually like to introduce Bianca, who has joined us. Thanks for joining; she’s from CTS-FGV, Brazil. So the same question applies to you, Abraham, and after that we would like to take a comment or a question from the audience before we move on to the second policy question.

Abraham Fifi Selby: All right, thank you very much for the session; I’m very happy to join this panel. I’m from the Global South, specifically the African context, so I will be speaking from that context, and we understand how open source, in terms of large language models, can help us. On the impact on innovation rates, I’m going to highlight a few points, and then we can discuss how this is going to help the African context, the Global South context. In terms of influencing innovation rates, what we see in Africa is a democratization of AI development, which means there is a very low cost to using open source systems. In Africa, getting funding for startups and researchers to develop AI systems is very hard, and we don’t have large systems or large data centers; investments have to come through before we get those. These open source AI systems are helping young people to bring out innovations, because they can tap into such systems at a very low cost or very low rate and build on them. Let me also go into fostering collaboration. Basically, the world is changing, and everything that revolves around technology is now moving into AI. We cannot focus only on advanced countries and not look at the local countries, so open source systems are at least helping people from the Global South to also provide a source of data which can benefit other regions, and which other regions can also tap into. In Africa now, there are some countries creating policy documents, so startups and business people who need information can ask which policy covers this and which law regulates that. Now people are feeding data into these AI systems; they’re connected to open source AI systems and AI tools, which also helps.
So if someone in Europe wants to get an idea of business or regulatory policies in Africa, because there is collaboration between Africa and Europe, they can access that, and so can ACI and other American countries. Let’s also look at addressing local needs. With these AI tools, let me come to multilingualism: we have languages that people may not even understand because of how they have evolved in Africa. Orienting these AI tools to address local needs means there are some needs in the Global South and African context which may not be needs in Europe. So while we are talking about open source systems, it helps us to understand that we need to localize some kinds of documentation and data, which will help us in the Global South context. Let me also move to the kind of investment that we need in Africa. These large language models help so much to show that there is a lot of investment elsewhere, but not much investment in Africa. So Africa is really very happy that at least we can tap these open source AI systems from advanced countries to also improve our livelihoods. Let me go to some long-term implications, which concern the structure of the tech industry. I’m looking at the tech industry around the globe and also specifically at the African context. There is a growth of local AI ecosystems in Africa, because now we are tapping in. Despite all this, in policy implementation, ethics has been a very difficult area, because I know Europe and some other countries have developed and documented their AI strategies; Africa is still struggling with that. So if there are ethics frameworks that address global concerns, licensing people and helping people to tap into open source, I think that is a very good way we can enhance the rate of innovation. And also, monopolization.
We see big entities having the data sources and everything; with OpenAI, for instance, every system is now tapping from them. They have the large language models, they have everything, like ChatGPT and other tools. These AI tools are moving ahead, and Africa is lagging behind. Why? Because we feel that if governments have the upper hand on these AI tools, they can develop their own. But we need to also connect to other sources and stay open. So we need more investment in large language models, and we need more collaboration on that. It will also reduce monopolization, because startups can build their own AI models that can support development in the African context. And the last thing is capacity building. We see so many AI labs, and I’m very happy about that, as my fellow speakers also mentioned in their respective countries. But where is the Global South in this? We are not getting that; the schools and the academics are not moving either. So we must also invest much more in our academia, bringing capacity building for AI models within that context. I will leave my colleagues to talk, and then I will come back later with some other points.

Ihita Gangavarapu: Thank you. Great points, Abraham. I think Melissa, as well as you, have spoken about addressing local needs with respect to culture and languages. What we’d like to do now is hand it over to you all: within a minute, if you have a comment or a question, we’ll be happy to hear from you. We will of course also have a Q&A round towards the end of the panel discussion. Yes. Meanwhile, Yug, if there’s any comment or question in the chat or from the online participants, please let us know.

Audience: Thank you for this. My name is Lina and I work with the Council on Tech and Social Cohesion and Search for Common Ground; it’s a peacebuilding organization. We work in conflict-affected contexts and we are trying to deploy AI to build trust and collaboration. And there are two challenges that I see. You just mentioned ethics. We are actually trying to build things on top of the commercial models, and there’s still an ethical question about where the data goes and how sure we can be about whether or not that data is being used to train those models. And the second is, you just said Africa needs to build its own models, but the existing models required enormous resources to build, because they had billions of dollars in investment, including from the Saudi government and many other places, to make this an enterprise that will dominate the market and end up becoming a major revenue builder. So there’s something a little bit almost naive about this idea that we can compete. There’s no competition unless you regulate the monopolies here; otherwise, there will be none. So it’s kind of two questions. Thank you.

Abraham Fifi Selby: Yeah, I agree with you 100%. There is no competition in terms of this. That’s why I was even emphasizing that we have to foster collaboration, because we need the Global North’s data, and the Global North also needs African data. Despite that, we also have to encourage our governments and member states in the Global South to make some investment, as you said, in the infrastructure. What I want to address is the building of our localness, which is what I was emphasizing. The Global North cannot come and build the data sources that we need locally. So let’s say we have some language models in Africa: Swahili, let’s say Arabic, French, Portuguese, and others. All these address localness in some African country contexts. So if we Africans are not stepping up to also build content that can connect to the large open source systems, we may be lagging behind. And this is where ethics comes in: we must learn from what the Global North is doing and build upon it ourselves. But we cannot always rely on the Global North from the Global South perspective. We must understand that we need to build our own data models that we can share with the other continents, which can improve our AI development and AI policy. So this is how I was addressing that context, and I hope I have answered your question well, because you made a very clear point about investment in ethics and investment in infrastructure. I really agree with you, because Africa is lagging because we don’t have that infrastructure, and we must rely on the Global North. But in relying on the Global North, we must also contribute from the Global South perspective by providing data that addresses local needs, so that everyone can tap into information when they need it for AI development. Thank you.

Ihita Gangavarapu: Is it okay if we just pick up the question in 15 minutes? Yeah, we’d move on to the second set of the panel discussion and then we’ll pick it up again. Well, the question I wanna ask is directly related to the monopoly and competition issue. So I’d- Please go ahead.

Audience: Okay, so I’m disagreeing with the last question. I thought the whole point of open source was that it was open and that you are essentially sharing. So if you’ve developed a model, you’ve made all that capital investment, and if it’s open source, which I understand Meta is halfway there, or maybe fully (maybe the first presenter would like to comment on this), the point is you can have access to that. You don’t have to have made the investment; you can use it. And regulation presumes that they are monopolies and you’re going to regulate how they, I don’t know, how they sell it, which to me does not distribute the knowledge, does not distribute the capability. It is much better to go open source than it is to go through regulation. So I’m confused about how you’re approaching this issue.

Ihita Gangavarapu: So our next question is actually on monopolies, but maybe Daniele, if you want to just keep it under a minute to address this, and we can pick up the discussion again in a few minutes.

Daniele Turra: Sure. Actually, I was about to introduce some points that might help in that sense in this following question. But in a way, I agree with the fact that open source is a tool, a philosophy that can help: not really in systematically regulating monopolies, but for sure in sharing the knowledge and giving other actors the opportunity to gain the skills and not be blocked by specific intellectual property. So this is a way to do it, and Meta is, as you said, halfway there. They have Llama as an open-weight model, so some flavors of it are not commercially available. They are available, for example, for researchers to use, but only in some contexts are the models, also from other providers, allowed to be used in a commercial context. But again, let’s always think about the skills and the resources needed to build those models, and whether some actors should really be entitled to put a fully open source label on them, because there is an entire supply chain, and I would like to avoid some, let’s say, open-washing of things. We need to really categorize things and call them by the right name. I hope that could answer some of your doubts. Thank you.

Ihita Gangavarapu: All right, thank you. Thank you so much for your question as well. I would now like to request Yug, our online moderator: can you confirm you can hear me? Yes, I can hear you. Perfect, so you will be taking care of the second segment of the discussion, so I hand it over to you.

Audience: Yeah, before that, I think there was a hand raised in the online audience, so I would like to pass the floor to Raj Jahan, if he has anything to add on the first policy question related to innovation and open source. Raj, are you there? Okay, it seems he’s not able to speak right now, so let’s jump to the second question; we’ve already sort of gotten into it. So, the second policy question is: in which ways can open source models prevent a few large entities from monopolizing the AI landscape, and what governance structures could be necessary to manage this? To answer this question, I first want to pass it to Bianca. Bianca, if you can share your thoughts on this question.

Bianca Kremer: Hi, can everybody hear me? First of all, I'd like to apologize for the delay and the other procedures; we were in workshops room one. Thank you so much for the invitation, and I'm sorry for all the logistical trouble. I'd like to introduce myself: I'm Bianca Kremer, a researcher and also an activist from Brazil. I work with AI and law, especially on topics of discrimination, specifically racial discrimination in Brazil. For this panel, I have been proposing three specific questions that could address the topic in a way that makes this panel food for thought for us to exchange a little on. Unfortunately, I missed the first part of Daniele's presentation, but I gather he offered a good opportunity for us to understand the supply chain of LLMs, and I have some observations about what is happening in Brazil from that perspective, as well as yours, Abraham. The first question, taking some steps back so we can move forward on the topic, is: what are we actually talking about when we talk about open source LLMs? If we don't address this, we will have misunderstandings on these subjects. So we have the difference between closed source LLMs and open source LLMs; this is the first question we will address. The second is: what qualifies as an open source LLM? This is really important for addressing the challenges we will face. And the third is: where are the concrete cases in which we can find possibilities with open source LLMs? After that, I will bring some experiences from Brazil with open source platforms being developed by universities in our country, which, in a way, also address the competition problem. It's hard to do, but I think we have been managing. So, what are we talking about when we talk about these open source LLMs?
We are talking about large language models, AI models, that are publicly accessible. What that means is that the source code and training data are made available to the public. And what happens? It allows not only developers, but also researchers and civil society organizations, to freely use, modify, and improve the models for their own purposes. So when we talk about racial or gender bias, for example, open source models give us an opportunity not only to improve them, but to make them better in ways that companies and business models are not interested in pursuing, due to economic purposes. Okay? So how do these open source models differ from their counterparts? Closed source LLMs are developed and maintained by companies like, as we have been saying, OpenAI, Anthropic, and Google, and they are typically proprietary. What that means is that you cannot access the underlying architecture or the data the model was trained on. Open source LLMs, on the other hand, are models that are free to download, free to modify, and free to be adapted. These projects have been instrumental in making models available to the public, and also in addressing social problems that we have been facing in the development of these technologies in certain societies. Since we have been talking about the global south: in Brazil, we had a concrete case that I would like to share with you, about a deputy called Renata Souza, who wrote on ChatGPT the word favela. A favela is a poor community in Brazil; I don't know if you have heard of it before. She wrote "a black woman in a favela", and an image was generated of a black woman holding a gun, pointing it up.
She didn't write anything about guns, but it happened. So it was a case of what we have been studying for the last 10 years: what we call algorithmic racism on platforms in Brazil. It's a concrete case of how these generative AI technologies have been developing. Not to mention the gender bias we all know: two years ago, if you asked a model to name 10 philosophers, they were all European and white men. And if you then pointed out there were no women, the women it named were always white, European or North American. These are some of the biases we have been facing in the usage of these platforms, biases that open source, for example, could be open to addressing by modifying the model and solving some problems of bias. Not all of them, because when you have algorithms you always have bias, but some of them. I don't want to talk too much, but just to frame what we have been talking about; afterwards we can open for questions. But I would also like to give the examples of Tucano and Maritaca AI. Tucano and Maritaca are two birds from Brazil; we have several birds in our region, especially Rio de Janeiro, where I'm from. These are projects from public universities in Brazil developing open source technologies, and we have been very successful in developing these technologies in Portuguese. And this is the third part of my remarks, so that I can then hear my other colleagues. I am from CTS, the Center for Technology and Society, a university research center in Brazil, but I am also a board member of the Brazilian Internet Steering Committee.
This year we had a forum held in Cape Verde, in Africa, and a lot of members from the community came to talk about the importance of the internet for a broader community, from a broader perspective. And it's even more dramatic for us because when you go to African countries, for example, they speak Creole among themselves much more than Portuguese. So this is something I have been talking about; it's also a matter of sovereignty. I would like to thank you for the opportunity, and I keep myself open for questions and exchange with my colleagues. Joerg, over to you.

Ihita Gangavarapu: Yeah, maybe I'll take it from here. I think we have time for one or two more questions, and then we'll take it over, Joerg, since we cannot hear you. So I would like to pose the same question to Daniele: in what ways can open source models prevent a few large entities from monopolizing the AI landscape?

Daniele Turra: Thank you for the question. I think the answer is that technology licensing by itself cannot really prevent a large entity from taking over. As I stressed earlier, I believe we have to protect the actual, correct definition of open source software. But of course, software sharing practices can help, as one of the gentlemen in the audience started pointing out in terms of monopolies. So when we're talking about licensing, I believe that open source is ideal, but open-parameter models can also be a good way to achieve that sharing of knowledge in the larger ecosystem and be a big boost for it, even for the global south. Some of these models are not completely open source, but they still play an important role. But again, I would like to stress the resources. I think that having, for example, a publicly managed infrastructure could help us. Just as the gentleman in the audience pointed out, these large models are developed by big companies with private money, but that doesn't mean we cannot, in a way, benefit from them all together. Not all businesses, especially SMEs, can employ these models. The global south is poor in terms of computing power and therefore does not have the capacity to train these models. The fundamental infrastructure is lacking, and in that sense this makes all these models non-inclusive from the very beginning, because we are not including the researchers and civil society organizations that could provide good input into the sharing and management of that infrastructure. So we should think in terms of both building the models and running the models in production.
And in general, in both cases, I would say the following. One important proposal I would like to bring is to have large cloud businesses that have the computing power offer this capacity for free when it comes to developing truly open source models, models that can be published as true open source. The allocation of that computing power could be managed by law, for example by paying, let's call it, a computing tax of some sort, or maybe through partnerships with civil society organizations that work on coordinating free and open source software production. Think, for example, of the Eclipse Foundation or the Python Software Foundation: they supervise a lot of efforts in the open source community. I think we can do something very similar, including civil society organizations in the production of open source. There is a lot of food for thought here; I've probably raised a few eyebrows. But again, this is new for everyone, so I think if we look at how the open source community works, we can get a few good inputs on how to develop the new open source models of the future. Thank you.

Ihita Gangavarapu: Very well answered, actually. Now, I'd like to request Melissa, who's joining us online, for her comments on this question; if you could kindly keep it under three minutes, please. Thank you. I think we're facing some issues; we can't hear the online participants. It seems they cannot hear us either. Yeah, I think you're all audible now. Please, Melissa, over to you. Can you hear me well? Yes. Okay, perfect.

Melissa Muñoz Suro: So basically, building on what I was mentioning earlier about our national AI strategy in the Dominican Republic, one of its core principles is achieving technology and data sovereignty. This ensures that the tools, systems, and data we create remain under national control, protecting both public access and privacy for our citizens. That's why we chose to develop our own LLM entirely from scratch, using our own framework; platforms like GPT-4, for example, are robust, but they come with significant dependency and data exposure. Open source allows us to design systems that align with our national priorities and values, ensuring independence and security in managing these technologies. Open source AI models can reduce reliance on external corporations, enabling nations to build systems tailored to their needs while fostering regional cooperation: by using open frameworks, we retain control over our tools and data, preventing external exploitation and ensuring that technology actually serves the public interest. However, open source is not without challenges, and that's something important I wanted to mention. Building and developing these systems requires more than just access to code: it demands robust infrastructure, technical expertise, and high-quality data, and these are areas where developing countries like mine must focus to ensure successful implementation. One of the biggest challenges we're facing right now with open source AI is having the right tools to make it work. These models need powerful computers to run, what we call GPU clusters, and they don't come cheap. For countries like the Dominican Republic, it's hard to justify spending so much money on equipment when there are so many other priorities, like education, health, or poverty; unlike external services, where the infrastructure is already handled for you.
With open source, we're the ones who have to set it up, maintain it, and make sure it works; that's something good to keep in mind. Another big issue is getting the models to perform how we need them to. Open source models don't come ready to solve every problem; they are like a blank canvas. You need to put in the effort to fine-tune them, to teach them to understand specific tasks, and that takes time, expertise, and yes, more money that we, of course, don't have in the global south, as one fellow said before. And there is a problem also with the data: a lot of the data we have in government systems is messy, it's all scattered, and it's not always useful. For a project in Suriname, for example, we had to work hard to clean up this data and combine it with new information we collected from different government platforms. We have also set up places where citizens can share how they talk and ask questions, so that the systems we build truly understand our culture and language. Finally, there is the cost of keeping everything running. Open source sounds great because you are not paying someone else every time you use it, but the truth is, it's still expensive to keep systems working over the long term. You have to upgrade the hardware, fix issues, and make sure the systems can handle more users as usage grows. The reality is, open source AI isn't something you just turn on and forget about. It needs investment, planning, and teamwork, and that's why in the DR we are looking for partnerships with other countries and with international organizations, trying to make it regional, so we can share resources and make open source AI a solution that works for everyone. That's it on my part. Thank you.

Ihita Gangavarapu: Thank you so much, Melissa. All your points have been noted. Now, given that we are a little short on time, we would like to open the floor to all of you for interaction. But before that, I'd like to pose a question to all of you: what specific risks do you see open sourcing posing, such as increased potential for misuse or reduced incentives for large-scale investment in AI research, and how can these risks be mitigated while still promoting open development and harnessing the opportunities? The floor is yours. Do we have anyone who would like to add a comment or a question? Yes, please.

Audience: Is it working now? Yes, perfect. Hi. Thank you very much for your panel and the interesting discussion you were having. It's not an answer to your question, but more of a comment, or a question to you. I wanted to ask you more concretely about the enablers of open source LLMs, of open source AI, because you were touching on the competition issue. We see that, from a market-incentive perspective, Facebook goes some way towards open source, but they are still lacking; they have little incentive to actually open the model fully for reuse. For these large big tech companies, there's no real incentive to do it, and their models might have the ability to outperform open source models for a time. And we're talking about this issue of languages smaller than English, so I would guess there would need to be some kind of common, open data sets. Daniele was touching on the issue of how to distribute computing power. So I was wondering if you could make concrete recommendations on what we would need to build, and how we could do so, to actually set up a system in which open source LLMs, open source AI, can thrive. Thank you very much.

Ihita Gangavarapu: That's a very detailed question. So maybe we could take one more comment or question before we let the panelists answer. And, Joerg, if you have anything from the online participants, feel free to unmute.

Audience: We had a question a while back, and I think it was related to what Abraham was saying: how do we ensure cultural nuances from Africa are included in these models, and that sovereignty is maintained? That's all I have online.

Ihita Gangavarapu: OK, perfect. Thank you. I think for the second question, Abraham did answer quite a bit. But if we can have any comments on the first question, please, from the panel. Would you like to go?

Daniele Turra: Yeah, I'll try to be very brief. One key difference we can see in open LLMs, compared with other types of open source software, is that by their nature they need lots of computing power to train. This is not the same for other products, which are developed iteratively over years and, in a sense, can grow without much dedicated compute. But when it comes to training, we need GPU infrastructure, and that, again, doesn't come cheap. So my actual proposal, as I was saying, is to make sure that whoever has the resources, whether a private sector institution or a public sector research center, has mechanisms in their governance systems to allocate at least a percentage of that power to the development of underrepresented communities or languages, for example. I brought a few proposals earlier during my comments, but the general takeaway message I'd like you folks to leave with is this: let's try to redistribute and better share that computing power.

Abraham Fifi Selby: OK. All right. Thank you. I like the question asked about data sovereignty in Africa. The context is that Africa is now growing in terms of the digital landscape and economy, and data sovereignty is something we cannot leave out, because of how data is stored and collected, and because there are gaps in the policies and regulations around data. In terms of building in the global south context, there must be established funding mechanisms that can help, in terms of grants, private or public partnerships, and also investment that supports open source AI development within the global south, by connecting researchers and innovators to the advanced countries of the global north, who can also bring that knowledge back to the global south for development. This is what I would say. It has been a very useful session, and I really appreciate the experts and the questions that have been asked. We can all build that together in that context.

Ihita Gangavarapu: Thank you very much. Bianca, over to you, please.

Bianca Kremer: Thank you. Very briefly, I'm less optimistic. I do believe that to address competition and the advancing economic power of these digital platforms, we need hard law; we need regulation. Of course, I'm from law, so maybe I'm biased. But in Brazil, we have been discussing the AI bill and data protection laws for the last six years, and I do believe that without hard regulation of these topics, and without the participation of government in the development, industrialization, or deindustrialization of our countries and our participation in the economy, we won't move forward in our own development, not only as countries, but also as economic partners in the global south. So this is why I'm not that optimistic: I do believe we need more enforcement, in terms of legal participation in this process.

Ihita Gangavarapu: All right. Thank you, Bianca. Melissa, if you can hear us, your closing remark, please.

Melissa Muñoz Suro: Can you hear me well? Okay. Well, I think we should focus on trust and collaboration, as Bianca was saying, as the foundation for deploying AI. This means prioritizing data ethics, being transparent about where the data comes from and how it is used, and ensuring it serves the public good in the end. We also need to invest in local capacity; I think that's the most important message I can leave with this panel. We need partnerships with universities and research institutions to develop talent and create culturally relevant data sets that truly represent the whole population, in the case of governments, for example. And also an invitation to invest in inter-regional collaboration, to build resources and infrastructure that make AI accessible for all. Inclusive AI should be at the center of our efforts, ensuring that technology works for people, builds trust, and attracts sustainable investment in the end. That's my final thought. Thank you.

Ihita Gangavarapu: With this, we come to the end of the session. Thank you all for joining us. When we talk about democratizing access to AI, there is a spectrum of concerns that come into play, many of which our panelists have highlighted. So thank you very much for your inputs, and thank you all for joining us. We hope that you carry forward these deliberations and come up with great recommendations in the halls of the IGF and after. Thank you.

I

Ihita Gangavarapu

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 second

Open source enables broader access and participation in AI development

Explanation

Open sourcing AI technologies makes them accessible to a wider range of people, not just large corporations or governments. This democratization of AI allows small businesses, educators, researchers, and organizations from diverse backgrounds to benefit from and contribute to AI development.

Evidence

The speaker mentions that open sourcing can reduce costs and foster innovation by enabling more stakeholders to participate in AI development.

Major Discussion Point

Impact of Open Source on AI Innovation and Industry

Agreed with

Daniele Turra

Melissa Muñoz Suro

Abraham Fifi Selby

Agreed on

Open source AI enables broader access and innovation

D

Daniele Turra

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 second

Open source models can reduce costs and foster innovation

Explanation

Open source AI models allow for free use, modification, and improvement of the technology. This can lead to reduced costs for AI development and implementation, while also encouraging innovation by allowing more people to contribute to and build upon existing models.

Evidence

The speaker discusses the four freedoms of open source software: freedom to use, study, redistribute, and modify code.

Major Discussion Point

Impact of Open Source on AI Innovation and Industry

Agreed with

Ihita Gangavarapu

Melissa Muñoz Suro

Abraham Fifi Selby

Agreed on

Open source AI enables broader access and innovation

Significant computing resources still required to train large models

Explanation

Despite the benefits of open source AI, training large language models requires substantial computing power. This presents a challenge, especially for smaller organizations or those in regions with limited resources.

Evidence

The speaker mentions the need for GPU infrastructure and the high costs associated with it.

Major Discussion Point

Challenges and Limitations of Open Source AI

Need for clear definitions and licensing of truly open source models

Explanation

The speaker emphasizes the importance of protecting the correct definition of open source software. This includes ensuring that models labeled as open source truly embody all four freedoms of open source software.

Evidence

The speaker discusses different types of AI model licensing, including fully open source, open weights, and closed source models.

Major Discussion Point

Governance and Regulation of Open Source AI

Differed with

Bianca Kremer

Differed on

Role of regulation in open source AI development

M

Melissa Muñoz Suro

Speech speed

156 words per minute

Speech length

1346 words

Speech time

516 seconds

Open source allows customization for local needs and languages

Explanation

Open source AI models enable countries to develop systems tailored to their specific needs and cultural context. This is particularly important for addressing local challenges and preserving linguistic diversity.

Evidence

The speaker discusses the development of Taina, an open source AI system in the Dominican Republic designed to make government services faster, smarter, and more personal.

Major Discussion Point

Impact of Open Source on AI Innovation and Industry

Agreed with

Ihita Gangavarapu

Daniele Turra

Abraham Fifi Selby

Agreed on

Open source AI enables broader access and innovation

Lack of infrastructure and expertise in developing countries

Explanation

Developing countries face challenges in implementing open source AI due to limited infrastructure and technical expertise. This includes a lack of powerful computers and GPU clusters needed to run these models effectively.

Evidence

The speaker mentions the difficulty in justifying expensive equipment purchases in countries with competing priorities like education and healthcare.

Major Discussion Point

Challenges and Limitations of Open Source AI

Agreed with

Abraham Fifi Selby

Agreed on

Challenges in implementing open source AI in developing countries

Ongoing costs of maintaining and scaling open source systems

Explanation

While open source AI may seem cost-effective initially, there are significant long-term expenses associated with maintaining and scaling these systems. This includes upgrading hardware, addressing issues, and accommodating user growth.

Evidence

The speaker states that open source AI isn’t something you just turn on and forget about, emphasizing the need for ongoing investment and planning.

Major Discussion Point

Challenges and Limitations of Open Source AI

Agreed with

Abraham Fifi Selby

Agreed on

Challenges in implementing open source AI in developing countries

Importance of data sovereignty and local control of AI systems

Explanation

The speaker emphasizes the importance of maintaining national control over AI tools, systems, and data. This ensures protection of public access and citizen privacy while aligning with national priorities and values.

Evidence

The speaker mentions the Dominican Republic’s national AI strategy, which includes achieving technology and data sovereignty as a core principle.

Major Discussion Point

Governance and Regulation of Open Source AI

Invest in local capacity building and talent development

Explanation

To promote inclusive AI development, there is a need to invest in building local capacity and developing talent. This involves partnering with universities and research institutions to create culturally relevant datasets and AI solutions.

Evidence

The speaker mentions their partnership with universities in the Dominican Republic to develop talent and create culturally relevant datasets.

Major Discussion Point

Strategies for Promoting Inclusive AI Development

Prioritize data ethics and transparency

Explanation

The speaker emphasizes the importance of prioritizing data ethics and transparency in AI development. This includes being clear about data sources, usage, and ensuring that AI serves the public good.

Major Discussion Point

Strategies for Promoting Inclusive AI Development

A

Abraham Fifi Selby

Speech speed

141 words per minute

Speech length

1408 words

Speech time

598 seconds

Open source democratizes AI development in regions with limited resources

Explanation

Open source AI enables regions with limited resources, such as Africa, to participate in AI development. It allows for innovation at a lower cost, benefiting startups, researchers, and young people who may struggle to secure funding for AI projects.

Evidence

The speaker mentions that open source systems help young people in Africa bring out innovation because they can access these systems at a very low cost.

Major Discussion Point

Impact of Open Source on AI Innovation and Industry

Agreed with

Ihita Gangavarapu

Daniele Turra

Melissa Muñoz Suro

Agreed on

Open source AI enables broader access and innovation

Need for high-quality local data to improve models

Explanation

To improve AI models for specific regions, there is a need for high-quality local data. This includes data on local languages, cultural nuances, and specific needs of the region.

Evidence

The speaker discusses the importance of feeding data on African languages and local needs into AI systems to improve their relevance and effectiveness in the African context.

Major Discussion Point

Challenges and Limitations of Open Source AI

Agreed with

Melissa Muñoz Suro

Agreed on

Challenges in implementing open source AI in developing countries

Potential for public-private partnerships to support open AI development

Explanation

The speaker suggests that public-private partnerships could help support open AI development in regions like Africa. This could involve collaboration between governments, private sector entities, and international organizations.

Evidence

The speaker mentions the need for established funding mechanisms, including grants and public-private partnerships, to support open source AI development in the Global South.

Major Discussion Point

Governance and Regulation of Open Source AI

Foster regional collaboration to share resources

Explanation

The speaker emphasizes the importance of regional collaboration in AI development. This involves sharing resources, knowledge, and infrastructure among countries in the Global South to advance AI capabilities collectively.

Evidence

The speaker suggests connecting researchers and innovators in the Global South with advanced countries in the Global North to bring knowledge back for development.

Major Discussion Point

Strategies for Promoting Inclusive AI Development

B

Bianca Kremer

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 second

Open source can help address biases in AI models

Explanation

Open source AI models allow researchers and organizations to identify and address biases in the technology. This is particularly important for tackling issues like racial and gender bias that may be present in proprietary models.

Evidence

The speaker mentions a case where a chatbot generated an image of a black woman holding a gun when given the prompt ‘black woman in favela’, despite no mention of weapons in the input.

Major Discussion Point

Impact of Open Source on AI Innovation and Industry

Ensure cultural relevance and representation in datasets

Explanation

The speaker emphasizes the importance of including diverse cultural perspectives and languages in AI datasets. This ensures that AI models are relevant and effective for different cultural contexts.

Evidence

The speaker mentions projects in Brazil developing open source technologies in Portuguese to address local needs and cultural nuances.

Major Discussion Point

Strategies for Promoting Inclusive AI Development

Call for hard regulation to address competition issues

Explanation

The speaker argues for the need for strong legal regulation to address competition issues in the AI industry. This is seen as necessary to ensure fair participation of Global South countries in the digital economy.

Evidence

The speaker mentions Brazil’s recent discussions on AI legislation and data protection laws over the past six years.

Major Discussion Point

Governance and Regulation of Open Source AI

Differed with

Daniele Turra

Differed on

Role of regulation in open source AI development

Agreements

Agreement Points

Open source AI enables broader access and innovation

Ihita Gangavarapu

Daniele Turra

Melissa Muñoz Suro

Abraham Fifi Selby

Open source enables broader access and participation in AI development

Open source models can reduce costs and foster innovation

Open source allows customization for local needs and languages

Open source democratizes AI development in regions with limited resources

The speakers agree that open source AI models promote wider access to AI technologies, foster innovation, and allow for customization to meet local needs, particularly benefiting regions with limited resources.

Challenges in implementing open source AI in developing countries

Melissa Muñoz Suro

Abraham Fifi Selby

Lack of infrastructure and expertise in developing countries

Ongoing costs of maintaining and scaling open source systems

Need for high-quality local data to improve models

Both speakers highlight the challenges faced by developing countries in implementing open source AI, including limited infrastructure, lack of expertise, and the need for high-quality local data.

Similar Viewpoints

Both speakers emphasize the importance of investing in local capacity building and fostering regional collaboration to advance AI capabilities in developing regions.

Melissa Muñoz Suro

Abraham Fifi Selby

Invest in local capacity building and talent development

Foster regional collaboration to share resources

Both speakers stress the importance of addressing biases in AI models and ensuring cultural relevance and ethical considerations in AI development.

Bianca Kremer

Melissa Muñoz Suro

Open source can help address biases in AI models

Ensure cultural relevance and representation in datasets

Prioritize data ethics and transparency

Unexpected Consensus

Need for regulation in open source AI development

Bianca Kremer

Melissa Muñoz Suro

Call for hard regulation to address competition issues

Importance of data sovereignty and local control of AI systems

Despite the general focus on the benefits of open source AI, both speakers unexpectedly agree on the need for some form of regulation or control to ensure fair competition and data sovereignty.

Overall Assessment

Summary

The speakers generally agree on the benefits of open source AI in democratizing access, fostering innovation, and addressing local needs. They also recognize common challenges in implementing open source AI in developing regions, including infrastructure limitations and the need for local capacity building.

Consensus level

There is a moderate to high level of consensus among the speakers on the main benefits and challenges of open source AI. This consensus suggests a shared understanding of the potential of open source AI to address global inequalities in AI development, while also acknowledging the practical difficulties in implementation. The agreement on these points implies a need for collaborative efforts and targeted investments to fully realize the potential of open source AI, particularly in developing regions.

Differences

Different Viewpoints

Role of regulation in open source AI development

Daniele Turra

Bianca Kremer

Need for clear definitions and licensing of truly open source models

Call for hard regulation to address competition issues

While Daniele Turra emphasizes the importance of clear definitions and licensing for open source models, Bianca Kremer argues for strong legal regulation to address competition issues in the AI industry.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement revolve around the role of regulation, the approach to addressing resource limitations, and the balance between open source benefits and implementation challenges.

Difference level

The level of disagreement among the speakers is moderate. While there is general agreement on the potential benefits of open source AI, there are differing perspectives on how to implement and regulate it effectively. These differences highlight the complexity of democratizing AI access across diverse global contexts and the need for nuanced approaches that consider both technological and socio-economic factors.

Partial Agreements

All speakers agree on the potential of open source AI to democratize access, but they differ on how to address the challenges of limited resources and infrastructure in developing countries.

Daniele Turra

Melissa Muñoz Suro

Abraham Fifi Selby

Significant computing resources still required to train large models

Lack of infrastructure and expertise in developing countries

Open source democratizes AI development in regions with limited resources

Takeaways

Key Takeaways

Open source AI models can democratize access and foster innovation, especially in developing regions

Open source enables customization for local needs and languages, helping address biases

Significant challenges remain around computing resources, infrastructure, and expertise for open source AI in developing countries

There is debate over whether regulation or open collaboration is the best path forward for inclusive AI development

Investing in local capacity building and regional collaboration is crucial for open source AI to benefit the Global South

Resolutions and Action Items

Invest in local capacity building and talent development for AI in developing countries

Foster regional and international collaboration to share AI resources and knowledge

Prioritize data ethics and transparency in AI development

Ensure cultural relevance and representation in AI training datasets

Unresolved Issues

How to effectively distribute computing power for open source AI development

How to balance open collaboration with the need for regulation in AI governance

How to ensure cultural nuances from underrepresented regions are included in AI models

How to create sustainable funding mechanisms for open source AI in developing countries

Suggested Compromises

Large tech companies could allocate a percentage of their computing power to develop AI for underrepresented communities

Combine open source collaboration with some level of government regulation and public-private partnerships

Develop shared, open datasets that include diverse cultural and linguistic information

Thought Provoking Comments

Open source has, in a way, a different history, especially free and open source software. … There is the freedom to use code, the freedom to study code, to redistribute it, and to modify it.

speaker

Daniele Turra

reason

This comment provides important historical context and defines key principles of open source, setting the foundation for the discussion.

impact

It framed the conversation around the core values and goals of open source, influencing how participants approached the topic of open source AI models.

We are building a strong foundation for Taina by collecting and organizing the data that we need basically to make it work. … This isn’t about collecting personal information at all. It is about understanding the way Dominicans communicate so the AI reflects our culture and our language.

speaker

Melissa Muñoz Suro

reason

This comment highlights a concrete example of how open source AI can be tailored to local needs and cultural contexts.

impact

It shifted the discussion towards practical applications and challenges of implementing open source AI in specific cultural contexts, especially in developing countries.

In Africa, getting funding for startup researchers in terms of developing AI systems is very hard, and we don’t have large systems, large data centers, so investments have to go through before we get that. This open source AI system is helping young people to bring out innovation because they can tap on such systems at a very low cost or very low rate so that they can improve upon development.

speaker

Abraham Fifi Selby

reason

This comment brings attention to the unique challenges and opportunities that open source AI presents for developing regions.

impact

It broadened the conversation to include perspectives from the Global South and highlighted the potential of open source AI to democratize access to technology.

So, if you’ve developed a model, you’ve made all that capital investment, and if it’s open source, which I understand Meta is halfway there, or maybe fully, maybe the first presenter would like to comment on this. But the point is you can have access to that. You don’t have to have made the investment; you can use it.

speaker

Audience member

reason

This comment challenges the notion that open source necessarily requires massive investment from all parties and highlights the collaborative nature of open source.

impact

It sparked a discussion about the true nature of open source and how it can be leveraged even by those without significant resources.

Building and developing these systems require more than just access to code. It demands robust infrastructure, technical expertise, high-quality data. And these are areas where developing countries like mine must focus to ensure successful implementation.

speaker

Melissa Muñoz Suro

reason

This comment provides a reality check on the challenges of implementing open source AI, especially in developing countries.

impact

It deepened the discussion by highlighting the complexities beyond just having access to open source code, leading to a more nuanced understanding of what’s needed for successful implementation.

Overall Assessment

These key comments shaped the discussion by broadening its scope from theoretical principles of open source to practical challenges and opportunities in diverse global contexts. They highlighted the potential of open source AI to democratize access to technology while also acknowledging the significant hurdles, especially for developing countries. The discussion evolved from defining open source to exploring its real-world implications, cultural adaptations, and the need for supporting infrastructure and expertise. This led to a more comprehensive and nuanced dialogue about the role of open source in AI development globally.

Follow-up Questions

How can we ensure cultural nuances from Africa are included in AI models while maintaining sovereignty?

speaker

Online participant (via Joerg)

explanation

This is important to ensure AI models are culturally relevant and don’t perpetuate biases against underrepresented groups.

What are the specific enablers of open source LLMs?

speaker

Audience member

explanation

Understanding these enablers is crucial for creating an environment where open source AI can thrive and compete with proprietary models.

How can we create common open datasets, especially for smaller languages?

speaker

Audience member

explanation

This is necessary to improve AI model performance for less-represented languages and cultures.

What concrete recommendations can be made for building systems to support open source AI?

speaker

Audience member

explanation

Practical guidance is needed to implement and support open source AI initiatives effectively.

How can computing power be distributed more equitably for AI development?

speaker

Daniele Turra

explanation

Addressing the disparity in access to computing resources is crucial for democratizing AI development.

What governance structures are necessary to manage open source models and prevent monopolization?

speaker

Ihita Gangavarapu (moderator)

explanation

This is important to ensure fair and equitable development and use of AI technologies.

How can we mitigate the risks posed by open sourcing AI, such as potential misuse or reduced incentives for large-scale investments?

speaker

Ihita Gangavarapu (moderator)

explanation

Addressing these risks is crucial for the responsible development and deployment of open source AI.

What funding mechanisms can be established to support open source AI development in the Global South?

speaker

Abraham Fifi Selby

explanation

This is necessary to ensure equitable participation in AI development from developing countries.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #77 The construction of collective memory on the Internet

Session at a Glance

Summary

This panel discussion at the Internet Governance Forum focused on the challenges of preserving collective memory in the digital age. Experts highlighted how the internet has fundamentally changed how memories are created, stored, and accessed. Key issues raised included the ephemeral nature of online content, with studies showing a significant percentage of web pages becoming inaccessible over time. Panelists emphasized the political and economic aspects of digital memory preservation, noting that curation decisions reflect power dynamics and monetary interests. The digital divide was identified as a major concern, with many countries, especially in the Global South, lacking robust internet archiving capabilities. Speakers discussed various initiatives to address these challenges, such as Brazil’s Grauna Project for archiving threatened websites. The discussion touched on the impact of emerging technologies like AI on collective memory, raising questions about data sovereignty and the authenticity of AI-generated historical content. Panelists stressed the need for more inclusive approaches to digital preservation that consider marginalized communities and indigenous languages. The conversation highlighted the complex interplay between memory, technology, and societal power structures, emphasizing the urgent need for comprehensive strategies to preserve diverse digital heritage for future generations.

Keypoints

Major discussion points:

– The challenges of preserving collective memory in the digital age, including issues of data storage, accessibility, and curation

– The unequal distribution of internet archiving efforts globally, with most concentrated in the Global North

– The political and economic aspects of memory preservation, including questions of whose memories are preserved and why

– The impact of new technologies like AI on collective memory and information retrieval

– The need for more inclusive approaches to digital memory preservation, especially for marginalized communities

The overall purpose of the discussion was to explore the complex challenges and implications of preserving collective memory on the internet, considering technological, social, political and ethical dimensions.

The tone of the discussion was largely academic and analytical, with speakers providing in-depth perspectives on various aspects of digital memory preservation. There was an underlying sense of concern about current inequalities and challenges, but also cautious optimism about potential solutions and the importance of addressing these issues. The tone became slightly more urgent towards the end as speakers emphasized the need for action on these topics.

Speakers

– Bianca Correa: Board member of the Brazilian Internet Steering Committee, PhD in law and technology

– Marielza Oliveira: Chair of the advisory board of the e-Government Institute at the United Nations University, former director of UNESCO Communications and Information Sectors Division

– Juliano Cappi: Manager of the Brazilian Internet Steering Committee Advisory Team, PhD in communications

– Ricardo Medeiros Pimenta: Coordinator of teaching and research at the Brazilian Institute of Information Science and Technology, professor at Federal University of Rio de Janeiro

– Samik Kharel: Journalist and researcher from Nepal

– Carlos Alberto Afonso: Director of NUPEF Institute in Rio de Janeiro, co-founder of Brazilian Internet Steering Committee

Additional speakers:

– Jean-Carlos Ferreira dos Santos: (role not specified)

– Tatiana Jereissati: (role not specified)

– Juliana Holmes: (role not specified)

Full session report

Revised Summary of Panel Discussion on Preserving Collective Memory in the Digital Age

This panel discussion at the Internet Governance Forum explored the complex challenges of preserving collective memory in the digital era. Experts from various fields discussed technological, social, political, and ethical dimensions of digital memory preservation.

Key Challenges in Digital Memory Preservation

1. Ephemeral Nature of Online Content: Bianca Correa highlighted the rapid disappearance of online content, emphasizing the need for robust archiving systems.

2. Selective Digitization: Marielza Oliveira pointed out that high costs lead to selective digitization and storage, potentially excluding important information.

3. Global Disparities: Carlos Alberto Afonso noted the lack of internet archiving capabilities in Global South countries, particularly in South America, the Caribbean, and Mexico.

4. Government Accountability: Ricardo Medeiros Pimenta discussed the issue of broken links and vanishing government websites, which poses challenges for maintaining public records and accountability.

5. Technological Obsolescence: Oliveira highlighted the problem of obsolete storage formats, emphasizing the need for continuous technological updates in preservation efforts.

6. Indexing and Searchability: Oliveira mentioned the challenges of making preserved content easily searchable and accessible.

7. Political Transitions: Afonso pointed out the risk of content disappearance due to political changes, particularly in government websites.

8. Real-time Backup: Afonso emphasized the challenge of real-time backup for archiving projects, especially for rapidly changing content.

Political and Economic Aspects of Digital Memory

1. Political Agenda: Pimenta framed memory preservation as a political agenda, suggesting that decisions about what to preserve reflect power dynamics.

2. Curation as a Political-Economic Process: Oliveira emphasized that curation decisions are shaped by political and economic factors, raising questions about whose memories are being preserved and why.

3. Monetization of Data: Oliveira noted that the monetization of data often drives preservation efforts, potentially skewing priorities.

4. Forensic Evidence: Afonso highlighted the potential use of archived content as forensic evidence in legal and historical contexts.

5. Government Accountability: Samik Kharel discussed the need for accountability in data collection and use by governments.

Emerging Technologies and the Future of Collective Memory

1. AI and Language Models: Kharel explored how AI and large language models are reshaping memory construction and access.

2. Algorithmic Governmentality: Pimenta raised concerns about the challenges of algorithmic governmentality in social existence and its impact on memory formation.

3. Rapidly Changing Technologies: Oliveira discussed the challenges of preserving memory in the context of constantly evolving digital technologies.

Proposed Solutions and Action Items

1. Developing technologies to mine the Common Crawl for preserving collective memory in Global South countries.

2. Building capacities of individuals to preserve their own meaningful memories online.

3. Increasing efforts to digitize older content still in paper formats or obsolete digital formats.

4. Improving indexing and searchability of preserved digital content.

5. Considering data sovereignty issues in storing and accessing preserved memories.

6. Creating a dedicated institution in Brazil for digital preservation, as suggested by Alex Moura.

7. Exploring the possibility of NIC.br taking on the challenge of creating a Brazilian Internet Archive, as proposed by Carlos Afonso.

Additional Points of Discussion

1. The Grauna project: Afonso discussed this initiative aimed at preserving indigenous languages and cultures online.

2. Preserving multilingualism: Oliveira emphasized the importance of maintaining linguistic diversity in digital preservation efforts.

3. The Tempora tool: Pimenta mentioned this platform for analyzing temporal aspects of digital content.

4. Public vs. Internet Memory: An audience question raised the issue of mismatch between public memory and what’s preserved online.

The discussion concluded with a recognition of the urgency and complexity of preserving collective memory in the digital age. Panelists emphasized the need for multifaceted approaches that consider technological, social, and ethical dimensions, as well as the importance of inclusive and equitable preservation efforts that represent diverse perspectives and experiences.

Session Transcript

Bianca Correa: Welcome to the workshop, the construction of collective memory on the Internet. As the IGF draws to a close, I believe this has been an intense and productive week for debates on Internet governance. You must be tired, but we have a very interesting discussion ahead that’s sure to energize and inspire you. I would like to introduce myself. My name is Bianca Correa. I’m a board member of the Brazilian Internet Steering Committee, and I hold a PhD in law and technology. And I would like to thank the audience, both online and in person, here in Riyadh. A special thanks to the expert panelists who have kindly agreed to share their ideas and thoughts on this topic today. The workshop, titled The Construction of Collective Memory on the Internet, will last for 90 minutes. To make the most of our time, we will follow this discussion format. Each speaker will have 10 minutes to present their ideas. After that, we’ll move to a question-and-answer session, prioritizing interaction with both the in-person and online audiences. Finally, the panelists will deliver their closing remarks. So let’s get started. Memory is a vast and complex topic. It becomes even more complex when we think about the relationship between memory and the Internet, in preserving memory, promoting social memory, and constructing memory itself. This workshop aims to foster a debate on the challenges of preserving memory in the digital environment. It seeks to explore how the Internet and digital technologies can serve as tools for preserving, promoting, and constructing online memory, especially in a context where much of our culture, social and political processes are mediated by and even originate on the Internet. Memory preservation on the Internet involves tackling issues such as preserving the integrity of information, countering disinformation, protecting the right to information, promoting underrepresented cultural heritage, preserving multilingualism and more.
We often say that, naturally, everything is on the Internet. But is everything on the Internet? Feeling frustrated at not being able to find information online seems to be becoming more and more common, whether it is a news page, a blog post, or a tweet. Content on the Internet can disappear for different reasons. Online materials can be deleted; vanishing information is a reality. A study conducted by the US-based think tank Pew Research Center suggests that a quarter of all web pages that existed at one point between 2013 and 2023 were no longer accessible as of October 2023. In most cases, this is because an individual page was deleted or removed from an otherwise functional website. For older content, this trend is even starker. Some 38% of web pages that existed in 2013 are not available today, compared to 8% of pages that existed in 2023. Around 23% of news web pages contain at least one broken link, as do 21% of web pages from government sites. News sites with high levels of site traffic and those with less are about equally likely to contain broken links. Local-level government web pages, and those belonging to city governments, are especially likely to have broken links. So, given this context, this workshop aims to address some questions. What are the challenges brought by the Internet and digital platforms to the preservation of collective memory? How do these new challenges relate to the promotion of information integrity, the protection of the right to information, the promotion of underrepresented cultural heritage, and other issues traditionally debated in the Internet governance field? We’ll start the discussion with Marielza Oliveira. She is online.
She is the chair of the advisory board of the e-Government Institute at the United Nations University and a former director of the UNESCO Communications and Information Sector’s Division for Digital Inclusion, Policies and Transformation, where she led support to member states to strengthen capacities for access to information, digital inclusion, digital transformation, and the protection of documentary heritage. Before I call Marielza, I would like to introduce our moderator. Unfortunately, I won’t be able to be here for the whole panel due to agendas crossing with other IGF panels, but I would like to introduce a very important person who will be moderating this panel on my behalf. He is Juliano Cappi. He holds a master’s and a PhD in communications from the Pontifical Catholic University of São Paulo. He is the manager of the Brazilian Internet Steering Committee Advisory Team. He coordinated the creation of the Center for Studies on Information and Communication Technologies, Cetic.br, the UNESCO Regional Center for Studies on the Development of the Information Society, and the Brazilian School on Internet Governance, the EGI. And, last but not least, I would like to mention Jean-Carlos Ferreira dos Santos, Tatiana Jereissati, and Juliana Holmes, without whom this panel would not exist. Thank you so much for your hard work on this topic. Marielza, thank you so much for being with us, our dear friend. The floor is yours.

Marielza Oliveira: Thank you very much, Bianca. Can you all hear me well? I hope so. Yeah, we do. Okay, great. Thank you. It’s so nice to see you again, and it’s nice to be with the CGI colleagues and all the colleagues around the room and on the internet who are participating and watching this panel. I think this is one of the most absolutely relevant topics that we could be discussing, because the internet is really changing the way we think about, record, and recall our own memories. It’s changing it completely, and it has done that from the very beginning. When we started accumulating information online, and when the first browsers came around, we stopped really thinking about memorizing things, because we could always find them on the internet. It was just, oh, you can Google it. The browsers literally became our collective memory of what was happening, except that the browsers, and the internet itself, don’t have everything that we have in our own minds. We digitize very selectively, though we have become less selective over time. And the internet actually changed the way that we record things, and artificial intelligence made a huge change in the process as well. The first step was essentially that we put things online. We created digital content online, and we digitized material. But now we have gone beyond just digitizing to actually datafying content, so that we can start searching and using content in a different way than before. There is a huge gap, a lot of disparity, on the internet in terms of who has compute capacity, who has the skill sets, and who can actually access the internet. In the beginning, it was even worse. Nowadays, we have about 70% of humanity online already.
It’s 5.6 billion, if I’m not mistaken, out of the 8 billion that exist. In the beginning, we had quite a few less people online, and therefore the content that was online was essentially the content that came from northern countries, from the US and from Europe, with a lot less content being recorded by other countries that had less compute capacity, less access to the internet, and so on. So we end up with, for example, 46% of the content that we have on the internet nowadays actually being in English, and very little content in other languages. We have 7,061 languages in use in the world, and actually fewer than 300 of those are in use online. Of course, we’re seeing quite a lot of effort to increase the number of languages that are active, that we can translate from one language to another. But still, the vast majority of the content that the internet has memorized in its 15.3 million websites is essentially from a subset of the countries that exist. And we digitize with a lot of disparity as well, like I was saying, because not all countries have the capacity, but also because the digitization process itself is a costly one. In the beginning, we used technologies that are nowadays quite obsolete. For example, I don’t know about you, but raise your hand if you have CDs. I have 400 CDs and no CD player anymore. It used to be that computers came with a CD player, and nowadays, if you ask for one, people go, why do you need that? We moved on from a technology that no longer exists, and a lot of what that technology stored was left behind. 
A lot of the archives that had already been digitized were lost, because this is no longer an accessible format for most computers. And just as that kind of format became obsolete, quite a lot of other formats became obsolete as well, from the very beginning. My first personal computer recorded things on tape, and those are gone. So we lost quite a lot of what was memorized and recorded in that kind of archive. And that’s not the only gap that exists in terms of storage. In terms of storage, collectively, we store in data centers less than 10% of what we actually produce in terms of information or content. I’m not even going to call it information, because a lot of it is not necessarily information; it’s content that we put on the internet. In 2010, about 15 years after the first browser was made available, we had two zettabytes of data online, a zettabyte being one trillion gigabytes, essentially. In 2020, we had 64.2 zettabytes online. And in 2025, five years later, we are expected to have 181 zettabytes of content. So in 10 years, we went from two zettabytes to 64, and now, in less than five years, we are going to multiply that by three. The amount of content that we produce, with the number of people online, is growing at an incredible pace. But storing this content is highly expensive and very selective. So what we have online is not necessarily what we have in storage in data centers, for example. And those are very expensive technologies.
And not only expensive in terms of the creation of the infrastructure itself, which is very costly, but also environmentally costly in terms of the water it drinks to cool the data centers and the energy it consumes to keep them working. So digitization is an incredibly expensive process, and the selection of what ends up stored is a continuous process that leads to a lot of what we produce being discarded. And that discarding is not necessarily done by us; it’s not a selection we make. It’s made by the organizations and the platforms that we use: what is worth keeping, and what gets thrown away, on a continuous basis. For every byte that we store nowadays, another byte has to be thrown away. So how do we select that? Organizations make that choice, and we end up not having access to a lot of information. We have the broken links that Bianca mentioned in the beginning. We have a lot of this loss of content that we used to store in the cloud, or in different types of systems that end up obsolete and discarded, and so on. But it goes beyond that. Digitization is a costly process, but datafication is a costly process as well. We need to be able to actually search this content; the vast amounts of content that exist have to be searchable to be accessible. They have to be more than just a record; they have to be a searchable record, and being searchable is a complex process as well. You have to datafy and index this content so that it can be accessed in different ways. The process of indexing is very complex too. It used to be, for example, that when we scanned a book, we took a picture of it. Essentially, it was a digital Xerox copy of that book. That’s not a searchable mechanism.
You just have a picture. Then we started using OCR technology, optical character recognition, which converted a page from being just a picture to machine-readable text. But even that, at some point, was not enough. You actually have to index it in different ways, finding keywords for a text, for example. So who decides those keywords? Who decides on what basis you access information on the internet? When we start looking, we find all kinds of issues with that. For example, I don’t know whether you’re familiar with the ImageNet dataset, which put a big set of pictures together, and somebody had to figure out a way of making this data searchable. So they started labeling the pictures, and the labeling became an issue that brought in all kinds of biases and discrimination. For example, it would look at the faces of people who are black or brown, faces that are not the typical blonde, blue-eyed northern ones, and label them in many derogatory ways: hypersexualizing women, for example, women of color, or labeling men of color with associations linked to criminality, and so on. That’s the kind of thing that ended up happening. And then, when we search for these images, when we try to recall the memories that these images encode, we end up bringing these biases in as well. So you have all kinds of issues with digitization, then with the process of datafication, and then you actually try to generate.
Nowadays, we use this vast amount of data to generate applications, for example generative AI: artificial intelligence, large language models, diffusion models, and so on. And those encode datafication mechanisms that are quite biased, quite disrespectful, actually, of different cultures, and they keep out content from cultures that are not represented. So we end up, with generative AI, with a set of collective memories on the internet, and particularly in data centers, that are not the memories we put in, not the content we put in. What comes out is a kind of pasteurized, amalgamated, average content that is not the memory of the world, nor really the memory of anyone, and it’s not respectful of cultural heritage and cultural provenance. But this is what we have online. And of course, generative AI generates content as well, and the generation of content by generative AI creates tremendous issues for the memory we collectively have on the internet. First, it hallucinates. It creates information, or content, about things that never happened and don’t exist. It doesn’t have any link to reality or to facts. It simply predicts the next image or the next word, and it ends up creating citations of books that don’t exist, pictures of events that never happened, and so on. And many are actually using generated images to illustrate episodes in history that had no photograph of them, that happened before photography was invented. So you now have pictures, which never existed, of an event. And those pictures are incredibly biased as well.
For example, one interesting thing is that generative AI just generates on the basis of what exists. There have been quite a few tests, for example, trying to generate images of black doctors treating white children in hospitals. That happens every day. But generative AI has enormous difficulty creating this kind of image. It makes it easy, though, to create, for example, images of Native Americans in the US, wearing traditional clothing and sitting around negotiating treaties with cowboy-dressed white men in the 16th century. That’s not accurate. And we end up with these images polluting our environment as well. So we have hallucination, which is the unintended creation of fact-free content, as I call it. Then you have the intentional creation of content that is also fact-free, not linked to reality. And then you have the malignant distribution of it, misinformation and disinformation, which is created with the intention to deceive. And we spit it all out onto the internet again, and we keep polluting our information environment, to the point that now, in this process of digitization, datafication, and use of information online, the biggest skill we need to have is the skill to verify: to ask, is this real? Is this true? And doing that is becoming more and more difficult, exactly because of the broken links and the disappearance behind paywalls of the content that is trustworthy, such as content from media organizations that have to charge for presenting this content in order to survive, unlike platforms, which present information to us and monetize it through ads and other such means.
So, yes, we live in a completely different world from when you could just Google it, now that search engines are using generative AI to hallucinate results and offer them to us, including as the first option. We don’t have the memories of humans anymore; we have content generated by computers being presented to us as the collective memory of the world. We need to be very cognizant of, we need to really understand, the impact this has on everything we do: the valuing of science, for example. If facts can be mixed up with non-facts, with fact-free content, what is the value of the trustworthy organizations that used to generate content for us, science, media, authorities, and so on? They are becoming less trustworthy as well, simply because we cannot differentiate between content that is part of our collective memory, fact-based, evidence-based content, and something that is being put on the internet by some artificial entity. So, just some provocation to start, because I think this is one of the most important topics we have. How do we preserve the validity, the reliability, of our information environment? This is the question we have for the next few years. It’s the most important question we could be discussing. Thank you.
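[Editor’s note] The storage figures cited above (2 zettabytes online in 2010, 64.2 in 2020, a projected 181 in 2025) imply a striking compound growth rate; the quick calculation below makes the arithmetic explicit. This is an illustration added in editing, not part of the talk; the numbers are taken from the transcript.

```python
# Sanity check of the storage-growth figures cited in the talk:
# 2 ZB (2010), 64.2 ZB (2020), projected 181 ZB (2025).

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over a span of years."""
    return (end / start) ** (1 / years) - 1

growth_2010s = cagr(2.0, 64.2, 10)   # 2010 -> 2020
growth_2020s = cagr(64.2, 181.0, 5)  # 2020 -> 2025 (projection)

print(f"2010-2020: {growth_2010s:.0%} per year")
print(f"2020-2025: {growth_2020s:.0%} per year")
```

Roughly 41% per year over the 2010s and about 23% per year since: even as the growth *rate* slows, the absolute volume added each year keeps climbing, which is the speaker’s point about storage costs and forced selectivity.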

Juliano Cappi: Thank you so much, Mariela. We should now pass to the next speaker, who is Ricardo Pimenta. So, Ricardo Pimenta, the floor is yours.

Ricardo Medeiros Pimenta: Thank you. Thank you, Juliano. So, good morning. I’d like to begin by thanking CGI for the invitation and also the Ministry of Science, Technology and Innovation of Brazil for the honor of representing it. To begin with, let me share a popular Yoruba saying from Brazil’s Afro-Brazilian culture: Eshu killed a bird yesterday with the stone he threw today. Eshu, as we know, a figure of movement and transition in Yoruba mythology, bridges the human and the divine, enabling communication and connecting them. This notion of interconnectedness reminds us that maintaining and developing connections in our digitized world is both our responsibility and a challenge, even when the connection is between past and present. In fact, in our current digital reality, these connections generate immense amounts of data and information, raising pressing questions. What should be preserved, and how do we distinguish the essential from the superfluous? The challenge of maintaining collective memory has grown exponentially. We now face a flood of disorganized and even lost data stored across countless devices, complicating retrieval and comprehension. For public policy, this issue is particularly urgent, given the unprecedented speed and volume of data production in the past three decades. So memory, as highlighted in the Yoruba saying, isn’t just about the past. It is actively constructed in the present. Remembering today shapes our understanding of yesterday, and memory itself is updated and rewritten in real time. In Brazil, the time has come to think about yesterday’s bird. Memory is a political agenda, not just a cultural one, and it should primarily unite public and third-sector institutions, so that it doesn’t end up being driven mostly by the market, leading to what Andreas Huyssen has described as a disneyfication of memory, which, through its overexploitation, would also invite us to greater collective and irremediable forgetfulness.
This has profound implications for public and collective memory in the digital age. We must approach it ethically, curating what is preserved while recognizing that not everything can be saved. Social platforms like Instagram or Facebook, for example, add complexity, as the content they host belongs to their owners, including disinformation and toxic narratives. This threatens the representations of our past and present, and meanwhile Brazil’s more than 5.3 million internet domains add to the vastness of this challenge every day. To tackle this, initiatives by institutions like IBICT, the Brazilian Institute of Information Science and Technology, where I am a researcher and currently the teaching and research coordinator, provide some valuable examples. First, I can say something about the Tainacan software. Tainacan is a software that digitizes and systematizes cultural collections from IPHAN, the National Institute of Historic and Artistic Heritage, and IBRAM, the Brazilian Institute of Museums, ensuring access to the collections of museums and memory institutions. This is one example developed at IBICT, inside the Ministry of Science, Technology and Innovation. The second is the Cariniana Network, a network that preserves over 700 open-access electronic journals, automating processes like storage and validation. The third is Arquivo.gov, a pilot project that archived nearly all Brazilian government websites in 2021, with plans for user-driven website collection and preservation, inspired by models like Arquivo.pt and the Internet Archive, though the experience of Arquivo.pt is the main reference for us. And the last is Tempora, a digital tool developed in a digital humanities laboratory at IBICT.
It is a platform for archiving and visualizing digital information in the form of a timeline. During the 2022 presidential elections, we stored publications from fact-checking agencies with the intention of creating a timeline of disinformation events and contributing to the memory of that event, in the midst of this disinformation fever we are experiencing globally. These efforts showcase IBICT’s potential leadership in preserving Brazilian internet memory, but broader challenges remain, particularly regarding who preserves the entirety of Brazil’s online presence and how storage limitations are addressed. The issue recalls the Argentine writer Jorge Luis Borges, who wrote Funes the Memorious, where the desire to remember everything leads to paralysis. Memory thrives in balance: between remembering and forgetting, recovery and erasure. The technological promise to store everything is illusory. We must curate what defines the memory of the internet, shaping what is remembered and what is not. To do this, two challenges stand out. The first, in my perspective, is about management. The challenge of memory today is its management, its control, in a scenario where space and time are atomized and the volume of information expands entropically. It invites a kind of Freudian death drive that intimately pushes us to confront it, to innovate, and generally to the vibrant creation of means, techniques, strategies, policies, and practices capable of making us overcome it, one day at a time. The second is about governance, a good one: one capable of circumscribing different actors able to decide what to preserve, and who makes those decisions. This isn’t just a technical issue, but a political and institutional one, requiring ethical, collaborative solutions.
Furthermore, if the object we are looking at is the internet, how will any proposal to preserve its memory be able to progress without thinking about the mechanisms that need to be aligned with the devices, actors, and institutions that regulate it? So, in my perspective, governance could play a singular role in keeping proper access to information and freedom of expression, without major ethical complications, for a transgenerational public and collective memory mediated by the information and communication technologies that are now in every part of our private and public daily lives. In closing, I return to the Yoruba saying: the actions we take today to preserve the internet’s memory will determine whether the bird was indeed killed yesterday or not. So thank you.

Juliano Cappi: Thank you so much, Ricardo. I just gave the floor to Ricardo without presenting him; I’m sorry, and I’m doing it just now. Ricardo is currently the coordinator of teaching and research in information science and technology at the Brazilian Institute of Information Science and Technology, and he’s a permanent professor at the postgraduate program in information science at the Federal University of Rio de Janeiro. Ricardo has been a full researcher at the Brazilian Institute of Information Science and Technology since February 2013. Sorry, Ricardo, and thank you so much for your insightful thoughts. Then I give the floor to Samik Kharel. Samik is a journalist and researcher from Kathmandu, Nepal, with over a decade of experience in reporting on contemporary issues for national and international media. He has contributed to leading research institutions focused on technology, ethics, and human rights. Kharel has received multiple international fellowships and grants, and he teaches critical thinking at university while exploring electronic music. Kharel, thank you for your participation. The floor is yours.

Samik Kharel: Hi, can you hear me? Yeah. So, thank you very much. Hello to everyone at the IGF in Riyadh, from myself, enjoying a sunny winter afternoon in Kathmandu, at least as of a couple of minutes back. I’m overwhelmed to be part of this esteemed panel. I would like to thank CGI for this wonderful opportunity to talk about collective memories in digital realms. I think it’s collective memories that have actually brought us together, our past activities that are on the internet. Although this is a very deep topic to dive into, I would like to start very generally and narrow in towards my own expertise and probably my geographic region as well. So, just an anecdote to start with. When I was very young, I was given a chalk and a slate by my parents, a formidable technology at that time for writing and learning one’s first alphabet. It was not very long ago, but it was three decades back. And I thought it was the most convenient tool, because I could scribble on it, write anything, and if I didn’t like it, I could erase it as well, because this tool was very ephemeral. So I don’t remember what I scribbled then, not many memories of it, except writing a few letters and maybe scribbling some Mickey Mouses and Donald Ducks. Past this phase, I was given a notebook and a pencil. Now I was told to have more structure: write between the lines, do this, do that, be more disciplined, and only erase the errors. A little later, a few years after the pencil, I was given a pen,
a more permanent tool: it was an idea which gave me more permanence, and what I scribbled stayed a little longer. There are no traces of my chalk-and-slate experiences, but even now, though I don’t find much else in my parents’ basement, I still find some scribbles of whatever I did with the pen and pencils. So, that’s how I would like to start. These were my memories, kept in shoe boxes in my parents’ basement, and probably you can relate to many of these contexts as collective memory yourself. We have a tendency to save and retrieve our memories as desired, and memories play a huge role in the construction of our identities. Fast forward to my teens: we get a computer with a little access to the internet. A little later, things were more restricted; I was being watched by my guardians, told to go here and not there, probably being logged, and having my history checked. Compared to the more analog past I had, the internet seemed to make everything present; even the past was so well weaved into the present that everything now felt like a block. This is likely because, at present, our memory function is increasingly organized via media systems, specifically digital media, which have become very integrated. This integrated media system internalizes the main functions of cultural memory, and it has become a focal point of the documentation of the past and the present. For example, now I use Google Photos, and it gives me memories from seven years back: you were in the ocean then, and today you’re in the ocean.
So, I mean, you’re doing well, or I don’t know. This is how you tag your memories with these tokens. So, coming again to an adage, as they say: the internet never forgets, but people do. And when people do, the internet actually, rightly, reminds you again that you have not forgotten. So now, with internet and digital technologies, in particular internet and web-based information and communication technologies, our collective memories are formed and shaped during the digital era. Internet systems have enabled a kind of democratization of memory: everyone with basic technology, internet, and devices can produce their own content and promote it on the web. But the big part is the many who have been left behind, who, lacking even basic technologies and infrastructure, are not able to do so. My country, the region, and the majority world chronicle these digital divides, which measurably still affect already vulnerable and marginalized populations. This region witnesses patriarchy on the internet as well, as the majority of narratives and discourses are still male-dominated; all these narratives and discourses coming from political institutions, parties, and universities are still very patriarchal. That’s what I feel. The same population which did not have cameras, books, access to libraries, information, newspapers, access to education, or basic health care is the same population who don’t have access to the internet, which is really sad. Their memories have never been documented; rather, sometimes they’ve been part of subaltern narratives, seen by others and brought out to the world. While this divide is closing, with more access to technology, the debate on what we call meaningful, uninterrupted access still lingers.
That’s where we stand in this region, particularly Nepal, India, Bangladesh, Sri Lanka, and the rest of South Asia. Where we lag is that, while social media helps form collective communities, where you have people who play games, different interest groups who don’t have to be face-to-face, these vulnerable populations are still left out of the discourse. They don’t know what’s happening, where they are, where they stand in this technological world, which was supposedly principled on democratization and participation in collective memory-making. Coming forward to the processes of creating, storing, managing, removing, and manipulating digital data: let’s talk about public data and collective memory. Where we stand right now in the digital age, collective memory is often intertwined with the data we generate, from the photos we post online to the interactions we have on social media. This raises concerns about who controls our public collective memory, how it is used, and whether it is subject to manipulation. Most likely, we are very vulnerable when it comes to governments using our data. With the lack of comprehensive data policies, mostly in this part of the world and elsewhere too, there’s a lack of accountability. While governments have been proactively using available technologies to collect data from citizens, there has been less accountability about where this data is being used, where it is being stored, for what purposes, for how long, how it will be used, and in what cases it will not be used. There have been no accountable answers to these questions. There have been several breaches and leakages of data and personal information; to give one example, election data that was collected was breached and used for other things. The government is yet to realize the value of people’s data and be accountable for it.
There is also data being collected for one purpose and used for another: you use national demographic population data for something else, you give it to marketeers and corporate houses for their own benefit. So that is another problem. There are also sensitive cases of collected data being procured by other countries, because we don’t have the expertise to manage our own data, which leaves us in a very vulnerable position in the absence of a comprehensive data law. Then again, there’s the trustworthiness of social media platforms, which have been pretty active in most of these countries. We use social media platforms in our day-to-day activities, from our information sources to our businesses; we tend to use the big social media as a vital tool for our information and even for our businesses, but no one questions their trustworthiness. Governments have tried to grip the social media companies, in this part of the world and in other places as well, asking them to work in coordination, filter harmful data that goes against national integrity and national interest, and establish a focal communication person so that governments can actually be in touch with these companies. A few companies, like TikTok, which was banned in Nepal and some other countries in South Asia, have adhered to the government’s proposal, established a focal person, and worked with the government on data breaches, but it is still in a very nascent phase. TikTok’s ban by the government of Nepal has been lifted after the company agreed to set up its centers and comply accordingly.
Also, misinformation on the platforms is ever increasing; political parties and political wings are using the internet and social media to change narratives in abundance, which happens everywhere, especially during a crisis, whether elections, natural disasters, or pandemics. Whitewashing, smear campaigns, conspiracy theories: all of these are forced onto collective memories. At the same time, memories shared publicly on social media have also been very crucial during natural disasters and pandemics. I’m not saying it’s all bad; there are good things as well. During the recent floods, the use of social media, and the posts made by citizens, actually helped rescue many people. Also, coming back as a journalist, I need to bring this together: the best example of consolidated open source, as we say, is Wikipedia, which does not conform to historical recording practices. However, the internet as a whole and social media are also a great tool for open source. As a journalist reporting with limited resources from this country, not being able to travel everywhere on foot, I think open source has been very crucial to my coverage of very sensitive issues. It gives me multiple perspectives, angles, diverse ideas, and approaches to reporting. I think it’s a marvel for modern news journalism, if you know how to use it. So, the future: I have been closely following the LLMs and, as Mariela also pointed out, how they are going to herald a new ecosystem for collective memory. Whether this is going to be the future of collective memory is a question. Generative AI in particular seems to have taken a technological leap, building new infrastructures for memory, while it also enables the combination of diverse memories and counter-memories. LLMs are now being used to memorialize and chat with historical figures and philosophers, bringing them back from past lives.
There’s this Silicon Valley idea of long-termism and memorializing someone, so you can talk to Rousseau even though he died, I don’t know, a few hundred years back. The Rousseau chatbot becomes more dynamic in engaging public memory with all the interactions with other people. Quite exciting times: even saturated discourses are likely to become dynamic again. So, while AI could be the future of collective memories, it is crucial to ensure the participation of marginalized communities from the global South, in progress towards inclusion, multilingualism, and multiculturalism. That’s what I think. We cannot be left behind, and our already vulnerable communities are getting more vulnerable with the lack of internet and connected infrastructure. So I would like to end there, and I would like to discuss more. Thank you.

Juliano Cappi: Thank you so much, Samik. As we are advancing towards the close of the session, I go straight to Carlos Afonso. Carlos Afonso has a master’s degree in economics from the University of Toronto and did doctoral studies in social and political thought at the same university. He has worked in the human development field since the early 70s. He is co-founder of the Association for Progressive Communications, APC. He coordinated the Eco92 Internet project with APC and the United Nations. He was a member of the United Nations Working Group on Internet Governance. He is a special advisor to the Internet Governance Forum. He was, in 2007, a member of the UNCTAD Expert Group on ICT and Poverty Alleviation. He was a member of the UNCTAD Working Group on Enhanced Cooperation. He was a member of the Multistakeholder Advisory Group of the IGF. He is co-founder and member of the Brazilian Internet Steering Committee. He is co-founder and chair of the Brazilian chapter of the Internet Society. Finally, he is a director of the NUPEF Institute in Rio de Janeiro. The floor is yours.

Carlos Alberto Afonso: Good morning. Are you hearing me?

Juliano Cappi: Yes, we hear, but with a little bit of noise, but yes.

Carlos Alberto Afonso: Let me see if I can switch.

Juliano Cappi: I’m sorry, we are having difficulty to listen.

Carlos Alberto Afonso: Yes, can you hear me now?

Juliano Cappi: Yes, yes, perfect, perfect, great.

Carlos Alberto Afonso: Okay, thank you. Thank you. Well, good morning. Or is it still morning there? No, it’s not. It is, yes. It’s five in the morning here. Well, you are probably looking at a map from Wikipedia, which I posted there. And the map, as most maps are, is distorted, benefiting the Northern Hemisphere, so the Northern Hemisphere shows much bigger than the Southern Hemisphere. But the important thing is that the countries painted green are the countries which have significant internet archiving services, like the Internet Archive, like many other efforts to archive the internet. In the countries below the equator, which take in most of South America, and also the Caribbean and Mexico, there is no indexing of the internet. When I say there is no indexing, I mean there is no significant indexing, nothing worth mentioning. There are experimental ones; we are a small institute, and we are doing a project like that, but it’s too small to figure on the map. In Africa, you have only one country with an important internet indexing, web indexing service, which is Egypt. And why Egypt? Because they have the Library of Alexandria, which does internet archiving. Wonderful, no? But it’s only Egypt in the entire Africa. In the Southern Hemisphere, you have only Australia and New Zealand doing significant internet archiving. So this is a major challenge for the southern countries, the so-called Global South, and we need to address it, because we are losing a lot of information; as other speakers mentioned, the information on the internet is anything but eternal. It disappears, and many government sites disappear when political issues arise. This happened recently in Brazil: several sites almost disappeared. We are trying in Brazil; there are initiatives, but not at the scale that could be present on that map.
But there are initiatives trying to do something, and one of them is from our small institute: the Grauna Project. Grauna is a bird, a bird with tremendous resistance to environmental challenges and so on. It’s also the name of a famous cartoon character in Brazil, which represented impoverished people in the northeast of Brazil. So we use the name Grauna for our project of indexing the Internet in Brazil. It has two components. One of them is indexing, based on the technologies used by the Internet Archive and by Arquivo.pt, which is the major indexer in the Portuguese language but indexes only Portugal, not Brazil, and by several others which use open-source, reproducible technology to index the Internet. The Grauna project also includes a local server, a very small server, a small box which you can carry with you anywhere, which has a copy of many information systems to be used in remote communities with poor or no connectivity to the Internet. So they have a reproduction of Wikipedia in Portuguese, for instance, in this box, and several other information facilities. This is also part of the Grauna project, no? What we are doing right now: the project is in an experimental phase, trying to protect content relevant to democratic processes, which is a potential target of hacker attacks, censorship, political pressure, or which eventually cannot be backed up satisfactorily. The Grauna Archive stores websites selected using a methodology that prioritizes qualitative interviews and analysis of the political scenario. It is very experimental. In this experimental phase, some priority areas are defined, like environment, health, culture,
human rights. We have defined in principle 18 thematic areas to index, and the challenges we are confronting are quite interesting. We had to do it to understand why people are not indexing the Internet, and now we know: it’s very difficult, a big, big challenge. We have created several interesting features in the system for archiving, like the ability to belong to a group of users, for example if a research group wants to have multiple users creating archives for the same project in the system; the ability to schedule recurring archiving to maintain different versions; and the display of archiving date and time, which is typical of the major Internet archives. And we have defined, to begin, 18 themes, from culture to government, racial equality, gender, elections, communication, etc. In recent years, there have been several cases of removal or alteration of the content of public information, as well as deliberate attacks on web pages. There are also frequent reports from civil society about greater difficulty in accessing previously available public information. Despite some relevant experiences in the academic field, for instance an indexing initiative at the Federal University of Rio Grande do Sul, Brazil still lacks permanent projects aimed at archiving the web on a scale compatible with the breadth and reach of the Internet in Brazil. The disappearance of information in all elections, due to poor management or incorrect application of electoral law, is an issue which has to be considered. Grauna started in 2018, and we managed to get some funding from the Open Society Foundations, the Media Democracy Fund, and others to help us start the project. We have support from NIC.br with equipment and from the National Research Network, which provides connectivity to our project. And we conducted about 60 interviews about threatened websites, relevant content, and the security of their own websites.
And we also had a legal context document, prepared by our lawyer, regarding the archiving of content which may be challenged by the actual owners of that content. This is a challenge that has to be contemplated in these projects. In 2022 and 2023, we improved the infrastructure to ensure the necessary conditions for the system to run securely. Part of that is providing almost real-time backup of the system, which is a major challenge: if the main data center fails, you have to have a backup running almost immediately. That is also a challenge that has to be contemplated. We initially had 18 themes active, with 227 archived sites and more than 100 .gov.br government sites archived. That was especially important because there was a political transition in Brazil in which many of these government sites were challenged or disappeared. The scale of the indexing is much smaller than the Internet Archive and others, and at this stage the project is specifically aimed at preserving content at risk for several reasons. It’s also an experiment that seeks to address the challenge of indexing content that is publicly available but often extremely difficult to capture, for several reasons: the use of increasingly complex technologies, frequent changes in the technologies used, huge databases, sites with multiple depth levels, and many other challenges. The possibility of archiving that is not made public is one of the features we managed to build into the system, which is useful, for example, for storing sites that promote disinformation, which we do not want to multiply but can preserve. There is still controversy, however, about the use of archiving as forensic evidence. Dialogues about the preservation of web content have been happening in sessions and meetings of academics and other interest groups in Brazil since at least 2019, and we are discussing this now here at the IGF.
And in the IGF, there is, I understand, an intersessional initiative, a policy network dedicated to highlighting best practices for preserving and creating local content. A major challenge, for instance, is indigenous languages, which are especially a challenge in our region. We are finishing the first version of the software of the system, the RULA, which will be available on GitHub for free development, use, and application by other organizations and also by public authorities. And we are organizing a permanent curation team, or committee, to preserve more sites and review the archiving criteria, which is a big challenge: the criteria for archiving, which were mentioned here by, I think, Marielza. Next steps are to advance public debate on the formats for archiving, which have to be compatible with several library and other standards; advance the debate on the authenticity of archives in WARC format so that they can constitute evidence; establish support partnerships to advance the development of the project; train people to perform archiving; hire a team to perform more complex or large-volume archiving; further improve the usability of the tool, which is already online, by the way; keep the system up to date in light of the constant transformation of the web; and expand the infrastructure to increase processing and storage capacity. Preserving content in any language is a complex challenge. Brazil currently has more than 300 indigenous ethnic groups with more than 270 languages, all of which are at risk of disappearing, and with them, an entire culture disappears. Similar challenges occur in Latin America and other countries, in the Portuguese-speaking countries and so on. How Internet resources can be used to support the preservation and continuity of these languages and cultures is a big challenge. That’s it. Thank you. I talk too much. Here, the address of our institute is nupef.org.br. I will put this in the chat.
And the address of the Grauna project is grauna.org.br. I’ll put there as well. Thank you.

Juliano Cappi: Thank you so much, Carlos Afonso. We have one question from the online audience. I would ask if someone here in the room would like to ask a question. We have a question here.

Audience: Hi, I am a researcher based in Germany. I would like to ask Ricardo, can you hear me? You mentioned the link between memory, or collective memory, and the political agenda and the elections in Brazil. Can you give us some examples to elaborate how collective memory has been used or has impacted the results of the elections in Brazil? I also have a question for the journalist from Nepal. Can you give us some examples of the relations between whitewashing and collective memory? And if possible, could you give us examples from Nepal? And I would like to ask, I’m sorry, but I forgot the name of the first speaker, the only female speaker in the room. Okay, okay. If you can hear me: actually, I’m also working on collective memory. When it comes to the people who died in famines and natural disasters in the past, I couldn’t find the number of dead female bodies, just because in the past only the dead bodies of household males were counted. What would you suggest I do to counter this challenge? And I think that is also a question for all the panelists. There was no data on certain issues in the past. When you apply for funding, when you talk to your editors, when you talk to your bosses to convince them of your research proposals, they will ask you: where’s your data? Where do you get data from? In case there is no data due to historical injustices, what would you do? Thank you.

Marcelo Ferreira da Costa Gomes: Hi, I’m Marcelo Ferreira from the Oswaldo Cruz Foundation and CGI.br. Thank you for the very interesting interventions. I was thinking about them: you mentioned public institutions, NGOs, or civil society institutions looking for memory, and open-sourcing initiatives, also very interesting. But I find there is a lack of business interest in memory, from companies. Compared to what Marielza said about the expensiveness of technology and of indexing, we see in business today people saying that cloud storage and cloud processing are cheap. So what I feel is that we have technology that is cheap for business interests, for producing products and private services, and expensive for memory. I’d like you to comment on that, because we see that technology for private interests is cheap and available, but when you think of the public interest, there is no market interest. And I’m not thinking only of states, but of the public, the common goods and the public interest. We don’t have investments from states or even from business. I’d like you to comment on this difference between the access and availability of technology for private interests and for public interests like memory. We have a hard way to do that.

Alex Moura: Hi, I am Alex Moura, originally from Brazil. I work here in Saudi Arabia, currently at the Cal State University. And I have a question for Carlos Afonso. As I have worked in the past at RNP, the Brazilian academic network, I am aware of the challenges that happen in the science and education area, where people struggle to also preserve data for scientific and educational purposes, in universities and research institutions. And this brought me a recollection that this is an open problem in Brazil: we don’t have a specific institution for storage or digital preservation. So how are you tackling this part of the problem, the storage capacity for the Grauna project? And what are your thoughts on how Brazil and other countries can address the problem of storage capacity for many purposes, not only for internet memory, but also for science, education, culture, the arts, etc.?

Bianca Correa: We have one question that we received from the online audience. I think I will also pose this question, so that then we can give the floor to all the speakers to answer. Dr. T. V. Gopal from Anna University in Chennai, India, asks: public memory is short; internet memory is seldom so. Any solutions for the mismatch hazard in the geopolitical space?

Juliano Cappi: Well, we have very good questions and very short time. I would ask the panelists to make their final remarks, trying to address the questions, which are very important and interesting, but I also have to ask you not to go beyond three minutes, because we will have to close the session very soon. We could start from the back with Carlos Alberto Afonso, then Samik, then Ricardo Pimenta, and then Marielza. Please, Carlos Afonso, the floor is yours.

Carlos Alberto Afonso: Thank you. I’ll be very brief; a good question. I recall that the Grauna project is still an experiment, exactly to measure the difficulties which you mentioned, among others. For instance, backing up in real time is a tremendous challenge; the cost of doing that at a big scale is already very expensive. That’s why we restricted the breadth of the information that the project can capture to mostly civil society organizations’ web information. On the basis of this experiment, we will try to progressively expand. But, of course, this means more storage, more memory, and more backup, and the challenge is tremendous. Our idea with the project is also to provide a small but useful reference for an organization that could tackle the challenge in full and really build a Brazilian Internet Archive. And I have to say that one of the organizations that has the resources to do that, especially technical resources, is NIC.br. We do hope that they consider this in the future. Thank you.

Juliano Cappi: Samik, please.

Samik Kharel: Hi, thank you. I would just like to address the question from the lady in Germany. She asked about parties, memory, and whitewashing, I think. It’s been a common trend for major political parties in Nepal and the region to deploy what we call the cyber army. What they do is look around the Internet and, if there is any criticism about them, or any critical discourse about them, they document that and go make a counterargument against it, to make their image better. So it’s very common for them to do that these days, and to inject populist ideas and go against whatever is trendy. That’s how it works. Anyway, finally, speaking of collective memory: with the ubiquity of the Internet, the way we access, store, discover, and retrieve these collective memories has changed with emerging technologies. The way we interact with our memories has changed, and I think it will keep on changing with the advent of LLMs and generative AI. Mainly, social media and platformization have also augmented new ways of approaching these memories by allowing us to actively contribute to them, making collective memories more interactive and collaborative now. However, we should be careful to ensure everyone has equal access and infrastructure for this. There should be accountability for our data, and the future should be shared, equal, and democratic, and bring together all marginalized and vulnerable populations of the majority world. Thank you.

Juliano Cappi: Thank you so much, Samik. Luis, Thiago, Ricardo Pimenta, I’m sorry.

Ricardo Medeiros Pimenta: Okay, so I’ll try to answer briefly. About the question on elections: memory is always a place of struggle, political struggle. Many people talk about the cultural side of memory, and that is fine, it exists, obviously. But even the cultural side of memory, if we can talk about it, is a result of political struggles, struggles about power. So how it impacts elections is precisely related to the fact that memory is potentially violated or rewritten as a field of dispute by those who seek to dispute the discourse on truth, the political past, science, and so on. And let me tell you something about the tool I was talking about, Tempora. This digital tool was created during the COVID-19 pandemic in Brazil. From 2019 until mid-2022, we collected with it almost 6,000 news items from Brazilian media, stories about how COVID spread in Brazil, things like that, obviously from the Brazilian media that didn’t have a paywall. In the process, most of the news stories produced by the Ministry of Health in Brazil and other Brazilian government bodies’ websites had their links broken. In 2019, this happened very quickly, and we realized it back then. We tried to develop the system so that we could save an image, a kind of PDF of the website, and also scrape the entire corpus of news that soon tended to disappear. So, how can the question of memory impact elections, for example? Elections are a place of struggle and dispute about discourse, about the past, the near past, and about projects of the future. We cannot ignore this kind of thing, and we need to develop some strategies to avoid that this kind of discourse stays only with certain political groups that could do all the bad things that we already know they do. 
The other thing I think I can answer, about the question of data and so on, is the perspective of algorithmic governmentality, a kind of new regime of truth. About data, I think our biggest danger is the automation of social existence. I think we all talk about that: the automation of social existence through computational processes deployed in online media. The memory that comes from it will not be a memory preserved by the demands and conflicts of social groups, institutions, or cultural practices. Rather, this kind of memory will be mathematically elaborated by algorithmic devices that are in turn programmed by groups under the acronym GAFAMI, that is, Google, Apple, Facebook, Amazon, Microsoft, and IBM. So, in the end, the political surveillance that we are talking about here today, we know it’s something that we must fight against. But the surveillance by the market, many of us let that kind of thing happen. Is this correct? I think there is a kind of reification of the practice when we just give to GAFAMI, for example, our data in exchange for visual and informational consumption. Look, I don’t agree with any kind of surveillance, but it’s a fact that we all practice it on different scales: the culture of following on social networks, the culture of attention that we all share, a little more or a little less, is our surveillance practice too, one that we carry out intimately and in a validated way. I find this paradigm difficult to overcome, and right now my answer is that I don’t know how we can solve this problem. But I know that raising these types of questions is important.

Juliano Cappi: Ricardo, thank you so much. And then we close the session with Marielza Oliveira. Please, Marielza, the floor is yours.

Marielza Oliveira: Thank you, Juliano. Well, it was a fabulous exchange, thank you very much for this. I’m going to close with a very simple thing: curation is a political-economic process. It’s as simple as that. We have to ask whose memory is being preserved, and why we care to preserve memory now if we didn’t care so much before. The proof that we didn’t care so much before is simply that physical archives are being left to rot: you go into warehouses of documents that are exposed to floods and fires and mildew and simple neglect, and we haven’t really digitized everything that should have been digitized from the beginning. One of the simplest statistics is that about 40% of the birth records of people above 60 years of age are still on paper and not digitized. There is this tremendous backlog of content that has never been digitized to begin with, and we simply don’t get to it. We keep looking at the creation of new digital records, the birth records of the young children who are born now, but we forget that we haven’t done it equally for everyone, that we left behind the older generations, for example, since they didn’t start with digital archives to begin with. So we have to really ask this question. And the reason why we are caring so much about digitization right now, the preservation of memory, is that we found that it has a value, a monetary value. The point is not to preserve memory; the point is to create data that then feeds into generative AI and other mechanisms such as that, which can then be monetized and drive the economy, for other purposes. We are looking at the AI, not at the memory. 
But we need to really ask about the memory. How do we preserve it? As a matter of fact, there are very simple things that need to be done. First, we need to digitize more: close the digitization divide that exists, for example, the content of older generations that is still on paper, or that has been digitized but is in outdated, obsolete formats and needs to be brought into the new formats. Then we need to look at this issue of indexing that was mentioned before: we need to really think about how we create the mechanisms for searchable data, because just creating data is insufficient. And the cleaning up, or separation, of quite a lot of content that is literally toxic. We already have the Common Crawl, which crawls the entire Internet about once a month. So for the Global South, developing the technologies and the methodologies to really mine the Common Crawl, to extract what is the collective memory in particular countries, would be a huge thing. But we also need to build the capacities of people to preserve their own memories, because we can’t just preserve memory for somebody else. We need to give them the tools to preserve their own memories, the ones that are meaningful for them. Because actually, the terrible thing is that only 2% of what gets on the Internet gets preserved hardcore, and only about 10% gets preserved overall. It’s increasing, because more data centers are being built; companies such as the GAFAM are investing to the point that they are building nuclear reactors to power these data centers. They value it that much. But we need to value our own memories as well, in the Global South. 
And we need to think about where we store it, in terms of data sovereignty as well: how do we keep access to these memories and this content if it ends up being switched off somewhere else? That degrades the quality of our own collective memories in different countries. So I’m going to stop here and say thank you for the chance to have this fabulous conversation.

Juliano Cappi: Thank you so much, Marielza. Thank you, all panelists, Ricardo, Samik, Carlos Afonso. We had a great panel, and this was a first initiative to debate the challenges related to collective memory online. I hope we can have further discussions, considering that this debate is at the core of internet governance. Thank you, and we now finish the session. Thanks a lot, everyone. Thank you, everyone. Bye bye. Bye bye.


Bianca Correa

Speech speed

128 words per minute

Speech length

906 words

Speech time

422 seconds

Rapid disappearance of online content

Explanation

Bianca Correa highlights the issue of online content disappearing quickly. She cites a study showing that a significant portion of web pages from the past decade are no longer accessible.

Evidence

A study by Pew Research Center found that 25% of web pages from 2013-2023 are no longer accessible as of October 2023. For older content, 38% of web pages from 2013 are unavailable today.

Major Discussion Point

Challenges of preserving collective memory online

Agreed with

Marielza Oliveira

Carlos Alberto Afonso

Ricardo Medeiros Pimenta

Agreed on

Challenges in preserving online content


Marielza Oliveira

Speech speed

123 words per minute

Speech length

3421 words

Speech time

1659 seconds

Selective digitization and storage due to high costs

Explanation

Marielza Oliveira discusses the high costs associated with digitization and storage of online content. This leads to selective preservation of information, with much of what is produced being discarded.

Evidence

Less than 10% of produced content is stored in data centers. The amount of online data has grown from 2 zettabytes in 2010 to an expected 181 zettabytes in 2025.

Major Discussion Point

Challenges of preserving collective memory online

Agreed with

Bianca Correa

Carlos Alberto Afonso

Ricardo Medeiros Pimenta

Agreed on

Challenges in preserving online content

Differed with

Carlos Alberto Afonso

Differed on

Approach to preserving online content

Dominance of English and Northern countries’ content online

Explanation

Oliveira points out the disparity in online content representation, with a dominance of English language and content from Northern countries. This results in an unequal representation of global perspectives and languages online.

Evidence

46% of online content is in English. Out of 7,061 languages in the world, less than 300 are in use online.

Major Discussion Point

Biases and inequalities in digital memory preservation

Agreed with

Samik Kharel

Agreed on

Biases and inequalities in digital memory preservation

Obsolescence of storage formats

Explanation

Oliveira discusses the problem of obsolete storage formats leading to loss of digitized content. She emphasizes that as technology evolves, older storage formats become inaccessible, resulting in loss of archived information.

Evidence

Example of CDs becoming obsolete as a storage medium, with computers no longer including CD drives.

Major Discussion Point

Technological challenges in memory preservation


Carlos Alberto Afonso

Speech speed

113 words per minute

Speech length

1783 words

Speech time

945 seconds

Lack of internet archiving in Global South countries

Explanation

Carlos Alberto Afonso highlights the disparity in internet archiving services between Global North and South countries. He points out that many countries in the Southern Hemisphere lack significant internet indexing services.

Evidence

Map showing countries with significant Internet archiving services, with most of South America, Africa, and parts of Asia lacking such services.

Major Discussion Point

Challenges of preserving collective memory online

Agreed with

Bianca Correa

Marielza Oliveira

Ricardo Medeiros Pimenta

Agreed on

Challenges in preserving online content

Differed with

Marielza Oliveira

Differed on

Approach to preserving online content

Loss of indigenous languages and cultures

Explanation

Afonso raises concerns about the preservation of indigenous languages and cultures online. He emphasizes the risk of these languages and cultures disappearing due to lack of digital representation and preservation efforts.

Evidence

Brazil has over 300 indigenous ethnic groups with more than 270 languages, all at risk of disappearing.

Major Discussion Point

Challenges of preserving collective memory online

Difficulties in capturing and indexing complex web content

Explanation

Afonso discusses the technical challenges in capturing and indexing complex web content for archiving purposes. He highlights the difficulties in preserving content from websites with complex structures or frequent changes.

Major Discussion Point

Technological challenges in memory preservation

Need for real-time backup systems

Explanation

Afonso emphasizes the importance of real-time backup systems for preserving online content. He points out that this is a significant challenge, especially for large-scale archiving projects.

Major Discussion Point

Technological challenges in memory preservation


Ricardo Medeiros Pimenta

Speech speed

104 words per minute

Speech length

1616 words

Speech time

929 seconds

Broken links and vanishing government websites

Explanation

Ricardo Medeiros Pimenta discusses the issue of broken links and disappearing government websites. He highlights how this affects the preservation of important public information and historical records.

Evidence

During the COVID-19 pandemic in Brazil, many news stories and information from government websites had their links broken, especially in 2019.

Major Discussion Point

Challenges of preserving collective memory online

Agreed with

Bianca Correa

Marielza Oliveira

Carlos Alberto Afonso

Agreed on

Challenges in preserving online content

Memory preservation as a political agenda

Explanation

Pimenta argues that memory preservation is inherently political. He emphasizes that the process of preserving or rewriting memory is a result of political struggles and power dynamics.

Major Discussion Point

Political and economic aspects of digital memory

Challenges of algorithmic governmentality in social existence

Explanation

Pimenta discusses the concept of algorithmic governmentality and its impact on social existence. He argues that this new regime of truth poses dangers to how memory is preserved and accessed.

Evidence

Mentions the role of tech giants like Google, Apple, Facebook, Amazon, Microsoft, and IBM in programming algorithmic devices that shape our online experiences and memories.

Major Discussion Point

Emerging technologies and future of collective memory


Samik Kharel

Speech speed

147 words per minute

Speech length

2251 words

Speech time

914 seconds

Exclusion of marginalized communities from digital discourse

Explanation

Samik Kharel highlights the issue of marginalized communities being left out of digital discourse. He emphasizes the need for equal access and infrastructure to ensure inclusive participation in collective memory creation.

Major Discussion Point

Biases and inequalities in digital memory preservation

Agreed with

Marielza Oliveira

Agreed on

Biases and inequalities in digital memory preservation

Patriarchal narratives dominating online discourses

Explanation

Kharel points out that online narratives and discourses in his region are still largely male-dominated. This results in a patriarchal perspective shaping the collective memory being formed online.

Evidence

Mentions that narratives and discourses from political institutions, parties, and universities are still very patriarchal.

Major Discussion Point

Biases and inequalities in digital memory preservation

Agreed with

Marielza Oliveira

Agreed on

Biases and inequalities in digital memory preservation

Use of “cyber armies” by political parties to shape online narratives

Explanation

Kharel discusses the trend of political parties deploying ‘cyber armies’ to influence online narratives. These groups actively work to counter criticisms and promote favorable narratives about their parties.

Evidence

Describes how these cyber armies document criticisms, make counterarguments, and inject populist ideas to improve their party’s image.

Major Discussion Point

Political and economic aspects of digital memory

Impact of AI and large language models on memory construction

Explanation

Kharel discusses how emerging technologies like AI and large language models are changing the way we interact with and construct collective memories. He emphasizes the need for equal access to these technologies.

Major Discussion Point

Emerging technologies and future of collective memory

Potential of generative AI in memorializing historical figures

Explanation

Kharel mentions the use of generative AI to create interactive experiences with historical figures. This technology allows for new ways of engaging with and preserving historical memories.

Evidence

Mentions the ability to ‘talk’ to historical figures like Rousseau through AI chatbots.

Major Discussion Point

Emerging technologies and future of collective memory

Need for inclusive participation in AI-driven memory preservation

Explanation

Kharel emphasizes the importance of ensuring participation from marginalized communities and the Global South in AI-driven memory preservation efforts. He argues for inclusion and multilingualism in these technological advancements.

Major Discussion Point

Emerging technologies and future of collective memory

A

Alex Moura

Speech speed

97 words per minute

Speech length

156 words

Speech time

95 seconds

Lack of storage capacity for scientific and educational data

Explanation

Alex Moura raises concerns about the lack of storage capacity for scientific and educational data in Brazil. He points out that this is an ongoing problem for universities and research institutions.

Major Discussion Point

Technological challenges in memory preservation

Agreements

Agreement Points

Challenges in preserving online content

Bianca Correa

Marielza Oliveira

Carlos Alberto Afonso

Ricardo Medeiros Pimenta

Rapid disappearance of online content

Selective digitization and storage due to high costs

Lack of internet archiving in Global South countries

Multiple speakers highlighted the difficulties in preserving online content due to rapid disappearance, high costs, lack of archiving services in certain regions, and issues with broken links and vanishing websites.

Biases and inequalities in digital memory preservation

Marielza Oliveira

Samik Kharel

Unknown speaker

Dominance of English and Northern countries’ content online

Exclusion of marginalized communities from digital discourse

Patriarchal narratives dominating online discourses

Gender biases in historical data collection

Several speakers addressed the issue of biases and inequalities in digital memory preservation, including language dominance, exclusion of marginalized communities, and gender biases in data collection.

Similar Viewpoints

Both speakers emphasized the importance of preserving diverse cultural perspectives and languages in digital memory, particularly focusing on indigenous and marginalized communities.

Carlos Alberto Afonso

Samik Kharel

Loss of indigenous languages and cultures

Need for inclusive participation in AI-driven memory preservation

Both speakers highlighted the political and economic aspects of memory preservation, emphasizing that decisions about what to preserve are influenced by costs and power dynamics.

Marielza Oliveira

Ricardo Medeiros Pimenta

Selective digitization and storage due to high costs

Memory preservation as a political agenda

Unexpected Consensus

Impact of emerging technologies on memory preservation

Marielza Oliveira

Samik Kharel

Ricardo Medeiros Pimenta

Obsolescence of storage formats

Impact of AI and large language models on memory construction

Challenges of algorithmic governmentality in social existence

Despite coming from different backgrounds, these speakers all addressed the significant impact of emerging technologies on memory preservation, highlighting both challenges and opportunities. This consensus suggests a growing recognition of the transformative role of technology in shaping collective memory across various contexts.

Overall Assessment

Summary

The main areas of agreement among speakers included the challenges of preserving online content, biases and inequalities in digital memory preservation, the importance of cultural and linguistic diversity in digital archives, and the impact of emerging technologies on memory construction.

Consensus level

There was a moderate to high level of consensus among the speakers on the key challenges and issues surrounding digital memory preservation. This consensus implies a shared understanding of the complex nature of preserving collective memory in the digital age and the need for multifaceted approaches to address these challenges. However, there were some variations in the specific focus areas and proposed solutions, reflecting the diverse backgrounds and perspectives of the speakers.

Differences

Different Viewpoints

Approach to preserving online content

Carlos Alberto Afonso

Marielza Oliveira

Lack of internet archiving in Global South countries

Selective digitization and storage due to high costs

While both speakers acknowledge the challenges in preserving online content, Afonso focuses on the geographical disparity in archiving services, particularly in the Global South, while Oliveira emphasizes the economic constraints leading to selective preservation.

Unexpected Differences

Role of AI in memory preservation

Samik Kharel

Ricardo Medeiros Pimenta

Impact of AI and large language models on memory construction

Challenges of algorithmic governmentality in social existence

While both speakers discuss AI’s impact on memory, their perspectives differ unexpectedly. Kharel sees potential benefits in AI for memory preservation, while Pimenta expresses concerns about algorithmic governmentality’s impact on social existence and memory.

Overall Assessment

Summary

The main areas of disagreement revolve around approaches to content preservation, the role of technology in memory construction, and the political implications of digital memory.

Difference level

The level of disagreement among speakers is moderate. While there is general consensus on the importance of preserving digital memory, speakers differ in their focus areas and proposed solutions. These differences reflect the complexity of the issue and the need for multifaceted approaches to address the challenges of preserving collective memory online.

Partial Agreements

Both speakers agree that memory preservation has political implications, but they differ in their focus. Pimenta discusses it as a broader political struggle, while Kharel provides specific examples of how political parties actively shape online narratives.

Ricardo Medeiros Pimenta

Samik Kharel

Memory preservation as a political agenda

Use of “cyber armies” by political parties to shape online narratives

Takeaways

Key Takeaways

Preserving collective memory online faces significant challenges including rapid content disappearance, selective digitization due to high costs, and lack of archiving infrastructure in Global South countries.

There are major biases and inequalities in digital memory preservation, with dominance of English and Northern countries’ content, and exclusion of marginalized communities.

Technological challenges include obsolescence of storage formats, difficulties in capturing complex web content, and need for robust backup systems.

Memory preservation is inherently political and economic, with curation processes shaped by power dynamics and monetization incentives.

Emerging technologies like AI and large language models are reshaping how collective memory is constructed and accessed online, raising new challenges and opportunities.

Resolutions and Action Items

Develop technologies and methodologies to mine the Common Crawl for preserving collective memory in Global South countries

Build capacities of people to preserve their own meaningful memories online

Increase efforts to digitize older content still in paper formats or obsolete digital formats

Improve indexing and searchability of preserved digital content

Consider data sovereignty issues in storing and accessing preserved memories

Unresolved Issues

How to address the digital divide in memory preservation between Global North and South

How to ensure preservation of underrepresented languages and cultures online

How to balance privacy concerns with the need for comprehensive archiving

How to fund large-scale digital preservation efforts, especially in developing countries

How to mitigate biases in AI-driven memory preservation and retrieval systems

Suggested Compromises

Focusing preservation efforts on select high-priority content given limited resources

Balancing between preserving raw data and curated/indexed content

Collaborating across sectors (government, academia, private) to share costs and expertise in preservation efforts

Thought Provoking Comments

Memory is a vast and complex topic. It becomes even more complex when we think about the relationship between memory and the Internet, in preserving memory, promoting social memory, and constructing memory itself.

speaker

Bianca Correa

reason

This comment sets the stage for the entire discussion by highlighting the multifaceted nature of memory in the digital age. It prompts participants to consider memory not just as preservation, but as an active process of construction and promotion.

impact

This framing guided the subsequent discussion, encouraging speakers to address various aspects of digital memory beyond simple archiving.

We digitize very selectively, but we have become less selective over time. And the internet actually changed the way that we record things. And artificial intelligence made a huge change in the process as well.

speaker

Marielza Oliveira

reason

This comment introduces the idea of evolving selectivity in digital preservation and the transformative impact of AI. It challenges the notion that digital archiving is comprehensive or neutral.

impact

It shifted the conversation to consider the biases and limitations in our current approaches to digital memory, leading to discussions on representation and the role of AI in shaping collective memory.

Memory, as highlighted in the Yoruba saying, isn’t just about the past. It is actively constructed in the present. Remembering today shapes our understanding of yesterday. And memory itself is updated and rewritten in real time.

speaker

Ricardo Medeiros Pimenta

reason

This comment provides a cultural perspective on memory as an active, present-tense process. It challenges the static view of memory and introduces the idea of memory as a dynamic, constantly evolving construct.

impact

It broadened the discussion to include cultural and philosophical aspects of memory, encouraging participants to consider how digital technologies interact with these dynamic processes of remembering and forgetting.

Countries below the equator, which takes most of South America, and also the Caribbean and Mexico, there is no indexing, no indexing of the Internet in those countries.

speaker

Carlos Alberto Afonso

reason

This comment highlights a significant global disparity in digital archiving efforts. It brings attention to the geopolitical aspects of digital memory preservation.

impact

It shifted the discussion towards issues of global inequality in digital preservation, prompting consideration of the potential loss of cultural heritage and the need for more inclusive archiving efforts.

Curation is a political economic process. It’s as simple as that. We have to ask, whose memory is being preserved?

speaker

Marielza Oliveira

reason

This comment cuts to the heart of the issue by framing digital memory preservation as a political and economic process. It raises critical questions about power, representation, and the motivations behind preservation efforts.

impact

It prompted a deeper examination of the underlying forces shaping digital memory, encouraging participants to consider issues of data sovereignty, representation, and the economic drivers of digital preservation.

Overall Assessment

These key comments shaped the discussion by broadening its scope from technical aspects of digital preservation to include cultural, philosophical, geopolitical, and economic dimensions. They challenged simplistic notions of digital memory as neutral or comprehensive, instead highlighting issues of selectivity, bias, and global inequality. The discussion evolved from considering how to preserve digital memory to questioning whose memories are being preserved and why, emphasizing the active and political nature of digital memory construction in the present.

Follow-up Questions

How can we preserve the validity and reliability of our information environment?

speaker

Marielza Oliveira

explanation

This was highlighted as one of the most important questions to be discussing in the coming years, given the challenges of misinformation and AI-generated content.

How can internet resources be used to support the preservation and continuity of indigenous languages and cultures?

speaker

Carlos Alberto Afonso

explanation

This was identified as a big challenge, particularly for Brazil with its 300+ indigenous ethnic groups and 270+ languages at risk of disappearing.

How can we address the lack of historical data on certain issues, particularly related to marginalized groups?

speaker

Audience member

explanation

This was raised as a challenge for researchers when there is no data available due to historical injustices or biases in data collection.

Why is there a lack of business interest in memory preservation compared to other technological investments?

speaker

Marcelo Ferreira

explanation

This question highlights the disparity between cheap technology for business interests and expensive technology for public interest projects like memory preservation.

How can Brazil and other countries address the problem of storage capacity for various purposes, including internet memory, scientific research, education, culture, and arts?

speaker

Alex Moura

explanation

This was identified as an open problem in Brazil, where there is no specific institution dedicated to digital preservation across various sectors.

What solutions exist for the mismatch hazard between short public memory and long internet memory in the geopolitical space?

speaker

Dr. T. V. Gopal (online audience)

explanation

This question addresses the potential consequences of the disparity between how long information persists online versus how long it remains in public consciousness.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #84 The Venn Intersection of Cyber and National Security

WS #84 The Venn Intersection of Cyber and National Security

Session at a Glance

Summary

This discussion focused on the critical intersection of cybersecurity and national security in today’s data-driven world. Experts from various countries and organizations explored the challenges and strategies for addressing gaps in policies and practices. The panelists emphasized the importance of trust in digital systems and the need for a multi-stakeholder approach to tackle cybersecurity issues.


Key points included the evolution of cybersecurity from a technical issue to a central national security concern, the importance of aligning national security priorities with rapidly evolving cyber threats, and the need for robust legislative frameworks. Participants discussed the vulnerabilities exposed by cyber threats such as spyware, phishing, and cyber warfare, as well as the potential of decentralized digital solutions to enhance resilience.


The discussion highlighted the importance of international cooperation and information sharing to combat global cyber threats. Panelists stressed the need for capacity building, particularly in developing countries, to address the digital divide and enhance cybersecurity capabilities. The role of public-private partnerships and the importance of involving academia in cybersecurity efforts were also emphasized.


Challenges such as balancing privacy with security, the need for technical literacy among policymakers, and the importance of routinizing threat information sharing were discussed. The conversation also touched on the potential of emerging technologies like AI and IoT to both enhance and complicate cybersecurity efforts.


In conclusion, the discussion underscored the urgent need for stronger policy innovation, collaborative efforts, and a shared approach to addressing cybersecurity challenges in the interconnected global digital landscape.


Keypoints

Major discussion points:


– The intersection of cybersecurity and national security in the modern data-driven world


– The need for international cooperation and information sharing on cyber threats


– Challenges around trust, privacy, and data sovereignty in cybersecurity efforts


– The importance of capacity building, education, and awareness on cybersecurity issues


– The role of government, private sector, and civil society in addressing cybersecurity challenges


The overall purpose of the discussion was to explore the complex relationship between cybersecurity and national security, identify gaps in current policies and practices, and discuss strategies for enhancing cyber resilience through collaborative efforts.


The tone of the discussion was largely collaborative and solution-oriented. Participants shared insights from their diverse perspectives and experiences, acknowledging challenges while focusing on opportunities for cooperation. The tone became more urgent when discussing the need for immediate action, but remained constructive throughout. There was a sense of shared responsibility and recognition that addressing cybersecurity issues requires a multi-stakeholder approach.


Speakers

– MODERATOR: Session moderator


– Ihita Gangavarapu: Cybersecurity expert, works in private sector and contributes to cybersecurity community


– Paula Nkandu Haamaundu: Coordinator and advisor at GIZ African Union, seconded to African Union Commission


– Monojit Das: Experience in academia, media, and government


– Samaila Atsen Bako: Security manager at Code for Africa, director of communication at Cybersecurity Experts Association of Nigeria


– Karsan Gabriel: Coordinator of the African Parliamentarian Network on Internet Governance


– Lily Edinam Botsyoe: PhD candidate in IT at University of Cincinnati, focus on privacy


Additional speakers:


– Sreenath Govindarajan: European Law Students Association, specializes in international law


– AUDIENCE: Representative from the FBI


Full session report

Revised Summary of Cybersecurity and National Security Discussion


Introduction


This panel discussion brought together experts from various countries and organizations, including representatives from the private sector, academia, government agencies, and international organizations, to explore the critical intersection of cybersecurity and national security in today’s data-driven world. The diverse panel, which included an FBI representative, examined challenges and strategies for addressing gaps in policies and practices, emphasizing the importance of trust in digital systems and the need for a multi-stakeholder approach to tackle cybersecurity issues.


Key Themes and Discussion Points


1. Intersection of Cybersecurity and National Security


The discussion highlighted the evolving nature of cybersecurity from a purely technical issue to a central national security concern. Ihita Gangavarapu, a cybersecurity expert from the private sector, emphasized that cybersecurity directly impacts national security and critical infrastructure. Lily Edinam Botsyoe, a PhD candidate in IT, described cybersecurity and national security as “two sides of the same coin in the digital age,” using an analogy of a market and shopkeepers to explain cybersecurity concepts. The FBI representative further highlighted trust as a key factor in the relationship between cybersecurity and national security.


2. Challenges in Cybersecurity Collaboration


Several speakers identified significant challenges in cybersecurity collaboration:


– Lack of trust and information sharing between organizations (Paula Nkandu Haamaundu, GIZ African Union)


– Political factors and changes in government leadership disrupting initiatives (Samaila Atsen Bako, Code for Africa)


– Differences in data localization and privacy policies between countries (Monojit Das, academia/media/government experience)


– Balancing privacy and security concerns (FBI representative)


– Emerging technologies like IoT and AI posing new challenges


3. Strategies for Enhancing Cybersecurity


The panelists proposed various strategies to enhance cybersecurity measures:


– Comprehensive legislative and institutional frameworks (Ihita Gangavarapu)


– Capacity building and implementation focus (Paula Nkandu Haamaundu)


– Improving digital literacy and infrastructure to address the digital divide (Samaila Atsen Bako)


– Developing international cooperation and frameworks (FBI representative)


– Decentralized solutions for cybersecurity challenges


– Youth involvement in cybersecurity efforts


4. Role of Different Stakeholders in Cybersecurity


The discussion underscored the importance of a multi-stakeholder approach, highlighting roles for government, the private sector, civil society, NGOs, and international organizations in developing and implementing cybersecurity solutions.


Country Perspectives


India’s Cybersecurity Initiatives:


Ihita Gangavarapu and Monojit Das provided insights into India’s cybersecurity landscape, discussing the country’s efforts in data localization and the challenges faced in balancing these efforts with global tech companies’ policies.


African Perspective:


Karsan Gabriel mentioned the African Parliamentarian Network on Internet Governance, highlighting regional efforts to address cybersecurity challenges.


International Cooperation and Information Sharing


The panel emphasized the critical need for international cooperation in addressing global cyber threats. The FBI representative advocated for developing international frameworks, while other panelists stressed the importance of information sharing and trust-building between nations and organizations. Specific international forums mentioned included the Global Forum on Cyber Expertise and the Anti-Phishing Working Group.


Specific Cybersecurity Challenges and Solutions


– Education sector identified as a major target for cyberattacks (Ihita Gangavarapu)


– Need for tailored approaches to address unique challenges faced by different nations and sectors


– Importance of addressing the digital divide to enhance overall cybersecurity posture


Key Takeaways and Action Items


1. Explore opportunities for bilateral and multilateral cooperation on cybersecurity issues


2. Develop robust frameworks for sharing threat intelligence and best practices internationally


3. Focus on building trust between nations and organizations to facilitate better information sharing


4. Prioritize capacity building initiatives, especially in developing countries


5. Work towards creating international standards or frameworks for cybersecurity


6. Increase youth involvement in cybersecurity efforts


7. Address challenges posed by emerging technologies like IoT and AI


Conclusion


The discussion underscored the urgent need for stronger policy innovation, collaborative efforts, and a shared approach to addressing cybersecurity challenges in the interconnected global digital landscape. It highlighted the complex interplay between national security, economic development, and technological advancement. The conversation emphasized the need for tailored approaches and multi-stakeholder engagement in cybersecurity efforts, recognizing the diverse perspectives and unique challenges faced by different nations and sectors. Moving forward, continued collaboration and trust-building among all stakeholders will be crucial in effectively addressing the evolving cybersecurity landscape and its implications for national security.


Session Transcript

MODERATOR: I believe it’s a yes here section actually so during this session I’m very happy to be here with you today and I’m very pleased to be joined by Dr. Iheeta and Dr. Manojit. Yes, just to give you a brief introduction of the session, as you may know, over this nine minutes, we aim to explore the intricate and increasingly critical relationship between cybersecurity and cyber security. The goal of this session is to dissect the overlapping challenges within the intersection as well as to identify actionable strategies for addressing gaps in policies and practices. Through cases, case studies, experience sites and collaborative discussions, we will examine how cybersecurity has evolved from being a technical issue to a central issue. We will also explore the challenges of cyber security and how it has contributed to global health. Together we will explore topics such as the vulnerabilities exposed by spyware phishing and cyber warfare and how decentralized digital solutions and robust legislative frameworks can enhance resilience. We will also explore the challenges of cyber security and how it has contributed to global health. We will also invite participants to share their thoughts and questions through the sessions, whether through the chat for those who are online and the Q&A for those who are here on the site. As we focus on the case of cyber security, we will also highlight the challenges posed by cyber crime and its impact on national stability, particularly concerning youth and illicit digital activities. We will also highlight best practices such as cybersecurity by design principles and open sources, decentralization, to build a more secure and sustainable cyber ecosystem. We will also highlight the challenges posed by cyber crime and its impact on national stability, particularly concerning youth and illicit digital activities. We look forward to your active participations in this critical discussion or let’s say conversation. 
Before inviting the panelists to introduce themselves, allow me to introduce myself. My name is Alvisar, and I am the coordinator of the cybersecurity team. On site, allow me to give the floor first to Ihita to introduce herself, then we will move to Paula, and then to Dr. Monojit. We will give each of you five minutes. You have the floor, Ihita.


Ihita Gangavarapu: Perfect. Hi, everyone. Thank you so much for joining our session. I am Ihita. I work extensively in the cybersecurity space, both in the private sector and as an active contributor to the community. Thank you.


Paula Nkandu Haamaundu: Can you hear me now? Okay. A very good morning to you all. Thank you very much for the invitation to be on this panel, and thank you to the audience for joining us. My name is Paula. I am the coordinator and advisor at GIZ African Union, seconded to the African Union Commission. My role there is basically to enhance the cybersecurity posture of the African Union Commission and its internal processes. My experience has really been in private-sector cybersecurity, working on information security. I am also quite active in the cybersecurity community, primarily focused on capacity building for young women in cybersecurity. I am a mentor at the CyberGirls Fellowship, a program that aims to ensure adequate skills among young, upcoming women. Thank you very much.


Monojit Das: Thank you for having me. I would like to mention a few things that might be relevant here. I was initially in academia, then I moved on to media, and now I have switched to government. So I practically have the experience of all stakeholders, and I look forward to hearing what you have to say. Thank you very much.


MODERATOR: Thank you. I’m very happy to be also part of this youth supported by GIZ. I was supported on 2022 in Ethiopia, which is a very good participation we had. It was very interesting because you have had to experience all this. Having the heart of civil society, academy, and now you join it officially, the government, which will be very interesting as well. Moving to the online participation, I think we have Sumaila who is online. Sumaila, if you wish to have the floor to introduce yourself, please.


Samaila Atsen Bako: Thank you so much. Thank you for the opportunity to be here. My name is Samaila. I’m a professional based in Nigeria. In summary, I work with a couple of NGOs. One of them is Code for Africa, a continent-wide NGO that focuses on different technology-based initiatives; I work there as a security manager, responsible for in-house security culture and awareness, as well as serving as a subject-matter expert on our external projects. I’m also the director of communication at the Cybersecurity Experts Association of Nigeria. It’s a pleasure to be here, and I look forward to engaging with the audience and the other panelists on this important topic.


MODERATOR: Thank you, Samaila, for your introduction. I will keep one keyword: cybersecurity. Hopefully you will share very good insights, from your country’s perspective, on how to tackle these issues nationally. Thank you very much. I would also like to give a special thank-you to Karsan, who is joining us online, though video is not available because of the time difference between our locations. We have one more panelist to connect online as well. Karsan, are you here?


Karsan Gabriel: Thank you very much. My name is Karsan. I work as the coordinator of the African Parliamentarian Network on Internet Governance. What we do is empower African legislators in their work of representing the masses, but also in legislative oversight of digital policy, and in making more informed decisions. We do a lot of research to capture the nuances and differences between cybersecurity frameworks and what they mean for our policymakers. For context about our session today: it has been highly inspired by Dr. Monojit here, who has a wide tapestry of experience across national security and cybersecurity, and I’m very much looking forward to the session.


MODERATOR: We are definitely eager to have your perspective as policymakers on digital policy and legislation. We have felt the importance of having policymakers this year, and I think this is the right moment to have your insights and to draft together a resolution or framework that will enable us to tackle these issues jointly. If I’m not wrong, I think we also have Ernest. Karsan, can you confirm, please?


Karsan Gabriel: No, Ernest is not available. Let’s proceed.


MODERATOR: Thank you, distinguished panelists, for your introductions. Now we are moving into the discussion. I would like to ask two questions, and it’s up to you to answer one of them. The first is: how can we better align national security priorities with rapidly evolving cybersecurity threats? The second: what gaps exist between cybersecurity practices and national security agendas, and how can we bring them together? An Indian perspective would be very interesting as well. Thank you.


Ihita Gangavarapu: All right, I’ll be happy to. So a lot of my talk today is about best practices, and maybe a little bit about the developments that have happened in India in the past decade. When we talk about cybersecurity, it has direct implications for national security, and there are certain key initiatives and strategies that nations have taken; my perspective will be purely from an Indian context. Internet infrastructure spans the banking sector, healthcare, BFSI, telecommunications, even education, so any disruption to these infrastructures can have catastrophic effects for nations. There are state actors, non-state actors, and proxy actors continuously seeking ways to exploit these vulnerabilities, and given that a lot of national secrets and very strategic assets are now kept online, it becomes even more critical. So this brings us to the realization that cybersecurity and national security intersect in a Venn diagram that must be sorted out with precision and urgency. As for the key initiatives and strategies India has taken, I will take you through the legislative measures and the institutional frameworks. First, we have the Information Technology Act. This has provisions that establish critical entities such as the national CERT, as well as the critical information infrastructure protection centre, which play a pivotal role in handling and responding to cyber incidents affecting critical internet infrastructure. Then we also have a defence cyber agency, which is an additional layer.
Then we have a national cybersecurity framework, which came out very recently, in 2024, and which offers guidance to organizations on establishing a robust cybersecurity architecture. Beyond the national perspective, we also have sectoral regulations and guidelines. For example, in the banking sector, the RBI, the Reserve Bank of India, mandates compulsory cybersecurity training for the senior management as well as board members of banks. So we have a lot of regulations and guidelines. Then, for the devices we are procuring, we need to make sure they come from a trusted source, so the government mandates the induction of trusted and security-certified products in networks, with the guidelines also determined by the government. You mentioned IoT: IoT proliferation is increasing, with tremendous deployment in the country now, and there is a lot of visibility into the landscape of IoT deployments, which is where trusted telecom and trusted components become very important. But we can’t do all of this without awareness. We have the ISEA programme, the Information Security Education and Awareness programme, which covers the entire country and is free of cost; you can train yourself. And something that has happened very recently is the national annual cyber exercise for all critical sectors, and we’ve had a lot of people come to us and say, hey, we want to do this. These efforts, among others that I’m sure Mr. Monojit will highlight, have ensured that in the ITU’s Global Cybersecurity Index, where India was earlier at the 47th rank, we are now at the 10th rank.
So we’re near the top of the list, and we’re very excited about the tremendous change you can see when you start prioritizing cybersecurity initiatives. But I also feel it is not just the government; there is significant work that the private sector has to do as well. Just in the last couple of years, we’ve seen 300-plus companies come up in the country offering cybersecurity solutions and services. I work as a consultant at the ITU, and one thing I have seen is that a lot of companies are doing monitoring, and, as per their findings, you’ll be surprised — can you maybe guess which sector is most impacted, which sector has had the most cyberattacks? It’s the education sector, not banking, not healthcare. So that highlights the critical point: we need visibility into the threat landscape, and we need to be able to make targeted strategies, whether sectoral or national. Thank you.


MODERATOR: Thank you for these wonderful points you have highlighted. I was struck by the national cyber framework you just mentioned. I wonder if you can give us a little more detail about the implementation of this very interesting framework.


Monojit Das: Well, Ihita has already highlighted the majority of the facts, but I would like to give you a perspective that I have acquired through my experience. Initially I was in the private sector working with a company, then I moved to academia to learn, and I did my PhD on this same topic of Internet governance; then I moved into ICT, and then on to law, focusing on cyber law. My point of discussion here is about the significant development we have seen in cybersecurity. Today — and this sometimes brings a debate about whether the move was good or not — bringing in private players has made data available at a very cheap rate. We enjoy some of the cheapest Internet data: for around $4 a month, or in any case less than $5 a month, you get 2 GB of data per day, which is huge. So that creates a lot of user data, which can be used for multiple purposes; if you can map a user today, it will not be difficult to find out his whereabouts using that. But my point is that cybersecurity is not just a technology matter; it is also a human rights matter. My main suggestion is that when we talk about best practices, we need to recognize that cybersecurity and national security today rely not only on critical infrastructure but on other structures as well — for example, submarine cables, and emerging players like Starlink and the low-earth-orbit satellites, which we are not discussing that openly, but which become a big challenge for us because they involve multiple agencies and can potentially strain bilateral relations.
What I mean in this case is: consider the scenario where a low-earth-orbit satellite, whether Starlink’s or any other company’s, causes space debris; it can impact the other space objects that are there. What would the repercussions be? Not only would we have the physical damage, but at the same time we would have strained bilateral relationships, harming relations among countries not just digitally but also physically. At the same time, with the internet and technology at large, we are also trying to use them as a form of soft-power diplomacy. Today, the Indian government offers a large number of scholarships through the ICCR, the Indian Council for Cultural Relations. We have specific support to African countries through overseas scholarships as part of this soft-power diplomacy. Now, understanding the importance of cybersecurity, and given that diplomacy plays a key role in ensuring national security, we have been offering a very high number of scholarships specifically for studying cybersecurity. Similarly, we have a dedicated programme, ITEC, under the Ministry of External Affairs, through which we train IT experts from friendly countries, to ensure that best practices are shared among us. At the same time, I’d like to point to the gap in understanding the existing legislation. As my colleague Ihita pointed out, the IT Act and the CERT establishments are there, but they lack a coordinated approach, because acts of cyber insecurity are transnational. We need thorough coordination and collaboration with all partner countries, because the origin may be a server in a state with which we don’t have an extradition treaty.
Suppose, for example, we can trace the location to be somewhere, but we don’t have an extradition treaty — how do we act on that? So we need a very coordinated approach, and also acknowledgement of regional bodies. For example, take NATO, or the International Criminal Court: many countries don’t recognize them. In cases where a person is blacklisted or on an arrest list, the first country may not comply with that. So how do we deal with it? Considering all these hindrances, I feel we need to focus on some converging areas: points that all countries, whether or not they are signatories to such extradition arrangements, can agree on as a minimum — for example, preventing child pornography. These few topics are convergent for everyone; everybody will agree. On others they may not agree, for example on cyber offences against another country, because there is no mention of a threshold. If you look at the United States and NATO today, they say that if there is an attack on critical infrastructure, they will retaliate at full scale, but they don’t explain what the threshold for that is. With this, I’d like to pass on, so that when we come back around, I can share a little more. Thank you so much.


MODERATOR: Thank you, Dr. Monojit, for your insight; collaboration through the multi-stakeholder approach is really essential to tackle these issues. Before giving you the floor, Paula, to tell us how international organizations, and more specifically GIZ, support these mechanisms, let me introduce our online speaker, Lily, who may be joining by recording. Can you please share, Lily?


Lily Edinam Botsyoe: Hi everyone, my name is Lily Edinam Botsyoe, and I’m excited to be joining you today online — one of the reasons we are so thankful for the gift of the internet. Like I mentioned, my name is Lily; I’m originally from Ghana, but right now I’m a PhD candidate in IT at the University of Cincinnati. For a topic like this one, I’d love to share from a point of interest, which is privacy, because that is what my dissertation is about. I’ll start with an analogy to help us understand what we’re talking about, because when we talk about cybersecurity, it sometimes feels far-fetched, as though it’s only something technical people should care about. So imagine walking through a very busy market, where every shopkeeper locks up their stall and their portion of the market at the end of the day. That is not just to protect their own goods; it’s also to ensure that the whole market remains a safe space for commerce. Think about it: if you lock your space and it’s secure, then the whole market is effectively locked up, and nobody can come in to steal from you. Now map that onto cyberspace, the global market of our generation, this revolution where everything is characterized by digital tools. In this space it is data, not physical goods, that is traded: real data sent across many networks. And just as a single unlocked stall can jeopardize the whole market, the same is true in cybersecurity: if you don’t protect even a small part of what you are supposed to take care of, it can lead to a breakdown of the whole.
So for instance, if somebody gets into one particular spot in the market, they can go through and enter another shop; the breach gets replicated. So when we talk about cyberspace and cybersecurity in relation to national security, it calls for this multi-stakeholder angle. We say it a lot, and it can sound repetitive, but really that is what it is: everything is interconnected. Now, how does what I’ve described map onto our cybersecurity world? The question I’m answering now, one of our policy questions, is: how do cybersecurity measures fit into national security in a data-driven age? In this data-driven age, cybersecurity is a big part of national security, because governments now depend on data to protect borders, conduct diplomacy, and manage critical infrastructure. The threats no longer come solely from physical attacks but also from invisible ones that exploit vulnerabilities in networks, and they can come from anywhere. Even misinformation on different platforms can lead people to do things that jeopardize national security. From ransomware crippling hospitals, to disinformation campaigns targeting elections, to cybersecurity breaches, all of these have the potential to destabilize an entire nation, and that is why this conversation is important. Cybersecurity really merges with national security when safeguarding data becomes as critical as protecting your borders. Let’s also think about what a national defence strategy includes: things that make sure your country is safe, both online and offline.
Some of these things include being proactive in protecting your online assets, so that you can prevent attacks and anything that could undermine sovereignty and public trust; you want to take proactive steps. In that sense, I also want to answer the question of what the intersection is between policy and security in cyberspace. Policy and security are two sides of the same coin in the cyber world. Policies establish a framework for behaviour and accountability; security, on the other hand, enforces that framework through technology and practice. Policies guide how we share intelligence, how we regulate encryption standards, and how we set penalties for cybercrimes — in essence, what the penalty is if somebody does something wrong. At this intersection, we need effective collaboration between policymakers and security practitioners to ensure that regulations are both realistic and enforceable: you don’t just pick anything; there are systems in place for it, and people with the expertise to implement them. Then there is the policy question of how we improve synergy to enhance cybersecurity legislation in the Global South. As somebody from the Global South myself, I feel this is a burning topic and something really important; policymakers have to redouble their efforts in this area, and all of us play a role in getting this conversation started. Improving synergy in the Global South requires addressing three key areas. The very first is capacity building — we’ve been talking about it a lot, but it is very important: how do we equip policymakers and institutions to craft informed legislation?
Do they know what’s happening in the cyber world? Can they bring expertise into their policymaking space? Then there is another critical area, which is public-private partnership: we have to encourage collaboration between government, the private sector, and civil society to leverage diverse expertise and resources. Another very crucial one is regional cooperation: fostering cross-border alliances to share best practices and respond to threats that cross national boundaries. All of this should also be linked to international support and funding for these initiatives, to ensure they create a foundation for sustainable improvement in cybersecurity legislation. There is another policy question about what the defining parameters are for shaping inclusive cyber laws and prioritizing digital security in national security policies. Many times, as Africans, we have spoken about the Malabo Convention and about how countries haven’t ratified it, despite the importance of cybersecurity. But when we are building these inclusive cyber laws, we must prioritize accessibility, especially making them applicable and understandable to all citizens, not just people with technical expertise. We also have to look at equity, in the sense of addressing the digital divide so that marginalized groups are not disproportionately impacted by cybersecurity measures. We also have to think about resilience, the ability to bounce back. And a big part for me — like I said, I love privacy — is that we have to balance security and individual rights, so that we avoid overreach and build public trust.
With all that I’ve said, there is a need to emphasize these principles, taking into consideration the humans involved, our national assets, our online assets, and expertise, so that people understand this is something we are doing collectively and everybody should be a part of it. In our digital age, security is no longer just about locked doors and guarded borders; it goes way beyond that. It also includes what we do online, fostering collaboration, and building frameworks to protect both individuals and nations. One person saying one thing online, unchecked, can cause chaos, and we have seen such upheavals arrive seemingly without consequences; it is time for us to rethink that. By treating cybersecurity as an integral part of national security, countries in Asia, in Africa, and beyond can create resilient, inclusive policies that safeguard our collective digital future. I hope this sheds some light on the discussions we’re having and shows the intersection — the Venn intersection, as we call it — between national security and cybersecurity. Thank you so much, and I hope you have a good time interacting with the rest of our speakers. Thank you.


MODERATOR: Thank you, Lily, for your presentation. I have to tell you that Lily is one of the young youth coordinators working to bring youth into the Internet governance ecosystem. Before we move to Paula: Karsan, please prepare to tell us a little about what role legislation plays in strengthening the nexus between cybersecurity and national security in Tanzania. Allow me to give the floor to Paula. Tell us a little more about how international organizations can contribute to supporting these mechanisms, whether through financial assistance or capacity-building programmes,


Paula Nkandu Haamaundu: Thank you. Thank you, Dr. Jose. Maybe just to open up my perspectives on the topic — I hope you can hear me. When it comes to cybersecurity, I always try to think: what is the end goal? Okay, we’re protecting infrastructure, we’re protecting systems, we’re protecting this data, but what is the end goal? For me, the end goal is trust: trust in the systems, trust that the data I’m seeing is correct, trust that the data available to me has not been tampered with. We put in place all these measures and controls because we want to be able to trust the systems and the data we get from them. In the context of national security, obviously every country has to define what critical infrastructure is to them, based on their culture, their needs, and an assessment of what’s important to them. So for a government to be able to trust whatever systems it is using — for instance, in the health sector, trusting the data in order to make informed decisions — we need to put cybersecurity in place. That’s where I see the link between cybersecurity and national security: they are two sides of the same coin, so to say; you can’t have one without the other in this data-driven age. In terms of what international organizations can do to enhance cybersecurity posture, I will refer to GIZ, which has a programme called the Global Cyber Security Programme, and within it a project called Partnership on Strengthening Cybersecurity, funded by the German Federal Foreign Office. This project works and collaborates with various partners to enhance cybersecurity across the globe, and there has been a lot of progress.
For instance, the ECOWAS region just recently adopted three CBMs, confidence-building measures, and these CBMs are really tailored to the context of the region itself. The partnership between GIZ and ECOWAS is really about enhancing cybersecurity in that region, or rather reducing the rates of cybercrime. One of the things that has been done is to capacitate policymakers with an understanding of what cybersecurity is. There is a lot of discussion around cyber diplomacy, ensuring that member states are able to interact and cooperate with the Organization for Security and Co-operation in Europe, which was one of the first to have CBMs. So you see a lot of cooperation happening, and this is where international organizations can come in to partner with member states to ensure that cybersecurity posture is enhanced. I think Ihita mentioned the issue of having an understanding of the threat landscape and the data to make informed decisions. I like to think about cybersecurity more from a cyber risk management perspective, because it’s almost impossible to ensure 100% cybersecurity. Sometimes you have to weigh: okay, what are we able to do? There are trade-offs. What can we do in the next few years? You identify what is really critical for you, or what is high risk, and then you address those issues. Another form of collaboration happening between GIZ and the ECOWAS region is enhancing the region’s threat data, to understand what their assets and vulnerabilities are, and to improve how they make decisions based on risk management from a cybersecurity perspective.


MODERATOR: Thank you, Paula, for sharing these wonderful points and initiatives, which we hope to have the opportunity to work on. Karsan, can you tell us what is going on in Tanzania in terms of cybersecurity and how you deal with these challenges? We will then have our last speaker on the line, Laila, who should be speaking about whether decentralizing data can enhance national security, and if so, how. Then we will open the floor to the audience for questions. Karsan, are you here?


Karsan Gabriel: Yes, I’m here. Thank you very much. It’s quite a pleasure to listen to the different nuances shared by the previous speakers. When we delve into the question at hand, the critical intersection of cybersecurity and national security in today’s data-driven world, it’s more than a Tanzanian question — it’s global, just like the internet, because the digital age has connected us in an unimaginable way. But these connections come with a lot of vulnerabilities, and these vulnerabilities transcend borders, institutions, and even generations, because we have had almost 60 years with the internet now. We need to start by acknowledging the reality that the boundaries between cybersecurity and national security are blurred. A single vulnerability, like the Log4j flaw, can expose a lot of government data, disrupt critical infrastructure, and, in the end, jeopardize citizens’ safety. As they say, the weakest element in any cybersecurity chain is the user, so it’s important that we build our solutions around people-centredness. Cybersecurity is no longer just a technical issue; it is a matter of national resilience. Consider Tanzania: we have a very youth-driven population, and they’re vulnerable to phishing and online scams; many people are being exploited through the financial systems, and the core element of understanding and literacy still does not exist. We see countries like Nigeria also having problems with big attacks on biometric databases, which have raised a lot of national security concerns, and both these issues are tied to human and institutional behaviours. Those behaviours align directly with how the user interacts with the system, and hence cause the risk. It’s important that we bridge the gaps.
In terms of the policy intersections, I think where cybersecurity initiatives meet is in areas of critical infrastructure, because cybersecurity policy now operates at the intersection of many competing priorities: protecting individual rights, like privacy and freedom of expression, which is very much part of the cybersecurity question; protecting critical infrastructure, like power grids, financial systems, and the digital backbone of many nations; and ensuring national sovereignty, because a lot of national resources are also held online in this globalized era of digital threats. In Tanzania, we have a cybersecurity framework that has been operational since 2020, and many people have been drawn towards understanding what it really means to protect their resources, their platforms, and their processes. It has been closely connected to different regional cyber instruments, like the Malabo Convention but also the EU acts, in creating more sustainability and cross-border interoperability of data, because to protect oneself we need a good understanding of how trust is shared across borders. One of the best practices we have been exploring involves building cyber resilience, because to strengthen cybersecurity and national security we need to prioritize critical strategies such as security by design, use of open source and decentralization, but also education and inclusion. Policies and systems must be embedded with a security angle from the start, incorporating encryption and regular assessments of the systems, as well as of people’s understanding. Think of it like building a house with fireproof materials instead of turning on the sprinklers when the fire starts: prevention is always better than cure.
Decentralized systems are also harder to compromise: new technologies such as blockchain, with transparency and decentralization-enhanced security, can be applied, and applying these principles of decentralized infrastructure could mitigate many single points of failure in most systems. In the end, literacy programmes can help, especially in Africa, with its tech-savvy youth population, to prepare people for the threats and cybersecurity issues that happen, but also to turn them into assets for fighting back and protecting. Imagine a programme that could train many African youth, or the so-called Yahoo boys, to become ethical hackers and to see the bigger picture in building stronger systems. These practices are not just technical; they represent a big shift in how we see security, because security in the digital world is now a bigger question than a mere technology property or characteristic. Thank you.


MODERATOR: Thank you for sharing your points. We just missed having your picture on the screen — we wish to know who is behind the screen, talking about such interesting points. So for the next stage, we certainly hope to have you on video. Now we will move to Samaila. Can you please tell us: can decentralized digital solutions enhance national security, and if so, how?


Samaila Atsen Bako: Thank you so much for throwing the mic to me at this point. Before I go into my own comments, I would say the speakers before me have done an excellent job. Issues around the use of cyber measures, and even understanding the impact of emerging technology, are critical in this conversation. I just want to add that when we talk about this topic, cybersecurity and national security, you realize — really from the angle of diplomacy — that most of these issues are more political than they are technology-based or, in fact, people-focused. What that means is that the bulk of the effort lies with the government. The issue there is: are the people in government intellectual and young enough to think of the future? Are they more focused on governance, or on building power? That will determine to a large extent where their priorities lie. If you talk about efforts, whether in Nigeria or regionally — Paula mentioned ECOWAS, and when you talk about ECOWAS now, the membership is about to drop in January — there are measures that have been adopted in the last 18 months, when there have been risks in the region, to douse tensions and make sure there is better collaboration. If you now compare that with what happens in Europe, where there has been collaboration for decades, where there is trust and information sharing on a very high scale, you see that the results speak for themselves. So I think that is a huge missing piece regionally, especially from the West African or African perspective. And so, for me, there is a cap on what the private sector, civil society, or end users can do. But now, let me go back to my own question, which has a similar answer, in the sense that whatever you are building may still be limited by government, or will still be at the mercy of the government.
If you talk about solutions, you see how certain governments are more focused on things like surveillance, or are always looking for ways to breach people’s privacy, as opposed to funding a national programme on cybersecurity awareness, or even improving the budgets for academia to make sure we build capacity on a larger scale. At the national level, there are decentralized approaches that have been taken, a bit similar to India from what one of the earlier speakers mentioned, where there is a structure in place with emergency response teams in different sectors, from telecoms to banking to government agencies and the defence industry, which helps when you also have a coordinating body called the National Cyber Security Centre. That gives a bit of structure, and when you add laws and empower the regulators — giving them the capacity and building their skills so they can give the right directions to the organizations within their purview — you build a certain level of resilience for the country at the central level. At the end of the day, it is always good to have direction. We have seen cases where there are laws or agencies being set up, but there is no structure: for instance, who do people report incidents to? If you notice something wrong in terms of cybersecurity, who do you report it to, and who coordinates the response to those issues? So it is very important that the structure in place makes sense. There is also a very important need for a link with academia and with end users.
A lot of the time, these groups of people feel left out, simply because sometimes it seems as if even the government itself targets civil society and end users, while academia may feel like they don't get enough funding; some countries don't have a good R&D culture. I keep saying "some countries" because I don't want to specify a particular country. And even the private sector often feels like government is just on their toes to tax their money at every level. So I feel like the direction the government takes plays a huge role in these issues. And like I said earlier, for me, these issues are usually more political, and rely more on diplomacy, than they are about the technology itself. At the end of the day, if you have leaders who do not understand the criticality, and even the devastation, that can be caused when security, or cybersecurity in particular, is not taken seriously, then you'll find your country languishing in all kinds of issues. I'll yield the mic at this point so we can move on to the others. Thank you.


MODERATOR: Thank you, Samaila, for sharing this very interesting perspective with us. We all know that this discussion is very sensitive and that we have to work in collaboration. Now I will open the Q&A for questions from the audience, and hopefully our distinguished panelists will be able to answer them. So who might be the first? Yeah. Can you just introduce yourself and ask the question, please?


Sreenath Govindarajan: I’m Srinath Kovindarajan from the European Law Students Association, and I specialize in international law. And my question is targeted towards India, but I would appreciate a global perspective on it, too. So the talent manual on cyber attacks is largely undecided on what critical and when you look at regional powers, does India view its neighbors and ever coming to an agreement on what CIs are in the future? It’s a blanket question.


MODERATOR: Thank you for your question. I see Dr. Monojit is always ready to answer these kinds of questions. Or perhaps, Ihita, you wish to go for it?


Monojit Das: My views will be mine; they are not the stand of the government. But the very first thing you said is very important: nothing here is settled or fixed. As I mentioned, if you can kindly recollect, even the threshold for a cyber war is not defined. Many countries, as I mentioned, have the provision written very clearly that they will retaliate at full scale if any attack is made on their critical infrastructure. But at what point? We have seen the Office of Personnel Management breach at a large scale, but still that was not treated as crossing the threshold. Again, we have seen the Stuxnet attack; that was at a very large scale. So what exactly will be the highest threshold? And to your question of whether there will be trust: in cyberspace, you can't trust anyone. With any country there were instances, you know, of the Five Eyes spying on each other; whether it is ten eyes, every eye will be spying on the other eye. Coming to India particularly, you have undoubtedly the world's largest democracy at this stage. There may be some questions, some calling it one-sided, but given the fact that, internet-wise or data-wise, people are very largely connected, you have to agree to this part, irrespective of the differences of opinion that may be circulating in the good morning messages, the good morning culture that is very popular in India. But the problem that comes with a democratic country like India is the debate. Initially it was defense versus development, when one section used to say focus on defense and the other used to say focus on development, and you see a huge gap in the outcome; you will experience it today. But today the debate largely lies in privacy versus security.
So on one side, the government urges that if you want total privacy and security, then in some cases you have to give the control to us. It's like what happens with your building infrastructure, or a society at large: we have not just the security personnel outside the premises, but the armed guards inside as well. So if I have my medium-scale or large-scale enterprise, and I want to be secured from a transnational threat that originates from another country, and I wish my government to safeguard me, then, just as we give control to third-party software or third-party companies to audit our firm or run a third-party firewall, we have to give the same liberty to the government to come inside and have some control, so that they can protect us. But at the same time, when it is a government enterprise, we will have that inner doubt about whether they are going to use the data against us or not. That apprehension exists because cyber skepticism is still alive. There are very senior researchers who still believe that AI is not significant, that AI doesn't have the potential to disrupt proceedings. But it is seen that cyber has the potential to disrupt; that's why Stuxnet happened. So we have to realize that cyber holds potential. There have been researchers who, from the very culture of taking a selfie like this, managed to extract the fingerprint and, through silicon printing, managed to unlock phones. So nothing is impossible. And since you are focusing more on international law, I'd also like to request that things come from your side as well, because the challenges do exist.
Take, for example, the submarine cable breakages that are happening nowadays; they form a very critical… The talks are there: if a submarine cable is hit, who is handling it? It's the Navy, the naval forces, so it comes to the Ministry of Defence. But if the Ministry of Defence is securing things at the submarine level, then where is the Ministry of IT? Because in almost every country the ministries are separate. There is nowhere a coordination mechanism that says the Ministry of IT will work with the Ministry of Defence and guide them on what to do, because of the bureaucratic hassle everywhere: it's my job, I am the superior, then comes the cadre and then the services, every time. So we need to understand this, and that's why, even with the low Earth orbit satellites, it's going to be disruptive for sure. So we need to focus on this international dimension. Okay, I'll not take much time. Thank you.


MODERATOR: Thank you, Doctor. We do face interlinked issues. I think it's also important to hear the perspective from the African side, and I wish to hear from Karsan. How do you deal with the laws that address critical infrastructure cyber attacks? Karsan, are you around?


Karsan Gabriel: Yes, I am. And thank you for your question. To be honest, we are still at a very early phase in building critical infrastructure, and this is not just Africa; it's true for most of the world. We're still tied within these paradigms of global North and global South issues, but the cybersecurity context is always about the person. So if we center it around the person, then we can get the nuances of the cultural element. For example, in Tanzania, we're still building our infrastructure; we're still shaping how our infrastructure will be enabled, especially being a young country. That means it's the youth who will be the actual users, and they will get the context if it's based on principles. But the principles of cybersecurity should be the same for any human being: you're protecting your resources in the best interest of utility, and passing them on, in terms of sustainability, to the next generation. So our culture is built around create, curate, and disseminate, based on the interest of the specific person. And we want, first, our demographic to be literate in using the resources that are available, in a place where a big part of the population still has no basics of computing or digital literacy. Security by design, and a competent civil service or policy element that is people-centered and understands the nuances of the culture, are important, and I think this is true for every country. So in Tanzania, we collectively build, but also collaboratively enhance, the knowledge of the people in understanding the core pillars of security: confidentiality, integrity, and availability of all resources, in their best interest. Those are my remarks.


AUDIENCE: I'm hearing a lot about the shared approach to cybersecurity, national security, and that intersection. Do you feel we are adequately sharing, globally, the indicators of compromise and the threat data? And do you think there's more we can do to make it more difficult for the global adversaries who are targeting all of our networks?


MODERATOR: Thank you so much for your question. But we also wish to hear your US perspective: how do you deal with national security?


Ihita Gangavarapu: Yes, I think that's an incredible question. And yes, there you are, so we'd love to hear from you, of course. But one thing I wanted to highlight: from a cybersecurity or threat intelligence perspective, all of us tend to focus on the indicators of compromise. There is also the indicator of attack, which is more proactive. When you look at a cyber kill chain, you have, let's say, the reconnaissance stage, when the attacker is gauging the entire threat landscape, and then an initial attack once they have some entry into the chain or the ecosystem. Finding the exact initial attack vector, your indicator of attack, makes a lot of difference, because a compromise is after the fact. So sharing best practices, and ensuring that we have a repository of data around the potential indicators of attack, will ensure that there is more of a national perspective, even a sectoral one for that matter. Maybe I'd like to hear from you if you want to add something to this.
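[Editor's note: the IoC-versus-IoA distinction drawn above can be sketched in a few lines of code. This is a minimal illustration, not any panelist's tooling; the indicator values, behavior names, and event fields below are all invented for the example.]

```python
# Hypothetical sketch: an IoC is an artifact of a past compromise (reactive),
# while an IoA is attacker behavior observed earlier in the kill chain (proactive).
KNOWN_BAD_HASHES = {"e3b0c44298fc1c14"}  # IoC: hash seen in a prior incident
SUSPICIOUS_BEHAVIORS = {"lateral_movement", "credential_dumping"}  # IoA: technique in progress

def classify_event(event: dict) -> str:
    """Label a log event as an IoC match, an IoA match, or benign."""
    if event.get("file_hash") in KNOWN_BAD_HASHES:
        return "IoC match: compromise already occurred, contain and respond"
    if event.get("behavior") in SUSPICIOUS_BEHAVIORS:
        return "IoA match: attack in progress, intervene early in the kill chain"
    return "benign"

print(classify_event({"file_hash": "e3b0c44298fc1c14"}))
print(classify_event({"behavior": "lateral_movement"}))
```

The point of the sketch is the asymmetry Ihita describes: the first branch only fires after an artifact of compromise exists, while the second can fire during reconnaissance or initial access, which is why a shared repository of indicators of attack adds value beyond IoC feeds.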


AUDIENCE: Yes, from the FBI's perspective, absolutely, I agree with you. The only challenge, and this is where the public-private partnership becomes so important, is that many times it's really taken a lot of work for us. I love that you all use the word trust, because I think that's really what this all comes down to: building trust with our private sector companies, so they know we're here to protect them and they feel comfortable telling us when they've actually had attacks. Of course, that's how we're seeing a lot of this. And then we can go back, and I'll use a US term, reverse engineer, to look for what the attack indicators were. I think that's right. And that's why I wanted to find out: I'm not sure we've connected specifically with those of you around the table and online, with our African nation partners, to make sure we're connected and sharing those best practices and those indicators of attack, and of compromise as well, so that we are, again, tightening up. Because I think what we're also finding is that there are global advanced persistent threat actors and global criminal enterprises, and they're targeting all of our networks, because we all have financial resources, we all have defense resources. I'm not sure we've knitted the cybersecurity community together globally. So I wanted to hear your perspectives. Do you feel that way? I can definitely see some room for growth after hearing your perspectives, and I just want to make sure we're doing our part.


Paula Nkandu Haamaundu: I just want to add to that from my experience in the private sector. I worked in the financial services industry, and I'll give an example of what would normally happen. Bank A experiences a cyber incident. Two days later, it's Bank B experiencing the same type of cyber incident with the same modus operandi. Three days later, the next bank, and so on and so forth. And the biggest challenge for me was that we didn't have a community for sharing information. When we would bring this up to the regulator, the collective issue was that there was no guiding principle on how we'd be able to share this information. With the private sector, the fear is mostly that if you share, it's going to go out to the public, the public will know you've been hit, and then you're going to lose your reputation and things like that. So if we had a guideline for sharing threat intel that would still safeguard the company, I think we'd see more organizations coming forth and reporting these incidents. I also just wanted to add that there is an organization called Shadowserver, and they work with different governments, countries, and CERTs to share threat intelligence across the globe. They'll be able to share what they're seeing from their side, to ensure that your understanding of the threats coming at you is better enhanced.
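[Editor's note: the sharing guideline Paula calls for, circulating an incident's technical indicators while safeguarding the victim's identity, can be sketched roughly as follows. The field names and sample values are hypothetical, and this is not any real exchange standard such as STIX, just an illustration of the anonymization principle.]

```python
# Sketch: a threat report that strips the victim's identity before it is
# shared with the community, keeping only the indicators other banks need.
from dataclasses import dataclass, field

@dataclass
class SharedThreatReport:
    sector: str                   # coarse enough to be non-identifying, e.g. "banking"
    attack_type: str              # the modus operandi the next bank should watch for
    indicators: list = field(default_factory=list)  # hashes, IPs, domains
    victim_name: str = ""         # kept internally, never shared

    def sanitized(self) -> dict:
        """Return only the fields that are safe to circulate."""
        return {"sector": self.sector,
                "attack_type": self.attack_type,
                "indicators": self.indicators}

report = SharedThreatReport(sector="banking",
                            attack_type="credential phishing",
                            indicators=["198.51.100.7", "badhash123"],
                            victim_name="Bank A")
print(report.sanitized())  # victim_name never leaves the organization
```

The design choice mirrors the regulatory gap she describes: if the agreed guideline fixes which fields may circulate, Bank A can warn Bank B about the modus operandi without admitting publicly that it was hit.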


AUDIENCE: …on how we're working with you, but certainly in the international space, we're very active. But let me tell you how we do it domestically and regionally, because I love that you all shared those perspectives. What we're really finding is that you can't have enough representation, and for us that's at the national level. So we're trying to influence policy development and intelligence, as well as all of the designs around critical infrastructure, so that they're all factored in. And I loved what one of you mentioned, I apologize, I don't remember who, but you said you need to have technical literacy among your policymakers. So we're at that strategic level. And then internationally, we're also trying to share; that's why I'm here, by the way: we're getting more active in the standards bodies to try to bring this perspective into them. But separately, we even go down to our field office level. We have 55 field offices, and we have this group called InfraGard. The whole design behind InfraGard was to bring private sector partners into the fold and share with them what we're learning, even from the international community, down to: okay, here's how to protect your business. We do a ton of public service announcements. In fact, I know many of you are hearing about Salt Typhoon, the recent targeting of our telecommunications industry. We've been going out with messaging over the last week, week and a half, to share a guide on that and give guidance on how to protect yourself. So I think that's an area, too, where we're going to our international partners to say: are you also seeing this type of vector, this type of targeting, this type of presence in the networks, and what did it look like? And if you are, or are not, what were the actions that were taken? And again, how do we make sure that there can be detection?
And I think that's a huge part of this. I don't know if I adequately answered your question; I was just trying to give you a flavor for how we take it, as you all said, from the local to the regional to the national to the international. And I think you've all said it here too: there's a bottom-up approach to cybersecurity and there's a top-down one, and we have to make sure that those intersect in the most meaningful ways. And it is difficult. It sounds so easy to state the problem, but it is very difficult in practice, for all the reasons that you just said. You're right: admitting that you have been attacked means admitting to vulnerability in the eyes of the private sector, and of our citizens and our users. That is not what we want. But there has to be a little bit of openness, to ensure the next victim is not vulnerable and that we stop the harm. So those are some of the local groupings, and again, there are international forums. I'm just not sure we've, I'll use the word routinized, made it a part of standing practice to always default to sharing: who needs to know this, and how quickly can I get it to them?


MODERATOR: Thank you so much. As I always say, openness and international access are essential for us as stakeholders to be able to collaborate and tackle these issues together. We have just heard a perspective from the FBI on how these mechanisms work. Samaila, can you tell us how we can collaborate to make sure these challenges are addressed through multi-stakeholder or, let's say, international cooperation?


Samaila Atsen Bako: Thank you for the question. I would say there are quite a number of multi-stakeholder, sorry, I can hear an echo, this is kind of distracting, but anyway, I would say there are quite a number of events and conversations. For instance, even the IGF is one of those discussions; there's the GC3B and some others. Personally, again, speaking for myself, I think these conversations can go on, but at the end of the day, what happens when it comes to implementation? I feel like sometimes, because we tend to rely on government effort, it can be a problem, because power changes from one government to another, and the people heading certain agencies may retire or be posted to other jobs. And because the effort is usually an investment in a particular department or agency, when those people leave, the efforts tend to either stall or even regress. But that being said, I think there's some promise, in the sense that we see the efforts of the private sector. Even Meta in Nigeria tends to do a lot around child online protection and anti-fraud efforts, and there are a whole lot of non-profits. I think Paula mentioned she mentored in the Cyber Girls program. There are so many other NGOs, like the CyberSafe Foundation and the ones I'm part of, that do a lot on digital literacy and raising awareness, engaging with governments when they are coming up with regulations and laws, helping to give them the end-user perspective, to make sure that they are not just looking at it from the angle of, maybe, how to surveil people and things like that. So everyone has a role to play. In general, like I said, after these conversations are had, the priority or focus or goal of the implementing people or organizations is what usually takes precedence.
For instance, if they want to create a Data Privacy Act but they are targeting the funding that can be attached to the law, it means that at the point of implementation, the goal will be to make sure that funding does come in, not necessarily to guarantee data privacy, even though the law itself is a privacy law. That's why I say a lot of these things tie back into the politics that drive them. But we can't give up, as end users, as private sector, as professionals in the industry. We have to keep pushing, keep speaking out about what should be, hoping that these things do come into play. From a practical perspective, I would say the key thing is to fix the education curriculum and fix the infrastructure deficits. Within our region, there's a lot of what we call the digital divide, and if people can't afford devices, or they don't have access to networks at all, then how do they come into the digital economy, or how do we bring them in? How do we even make them part of the global economy, nationally and so on? So I think we need to start from the basics. We need to get to a point where the infrastructure itself is good, where the necessary funding is put towards things like academia, instead of just a fraction of the budget, and we build an R&D culture where it's second nature to do research within the continent, not just relying on what comes from outside. I mean, we have open-source tools that can be leveraged as well. So I think if we take the conversation from this perspective, as the global South, it helps us build capacity as a whole, as a region, as a country, and from there even your citizens benefit from the economic side of things. Those are my ideas on how we can move things forward and collaborate.


MODERATOR: Thank you, Samaila, for addressing this important collaboration between the global South and the global North. Dr. Monojit, if we consider the FBI's ambition to collaborate, what would be the three points to work on at the top of the list?


Monojit Das: In addition to what was said by our esteemed guest, I'd like to say you are now more than just a part of our speaking panel; you have added a very new dimension. So let us face the reality that the relation between India at large and the U.S. has been very cordial, despite the few instances that have happened geopolitically, whether in 1971 or 1999. But when you talk about trust in terms of cybersecurity, you see big giants like Meta and Google have agreements with the U.S. government that bind them to share information, or data at large. We attempted the same with one of our startups and we failed miserably, and that attempt was loudly criticized by the U.S. government, by the so-called West, as India trying to bring in surveillance. But the other way around, whenever we attempted anything from our side, we never got the same support, just to ensure that the monopoly or duopoly of the West is never harmed. We actually don't intend to harm anyone. All we want is our indigenization, because we are really progressing and we attempt to do it that way. What I feel is that we need a greater collaboration that is really trustworthy, and when I mention this word trustworthy, it should not be support in exchange for data. We genuinely support the idea of data localization: let the data stay with us, and if you need it, you kindly request it and we are always ready. We have several exchange agreements, whether starting from the agreement supporting your ships under LEMOA or all sorts of other agreements. We can do that, but not hand over data the other way around. I feel digital cooperation is key, and certainly we have very flagship initiatives for overcoming the digital divide; for instance, the Ministry of Education has developed an application called Anuvadini.
It translates, and I'm really happy to say, far better than even Google, which matters because only a small share of India's population speaks English. Through this type of application we promote not only the 22 languages of India but also another nine or ten overseas languages. So this can be a potential area of collaboration, where together we can take this out to our African brothers and sisters, because you have the outreach, we have the product, and we can certainly do so. That can be one. The other is cybersecurity, the APTs, the advanced persistent threats, as you mentioned. I feel this is one of the key areas, because you see in our neighborhood the Bangladesh Bank heist, attributed, if you can kindly recollect, to a so-called North Korean actor, and it was not well managed. This type of collaboration can certainly help in prevention, because today almost every person in India will have PhonePe, Google Pay, BHIM, or some other payment mechanism; we are largely dependent on mobile payments and QR codes everywhere. So we need safeguarding along this line as well. These can be the three. And largely, what the United States can do through your embassy is take up cybersecurity awareness. The United States Embassy has been very active in India, promoting culture through your scholarships and other methods, but I feel cybersecurity awareness can also be taken up as a part of that.


MODERATOR: Thank you. Thank you, Doctor. Going back to the FBI: cyber issues today, rather like climate change, are a matter of common international interest, and we need to collaborate, as we have been highlighting. From the FBI's perspective, do you think we should have an international cybersecurity framework, or can bilateral cooperation be enough to establish capacity-building programs?


AUDIENCE: Yeah, thank you. And thanks to my colleague from India; those were great points, and I will actually take that back, because our legal attaché at the embassy could likely be very helpful to you. To answer the question: I think we need bilateral cooperation for certain areas, to bring this back to national security. Sometimes there are things that are sensitive, because maybe a very sensitive part of your critical infrastructure was targeted, and sharing could actually open up additional targeting or allow vulnerabilities to be identified. So I think there are times when bilateral cooperation, particularly in national security, is probably required. But more than that, I think what you're getting at, and the colleague here as well, on the financial systems part of this, is ubiquity. I don't know if that translates into everyone's language, but it's this idea that it's everywhere now, it's persistent. There's a ubiquity to certain things like financial services and the applications for communication, where there's a great opportunity for international cooperation in really trying to understand and evaluate, and I think one of you mentioned it too, this idea of the intersection between privacy and security. Everyone wants to make that, and I'll say it from our point of view at the FBI, solely about encryption, as though that's all it is. In reality, I think we all understand it's sometimes security versus security: if you want absolute privacy, then that sometimes means absolute anonymity of a person and all of their activities.
That's where we're trying to find a little bit of balance and understanding, so that users, whatever their level of digital literacy, and companies, whatever their level, are thoughtful and deliberate about making those decisions. An individual going into a global common is such a powerful, wonderful thing, and it's huge for economies; I think my colleague online mentioned that. But I agree with the person who said security is now the next real challenge. It's out there, it's real, and everyone is paying attention now, because there have been incidents at every level, from individuals to corporations to governments. That's where I think the international piece has to play a more prevalent role, and that's why I use that word, routinization. We have to make it part of a common fabric that we're consistently trying to make the opportunity smaller for our adversary, and there are a lot of ways to do that. One difficulty is that we are in information overload: there's so much information hitting individuals every day, but also at the government level, and certainly on the private sector side, as they try to understand the markets as well. That opportunity for the adversary has been growing out of control for a while, so anything we can do to shrink it would be welcome.


MODERATOR: Thank you so much for sharing this very important point, and for your contribution, because we have learned a lot from you, and we are eager to continue learning more and more. I think we only have three minutes left, so I will give the floor to all the speakers, for 30 seconds each, to answer any remaining questions or for your closing remarks. Let's start with our distinguished speaker, Ihita.


Ihita Gangavarapu: All right, I'll keep it short. I appreciate the point around reducing the size of the attack surface, and given how many emerging technologies are coming in, and the cybersecurity threats they pose, I think we should have started giving cybersecurity priority a long time ago. Especially now, with AI and IoT and all the applications that are coming in, we have to be very cautious. The other thing I want to address, with respect to the different forums that exist: two that I've been engaged with in some capacity are the Global Forum on Cyber Expertise, the GFCE, where a lot of organizations, including governments and the private sector, come together to discuss best practices; and, for more nuanced discussion of cyber issues, the APWG, the Anti-Phishing Working Group. I'm sure there are a lot more, if somebody would like to highlight them. With this, I'd like to hand over to my colleagues to close.


Paula Nkandu Haamaundu: I've really enjoyed the discussion today, and especially gaining an understanding of the different perspectives, such as India's and the USA's. In my closing, I just want to mention three points: cooperation, capacity building, and implementation. On cooperation, I think we can't deny the need for different regions and different partnerships. For instance, from the perspective of GIZ, the Partnership for Strengthening Cybersecurity is a project that's really trying to ensure that all our partners have enhanced cybersecurity postures. The gentleman there asked a question on international law in cyberspace; the African Union has adopted the Common Position on the application of international law in cyberspace. And one of the things that has been done to ensure that member states in Africa build their capacity is to hold roundtable workshops, where they can get different perspectives from the different member states and have that conversation, because it's very important. I was in one of those workshops, and a lot of conversations came up, especially around data sovereignty. Okay, I see I'm being given the time signal. But essentially, cooperation is very important; we can't deny that. Capacity building: we need to ensure that the technical people, the policymakers, and the cyber diplomats have that capacity built. And lastly, implementation. I think Samaila mentioned how important it is, and that we can't deny. We can talk, talk, talk, but if we don't implement, then we're not going to go anywhere.


MODERATOR: Thank you so much, Paula. And next, Karsan, please, you have 10 seconds for your closing thoughts.


Karsan Gabriel: Yes, great, thank you very much. For me, the most important part is trust. Trust should be a principle underlying all of our security architecture and access policy. When we have trust in the systems that are implemented, I think we can have a good intersection between cyber protection and cybersecurity.


MODERATOR: Thank you so much, Karsan. Over to Samaila. 10 seconds as well.


Samaila Atsen Bako: Maybe I should just ask a question in my closing remarks. Time and again we find out that the attackers are governments, so what can we do? It’s a question to all of us: what can we do to deter or stop governments from attacking people, attacking different political interests? To everyone, or perhaps to the FBI. No, no, no, to everyone.


MODERATOR: Thank you so much. Based on their experience, I think the FBI can answer this question. So, Dr. Monojit, please, 10 seconds.


Monojit Das: Well, this is termed a closing remark, but I’d love to make it an opening path for our first possible discussion henceforth, as highlighted by our participants and the special guest invitee here. I feel we can collaborate and at least find the common ground where we converge, and start focusing on that, because these differences also become a tool in the geopolitical aspect. Differences will be there, and they will be used for, you know, surveillance, reconnaissance, and other purposes, but let us find some common ground as a converging area and collaborate. And let’s start from today, once we are done with the session; I feel there’s more to it. Thank you.


MODERATOR: Thank you so much. As we conclude this insightful session on the intersection of cyber and national security, I would like to thank our panelists for their expertise and their contributions to the discussion. It highlighted the critical and growing overlap between cybersecurity and national security, emphasizing the urgent need for stronger policies, innovative practices, and collaborative effort to address the challenges we face in this digital age. Thank you, participants, and thank you to the FBI for joining these very interesting discussions. Thank you so much, all. Group photo, please.


I

Ihita Gangavarapu

Speech speed

217 words per minute

Speech length

1220 words

Speech time

337 seconds

Cybersecurity directly impacts national security and critical infrastructure

Explanation

Ihita Gangavarapu emphasizes that cybersecurity has direct implications on national security. She points out that any disruption to internet infrastructure across various sectors can have catastrophic effects on nations.


Evidence

Mentions sectors like banking and financial services (BFSI), healthcare, telecommunications, and education as critical infrastructure that needs protection.


Major Discussion Point

Intersection of Cybersecurity and National Security


Agreed with

Lily Edinam Botsyoe


Karsan Gabriel


Agreed on

Cybersecurity is integral to national security


Implementing comprehensive legislative and institutional frameworks for cybersecurity

Explanation

Ihita Gangavarapu discusses various legislative measures and institutional frameworks implemented in India to enhance cybersecurity. These include the Information Technology Act, defense cyber agency, and National Cyber Security Institute.


Evidence

Mentions specific initiatives like CERT, infrastructure protection center, and sectoral regulations for banking sector.


Major Discussion Point

Strategies for Enhancing Cybersecurity


Private sector involvement is essential for developing cybersecurity solutions

Explanation

Ihita Gangavarapu highlights the significant role of the private sector in developing cybersecurity solutions. She mentions the emergence of numerous companies in India focusing on cybersecurity solutions and services.


Evidence

States that over 300 companies have emerged in India offering cybersecurity solutions and services in recent years.


Major Discussion Point

Role of Different Stakeholders in Cybersecurity


K

Karsan Gabriel

Speech speed

151 words per minute

Speech length

1244 words

Speech time

492 seconds

Cybersecurity is not just a technical issue but a matter of national resilience

Explanation

Karsan Gabriel emphasizes that cybersecurity goes beyond technical aspects and is crucial for national resilience. He stresses the importance of building critical infrastructure and shaping it with a focus on security.


Evidence

Mentions the need for security by design and a competent civil service that understands the nuances of culture in implementing cybersecurity measures.


Major Discussion Point

Intersection of Cybersecurity and National Security


Agreed with

Ihita Gangavarapu


Lily Edinam Botsyoe


Agreed on

Cybersecurity is integral to national security


A

AUDIENCE

Speech speed

158 words per minute

Speech length

1477 words

Speech time

558 seconds

Trust is a key factor in the relationship between cybersecurity and national security

Explanation

The audience member (FBI representative) emphasizes the importance of trust in cybersecurity efforts. They highlight the need for building trust between public and private sectors to effectively share information about cyber threats and attacks.


Evidence

Mentions the challenge of private sector companies feeling comfortable admitting they’ve been attacked and sharing that information.


Major Discussion Point

Intersection of Cybersecurity and National Security


Balancing privacy and security concerns is a key challenge in cybersecurity

Explanation

The audience member discusses the challenge of finding a balance between privacy and security in cybersecurity efforts. They point out that absolute privacy can sometimes conflict with security needs.


Evidence

Mentions the debate around encryption and the need for thoughtful decision-making about privacy and security trade-offs.


Major Discussion Point

Challenges in Cybersecurity Collaboration


Differed with

Monojit Das


Differed on

Data localization and privacy policies


Developing international cooperation and frameworks for cybersecurity

Explanation

The audience member emphasizes the need for international cooperation in cybersecurity efforts. They suggest that while bilateral cooperation is necessary for sensitive national security issues, there’s a great opportunity for international cooperation on common cybersecurity challenges.


Evidence

Mentions the need for cooperation on issues like financial services and communication applications that are ubiquitous across countries.


Major Discussion Point

Strategies for Enhancing Cybersecurity


Agreed with

Paula Nkandu Haamaundu


Agreed on

Need for international cooperation in cybersecurity


L

Lily Edinam Botsyoe

Speech speed

167 words per minute

Speech length

1496 words

Speech time

536 seconds

Cybersecurity and national security are two sides of the same coin in the digital age

Explanation

Lily Edinam Botsyoe argues that in the data-driven age, cybersecurity is inseparable from national security. She explains that governments now rely on data for various critical functions, making cybersecurity essential for national security.


Evidence

Provides examples of how cyber threats can impact national security, such as ransomware crippling hospitals and disinformation campaigns targeting elections.


Major Discussion Point

Intersection of Cybersecurity and National Security


Agreed with

Ihita Gangavarapu


Karsan Gabriel


Agreed on

Cybersecurity is integral to national security


P

Paula Nkandu Haamaundu

Speech speed

164 words per minute

Speech length

1324 words

Speech time

482 seconds

Lack of trust and information sharing between organizations hinders cybersecurity efforts

Explanation

Paula Nkandu Haamaundu highlights the challenge of insufficient information sharing between organizations regarding cyber incidents. She explains that fear of reputational damage often prevents companies from sharing information about attacks they’ve experienced.


Evidence

Provides an example from the financial services industry where banks experience similar cyber incidents but don’t share information due to lack of guiding principles for information sharing.


Major Discussion Point

Challenges in Cybersecurity Collaboration


Focusing on capacity building and implementation of cybersecurity measures

Explanation

Paula Nkandu Haamaundu emphasizes the importance of capacity building and implementation in cybersecurity efforts. She argues that while discussions and frameworks are important, actual implementation of cybersecurity measures is crucial.


Evidence

Mentions the need for capacity building from technical, policymaker, and cyber diplomat perspectives.


Major Discussion Point

Strategies for Enhancing Cybersecurity


Agreed with

Samaila Atsen Bako


Agreed on

Importance of capacity building in cybersecurity


International organizations facilitate cooperation and knowledge sharing in cybersecurity

Explanation

Paula Nkandu Haamaundu discusses the role of international organizations in facilitating cybersecurity cooperation. She highlights how organizations like GIZ work to enhance cybersecurity postures across different regions and countries.


Evidence

Mentions specific initiatives like the Partnership for Strengthening Cyber Security project and roundtable workshops for member states in Africa.


Major Discussion Point

Role of Different Stakeholders in Cybersecurity


Agreed with

AUDIENCE


Agreed on

Need for international cooperation in cybersecurity


S

Samaila Atsen Bako

Speech speed

162 words per minute

Speech length

1752 words

Speech time

647 seconds

Political factors and changes in government leadership can disrupt cybersecurity initiatives

Explanation

Samaila Atsen Bako points out that political factors and changes in government leadership can hinder the implementation of cybersecurity initiatives. He argues that when power changes hands or key personnel are moved, cybersecurity efforts can stall or regress.


Major Discussion Point

Challenges in Cybersecurity Collaboration


Improving digital literacy and infrastructure to address the digital divide

Explanation

Samaila Atsen Bako emphasizes the need to address the digital divide by improving digital literacy and infrastructure. He argues that without access to devices and networks, many people cannot participate in the digital economy or benefit from cybersecurity measures.


Evidence

Mentions the need to fix education curriculum and infrastructure deficits to address the digital divide.


Major Discussion Point

Strategies for Enhancing Cybersecurity


Agreed with

Paula Nkandu Haamaundu


Agreed on

Importance of capacity building in cybersecurity


Civil society and NGOs contribute to awareness and capacity building in cybersecurity

Explanation

Samaila Atsen Bako highlights the role of civil society organizations and NGOs in raising awareness and building capacity for cybersecurity. He mentions various initiatives focused on digital literacy, awareness raising, and engaging with governments on regulations.


Evidence

Mentions specific organizations like META, Cyber Girls program, Cyber City Foundation working on child online protection, anti-fraud efforts, and digital literacy.


Major Discussion Point

Role of Different Stakeholders in Cybersecurity


M

Monojit Das

Speech speed

175 words per minute

Speech length

2617 words

Speech time

895 seconds

Differences in data localization and privacy policies between countries pose challenges

Explanation

Monojit Das discusses the challenges arising from differences in data localization and privacy policies between countries. He highlights the tension between India’s attempts at data localization and the policies of Western tech giants.


Evidence

Mentions India’s failed attempt to implement data sharing agreements similar to those between U.S. tech companies and the U.S. government.


Major Discussion Point

Challenges in Cybersecurity Collaboration


Differed with

AUDIENCE


Differed on

Data localization and privacy policies


Government plays a crucial role in setting policies and frameworks for cybersecurity

Explanation

Monojit Das emphasizes the critical role of government in establishing policies and frameworks for cybersecurity. He discusses various initiatives and strategies implemented by the Indian government to enhance cybersecurity.


Evidence

Mentions specific programs like ICCR scholarships for cybersecurity studies and ITEC program for training IT experts from friendly countries.


Major Discussion Point

Role of Different Stakeholders in Cybersecurity


Agreements

Agreement Points

Cybersecurity is integral to national security

speakers

Ihita Gangavarapu


Lily Edinam Botsyoe


Karsan Gabriel


arguments

Cybersecurity directly impacts national security and critical infrastructure


Cybersecurity and national security are two sides of the same coin in the digital age


Cybersecurity is not just a technical issue but a matter of national resilience


summary

Multiple speakers emphasized the inseparable link between cybersecurity and national security, highlighting how cyber threats can significantly impact critical infrastructure and national stability.


Need for international cooperation in cybersecurity

speakers

AUDIENCE


Paula Nkandu Haamaundu


arguments

Developing international cooperation and frameworks for cybersecurity


International organizations facilitate cooperation and knowledge sharing in cybersecurity


summary

Speakers agreed on the importance of international cooperation and knowledge sharing to address global cybersecurity challenges effectively.


Importance of capacity building in cybersecurity

speakers

Paula Nkandu Haamaundu


Samaila Atsen Bako


arguments

Focusing on capacity building and implementation of cybersecurity measures


Improving digital literacy and infrastructure to address the digital divide


summary

Speakers emphasized the need for capacity building, including improving digital literacy and infrastructure, to enhance cybersecurity efforts.


Similar Viewpoints

Both speakers highlighted the importance of collaboration between the private sector and government in developing and implementing cybersecurity measures.

speakers

Ihita Gangavarapu


Monojit Das


arguments

Private sector involvement is essential for developing cybersecurity solutions


Government plays a crucial role in setting policies and frameworks for cybersecurity


Both speakers emphasized the critical role of trust in facilitating information sharing and collaboration between different stakeholders in cybersecurity efforts.

speakers

AUDIENCE


Paula Nkandu Haamaundu


arguments

Trust is a key factor in the relationship between cybersecurity and national security


Lack of trust and information sharing between organizations hinders cybersecurity efforts


Unexpected Consensus

Role of non-governmental organizations in cybersecurity

speakers

Samaila Atsen Bako


Paula Nkandu Haamaundu


arguments

Civil society and NGOs contribute to awareness and capacity building in cybersecurity


International organizations facilitate cooperation and knowledge sharing in cybersecurity


explanation

While government and private sector roles were expected to be discussed, the emphasis on the role of NGOs and international organizations in cybersecurity efforts was an unexpected area of consensus.


Overall Assessment

Summary

The main areas of agreement included the integral relationship between cybersecurity and national security, the need for international cooperation, the importance of capacity building, and the roles of various stakeholders including government, private sector, and NGOs.


Consensus level

There was a moderate to high level of consensus among the speakers on the fundamental aspects of cybersecurity and its relationship to national security. This consensus suggests a shared understanding of the challenges and potential strategies for addressing cybersecurity issues, which could facilitate more coordinated and effective approaches to cybersecurity at national and international levels.


Differences

Different Viewpoints

Data localization and privacy policies

speakers

Monojit Das


AUDIENCE


arguments

Differences in data localization and privacy policies between countries pose challenges


Balancing privacy and security concerns is a key challenge in cybersecurity


summary

Monojit Das highlights tensions between India’s data localization efforts and Western tech companies’ policies, while the FBI representative emphasizes the need to balance privacy and security concerns globally.


Unexpected Differences

Focus on education sector in cybersecurity

speakers

Ihita Gangavarapu


Samaila Atsen Bako


arguments

Private sector involvement is essential for developing cybersecurity solutions


Improving digital literacy and infrastructure to address the digital divide


explanation

While most speakers focused on critical infrastructure like finance and healthcare, Ihita Gangavarapu unexpectedly highlighted the education sector as the most impacted by cyberattacks. Samaila Atsen Bako, on the other hand, emphasized improving digital literacy and infrastructure, which indirectly relates to the education sector but from a different perspective.


Overall Assessment

summary

The main areas of disagreement revolved around data localization policies, the balance between privacy and security, and the specific approaches to involving the private sector in cybersecurity efforts.


difference_level

The level of disagreement among speakers was moderate. While there was a general consensus on the importance of cybersecurity for national security, speakers had different perspectives on implementation strategies and priorities. These differences reflect the complex nature of cybersecurity challenges and the need for diverse approaches tailored to specific national contexts.


Partial Agreements

Partial Agreements

All speakers agree on the importance of private sector involvement and trust in cybersecurity efforts, but they differ on how to achieve this. Ihita Gangavarapu focuses on private sector solutions, Paula Nkandu Haamaundu emphasizes the need for better information sharing frameworks, and the FBI representative stresses building trust between public and private sectors.

speakers

Ihita Gangavarapu


Paula Nkandu Haamaundu


AUDIENCE


arguments

Private sector involvement is essential for developing cybersecurity solutions


Lack of trust and information sharing between organizations hinders cybersecurity efforts


Trust is a key factor in the relationship between cybersecurity and national security




Takeaways

Key Takeaways

Cybersecurity and national security are deeply interconnected in the digital age


Trust is a critical factor in cybersecurity collaboration between organizations and nations


A multi-stakeholder approach involving government, private sector, and civil society is needed to address cybersecurity challenges


Capacity building, especially in digital literacy and infrastructure, is crucial for improving cybersecurity in developing nations


International cooperation and information sharing are essential for combating global cyber threats


Resolutions and Action Items

Explore opportunities for bilateral and multilateral cooperation on cybersecurity issues


Develop more robust frameworks for sharing threat intelligence and best practices internationally


Focus on building trust between nations and organizations to facilitate better information sharing


Prioritize capacity building initiatives, especially in developing countries


Work towards creating international standards or frameworks for cybersecurity


Unresolved Issues

How to balance national security interests with the need for international cooperation on cybersecurity


Addressing the challenges of data localization and sovereignty in a globalized digital world


Finding the right balance between privacy and security in cybersecurity policies


How to effectively combat state-sponsored cyber attacks without escalating international tensions


Developing a universally accepted definition of critical infrastructure in cyberspace


Suggested Compromises

Focus initial international cooperation efforts on universally agreed-upon issues like combating child exploitation online


Develop bilateral agreements for sensitive national security matters while pursuing broader multilateral cooperation on general cybersecurity issues


Create tiered systems of information sharing that allow for different levels of disclosure based on sensitivity and trust levels


Establish neutral international bodies to facilitate cyber threat information sharing between nations


Thought Provoking Comments

When we talk about cybersecurity, it has direct implications on national security, and there are certain key initiatives and strategies that nations have taken, and my perspective will be purely from an Indian context.

speaker

Ihita Gangavarapu


reason

This comment set the stage for examining the intersection of cybersecurity and national security from a specific country’s perspective, providing concrete examples.


impact

It led to a detailed discussion of India’s cybersecurity initiatives and frameworks, offering insights into how one nation is addressing these challenges.


Cybersecurity today, to me, reflects the significant development we have achieved. Yet it sometimes brings a debate about whether the move was good or not: bringing in private players to provide data at a very cheap rate.

speaker

Monojit Das


reason

This comment highlighted the tension between accessibility and security in cybersecurity policy.


impact

It broadened the discussion to include economic and social factors in cybersecurity, leading to considerations of the trade-offs involved in policy decisions.


So in Tanzania, we are still working on the same element: collectively building, but also collaboratively enhancing, the knowledge of the people in understanding the core pillars of security, confidentiality, integrity, and availability of all resources in their best interest.

speaker

Karsan Gabriel


reason

This comment provided a perspective from a developing country, emphasizing the importance of education and collective effort in cybersecurity.


impact

It shifted the conversation to consider the challenges and approaches of countries at different stages of technological development.


Looking at it and going back, the only challenge, I think (and this is where the public-private partnership becomes so important) is that many times it has really taken a lot of work for us. I love that you all use the word trust, because I think that’s really what this all comes down to.

speaker

FBI Representative


reason

This comment emphasized the critical role of trust in cybersecurity efforts, particularly in public-private partnerships.


impact

It led to a deeper discussion on the challenges of information sharing and collaboration between different sectors and countries.


I think there’s some promise in the sense that we see the efforts of the private sector. I mean, even META in Nigeria tends to do a lot around child online protection, anti-fraud efforts, a whole lot of non-profits.

speaker

Samaila Atsen Bako


reason

This comment highlighted the role of private sector and non-profit organizations in addressing cybersecurity challenges.


impact

It broadened the discussion beyond government efforts to consider the multi-stakeholder nature of cybersecurity solutions.


Overall Assessment

These key comments shaped the discussion by broadening its scope from a focus on government policies to a more comprehensive view of cybersecurity that includes private sector involvement, education, trust-building, and international cooperation. They highlighted the complex interplay between national security, economic development, and technological advancement in addressing cybersecurity challenges. The discussion evolved from country-specific examples to considering global collaboration and the unique challenges faced by developing nations, emphasizing the need for tailored approaches and multi-stakeholder engagement in cybersecurity efforts.


Follow-up Questions

How can we better align national security priorities with rapidly evolving cybersecurity threats?

speaker

MODERATOR


explanation

This question addresses the need to keep national security strategies up-to-date with the fast-changing cybersecurity landscape.


What gaps exist between cybersecurity practices and national security agendas, and how can we bring them together?

speaker

MODERATOR


explanation

This explores potential misalignments between cybersecurity implementations and broader national security goals, seeking ways to integrate them more effectively.


How can we implement the national cyber framework in India?

speaker

MODERATOR


explanation

This question seeks more details on the practical application of India’s recently introduced national cyber framework.


What laws do African countries have to address cyber attacks on critical infrastructure?

speaker

MODERATOR


explanation

This question aims to understand the legal frameworks in African countries for protecting critical infrastructure from cyber threats.


Are we adequately sharing indicators of compromise and threat intelligence globally?

speaker

AUDIENCE


explanation

This question addresses the need for improved global collaboration in sharing cybersecurity threat intelligence.


How can we collaborate to ensure these challenges are addressed through multi-stakeholder or international cooperation?

speaker

MODERATOR


explanation

This question explores ways to enhance international cooperation in addressing cybersecurity challenges.


Should we have an international cybersecurity framework or is bilateral cooperation enough to establish capacity programs?

speaker

MODERATOR


explanation

This question considers the most effective approach for international cybersecurity collaboration and capacity building.


What can we do to deter or stop governments from attacking people and different political interests?

speaker

Samaila Atsen Bako


explanation

This question addresses concerns about government-sponsored cyber attacks and how to prevent them.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #70 Improving local online service delivery in a global world


Session at a Glance

Summary

This discussion focused on improving local online service delivery in a global context, with a particular emphasis on the Local Online Service Index (LOSI) methodology. Speakers from various countries shared their experiences and challenges in implementing and assessing e-government services at the local level. The session highlighted the importance of aligning national and local government strategies for digital transformation, with Saudi Arabia presenting a successful model of coordination between different levels of government.

Key challenges identified across multiple countries included low digital literacy, funding scarcity, and lack of specialized human resources. The discussion also addressed the need for standardization of services across different municipalities within countries, as well as the difficulties in assessing local government services due to varying organizational structures and service provision models.

Several countries, including India, Tunisia, and Cambodia, shared their experiences in implementing LOSI and other assessment frameworks. These case studies demonstrated the value of such assessments in identifying areas for improvement and benchmarking progress. The United Arab Emirates presented their digital maturity model, which incorporates elements of LOSI and other best practices.

The discussion also touched on future trends, including the potential use of artificial intelligence in assessing and improving e-government services. Speakers emphasized the importance of continuous improvement and the need for innovation in local government, even if it means accepting some level of failure in the process.

Overall, the session underscored the global nature of the challenges in local e-government development and the potential for international cooperation and knowledge sharing to drive improvements. The LOSI methodology was presented as a valuable tool for guiding and assessing progress in this area, with potential for further refinement and expansion.

Keypoints

Major discussion points:

– Challenges and opportunities in applying the Local Online Services Index (LOSI) methodology to assess local e-government services

– Experiences of different countries in implementing and using LOSI

– Alignment and cooperation between national and local levels of government in digital transformation

– Use of AI and other technologies to improve local e-government services

– Importance of citizen engagement and meeting local needs in e-government

Overall purpose:

The goal of the discussion was to share experiences and best practices in assessing and improving local e-government services using the LOSI methodology, as well as to explore challenges and future directions for local e-government development.

Tone:

The overall tone was informative and collaborative. Speakers shared their countries’ experiences in a factual manner, while also expressing enthusiasm for improving local e-government. There was a sense of mutual learning and desire for cooperation among participants. The tone became slightly more urgent towards the end when discussing the need for innovation and alignment between government levels.

Speakers

– Dimitrios Sarantis: Senior Research Analyst, UNU Operating Unit on Policy-Driven Electronic Governance (UNU-EGOV), Portugal

– Angelica Zundel: Consultant for UN

– Ayman Alarabiat: Professor, Al-Balqa Applied University, Jordan 

– Gayatri Doctor: CEPT University, India

– Mehdi Limam: Member of the Tunisian E-Governance Society

– Abdulaziz Zakri: Representative from Digital Government Authority (DGA), Saudi Arabia

– Manal Al Afad: Digital government and open government expert, Telecommunications and Digital Government Regulatory Authority, United Arab Emirates

– Yin Huotely: Representative from Monitoring and Evaluation Department, Digital Government Committee, Ministry of Post and Telecommunication of Cambodia

– Vannapha Phommathansy: Representative from Digital Government Centre, Ministry of Technology and Communications, Laos

– Nevine Makram Labib Eskaros: Professor, Chair of Computer Information Systems Department, Sadat Academy for Management Sciences, Egypt

Additional speakers:

– Delfina Soares: Professor

– Young-Hwan Jin: Representative from the Seoul National University research team

Full session report

Improving Local Online Service Delivery: A Global Perspective on LOSI Implementation

This comprehensive discussion brought together experts from various countries to explore the challenges and opportunities in implementing and assessing local e-government services, with a particular focus on the Local Online Service Index (LOSI) methodology.

LOSI Methodology and Components

Dimitrios Sarantis introduced the LOSI methodology, explaining that it comprises six criteria:

1. Institutional framework

2. Content provision

3. Services provision

4. Participation and engagement

5. Technology

6. E-government leadership

The LOSI assessment evaluates 95 indicators across these criteria, providing a comprehensive view of local e-government capabilities. Sarantis also highlighted the complexities in assessing local government services due to varying organizational structures and service provision models across different countries and municipalities.
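To make the aggregation idea concrete, here is a minimal sketch of how a LOSI-style composite score could be computed from indicator results. The per-criterion indicator counts and the sample city values below are invented for illustration; this is not the official UN DESA/UNU-EGOV computation or weighting.

```python
# Illustrative LOSI-style scoring sketch (hypothetical, not the official
# UN DESA/UNU-EGOV formula). Each indicator is assessed as met (1) or not
# met (0); a criterion score is the share of its indicators met, and the
# overall index is the unweighted average of the criterion scores.

# Hypothetical split of the 95 indicators across the six criteria.
CRITERIA = {
    "institutional_framework": 10,
    "content_provision": 25,
    "services_provision": 30,
    "participation_and_engagement": 15,
    "technology": 10,
    "e_government_leadership": 5,
}

def criterion_score(met: int, total: int) -> float:
    """Share of a criterion's indicators that the city satisfies."""
    return met / total

def losi_index(met_by_criterion: dict) -> float:
    """Unweighted average of the six criterion scores, in [0, 1]."""
    scores = [criterion_score(met_by_criterion[name], total)
              for name, total in CRITERIA.items()]
    return sum(scores) / len(scores)

# Hypothetical city: strong on content, weak on participation and leadership.
city = {
    "institutional_framework": 8,
    "content_provision": 18,
    "services_provision": 15,
    "participation_and_engagement": 6,
    "technology": 7,
    "e_government_leadership": 2,
}
print(round(losi_index(city), 3))  # 0.587
```

The published methodology may normalize or weight criteria differently; the sketch only illustrates the indicator-to-criterion-to-index aggregation and how low sub-scores (here, participation and leadership) surface as the areas to work on.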

Country Experiences and Challenges

Several countries shared their experiences in implementing LOSI and other assessment frameworks:

1. Jordan: Ayman Alarabiat highlighted issues with awareness, resources, and resistance to change. He emphasized the need for capacity building and change management strategies.

2. India: Gayatri Doctor reported that the LOSI assessment helped identify gaps in online services and challenges with distributed portals. The assessment process led to improvements in service delivery and citizen engagement.

3. Tunisia: Mehdi Limam explained how they leveraged EGDI scores and the UN toolkit to implement LOSI effectively. He stressed that LOSI helps identify weaknesses and benchmark digital maturity, driving continuous improvement in local e-government.

4. Cambodia: Yin Huotely emphasized the need to improve connectivity and digital literacy.

5. Laos: Vannapha Phommathansy highlighted additional infrastructure and adoption challenges due to the country’s early stage of development.

6. United Arab Emirates: Manal Al Afad presented their digital maturity model, which incorporates elements of LOSI and other best practices.

7. Saudi Arabia: Abdulaziz Zakri shared a successful model of coordination between different levels of government, aligning national and local efforts through governance frameworks. He highlighted plans to expand their e-government initiatives to ten major cities across the country.

Common challenges across countries included low digital literacy, funding scarcity, lack of specialized human resources, and the need to improve connectivity and infrastructure.

UN Local Government Toolkit

Angelica Zundel presented the UN Local Government Toolkit, designed to support LOSI implementation. This resource provides guidance and best practices for local governments looking to improve their online services.

Strategies for Improvement and Future Directions

1. Alignment of national and local efforts: An audience member, Delfina Soares, raised the question of how local governments align their efforts with national-level strategies. The discussion highlighted the importance of coordination across different levels of government.

2. Expanding the LOSI network: Sarantis explained the purpose of the LOSI Network in facilitating knowledge sharing and addressing common barriers.

3. Leveraging emerging technologies: Nevine Makram Labib Eskaros from Egypt proposed using artificial intelligence to assess and improve e-government services, presenting a five-stage framework for AI integration in e-government assessment.

4. Regular assessment and benchmarking: Speakers emphasized the value of LOSI in identifying areas for improvement and tracking progress over time.

5. Addressing diverse local contexts: The discussion acknowledged the need to balance standardized assessment with country-specific needs and contexts.
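Point 4 above, regular assessment and benchmarking, can be sketched as a simple score-delta comparison between two biennial assessment rounds. The city names and scores below are hypothetical, for illustration only.

```python
# Hypothetical sketch of tracking LOSI-style progress across biennial rounds.
# City names and scores are invented for illustration only.

rounds = {
    "CityA": {2022: 0.41, 2024: 0.55},
    "CityB": {2022: 0.62, 2024: 0.60},
    "CityC": {2022: 0.30, 2024: 0.48},
}

def progress(scores_by_city: dict) -> list:
    """Score change per city between the two rounds, biggest gain first."""
    deltas = [(city, s[2024] - s[2022]) for city, s in scores_by_city.items()]
    return sorted(deltas, key=lambda pair: pair[1], reverse=True)

for city, delta in progress(rounds):
    print(f"{city}: {delta:+.2f}")
```

Sorting by delta rather than by absolute score highlights improvement over time, which is the tracking use case the speakers describe, rather than just a static ranking.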

Key Takeaways and Future Considerations

Dimitrios Sarantis concluded the session with key takeaways:

1. The importance of continuous improvement in local online service delivery

2. The value of knowledge sharing and learning from diverse experiences

3. The need for ongoing refinement of the LOSI methodology

Future considerations include:

1. Standardization of assessment across diverse government structures

2. Effective engagement of policymakers in utilizing LOSI results

3. Addressing resource constraints, particularly in developing countries

4. Ethical and effective integration of AI in e-government assessment

The session ended with a call for participants to complete a questionnaire regarding services provided at the local government level, further contributing to the ongoing development of LOSI.

Conclusion

The discussion underscored the global nature of the challenges in local e-government development and the potential for international cooperation and knowledge sharing to drive improvements. The LOSI methodology was presented as a valuable tool for guiding and assessing progress in this area, with potential for further refinement and expansion. As countries continue to develop their local e-government services, the insights shared in this discussion provide a foundation for more targeted and effective improvements, ultimately aiming to enhance service delivery for citizens worldwide.

Session Transcript

Dimitrios Sarantis: Okay, we start. I welcome you to the IGF session entitled Improving Local Online Service Delivery in a Global World. My name is Dimitrios Sarantis, Senior Research Analyst in the UNU Operating Unit on Policy-Driven Electronic Governance (UNU-EGOV) in Portugal, together with Angelica Zundel and Denis Husser from the United Nations Department of Economic and Social Affairs. And we will moderate this session. I would like to welcome our eight distinguished speakers on the panel. And before starting the session, I would like to make a brief reference to the session structure and describe its sections. Firstly, I invite online participants to submit their comments and questions in the Q&A section of the online session. With the help of Angelica, we will gather all of them and do our best to transfer them to our panelists. Of course, participants in the room can ask questions orally. Online participants also have the possibility to make comments if they request it. So let's go quickly into the session. The first part, the first section, is entitled Opportunities and Challenges in Applying LOSI. In this section, panelists will identify benefits of applying LOSI in their countries. They can suggest possible uses of LOSI application results and ways of using them in policymaking. They can mention examples from their own experience. They will suggest ways to engage policymakers and local researchers in LOSI network activities. And they may identify challenges of applying LOSI and possible ways of facing them. And they will suggest, hopefully, ways of improving the LOSI methodology and expanding the network. After that, we will have a Q&A section. Then Angelica will briefly present the local government LOSI toolkit. She will explain this toolkit, which was created by UN DESA and UNU-EGOV in order to support LOSI application. The next section is Local Government Present and Future. So panelists will present the existing needs of citizens and cities in their countries. 
They will describe ways, for example, applications, technologies, innovations, of facing these needs at the local government level. They will also suggest future trends in local government coming from their countries. And also, they may suggest ways that we can assess local government development, and challenges and ways of facing these assessment methods, and possible ways of collaboration and funding sources, which are problems that we face currently. Okay. Before going to the panelists, I will present, let's say, the main topics of the LOSI instrument, which is a collaboration of UN DESA and UNU-EGOV. We started this collaboration back in 2018, and the reason was to support local government development. There is the EGDI to assess e-government at the national level. So because the citizen is closer to local government than to the national one, we thought that there was a need to cover that gap. So we came up with this methodology. Very briefly, at the moment, the methodology comprises 95 indicators, and it is divided into six areas, six criteria. In brackets, you can see the indicators that we use for each criterion. The first one is institutional framework. The next one is content provision, so we assess aspects about content. Then services provision, so what online services are provided by the city. Participation and engagement. The next one, technology. And the recently introduced one, e-government leadership. Together with this methodology, which is applied biennially by UN DESA and UNU-EGOV, and whose results are published in the United Nations e-government survey, we use the local government questionnaire, which gathers information from local government municipalities in preparation for the upcoming survey. This gives us a better insight into local governments, because public officials give us this information. The results from this assessment are published in the United Nations e-government knowledge base biennially. 
And you can see them in detail, all the results of the city. So we will not spend more time on that. We apply this methodology in the most populous city of each country, of the 193 member states. And because there is interest in applying it in more cities worldwide, we designed and launched the idea of the LOSI Network, which invites interested institutions to participate with their resources and with our support, UN DESA and UNU-EGOV, in order to apply the LOSI methodology in a larger number of cities in their country, after signing a memorandum of understanding. Okay. Now, I think it is the right time to go to the first section, where we will see partners that have already joined the LOSI Network, and they will talk about their experience. The first panelist is Professor Ayman Alarabiat, from Al-Balqa Applied University. Ayman, the floor is yours.

Ayman Alarabiat: Good morning, everyone. First of all, I would like to thank UN DESA, UNU-EGOV, IGF, and also the Saudi government for organizing this forum. I will start by telling you the story of how I got involved in LOSI. Of course, the main reason is my teacher, Professor Delfina Soares. I like the concept, and I think it's very crucial, very important, to evaluate and assess e-services at the local level. Two years ago, Dimitris and I conducted a study in which we evaluated around 19 city portals or websites in Jordan. Jordan is a small country in the Middle East, but we, as Jordanians, believe it's great because of its people. At the local level, we have two administrative levels of government: one under the Ministry of the Interior, while the municipalities are controlled by the Ministry of Local Administration. We have around 100 municipalities in Jordan; however, the majority of them have financial problems because of the difficult financial situation in Jordan. The main expense of Jordanian municipalities is salaries: around 50 to 85% of a municipality's budget goes to salaries. As I said, we evaluated around 19 cities in Jordan. For our methodology, Dimitris and I agreed to remove around 16 indicators from LOSI that are related to service provision, the reason being that these services are provided by national departments or national ministries in Jordan. As for our results, all Jordanian municipalities were ranked in the middle or lowest group, except one city, the capital, Amman. All Jordanian municipalities face major challenges and limitations in technology, service provision, and citizen engagement. However, our results indicate that small municipalities sometimes do better than larger municipalities, despite having fewer resources. 
Maybe that's related to the vision or the strategy that those small municipalities have adopted. Now, after we finished our study, we promoted LOSI in Jordan. How did we do that? We sent our report to the Ministry of Local Administration. We also held interactive lectures with the Greater Amman Municipality, and we had an online session with the Arab Smart Cities Network located in Jordan. Around 100 people from around 70 municipalities attended that online session. Now, for the challenges and opportunities: in fact, we face a main challenge regarding the awareness of local administration officials about international evaluations. They are not familiar with them. They also have limited resources, and digital transformation is not at the top of their priorities. Of course, there was resistance to change that we found when we talked to them. I have also delivered training programs at the IPA, the Institute of Public Administration in Jordan, many times. The attendees came from many municipalities, and they do not have full awareness of the importance of digital transformation; and if they do, they don't know how to move to digital transformation. The second challenge is how we can transform these theoretical results into practical actions. As for opportunities, I think we should work on a long-term strategy for engagement. We have started some initial talks with the Ministry of Local Administration in Jordan. And we are also trying to collaborate with potential partners like the Arab Smart Cities Forum; they have expressed their willingness to participate with us in the next study that we are going to do in 2025. So, thank you very much for listening. Of course, any valuable suggestions will be welcome. Thank you very much. Thank you, Dimitris.

Dimitrios Sarantis: Thank you, Professor Alarabiat, especially for the challenges and the ways that we can move forward in engaging policymakers in using LOSI results. Let's move now to the second speaker, Dr. Gayatri Doctor from CEPT University in India, who is an online speaker. Dr. Gayatri, we welcome you. Please, the floor is yours.

Gayatri Doctor: Thank you very much. First of all, I would like to thank UN DESA, UNU-EGOV and IGF for having this open forum and giving an opportunity to talk about the LOSI experience and the pilot studies that we have done in India. Can I have the next slide, please? So, as everybody knows, India is a large and diverse country. We started off with the first pilot of LOSI in 2023, where we assessed the most populous city in each state and the union territories, and we applied the LOSI methodology. This came out to 27 states, 9 union territories. But, of course, two cities did not have municipal government portals, so our total assessment was of 34 cities. I had a student researcher who worked with me on this, Soumya Mehta. In the second pilot, we decided to concentrate on only one state, which was Gujarat. And in Gujarat, we targeted 53 urban local bodies, or 53 cities, across municipal corporations and municipalities, with municipal governance portals. And we also tried to study two cities that did not have a municipal government portal. This was also done with a student researcher, Devanshi Shah. Can we move to the next? In the Indian context, there is a variety of portals available. We have central government portals of the Government of India. Each and every state has state-governed portals. Then comes the district level, where there are district portals. And we have the city websites, which could be municipalities, municipal corporations, or city councils. This breakup is basically made on the basis of the population of the cities. So there are multiple modes of service delivery to access citizen services. There are some services available on district, state, or central portals. And the urban local bodies, that is, the city websites, according to the Indian jurisdiction system, are supposed to perform certain specific services. 
We have something called the 74th Amendment and 18 services which a city should perform. When we did these studies, both of the pilots, we could identify cities with high, medium and low maturity. We did not have anybody with very high maturity. The LOSI methodology helped us to assess and improve the efficiency, accessibility and quality of the local e-government services. Basically, when we assessed the 34 cities across the country, it was a little more difficult, but when we assessed a particular state, which was the 53 cities, we could immediately tell the type of accessibility, the type of services and the quality of the local e-government service. The next slide, please. So of course, when we did LOSI across both the pilots, we could identify the gaps in the online services. There were some cities in the high, some in the middle, some in the low group. So why were they there? We could see there was transparency and accountability in the cities, and it helped us make informed decisions as to which cities needed more information on their websites and things like that. With the introduction of the LGQ, though we did not get responses from many cities, it gave an understanding of how the government also appreciates this particular evaluation. The experience of users can also be evaluated. Next slide, please. Of course, we did have challenges in applying LOSI, because all the services were not available on one city portal. They were distributed across different portals like the district, the state, the central, or some parastatal bodies. Also, some cities did not have an active or updated MGP, and for some of them the data was incomplete. There is some tax information which is supposed to be mentioned as part of LOSI, but in India there are various forms of tax: income tax, property tax, professional tax. 
So it was difficult to capture all the elements in the current LOSI. And also, under LOSI's institutional framework setup, the organizational structure has to be defined, which varies depending on the classification of the cities. So there were these types of challenges. And India being a diverse country, we have lots of languages, so the availability of the MGP in multiple languages is also a constraint. Of course, once you give suggestions, there is always some sort of resistance to change, to implementing the suggestions. In addition to which, resource constraints, both financial and human, at the local government level are always there. Next slide, please. When we do benchmarking and best practices, because LOSI helps us to identify the good-performing cities, policymakers can benchmark their services against the successful models so that other cities can adopt these practices. It also helps policymakers to identify areas, allocate resources and do some strategic planning. Of course, over a period of time, you can monitor the LOSI data and track the progress of the cities and their impact over time. Next, please. This is just generic: how do you do it? You can have stakeholder workshops and seminars. You can create policy briefs and reports. Do public consultations and feedback with the users, because citizen-centric services must have interactions with the citizens. And being in a country where there are a lot of different levels of government, we could have some intergovernmental collaborations with an approach to improving e-government services. Next, please. So on the whole, I would say that LOSI is a very valuable tool for assessing and improving e-government services at the local level. 
It helps in service improvement, resource allocation, policy formulation, benchmarking and best practices, capacity building and citizen engagement, and transparency and accountability. The way forward: we are going to be doing a LOSI pilot 3 in 2025, where we would also be studying, in the Indian context, a state-level e-service delivery system. So we would be studying that and comparing it with LOSI. Thank you.

Dimitrios Sarantis: Well, India is a huge country, so for us it's a very important partner, because we can gain very useful insight in order to improve LOSI from this type of organizational structure, from this federation type of structure in the country. And I will just highlight one aspect here: the organizational structure in India varies depending on the classification of cities. They have municipal corporations, municipalities, different types of organizational units. And we face this not only in India; we face this aspect also in other countries, where it is not easy to identify the organizational unit at the municipality level, at the city level, that provides the services that we assess in our instrument. For example, in the UK, they have this structure of cities, municipalities, boroughs, the smaller ones. So which one provides the services? This comes up in many places and is an issue that we should discuss in the future and find solutions to. Anyway, we will proceed. Thank you, Dr. Gayatri. Now the next panelist is Mehdi Limam from the Tunisian E-Governance Society. The floor is yours.

Mehdi Limam: Good morning, everyone. I'm Mehdi Limam, a member of the Tunisian E-Governance Society, and today I have the pleasure to share with you the experience of implementing the LOSI methodology in Tunisia and to discuss the opportunities and challenges it presents. We will take a look at the benefits of LOSI, key steps for implementation and the path forward for applying this methodology. Let's begin with the first slide. This is a representation of the results of the implementation of LOSI in Tunisia. We evaluated 24 municipality portals across the 24 states. As you can see, only 9 achieved the rank of middle, while the rest have low scores. We won't dive too much into the results. For full details, I encourage you to consult our report, which outlines our findings and insights from our evaluation. Next, LOSI has proven to be a powerful tool in enhancing local government. By evaluating the government portals, LOSI can help identify specific weaknesses, enabling municipalities to improve their services, which can lead to more user-friendly and efficient portals. Additionally, benchmarking digital maturity through LOSI provides the municipalities with clear metrics, allowing them to strategically plan their digital transformation. One of LOSI's greatest strengths is its ability to foster collaboration by learning from global best practices. LOSI enables governments to benefit from shared experiences. Countries that apply the methodology can serve as valuable case studies. Now, in the next slide, the question is how countries can effectively implement LOSI. Based on our experience in Tunisia, there are three key steps. First, as we can see, we can start with the EGDI scores. Analyzing the country's EGDI scores provides a strong foundation for understanding the digital maturity of the country. The next step is leveraging the UN Local Government Toolkit, as we see in the next slide. This toolkit provides guidance on LOSI indicators with concrete examples. 
And the final step: we suggest studying the country's background to determine which criteria are likely to be present and which ones are not likely to be provided. This will help save time and ensure a more efficient evaluation. But implementation is just the beginning. To really leverage LOSI, we must build on opportunities. Next slide, please. Next. Thank you. First, collaboration is key. Partnering with other countries interested in LOSI allows for the exchange of experiences and best practices. We're currently working on collaboration. Second, we must invest in training municipal employees on emerging technologies. Civil society can lead this effort to build local capacities. We also recommend launching initiatives for data collection and publication. Civil society can collaborate with municipalities to make data accessible on portals, which increases transparency and citizens' trust. Finally, fostering public-private partnerships is essential. This collaboration can expand e-services like digital payments. The private sector can provide expertise and resources for municipalities to deliver more modern and better solutions. Of course, we had some challenges: engaging policymakers and ensuring the methodology's adoption at the local level can be difficult. Limited resources, both technical and financial, often slow implementation. Also, ensuring consistent stakeholder engagement and overcoming resistance to change in local government can be challenging. But by expanding the LOSI network, refining the methodology and involving stakeholders, we can address these barriers. In conclusion, this was our experience and findings at the Tunisian E-Governance Society in implementing the LOSI methodology. I would like to extend my gratitude to Vivienne, UN DESA and UNU-EGOV for the opportunity and their trust in our work. I also want to thank our entire team for their tremendous efforts in making this implementation a success. 
And to all of you here today for following and engaging with us, we are open for collaboration and look forward to strengthening local governance globally. Thank you.

Dimitrios Sarantis: Thank you, Mehdi. Tunisia is one of the most recent LOSI partners in our network. So now it is time to open the first Q&A section for comments and questions to our panelists. Allow me to put the first question to Mehdi, starting from what he mentioned at the end about the collaboration with policymakers using the LOSI results. So I would like to ask you, Mehdi: have you tried, or have you succeeded, to disseminate and share the knowledge produced from the LOSI application in your country with government officials and decision-makers? And if so, or even if not, how do you think they should use the extracted results? And give us some thoughts on what could be the next steps to improve local government development in Tunisia using the LOSI output.

Mehdi Limam: Well, thank you. Currently, we have a slight problem. There is hesitance in local governments because we are waiting for the elections. The local entities' mandates are over, but the elections are delayed. So there is hesitance to make any change while waiting for the newly elected officials.

Dimitrios Sarantis: Okay. Thank you, Mehdi. So I now invite the participants, or the online participants, if they have any questions, please, it is the time to ask them now. Yes, please. Please also introduce yourself for all the participants.

Audience: Hello. Yeah, my name is Young-Hwan Jin from the Seoul National University research team, which applied the LOSI methodology in South Korea. We just concluded all the research and submitted the drafts to UN DESA, and we are waiting for the comments right now. And this is a very great opportunity to have a very strong network to enhance the status of local online services by measuring them and suggesting new directions for local online services. I believe this index is updated every two years, and it continuously suggests the most challenging indicators, such as the rise of the use of AI and updates to internet standards. But here I would like to ask one question about the new future direction of LOSI. I believe that one of the strongest roles and purposes of LOSI is providing a guideline for local governments, acting like a lighthouse, showing them the way, what the future looks like. So here I would like to mention the struggles, I'm not going to call them challenges, but struggles, that we have in South Korea. As background, South Korean cities are among the most developed in terms of digital government and online services. Our research shows that most of the cities have very high scores in terms of LOSI. But the hardest part we had is that some of the services indicated in the LOSI index are not authorized to be provided by the local government, because a lot of the services mentioned in LOSI are not allowed to be provided by the local government. Some of them are controlled and provided by the central government or the state or provincial governments. I believe a very similar problem was already mentioned in the Indian case and in other cities. 
In these kinds of cases, what future direction should UN DESA and the UNU take? That is the question I would like to ask: how do we deal with these kinds of struggles? Thank you.

Dimitrios Sarantis: Thank you. Thank you very much for the question. Well, I don't know if I have the answer. The only thing that I can say is that we have clearly identified this problem. This is one of the basic problems that we face in designing the indicators, specifically for the services criterion. Because, as our colleague here identifies, not all cities in the world provide the same services, or are authorized by the government to provide the same services at the local level. So this is an issue that we have to face. We try to do our best and identify, let's say, the most common services that are used in all cities around the world. It's very difficult to do that. What I can say is that the LOSI Network is, maybe partly, a solution to this problem, because what we do with our partners in each country is to identify their specific needs in their national context and adjust those services to the specific case. So this is a solution. But again, when we do the UN e-government survey every two years, it is very difficult to find the ideal set of services for each city; you can understand that. Finally, an answer to this would be that the citizen, in the end, is not interested in whether he or she receives the service from the city or from the central government; in the end, they should receive the service that they expect to receive. So we have this approach in our methodology. We don't ask in the assessment whether the city provides this service, but whether the municipality website gives access to this service. So, for example, if they give the link to the national government for this service, this is enough, because the citizen can receive the service in the way that he wants. But yes, this is a very interesting hot topic for us, and for this reason, we have something for you at the end. We have made a questionnaire in order to identify, maybe, how to improve this set of services that we ask about. 
You will see it at the end, and we are asking all of you to fill it in. Thank you very much for your question. Any other questions, or should we move on? If not, okay. Now, Angelica from UN DESA will take the floor, and she will give you a brief presentation of a very useful toolkit that we have designed and offer to everyone. It is publicly open, and she will say more about that in a while. Angelica.

Angelica Zundel: Thank you, Dimitris. Hello, everyone. For those who don't know me, my name is Angelica and I work as a consultant for the UN. Let me quickly show you, so I'm just presenting first this QR code, which will give you access to our UN eGovKB city data page, on which you can access the latest results for the biggest city in your country. So I'll just give you a few seconds to scan this or insert the link, and then I'll quickly show you what that looks like on the website. And of course, I can also paste the link in the chat afterwards. So hopefully you can see, yes, you can see my screen. So this is the page you'll land on. Again, this is the city data of the latest UN e-government survey, and you can find all the major cities. So let's say I'm interested in Istanbul's latest findings. You'll land on this page, which will give you an overview of the LOSI of 2024, compared to the world leader as well as the sub-region leader, which is Riyadh. And then you'll find more granular data on each of the sub-components of the LOSI. And this is especially useful for you to understand where your strengths and where your weaknesses are. So in this case, if I were Istanbul, I would understand that technology and e-participation perhaps are two of the sub-components that I'd need to work on more in order to improve my LOSI performance. Now, with this sort of knowledge in mind, we encourage you to then check out our local e-government toolkit, which again you can access here through this QR code or the link. I'm just giving you a few seconds to scan that, but again, I'll share that on the chat and also by email. It's especially useful to use in complement to what you just saw on the city data page, because the local e-government toolkit is essentially structured around the LOSI and its six sub-components. Each module, sorry, I hope you can hear me, is based on these sub-components. 
And so, from what I saw as Istanbul, let’s say I’m interested in the technology sub-component. I’ll click here and land on a slide deck, which gives me information on which indicators are assessed in this sub-component as per the latest 2024 survey. Here is an index of all the different indicators, just to give you an overview. Let’s say I’m interested in what exactly one of them consists of. I’ll click on the slide, and each slide, which has the same structure, gives you, for each indicator, an explanation of what it is exactly, why it is important, a generic guide on how to implement it in your local government portal, and a case study or guide for each indicator. I’ll keep it short, because we have other presentations, but this is a little preview of how to use this toolkit, as well as the city data page, to really ensure that you can improve your LOSI performance. So back to, I think, the main slides. Or rather, I think we’ll proceed with the fourth presentation. If you have any questions, feel free to put them in the chat or email me. Thank you.

Dimitrios Sarantis: Thank you, Angelica. So yes, please have a look at the toolkit; you will find it very useful. Now we move to the second section, named Local Government, Present and Future. Here we have speakers from countries that have not yet applied the LOSI methodology, but I think most of them are on this track. The first speaker is from Saudi Arabia, Abdulaziz Zakri from the DGA, and he will present a few things to show us why the DGA, Saudi Arabia and Riyadh have made such great improvements in e-government recently. Abdulaziz, the floor is yours.

Abdulaziz Zakri: Thank you, Dimitris. We are very happy to have you here today in the Kingdom of Saudi Arabia. It is my pleasure to be among this audience, and it is a real honor to speak today about Riyadh’s achievement in becoming one of the top three cities globally in local online services. To achieve this status, Riyadh has worked on three driving factors that set it apart. The first factor is Vision 2030: improving the quality of life, enhancing the quality of government services, boosting the digital economy and encouraging digital innovation. The second factor is the digital government strategy with its five pillars: satisfied citizens, enabled businesses, effective government, efficient investment and a regulated ecosystem. The third factor, of course, is about service delivery, citizen empowerment and inclusion, making sure that no one is left behind, and, lastly, daily life improvement. All of them are driven by the Sustainable Development Goals (SDGs). Let us look very quickly at the four foundational pillars of Riyadh’s digital transformation strategy: empowering businesses, prioritizing beneficiaries, encouraging active participation and harnessing emerging technology. Together, these pillars have propelled Riyadh into an innovative, interconnected urban hub where systems, services and people are seamlessly connected. This is a very important part: the Digital Government Authority and Riyadh Municipality have collaborated extensively to implement the LOSI assessment framework, and this relationship between the Digital Government Authority and Riyadh has driven the alignment of municipal services with global standards, enhanced digital processes and integrated innovative technologies.
This relationship has strengthened Riyadh’s position today as a top global performer in e-government. We decided to expand this successful journey to other major cities in the Kingdom of Saudi Arabia, like Mecca, Jeddah, Medina and Dammam. This is not only because of the success story of Riyadh; it is also part of the Saudi authorities’ smart city plan, to be implemented in 10 cities across the country, and this is the first stage of that plan. We believe we will deliver, because we have all the success factors to achieve it. Here is an example of a municipal service for Riyadh, one of the city’s success stories: MyCity. MyCity is an online application that gives Riyadh’s citizens and residents access to essential services and enables the community to participate in improving the urban landscape of the city. One of the services is called Snap and Send: if you want your street fixed, or a street light is broken, or there is any issue affecting visual appeal, you just take a picture and send it, and there is a service level agreement with an agreed time for a response to any issue around you, at any location in Riyadh. We also have a platform that is not only an application; it provides services through the web as well. It is a data-driven platform at national scale, serving all municipalities in Saudi Arabia, including Riyadh. This is also a good example of collaboration between the Ministry of Municipalities and Housing, Riyadh city and the DGA to deliver the best quality of services. Riyadh also provides GIS information to support users in taking decisions based on data. For example, if you would like to start a business, you can choose your location.
Similarly, if you would like to buy a house, you can use this information to make your decision. We also place participation and public engagement as a priority in our decision-making. By law, every policy and every piece of legislation has to be announced to the public to receive their feedback before we make the decision and put it into action, and this guarantees that public participation in policymaking happens, in cooperation with the stakeholders and partners of the city. On the other hand, most decisions related to Riyadh are taken through consulting people, and services are designed and improved through co-creation activities to ensure user satisfaction and service efficiency. Of course, if you would like to report any issue or complaint in Riyadh, we have these services, and we also offer open data for the public to use and to build success cases from Riyadh’s data. This is the last part, and one of the slides I like most about Riyadh. As you can see, these are some of the giga-projects in Riyadh, such as the Green Riyadh project, King Salman Park and the Riyadh Development Program. Speaking briefly about two projects, let’s start with Green Riyadh. It is about transforming the city with millions of trees, for cleaner air and sustainable spaces for a healthier, greener future. King Salman Park, as you can see, is one of the largest urban parks in the world; it is all about green spaces, cultural hubs and recreation, to create a vibrant and thriving Riyadh. I will end with this slide. Thank you very much for your attention.

Dimitrios Sarantis: Thank you very much, Abdulaziz, for your excellent presentation. You gave us a flavor of why Riyadh and Saudi Arabia have made this great progress in e-government, because you showed us real, practical applications that serve citizens in daily life. What I also took from your presentation is that the success of local government in Riyadh will not stop there, but will work as a model to be expanded to the rest of the cities in Saudi Arabia; maybe this is a pattern that can be followed in other cases as well. Thank you very much. Let’s move to the next speaker, who comes from the United Arab Emirates: Manal Al Afad from the Telecommunications and Digital Government Regulatory Authority. The floor is yours.

Manal Al Afad: Thank you, Dimitris. First of all, I would like to thank you, DESA and UNU for the opportunity to be part of this distinguished panel and workshop, and to thank the Kingdom of Saudi Arabia for hosting this fabulous event. It is my first time attending the IGF, and it is truly my pleasure. This is Manal Al Afad. I am from the United Arab Emirates; I am a digital government and open government expert, and I lead the UAE competitiveness profile. I am here today to present the UAE’s story of implementing LOSI unofficially, along with some other instruments. The UAE’s commitment to digital transformation is fundamental to the way the UAE works. Our strategy has four pillars: forward society, forward economy, forward diplomacy and forward ecosystem, which is the main pillar we are talking about; it covers digital transformation and providing the most prestigious, interactive, proactive services for the whole community in the UAE. What does the citizen need? All of us here are citizens, whether expat or local, a tourist or someone from outside the country; all of us seek a service and want to know how to get it. So in the UAE we focus on leveraging advanced technology, fostering seamless interaction between government and society, and ensuring equitable access to essential services, whoever the user is. The services are digital, yes, but we also need to reach the person who cannot use digital tools; we need to go to them and provide the service. How do we assess local e-government in the UAE? We use two instruments. One is the LOSI methodology, which we are implementing across the seven emirates, and the second is the UAE digital maturity model, which is built using best-practice models.
One of these models is the UN E-Government Survey, and the LOSI methodology is built into the DNA of this digital maturity model, which consists of three pillars: leadership and policy, technology accelerators, and organizations and data. This framework is implemented across the seven local emirates, or digital governments, and 14 federal entities, the main sectors providing services to the community, and it is assessed every two years; last year was the baseline, and you can access it online, of course. What have we done for the wider world? We are cooperating with BSI, transferring this digital maturity model into a PAS standard, PAS 2009:2024. Alhamdulillah, we released this PAS in cooperation with the British Standards Institution in February: a digital maturity standard for government organizations, a guideline for any digital government or organization all over the world, for the strategic integration of technology, for efficient service delivery and for global applicability. The standard is available to anyone; you can download it from the BSI website. Now I would like to present the Abu Dhabi case: the government services platform, a super app providing digital services for the community, branded TAMM, which in English means “done”. This platform, TAMM, offers more than 700 city services for all citizens across the emirate of Abu Dhabi, and even for tourists. The application is available, and you can check it online, as well as the website. It has 2 million users and handles more than 10 million digital government transactions annually.
It serves 221 women using services dedicated only to women, 57,000 elderly (senior citizens), 99,000 young people, and 15,000 people of determination using services specified for this category through the platform in Abu Dhabi. This super app has saved 24,300,000 visits to customer service, including 1,900,000 visits by the elderly and 382,000 visits by people of determination. That was the 2023 data, and we now have new figures for the Abu Dhabi platform. The Abu Dhabi super app is in fact one of the success stories at the local level: it has now been transferred to an organization, with a director general and a service factory producing services at the local level in cooperation with all the organizations under the Abu Dhabi digital government. With that, I would like to thank you all for listening; I hope it was not too long, and I will be happy to take any questions.

Dimitrios Sarantis: Thank you. It’s not a question, just a comment: you mentioned that you do regular assessments in the UAE, every two years, and maybe this is one explanation of why the UAE is well ahead in digital government. You assess, and then you improve things. Let’s move on; we may have time at the end for questions. The next speaker is Yin Huotely from Cambodia, from the Monitoring and Evaluation Department. Please, the floor is yours.

Yin Huotely: Thank you so much. Good morning, everyone. First of all, I would like to thank the organizers, especially Dimitris, for giving me the opportunity to join such a wonderful event. This is my first time joining this forum. My name is Ellie; I am from the Digital Government Committee, Ministry of Post and Telecommunications of Cambodia, and I am in charge of the M&E department. I was very impressed by the presentations from Saudi Arabia and the UAE; here I will just give an overview of local e-government in Cambodia, because we are quite young and may be behind the UAE and Saudi Arabia. We come here to seek recommendations and cooperation from UN DESA and UNU-EGOV for future collaboration. Thank you. I have a short presentation. Before we go deep into the local services provided by local government to local citizens, let me give an overview of e-government in Cambodia. We started the e-government project in 2002, under financial and technical support from the Republic of Korea. The first e-government project in Cambodia, which we call GAIS, the Government Administration Information System, had five core information systems: FEC, the real estate information system, the electronic approval system, the resident information system, and the vehicle registration information system, plus the connectivity. The scope of the project over those five years was very large. At that time, e-government was implemented by involving all 27 line ministries, but with only one implementer, so we can say that ownership was a big problem. When the project ended, some of the core systems were no longer used and moved to the specialist line ministries; for example, the real estate information system moved to the Ministry of Land Management and Planning, using the LMAB.
As for the electronic approval system, it is no longer used; the MPTC, the Ministry of Post and Telecommunications, now uses a document workflow system for internal use only. The resident information system moved to the Ministry of Interior, which from that point implemented a new system of its own, and the vehicle information system moved to the Ministry of Public Works and Transport, which also implemented its own system. During the period of 2010 to 2020, the ownership issue was solved, but the line ministries began to implement their own systems, so silo systems increased and interoperability issues arose. From 2020, to solve this problem, the Royal Government of Cambodia worked on two policy frameworks, the Digital Society and Economy Policy Framework and the Digital Government Policy Framework, and four core DPI platforms, digital platforms for the core of government, were established and recognized by the government and are used as the central interoperability platforms. The first is called Verify, a digital authentication platform at verify.gov.kh. It is a government platform that can verify all government-issued documents via a standard QR code; the system uses blockchain technology and is very accurate. Right now, more than 600,000 certificates are using the Verify platform. We also have a digital platform called CamDX, for data exchange, which enables online business registration: the platform interoperates among the line ministries involved in business registration, so the user can use a single platform to register their business online. We also have a digital payment platform called Bakong, through which citizens can transfer money from one bank to another very easily.
Right now there is no charge. And also the digital ID. Here is the current status of digital government in Cambodia; this comes from the digital government policy. As I mentioned, line ministries and institutions have developed their own public systems, which are not interconnected, so information sharing from one ministry to another lags far behind. For services from government to citizens and from government to businesses, citizens can go to a single office called the One Window Service Office, which connects to all the line ministries that provide public services to citizens and businesses. The One Window Service Office now provides more than 1,000 services to citizens and operates countrywide. Also, in 2024 there is a pilot to deploy 114 services fully online for four sectors, agriculture, handicrafts, tourism, and culture and fine arts, in Phnom Penh, the capital city of Cambodia, and in some districts with fully established infrastructure. These are the key challenges for improving local services in Cambodia. The first is limited internet access, what we call connectivity: we still have a problem with internet access, especially in rural areas, and internet quality even in Phnom Penh, the main city of Cambodia, also has some problems. The Ministry of Post and Telecommunications and the telecommunications regulator of Cambodia are taking action on this: they are building more internet infrastructure and have an internet speed test app so that local citizens can report internet quality to the regulator.
Low digital literacy is also a crucial issue for moving local government in Cambodia fully online, because citizens lack basic digital skills, and the digital capacity of One Window Service officers is also limited. Sorry, Yin, I’m sorry, I’m going to have to ask you to summarize, because we have two more presentations and very little time, so please summarize. Thank you so much. Okay, thank you. So the Ministry of Post and Telecommunications is also working with UNESCO on a digital media and information literacy framework, with further actions to follow according to the activities mentioned in the framework, and on addressing the cybersecurity and interoperability issues I mentioned. Thank you so much.
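The Verify document-authentication flow described above (a government-issued document carrying a standard QR code that can be checked against a tamper-evident record) can be sketched, purely as an illustration: the in-memory registry, document IDs and payload format below are invented for the example and are not the actual verify.gov.kh design.

```python
import hashlib

# Hypothetical in-memory registry standing in for the tamper-evident
# (blockchain-backed) record behind a platform like verify.gov.kh.
ISSUED_DOCUMENTS: dict[str, str] = {}

def issue(doc_id: str, content: bytes) -> str:
    """Register a document's SHA-256 hash and return the payload that a
    standard QR code printed on the document could carry."""
    digest = hashlib.sha256(content).hexdigest()
    ISSUED_DOCUMENTS[doc_id] = digest
    return f"{doc_id}:{digest}"

def verify(qr_payload: str, presented: bytes) -> bool:
    """Check a presented document against the recorded hash: both the
    QR payload and the document content must match the registry."""
    doc_id, digest = qr_payload.split(":", 1)
    recorded = ISSUED_DOCUMENTS.get(doc_id)
    return (recorded is not None and recorded == digest
            and hashlib.sha256(presented).hexdigest() == recorded)
```

In the real system the registry would be an append-only ledger rather than a Python dict, and the QR payload would typically be a URL pointing at the verification service, but the check is the same idea: recompute a hash and compare it with the issued record.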

Dimitrios Sarantis: Thank you very much for the presentation and the full picture of the e-government infrastructure you have in Cambodia. As you requested, we are here to help you, so whatever you need regarding national-level or local government assessment, we are here to provide guidance in these areas when you ask. And of course, you are welcome in the LOSI Network, where you can find a lot of support. Moving forward, the next speaker is Vannapha Phommathansy from Laos, from the Digital Government Center, Ministry of Technology and Communications. Please, the floor is yours.

Vannapha Phommathansy: Good afternoon. My name is Vannapha Phommathansy, from the Digital Government Center of Lao PDR. I am the second-to-last speaker, so I will try to be quick. However, I can’t turn it on; I think the slide is not there yet. Okay, I will just start presenting. Lao PDR is a small, landlocked and least developed country, one of the ASEAN countries. Maybe I will go back to the start. All right. In the 2024 release of the UN e-government index, our rank was 152. We improved a little, but there is still a lot to catch up on, even compared to our neighboring CLMV countries. The government also realizes the importance of digital transformation, and we have defined three key pillars: digital government, digital economy and digital society. We are a late adopter, a newcomer in terms of digital transformation. So in 2022, with the help of the United Nations, UNDP, we conducted a country-level digital maturity assessment. For many indicators we also looked into LOSI and into the UN e-government index, and we assessed six key pillars. Out of the five maturity levels, we are at the nascent stage: the country overall scored only 1.7, the ministry level 1.8, and the provincial level 1.3. When we talk about the local online service index, it falls under the provincial level, where we have 18 provinces and 142 districts. At the province level, we still have a lot to improve, especially interconnectivity, languages and local content, as well as digital skills and the quality of services. As for the challenges for Lao PDR, I think we share very similar challenges with other countries and with the other panelists.
These include limited digital infrastructure, silo systems, legacy processes, low digital literacy, and limited funding and resources: as a least developed country with mountainous terrain, a lot of development and infrastructure still needs to be built, which requires a lot of investment. We also have low user adoption, as many people still find it difficult to buy a smartphone or a laptop, or even to go online, plus language and cultural barriers: most content is available only in English, so the Lao language was not on the map. Regulatory enforcement is also a challenge. What we do to help local governments is, first, a government website platform: most cities and municipalities do not have their own platform or portal, so we help them with a zero-code solution; they can just come in, adopt it and use it, plug and play. Second, we have the Laos portal, where we want to gather the portals of all government bodies, both local and central. And we also have a government super application, called GovX, where we onboard services from line ministries but also work with local governments to include their services, such as bus tracking, postal tracking, electronic document tracking and forms. And digital ID will probably be the next step that we have to

Vannapha Phommathansy: take on in order for us to do verification and process electronic transactions. At the city level, we have what we call one-door service centers, now considered a hybrid model: all transactions still have to be done physically, but at least citizens can access information through the application, so they know what licenses they need, how much the application fees are and how long it will take. However, electronic transactions and digital payment are not available yet. As for future plans, we hope to take LOSI and the UN e-government index and map them into our own local digital government index, and we want to assess local governments every two years. Starting from January, we want to do some dissemination and also consultation with our local governments. So the question is: is Laos ready for LOSI? LOSI is very interesting and very good to keep in mind; however, I think we first need to improve our understanding, because at the end of the day, if we assess ourselves now, the answers will all be “no”, and there would be no point. So I think building the infrastructure, making people digitally literate, and getting cities to onboard their services online and enable transactions online will be the first step. As a second step, we will look into LOSI, map ourselves against it and get on board. With that, I think I have no more time. Thank you so much.

Dimitrios Sarantis: Thank you very much for the presentation. This is how LOSI can be used, as a Korean colleague here also commented: as a guide for the country on what should be implemented at least at an initial level, before moving forward. You also identified some challenges, low digital literacy, funding scarcity, lack of specialized human resources, which are common findings in countries with low scores. And, if I remember well, you use a centrally provided platform as a solution for local governments, which is a kind of answer to the resource problem. Anyway, we should move on; thank you very much. Let’s move to the last speaker, who is online. We welcome Professor Nevine Makram from the Sadat Academy for Management Sciences in Egypt. Nevine, the floor is yours. You should unmute, maybe. Can’t hear you, Nevine. Unmute, please. Still can’t hear you. She’s not muted, but I think it’s on her side. Nevine, can you check your sound, please? A few seconds. Ah, there we go. Can you hear me right now? Yes.

Nevine Makram Labib Eskaros: Okay, it was synchronized with my mobile phone. Let me thank you for your very kind invitation; I am really happy to be among all these eminent experts in the domain. I will share some insights on how to make use of AI in assessing government services, especially as in Egypt we are very keen on aligning the AI strategy with all our other strategies and plans for assessing our services. So where is the presentation? I’m sharing my presentation; is it clear? Yes, just click on present; it’s on the lower menu. Exactly, yes, perfect, thank you. I am the chair of the Computer Information Systems Department at the Sadat Academy for Management Sciences, and I am also the vice president of the Egyptian Society for Information Systems and Computer Technology. My main specialization is medical informatics, so I also serve on the board of our Ministry of Health and Population. I thought it would be very useful to make use of artificial intelligence technologies in this very important assessment. By AI, of course, we mean not only artificial intelligence in the narrow sense, but also machine learning, feeding the machine with data so as to take predictions and patterns into consideration in policymaking and decision-making, as well as deep learning and, finally, generative AI. All of these, as we know, are aligned with the achievement of the SDGs. Why have I suggested this framework? Because we are now working on the second phase of our AI strategy in Egypt, and we are redesigning the electronic services provided to citizens, so I thought it would be useful to see whether we can make use of all these technologies.
My proposed framework for leveraging AI to better assess government e-services is composed of five stages: how we collect, prepare and pre-process the data; a very simple AI model that I developed; some assessment criteria; the AI-powered assessment itself, that is, where AI would help; and finally how we can work on continuous improvement. Regarding data collection, we have many data sources; the important ones are government websites, mobile applications, the social media platforms now associated with all the ministries, user feedback forms and performance metrics. For data extraction, I focused on text mining, because with text mining techniques we can extract information from unstructured text such as user reviews, especially as we also have a central system that collects and handles citizens’ complaints about any service. Regarding data cleaning and pre-processing, we pre-process the data before feeding it into the framework, aiming to remove noise and inconsistencies; we also handle missing values with well-known AI-related techniques, so that we end up with good data to feed into the system. In the proposed AI model development, with natural language processing we can analyze the sentiment of citizens, which is very important in assessing user satisfaction. We also apply text classification to categorize user feedback and identify common issues, and topic modeling to discover underlying themes in user feedback, which again is very important.
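The sentiment-analysis and feedback-categorization steps just described can be sketched minimally in Python. This is a toy lexicon-based illustration only; the word lists are invented, and a real deployment would use trained NLP models over actual citizen feedback, as the framework envisages.

```python
from collections import Counter

# Toy sentiment lexicon, invented for illustration; a production system
# would use a trained sentiment model rather than word lists.
POSITIVE = {"fast", "easy", "helpful", "clear", "excellent"}
NEGATIVE = {"slow", "broken", "confusing", "error", "delay"}

def sentiment_score(feedback: str) -> float:
    """Score in [-1, 1]: (positive - negative) over matched words."""
    words = [w.strip(".,!?") for w in feedback.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    matched = pos + neg
    return 0.0 if matched == 0 else (pos - neg) / matched

def categorize(feedback_items: list[str]) -> Counter:
    """Bucket feedback into satisfied / dissatisfied / neutral counts,
    mimicking the text-classification step of the framework."""
    buckets: Counter = Counter()
    for text in feedback_items:
        s = sentiment_score(text)
        label = "satisfied" if s > 0 else "dissatisfied" if s < 0 else "neutral"
        buckets[label] += 1
    return buckets
```

Even this crude scoring shows the shape of the pipeline: raw complaint text goes in, an aggregate satisfaction signal per service comes out, which can then feed the monitoring dashboards described next.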
I also suggested applying supervised and unsupervised machine learning: with supervised learning we use classification to predict service performance metrics, and with unsupervised learning we aim to identify patterns and anomalies. Finally, I considered applying deep learning techniques, especially artificial neural networks, to help with prediction. The benefits of the AI-powered assessment lie in four points. First, we get an automated monitoring system that identifies problems as they arise, a kind of real-time system. We are also able to apply both predictive and prescriptive analytics; in our country nowadays we practice what we call data-driven decision-making, using a data analytics framework, and we are keen on having predictions to support policymaking, especially in our two priorities, healthcare and education, and of course citizen satisfaction. Finally, there is continuous improvement, which is an advantage of applying AI here. When I proposed this framework and thought about how to implement it, I realized we would face many ethical considerations, such as data privacy, which is a challenge given all the data protection laws and regulations, and how to make sure that any AI system developed on this framework helps without posing any threat to humans, especially as in Egypt we are focusing nowadays on responsible AI.
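The unsupervised anomaly-detection step mentioned above can be sketched with a simple z-score rule over a service performance metric. This is a hedged stand-in for the machine-learning detectors the framework actually proposes, and the example data is invented.

```python
import statistics

def find_anomalies(metric_series: list[float], threshold: float = 2.0) -> list[int]:
    """Return indices of observations that deviate from the series mean
    by more than `threshold` population standard deviations; a minimal
    stand-in for unsupervised anomaly detection on service metrics."""
    mean = statistics.fmean(metric_series)
    stdev = statistics.pstdev(metric_series)
    if stdev == 0:
        return []  # constant series: nothing to flag
    return [i for i, value in enumerate(metric_series)
            if abs(value - mean) / stdev > threshold]

# Example: daily completed transactions for a hypothetical e-service;
# the outage-like drop on day 5 is flagged.
daily_completions = [100, 98, 102, 101, 99, 30, 100]
```

In practice clustering or autoencoder-based detectors would replace the z-score rule, but the monitoring idea is the same: flag services whose metrics drift from their own baseline so problems are caught in near real time.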
We have the National Council of Artificial Intelligence that started in November 2019 and in 2023, we had developed a charter for responsible AI, which has many pillars and it’s trying to make sure that any of the AI systems developed in Egypt or used in Egypt should be aligned with our cultural aspects, our values, and making sure that it is for the benefit of humans, not posing any threats. Regarding the assessment criteria, since we’re talking AI, I thought about, yes, yes, okay. That’s kind of the last slide, if you allow me. So here we have the assessment, the usability, the accessibility, and the efficiency. I also added here a proposed roadmap if we are to implement this framework. So we should focus on having data repositories, on having national centers so as to be able to integrate these data, and on developing and deploying solutions. And a main challenge for us in Egypt is the update of the regulations and policies and to ensure the technology trends. Final words. Thank you. By applying this, we can improve citizen satisfaction. Thank you very much.
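The sentiment-analysis and topic-classification stages described above can be sketched in miniature. This is purely an illustrative toy, not part of the proposed framework: the word lists, function names, and sample complaints are invented, and a real deployment would use trained NLP models rather than keyword lookup.

```python
# Illustrative toy only: a stand-in for the NLP stages described in the talk
# (sentiment analysis and feedback categorization). A real system would use
# trained language models; the word lists and sample complaints are invented.

POSITIVE = {"fast", "easy", "helpful", "good", "excellent"}
NEGATIVE = {"slow", "broken", "confusing", "bad", "error"}

TOPIC_KEYWORDS = {
    "healthcare": {"clinic", "hospital", "doctor"},
    "education": {"school", "exam", "teacher"},
    "payments": {"fee", "payment", "invoice"},
}

def sentiment(feedback: str) -> str:
    """Score feedback as positive/negative/neutral by simple lexicon lookup."""
    words = set(feedback.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def classify_topic(feedback: str) -> str:
    """Assign feedback to the topic whose keyword set it overlaps most."""
    words = set(feedback.lower().split())
    best = max(TOPIC_KEYWORDS, key=lambda t: len(words & TOPIC_KEYWORDS[t]))
    return best if words & TOPIC_KEYWORDS[best] else "other"

complaints = [
    "The hospital portal is slow and the payment page shows an error",
    "Booking a school exam slot was fast and easy",
]
for c in complaints:
    print(classify_topic(c), sentiment(c))
```

Even this crude version shows the shape of the pipeline: free-text complaints go in, and structured (topic, sentiment) pairs come out, ready for dashboards or the predictive analytics the framework envisages.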

Dimitrios Sarantis: Thank you very much, Nevine, for the presentation. You introduced the AI perspective in local government; it is the future of local government, so thank you very much. I don’t know if we have time for one question or not. Okay, we have unfortunately run out of time. So, is there any question? Yes, if you have one question, you can…

Audience: If there is time, I will try to be very brief, because this issue comes up every time we talk about how local government and national matters are connected. One of the main difficulties we encounter when conducting these assessments and working with governments at the local level is how local governments align with what is defined at the national level. We know that most countries, not all of them, are more developed and mature at the national level: they have strategies, roadmaps, action plans, and many platforms, while at the local level things are usually not so developed. So to all of you working at the local level, and all of you with responsibilities at the national level: how are you interacting, discussing, aligning, and reusing things? This connects with the presentation we attended from Saudi Arabia, where, if I understood correctly, there is a strong linkage between the DGA, which acts mainly at the national level, and what happens in Riyadh. Maybe Riyadh’s position in the ranking is also related to that. Based on the experiences we hear from many people responsible for ICT at the local level, this has been one of the biggest difficulties and challenges: how to align things, particularly when it connects with the lack of resources at the local level. So, if we have time, I would like to hear from you whether this alignment exists or not, what is hindering it, and whether it is relevant. This is just one point.

Dimitrios Sarantis: Thank you, Delfina. Yes, if you have any comment on that, on this alignment between the different levels of government in Saudi Arabia, which I think exists.

Abdulaziz Zakri: All right. I believe we have a very strong governance framework and an operational model that support this achievement. We have a steering committee that supports the international indices, a digital transformation committee, and a technical committee, and Riyadh is part of the steering committee for the EGDI, which is led by the Digital Government Authority. This committee brings together more than 25 entities that meet regularly, and we check, evaluate, and monitor the KPIs we have built. We also hold bi-weekly meetings and workshops; speaking generally, including Riyadh, we have held more than 500 workshops. We carry out regular assessments and work closely with Riyadh to make sure they are aligning and complying with the LOSI framework. We also use a whole-of-government approach, with all government entities working together. I would say the framework and the operating model we work with are the main factors that allowed us to achieve this level.

Dimitrios Sarantis: Thank you, Abdulaziz. So the alignment mechanism works properly in Saudi Arabia; this is maybe the secret behind the success. Thank you. So here we have also…

Abdulaziz Zakri: We have very good colleagues at the DGA with whom we work on delivery, and we love the country. They provide all the driving and success factors, and we always ask ourselves why we would not deliver; we feel that support from the DGA. I see my colleagues from the Ministry of Interior here, and we are really working together; I would like to thank him for attending today’s session. We really feel that all government entities are working toward one goal. Thank you.

Dimitrios Sarantis: Thank you very much. Okay, so now, at the end, we have a questionnaire for all participants to fill in, through which we try to identify which services are provided at the local government level in the city where each of you is a resident. Please fill it in, because it gives us good insight into the services provided at the local level in each area of the world. Closing now, very briefly, here are some key takeaways from this workshop. First, alignment across the different levels of government (national, state, and local) and a cooperation mechanism are a must. Second, we identified a gap in applying assessment methods in local government; consequently, governments and decision makers do not have a clear picture of the status of local government development in their countries. Third, speakers identified challenges such as low digital literacy, funding scarcity, and a lack of specialized human resources, which are common findings for local government in countries at lower levels of development. Finally, I think we should also reconsider innovation thinking in local government. We should raise the bar on our expectations, iterate, pilot, and be more forgiving: innovation sometimes means failure, so we have to get comfortable with that. You need to move fast, learn, and forgive. So we need to try harder with innovation in local government. With this, closing the session, I would like to thank my colleagues at UNU-EGOV who critically helped to design and develop this workshop: Professor Delfina Soares, our research coordinator Morten Meyerhoff, Zoran Jordanovski, and of course our co-organisers Ewin Deza, Vincenzo Aquaro, Denis Huzar, and Angelica Zundel. Last but not least, I would like to thank our host country, the Kingdom of Saudi Arabia, and its people for their exceptional hospitality. Your warm welcome, attentiveness, and support were invaluable and greatly appreciated.
Thank you very much, all of you.

Ayman Alarabiat

Speech speed

85 words per minute

Speech length

610 words

Speech time

428 seconds

Jordan faced challenges with awareness, resources, and resistance to change

Explanation

Jordan encountered difficulties in implementing LOSI due to limited awareness among local officials about international evaluations. There were also challenges with limited resources and resistance to digital transformation at the local government level.

Evidence

The speaker mentioned that many municipalities in Jordan have financial problems, with 50-85% of their budget going to salaries. He also noted a lack of awareness about digital transformation among local officials.

Major Discussion Point

Experiences and challenges in implementing LOSI

Agreed with

Yin Huotely

Vannapha Phommathansy

Agreed on

Challenges in implementing e-government services at local level

Gayatri Doctor

Speech speed

108 words per minute

Speech length

1027 words

Speech time

569 seconds

India identified gaps in online services and challenges with distributed portals

Explanation

India’s LOSI implementation revealed gaps in online services across different cities. A major challenge was that services were distributed across various portals at central, state, and district levels, making assessment difficult.

Evidence

The speaker mentioned conducting two pilot studies in India, one assessing 34 cities across states and union territories, and another focusing on 53 urban local bodies in Gujarat.

Major Discussion Point

Experiences and challenges in implementing LOSI

Agreed with

Mehdi Limam

Dimitrios Sarantis

Agreed on

LOSI is a valuable tool for assessing and improving local e-government services

Mehdi Limam

Speech speed

119 words per minute

Speech length

672 words

Speech time

336 seconds

Tunisia leveraged EGDI scores and the UN toolkit to implement LOSI

Explanation

Tunisia used EGDI scores as a foundation for understanding the country’s digital maturity. They also utilized the UN Local Government Toolkit to guide their LOSI implementation process.

Evidence

The speaker outlined a three-step process for LOSI implementation: analyzing EGDI scores, using the UN Local Government Toolkit, and studying the country background.

Major Discussion Point

Experiences and challenges in implementing LOSI

LOSI helps identify weaknesses and benchmark digital maturity

Explanation

LOSI was described as a powerful tool for enhancing local government by identifying specific weaknesses in municipal portals. It also provides clear metrics for benchmarking digital maturity, allowing strategic planning for digital transformation.

Evidence

The speaker mentioned that LOSI evaluation of 24 municipality portals across 24 states in Tunisia revealed that only 9 achieved a middle rank while the rest had low scores.

Major Discussion Point

Benefits and future directions for LOSI

Agreed with

Gayatri Doctor

Dimitrios Sarantis

Agreed on

LOSI is a valuable tool for assessing and improving local e-government services

Expanding LOSI network and refining methodology can address barriers

Explanation

The speaker suggested that expanding the LOSI network and refining the methodology could help address implementation barriers. This includes overcoming challenges in engaging policymakers and ensuring consistent stakeholder engagement.

Major Discussion Point

Benefits and future directions for LOSI

Yin Huotely

Speech speed

112 words per minute

Speech length

1400 words

Speech time

747 seconds

Cambodia is working to improve connectivity and digital literacy

Explanation

Cambodia faces challenges in implementing e-government services due to limited internet access and low digital literacy. The government is taking steps to improve connectivity and digital skills among citizens and government officials.

Evidence

The speaker mentioned building more internet infrastructure and developing a digital media and information literacy framework with UNESCO.

Major Discussion Point

Experiences and challenges in implementing LOSI

Agreed with

Ayman Alarabiat

Vannapha Phommathansy

Agreed on

Challenges in implementing e-government services at local level

Vannapha Phommathansy

Speech speed

159 words per minute

Speech length

583 words

Speech time

219 seconds

Laos is in early stages and faces infrastructure and adoption challenges

Explanation

Laos is in the early stages of digital transformation and faces challenges with limited digital infrastructure and low user adoption. The country is working on improving its e-government services but recognizes the need for significant development.

Evidence

The speaker mentioned that Laos ranked 152 in the UN e-government index and scored low in a country-level digital maturity assessment.

Major Discussion Point

Experiences and challenges in implementing LOSI

Agreed with

Ayman Alarabiat

Yin Huotely

Agreed on

Challenges in implementing e-government services at local level

Abdulaziz Zakri

Speech speed

129 words per minute

Speech length

1239 words

Speech time

574 seconds

Saudi Arabia aligned national and local efforts through governance frameworks

Explanation

Saudi Arabia has implemented strong governance frameworks and operational models to support e-government achievements. This includes various committees and regular meetings to align national and local efforts in digital transformation.

Evidence

The speaker mentioned a steering committee for international indices, a digital transformation committee, and a technical committee. He also noted that over 500 workshops have been conducted.

Major Discussion Point

Strategies for improving local e-government

Manal Al Afad

Speech speed

118 words per minute

Speech length

787 words

Speech time

399 seconds

UAE uses a digital maturity model to assess local governments

Explanation

The UAE has developed a digital maturity model to assess and improve e-government services at both federal and local levels. This model is based on international best practices and includes various pillars for evaluation.

Evidence

The speaker described a digital maturity model with three pillars: leadership and policy, technology accelerators, and organizations and data.

Major Discussion Point

Strategies for improving local e-government

Nevine Makram Labib Eskaros

Speech speed

139 words per minute

Speech length

1125 words

Speech time

484 seconds

Egypt proposes using AI to assess and improve e-government services

Explanation

Egypt is exploring the use of AI technologies to assess and improve e-government services. The proposed framework includes data collection, AI model development, and continuous improvement processes.

Evidence

The speaker outlined a five-stage framework for leveraging AI in e-government assessment, including data collection, preprocessing, AI model development, assessment criteria, and continuous improvement.

Major Discussion Point

Strategies for improving local e-government

Angelica Zundel

Speech speed

146 words per minute

Speech length

584 words

Speech time

239 seconds

UN provides a toolkit to support LOSI implementation

Explanation

The UN has developed a local e-government toolkit to support countries in implementing LOSI. This toolkit provides guidance on LOSI indicators and includes concrete examples for implementation.

Evidence

The speaker demonstrated the toolkit, showing how it is structured around LOSI sub-components and provides explanations, implementation guides, and case studies for each indicator.

Major Discussion Point

Strategies for improving local e-government

Dimitrios Sarantis

Speech speed

120 words per minute

Speech length

2673 words

Speech time

1335 seconds

LOSI can guide countries on what to implement at initial levels

Explanation

LOSI serves as a guide for countries in the early stages of e-government development. It helps identify what should be implemented at the initial levels of local e-government services.

Major Discussion Point

Benefits and future directions for LOSI

Agreed with

Mehdi Limam

Gayatri Doctor

Agreed on

LOSI is a valuable tool for assessing and improving local e-government services

Audience

Speech speed

145 words per minute

Speech length

720 words

Speech time

296 seconds

Aligning national and local e-government efforts is crucial

Explanation

The alignment between national and local e-government efforts is a critical challenge for many countries. This alignment is essential for effective implementation of e-government strategies and services across different levels of government.

Evidence

The audience member pointed out that most countries are more developed at the national level, with strategies and platforms, while local levels often lag behind.

Major Discussion Point

Benefits and future directions for LOSI

Agreements

Agreement Points

LOSI is a valuable tool for assessing and improving local e-government services

Mehdi Limam

Gayatri Doctor

Dimitrios Sarantis

LOSI helps identify weaknesses and benchmark digital maturity

India identified gaps in online services and challenges with distributed portals

LOSI can guide countries on what to implement at initial levels

Multiple speakers agreed that LOSI is an effective tool for evaluating and enhancing local e-government services, helping to identify areas for improvement and providing benchmarks for digital maturity.

Challenges in implementing e-government services at local level

Ayman Alarabiat

Yin Huotely

Vannapha Phommathansy

Jordan faced challenges with awareness, resources, and resistance to change

Cambodia is working to improve connectivity and digital literacy

Laos is in early stages and faces infrastructure and adoption challenges

Several speakers highlighted common challenges in implementing e-government services at the local level, including limited resources, low digital literacy, and infrastructure issues.

Similar Viewpoints

Both speakers emphasized the importance of structured frameworks and models for assessing and aligning e-government efforts at national and local levels.

Abdulaziz Zakri

Manal Al Afad

Saudi Arabia aligned national and local efforts through governance frameworks

UAE uses a digital maturity model to assess local governments

Unexpected Consensus

Need for collaboration and knowledge sharing in LOSI implementation

Mehdi Limam

Gayatri Doctor

Dimitrios Sarantis

Expanding LOSI network and refining methodology can address barriers

India identified gaps in online services and challenges with distributed portals

LOSI can guide countries on what to implement at initial levels

Despite representing countries at different stages of e-government development, these speakers all emphasized the importance of collaboration and knowledge sharing in implementing LOSI, suggesting a shared recognition of its value across diverse contexts.

Overall Assessment

Summary

The main areas of agreement included the value of LOSI as an assessment tool, common challenges in implementing e-government services at the local level, and the importance of structured frameworks for aligning national and local efforts.

Consensus level

There was a moderate level of consensus among speakers, particularly on the challenges faced and the potential benefits of LOSI. This consensus suggests a shared understanding of the importance of local e-government development and the need for standardized assessment tools, which could facilitate more targeted and effective improvements in local e-government services globally.

Differences

Different Viewpoints

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement were related to the specific challenges and approaches in implementing LOSI across different countries, reflecting varying levels of digital maturity and local contexts.

Difference level

The level of disagreement was relatively low, with most speakers focusing on their own country’s experiences rather than directly contradicting each other. This suggests that LOSI implementation is highly context-dependent, and strategies need to be tailored to each country’s specific needs and challenges.

Partial Agreements

Both speakers agreed on the need for improving digital infrastructure and literacy, but their approaches and specific challenges differed based on their countries’ contexts.

Yin Huotely

Vannapha Phommathansy

Cambodia is working to improve connectivity and digital literacy

Laos is in early stages and faces infrastructure and adoption challenges

Takeaways

Key Takeaways

LOSI (Local Online Service Index) is a valuable tool for assessing and improving local e-government services

Many countries face common challenges in implementing LOSI, including limited resources, low digital literacy, and resistance to change

Alignment between national and local e-government efforts is crucial for success

AI and other emerging technologies offer potential for improving e-government assessment and service delivery

Regular assessment and benchmarking helps drive continuous improvement in local e-government

Resolutions and Action Items

Expand the LOSI network to include more partner countries

Refine the LOSI methodology to address challenges identified by implementing countries

Encourage use of the UN Local Government Toolkit to support LOSI implementation

Conduct regular (e.g. biannual) assessments of local e-government in partner countries

Unresolved Issues

How to standardize assessment of services across countries with different government structures and service provision models

How to effectively engage policymakers in using LOSI results

How to address resource constraints, especially in developing countries

How to balance standardized assessment with country-specific needs and contexts

Suggested Compromises

Use centrally-provided platforms to support local governments with limited resources

Allow some flexibility in LOSI indicators to account for country-specific service provision models

Leverage partnerships and knowledge sharing between more and less advanced countries in e-government

Thought Provoking Comments

We have identified this problem clearly. It is one of the basic problems we face in designing the indicators, specifically for the services criterion, because, as our colleague here noted, not all cities in the world provide the same services or are authorised by their governments to provide the same services at the local level.

speaker

Dimitrios Sarantis

reason

This comment acknowledges a fundamental challenge in assessing local e-government services globally, highlighting the complexity of creating standardized metrics across diverse governance structures.

impact

It sparked discussion on how to adapt assessment tools to different local contexts and led to consideration of more flexible approaches like the LOSI Network.

We decided to expand this successful journey to other major cities in the Kingdom of Saudi Arabia, like Mecca, Jeddah, Medina, and Dammam. And this is not only because of the success story of Riyadh; it is also part of the Saudi authorities’ plan for a smart city project to be implemented in 10 cities across the country.

speaker

Abdulaziz Zakri

reason

This comment illustrates how success in one city (Riyadh) can be leveraged to drive digital transformation across an entire country, showcasing a strategic approach to scaling e-government initiatives.

impact

It shifted the discussion towards the importance of national-level planning and coordination in local e-government development, prompting questions about alignment between different levels of government.

I thought that it would be more useful and more beneficial to all of us to see if we can make use of all of these technologies. My proposed framework so as to leverage AI in order to provide better assessment of the government e-services is composed of these five stages…

speaker

Nevine Makram Labib Eskaros

reason

This comment introduced a novel perspective on using AI for assessing e-government services, demonstrating how cutting-edge technology could be applied to improve evaluation methodologies.

impact

It broadened the scope of the discussion to include future technological developments in e-government assessment and raised important considerations about data privacy and ethical AI use in government.

One of the main difficulties that we realize when conducting these assessments and working with government at the local level is how governments at the local level align with what is defined at the national level.

speaker

Audience member (Delfina)

reason

This comment highlighted a critical challenge in e-government implementation – the alignment between national strategies and local execution, which is often overlooked in discussions focused solely on technology.

impact

It prompted a deeper exploration of governance structures and coordination mechanisms between different levels of government, leading to insights about successful models like Saudi Arabia’s approach.

Overall Assessment

These key comments shaped the discussion by highlighting the complexities of assessing and implementing e-government services at the local level. They moved the conversation from a focus on specific tools and metrics to broader considerations of governance structures, national-local alignment, and the potential of emerging technologies like AI. The discussion evolved to emphasize the importance of flexible, context-aware approaches to e-government assessment and implementation, while also exploring how successful models can be scaled and adapted across different settings.

Follow-up Questions

How to address the challenge of different organizational structures and service provision models across cities and countries when applying LOSI?

speaker

Dimitrios Sarantis and Young-Hwan Jin

explanation

This is a recurring challenge in applying LOSI across different contexts, affecting the comparability and applicability of the assessment.

How can countries effectively implement LOSI and leverage its results for improving local e-government?

speaker

Mehdi Limam

explanation

Understanding best practices for LOSI implementation and utilization can help countries maximize the benefits of the assessment.

What are effective strategies for engaging policymakers and ensuring methodology adoption at the local level?

speaker

Mehdi Limam

explanation

Overcoming resistance to change and ensuring buy-in from local governments is crucial for the success of LOSI assessments and improvements.

How can the LOSI methodology be improved to better accommodate the varying levels of digital development across different countries?

speaker

Yin Huotely

explanation

Adapting LOSI to be more relevant for countries at different stages of digital development could increase its usefulness and adoption.

What are the best practices for aligning local e-government initiatives with national digital strategies?

speaker

Delfina Soares

explanation

Understanding how to effectively coordinate between national and local levels of government is crucial for successful e-government implementation.

How can artificial intelligence be effectively and ethically integrated into local e-government assessment and improvement?

speaker

Nevine Makram Labib Eskaros

explanation

Exploring the potential of AI in e-government assessment could lead to more efficient and effective evaluation and improvement processes.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #2 Bridging Gaps: AI & Ethics in Combating NCII Abuse

Session at a Glance

Summary

This panel discussion focused on the ethical use of AI in combating non-consensual intimate image (NCII) abuse. Experts from various organizations, including Meta, Digital Rights Foundation, and SWGFL, explored the potential benefits and risks of using AI in this context.

The panelists emphasized the importance of putting victims and survivors at the center of any technological solutions. They discussed the need for AI systems to be adapted to different cultural and legal contexts, as current models are often trained on Western data. The experts highlighted the potential of AI in detecting and preventing NCII abuse, but also stressed the importance of maintaining human oversight and easy reporting mechanisms for users.

Privacy concerns were a significant topic, with panelists noting the sensitive nature of the data involved and the need for transparency in how AI systems handle this information. The discussion touched on the challenges of balancing the use of AI for protection with respecting user autonomy and privacy.

The panel addressed the evolving nature of online harms, including the rise of deepfakes and synthetic content. They noted that while the images may be fake, the harm to victims is real and can have severe psychological impacts.

Accountability was another key theme, with panelists discussing the need for better collaboration between platforms, law enforcement, and NGOs to hold perpetrators accountable. The experts called for more research, investment in NGOs working in this space, and the development of ethical frameworks and governance structures for AI use in combating NCII abuse.

The discussion concluded with a call for a global effort to develop AI solutions focused on safeguarding users and creating robust guardrails to protect against misuse. The panelists emphasized the need for ongoing dialogue and collaboration among various stakeholders to address this complex issue effectively.

Keypoints

Major discussion points:

– The ethical challenges and potential benefits of using AI to combat non-consensual intimate image (NCII) abuse

– The importance of putting victims/survivors at the center when developing AI tools and policies

– The need for more transparency from tech companies on how they are using AI to address NCII

– The evolving nature of NCII abuse, including the rise of AI-generated deepfakes

– The gaps in legal frameworks and accountability measures for perpetrators of NCII

The overall purpose of the discussion was to explore how AI technology could be responsibly developed and deployed to help combat NCII abuse, while considering the ethical implications and potential risks.

The tone of the discussion was thoughtful and nuanced throughout. Panelists acknowledged both the potential benefits of AI in addressing NCII as well as the ethical concerns and need for caution. There was a sense of urgency about the issue, but also recognition of the complexity involved in developing effective solutions. The tone became slightly more urgent towards the end when discussing the need for better legal frameworks and accountability measures.

Speakers

– David Wright: CEO of UK charity SWGfL and Director UK Safer Internet Centre 

– Nighat Dad: Founder of Digital Rights Foundation, member of Meta Oversight Board, member of UN Secretary General’s AI High Level Advisory Board

– Karuna Nain: Online safety expert, former director of Global Safety at Facebook/Meta

– Sophie Mortimer: Manager of the Revenge Porn Helpline and Report Harmful Content Service at SWGFL

– Boris Radanovic: Head of Engagements and Partnerships at SWGFL

Additional speakers:

– Deepali Liberhan: Global Director of Safety Policy at Meta

– Niels Van Pamel: Policy Advisor, Child Focus Belgium

– Adnan A. Qadir: Senior Legal and Advocacy Advisor, SEED Foundation

Full session report

The panel discussion on the ethical use of artificial intelligence (AI) in combating non-consensual intimate image (NCII) abuse brought together experts from Meta, Digital Rights Foundation, SWGFL, and the Revenge Porn Helpline. The conversation explored the potential benefits and risks of using AI in this context, highlighting the complex challenges faced by stakeholders in addressing this sensitive issue.

Ethical Considerations and AI Implementation

A central theme of the discussion was the need to prioritize victims and survivors when developing technological solutions. Sophie Mortimer of the Revenge Porn Helpline emphasized that victim privacy and consent must be at the forefront when using AI tools. The panel debated the terminology of “victims” versus “survivors,” acknowledging the importance of empowering language while recognizing the ongoing nature of the harm.

Karuna Nain and Deepali Liberhan outlined their company’s approach to AI and safety, highlighting the potential of AI in detecting and preventing NCII abuse. They noted that AI can help with the scale and speed of content moderation, but stressed that human oversight remains essential. Boris Radanovic from SWGFL used an analogy comparing the development of AI to the Wright brothers’ plane, emphasizing the need for continuous improvement and refinement.

Nighat Dad, founder of the Digital Rights Foundation, raised a crucial point about the cultural nuances that AI systems need to account for. She noted that current AI models are often trained on Western data and contexts, potentially limiting their effectiveness in other parts of the world. This observation highlighted the need for more diverse and culturally sensitive AI development to ensure global applicability.

Evolving Nature of Online Harms and Victim Support

The panel addressed the rapidly changing landscape of online harms, including the rise of deepfakes and synthetic content. Nighat Dad pointed out that while the images may be fake, the harm to victims is real and can have severe psychological impacts. The discussion also revealed changing demographics of NCII victims, with an increasing number of cases targeting men and boys.

The crucial role of helplines in providing support and resources was highlighted, with Sophie Mortimer noting that case volumes for helplines are rising exponentially. David Wright mentioned on-device hashing tools like StopNCII.org as a means of empowering victims. Karuna Nain and Sophie Mortimer provided more details about StopNCII.org, explaining how it allows users to create digital fingerprints of their intimate images without uploading the actual content, helping to prevent their distribution on participating platforms.
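The on-device hashing workflow described above can be sketched in a few lines of Python. This is a toy illustration, not the actual StopNCII.org implementation: the function name is invented, and SHA-256 (which only matches byte-identical files) stands in for the perceptual hashing the service is reported to use so that re-encoded copies of the same image still match.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Toy on-device fingerprint of an image.

    Illustrative only: SHA-256 matches exact bytes, whereas a production
    service would use a perceptual hash so resized or re-encoded copies
    of the same image still produce a match.
    """
    return hashlib.sha256(image_bytes).hexdigest()

# The image itself never leaves the device; only this irreversible,
# fixed-length digest is submitted to the service.
photo = b"raw image bytes that stay on the device"
digest = fingerprint(photo)
print(len(digest))  # 64 hex characters, regardless of image size
```

The key design property is visible even in the toy version: the digest is short, fixed-length, and cannot be reversed into the image, so sharing it reveals nothing about the picture itself.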

Legal Challenges and Platform Accountability

The discussion revealed significant gaps in current legal frameworks for addressing NCII abuse. Panelists highlighted the challenges of prosecuting NCII cases and called for better collaboration between platforms and law enforcement agencies. They emphasized the need for more research, investment in NGOs working in this space, and the development of ethical frameworks and governance structures for AI use in combating NCII abuse.

Karuna Nain called for greater transparency from tech companies about how they are using AI to combat NCII. This sentiment was shared by other panelists, who emphasized the need for platforms to improve their reporting mechanisms and cooperation with law enforcement agencies.

Gender Dynamics and Cultural Considerations

Nighat Dad discussed the gender dynamics of NCII, highlighting how societal norms and cultural contexts can exacerbate the impact on victims, particularly women and girls in conservative societies. The panel acknowledged the need for AI systems and support services to be adaptable to different cultural contexts and sensitive to the unique challenges faced by victims from diverse backgrounds.

Conclusion and Future Directions

The discussion concluded with a call for a global effort to develop AI solutions focused on safeguarding users and creating robust guardrails to protect against misuse. Key takeaways included:

1. The potential of AI to help combat NCII when implemented ethically with human oversight

2. The importance of prioritizing victim privacy, consent, and empowerment

3. The need for improved transparency from platforms and better collaboration with law enforcement

4. The crucial role of helplines and victim support services

5. The importance of adapting AI systems and support services to diverse cultural contexts

6. The need for continued research and investment in NGOs working on NCII issues

As the conversation progressed, it became clear that addressing NCII abuse requires a multifaceted approach involving technology, policy, and support services. The panelists’ insights underscored the complexity of the challenge and the need for continued research, adaptation, and collaboration to develop effective strategies in this rapidly evolving digital landscape.

Session Transcript

David Wright: to this particular workshop that we’re having, looking at, or entitled, Bridging the Gaps, AI and Ethics in Combating NCII Abuse. And NCII abuse is around non-consensual intimate image abuse, which is a subject that we’re going to be exploring over the course of this panel. I’m David Wright, I am CEO of a UK charity, SWGFL, and a director of the UK Safer Internet Centre. We will explore, and a couple of my colleagues here will explain, some more aspects of this, some of the things that we do, particularly the Revenge Porn Helpline, and also StopNCII.org. And so we will clearly cover some of those gaps. I’m joined, in terms of this panel conversation, by a number of very esteemed guests and panellists. And I’m going to introduce those to you, just to start with. And so, we’ve got a series of questions that we’ll be asking. And so, each of the panellists, and I’ll just, first of all, introduce Nighat Dad, in the middle. So, Nighat is a founder of the Digital Rights Foundation, and also a member of the Meta Oversight Board, as well as part of the UN Secretary-General’s High-Level Advisory Body on AI. If I next turn to Karuna, who’s joining us online. So, Karuna is an online safety expert with two decades of experience at the intersection of online safety, policy, government affairs, and communications. She consults with tech companies and non-profits on their strategy, policies, and technology to make the internet safer. Karuna previously served as a Director of Global Safety at Facebook/Meta, where she spent nearly a decade working on issues of child online safety and well-being, women’s safety, and suicide prevention. At Meta, she partnered with SWGFL to launch StopNCII.org to help victims of non-consensual intimate image abuse. Prior to Facebook, Karuna worked at the U.S. Embassy in India, Ernst & Young, India’s first 24×7 news channel, New Delhi Television, and a German broadcaster. Karuna is a graduate of St. 
Stephen’s College, University of Delhi, and has completed her post-graduate studies at Albert Ludwigs University. So, welcome to Karuna. Also joined by Deepali, sitting next to me. So, Deepali Liberhan is Global Director of Safety Policy at Meta, and has been with Meta for over a decade. She leads a team of regional safety policy experts and works on policies, tools, partnerships, and regulation across core safety issues. Also joined by one of my colleagues, Sophie Mortimer, online from the UK, where it’s rather early. Thank you, Sophie. Sophie is Manager of the Revenge Porn Helpline and also the Report Harmful Content Service at SWGFL. She coordinates a team of practitioners to support adults in the UK who have been affected by the sharing of intimate images without consent and other forms of online abuse and harms. As part of the StopNCII.org team, she works with NGOs around the world to support their understanding of StopNCII and the help it can give victims and survivors in their communities. The NGO network shares learning and best practice to ensure that StopNCII evolves as a proactive tool that works for everyone, wherever they are. And finally, if I turn to my far right, is my colleague, Boris. Boris Radanovic is an expert in the field of online safety and currently serves as the Head of Engagements and Partnerships at SWGFL, the UK-based charity, which we’ve already talked about. He works with the UK Safer Internet Centre, which is part of the European Insafe network, in educating and raising awareness about online safety for children, parents, teachers and other stakeholders across the world. Boris has worked extensively with various European countries, including Croatia, where he worked at the Safer Internet Centre there, and has been involved in numerous missions to countries like Belarus, Serbia, Montenegro, and North Macedonia, presenting online safety strategies to government officials and NGOs. 
His focus is on protecting children from online threats, such as cyberbullying, child sexual exploitation and scams, as well as empowering professionals through workshops and keynote speeches. One of his key contributions includes leading online safety education efforts, where he emphasises the evolving risks in the digital world, such as grooming and intimate image abuse. His involvement with initiatives like StopNCII.org reflects his commitment to helping prevent non-consensual sharing of intimate images. Introductions complete. So what we’ve got: I’m just going to invite all the panellists to give us a couple of minutes of introduction, and then we’ve got a series of structured questions that we’ll open to each of the panellists, and then to everyone in the room here, and also to those of you online as well. So we will be having a really in-depth conversation about this, and, as you can now understand, it is a very esteemed panel in this particular subject. So if I can just, Nighat, if I can throw it over to you, just a two-minute introduction into this. Thank you.

Nighat Dad: Can you hear me? Okay. Yeah, no, thank you so much, David, for organising this panel. It’s a pity that we are doing this on the last day. It should have been on the first day, because many of us have been working on the issue of non-consensual intimate imagery and videos for the last several years, and not only working on the issue and addressing it, but also looking into solutions. Of course, the Helpline, Revenge Porn Helpline in the UK, and at Digital Rights Foundation in Pakistan, we also started a Helpline, the Cyber Harassment Helpline, and we collaborate together on this as well. In 2016, we started this, and the main idea was basically to address online harms that young women and girls face in a country like Pakistan. And there are so many cultural, contextual nuances that many times platforms are unable to capture, and that was the main reason why we started the helpline, not only to address these complaints by young women and girls in the country, but also to give a clear picture to the platforms of how they can actually look into their products or mechanisms, reporting mechanisms, or remedies that they are providing to different users around the world. I think I’ll just say one thing and stop there: over the years we have seen that online harms, or violence against women, or tech-facilitated gender-based violence, now we have so many names for this, but non-consensual intimate imagery around the world has very different consequences in different jurisdictions. In many parts of the world, it kind of limits itself to the online spaces, but in some jurisdictions it turns into offline harm against especially marginalized groups like young women and girls. 
And in the last couple of years, I think the very concerning thing is how AI tools are easily accessible to bad actors, where they are making deepfakes and synthetic imagery of women, not only normal users, but also women in public spaces. And verifying those deepfakes I think is a challenge, not only for people who have been working on this issue, but for law enforcement, and then you just look at the larger public, who absolutely have no idea how to verify this, and they just believe what they see online. And I think this is the challenge that we all are facing at the moment. I’ll stop here.

David Wright: Nighat, thank you very much. Yes, a subject we will get into without any doubt. I’m next going to throw it to Karuna, who’s joining us online. Karuna, over to you for a couple of minutes of introduction. Thank you.

Karuna Nain: Thank you so much, David. I do want to give a shout out to you for organizing the discussion on this topic, because I don’t think we’ve done enough work or had enough dialogue as to how the power of artificial intelligence can be used to actually prevent some of this distribution of intimate imagery, or to deter perpetrators online. And lastly, also to support victims: we’ve heard time and time again how absolutely debilitating it can be to be in that moment where you are worried that your intimate images are going to be shared online, or they have actually been shared online and you’ve just come to know, and there’s so much that we can do with artificial intelligence to support people in that moment, to give them the opportunity to actually protect themselves online. So I just want to give a shout out to you for organizing this very, very important discussion, and I’m looking forward to hearing what comes out of this workshop and the kind of ideas that are generated as to how not just tech platforms but nonprofits, such as South West Grid for Learning and, you know, Nighat’s Digital Rights Foundation, can actually leverage this to be able to support people online.

David Wright: Thank you very much. Very kind. But yeah, as you say, let’s try to harness some of the power of this rather than necessarily some of the challenges that we always see as well. So thank you very much. Next we’ll turn to Deepali.

Deepali Liberhan: Thanks, David, and thank you, Karuna. I think that was really very informative, and I think it was very clear that we have to be very, very careful when we think about safety, and our approach is multi-pronged. So we think about a couple of things when we’re thinking about safety: we think about whether we have the right policies in place on what is okay and not okay to share on the platform; we have our tools and features to give users choice and control over what they’re seeing; and ways for users to be able to address some of the harm that we’ve been seeing. I just want to step back a little bit and talk about how StopNCII.org came into being, when Meta heard loud and clear from a lot of our experts, a lot of our users, that NCII is a huge issue. And Karuna was actually one of the people who was working on this. And we were able to actually move beyond just being able to address this issue at a company level on our platforms and address it at a cross-industry level. So I think there is really a genuine place for industry and civil society to come together to address some of these harms in a very scalable way, something as important as non-consensual intimate imagery. And we’ve also come together to try and understand, as Karuna put it, what are the ways that we can use this technology to actually help victims or provide education or provide resources. So we do that currently on our platforms. So for example, if you look at something like NCII, or let me give you an example of suicide and self-injury: we’re able to use proactive technology to identify people who have posted content which can contain suicidal content or content referring to eating disorders. And we’re able to catch that content and send them resources, as well as connect them with local helplines. That is such an important way that we can use technology to make sure that people who need the help are able to get it. 
And sometimes there are not quick solutions, but it takes time to have discussions and work together. And it’s a combination of technology and the advice of experts who are actually working on this issue to come up with solutions both to prevent that harm, to address that harm, and to provide resources and support to victims. Sorry, David, I know I took the long way to this, but I just wanted to provide some context.
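The proactive-detection flow described here (score content before anyone reports it, route high-risk items to human review, and offer the poster support resources) can be sketched roughly as follows. Everything in this snippet is hypothetical: the keyword check is a stand-in for a trained classifier, and the action names are invented for illustration.

```python
def score_content(text: str) -> float:
    # Stand-in for a trained classifier; production systems use ML models
    # that score the likelihood content violates a policy.
    risk_terms = ("hurt myself", "end it all")
    return 1.0 if any(t in text.lower() for t in risk_terms) else 0.0

def route(text: str) -> list:
    """Route content at post time, before any user report arrives."""
    actions = []
    if score_content(text) >= 0.9:
        actions.append("queue_for_human_review")  # humans stay in the loop
        actions.append("send_support_resources")  # e.g. local helpline info
    return actions
```

The point of the shape, even in a toy version, is that proactive detection does not replace human reviewers; it decides which content reaches them sooner, and lets the platform offer resources at the moment of posting.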

David Wright: Deepali, thank you very much. Next, I’m going to throw it to you, Sophie, for a two-minute introduction. Thank you.

Sophie Mortimer: Thank you, David, and good morning, everyone. Having worked supporting survivors of intimate image abuse for over eight years, I do think that we need to approach the use of AI in providing support with caution. We know that there are advantages to be gained by the use of AI technologies in reporting harmful content at scale and with speed. However, it’s also important to remember that victims and survivors can be abused with these tools and may not want to engage with them while seeking support, because trust is understandably degraded. In fact, we have previously worked at South West Grid for Learning on developing an AI support tool, and ultimately we decided that the risks were not outweighed by the benefits, certainly not at this time. We simply couldn’t be sure that the technology could safeguard people in their time of need adequately enough. I really hope this will change, because I think there is huge potential here and that we can revisit these concepts, but it’s just really imperative that we have trust in the security of such a tool and that it prioritises the safety and wellbeing of users.

David Wright: Thank you, Sophie. And finally, Boris.

Boris Radanovic: Thank you, David. And thank you very much for organising this, and good morning, everybody. I think, on a good personal note, it is at least morning. And if I’m going to call for anything in my introduction, it is that we all, especially the policy and governance sector, need to wake up to the benefits and potential threats of AI. If we have learned anything in the last couple of decades of online safety and protection of children and adults, it is that modalities of harm are changing rather rapidly. And speaking about the application of AI or the benefits of AI, we are missing something. And I’m really, really glad that this is on the last day of IGF, so I hope this conversation will continue. But we are missing governance and structure and frameworks coming from and being supported by, yes, the industry, yes, the NGOs, yes, the researchers, but as well, nation states across the world. And if I can jump off a point from Karuna: absolutely, we need a broader conversation on this, of understanding, yes, the potential threats of it, but as well emphasizing the benefits and how it can be utilized to better protect and better align with some of our policies. And I would agree with my dear colleague Sophie from the Revenge Porn Helpline that currently the threats do outweigh the benefits, and we need to make sure that advocating for the proper use of tools such as StopNCII.org and other inventive ways of solving already known problems by AI, with AI, or at least with the support of AI, is going to be imperative going forward. And the only thing that I can say is that the potential support coming out of the technological capabilities of AI is tremendous, and we need to rein that in and understand it much, much better than we do now.

David Wright: Okay, Boris and everyone, thank you ever so much. We’re also joined, from a moderation perspective, by our colleague Niels, who is managing the online aspects of this. So those of you joining us online, if you’re asking any questions… Excellent, we’ve got one. Okay, so by way of diving into this particular issue, of which you’ve heard some brief introduction, in terms of specific questions as we get down into the aspects of AI, particularly in the context of non-consensual intimate image abuse, I’m first going to turn to Nighat. So Nighat, the question to pose to you: your advocacy for digital rights, particularly in regions with differing privacy laws, places you at the forefront of this debate. How should AI systems for NCII detection be adapted ethically to fit varying cultural and legal contexts?

Nighat Dad: Yeah, I think Sophie and Boris touched a little bit on that. Yes, we can use AI systems to our benefit as well and harness them in terms of giving speedy remedies to the victims and survivors of tech-facilitated gender-based violence. But at the same time, I think in our context, we have to be extra careful and cautious. AI systems need to solve for cultural nuance, and we know that current models are trained on English and other Western contexts and languages. But I’m also hopeful and optimistic that while we are having these conversations, these conversations will lead to a new generation of AI that will better understand cultural and linguistic nuance. And I understand that, sitting on the UN Secretary-General’s High-Level Advisory Body on AI, we have had those conversations over the last year, where we brought the global majority perspective from different angles, so that the conversations around AI are not only happening in the Global North, by some Global North countries, with global majority countries not really part of those conversations. And unless you are part of the conversation, you actually don’t know how to address different issues while using AI technologies, or how to be aware of the threats and risks of these technologies. So I think these conversations are happening in different spaces. I’m glad that we are also talking about this as different helplines and those who are addressing NCII. But I think it’s also important that we understand we can’t solely rely on AI to combat NCII. Social media platforms still need to commit to human moderators and human review, and they need to create easy pathways for users to escalate this content when automation misses it. So those are three things that come to my mind: broader training for AI, continued human oversight, and user-friendly reporting mechanisms. 
I’d also like to see transparency and constant auditing of AI, so we can see how well these automated content moderation systems are performing, and transparency should be granted to civil society so that there are opportunities for third-party reviews of how these models perform. And I’d just like to plug our white paper that we released from the Oversight Board, which is around content moderation in the era of AI. It draws on our own experiences over the last four years, delving into cases that we have decided and looking into so many cases related to gender and tech-facilitated gender-based violence that users have faced on Meta platforms. And we looked into the tools, we looked into the community guidelines and policies of Meta, and gave them really good recommendations. But also, this white paper is not just for Meta platforms; this is for all platforms who are actually using AI to combat harassment on their platforms. And there are so many recommendations that we have given, and one of them is basically constant auditing of the AI tools that they are using on their platforms, but also giving access to third parties like researchers in terms of what kind of feedback they can give to Meta. And I think Meta has leverage here, because they have a very good initiative, the Trusted Partners initiative, and they can leverage that sort of ecosystem in terms of getting feedback, and also providing support to those who are already addressing tech-facilitated gender-based violence.

David Wright: Some great, really great points there, and I’m really struck too by the point about the Westernized data in extensive training models, and I also want to recognize the global leadership that you provide in this space and have done for so many years. So it’s great to have you here, and what an opportunity for everybody to ask questions too. So Nighat, thank you very much. Okay, just as I sort myself out, next I’m going to turn to you, Karuna, with a question. And so now, as both a trustee of ours, a very important trustee of ours, thank you very much, and obviously a key advocate as well for StopNCII, having been the one with the original idea, we certainly feel a heavy responsibility for StopNCII, you having largely created it. So your question: as a driving force behind StopNCII.org, what role do you see AI playing in scaling global NCII protection efforts? What ethical principles are essential to ensuring AI tools support victims without compromising user autonomy? Karuna?

Karuna Nain: Thank you, David. And, you know, I think there are two questions. Yours is a two-part question, and both really, really important questions. The one thing, just following up from what Nighat was saying: I think there’s not been enough transparency from the tech industry, unfortunately, as to how they’re currently leveraging the power of AI in this space. We’ve heard a lot of how they’re using AI to get ahead of, for example, child sexual abuse materials or anything related to child abuse on their platforms. But they’re not sharing enough of how they are using AI to get ahead of some of the harmful non-consensual sharing of intimate images on their platform. And credit to Meta, Deepali. Meta has been one of the few companies that’s really talked about how they were able to leverage the power of AI in one way. And I’m not sure, Deepali, if you’re going to touch on this later, and forgive me if I’m stealing your thunder here. But there’s the use of AI, especially in closed secret groups where victims may not be aware that their intimate images are being shared. So using AI in those spaces to be able to proactively identify if an image or a video is potentially non-consensually shared, and to bump it up to reviewers for reviewing the content and taking it down if it is NCII. I think that’s a really great example of how this technology can be used to get ahead of the harm. Because many times we’ve heard from victims that the onus and the burden on them for reporting, for trying to check if this content has been shared online, is excruciatingly painful. So I think I talked about this in my earlier opening statement as well. There are three ways in particular, I think, that companies could be leveraging the power of AI to get ahead of this harm. The first is prevention. 
So if there are signals which they have on their platforms, if someone has, for example, updated their relationship status to say that they’ve recently been through a breakup, or expressed any kind of trauma or hurt, which could potentially mean that they have intimate images which they might want to send through stopncii.org, for example, to nip the harm in the bud. Second, deterrence: if someone is trying to upload NCII, if the signals are all there, could the platform then bump up an education card to tell them that this is actually harmful, it’s illegal in many countries, or again, you know, to really stop that abuse in its tracks and not allow that content to be shared in the first place. And third is, of course, you know, supporting victims. Again, you know, things like if someone is searching for NCII-related resources on a search engine or on a platform, then could you bump up something like StopNCII.org to them at that point, to tell them these kinds of services or support options exist, helplines exist around the world. Many victims don’t know, and this is the first time that they’re ever hearing of this abuse when they’re experiencing it. But all three, actually, Sophie, Nighat, and Boris, really raised very important points about thinking through some of the risks and some of the loopholes with deploying AI without being very thoughtful about it. So a few things that I’d love to list down, just, you know, things that we learned when we were building StopNCII.org or working with Sophie and other helplines around the world, on what it is that organizations really need to keep in mind when they’re building out these technologies to support victims. One, keeping victims at the center of the design: making sure that you’re not speaking on behalf of them, you’re giving them agency, you’re empowering them, but not taking any decisions on behalf of them. Two, no shaming or victim blaming. 
They’re under enough pressure, enough stress. This is not their mistake that, you know, intimate images are being shared. This is on the perpetrator. And trust, trust is not a bad thing over here. It’s the perpetrator who’s broken the trust, and they need to feel ashamed, not the person who’s in those intimate images. You know, Nighat talked about bias, and just making sure that any technology that is developed is taking into account other instances where, you know, this content may not be NCII. I’m not sure that AI, you know, is at that stage right now; it needs more training, it needs more support, to be able to make sure that it’s 100% accurately identifying content as NCII. And recognizing those biases is a really important part of it. Also, accountability and transparency. If tech companies are using these technologies, and I’m hoping that they are, or if organizations, nonprofits, are thinking about how they can use AI in this space: being transparent, being accountable, ways for people to report. Nighat talked about how important reporting still is even in these scenarios; giving people the ability to reach out to the service or the platform is really important. And of course, I will always keep harping on prevention: if there are ways that this technology can be used to prevent the harm in the first place, to deter the harm, I think that a lot more work should be done over there, because once the harm has happened, it’s already quite late. So the more work that can be done in that space would be really great. I’ll stop there, a lot of things that I’ve thrown out.

David Wright: Karuna, thank you very much. Can I also ask, perhaps we’ve made an assumption and not really introduced stopncii.org. Karuna, can I ask you to do that? Just to explain briefly to everybody what stopncii.org is and how it works.

Karuna Nain: Absolutely, and Sophie, please jump in if I’m missing anything. I know it’s your baby and I’m just talking about it. But stopncii.org, the whole goal behind stopncii.org, is to support people to really stop the abuse in its tracks. The way stopncii.org works is that if you have intimate images which you are worried will be used without your consent on any one of the participating platforms, you can use this platform to create hashes, or digital fingerprints, of those photos and videos and share those hashes with the participating platforms, so that if anyone tries to upload that photo or that video on those participating platforms, they can get an early signal that this content may violate their policies. They can send it to their reviewers or use their technology to determine whether this violates their policies or not, and stop that content from being shared on their services. So it’s really, you know, very much a prevention tool. If content is being shared on platforms already, we encourage people to actually report on that platform to get the fastest action. But if you’re worried that it’s going to be shared on any one of the participating platforms, in addition to that, you can use stopncii.org to stop that abuse in its tracks. Sophie, I don’t know if I missed anything and if you wanna add anything onto that.

Sophie Mortimer: Beautifully done. I would just highlight the fact that these digital hashes are created on somebody’s own device. They don’t have to send that image to anyone. And I think that’s enormously empowering and a huge step forward in the use of technology that puts the victims and survivors right at the heart of these evolutions.

Nighat Dad: Absolutely. One more thing, if I can just add, sorry, David, Sophie, is about the privacy-preserving way in which stopncii.org has been built. In addition to not taking those photos and videos, just taking the hashes from the victims, very minimal data is asked of the victims, because we know that this is such a harrowing experience. We don’t want anything to stop them from using the service. And I think that’s also very important as we’re talking about ethics around the building of any of this AI technology: making sure whatever data is collected is minimal, is proportionate to what is needed to run these services, and not using the data for anything other than what you’re collecting it for and what you’re telling people you’re collecting it for, and also not using the data without their consent for anything. I think that’s really, really important. Privacy and data should be at the center of the design of any AI technology that’s built in this space.
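On the platform side, the matching step the panellists describe (hashes shared with participating platforms, and incoming uploads screened against that list) reduces to a set-membership check. Below is a minimal sketch under stated assumptions: the names are hypothetical, and exact-match SHA-256 hashing stands in for the perceptual matching a real service would need to catch re-encoded copies.

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical hash list a service shares with participating platforms;
# the platforms receive only these digests, never the images themselves.
shared_hashes = {digest(b"victim-submitted image bytes")}

def screen_upload(upload: bytes) -> str:
    """A hash match is a signal for human review, not an automatic verdict."""
    if digest(upload) in shared_hashes:
        return "queue_for_review"
    return "allow"
```

Note the asymmetry that makes the scheme privacy-preserving: the platform can test whether an upload matches a submitted fingerprint, but cannot reconstruct the original image from the fingerprint alone.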

David Wright: Thank you both. Yeah, an amazing explanation from the two people leading this. Thank you. Next, we’re gonna come to Deepali, who we’ve already heard from. She’s Director of Global Safety and Policy at Meta. So Deepali, with your expertise on safety, can you talk about how Meta is thinking about responsible development of AI? Can you give some examples of how Meta is thinking about safety and AI and the challenges ahead, clearly in the context of NCII?

Deepali Liberhan: Thanks, David. I think Karuna has done such a good job of it, but I’m gonna try and add some additional context. The first thing that I wanna say is just to step back and talk a little bit about how Meta is currently using AI. So I’ve been with Meta, like I said, for about a decade. When I joined, and in fact when Karuna joined as well, when we talked about safety and when we talked about our community standards, and community standards are essentially rules in which we say clearly what is okay and not okay to post on our platforms, including NCII, including hate speech, including CSAM, we used to really encourage user reporting, because we didn’t really have proactive technology built out at that time. So we were dependent on the signal that we were getting from users to be able to understand why content was violating, and relied more on human review, having that content reviewed by human reviewers to be able to take the appropriate action. Over the years, we’ve invested in really developing proactive technology to be able to catch a majority of this content even before it’s reported to us. We know that a lot of people will see content and just not report it, or they may feel like their peers will judge them for reporting it. So proactive technology really helps to identify that content and remove it. It doesn’t remove the need for human reviewers, but it makes their job easier, and we’re able to do this better at scale. So today, for example, as we publish in our community standards enforcement reports, we’re able to remove a majority of the content that violates our community standards before it is even reported to us. And we’re also trying to work on understanding how our large language models can essentially help us do this better. Two ways where we think there’s going to be an impact are speed and accuracy. Will it be able to help us identify this content even faster?
And if we’re able to identify this content faster, what is the accuracy with which we can take action in an automated way? That also lessens the time that human reviewers need to spend, focusing them on the really important cases versus the cases where there’s clearly very high confidence that the content is violating and it can therefore be taken down quickly. So, to answer the issue in a shorter version, I think there is a lot more scope for this technology, but it remains important to use a combination of automated technology and human review, so we are taking the right actions and we’re taking the appropriate actions. Moving to responsible AI, David, Meta has an open-source approach to our large language models. As you know, we’ve open-sourced our large language models, and therefore it’s been really important that before we do this, we have a very thoughtful and responsible way of thinking about how we develop AI within this company. And I’m gonna talk about a couple of the pillars that we consider when we’re talking about Gen AI. Actually, I’m not gonna go through all the pillars, because that’s gonna take a lot more time; I’ll go through a couple of them. The first is obviously robustness and safety. It’s really important that we do two things before releasing our large language models. The first is stress testing the models. We have teams internally and externally who stress test those models, and stress testing, or what we call red teaming, is essentially making sure that experts are probing the models to find vulnerabilities, so that we are able to identify and address those vulnerabilities. To give you an example, we have specialist teams at Meta who are stress testing or red teaming the large language models. We also open it up to the larger public to stress test. For example, in Las Vegas there’s a conference called DEF CON, where our large language models were tested.
Over 2,500 hackers actually stress-tested those models and identified vulnerabilities, which we then used to inform the development of our models. The second thing is fine-tuning the models. Fine-tuning essentially means adjusting the models so that they’re able to give more specific responses, and especially so that, in some cases, they deliver expert-backed resources. To give an example of this, currently what happens on Facebook and Instagram is that if somebody posts content where they are feeling suicidal, or for example there are mental health issues, and either somebody reports it or we’re able to proactively find that content, we are able to send expert-backed resources to the person, which essentially means connecting them to helplines. So if you’re sitting in the UK, you will get connected to a UK helpline. If you’re sitting in India, you’ll get connected to an India helpline. This is because I think fundamentally we believe that we are not the experts on safety in terms of providing this kind of informed support. And to the point that Sophie made, we don’t want our technologies to actually be providing that support themselves. What we want to do is make sure that we are making available the right tools by which young people, or vulnerable people, or targeted people can use the right resources. So coming back to fine-tuning our AI models: the AI models will be fine-tuned using expert resources, so if somebody talks about suicide or self-injury, the response should not be that the model provides you guidance; the response that it will throw up is a list of expert organizations that you can contact in your particular location. And I know I’m repeating myself, but this is a really important way in which we can use these technologies to provide the level of support that we have been able to provide on platforms like Facebook and Instagram.
The third thing that I want to talk about when we’re talking about safety and robustness is that a lot of people don’t really understand AI or AI tools and features. So we’re also working with experts generally to try and ensure that people understand what Gen AI is. For example, we work with experts on resources where parents get tips on how to talk to young people about Gen AI, et cetera. So these are just a couple of things, at a high-level overview, that we think about when we’re thinking about building AI responsibly. I want to quickly cover the other pillars; I won’t talk too much about them. Beyond safety and robustness, it’s also about making sure that there’s privacy, so we have a robust privacy review, and that there is transparency and control. As everybody on the panel said, it’s really important to be transparent about what you’re doing with your Gen AI tools and products. And we are also working cross-industry to develop standards to identify provenance, to make sure that users understand whether content is generated by AI or not. And the other pillar really is good governance, as we talked about, transparency, good governance, as well as, and I don’t know if Nighat mentioned this, fairness. Fairness is really important, to ensure that there is diversity and that these technologies are inclusive, because we all know that access to these technologies is still an issue. So that is overall our approach to responsible AI. Let me give you one example. I know I’ve talked about robustness and safety, but in terms of fairness and inclusivity, we actually have a large language model that is able to translate English into over 200 languages, including some of the lesser-known languages.
And I say this because in the trust and safety space, a lot of the material that we develop and a lot of the expertise that exists is in the English language. And this is another example, not particular to NCII but to the trust and safety space overall, of how we can use a lot of these products and tools that have been developed to further enhance safety and make sure that this messaging is available in the languages that people really understand, and not just English or Western languages. Wikipedia, for example, is using this to translate a lot of their content into these languages. So I think there are two things: there is a lot more work to be done, but there is great room for collaboration, both in how we prevent this and how we address it, and in how we collaborate better to support some of the people who are dealing with these issues in a better way than we’ve currently been able to. The last thing that I would say, because I know we get asked this a lot: we have community standards which make it very clear what kind of content is not allowed on the platform, irrespective of whether it’s organic content or has been generated by Gen AI. If it violates our policies, we will remove that content. And we’ve updated our community standards to make that very clear as well.

David Wright: Deepali, thank you very much. It’s great as well to hear that, and also about the use and creation of some of the tools. Particularly, I’m interested in the translation into different languages, which we probably all know is a real challenge. And I know from a StopNCII perspective, we do struggle with that, trying to make it and the support around it as accessible as we possibly can. So thank you, Deepali, for that, and also for Meta’s help and support with StopNCII, too. Next, Sophie, I’m going to come to you. As we’ve already heard, you work with the Revenge Porn Helpline, which I know particularly well. The question that I want to pose is about what ethical dilemmas you have observed with technology to address NCII abuse, particularly regarding privacy and consent. How do you think AI systems should be designed to respect these sensitive boundaries? Sophie?

Sophie Mortimer: Thank you, David. I think it’s a crucial question, because there’s no doubt that the development of this technology is moving at pace, and I think we could all get quite carried away with what we can achieve with these technologies, but it’s so important that we put the victim and survivor experience at the centre of them. I could probably talk for quite a while on this, but I’ll try and keep it a bit tighter. Crucially, supporting the privacy of victims in a moment of absolute crisis is really, really key. We can use AI tools to help identify and remove non-consensual content, but that requires access to people’s very sensitive images and data. And that can be a huge concern to individuals who might fear the access of technology, because it’s technology that has participated in their abuse. They can fear data breaches or a lack of transparency in how their information is being stored and processed. So there’s a real dilemma there in balancing that need for intervention with the protection of victims, the preservation of their privacy, and stopping future harm. We can use AI technologies to track the use of someone’s images, and this could be enormous; I think Deepali referenced this in terms of the use of technology to handle the scale and the speed at which this content can move across platforms. But that just brings more complexity. The methods for tracking content can concern victims around surveillance, or there’s a risk of creating systems that monitor individuals more broadly than intended. How will those images and that data be used in a way that won’t impact on people’s privacy and autonomy? Then the use of people’s data is always very, very concerning: it’s very sensitive personal information used to address this harm, and there can be a lack of transparency from many platforms about how these systems are used. How are the models trained?
Again, with the large language models that are used, and this has been referenced already, I think, by Nighat earlier, we know that they don’t always respond as well, perhaps, to people of different cultural, religious, or ethnic backgrounds. That’s really, really challenging, for the risk of presenting false positives and false negatives. Also, one area that’s often referenced is synthetic sexual content, often referred to as deepfakes. I think there’s a tendency to say, well, we can identify that as fake, so the harm is less. The evidence of some victim and survivor voices is that that just isn’t the case. Just labelling something as fake can undermine the experience of individuals, because there is a real loss of bodily autonomy and self-worth, and it can cause really significant emotional distress. If we only focus on the falseness of an image, an AI system might overlook the broader psychological and social impacts on individuals. Certainly, AI can help with evidence collection and privacy; there’s a real role there in terms of watermarking or embedding metadata that helps track the origin. But then there are more ethical questions around consent, the privacy of people’s data, and whether people understand. I think, again, it’s already been referenced that people’s access to technology and understanding of how these technologies work varies around the world, and we can’t assume consent; it’s just really important that consent that is given is really informed, and I think we’ve got a lot of work to do there to ensure that we have that, but it is absolutely crucial. I also think sometimes that the technology moves fast, but perpetrators of abuse move fast as well, and for all the safeguards we put in, we have to be aware that perpetrators are working hard to circumvent them. So we need to be really flexible in our thinking, and I think the priority for me is keeping that human element.
Humans understand humans and can hopefully foresee some of these issues and ways to combat them, but also put humanity at the heart of our response to individuals, who are humans themselves, to state the obvious, and who don’t want to be supported entirely by technology; they want access to humans and that human understanding.

David Wright: Sophie thank you very much. Perhaps it is a point as well here just to talk that the term victims being used and I know we’ve had this conversation because I think there’s been, there’s often has been criticism that we shouldn’t be using this in terms of terminology, we shouldn’t be using the word victim and that shouldn’t be the case. That should be really survivor and I think I’ll perhaps if I may put words in your mouth given the conversations that we’ve had is that no, particularly from a revenge porn helpline perspective, no we very much do support victims. Our job is to make them a survivor. Now clearly anybody’s entitled to have a reference however they see fit but certainly I think we’re making the point here is that whilst our job is to make victims into survivors of particular tragic circumstances we’re not always successful which just goes to highlight the, in large, in many cases the catastrophic impact that this has, that this abuse has on individuals lives. I don’t know Sophie is there anything you want to add there? No I think you’re right David, we

Sophie Mortimer: We tend to take quite a neutral position when speaking to people, because it’s not our place as a helpline to identify somebody as victim, survivor or anything else. So in practice, we reflect back to people what they say, but I completely agree that the majority of people coming to us would very much identify themselves as victims, because we are usually there for them quite early in that journey. And it is absolutely our aim to make them a survivor, in the hope that they can leave all of those labels behind and put this totally behind them.

David Wright: Thank you, Sophie. Nighat?

Nighat Dad: No, I think this is very interesting, because at our helpline, when we address folks who reach out to us, we are also very careful about what we call them, and we sort of leave it to them what they want to call themselves. Many times when we call them survivors, and this is like our priority, we call them survivors and not victims because they are reaching out and they’re fighting the system, they’re like, but I haven’t received any remedy, so I’m still a victim. So this is a very interesting conversation, and I think it should be entirely up to the person who is facing all of this to call themselves whatever they want, either victims or survivors. Our priority is to call them survivors, but many times they were like, I’m not that resilient, don’t call me survivor, I don’t have that much energy left to fight back against the platforms or the legal system that they are dealing with.

David Wright: Yeah, Sophie, I don’t know if you want to react to that. And here I’m thinking, too, that often when we’re approached by the media, they want to speak to somebody who we’ve supported, and we are careful not to put forward a name; we have a policy of specifically not doing so because of the acute vulnerabilities that the individuals have. Sophie?

Sophie Mortimer: I completely agree with Nighat. It’s not our place to apply that label, and certainly the majority of people who come to us would describe themselves as a victim. In fact, I’m not sure I can recall anyone who self-identified without any prompting as a survivor, because that is not how people are feeling in that moment and in that space. The harm feels so out of people’s control, because what has happened is on platforms. We all know that images now can move quite fast, and the fear, that loss of control, is the overwhelming feeling that people have when they come to services like ours. That doesn’t make anyone feel like a survivor, unfortunately, at that time. That’s why we use neutral language in that first instance and reflect back what somebody says to us, because that’s how they’re feeling. I hope that we provide that reassurance, so that when they hang up the phone, they will be feeling better than they did when they picked it up.

David Wright: Thank you, Sophie. Also, for all that hard work that goes on in the background as well, the extent of which I know acutely. Okay, finally, I’m going to turn to Boris. I say finally, too, because after Boris has given us his contribution, we open the floor for your questions, either in reaction to anything that you’ve particularly heard, or indeed any other aspects that we perhaps haven’t covered, both within the room and also online as well. Boris, if I just turn to you. As Boris has said, he’s Head of Partnerships and Engagement at SWGfL. Given your extensive work in online safety, particularly at SWGfL, how do you see AI evolving as a tool to support NCII detection and intervention? What ethical frameworks do you believe are necessary to avoid potential harm to users while ensuring victims, or survivors, or whichever terminology we deem fit, are supported? Boris.

Boris Radanovic: Thank you for that very much. I just want to say I’m really honored and proud to sit amongst heroes in this space, and thank you so much for the invitation. Thinking about AI, a quote came to my mind, and please agree or disagree with me: specifically talking about AI, I think we know little about everything and a lot about nothing. While we fully understand the complexity of the space, whether that’s the technology behind AI or the stakeholders implementing AI, I don’t think we fully understand nor utilize the true power, or the possible true power, of AI. There is much more we could be doing. I know in a two-minute contribution trying to unpack it might be a bit difficult, but I do hope that these conversations and this session reach stakeholders from policy and government, but as well stakeholders from the industry sector. I loved the notion of stress testing and using hackers and all of that, but I would also advocate, as we just had a conversation about users and victims and children and a lot of people we maybe don’t fully grasp, for them to be the first movers to test out or stress test those AI models, so we can see maybe a different way of thinking. Talking about that, I think we need to go back to foundations, and the foundation is that the current models’ data sets may, and in some cases do, contain child sexual abuse material, non-consensual intimate imagery, and many other kinds of illegal or harmful material that we probably don’t know about. So, and I hope a lot of stakeholders are listening, let’s first clean up the fundamentals of the tools that we are supposed to be using. And number one, yes, you can use StopNCII.org hashes, and we can help you clear out already known instances of non-consensual intimate image abuse, but we must go further.
And as we spoke, and as I listened to really admirable contributions from every speaker here, I hope somebody’s listening to me and in a year’s time will prove me right: we are missing a global power force in AI development focused on safeguarding, focused on the guardrails, to be the solution for all of these companies that are having the same issues and problems. If there’s anybody who’s willing to work on that, SWGfL is here and definitely willing to support. But let me come back to the question about detection and intervention. I think those are two important pieces of a much, much larger picture. Yes, we can talk about detection of behaviors, from a perpetrator’s point of view, but as well from a user’s point of view, behaviors that might put them at risk without their knowing it. And we need to utilize AI tools en masse to help us mitigate some of those issues. But we also need to talk about how we then engage with those perpetrators after we’ve detected them, and how we guide those people to the right course of action, or what the consequences are of their repeated offenses, which we know are happening on platforms, or of individuals taking part in something called a collector’s culture, where they’re intentionally collecting hundreds and thousands of images of various other individuals. And we know they exist on many platforms. The question is, okay, now that we’ve used AI to detect this, what are we going to do next, and how are we going to act on it when we’re talking about intervention? I think as well that Sophie and the Revenge Porn Helpline are already using a rather innovative way, I would say, of utilizing AI tools to help us handle the number of reports, using a chatbot function that allows us to collect those reports, communicate with, and support a much larger number of people than we would be able to with just human support.
So again, when we talk about intervention, how about user-specific, mental-health-based, legal-based support that users can access when or if they encounter that harm online? But as well, I think the question was about frameworks, and I think we are missing a lot, as I said in prior introductions: governance, frameworks, structures on use, as well as the research that stands behind it. Ethical frameworks need to be user-focused and user-centric, victim-informed or survivor-informed most definitely, but then balancing the threat of having access to the most sensitive pieces of data that exist, and that is the data of your own or others’ abuse, and how it unfolds and to whom and where, while at the same time these are extremely sensitive data sets that we might learn from and research and maybe use to mitigate some of those risks in the future. So I’m not trying to say this is an easy thing to do, but I’m saying that we should start combating it now, before we end up in a much, much more difficult space to untangle. So if I’m looking at what we need to do: from a stakeholder company’s perspective, we need more dedication. I support Meta, and I love our friends from all over the world involved with StopNCII.org, but we have 12 of the biggest platforms in the world engaged with us. We need hundreds, we need thousands of platforms dedicating themselves to this, advocating for a solution in this space and then bringing it forward, and definitely more investment in the NGOs and researchers across the world battling in this space, because we are at the forefront, and we are non-governmental, small and agile organizations. We are meant to be at the forefront, but as we know, with every arrowhead there is a long, long shaft behind it that needs to be supporting us and pushing us forward. And I love the quote and the comment from Karuna about transparency. Absolutely, we need more transparency.
And as well, please agree or disagree with me: it does seem to me that the first movers, the first companies we see in the AI space, correct me if I’m wrong, are more interested in safeguarding their intellectual property and their finances than in protecting and safeguarding their users. And I think that’s a big question as AI becomes part of every part of our daily life: what do we value more? I think many of us sitting here, and many of us listening, would advocate for the privacy, protections, and safeguarding of our users first and foremost, and then we can build upon those tools. And maybe, in the end, I was trying to find a picture that helps me better understand the extremely rapid rise and development of AI. I don’t know if you saw the first films and first pictures of the Wright brothers and the planes when we started inventing them: after a couple of meters, the plane crashed. Then they spent months or years developing, then a couple of dozen meters, then hundreds of meters. So we evolved rather slowly, and then more rapidly, into the plane, something that brought us all here to this wonderful city of Riyadh. With AI, we are moving at a light-speed pace of development, but we have no idea who’s flying, and we have no idea how we’re going to land. So I fully advocate that we need to fix the foundations: invest more in cleaning the data sets, invest in the NGOs around the world battling these issues and trying to find solutions, and help us all understand and use AI better, so that hopefully we can land safely and find a better and more powerful use of AI for the benefit of us all. I think that would be it, and thank you so much.

David Wright: Thank you, Boris. That’s a point to finish on, forgive me. And if anyone does have any ideas about how AI is going to land, then we would very much like to hear them. Okay, so now we’re going to turn it over to you, in terms of any particular questions that anyone has. Niels, have we got any questions? Not yet online, but in that case I might introduce a personal question, since I have the mic anyway.

Audience: First of all, thank you all very much for these very, very valuable contributions. It was a very interesting panel. For those who don’t know me, I’m Niels Van Pamel from Child Focus, which is the Belgian Safer Internet Centre. I definitely agree with almost everything that has been said here, also with Sophie’s comment that with deepfakes we are maybe focusing too much on showing that something is fake, because that doesn’t really matter to a victim, for example somebody who’s a victim of deepnuding, with fake naked pictures that everybody believes to be real anyway, right? We did a study last year on deepnuding, looking first of all at what the market looks like, what is happening with young people in Belgium right now, and how this is exploding in our faces. We saw, first of all, that the long-term traumatic impact for a victim is exactly the same as for victims of real NCII. So first of all, we need to debunk some myths. And I also wanted to add to what I think Boris said, that we have to take into account how fast things are changing and moving right now, and avoid jumping to conclusions. To give an example: in this study, from 2023, we noticed that 99% of all the victims of deepnuding were women and girls. But this year, 50% of the cases we opened at Child Focus involved men who were victims. What we concluded is that in the early days, in 2023, most of the victims were girls because the data sets that were used only worked on girls and women. But right now, in a world of sextortion, perpetrators who want to sextort victims are using AI much more on their own behalf, and this technology apparently now also works for making deepnudes of boys.
So if we don’t do more research into how these technologies are finding their way to new vulnerable groups, we might overlook them. So that was maybe a comment on that: we need longitudinal follow-up and academic research. That was my comment here.

Nighat Dad: Can I just respond to the point about men becoming victims of sextortion? At our helpline, when we started, we set it up keeping in mind that more young women and girls were actually becoming victims and survivors of this kind of crime. But we ended up getting 50% of our complaints from men. And starting from 2016 up till now, we never said no to them, even though we started the helpline only for women. When men reach out to you from a context and culture where shame is so much associated with anyone, men or women, what we noticed was that young men had nowhere to turn to. Of course, the cyber harassment helpline was the unique one, and there were other helplines for women, for psychological support, but none for men. So we ended up dealing with their complaints. Another thing that we noticed was that young boys and men were hesitant to go to law enforcement as well, again because of the culture of shame associated with it. But also, and I think this is more related to privacy, they were really scared, like women, to give their evidence to law enforcement: how will they deal with it, how will they protect my data when I give it to them as evidence and they work on my case? So what they wanted, basically, was to report to the platform first, and to the helpline. Their first line of reporting was always the helpline and the platform, instead of law enforcement. So I think it touches upon the fact that this goes beyond any gender or sex. This impacts everyone, and especially in conservative cultures, women still find some space to talk to each other, but young boys just suffer in silence.

David Wright: Yes, and maybe to add to this comment: when you said boys are scared to go to law enforcement with evidence, I guess that’s where on-device hashing comes in.

Boris Radanovic: Wonderful, thank you so much. Niels, I appreciate the comment, and if I may, I think it proves the point for both of us that the modalities of harm are changing so rapidly that even we, whose job it is to follow them, sometimes have difficulty. And I loved a phrase used especially in describing deepfakes: the image may not be real, but the harm is. And we need to understand that in a fast-evolving AI visual space, we now have more and more AI tools being developed that can, based on one prompt, design you a couple of minutes of video. That use case will unfortunately extend and be far more wide-reaching: where we had fake or digitally altered imagery, we now have videos that might or might not seem real, but whose harm, as we already know, is real. So we don’t need another reason, we don’t need more experiences, to know that the harm perpetrated towards victims and users will be real. So thank you so much for that comment.

David Wright: Okay, just to carry on that theme, and before I open for a question here: Sophie, is there any response to that? Particularly, as well, knowing the increasing call volume and, as Nighat said, changes in terms of gender? Yes.

Sophie Mortimer: It’s interesting to hear what Nighat was saying. We certainly have always had a substantial proportion of cases of male victims affected by sextortion, though that proportion rose significantly around 2020, and hasn’t really fallen away since; it now makes up between a quarter and a third of our caseload. And I think it’s interesting that they’re talking about the creation of synthetic content to use in sextortion, but of course AI is also being used to generate those conversations at scale, and the persona that the victim survivor thinks they are talking to can also be AI-generated. That, of course, just ramps up the scale of these forms of abuse and, frankly, crimes. The other thing that struck me was some cases I looked at earlier in the year involving images that were very, very harmful to the people depicted in their own communities. I know it’s always the staple example we refer to, of a woman pictured without a headscarf, but that was the reality people were experiencing, and it can cause enormous harm. So I think we need to be aware that there are broader definitions of intimacy globally, and we need to be very nuanced in our responses, but also aware of how these technologies can be used to cause other forms of harm as well. So there are huge challenges here, not least the tenfold increase in case volume in the last four years. Absolutely, David, case numbers continue to rise year on year, and certainly in the last four or five years they have risen exponentially. Okay, thank you very much.

Karuna Nain: David, I don’t know if you can see me, but I just wanted to follow up on the gender discussion and check in with both Sophie and Nighat, based on what they’re seeing on the helplines. The initial research that I’ve seen indicates that it’s usually more financially motivated when it’s related to men and boys, whereas with women there are other motivations at play. Is this consistent with what you’re seeing on your helplines, or what are you hearing from people who are calling in?

Nighat Dad: Yeah, Karuna, I’ll quickly respond to that. I think it’s changing: for men who are public figures, politically active, human rights defenders, journalists, their intimate imagery or videos are actually one way of intimidating them into silence, basically. So it’s also shifting from financial motives to these other motives on the part of the bad actors.

Speaker 1: Just a short point on the role that companies, and this is not just Meta but companies like Meta and other social media platforms, can play in disseminating education as well as resources, because I think that’s really important too. I know a lot of people mentioned sextortion; we’ve recently run a sextortion PSA in a number of countries, where we worked with experts to develop the exact messaging that is really important for young people, young women and young men alike. And I think that’s somewhere more of us can collaborate, because everybody’s doing things in isolation, but there is real room for collaboration in those spaces.

David Wright: Thank you very much. Okay, I’m going to walk out here because I think we’ve got a question. If we can just ask you to introduce yourself as well, that would be great.

Audience: Thank you. My name is Adnan. I’m a Senior Legal Advisor at the SEED Foundation, a local NGO in Iraq’s Kurdistan region. Before my question, I want to thank all the panelists for their valuable insights and thoughts. I want to talk about accountability and how we can promote accountability for these perpetrators on these platforms. They are not committing one crime and leaving; they will be posting or using the content again, later, against someone else. So is there anything those companies do with regard to holding them accountable? And the second question: I know there is always a line when we talk about collaboration with courts and judicial authorities, handing over evidence and materials that have been removed, because that will help those women access justice. A lot of times when women seek assistance, some of them want the content stopped or removed, but others want justice and want the perpetrators held accountable. Thank you.

David Wright: That’s a great question. Thank you very much. Panel?

Audience: We’ve got an online question. Carissa is asking: do existing legal frameworks, such as ICCPR Article 17 on privacy, hold any weight in preventing NCII, both real and AI-generated? Okay. And one more question from the floor. Hi, I’m a researcher based in Germany, and I have a question for the representative from Meta. I’m actually reporting hate speech and sexually abusive content on a weekly basis, and I do it not just for work, but also on a personal level. The problem is that there are three possibilities. Only one request of mine was accepted by Meta. The second situation is that there was no response at all, and there was no way for me to challenge or continue my request or send any follow-up request. And the third possibility was that my request was not accepted. So in the case of wanting to follow up on my own request or challenge Meta’s decision, what would you suggest I do? I also want to ask what Meta’s take is on punishing the perpetrators behind those images, because I know that so far the highest punishment is to deactivate or delete the account. And my question to the woman in charge of a helpline, I’m sorry, I don’t remember your name: in your experience, were there any women or gender-diverse people who complained about sexual abuse? I’m also doing research on online gender-based violence, and in my own research there are a lot of trans teenagers and gender-diverse people who face these issues. Also, how would you reach out to people who don’t really understand the issues and who don’t have any hope of addressing them? Thank you.

David Wright: Okay. I think there’s a lot of commonality between the two questions. Perhaps there’s one specifically for Meta, and, given the Oversight Board, there’s a relevant point there too. So on the first question, particularly to do with prosecution, does anyone wish to respond?

Speaker 1: I can respond from Meta’s perspective. We work with law enforcement agencies across the globe, and when we get valid legal requests, we respond with the data that is required to prosecute, which is the job of the prosecutors. We also disclose, in the transparency reports that we publish, the number of data requests we’ve received from authorities and how many we’ve complied with. We also have teams at Meta who work directly with law enforcement authorities to ensure that, for the really high-severity crimes, and I’m not just talking about NCII here, they have a point of contact if they need one. What I will say is that we have less visibility into the actual prosecutions. For example, on the issue of child sexual abuse material, we are required, as a U.S. organization, to report to NCMEC. NCMEC then works with law enforcement authorities to make sure that really sensitive data is available to them in a very privacy-protective manner. We don’t really have visibility on how that data is used to prosecute the perpetrators, and I think that’s an important link in the chain that is missing. One of the things we talk about is that it’s a whole chain, and somebody asked what we do in addition to deplatforming. All stakeholders have a role: we can remove the content, we can deplatform, and we can work with law enforcement agencies to respond to valid requests, but there needs to be a lot more transparency around prosecutions. We know that in a lot of countries these crimes may be reported but not necessarily prosecuted, for a number of reasons, including lack of capacity, lack of understanding, lack of resources, or simply the inability to prosecute.
Nighat Dad: Yeah, responding to the researcher: not only as a helpline, but sitting at the Oversight Board, we actually investigated a bundle of deepfake images, one case from India and one from the US, and we recommended many things to Meta around the gaps that we saw. One thing that was clear to us was that Meta’s platforms need to create pathways for users to easily report this type of content, and they must act quickly as well. It shouldn’t matter whether the victim is a celebrity or a regular person. What we noticed in the cases we picked up was that they involved celebrities, public persons, and it was when their content went viral that we took up the case. But what exactly the mechanism at Meta is for giving weight to every user’s report is a matter of concern. I would also say that, as a helpline, we raise a lot of awareness in different institutions, schools, and colleges, and try to work with the government, although it’s not their priority: just to let people know that this kind of crime exists, but that there are also remedies and people they can reach out to. And you raised a point about repeat offenders. That’s also a concern for us: repeat offenders find a way to come back to the platform and do the same thing again. And I think this is a real question for the platforms: what do they do with repeat offenders?

David Wright: Thank you, Nighat. Also, on Carissa’s question, Sophie, I anticipate you may have a response as well. The question was whether existing legal frameworks hold any weight in preventing NCII. I suspect you have a response to that one.

Sophie Mortimer: Thanks, David. I’ll try not to take too long, but quickly, first, on the evidence point. Unfortunately, even in the UK, where we’ve had legislation around the sharing of non-consensual intimate images for almost 10 years now, the collection of evidence still presents challenges and there is no consistent approach. We have 43 police forces in the UK, so consistency is always a challenge, but certainly there’s nothing really around evidence. We have, as a helpline, provided statements to the police; we can establish what we have done: facts, dates, the links that we have removed. I also think there’s some work to do around the categorisation of intimate images, because sometimes this content, and it’s very off-putting for victims, is shared amongst multiple police officers, with the prosecuting services, and in courts. That’s a massive barrier to people coming forward. I think we could do some work around supplying information and categories that should be accepted by courts, so that all those individuals don’t have to view the content. That would be quite a supportive measure in getting people to come forward and supporting prosecutions. In terms of legal frameworks, as I say, it’s nearly 10 years since the UK first got legislation in this area. In fairness, it wasn’t great legislation to start with, but the government responded fairly quickly, in that within six or so years they recognised the legislation wasn’t fit for purpose, and a really thorough review was done. We got new legislation at the beginning of this year, which is much more comprehensive and focuses on the consent of the person depicted in an image rather than the intentions or motivations of perpetrators. That’s quite a powerful step forward, because this intention to cause distress is still quite current in other forms of legislation around the world. But there is definitely more.
More to do in terms of the legal status of this content. We are campaigning in the UK for non-consensual intimate images, particularly after conviction, to be classified as illegal content and treated in the same way as child sexual abuse material, to give us the same powers of removal. We are already good at removal, we have great relationships with industry, but there are multiple non-compliant sites whose business model is based on the sharing of this sort of content; they don’t comply with us, they don’t comply with other regulators, and they are hosted in countries beyond the reach of regulation. So I think it’s really important that we find other ways of leveraging the law to make this content much less visible, to give people the security that they can actually move on with their lives and not live in fear that their images are two or three clicks away from being viewed by anyone.

David Wright: Thank you, Sophie. I want to give a shout-out, too, to the draft UN Cybercrime Convention that was published in August, and particularly, I think, UNODC’s global strategy, in terms of the inclusion of NCII. It was much to our surprise that NCII was included within the draft Cybercrime Convention, which we anticipate will be ratified next year, meaning all states should have laws to do with NCII. So perhaps in response to that question: do such laws exist today? Some do. With what weight? We’ve heard from Sophie that they carry some, but they can prove quite porous. But there is optimism around a push, a direction across the world, in terms of laws that will help in this regard. I’m conscious we’ve only got a couple of minutes left. You wanted to make a quick comment, Boris?

Boris Radanovic: I’ll try. Thank you for the questions. Coming from an NGO and working in this space from an NGO perspective, all three questions come back to the same thing in my mind. We talked about accountability, legal frameworks, and reporting. It comes back to the middle letter of this conference, the G in the Internet Governance Forum: governance. I don’t think the scary question is what are you going to do, Meta, TikTok, Reddit. I think the scary question is what are we going to do, and how are we going to define accountability for perpetrators on those platforms, develop the legal frameworks and the governance to make sure the platforms follow them, and then hold them accountable. That’s a difficult question for us to define. The legal frameworks, I would say, need to be more inspired, more forward-looking, around the world, so that we as a society, across all cultures and nation-states, define how we approach accountability for abuse in digital spaces and how we hold those responsible to account. It’s a far more diverse question that we need to discuss as a society than one stakeholder can answer, but I’m here for it, and if anybody has a good idea or an inspiring legal framework from around the world, please do share it.

David Wright: Which will probably have to be the closing remark, given we’ve run out of time and the transcription has stopped. Hopefully we’ve given you some form of response here. As for the panel, as we’ve always said, this is a world-leading panel in terms of insight, so I pay tribute to all of your work, and I would invite everyone to show our recognition both for the extraordinary work these people do and for the panel session as well. Thank you very much.


Nighat Dad

Speech speed

147 words per minute

Speech length

2434 words

Speech time

987 seconds

AI models need to account for cultural nuances and non-Western contexts

Explanation

Nighat Dad emphasizes the importance of AI systems being adapted to understand cultural and linguistic nuances, especially in non-Western contexts. She points out that current AI models are often trained on English and Western data, which can lead to biases and inaccuracies when applied globally.

Evidence

Nighat mentions her experience on the UN Secretary General’s AI high-level advisory body, where they brought global majority perspectives to AI discussions.

Major Discussion Point

Challenges and Ethical Considerations in Using AI to Combat NCII

Helplines play a crucial role in providing support and resources

Explanation

Nighat Dad highlights the importance of helplines in addressing online harms, particularly for young women and girls in countries like Pakistan. She explains that helplines provide a clear picture to platforms about the contextual nuances of online abuse and offer support to victims.

Evidence

She mentions the Digital Rights Foundation’s Cyber Harassment Helpline started in 2016 to address online harms faced by young women and girls in Pakistan.

Major Discussion Point

Supporting Victims and Survivors of NCII

Platforms need better reporting mechanisms for users

Explanation

Nighat Dad emphasizes the need for social media platforms to create easier pathways for users to report content like deepfakes. She stresses the importance of quick action on reports, regardless of whether the victim is a celebrity or a regular person.

Evidence

She references the Oversight Board’s investigation of deepfake cases from India and the US, which led to recommendations for Meta to improve its reporting mechanisms.

Major Discussion Point

Supporting Victims and Survivors of NCII

Agreed with

Karuna Nain

Boris Radanovic

Agreed on

Need for transparency in AI use by platforms


Sophie Mortimer


Victim privacy and consent must be prioritized when using AI tools

Explanation

Sophie Mortimer emphasizes the importance of approaching AI use in victim support with caution. She stresses that victims’ trust in technology may be degraded due to their experiences, and their privacy and consent must be prioritized in any AI-based support systems.

Evidence

She mentions a previous AI support tool project at Southwest Grid for Learning that was ultimately not implemented due to concerns about adequately safeguarding users.

Major Discussion Point

Challenges and Ethical Considerations in Using AI to Combat NCII

Agreed with

Nighat Dad

David Wright

Agreed on

Importance of victim-centric approaches

Differed with

Karuna Nain

Differed on

Use of AI in victim support

Victim-centric language and approaches are needed

Explanation

Sophie Mortimer discusses the importance of using neutral language when interacting with those affected by NCII. She explains that many individuals identify as victims rather than survivors when first seeking help, and it’s crucial to reflect their own language back to them.

Evidence

She shares that in her experience, most people contacting their helpline describe themselves as victims, not survivors, due to the overwhelming feeling of loss of control.

Major Discussion Point

Supporting Victims and Survivors of NCII

Agreed with

Nighat Dad

David Wright

Agreed on

Importance of victim-centric approaches

Existing laws often fall short in addressing NCII

Explanation

Sophie Mortimer discusses the limitations of current legal frameworks in addressing NCII. She highlights the need for more comprehensive legislation that focuses on the consent of the person depicted in an image rather than the intentions of perpetrators.

Evidence

She mentions the UK’s experience with NCII legislation, which was revised after about six years due to being unfit for purpose. The new, more comprehensive legislation came into force at the beginning of the year.

Major Discussion Point

Legal Frameworks and Accountability

Case volumes for helplines are rising exponentially

Explanation

Sophie Mortimer notes that the number of cases reported to helplines has increased dramatically in recent years. This rise in case volume highlights the growing prevalence of NCII and the increasing need for support services.

Evidence

She mentions a tenfold increase in case volume over the last four years.

Major Discussion Point

Emerging Trends and Challenges


Speaker 1

Speech speed

154 words per minute

Speech length

2049 words

Speech time

793 seconds

AI can help with scale and speed of content moderation, but human oversight is still needed

Explanation

The speaker emphasizes that while AI can significantly improve the scale and speed of content moderation, human oversight remains crucial. They stress the importance of combining automated technology with human review to ensure appropriate actions are taken.

Evidence

The speaker mentions Meta’s use of proactive technology to catch violating content before it’s reported, while still maintaining human review processes.

Major Discussion Point

Challenges and Ethical Considerations in Using AI to Combat NCII

Education and awareness efforts are important

Explanation

The speaker highlights the importance of educating users about online safety and the risks associated with NCII. They emphasize the role that social media platforms can play in disseminating educational content and resources.

Evidence

The speaker mentions Meta’s recent sextortion PSA campaign in several countries, developed in collaboration with experts.

Major Discussion Point

Supporting Victims and Survivors of NCII

Platforms need to improve cooperation with law enforcement

Explanation

The speaker discusses the importance of cooperation between social media platforms and law enforcement agencies in addressing NCII. They explain that platforms respond to valid legal requests with necessary data for prosecution, but note that there’s often a lack of visibility on the outcomes of these cases.

Evidence

The speaker mentions Meta’s transparency reports that disclose the number of data requests received from authorities and how many were complied with.

Major Discussion Point

Legal Frameworks and Accountability


Karuna Nain

Speech speed

186 words per minute

Speech length

1547 words

Speech time

497 seconds

Transparency is needed on how AI tools are being used by platforms

Explanation

Karuna Nain emphasizes the need for more transparency from tech companies about how they are leveraging AI in addressing NCII. She points out that while companies have been open about using AI for issues like child sexual abuse material, there’s less information about its use in combating NCII.

Evidence

She mentions Meta’s use of AI in closed secret groups to proactively identify potentially non-consensual content for review.

Major Discussion Point

Challenges and Ethical Considerations in Using AI to Combat NCII

Agreed with

Nighat Dad

Boris Radanovic

Agreed on

Need for transparency in AI use by platforms

Differed with

Sophie Mortimer

Differed on

Use of AI in victim support


Boris Radanovic

Speech speed

180 words per minute

Speech length

2078 words

Speech time

689 seconds

Current AI models may contain problematic training data that needs to be addressed

Explanation

Boris Radanovic raises concerns about the training data used in current AI models, which may include illegal or harmful material such as child sexual abuse content or non-consensual intimate imagery. He emphasizes the need to clean up these fundamental aspects of AI tools.

Evidence

He suggests using tools like StopNCII.org hashes to clear out known instances of non-consensual intimate image abuse from AI training data.

Major Discussion Point

Challenges and Ethical Considerations in Using AI to Combat NCII

Agreed with

Nighat Dad

Karuna Nain

Agreed on

Need for transparency in AI use by platforms

Perpetrators are using AI tools in sophisticated ways

Explanation

Boris Radanovic points out that perpetrators are increasingly using AI tools in sophisticated ways to carry out abuse. He emphasizes the need for AI systems to detect and intervene in these behaviors, while also considering how to engage with perpetrators after detection.

Major Discussion Point

Emerging Trends and Challenges

Internet governance needs to evolve to better address online harms

Explanation

Boris Radanovic argues that internet governance needs to evolve to better address online harms like NCII. He emphasizes the need for society as a whole to define how to approach accountability for digital abuse and how to hold platforms accountable.

Major Discussion Point

Legal Frameworks and Accountability


David Wright


On-device hashing tools like StopNCII.org empower victims

Explanation

David Wright highlights the importance of tools like StopNCII.org in empowering victims of NCII. These tools allow users to create digital fingerprints of their images without uploading them, providing a way to prevent the spread of non-consensual content.

Evidence

He describes how StopNCII.org works, creating hashes of images on the user’s device and sharing these with participating platforms to prevent upload of matching content.
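The flow described above can be sketched in a few lines. This is an illustrative toy, not StopNCII.org's actual implementation: the real service uses perceptual hashing (Meta's PDQ algorithm), which matches visually similar images, whereas the cryptographic hash below only matches byte-identical files; the class and function names here are invented for the sketch.

```python
import hashlib

def hash_on_device(image_bytes: bytes) -> str:
    """Compute a fingerprint locally; the image itself never leaves the device.
    (Stand-in for a perceptual hash such as PDQ.)"""
    return hashlib.sha256(image_bytes).hexdigest()

class ParticipatingPlatform:
    """Toy stand-in for a platform that receives only hashes, never images."""

    def __init__(self) -> None:
        self.blocklist: set[str] = set()

    def register_hash(self, fingerprint: str) -> None:
        # The platform stores only the shared fingerprint.
        self.blocklist.add(fingerprint)

    def allow_upload(self, image_bytes: bytes) -> bool:
        # Incoming uploads are hashed and checked against the blocklist.
        return hash_on_device(image_bytes) not in self.blocklist

# A victim fingerprints their image locally and shares only the hash.
private_image = b"...victim's private image bytes..."
platform = ParticipatingPlatform()
platform.register_hash(hash_on_device(private_image))

assert platform.allow_upload(private_image) is False   # matching content blocked
assert platform.allow_upload(b"unrelated image") is True
```

The key design point, echoed in the panel's privacy discussion, is that only the fingerprint crosses the trust boundary, so victims never have to hand the image itself to a platform or helpline.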

Major Discussion Point

Supporting Victims and Survivors of NCII

Agreed with

Sophie Mortimer

Nighat Dad

Agreed on

Importance of victim-centric approaches

Global frameworks like the UN Cybercrime Convention are promising

Explanation

David Wright mentions the draft UN Cybercrime Convention as a promising development in addressing NCII globally. He notes that the convention includes provisions requiring all states to have laws addressing NCII.

Evidence

He references the draft UN Cybercrime Convention published in August and the UNODC’s global strategy including NCII.

Major Discussion Point

Legal Frameworks and Accountability


Audience

Speech speed

155 words per minute

Speech length

900 words

Speech time

346 seconds

Sextortion cases are increasing, including against men/boys

Explanation

An audience member notes that sextortion cases are increasing, and there’s a growing trend of men and boys becoming victims. This highlights the evolving nature of online sexual abuse and the need for support services to adapt to these changes.

Evidence

The audience member cites a study from 2023 showing 99% of deepfake victims were women and girls, whereas in the current year 50% of their cases involve male victims.

Major Discussion Point

Emerging Trends and Challenges

Agreements

Agreement Points

Need for transparency in AI use by platforms

Nighat Dad

Karuna Nain

Boris Radanovic

Platforms need better reporting mechanisms for users

Transparency is needed on how AI tools are being used by platforms

Current AI models may contain problematic training data that needs to be addressed

The speakers agree that there is a need for greater transparency from tech companies about how they are using AI to combat NCII, including better reporting mechanisms and addressing issues with training data.

Importance of victim-centric approaches

Sophie Mortimer

Nighat Dad

David Wright

Victim privacy and consent must be prioritized when using AI tools

Victim-centric language and approaches are needed

On-device hashing tools like StopNCII.org empower victims

The speakers emphasize the importance of prioritizing victim privacy, consent, and empowerment when developing and implementing AI tools to combat NCII.

Similar Viewpoints

Both speakers recognize the potential of AI in addressing NCII but emphasize the continued need for human involvement, whether in content moderation or in developing more comprehensive legal frameworks.

Sophie Mortimer

Speaker 1

AI can help with scale and speed of content moderation, but human oversight is still needed

Existing laws often fall short in addressing NCII

Unexpected Consensus

Increasing prevalence of male victims in NCII cases

Nighat Dad

Sophie Mortimer

Audience

Helplines play a crucial role in providing support and resources

Case volumes for helplines are rising exponentially

Sextortion cases are increasing, including against men/boys

There was an unexpected consensus on the increasing prevalence of male victims in NCII cases, challenging the traditional narrative that primarily focuses on women and girls as victims. This highlights the need for support services to adapt to these changing demographics.

Overall Assessment

Summary

The main areas of agreement include the need for greater transparency in AI use by platforms, the importance of victim-centric approaches, the necessity of balancing AI capabilities with human oversight, and the recognition of evolving victim demographics in NCII cases.

Consensus level

There is a moderate to high level of consensus among the speakers on these key issues. This consensus suggests a shared understanding of the complex challenges in combating NCII and the need for multifaceted approaches involving technology, policy, and support services. The implications of this consensus point towards a potential for collaborative efforts in developing more effective strategies to address NCII, while also highlighting the need for continued research and adaptation to emerging trends.

Differences

Different Viewpoints

Use of AI in victim support

Sophie Mortimer

Karuna Nain

Victim privacy and consent must be prioritized when using AI tools

Transparency is needed on how AI tools are being used by platforms

While Sophie Mortimer emphasizes caution in using AI for victim support due to privacy concerns, Karuna Nain advocates for more transparency from tech companies about how they are using AI to combat NCII.

Unexpected Differences

Gender distribution of NCII victims

Nighat Dad

Audience

Helplines play a crucial role in providing support and resources

Sextortion cases are increasing, including against men/boys

While Nighat Dad initially focused on young women and girls as primary victims, the audience member’s comment about increasing sextortion cases against men and boys revealed an unexpected shift in victim demographics. This highlights the evolving nature of NCII and the need for support services to adapt.

Overall Assessment

Summary

The main areas of disagreement centered around the readiness and appropriate use of AI in combating NCII, the balance between technological solutions and human oversight, and the evolving nature of NCII victims and perpetrators.

Difference level

The level of disagreement among speakers was moderate. While there were differing perspectives on the implementation of AI and the approach to victim support, there was a general consensus on the importance of addressing NCII and the need for improved legal frameworks and platform accountability. These differences highlight the complexity of the issue and the need for a multifaceted approach involving various stakeholders.

Partial Agreements


Both speakers agree on the potential of AI in content moderation, but disagree on the current state of AI readiness. Speaker 1 emphasizes the immediate benefits of AI with human oversight, while Boris Radanovic highlights the need to first address problematic training data in AI models.

Speaker 1

Boris Radanovic

AI can help with scale and speed of content moderation, but human oversight is still needed

Current AI models may contain problematic training data that needs to be addressed

Similar Viewpoints

Both speakers recognize the potential of AI in addressing NCII but emphasize the continued need for human involvement, whether in content moderation or in developing more comprehensive legal frameworks.

Sophie Mortimer

Speaker 1

AI can help with scale and speed of content moderation, but human oversight is still needed

Existing laws often fall short in addressing NCII

Takeaways

Key Takeaways

AI has potential to help combat NCII, but must be implemented ethically with human oversight

Victim privacy, consent and cultural nuances must be prioritized when developing AI tools

Platforms need to improve transparency around AI use and cooperation with law enforcement

Helplines and victim support services play a crucial role but are facing rising case volumes

Legal frameworks for addressing NCII are improving but still have significant gaps

Emerging threats like AI-generated deepfakes pose new challenges

A multi-stakeholder approach involving industry, civil society and governments is needed

Resolutions and Action Items

Platforms should provide more transparency on how AI is being used to combat NCII

More research is needed on evolving trends and impacts of NCII across different demographics

Stakeholders should collaborate on developing ethical frameworks for AI use in this space

Efforts should be made to expand tools like StopNCII.org to more platforms

Unresolved Issues

How to effectively hold perpetrators accountable across jurisdictions

Balancing use of AI for detection/prevention with privacy and consent concerns

Addressing non-compliant websites that host NCII content

Improving consistency in evidence collection and categorization for prosecutions

Mitigating bias in AI models used for content moderation

Suggested Compromises

Using AI for initial detection but maintaining human review for final decisions

Allowing victims to choose how their data is used in reporting/removal processes

Balancing removal of content with preservation of evidence for potential prosecutions

Thought Provoking Comments

Over the years we have seen that online harms or violence against women, or tech-facilitated gender-based violence, now we have so many names of this, but non-consensual intimate imagery around the world has very different consequences in different jurisdictions. In many parts of the world, it kind of limits itself to the online spaces, but in some jurisdictions it turns into offline harm against especially marginalized groups like young women and girls.

speaker

Nighat Dad

reason

This comment highlights the global variability in impacts of NCII abuse, emphasizing how cultural context shapes consequences.

impact

It broadened the discussion to consider cultural and jurisdictional differences, setting the stage for a more nuanced global perspective.

We simply couldn’t be sure that the technology could safeguard people in their time of need adequately enough. I really hope this will change because I think there is huge potential here, and that we can revisit these concepts, but it’s just really imperative that we have trust in the security of such a tool and that it prioritises the safety and wellbeing of users.

speaker

Sophie Mortimer

reason

This comment introduces a critical perspective on the limitations and risks of AI in addressing NCII abuse.

impact

It shifted the conversation to consider the ethical implications and potential drawbacks of AI solutions, balancing the earlier optimism about technology.

I think just labelling something as fake can undermine the experience of individuals because there is a real loss of bodily autonomy and self-worth. It can cause really significant emotional distress. If we only focus on the falseness of an image, an AI system might overlook the broader psychological and social impacts on individuals.

speaker

Sophie Mortimer

reason

This insight challenges the assumption that identifying fake images solves the problem, highlighting the deeper psychological impacts.

impact

It deepened the discussion on the nature of harm in NCII abuse, moving beyond technical solutions to consider emotional and social consequences.

AI systems need to solve for cultural nuance and we know that current models are trained on English and other Western contexts and languages. But I’m also hopeful and optimistic that while we are having these conversations, these conversations will lead to a new generation of AI that will better understand cultural and linguistic nuance.

speaker

Nighat Dad

reason

This comment addresses a critical limitation in current AI systems while expressing optimism for future improvements.

impact

It sparked discussion on the need for more diverse and culturally sensitive AI development, emphasizing the importance of global perspectives.

I think from a stakeholder company’s perspective, we need more dedication, and I support Meta, and I love our friends from all over the world involved with StopNCII.org, but we are at 12 of the biggest platforms in the world engaged with us. We need hundreds, we need thousands of platforms dedicating to this, of advocating for a solution in this space, and then bringing it forward, and definitely more investment in NGOs and researchers across the world battling in this space.

speaker

Boris Radanovic

reason

This comment emphasizes the need for broader engagement and investment from platforms and stakeholders to address NCII abuse.

impact

It shifted the discussion towards the need for more comprehensive and collaborative approaches, highlighting the scale of the challenge.

Overall Assessment

These key comments shaped the discussion by broadening its scope from technical solutions to encompass cultural, ethical, and psychological dimensions of NCII abuse. They highlighted the complexity of the issue, emphasizing the need for nuanced, culturally sensitive approaches that go beyond simple technological fixes. The discussion evolved to consider the global variability of impacts, the limitations of current AI systems, the psychological depth of harm, and the need for broader stakeholder engagement. This multifaceted exploration led to a more comprehensive understanding of the challenges and potential solutions in combating NCII abuse.

Follow-up Questions

How can AI systems for NCII detection be adapted ethically to fit varying cultural and legal contexts?

speaker

David Wright

explanation

This is important to ensure AI tools are effective and appropriate across different regions and cultures.

What role can AI play in scaling global NCII protection efforts?

speaker

David Wright

explanation

Understanding AI’s potential in this area could help improve and expand protection efforts worldwide.

What ethical principles are essential to ensuring AI tools support victims without compromising user autonomy?

speaker

David Wright

explanation

This is crucial for developing AI tools that help victims while respecting their privacy and agency.

How can we ensure transparency and constant auditing of AI content moderation systems?

speaker

Nighat Dad

explanation

This is important for understanding how well these systems perform and identifying areas for improvement.

How can platforms create easier pathways for users to report NCII content and ensure quick action regardless of the victim’s public status?

speaker

Nighat Dad

explanation

This is crucial for improving victim support and ensuring equal treatment of all users.

How can we address the issue of repeated offenders who find ways to return to platforms after being removed?

speaker

Nighat Dad

explanation

This is important for preventing ongoing abuse and improving platform safety.

How can we improve the collection and handling of evidence in NCII cases to better support prosecutions?

speaker

Sophie Mortimer

explanation

This is crucial for improving legal outcomes and supporting victims seeking justice.

How can we develop a consistent approach to categorizing intimate images for legal purposes?

speaker

Sophie Mortimer

explanation

This could help streamline legal processes and reduce barriers for victims coming forward.

How can we leverage the law to make NCII content less visible, particularly on non-compliant sites?

speaker

Sophie Mortimer

explanation

This is important for reducing the spread of NCII and helping victims move on with their lives.

How can we develop more forward-looking legal frameworks and governance structures to address digital abuse and hold perpetrators accountable?

speaker

Boris Radanovic

explanation

This is crucial for creating effective, long-term solutions to combat NCII and other forms of online abuse.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.