IGF2024
WS #246 Cyber diplomacy, peace and development in the Middle East
Session at a Glance
Summary
This discussion focused on cyber diplomacy and peace in the Middle East, led by James Shires due to the absence of other panelists. Shires explained that cyber diplomacy involves diplomatic efforts around cybersecurity issues, distinct from digital diplomacy. He noted that cyber diplomacy in the Middle East has evolved significantly in recent years, with countries becoming more engaged in international cybersecurity processes.
The conversation covered various aspects of cybersecurity in the region, including the use of AI in conflicts, content moderation, and the role of private sector actors. Shires highlighted the complexities of defining cybersecurity universally, as definitions often reflect cultural values and opinions. He also discussed the challenges of assessing cybersecurity maturity in the Middle East compared to other regions, emphasizing the importance of considering local priorities and goals.
The discussion touched on the impact of external actors and infrastructure ownership on cyber diplomacy in the Middle East. Shires noted the push for data localization in Gulf states and the involvement of private sector companies in cybersecurity efforts. He also addressed the connection between cybersecurity and democracy, as well as the evolving role of private companies in conflict situations.
To promote cyber peace in the region, Shires recommended improving data collection on cyber threats, fostering multi-stakeholder collaboration, enhancing internet connectivity, and increasing engagement with UN processes. He emphasized the importance of regional mechanisms in developing cybersecurity practices and competencies that can be translated to the global level.
Key points
Major discussion points:
– Cyber diplomacy in the Middle East, including challenges and opportunities
– Use of AI and digital technologies in conflicts, particularly in Gaza
– Data localization and digital sovereignty efforts in the region
– Role of private sector companies in cybersecurity and conflicts
– Need for better data and multi-stakeholder approaches to cyber peace
Overall purpose/goal:
The discussion aimed to explore cyber peace and diplomacy issues in the Middle East, covering both regional dynamics and connections to global cybersecurity governance. It sought to provide nuanced perspectives beyond typical Western framings of cyber threats in the region.
Speakers
– James Shires, Co-director at Virtual Routes
Area of expertise: Cyber peace and diplomacy in the Middle East
(Multiple audience members asked questions and made comments)
Full session report
Cyber Diplomacy and Peace in the Middle East: A Comprehensive Overview
This impromptu discussion, led by Dr. James Shires, co-director of Virtual Routes, focused on cyber diplomacy and peace in the Middle East. The conversation covered various aspects of cybersecurity in the region, including the use of AI in conflicts, content moderation, and the role of private sector actors.
Definition and Scope of Cyber Diplomacy
Dr. Shires began by clarifying the definition of cyber diplomacy, distinguishing it from digital diplomacy. He explained that cyber diplomacy specifically refers to diplomatic efforts surrounding cybersecurity issues, characterised by the involvement of transnational stakeholders and the private sector. This definition set the foundation for the subsequent discussion, focusing on cybersecurity aspects in international relations rather than broader digital issues.
Evolution of Cyber Diplomacy in the Middle East
The discussion highlighted that Middle Eastern states have become increasingly engaged in cyber diplomacy in recent years, particularly concerning the UN cybercrime convention negotiation process. Dr. Shires noted a significant shift from little engagement to active participation in UN processes. He also questioned the usefulness of “maturity” as a lens for assessing cyber diplomacy landscapes, emphasizing the importance of understanding regional contexts and priorities.
Challenges in Cybersecurity
Several challenges in the realm of cybersecurity were discussed:
1. Infrastructure Vulnerabilities: Dr. Shires pointed out that some countries, such as Iran, are more vulnerable to cyberattacks due to infrastructure weaknesses.
2. Data Localisation: There is a significant push for data localisation and digital sovereignty in Gulf states, reflecting concerns about data ownership and control.
3. AI in Conflicts: The use of AI targeting systems in conflicts, particularly in Gaza, was raised as a major concern. This led to an extensive discussion about the potential for indiscriminate attacks and the ethical implications of AI in warfare.
4. Private Sector Involvement: The increasing role of private sector tech companies in cybersecurity and conflicts was noted, particularly in the context of Israel and Gaza. This highlighted the complex landscape of cyber actors in the region.
5. Content Moderation and Disinformation: The challenges of managing online content and combating disinformation in the region were discussed.
Defining Cybersecurity
A significant part of the discussion focused on the challenges of defining cybersecurity. Dr. Shires emphasized that there is no universally agreed definition, and that different actors often have varying interpretations based on their interests and contexts. This lack of a common definition was noted as an ongoing challenge in the field of cyber diplomacy.
Recommendations for Cyber Peace
In response to a question from the audience, Dr. Shires offered several recommendations to promote cyber peace in the region:
1. Improved Data Collection: There is a need for better data on cyber threats and actors in the region.
2. Multistakeholder Collaboration: Working collaboratively with civil society and industry was suggested as a crucial approach to addressing cybersecurity issues.
3. Enhanced Engagement with UN Processes: While developing regional cybersecurity practices, there should also be increased engagement with UN processes.
Reconstruction Efforts
The discussion briefly touched on reconstruction efforts in conflict-affected areas like Gaza and Yemen, highlighting the importance of considering cybersecurity in post-conflict rebuilding.
Conclusion
The discussion provided a comprehensive overview of the current state of cyber diplomacy and peace in the Middle East. It highlighted the complex interplay between technological advancements, geopolitical interests, and ethical considerations in the realm of cybersecurity. The conversation emphasized the need for improved data collection, multi-stakeholder collaboration, and engagement with international processes to address the evolving challenges in this field. As the region continues to develop its cyber capabilities and diplomatic strategies, these insights will be crucial in shaping future policies and practices in cyber diplomacy and peace.
Session Transcript
James Shires: Good afternoon. This is slightly awkward, because this session was organized on cyber peace and diplomacy in the Middle East. Unfortunately, the speakers were unable to make it due to urgent health issues. So they didn’t manage to make it on their flight. I am the only speaker who was planning to attend this session who is actually here in person. But we emailed the Secretariat to say, can we cancel the session due to the lack of speakers? They did not remove it from the agenda. And so you are all here, because you’re expecting to see a session on cyber peace and diplomacy in the Middle East, although we do not have panelists. Now, and that applies to everyone online, as well. So hi, everybody online, and thank you for joining. We have two options. Option one is everyone enjoys the Riyadh sun, has a coffee, finds another workshop they would like to engage in. Option two is we have more of an open discussion on cyber peace and diplomacy in the Middle East. I should introduce myself. My name is James Shires. I’m co-director of an organization called Virtual Routes. And my background is in researching cyber peace and diplomacy in the Middle East. I wrote a book called ‘The Politics of Cybersecurity in the Middle East’, and have worked on this extensively throughout the region. So if you would like to have a conversation, an open conversation, about these issues with me, I am very open to that. If you would rather get a coffee or go to a different workshop, then I will not be offended in the slightest. Everyone is still here. OK. So let me set the scene somewhat, because this is a controversial topic. It’s one that doesn’t often get the attention it deserves. Especially if you talk to outsiders, policymakers in Europe and the US, about cyber peace or cybersecurity in the Middle East, they will usually think of one country.
And they’ll worry about a country that has both been the target of significant cyber operations, and has also conducted those offensive cyber operations itself. And that’s Iran. So the framing in a lot of the Western policy world on cyber Middle East is an Iranian cyber threat. Just as if you were to go to the same conferences, and say you’re worried about the major threats, you would hear talk of another three or four similar states. The same list of states. But that is not the kind of cyber peace and diplomacy in the Middle East that I want to talk about today. Because my experience and my research in this area suggests that there’s a lot more nuance. There’s a lot more interesting things going on. And that, actually, there are some really promising signs of cyber diplomacy in the Middle East that other areas can work on as well. I can talk about a few of those examples if we’d like to. But I can also just open the floor to your own reflections. We have someone who already said it. We, I think the scheduling for the panels is currently, I would say, in flux. As in, the secretariat have not responded too much to request to cancel or move panels around. Any rescheduling for me, I think, is risky. Because it probably just wouldn’t happen. So, would anyone like to open the floor for any questions? Why did you come to this session? What did you expect to hear from this session? I have a microphone, I can pass around.
AUDIENCE: Hello, everyone. So, maybe my question here is, what is cyber diplomacy to you? And how has the difference in the global landscape today inspired the creation, or like, the general idea of cyber diplomacy?
James Shires: There’s a whole set of questions on cyber diplomacy, as in the use of cyber or digital tools or digital diplomacy that is very interesting and important. How do ministries of foreign affairs adapt to the digital world? How do they use AI, for example, in their day-to-day lives? That’s not, for me, cyber diplomacy. It’s diplomacy about cybersecurity issues. It’s distinct because these issues have stakeholders that are much more transnational than many other issues, right, you can’t necessarily tie down the technical community governing the internet to particular states or responsible states. And it’s also a lot more based on the private sector as well, right? So that, for me, is cyber diplomacy in general. It has its own challenges based on the kinds of actors involved, and also the technical experience required for the issue itself. So if you want to engage in cyber diplomacy, there’s a relatively high bar to entry. Now, that’s not unusual in diplomacy. Most diplomacy on science and technology requires some level of existing knowledge, but cyber diplomacy may be more than others. So that would be my framework for what cyber diplomacy is. Now, please, go ahead.
AUDIENCE: How do you think it’s going to look five years from now, 10 years from now? Because maybe five years ago and 10 years ago, it wasn’t the same, it wasn’t what it is now.
James Shires: So to bring the conversation back to the Middle East, five years ago, there was very little cyber diplomacy in the Middle East, I would say, right? The extent to which most Middle East states engaged with UN cybersecurity governance processes was very limited. You had some, at places like the Open-Ended Working Group, the OEWG, putting forward relatively ritual statements at the start of every session, right? So Iran would make sure it aligned with particular views, mainly sort of rejecting what it saw as the Western dominance of these processes. And Syria would align with that as well, right? So you saw some coalition building, but many other states just didn’t engage at all. That really changed with the UN Cybercrime Convention, right? Because there’s a long history to cybercrime laws in the region, right? They have not signed up to the Budapest Convention on Cybercrime, which was the European mechanism, but they did have very early cybercrime laws, often in the Gulf, right? So the first in Saudi Arabia was in 2006, and then it was updated eight years later, after the Arab Spring and after the Arab Convention on Cybercrime put forward by the League of Arab States, right? So the idea of a UN Convention on Cybercrime was something that really, I think, spoke to these states more. And you had a lot of very active engagement, especially towards the end of that process. So that’s where I see that going. Now, you asked, where does it go in the future? We’ve had lots of attempts by Saudi Arabia and other states to put forward new means of cyber diplomacy. You’ve had things like the Digital Cooperation Organization. I expect to see more of those efforts.
I expect to see a lot more cyber diplomacy as soft power, as the ability to include things like cybersecurity issues on major geopolitical stages, you have the Doha forum in Qatar, you have the Manama dialogues in Bahrain, all of these states are trying to put forward, you know, their take and their interpretation of geopolitical issues and to act as a convening space for quite sensitive geopolitical topics. I expect cyberspace to continue and grow in those areas as well. There are more microphones, if anyone else would like to come in, please. One there and then we’re here. And do come to the purple table. It’s nice if you like it.
AUDIENCE: Thank you. So when I look at cybersecurity, there’s a digital infrastructure component, right? So we are seeing in some of the wars targeted removal of ICT infrastructure, which means people are not able to communicate. And that’s a big part. So one question is, in that realm, how would cyber diplomacy actually act? The second one is actually the softer part. So there’s the algorithmic part, right? So when you have a program that can identify targets very specifically from data downloaded from one area, and then remove them in war, this becomes a major challenge. But it may not just be war, it could be disinformation. So we’re also seeing a lot of disinformation come in at the same time. So I think there’s three things that I’m looking at. One is the information and the literacy and maybe how that works over there. Romania just now had an issue where they actually cancelled their presidential elections, or postponed them, because of this issue. Where would it come in? Then rebuilding after, for example, a war: we often see it’s the same people who supply the equipment for the war, which I find very strange. So just asking your view on very delicate balances.
James Shires: Okay, that’s three big questions, right? There’s one, cyber tools in conflict and regulating cyber tools in conflict. There’s two, which is content issues like misinformation and disinformation. And there’s three, which is conflict reconstruction, right? So let’s talk about them separately. The first one, cyber tools in conflict. Now, here, I would like to do a little bit of a contrast with a global perspective. In the years after the Stuxnet virus, when everyone thought this was the advent of cyber war, you had lots of commentators saying, this is what cyber war will look like. It’ll be high sophistication, extremely high resource investment targeted at strategic sites. You had a lot of speculation in the following years about what cyber war would look like after that. You then have the Ukraine conflict in 2022. And this challenges expectations, right? Cyber tools are blunted, right? The cyber defenders seem to have a good chance of repelling cyberattacks. There’s a lot of capacity building and investment going into Ukraine from its allies doing that. Then you have Israel’s invasion of Gaza. And there, the only cyber component really understood is the link to actors outside the conflict, for a very simple reason. All infrastructure in Gaza is targeted and pretty much destroyed. Not just telecoms infrastructure, but water and energy and everything else. So there are no cyber tools in conflict on the Gaza side, because there is no use of cyber tools there. On the other side, you have a bigger conversation about the role of digital intelligence in targeting in conflict. That’s not necessarily cybersecurity narrowly understood, compromising devices, getting into devices. But this idea that in conflict, you use digital signals and device signals to do kinetic targeting, i.e. to kill people and bomb people, is extremely obvious in both those conflicts.
In Ukraine, it goes from things like mobile phones on the frontline, and malicious apps being used by soldiers, then being used for drone strikes. In Gaza, it’s things like AI targeting systems putting up lists of potential target sites for Israeli missile strikes, right? So this idea of digital targeting in conflict changes the landscape significantly; it sometimes gives greater advantage in already asymmetric conflicts. And by that, I mean, it enables states to bomb more, and to bomb more indiscriminately. So that’s just a quick thing on cyber tools in conflict, right, as where we are now. The second point is on content and disinformation. Now, we’ve already talked about cybercrime laws. Most cybercrime laws in this region have a strong content component, right? They have a provision saying disinformation or fake news or certain kinds of content will be prosecuted, will be considered crimes under this law. That’s not the case in other regions. So the Budapest Convention, for example, very clearly excluded content from its list of potentially criminalizable offences. This was very much a political decision, right, one that a lot of, for want of a better word, Western countries rejected wholesale up until the concerns about mis- and disinformation stemming largely from the US elections, right? They then understood that there was some need to focus on disinformation and content issues. There’s also a parallel discussion of online safety, right, of bullying and much more human security issues affecting content moderation and regulation. So both of these pressures have essentially slightly flattened the spectrum of positions on how to regulate content online. Most states now agree that there should be some regulation of online content. They disagree significantly about how much and what kinds of content, right?
But most of these disagreements don’t necessarily lie in the technology itself, they lie in the definitions underlying that technology. For example, what is a criminal act online? What violates national laws on media freedom and similar, right? So there’s a wide spectrum there. And there are a wide range of positions here in the region as well. Now, of course, mis- and disinformation have become such a tangled topic, because you have to really piece it apart, right? So when you’re tracking disinformation operations around the Gaza conflict, right, you are essentially tracking the kinds of opinion that might be widely seen on many streets in many, many Arab countries, right. So it’s very difficult to disentangle the identification of disinformation from the political sides and positions that are taken by the people promoting or reading that information. I’ll probably leave that there, because there’s a lot more we could say about that. The third point, reconstruction. Now, I don’t have much to say on reconstruction, especially in the Middle East context, because the obvious places here are after the war in Yemen, and whatever will happen after Gaza, right, where there will be a lot of investment required. And of course, increasingly, you see Lebanon requiring massive reconstruction funds as well, right? The way in which cyber relates to that is in a way quite surface level, right, because frankly, it’s physical reconstruction, it’s all the other kinds of infrastructure that need to be rebuilt. But maybe there’s an opportunity there, right? Maybe there’s the ability to bring in new forms of very modern telecoms infrastructure, right? This is me trying to be as optimistic as possible in a situation that is incredibly pessimistic.
So I would say there is maybe some potential for reconstruction, but the big decisions about who pays for it, even when it happens, right, we’re not there yet. So I wouldn’t be able to comment any further. There’s a question online. So should I read out the question? Well, yeah. Gaza has become the world’s first laboratory for testing and using, this is a question, artificial intelligence-based systems and weapons to commit apartheid war crimes and genocide against the oppressed Palestinian people. That has led to huge numbers of civilian deaths. What can be the role and responsibility of the United Nations governments in dealing with the weaponization of artificial intelligence that is contrary to human values and international law? So thank you very much for this question. And I’m not sure where to look because panelists can’t really see me. So this is a crucial point, right? And I have a feeling that many of you are at this panel because it is the only panel on any topic remotely like this at the current IGF, right? You scroll through the schedule. There is a panel tomorrow afternoon on transnational repression and cybercrime. So you should make sure that you attend that if you’re around. But there is not much discussion of conflict in general. There is certainly not much discussion of the conflict in Gaza, and let alone any discussion of the use of AI targeting tools that I mentioned. The Lavender system or others in terms of how that should be addressed by the UN. Now, I’ll make two short points. One is that the UN system is already mobilizing to look at the war in Gaza from a point of view of international law and what is prosecutable and not, especially by the International Criminal Court. So that is already happening. It happens slowly. There are prosecutors there looking at the well-publicized indictments of the leaders on both sides that is happening. So that is happening by the ICC, which obviously there are discussions of jurisdiction. 
But the UN system itself, there are also resolutions condemning what is happening, but these run into the existing geopolitical divides, right? Whether at the Security Council or at the General Assembly. So the UN system is mobilizing, but could always do more. Now, in terms of the IGF, question one would be whether the IGF should address AI itself, right? It is the Internet Governance Forum. One could argue it has a narrow mandate to address internet governance, which does not include all the many social and conflict-related implications of AI, right? So it could say this is not part of our mandate. Now, we have already heard from the many high-level speeches yesterday and so far that the Internet Governance Forum does intend to include AI very firmly within its remit, and you can see from many panels on AI ethics and responsibility and social impacts that this is the case. So I would say that certainly the Internet Governance Forum should look at the use of AI in conflict. It should link to other initiatives like the REAIM summits, on responsible use of military AI, to understand not only how AI is being used now, but what is the potential for putting guardrails on it in future. If the lessons from the cybersecurity debate tell us anything, they tell us that the intergovernmental multilateral process for putting anything in place around AI, especially military AI, will be long and convoluted and probably unsatisfactory at the end. But there’s still hope. Please go ahead, yeah.
AUDIENCE: Is there published research about cyber diplomacy in the Middle East, for example from the United Nations? If there is, where can we read it? It would be good to watch or read this material. I was just going to chime in on the last matter about the use of AI in kinetic conflict. So, on the military side, the phrase to look for is human in the loop, because that’s how military people think of it, right? If there’s a human in the decision-making loop and the kill chain, then there’s ultimately a person upon whom responsibility falls, and they know how to do that. Whereas if it was just software all the way down, then it’s machine learning, training data, who filtered the data, and so on, and ultimately everyone can evade any responsibility for anything. And on the NGO side, there’s the Campaign to Stop Killer Robots, which is the coalition of NGOs that are working in this area.
James Shires: Thanks, Bill. And to tie that human in the loop conversation back to what the question was asking about, which is Israel’s actions in Gaza, right? There is not a discussion there about there not being humans in the loop, right? That is not the issue that is taking center stage. The issue is that the humans in the loop are operating with insufficient constraints on collateral damage, on targeting limitations, on the numbers of strikes they’re conducting, et cetera, et cetera. So it’s a very different scenario for how to regulate the use of AI in conflict when the real problems are coming not from the use of AI itself, but from its incorporation into existing flaws in decision-making. I hope that helps answer the question. We had one here, which was, where do you read about this, right? So there’s a book called Cyber War and Peace in the Middle East, which was published by the DC-based Middle East Institute a couple of years ago. There’s a book on the politics of cybersecurity in the Middle East, which you don’t want to read. And there’s also recent work, especially on AI and the Gaza war, for example, by Anwar Mhajne, who’s at Stonehill College in the US. So yeah, there’s a few people writing on this, but it’s not a very large community either. So you can have my card afterwards. I will send you a list of references. Can I go online and then come back to you? Because I think there was an online question. So again, I’ll read it out. Existing internet governance system is not sufficient to respond to the policy issues related to data, domain names, safety, health, common infrastructure, technical standards and content, and requires the adoption of a comprehensive approach and a new architecture. Can the smart combination of multilateral governance model plus multi-stakeholder consultation be appropriate?
Can the model of ICAO, the International Civil Aviation Organization, be a good example to ensure the legality, health, and safety of cyberspace? So I guess this is not a Middle East-related question, but it is an important question. It’s, well, how fit for purpose is the current internet governance system? Now, the International Civil Aviation Organization is a really interesting example, because there you have an extremely highly regulated industry, right? Not only is airplane building a very highly regulated activity, but airplane communications, everything to do with airspace, is extremely highly regulated, right? So in a way, it is kind of the opposite of the internet, right? The internet, by default, by design, in the technologies, is not regulated, right? Anyone can use it, they can set it up, they can create a network, they can connect that network to other networks, and so on, right? So, in a way, the ICAO is exactly, entirely the wrong end of the spectrum to think about internet governance, right? You have very few actors who are already used to high levels of regulation, cooperating to ensure they can build and maintain safe passage for aircraft. The internet, on the other hand, is a really wide diversity of actors, all of whom are able to do what they want and try and do what they want really quickly, and so you have to try and bring those in. So, I think a combination of a multilateral governance model plus multistakeholder consultation would just be insufficient, right? You just would not get the right people around the room, you would not be able to enforce or act on any recommendations or things that are made; the multilateral governance would not be effective. That’s what I’d say there, but I know there are other people in the room who might want to come in as well. So, would you like to come in? You want to come in, please? You have a, you have a question?
Okay, we have two questions. You have a question, right? Okay, I’ve answered it already. Okay, you have a question, you go. And we have one at the back there, go ahead.
AUDIENCE: Can you hear me? I can hear you, yeah. So, given the fact that we have different definitions for certain cybersecurity terms, if we were to lead an effort to find a unified definition for a single cybersecurity term, what steps should be followed?
James Shires: So, we would like to have a single universally agreed definition of cybersecurity. Now, the problem arises, right, when you start stepping outside of technical definitions of security, right? So, the classic technical definitions are things like confidentiality, availability, and integrity, right? And you can define those properties within networks relatively well, right? With those classic Alice and Bob diagrams about who can read what when, that’s how you’d get to those definitions. But even then, take integrity, right? Which usually means the communication comes out the same as it went in, and also that it has the right timestamp, right? Because if you delay communication, that’s also a failure of integrity. These properties have also been redefined, right? So, a lot of the disinformation and misinformation work is badged as integrity work, right? So, maybe Facebook or Meta would try and think about their content moderation efforts within that framework. So, you very quickly end up pushing at the boundaries of these technical concepts to make them include much thicker ideas about what shouldn’t be included. And that’s the root of the problem with defining cybersecurity. Because then you get into things like, okay, well, what is security? Does it mean that there’s no outside access to a network, right? You have a secure network. That’s a very kind of black and white definition, right? There’s inside and outside. Is it being able to respond and make sure you can continue to function, right? That’s a much more resilience-focused definition of security. Or, as many people around this forum say, should there be a human idea of cybersecurity, right? That is, what does it mean for people, whether they are secure or not in what they do online?
And that's possibly the thickest definition you have, which includes everything to do with content, all the technical aspects of cybersecurity, and everything else. So, the reason we find it hard to define cybersecurity is that contained within this discussion are a lot of our own values and opinions and our cultural background as well. We start unpicking those whenever we try to get to a definition. So, I don't think there will be a single definition. I might be wrong. If someone can give me a single universally agreed definition of cybersecurity, I'd love to hear it. Do you have a microphone? Yeah, that feels odd.
AUDIENCE: Yeah, I was just wondering, how would you assess the maturity of the cyber diplomacy landscape in the Middle East compared to other regions of the global South, whether it be Africa or Southeast Asia or Latin America or the Caribbean?
James Shires: Yeah. So, point one: I don't like the term maturity. There's an ITU maturity index, and there are a lot of maturity surveys. The term implies that all that's needed is a certain amount of capacity and then everyone will engage in the same way, so that the problems come from a lack of maturity. I actually think that's not necessarily the case here. You have very deliberate choices by states about what they invest in. Do they invest in cyber diplomacy as classically understood? Do they see the UN discussions, the OEWG, et cetera, as providing value? Or do they do something else as well, something more national or more regional? So, yes, you can do a regional comparison. You can do it based on indexes, such as the ITU index, or based on the Oxford Cybersecurity Capacity Maturity Model. There are a lot of ways to do inter-regional comparison. My preference is always to ask: what do decision makers and leaders in the region want to get out of these discussions, and are they getting what they want out of them? So, for example, with the ITU index, are they able to communicate that their country is digitally advancing, that it's a tech power or a tech hub for the region? You've seen that very powerfully in the UAE and in Saudi Arabia. Are they able, in terms of capacity building, to devote funding to projects they are interested in, in countries where they have a diplomatic interest, whether that's in the Horn of Africa, in Djibouti, or somewhere else? Yes, of course. And are they able to do education? There are a lot of very impressive open education initiatives in Arabic and in English, in the Emirates, Qatar, and Saudi Arabia, that really reach the population.
So, yes, there's a lot of activity there. When I'm speaking to people at professional conferences and similar events in the region, often, if you reject the global indexes of maturity and say it's all about what you want, this becomes a conversation about standards. Okay, how many companies in the region are complying with ISO 27000? How many of them are adopting the NIST framework (you adopt it rather than obey it)? And there you might get a much lower answer. You might find that they comply for audit purposes only, without practicing the kinds of measures they would advise, or that they are not engaging in the standards conversation at all. And there you can point to probably more reliable metrics: there are lower levels of maturity, especially in some sectors in this region, that would need a lot of advancing. But that's possible to address at a regional level, for example through the GCC, the Gulf Cooperation Council. Yes, further along as well, so please.
AUDIENCE: Thank you. So, the whole panel is on cyber diplomacy and peace, right? There's an angle where you have to look at peace within the Middle East, that is, the Middle East by the Middle East, and maybe the Middle East with others, because we know there are a lot of actors outside the Middle East involved in the Middle East, which disrupts peace, right? So if cyber diplomacy is about influence and getting people aligned to the cause, and everybody wants peace, because peace is when you prosper; you don't prosper very much unless you're selling things that disrupt that, right? Most of the hardware and the tech comes from other sides of the world. How will this impact cyber diplomacy in the future, when you control the narrative or you control the data? And we see that already: I think out of 500 cables, 99% are private sector cables. And if you look at the private sector companies owning them, you know where they are; they're mostly in the West. So do you think this becomes a challenge in the future?
James Shires: I certainly think it does. There are two answers to the question. One: ownership of hardware and infrastructure, and ownership of data, are both really live issues, especially for this region. There's a strong push for data localization. If you look at cloud regulations in the Gulf states, and at the agreements they've struck with the main cloud providers, they are really pushing for localization through partnership with local companies. So if you have Google Cloud Platform, Amazon, etc., they are working with partner companies, and they are also asking for dedicated data centers. Now, I published something quite recently on this, on the role of cloud computing and cybersecurity in the diplomatic interaction between Israel and the UAE through the Abraham Accords, and some of that analysis showed how meaningless some of this is. The requirements that Israel and the UAE put on the data centers said: you have to have local data centers. Okay, great. But actually, we want a whole cloud region. Okay, so we want our own independent cloud region here, which is at least three independent data centers at least a certain physical distance apart, right? And they didn't build these data centers; they just rented some of them. Some of them weren't sufficiently far apart to count as a cloud region, so they just changed the definition slightly. So this demand for localization is met in name but not in reality. That's point one. Point two is the role of external actors. Here, you have to look at what is really going on in terms of offensive cyber tools, and here we switch tack slightly to the threat landscape.
There's an actor, called Predatory Sparrow by the threat intelligence industry, which has been linked to cyberattacks in Iran that have had disruptive effects on railways, on steel plants, on fuel stations, things like that. Now, part of this is because the infrastructure in Iran, as far as I know, is outdated; it doesn't have the ability to modernize its whole digital critical infrastructure, so it is relatively easy to target. But also, this actor, whoever it is, activist or state-sponsored, is pushing the boundaries. They are disrupting critical infrastructure to a certain extent and then rolling back, not going as far as they claim they could. So there are also actors in the Middle East that are consciously pushing the boundaries of what is acceptable in cyber conflict and seeing how far they can go. And that's a really worrying trend. Please, I don't know if this one works.
AUDIENCE: I think it does. We'll share a mic. I'll put these on so I can hear. No, actually, I don't need it. Thanks for organizing this session; it's actually a good opportunity, because we get a chance to talk, so I really like this. I had a couple of reflections. I don't come from the security field. My name is Jamal Shaheen, and I'm based at a university in Belgium. So I just had a couple of questions, and I'll start with one. In Europe, at least, with the new European Commission that's just been put in place, we see this connection being drawn between cybersecurity and democracy, or the stability of institutions. And I was wondering whether that's something that you've seen panning out in this area. The second question: as part of the Ukraine conflict, I've noticed that companies like Microsoft have been publishing threat reports saying what Microsoft, as a private actor, has been doing to defend Ukraine, changing the landscape of security actors, the types of actors you talk about when you talk about cyber diplomacy, maybe incorporating different types of actors. I was wondering how that's playing out in this region, particularly in light of the last question. Is the private sector now playing a bigger role? And of course, that private sector is largely based outside of the region. Is that playing an important role in this space, and how will it play out? Because the opposite of that is the move towards more calls for digital sovereignty (you called it data localization, but digital sovereignty), which only gives further space for more conflictual responses. I'm a bit concerned about that kind of evolution of this dialogue.
James Shires: Yeah, thank you, really interesting points. On the first one, this idea that the private sector is becoming almost a combatant: they are providing support in Ukraine via Microsoft, critical services, some of them defense services, some of them military services. And it's very hard to distinguish. If you provide a cloud platform to the Ukrainian government, some of it is used for defense and some of it is not. What, then, is a legitimate target for the other side? How far do these companies become involved in that war? These are really difficult questions that I think a lot of people are very worried about, and I also think that the boundaries are being really tested there. Now, in the Middle East conflicts that I've talked about, in Yemen and in Gaza, there's not the same question at the moment; there's not the same involvement of private sector actors on the defensive side. There is on the Israeli side, which is a major cybersecurity hub with big Israeli companies. You might want to check out The Palestine Laboratory, a very carefully researched book on the use of the Palestinian territories for developing technologies throughout Israel's history. So that's not just to do with AI in Gaza now; it's a longer-term phenomenon. And it means that companies are very closely connected to the Israeli state. That's not a new thing, and it's entirely conscious as well. You have Israeli leaders like Netanyahu very much selling the cybersecurity industry, through things like NSO Group and others, to this region and this country as well; they are using offensive and defensive cyber capabilities as diplomacy. And a lot of the recruitment comes out of the military. So the idea that private sector involvement is something new comes from, I guess I would say, a European or American squeamishness, a Silicon Valley idea that these companies are not part of the state.
The idea that they're not associated with state military activities just doesn't exist in the Israeli context, and that's why it's really different. And because Israel is such a central hub, it doesn't need international companies to come in in the same way. On the other side, in Gaza and Yemen, they're not getting the same level of support, just to state the blatantly obvious. No one is going in and saying, okay, we will migrate your cloud infrastructure online, we will provide you with a digital e-government service. None of that is happening, purely because of political priorities.
AUDIENCE: If we wanted cyber peace, or peace in the region, to come back to your main framing question, what are the four or five things you recommend we do? Or three things.
James Shires: Thank you so much for giving me a wrap-up. And for those online, I know we had a couple of questions; I hope you found this stimulating. And just to remind everyone who joined a little later, I am having this as a discussion because many of the panelists couldn't make it. So thank you, everyone, for sticking with it. What are the four or five things in terms of cyber peace? The first is data: data on the use of cyber tools in conflict, and on the actors using those tools. At the moment we rely, as this gentleman said, on private sector threat intelligence reports that are very skewed towards certain actors. It's not a good source of data, but it's the only one most researchers have. So we need new and different kinds of data about what cybersecurity threats actually exist on the ground; then you can have a conversation about how to counter those threats once you know more about them. The second is working in a multistakeholder way: working more with civil society, with industry, and with internet governance organizations that pursue non-political aims. There's a lot of controversy here in the region, everything we've talked about so far today, from content moderation and censorship to offensive cyber tools and spyware. The third is finding ways for countries to agree that they want greater and better internet access and better connectivity, and using diplomacy to achieve those aims. The fourth would be plugging in as much as possible to the UN processes. They have all the flaws that I've already talked about, but at the moment there are no alternatives.
And finally, use regional mechanisms, whether it's the GCC or other regional groupings, to develop your own practices, to develop states' competencies and collaboration in cybersecurity, and then translate that to the global, multilateral level. That, I think, would be the last recommendation. Thank you all for what has been an extremely interesting conversation. Please do follow up; I have some cards here. I know I promised people some references, some things to read, and I will definitely do that if you come and approach me afterwards. So thank you so much.
James Shires
Speech speed
151 words per minute
Speech length
5915 words
Speech time
2343 seconds
Cyber diplomacy is diplomacy about cybersecurity issues, distinct due to transnational stakeholders and private sector involvement
Explanation
James Shires defines cyber diplomacy as diplomacy focused on cybersecurity issues. He emphasizes that it is unique due to the involvement of transnational stakeholders and the private sector, which differs from traditional diplomacy.
Evidence
He mentions that the technical community governing the internet cannot be tied to particular states, and that the private sector plays a significant role.
Major Discussion Point
Cyber Diplomacy in the Middle East
Differed with
AUDIENCE
Differed on
Definition and scope of cyber diplomacy
Middle East states have become more engaged in cyber diplomacy in recent years, especially around cybercrime conventions
Explanation
James Shires notes that Middle Eastern countries have increased their involvement in cyber diplomacy, particularly in relation to cybercrime conventions. This marks a shift from their previous limited engagement in UN cybersecurity governance processes.
Evidence
He mentions the active engagement of Middle Eastern states in the UN Convention on Cybercrime process, especially towards the end.
Major Discussion Point
Cyber Diplomacy in the Middle East
Cyber diplomacy is increasingly used as soft power by Middle Eastern states to promote their geopolitical interests
Explanation
James Shires argues that Middle Eastern countries are using cyber diplomacy as a form of soft power. They are leveraging cybersecurity issues to advance their geopolitical interests and position themselves as tech hubs or powers in the region.
Evidence
He cites examples such as the Doha forum in Qatar and the Manama dialogues in Bahrain, where states are using these platforms to present their interpretations of geopolitical issues, including cybersecurity.
Major Discussion Point
Cyber Diplomacy in the Middle East
AI targeting systems are being used in conflicts like Gaza, raising concerns about indiscriminate attacks
Explanation
James Shires discusses the use of AI targeting systems in conflicts, particularly in Gaza. He raises concerns about how these systems might lead to more indiscriminate attacks and potentially give greater advantage in asymmetric conflicts.
Evidence
He mentions AI targeting systems being used to create lists of potential target sites for Israeli missile strikes in Gaza.
Major Discussion Point
AI and Cyber Warfare
Agreed with
AUDIENCE
Agreed on
AI use in conflicts raises concerns
Infrastructure vulnerabilities make some countries like Iran easier targets for cyberattacks
Explanation
James Shires points out that outdated infrastructure in countries like Iran makes them more vulnerable to cyberattacks. This vulnerability allows actors to more easily target and disrupt critical infrastructure.
Evidence
He cites examples of cyberattacks on Iranian railways, steel plants, and fuel stations, attributed to an actor called ‘predatory sparrow’.
Major Discussion Point
Cybersecurity Challenges in the Middle East
There is a push for data localization and digital sovereignty in Gulf states
Explanation
James Shires discusses the trend of Gulf states pushing for data localization and digital sovereignty. This involves efforts to keep data within national borders and have more control over digital infrastructure.
Evidence
He mentions cloud regulations in Gulf States and agreements with main cloud providers for localization and dedicated data centers.
Major Discussion Point
Cybersecurity Challenges in the Middle East
Gather better data on cyber threats and actors in the region
Explanation
James Shires recommends gathering more comprehensive and unbiased data on cyber threats and actors in the Middle East. He argues that current data sources are limited and skewed, hindering effective policy-making and threat response.
Evidence
He points out the current reliance on private sector threat intelligence reports, which he describes as very skewed towards certain actors.
Major Discussion Point
Recommendations for Cyber Peace
Agreed with
AUDIENCE
Agreed on
Importance of data on cyber threats
Work in a multi-stakeholder way with civil society and industry
Explanation
James Shires advocates for a multi-stakeholder approach to cyber peace and diplomacy in the Middle East. He emphasizes the importance of involving civil society and industry alongside governments in addressing cybersecurity challenges.
Major Discussion Point
Recommendations for Cyber Peace
Engage more with UN processes while developing regional cybersecurity practices
Explanation
James Shires recommends that Middle Eastern countries should increase their engagement with UN cybersecurity processes. At the same time, he suggests developing regional cybersecurity practices that can later be translated to the global level.
Evidence
He mentions using regional mechanisms like the GCC to develop states’ competencies and collaboration in cybersecurity.
Major Discussion Point
Recommendations for Cyber Peace
AUDIENCE
Speech speed
153 words per minute
Speech length
1093 words
Speech time
428 seconds
There is a need for more data on cyber threats and tools used in conflicts in the region
Explanation
An audience member highlights the importance of gathering more comprehensive data on cyber threats and tools used in conflicts in the Middle East. This data is crucial for understanding the cybersecurity landscape and developing effective responses.
Major Discussion Point
Cyber Diplomacy in the Middle East
Agreed with
James Shires
Agreed on
Importance of data on cyber threats
The “human in the loop” concept is important in discussions of AI in military decision-making
Explanation
An audience member introduces the concept of “human in the loop” in the context of AI and military decision-making. This concept emphasizes the importance of human oversight and responsibility in AI-assisted military operations.
Evidence
The speaker mentions that military personnel consider human involvement in the decision-making loop as crucial for assigning responsibility.
Major Discussion Point
AI and Cyber Warfare
There are calls for the UN and IGF to address the use of AI in conflicts
Explanation
An audience member raises the issue of AI use in conflicts, particularly in Gaza, and calls for action from the UN and Internet Governance Forum (IGF). They suggest these organizations should play a role in addressing the ethical and legal implications of AI in warfare.
Major Discussion Point
AI and Cyber Warfare
Agreed with
James Shires
Agreed on
AI use in conflicts raises concerns
Differed with
James Shires
Differed on
Definition and scope of cyber diplomacy
Private sector tech companies are playing an increasing role in cybersecurity and conflicts
Explanation
An audience member points out the growing involvement of private sector tech companies in cybersecurity and conflicts. This trend is changing the landscape of security actors and raising questions about the role of these companies in diplomatic and military affairs.
Evidence
The speaker mentions Microsoft’s involvement in defending Ukraine as an example of this trend.
Major Discussion Point
Cybersecurity Challenges in the Middle East
Agreements
Agreement Points
Importance of data on cyber threats
James Shires
AUDIENCE
Gather better data on cyber threats and actors in the region
There is a need for more data on cyber threats and tools used in conflicts in the region
Both James Shires and an audience member emphasized the need for better data on cyber threats and tools used in conflicts in the Middle East region.
AI use in conflicts raises concerns
James Shires
AUDIENCE
AI targeting systems are being used in conflicts like Gaza, raising concerns about indiscriminate attacks
There are calls for the UN and IGF to address the use of AI in conflicts
Both James Shires and an audience member expressed concerns about the use of AI in conflicts, particularly in Gaza, and the need for addressing its implications.
Similar Viewpoints
Both recognize the increasing role of private sector and non-governmental actors in cybersecurity and conflicts, suggesting a need for multi-stakeholder approaches.
James Shires
AUDIENCE
Work in a multi-stakeholder way with civil society and industry
Private sector tech companies are playing an increasing role in cybersecurity and conflicts
Unexpected Consensus
Limitations of current cyber diplomacy frameworks
James Shires
AUDIENCE
Engage more with UN processes while developing regional cybersecurity practices
There are calls for the UN and IGF to address the use of AI in conflicts
Despite different focuses, both James Shires and the audience member unexpectedly agreed on the need for more engagement with international bodies like the UN, while also recognizing the limitations of current frameworks in addressing emerging issues like AI in conflicts.
Overall Assessment
Summary
The main areas of agreement centered around the need for better data on cyber threats, concerns about AI use in conflicts, the increasing role of private sector in cybersecurity, and the need for more engagement with international bodies while developing regional practices.
Consensus level
There was a moderate level of consensus on key issues, particularly on the need for better data and addressing AI in conflicts. This consensus suggests a shared recognition of emerging challenges in cybersecurity and diplomacy in the Middle East, which could potentially lead to more collaborative efforts in addressing these issues.
Differences
Different Viewpoints
Definition and scope of cyber diplomacy
James Shires
AUDIENCE
Cyber diplomacy is diplomacy about cybersecurity issues, distinct due to transnational stakeholders and private sector involvement
There are calls for the UN and IGF to address the use of AI in conflicts
While James Shires focuses on cybersecurity issues in his definition of cyber diplomacy, an audience member suggests expanding its scope to include AI use in conflicts, implying a broader interpretation of cyber diplomacy.
Unexpected Differences
Role of private sector in cybersecurity and conflicts
James Shires
AUDIENCE
Infrastructure vulnerabilities make some countries like Iran easier targets for cyberattacks
Private sector tech companies are playing an increasing role in cybersecurity and conflicts
While James Shires focuses on state-level vulnerabilities and attacks, an audience member unexpectedly brings up the increasing role of private sector tech companies in cybersecurity and conflicts. This difference highlights a potential gap in the discussion about the evolving landscape of cyber actors.
Overall Assessment
Summary
The main areas of disagreement revolve around the scope of cyber diplomacy, the focus of data collection efforts, and the role of different actors in cybersecurity and conflicts.
Difference level
The level of disagreement appears to be moderate. While there are some differences in perspective, they seem to stem from different areas of focus rather than fundamental disagreements. These differences highlight the complexity of cyber diplomacy and security in the Middle East, suggesting a need for a more comprehensive and multi-stakeholder approach to address the region’s cybersecurity challenges.
Partial Agreements
Both James Shires and the audience agree on the need for better data on cyber threats in the region. However, they differ in their focus, with Shires emphasizing the importance of unbiased data sources, while the audience member specifically highlights the need for data on tools used in conflicts.
James Shires
AUDIENCE
Gather better data on cyber threats and actors in the region
There is a need for more data on cyber threats and tools used in conflicts in the region
Takeaways
Key Takeaways
Cyber diplomacy in the Middle East has increased in recent years, especially around cybercrime conventions
Middle Eastern states are using cyber diplomacy as soft power to promote their geopolitical interests
AI targeting systems are being used in conflicts like Gaza, raising concerns about indiscriminate attacks
There is a push for data localization and digital sovereignty in Gulf states
Private sector tech companies are playing an increasing role in cybersecurity and conflicts in the region
Resolutions and Action Items
Gather better data on cyber threats and actors in the Middle East region
Work in a multi-stakeholder way with civil society and industry on cybersecurity issues
Engage more with UN processes while developing regional cybersecurity practices
Unresolved Issues
How to effectively regulate the use of AI in military conflicts
The role of the UN and IGF in addressing AI weaponization
How to balance digital sovereignty efforts with international cooperation on cybersecurity
The lack of a universally agreed definition of cybersecurity
Suggested Compromises
None identified
Thought Provoking Comments
For me, cyber diplomacy is the, is diplomacy about cybersecurity issues, right? There’s a whole set of questions on cyber diplomacy, as in the use of cyber or digital tools or digital diplomacy that is very interesting and important. How do ministries of foreign affairs adapt to the digital world? How do they use AI, for example, in their day-to-day lives? That’s not, for me, cyber diplomacy. It’s diplomacy about cybersecurity issues.
speaker
James Shires
reason
This comment provides a clear definition and scope for cyber diplomacy, distinguishing it from digital diplomacy. It sets the foundation for the rest of the discussion by clarifying the topic.
impact
This definition helped focus the conversation on specific aspects of cybersecurity in international relations, rather than broader digital issues.
Gaza has become the world’s first laboratory for testing and using artificial intelligence-based systems and weapons to commit apartheid war crimes and genocide against the oppressed Palestinian people. That has led to huge numbers of civilian deaths. What can be the role and responsibility of the United Nations, governments, and UNIGF in dealing with the weaponization of artificial intelligence that is contrary to human values and international law?
speaker
Online Audience Member
reason
This comment brings up a highly controversial and current issue, connecting AI, cybersecurity, and ongoing conflicts. It challenges the discussion to address real-world applications and ethical implications.
impact
This question shifted the conversation towards the ethical use of AI in conflict situations and the role of international organizations in regulating such technologies.
Regarding public research about cyber diplomacy in the Middle East, has the United Nations published anything? If there is, where can we read this? It would be good to have things to watch or read.
speaker
Audience Member
reason
This question highlights the need for accessible, credible sources of information on cyber diplomacy in the Middle East, pointing to a gap in public knowledge.
impact
It led to a brief discussion on available resources and literature on the topic, potentially helping attendees find more information after the session.
Ownership of hardware, of infrastructure, and ownership of data are both really live issues, especially for this region. There’s a strong push for data localization.
speaker
James Shires
reason
This comment introduces the important concepts of data sovereignty and localization, which are crucial in understanding the geopolitics of cybersecurity in the Middle East.
impact
It broadened the discussion to include economic and political aspects of cybersecurity, beyond just technical issues.
In Europe, at least, with the new European Commission that's just been put in place, we see this connection being drawn between cybersecurity and democracy, or the stability of institutions. And I was wondering whether that's something that you've seen panning out in this area.
speaker
Jamal Shaheen
reason
This comment introduces a comparative perspective, bringing in European approaches to cybersecurity and its connection to democratic institutions.
impact
It prompted a discussion on the differences between European and Middle Eastern approaches to cybersecurity, highlighting the importance of regional context.
Overall Assessment
These key comments shaped the discussion by broadening its scope from a narrow focus on cyber diplomacy to include ethical considerations of AI in conflict, data sovereignty, and regional differences in approaches to cybersecurity. The discussion evolved from defining terms to exploring real-world applications and implications, particularly in the context of ongoing conflicts in the Middle East. The comments also highlighted the interconnectedness of cybersecurity with broader political, economic, and ethical issues, demonstrating the complexity of the topic and the need for multifaceted approaches in both research and policy-making.
Follow-up Questions
How will cyber diplomacy look 5-10 years from now?
Speaker: Audience member
Explanation: Understanding future trends in cyber diplomacy is important for anticipating challenges and opportunities in the field.
How would cyber diplomacy act in the realm of targeted removal of ICT infrastructure during conflicts?
Speaker: Audience member
Explanation: This is crucial for understanding the role of cyber diplomacy in protecting critical infrastructure during conflicts.
What can be the role and responsibility of the United Nations, governments, and the UN IGF in dealing with the weaponization of artificial intelligence that is contrary to human values and international law?
Speaker: Online participant
Explanation: This question addresses the urgent need for international governance and regulation of AI in warfare.
What steps should be followed to find a unified definition for cybersecurity terms?
Speaker: Audience member
Explanation: A common understanding of cybersecurity terms is essential for effective international cooperation and policy-making.
How would you assess the maturity of the cyber diplomacy landscape in the Middle East compared to other regions of the global South?
Speaker: Audience member
Explanation: This comparison is important for understanding regional differences and potential areas for improvement in cyber diplomacy.
How will the control of hardware, tech, and data by Western companies impact cyber diplomacy in the future?
Speaker: Audience member
Explanation: This question addresses the potential power imbalances in cyber diplomacy due to technological disparities between regions.
Is there a connection between cybersecurity and democracy or stability of institutions in the Middle East, similar to what is seen in Europe?
Speaker: Jamal Shaheen
Explanation: Understanding this connection is important for assessing the broader societal impacts of cybersecurity policies.
How is the role of private sector companies in cybersecurity and conflict playing out in the Middle East?
Speaker: Jamal Shaheen
Explanation: This question explores the changing dynamics of cyber actors and the potential implications for regional stability and sovereignty.
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
WS #97 Interoperability of AI Governance: Scope and Mechanism
Session at a Glance
Summary
This discussion focused on the interoperability of AI governance, exploring its scope and potential mechanisms. Participants examined the concept of interoperability in AI governance, addressing issues that need global attention and obstacles to implementation. Key topics included the role of multilateral and multi-stakeholder approaches, the importance of trust-building, and the need for cultural and sustainable interoperability.
Speakers emphasized the significance of balancing regional variations with global approaches, highlighting the need for flexibility in governance frameworks. The discussion touched on the challenges of the digital divide and the importance of capacity building in developing countries. The role of the United Nations in global AI governance was a central point, with participants acknowledging its legitimacy while noting limitations in enforcement capabilities.
The conversation explored various forums and mechanisms for implementing AI governance, including the potential of blockchain technology and the need for agile responses to rapid technological advancements. Participants stressed the importance of streamlining efforts to avoid duplication and the need for clear mandates in international bodies.
The discussion also addressed concerns about AI sovereignty and data flows, suggesting potential solutions like “data embassies.” Speakers highlighted the need for a balance between efficiency and fairness in governance structures and the importance of prioritizing key issues in international forums.
Overall, the discussion underscored the complexity of achieving interoperability in AI governance and the need for continued dialogue and cooperation among diverse stakeholders to address global challenges while respecting regional and national interests.
Keypoints
Major discussion points:
– Understanding interoperability of AI governance, including legal, semantic, and technical layers
– The role of different actors (governments, private sector, civil society) in addressing AI interoperability
– Balancing regional variations with global approaches to AI governance
– The role of the United Nations and other international bodies in global AI governance
– Challenges of AI sovereignty and data flows between countries
The overall purpose of the discussion was to explore different perspectives on how to achieve interoperability in AI governance at a global level, while respecting regional and national differences. The panelists aimed to identify key issues, challenges, and potential solutions for creating more aligned and coordinated approaches to governing AI internationally.
The tone of the discussion was collaborative and constructive throughout. Panelists built on each other’s points and offered complementary perspectives. There was general agreement on the importance of interoperability and international cooperation, even while acknowledging challenges. The tone became slightly more urgent when discussing the need for the UN and other bodies to move quickly enough to keep pace with AI developments.
Speakers
– Olga Cavalli: Director of the South School of Internet Governance, Dean of the National Defense University of Argentina
– Yik Chan Chin: Associate Professor from Beijing Normal University, Co-leader of the IGF Policy Network on Artificial Intelligence
– Sam Daws: Senior Advisor of Oxford Martin Artificial Intelligence Governance Initiative, Director of Multilateral Artificial Intelligence
– Mauricio Gibson: Head of International Artificial Intelligence Policy, Artificial Intelligence Policy Directorate, Department of Science, Innovation and Technology, United Kingdom
– Xiao Zhang: Deputy Director of China Internet Network Information Center, Deputy Director of China IGF
– Poncelet Ileleji: CEO of Yoko Labs, Banjul, Gambia
– Neha Mishra: Assistant Professor from Geneva Graduate Institute, Switzerland
– Heramb Podar: Centre for AI and Digital Policy, India
Additional speakers:
– Dino Cataldo Dell’Accio: Chief Information Officer at the United Nations Joint Staff Pension Fund, Leader of the dynamic coalition on blockchain assurance standardization at IGF
Full session report
Expanded Summary of AI Governance Interoperability Discussion
Introduction
This discussion, featuring experts from various fields and regions, focused on the interoperability of AI governance, exploring its scope, potential mechanisms, and challenges. The conversation aimed to identify key issues and potential solutions for creating more aligned and coordinated approaches to governing AI internationally while respecting regional and national differences.
Understanding AI Interoperability
The discussion began with a broad definition of AI interoperability, extending beyond technical aspects to include legal, semantic, and policy dimensions. It encompasses the ways through which different initiatives, including laws, regulations, policies, codes, and standards that regulate and govern artificial intelligence across the world, could work together more effectively and impactfully.
Speakers expanded on this concept, emphasising various aspects:
1. Yik Chan Chin stressed the importance of a broad definition beyond technical systems.
2. Sam Daws highlighted cultural interoperability and sustainability aspects.
3. Mauricio Gibson emphasised the need to balance regional variations with global approaches.
4. Xiao Zhang positioned AI interoperability as part of broader digital transformation.
5. Poncelet Ileleji underscored the importance of inclusivity and public interest in interoperability.
Key Issues and Obstacles for Global AI Governance
The discussion identified several critical challenges in achieving global AI governance interoperability:
1. Risk categorisation, liability, and training data risks (Yik Chan Chin)
2. Geopolitical tensions and unequal distribution of AI capabilities (Yik Chan Chin)
3. Sustainability and energy demands of AI systems (Sam Daws)
4. Keeping pace with rapid technological advancement (Mauricio Gibson)
5. Trust-building and data sovereignty concerns (Xiao Zhang)
Yik Chan Chin elaborated on specific global issues that need addressing, including:
– Harmonizing risk categorization across different jurisdictions
– Addressing liability issues for AI-caused harm
– Managing risks associated with training data and model outputs
– Tackling the unequal distribution of AI capabilities globally
Sam Daws particularly emphasised the need for an interoperable global approach to AI sustainability, noting the increasing energy demands of AI systems and the importance of measuring, tracking, and incentivising better energy and water use in data centres, chips, and algorithms.
Role of Different Actors in Addressing AI Interoperability
The speakers agreed on the importance of involving multiple stakeholders and diverse perspectives in addressing AI interoperability and governance. Key points included:
1. The importance of multi-stakeholder and interdisciplinary approaches (Yik Chan Chin)
2. The need for cross-regional forums for lesson sharing, such as the UN AI Advisory Body and regional initiatives like the Council of Europe and CAHAI (Sam Daws)
3. The government’s role in convening stakeholders and capacity building (Mauricio Gibson)
4. A multilateral orientation with multi-stakeholder engagement (Xiao Zhang)
5. The UN’s convening power and coordination role (Neha Mishra)
Mauricio Gibson emphasized the need for clarity in messaging and avoiding duplication in multilateral efforts. Xiao Zhang stressed the importance of finding priorities and focusing on them in UN efforts.
United Nations’ Role in Global AI Governance
The speakers generally agreed on the UN’s important role in AI governance, highlighting its legitimacy, capability for rapid response, and potential as a platform for dialogue and collaboration. Specific points included:
1. The UN as a platform for policy dialogue and collaboration (Yik Chan Chin)
2. The UN’s capability for rapid response in emergencies (Sam Daws)
3. The need to streamline UN agencies and define clear duties (Mauricio Gibson)
4. The UN’s legitimacy from equal representation of countries (Xiao Zhang)
5. The potential for a UN enforcement role in AI safety/security (Poncelet Ileleji)
Cultural and Inclusivity Aspects
An important area of consensus emerged around the importance of cultural aspects and inclusivity in AI interoperability. Sam Daws and Poncelet Ileleji both emphasised these points, highlighting the need for diverse cultural inputs into AI development, including insights from low-resource languages and indigenous peoples.
AI Sovereignty and Trust
In response to an online question, the discussion touched on AI sovereignty. Xiao Zhang emphasised trust as a fundamental issue in AI development and governance, shifting the focus from technical aspects to human and social factors. This led to further discussion on how to build trust in AI systems and the role of different stakeholders in this process.
Role of IGF and National and Regional Initiatives (NRIs)
Xiao Zhang highlighted the importance of the Internet Governance Forum (IGF) and its National and Regional Initiatives (NRIs) in AI governance, emphasising their role in fostering multi-stakeholder dialogue and contributing to the development of AI governance frameworks at various levels.
Blockchain and AI Integration
Dino Cataldo Dell’Accio, an audience member, introduced the potential of integrating blockchain technology with AI to address interoperability and trust issues. He suggested this could provide a common layer of trust and transparency for AI systems, potentially enhancing interoperability and addressing some governance challenges.
Digital Economy Agreements
Neha Mishra brought up the relevance of digital economy agreements in relation to AI interoperability, suggesting that these agreements could play a role in facilitating cross-border data flows and AI governance.
Conclusion
The discussion underscored the complexity of achieving interoperability in AI governance and the need for continued dialogue and cooperation among diverse stakeholders. Key takeaways included the need for a broad understanding of AI interoperability, the importance of multi-stakeholder approaches, the challenge of balancing regional variations with global approaches, and the critical role of the UN in facilitating dialogue and coordination.
While there was general agreement on many points, unresolved issues remain, including how to effectively streamline and coordinate AI governance efforts across multiple UN agencies and forums, specific mechanisms to bridge the AI divide between developed and developing countries, and how to balance data sovereignty concerns with the need for global interoperability.
The Policy Network on AI (PNAI) announced its upcoming session and the release of its main report on AI interoperability and other issues. Participants were encouraged to attend the session and read the report for further insights on AI governance interoperability and good practices.
Session Transcript
Olga Cavalli: Are we ready? Okay, thank you. Thank you very much for being with us at lunchtime. This is really remarkable. Thank you for being with us. My name is Olga Cavalli. I’m from Argentina. I’m the director of the South School of Internet Governance. By the way, our booth is over there. And I am the dean of the National Defense University of Argentina. And I’ve been invited to moderate this very important session. Thank you very much for inviting me as moderator. This is a great honor for me. This session is a workshop, Interoperability of Artificial Intelligence Governance, Scope and Mechanisms. So let me very briefly give the scope of this workshop. Interoperability is often understood as the ability of different systems to communicate and work seamlessly together. This is the concept that we all have about interoperability of different systems, software and machines. But the IGF Policy Network on Artificial Intelligence, also called PNAI, has a definition of interoperability in its 2023 report that is slightly different. I think it’s broader, which is very interesting. This definition includes the ways through which different initiatives, including laws, regulations, policies, codes and standards that regulate and govern artificial intelligence across the world, could work together in legal, semantic and technical layers to become more effective and impactful. This reminds me of the definition that was made by the WGIG about Internet Governance. That was a broader definition of Internet Governance, not only the technical identifiers and the technical coordination. At the same time, the development and uptake of artificial intelligence systems are proliferating at an unprecedented pace and across sectors. A concerted effort in governing artificial intelligence is vital to harness the opportunities while managing the challenges and risks that result from new technologies.
We are all working on different regulations in different countries and regions. As artificial intelligence is increasingly embedded in our society, it is critical that global governance frameworks encourage interoperability to promote a safe, secure, fair and innovative artificial intelligence ecosystem. So, finally, interoperable systems and interoperable governance frameworks that effectively address the risks and impacts become really imperative. This is why we are here with a group of very distinguished panelists that I will introduce now. We have Dr. Yik Chan Chin. She’s an associate professor from Beijing Normal University in China, and she’s also co-leader of the IGF Policy Network on Artificial Intelligence, PNAI. Thank you for being with us, Yik Chan. From remote, we have Mr. Poncelet Ileleji. I hope I pronounced it correctly. He’s the CEO of Yoko Labs in Banjul, The Gambia. Well, are you here? Hello. I can see you. We have Mr. Sam Daws. He’s senior advisor of the Oxford Martin Artificial Intelligence Governance Initiative, Oxford University, United Kingdom, and also the director of Multilateral Artificial Intelligence. And we have Dr. Xiao Zhang. She’s deputy director of the China Internet Network Information Center, CNNIC, and deputy director of China IGF. And we have Mr. Mauricio Gibson. Mauricio is head of International Artificial Intelligence Policy, Artificial Intelligence Policy Directorate, Department of Science, Innovation and Technology of the United Kingdom. So, welcome all of you. Thank you. And we also have a very, very big audience, which is fantastic. We have to deal with this noise and sound thing, but we will manage. Don’t worry. We also have a discussant who will give us her input at the end of our panelists’ interventions: Dr. Neha Mishra. She’s an Assistant Professor at the Geneva Graduate Institute in Switzerland. So, we will organize our workshop in this way.
We have three policy questions that will be answered by our distinguished panelists. Then we will have some comments from Neha, joining remotely, and then we will open the floor for interventions from our audience. So, I will first pose the first policy question to our distinguished panelists, which is about understanding the interoperability of AI governance. For the panelists: what is your understanding of interoperability? What are the most important issues that need to be addressed at the global level? And what are the obstacles? I don’t know who would like to start responding to this question. I won’t put you on the spot. Okay. Welcome. The floor is yours.
Yik Chan Chin: Thank you, Olga. So, I speak on behalf of the PNAI because I’m the co-leader of the subgroup on the interoperability of AI governance. From PNAI’s point of view, as Olga mentioned, we take a broad understanding of interoperability. We particularly look at the legal, semantic, and technical layers of interoperability, because we identified those as the most important layers. We also look at how laws, regulations, policies, codes, and standards across different parts of the world can work together, address those important problems at a global level, and become more effective and impactful. In terms of the global issues we would recommend addressing in the short or medium term, there are several of them. I think the most important is AI model risk categorization and evaluation. I think most countries will agree on that, but we have different approaches in terms of categorizing risk and also evaluation mechanisms. The second one we identify is liability, the liability of AI systems. The third one is the risk of AI training data. We know that AI systems depend on the data used to train them, so the risk of the training data is the third issue we think is important to address globally. And the fourth one, the last one, is technical standardization: the alignment of technical standards, and regulatory fragmentation and divergent requirements. These are the global issues we recommend focusing on. So what are the major obstacles? The first obstacle we identify is geopolitical tensions, the tensions between different powers in the world. The second one is lack of trust among different countries and regions. And the third one is the unequal distribution of AI technology and the maturity of policymaking.
So we see different AI technology powers that have different power dynamics and different maturity in their governance. And the fourth one is about AI interoperability policies: we see a lot of national, regional, and international interoperability policies, but they have different principles, values, objectives, and priorities. I think that’s it. That’s all for me. Thank you.
Olga Cavalli: Thank you very much. That was very interesting, especially following the comments in the Opening Ceremony about the difference between the Global South and the North. Sam, do you also want to tell us about your understanding of interoperability?
Sam Daws: Thank you very much. It’s a real pleasure and privilege to be here. I wanted to commend Yik Chan Chin and the others for a wonderful PNAI report on interoperability. Building on her comments, I would add two areas that I think we need to additionally focus on. One is that we need an interoperable global approach to the sustainability of AI. AI energy demands are set to grow with increasing multimodal inference, with the use of IoT data, and with agentic AI. So we need interoperable ways to measure, to track, and to incentivise better energy and water use in data centres and chips, algorithmic efficiency, and data sobriety. Work is already underway on this in the ITU, ISO, IEC and IEEE-SA, all these Geneva-based acronyms, but also with the International Energy Agency on the energy track, partners such as the Green Software Foundation, and the UN Environment Programme, which takes a triple planetary crisis approach, mapping the full life cycle of AI all the way from mining through to end-of-life reuse. We also need international scientific collaboration for AI’s positive climate contributions, for example in new materials research in solar PV and batteries, and in climate and weather modelling through digital twins. Also, especially looking towards Belém in Brazil and COP30 next year, AI will really help deliver efficiency targets across all industries. So what are the obstacles in this particular issue area? Well, building on Yik Chan’s remarks, national security and economic competition factors are significant. We’ve seen US export controls on high-end chips to China, and in return China restricting gallium, germanium and antimony in response. Countries then race to acquire high-end chips, detracting focus from building interoperability on sustainability approaches. The other obstacle is that tracking energy use by grids and companies can be economically sensitive.
So companies don’t always volunteer this themselves, and while companies have been doing a remarkable job, NVIDIA, Google and so on, in achieving 100x efficiencies in data centres, chips and software design, the overall electricity use of AI continues to rise. So we need a multi-stakeholder framework for industry transparency and accountability. Singapore is a great member state example of integrating sustainability into its AI Verify and Model GenAI frameworks. And lastly, I want to touch briefly, in just one minute, on cultural interoperability, because it’s not talked about enough in AI governance, but we really need cultural interoperability addressed at a global level. For humanity to flourish, it’s vital that our diverse cultures feed into AI so we can better use it to live good and meaningful lives, and that includes insights from low-resource languages and also the wisdom of indigenous peoples, who have a minimal digital footprint not captured by large language models trained on the internet. The trend, for valid reasons, towards sovereign AI at a national and regional level, especially in data governance and LLM worldviews, is, I think, going to continue. That’s not in itself an obstacle to interoperability. The obstacle would be if we have a fragmentation into closed loops of culturally informed, epistemological, generative AI ecosystems (it’s a bit of a mouthful), such as an ecosystem around the more socially conservative BRICS AI alliance that President Putin announced this week, alongside a more liberal Western one, creating two bounded rationalities separated by mistrust.
Olga Cavalli: About sustainability and how that affects our environment: we use technology, but technology has an impact on the environment. That’s very interesting. Also, what you mentioned about society and languages; languages are my hobby, so I commend you for that comment. And I commend you both for being so respectful of the time in responding to the questions. Any comments from Yik Chan or Mauricio? Yes, please, what is interoperability for you?
Mauricio Gibson: Can people hear me? Yes. Thank you all for having me. It’s a pleasure to be here. I’m going to build on what everyone said, and there are really helpful insights here, but give it a more practical government application perspective. Recognizing what people have said, I believe that, you know, there are innately going to be those different government interests, which at times will compete. But I think seeing how interoperability can happen means looking at the broader areas where there are opportunities for cooperation, recognizing and honing in on those particular areas, and also looking at how we can plug the gaps, continue to build on those areas where there are gaps, and work towards that further progression of coordination. And I think a lot of that is building the foundations and building blocks of the core principles that we’re starting to see across different governance work streams. That doesn’t necessarily mean harmonization, but really building on that, because there will be regional and domestic variation. And, yeah, working on that gradually is a fundamental element of it. Then, what are the core issues for interoperability and the obstacles that need to be overcome? Echoing what everyone has said here, I also think more broadly about the technological advancement of AI. You know, we’re hearing a lot more about not just gen AI but agentic AI, and the many challenges of governing this, not least because of the question of who is responsible, building on the liability point there. Keeping up with these challenges in terms of governance is going to be a real battle.
And so, from the UK perspective, the science behind the most advanced AI, which is progressing, you know, at an exponential rate, is a real focus, or has been a real focus, not least with the safety reporting that we’ve been producing, or secretariating the production of, bringing a lot of scientific evidence on this. The state of the science is rapidly evolving, and we’ve been having to produce a lot of reporting on a regular basis. And even by that point, you know, is that going to be out of date? How can we keep up with that? Understanding the scientific basis is going to be a vital thing to get right. I think the other thing, building on what people have said, is a sort of capacity building element too. There are differing understandings in different environments. The digital divide is so significant and, given the advances in technology, supporting policy officials, civil servants, the public sector and, you know, everyone in their AI talent uptake in all parts of the world will help them understand how they can engage in the governance process at the international level and in their own domestic systems too. And a third point, supporting that, is the sort of clarity in messaging that is needed for different communities across the world. To support things like sustainability or the cultural cross-exchange of information, how do we land the key points that are needed to support that interoperability? One element of that, I think, is using the different forums. What we’re seeing, however, is that in the multilateral domain there is still a lot of duplication, the messaging isn’t clear, and it’s not very clear how or where people want to prioritize particular engagement on governance in these different areas.
So some people are seeing things happening in the UN, some people want to see them in other areas, but one obstacle that needs to be overcome is the duplication of some of the activity: how can we try to manage that and see how the pieces fit together? That’s going to be a real challenge, I think, and something that we need to work together on going forward.
Olga Cavalli: Thank you, Mauricio. You bring up a very interesting point. Especially for developing economies, it’s sometimes hard to follow all the spaces where these things are debated, decisions are made, and policies are developed, maybe at the regional or global level, so that’s extremely challenging. And you also brought up an interesting concept, capacity building: as we know, in cybersecurity, cyber defence, cybercrime, and artificial intelligence, we are running short of trained people. Yik Chan, do you want to add any comment?
Xiao Zhang: Actually, I want to respond to your questions, and I think this responds to all the speakers. We have three questions, so let me take them one by one. The first one is my understanding of what interoperability of AI systems is. For me, I would use one word: it should be one ecosystem. Let me give an example; I want to make a comparison to the internet. You know, for the internet, in the past 50 years, only 50 years, as I can say, it’s one world, one internet. Why? The digital economy is flourishing, there are so many applications, but we have found something in common, and we divide internet governance into layers. At least in the technical layer, I mean the logical layer, we found TCP and IP, and we can connect through them; we obey the same rules. That means, even though different countries have different regulations for content or something like that, under the technical layer we obey the same rules. So that means we can work together as one ecosystem. As an internet user, you can use any application, you send email, you make VoIP calls, or you search online, and you roam around the world seamlessly; you don’t feel where it is. So I think for AI systems, at least, we should find something through which we can work together as one ecosystem. So that’s my response to the first question. The second one is what’s the priority, what’s the most important thing that we should do? I think, actually, because we have different cultures and different development stages, the priority for each country is not the same, for economic growth, for different areas. So our understanding of AI governance is quite different. And 2.6 billion people around the world have no access to the internet, so AI means nothing to them. We cannot leave them behind.
So maybe the most important thing is to sit down, look at all the questions of AI, and find priorities. We can narrow down what is a global issue and what is maybe an issue for developing countries, for Africa, and go through them one by one. It’s not just AI risk or something like that; they have no AI, so how can we talk about AI risk? So I think the development issue is also very important. And the third question is, what’s the obstacle? As Professor Chin said, we actually had a lot of discussion. I think trust is the most important issue. AI is built on trust, and it’s not limited to geopolitical reasons. We shouldn’t have different ecosystems; all these ecosystems are built on trust. So how do we build trust? I think this is something we need to discuss. Thank you.
Olga Cavalli: Thank you very much for that point about trust. As we know, artificial intelligence is based on a large amount of data, the capacity to process that data, and the algorithms that draw on that information. Trust, I would say, is a layer over all of this that gives us the confidence to use the tool. So that's a very interesting point from you. I will now put the second question to our distinguished panelists: how can different actors address interoperability, and how can we balance regional variations with global approaches? Who would like to start? I don't want to put anyone on the spot. Xiao, Yik Chan, okay.
Yik Chan Chin: So yeah, I'm Yik Chan and this is Xiao. I'll just jump in because, from the PNAI perspective, I want to mention that we are about to release our main report in the main hall at 3:30. In that main session we will release this year's report; part of it covers interoperability, and also liability, environmental issues, and labor issues. So welcome to join us in the main hall. From the PNAI perspective, looking at how we can work together, I think the multistakeholder approach is very important. From our own experience, because I lead the group, we got a lot of input from different sectors and from around the world, and it is surprising how much information and evidence you can collect through the multistakeholder model, because we have the different sectors: private, government, and academia. That was really impressive for me personally and for the group as well. The second thing is that interdisciplinary research is very important, because it is really complicated to understand an AI system and how to evaluate it, how to test it, and how to assess the security and safety issues. So for us, multistakeholder engagement plus an interdisciplinary research team is what matters. In terms of how to balance regional variation against the need for global harmonization and alignment, that is a crucial issue, because we have to respect regional and national diversity while at the same time trying to align at the global level. What we suggest, first of all, is that we do not think we need only one global layer. We have the UN, of course, at the global level, but the UN's role is not to do everything; it is more about coordination. So we respect regional diversity and national diversity.
So first of all, we have to make sure local needs are met, just as Xiao mentioned. We have local needs and regional needs. Then what happens at the next level is bottom-up, from the community and from the national level. Now we have regional initiatives; we have already seen so many, in Latin America, the African Union, ASEAN, and of course the EU. Having seen all these regional initiatives, what we need to do in the end is coordinate and knit these national and regional initiatives together at the global level. That is what we need to do for interoperability, and we can definitely do it. First, in our report we identified some existing, very effective mechanisms for global interoperability. For example, the UN as a multistakeholder platform for us to negotiate and communicate; that is one way to have policy dialogue. Second, we can think about international collaboration on AI safety governance. We have a good example set up by the UK government in terms of testing and verifying AI safety, and many AI safety institutes have already been set up in Europe, in Japan, in the US, and even in China, where there is a regional one rather than a national one. That is a good kind of collaboration. And the third is technical industry self-regulation and technical integration. These are existing mechanisms that can already help with global interoperability. Then the second kind of mechanism we can use is a compatibility mechanism. For example, mutual recognition: we may have divergence in regulation, but we can still have mutual recognition of each other's regulatory approaches.
We can also rely on the international standards set by the IEEE and ISO, and these standards bodies collaborate with each other, including under the ITU, to align with one another. Second, we can talk about security certification, and also joint AI safety testing or alignment mandates. In the end, we can still pursue harmonization of AI regulation, or at least harmonization of AI principles and terminology. The last thing I want to mention is very important for national and regional policymaking: when policymakers at the domestic and regional levels make policy, they should try to incorporate international standards. Of course, domestic public-interest objectives come first, but if they also try to align with regional and international standards, that will reduce unnecessary barriers and costs for interoperability in the end. So try to ensure alignment with global standards. And the last point is increasing international regulatory cooperation, to reduce unnecessary friction and divergence between regulatory frameworks. I think I'll stop there.
Olga Cavalli: Thank you. Thank you very much. There is also this difference: developing economies mainly use technology developed by developed economies, and they have to keep that in mind when they develop their regulations. So thank you very much for your comments. Who would like to follow? Mauricio? Sam, please go ahead.
Sam Daws: Building on Yik Chan Chin's comments, regional approaches reflecting diverse cultural approaches and interests are not in themselves a bad thing. They are inevitable, and I think they can be very positive. But policy interoperability becomes more difficult once nations have enshrined their approaches in law or in negotiated regional agreements. At that point, tools like international crosswalks remain valuable to determine docking points and to clarify differences of taxonomy and language. But in the future I think we can do better in two ways, and I've tried to think a bit creatively about this question. The first is to start earlier: to start understanding the nascent approaches of other regions at the same time that we draft our own national and regional approaches. Those of you familiar with negotiating UN resolutions know that once a region has negotiated a common position, it is very hard to unpick it in the face of criticism or objections from other groups. Sometimes just being aware of the key concerns of other groups allows subtle changes in language or framing, rather than in substance, as we elaborate our own position, which then aids interoperability of approaches later. And I think we can consciously use the four tracks coming out of the UN Global Digital Compact and the HLAB for early, iterative knowledge exchange: through policy dialogue, through standards exchange, through scientific convening, and through capacity building. The other area where I thought we could be creative is in using cross-regional forums, that is, forums that have at least one member state from more than one region, for lesson-learning to reduce regional siloing of AI approaches. So let's use cross-regional political, cultural, economic, and scientific forums at both member-state and multi-stakeholder level.
So we've got the IGF, of course. Other examples: the Organization of Islamic Cooperation; the DCO, which Saudi Arabia leads; the Digital Forum of Small States, which Singapore leads; CICA in Central Asia, which Kazakhstan leads; the Arab League; the Shanghai Cooperation Organization; the CIS; the Organization of Turkic States; the OECD's GPAI; BRICS; and the Belt and Road. All of these have a contribution to make. And we mustn't forget the role of the International Science Council, the ISC, and the national academies. I think those are vital, especially since, two or three years ago, the ISC embraced the social sciences as well as the natural sciences. I feel strongly that psychologists, economists, and social anthropologists have important insights into how human behavior can be an obstacle to policy interoperability, so we need them at the table. And lastly, the network of AI safety institutes can also play a potential cross-regional interoperability role, but I would say only if it can broaden its membership and its agenda to widen its relevance to the global South.
Olga Cavalli: Thank you, Sam. And thank you for naming examples of inter-regional spaces of debate, because I was going to ask you, but you already mentioned them. I will think about something for Latin America, or maybe we can talk about that.
Sam Daws: I would say CEPAL, ECLAC, for Latin America.
Olga Cavalli: Okay, thank you for that. And who would like to follow? Please, Yik Chan.
Xiao Zhang: Well, I'm Xiao. I can add something. Definitely, multi-stakeholder engagement is very important, but I think AI governance should be multilateral-oriented. So I differ a little from the other two, though we find common ground: both multi-stakeholder and multilateral engagement are very important. But AI governance is very different, and I still want to draw the comparison with internet governance. When the internet first appeared, it normally did no harm. AI is totally different: from the very beginning of AI, we have known it can bring risks. It could be comparable to the atomic bomb; we know lives could be put at risk, because it could be used as a weapon in the military, or because of safety concerns and the like. So it is totally different from the internet; it must be country-oriented. That is why I think the multilateral side is so important: governments have the resources. It is not just a technical problem; it is a legal question, a question of understanding what AI is and the harm it could bring. Of course, multi-stakeholder engagement is very, very important, but the multilateral side definitely matters too, because governments have the resources and can take action. So both sides are very important, and it is totally different from internet governance. Thank you.
Olga Cavalli: I think you bring a very interesting point. So when you say multilateral, you mean governments talking to governments, like the United Nations? Combined with the interaction you mentioned with multi-stakeholder spaces, I think that would be the ideal way to work, because governments have a special role in taking care of the economy, the security of the country, and the laws and the whole environment. So, a very interesting point of view. Mauricio, do you want to add something?
Mauricio Gibson: Thank you. Yeah, just building on what Xiao was saying, and what you were saying, Olga, about the role government can play. Giving the government perspective here: we can convene a range of different stakeholders, use that interaction, and engage these spaces to really understand the issues raised by different stakeholders and help funnel them into action and policy, domestically and internationally. That is a useful conduit we can provide in delivering on stakeholders' needs. Building on what you and others were saying, there is the resource question, the resources that governments have, and, as I said before about capacity building, the particular role governments can play by using those resources. I can point to a UK-led AI for development programme, where we have invested almost 80 million in development programmes in Africa, and now increasingly in Asia. A lot of that goes into skills and into compute, and that is a clear example of how we can leverage the resources we have to support what is going on on the ground. Further action on upskilling and governance is a key component of that too. Particularly on safety, it is an area where we are trying to use our resources and our experience in convening, through a safety institute and the AI Safety Summits, to really highlight to a wider global audience the safety components and risks that my colleagues here have mentioned. A second point is the better communication of the key tools that support things like interoperability in the private sector, and I can give practical examples.
In the UK, we have funded the AI Standards Hub, an international networking mechanism that helps socialize technical standards across the world and brings together different industries in a multi-stakeholder environment to talk about these areas. Having those conversations can bring to light areas that might otherwise seem difficult to access in the standards-setting community, and open them to a wider audience. We have also developed AI Management Essentials, a self-assessment tool so that, if you're a business, you can support assurance and trustworthiness, developing things in line with policy principles that matter, like transparency and accountability. But then, thinking back to the public-sector adoption element: how can we support and communicate with the public sector ourselves, enhancing the processes that enable uptake of a lot of this too? And with that, going a bit deeper, or a bit broader actually, on implementation: we can talk about interoperability in terms of the important principles we share, but how do you help implement that in practice? I think there is a role for governments to support those mechanisms, working with regulators, ensuring there is the necessary support, guidance, and upskilling for those working domestically to look at international activity, bring it to the domestic level, and translate the things happening internationally, where we are working together, into domestic practice.
One particular example on the more advanced AI front is the work the G7 has been doing on the Hiroshima AI Process, which is developing codes of conduct for advanced AI; the OECD is looking to implement that, with monitoring and regular assessment of what is going on to help implement the obligations. Then there is the question of how we strengthen the foundational principles. While we are implementing, it is also important to bring to light where the overlaps with other areas are. I will give a practical example from recent engagements: we sponsored an OECD and African Union dialogue, the second of which took place in Cairo a month ago. It was a really positive space, with workshopping on an African charter for trustworthy AI. It looked at a range of governance mechanisms and tools, including the OECD principles, which look towards interoperability, as well as the UNESCO ethics recommendations, bringing these together and looking at how we can draw on different instruments to support new work happening in the African environment. We want to continue with that work and help support it. So strengthening what is out there, bringing those things together, helping that communication, and using the resources we have to support it is a really key thing. Finally, on the second part of your question, about regional disparities and bringing them together in the global environment while getting the balance right: the OECD-African Union dialogue combines two regional activities, and bringing them together is a really helpful example. Another example: this year we adopted and signed the world's first AI treaty, the Council of Europe Framework Convention on AI, Human Rights, Democracy and the Rule of Law.
This was really interesting because it brought together a global grouping, and even then there were challenges in reaching agreement on some of the core principles and the real detail. But we got there in the end, because we were able to keep the language broad and flexible enough for different global regulatory regimes to engage with it. I think that is the key thing: even though it is a legally binding treaty, enabling space in the text to accommodate regional variation is going to be really key to getting that balance. That is something we have to continue to recognize as we work towards interoperability and progress in the evolution of discussions in this space.
Olga Cavalli: Thank you, Mauricio, for these very good examples of cooperation. And I love the Standards Hub; I like that concept very much. The whole internet is global, and it is based on standards, as you mentioned at the beginning, so I think agreeing on global standards is the key to a global understanding of anything. Thank you very much, Mauricio. Now I will share the third question, and thank you all for being so respectful of the time; some of you took a little more, some a little less, so there is a good balance among you. The last question for you is about the role of the United Nations in global artificial intelligence governance: what role should the United Nations play in tackling AI governance? Who would like to start? Sam, please go ahead.
Sam Daws: Thank you very much. Before that, I had noticed that Poncelet from Gambia had disappeared from our screen, and it would be valuable to get an African perspective. You're there? Great, good to see you; we'll make sure you're still accessible. First and foremost, the UN can help build the trust needed to engender interoperability, and this builds very much on Xiao's point. Trust is not a fixed constant; it is based on regular interaction, on people-to-people contact, which is why the IGF is of such value. It is based on attitude: we need to approach this issue with empathy and with curiosity about knowing the other. And trust is built on experience, on a track record of cooperation through predictable tracks. So we can begin with integrated global implementation of the two UN General Assembly resolutions agreed by consensus this year: one on responsible AI, proposed by the US and co-sponsored by China, and the other on AI capacity building, proposed by China and co-sponsored by the United States. Both are guided by the universally agreed UNESCO recommendations on ethical AI, so I think that is our foundation. Then I would suggest we focus on AI capacity building in areas where cooperation has already shown it can advance despite geopolitical headwinds: food security, biodiversity, climate change, health emergency prevention, macroeconomic stabilization, counter-terrorism and crime, and data for the implementation of the UN Sustainable Development Goals. The GDC and the HLAB on AI have given us a good roadmap. It is clear that the role of the UN is not, or at least not for now, to regulate AI, nor to enforce compliance, though that may come over time. But the UN Secretary-General can provide moral leadership on the need for inclusive AI, and the ITU, other agencies, DESA, and UNDP can help bridge the AI digital divide through capacity building.
The UN can be a source of scientific insight and expert data to guide decision-making, and can convene policy dialogue and standard-setting. Lastly, and this is again trying to be a bit creative, I think the UN should look at the success of common security in the peace and security domain, and ask whether those organizations could also play a role. Existing common security organizations, established to build trust in the peace and security and economic domains, could also collaborate in areas where AI can support shared objectives and knowledge exchange. So I have a different set of acronyms here from the cross-regional ones: the OSCE, CICA, ASEAN, the African Union, the EU, the GCC, the OAS from Latin America, the Caribbean Community, CARICOM, the Pacific Forum. These are all examples of organizations with a demonstrated ability to build confidence through diplomatic engagement, which could be applicable here. Finally, in the AI peace and security space we have seen some very good initiatives by the Netherlands and South Korea, co-sponsored more recently by countries like Switzerland and Kenya, on responsible AI in the military realm. Those have been very good, and they have included China, which is really important. We have also seen successful US-China bilateral consultations on not using AI in nuclear guidance systems. I think the direction of travel should bring AI peace and security back to the UN: the Security Council being more seized of AI in peace and security, where it has already done some work, as well as the UN's work on non-proliferation and disarmament in Geneva and Vienna. Thank you.
Olga Cavalli: Thanks to you, Sam. Thank you for summarizing what has been happening at the UN, and for your suggestions about the future. That's very interesting. Who would like to follow Sam's comments? Yes, please, the floor is yours.
Yik Chan Chin: Thank you, Sam; we know he's an expert on the UN, so thank you very much for that very insightful comment. From the PNAI perspective, as we mentioned before, international collaboration is key, and the UN's two resolutions are one example of how to do it. The second point is very important: the UN has the function, and the legitimacy, to form the common objectives of governance, because they come from the UN General Assembly. For example, "safe, secure, and trustworthy artificial intelligence" was agreed in the General Assembly in those two resolutions. So those are two functions. The third is about the IGF. The UN should strengthen its governance role for multistakeholder engagement, especially policy dialogue, with a multistakeholder structure that facilitates exchange and helps us understand each other's policy and legislation, and of course the best practices of different countries and cultures. It is very important that people come to the IGF. Some colleagues have told me it is just a talking shop, because we talk here all the time and have no enforcement power. But it is very important to understand each other, to build personal connections and dialogue. The multistakeholder platform is the most important thing, and we should use it to build a global AI dialogue. At the same time, the IGF's capacity should also be strengthened, for example in terms of financial support and resource support. Of course, there are overlapping functions across different agencies. That is why I want to give one example, which is my personal experience. I am part of the OEWG process; many people know it. It is about ICT security, a UN process, and the Singapore ambassador took over as its chair.
What he did was invite multi-stakeholder participants from other sectors, private sector and NGOs, to participate in the OEWG, giving them a policy consultation role. So he listened to them; private-sector participants joined the consultation and were asked whether the OEWG process would really help. I was invited as a representative to brief the delegates about security issues. So we can see the UN is doing some reform in how it brings multi-stakeholder dynamics into multilateral processes, and this is very positive progress. The OEWG sets a precedent for other agencies and other multilateral processes. Thank you.
Olga Cavalli: Thank you very much. We have already touched on this combination of multilateral and multi-stakeholder spaces. Ms. Yunxiao? Okay, sounds good.
Mauricio Gibson: Thanks for the floor, and thanks to my colleagues for setting that out. I think there are a lot of really interesting opportunities at the UN, and, from our perspective, a real opportunity now with the conclusion of the Global Digital Compact that has been mentioned. There is an opportunity for us to capture what the UN's convening power presents, bringing every country and a range of stakeholders together through environments like this, to really highlight the potential of cross-cultural information exchange and the building of mutual understanding. Reinforcing the points Yik Chan was making, building that understanding is really fundamental to the UN's value-add here. One thing to clarify: precisely because there are so many different UN bodies and agencies, it is really important to reinforce a complementarity and coordination role for the UN, to avoid duplication and to highlight the different value-adds of each relevant agency and activity. As mentioned, UNDP has a role in capacity building more widely on AI governance, and we are seeing more interest across different agencies in playing a bigger role. But what we need is coordination, and an understanding, agency by agency, of exactly what needs to be delivered on the ground, giving practical benefit, moving beyond conversations about principles of interoperability and coordination to actually supporting coordination on the ground and delivering real benefits to the communities that feel the digital divide most acutely.
One way of delivering on that, looking at what you mentioned, Sam, is the Global Dialogue on AI Governance, an initiative proposed in the final Global Digital Compact text, for which negotiations on modalities are about to launch. It is important that we highlight the opportunity to share information in these forums, build understanding in a place like this, and bring together the different initiatives, reinforcing the point that these are the actions we are taking and building a shared understanding of them. There is a role for the IGF to be considered, and it is interesting you mentioned the role of the OEWG; we need to consider these in the next stages of thinking. On top of that, it is important that we do not create too many new things; the dialogue is meant to be held in the margins of existing conferences. How can we leverage existing activities, like the ITU's AI for Good Summit and the UNESCO Global Forum on the Ethics of AI? How can we work together on these? You also mentioned the scientific panel on AI. The UK is very interested in this because we produced safety reporting on advanced AI risk through an expert panel of leading scientists synthesizing research, and I think there is a role for this, and for the UN, to highlight and reinforce the research that is out there and bring it to a wider audience, so we can support inclusivity in the understanding of the science and actually move forward once that understanding is shared.
But again, it is about ensuring these efforts are clear and grounded: the scope is clear, the mandates are clear, so we do not get into a situation where things are muddied. That also applies to the WSIS process: there is consideration of the role of AI in that process, but we need to make sure there is effective coordination so it actually delivers best for people through effective AI governance. And a final thing underpinning all of this: as I mentioned before, there are differing approaches, and technological advancement is moving so quickly that it is vital we stay alert to the need for agility in AI governance, for flexible approaches that can adapt to developments in the world. At times the UN may not be the quickest-moving institution, and the system is not the most agile, but we need to recognize that we have to keep up with advancing technology; that is fundamental as well.
Olga Cavalli: Thank you, Mauricio. I like the concept you mentioned of the United Nations being the point for spreading information to the countries that are part of it. Xiao, the floor is yours.
Xiao Zhang: Absolutely. As Chris said, I totally agree. At the UN, the United Nations, the best place… Governance… Park… Is… Should actually be more important as it is based on… Yixuan was telling me… Some successful experience in… Like… Climate… Sensitivity… And… So, I think… Something totally different… It’s actually part of the digital transformation of economy and society. It should not be something very… It’s not a single thing. It should be… Actually, it’s part of the digital… Part of the economy, society, and system. So, I see this provides a best place for us to… And, you know, I can come here and I don’t have the energy, resources, and budget to go anywhere. So, I can come here once a year, but I do have, for example… Also, in the UN system, not only… Which is here, but also you have… Again, it’s a must-see program. And I just think it’s a good one. So, we need simultaneous… I think… Resources, and we have… You know, now we have to… I’m not sure if I’m… You know, it’s just like… It’s important to keep making up, but like… And this should continue. I’ve seen how the GDC has something… Something… Solid tasks, solid tasks. And I believe that IGF could… To carry… To carry some of the tasks. That should be the main track. Thank you.
Olga Cavalli: Thank you very much, and for this combination of the United Nations and the IGF; interesting. We have not forgotten you, Poncelet. Now that our distinguished speakers here in the room have answered the questions, I will give the floor to you. Would you like to comment on the three questions we have been discussing: what is the role of the United Nations, and how can different actors interact to work on this very important issue? Welcome, the floor is yours.
Poncelet Ileleji: Thank you very much, Olga. Thank you to all the speakers, Professor Chin, Sam and Mauricio. I would like to say first and foremost that we have seen all three questions that were asked. All my colleagues and speakers, talking from a PNAI perspective, spoke about the three key pillars of what we are discussing: measures, tools and mechanisms; interaction and interconnection; and communication and cooperation. Coming from a Global South perspective, I would like to focus on the communication and cooperation part, you know, I have to be a little bit biased here. And one thing that guides me in this is to remember that, as of September, we have the Governing AI for Humanity report by the UN advisory body. One of its key recommendations was the setup of an independent international scientific panel on AI, which should be multidisciplinary. And one issue that was very key for me, if I relate it to recommendation one of that UN advisory body report, deals with producing quarterly thematic research, which will help achieve the SDGs. When we look at climate issues, when we look at poverty, these are areas where AI can be used as an enabler. We have to remember that at the end of the day, we want people to have inclusivity. We want the public interest to be well represented. And within the Policy Network on AI, we try to look at things from that perspective. As much as possible, we have various stakeholders, but we try to look at the constituencies we come from. That is why aligning it with all the regional initiatives, whether it’s the African Union or the EU, is very important. But I think if AI can make a difference to us achieving the SDGs, we have gone a long way, where we build on trust and equity. Thank you.
Olga Cavalli: Thank you very much for that, especially as I think about trust and contributions from countries and organizations. I would like to give the floor now to Neha. She has been patiently listening to everything our colleagues have been saying. Neha, what are your comments on the debate and exchange of ideas that we have been having?
Neha Mishra: Thank you. Thank you very much, Olga. I also join the others in congratulating the PNAI on the report, and I’m so delighted to be a part of this panel. The discussion has been an embarrassment of riches. I really don’t know what I can add, but I wanted to weave together some of the ideas that I thought were common across the discussions. The first thing I found very interesting is the different dimensions of interoperability that the speakers mentioned. In addition to the technical, legal, and semantic interoperability, which is often discussed, other dimensions were brought in: cultural interoperability and sustainability or environmental issues. I think it was quite interesting when some of the governmental perspectives were shared, particularly on how to navigate the different interests of different governments to figure out an interoperability framework that might be feasible. Also, from a practical implementation perspective, there are relevant questions about whether it needs a more modular approach, whether it should be tested in specific sectors, how incremental it should be, and what the prospects of a multi-stakeholder approach are, because one thing that was also common through the discussion was that multi-stakeholderism and multilateralism need to align with each other, and there can be certain tension points that need to be resolved. I also found it very interesting that a lot of the speakers, including Poncelet, brought up this idea of the developmental divide, the AI divide, and there was a lot of very encouraging discussion on how to bridge the different gaps. 
I think one perspective I would like to add is that while it is great to think of capacity-building initiatives and more meaningful international regulatory cooperation, one should also be conscious of the limits of interoperability, in the sense that in certain scenarios developing countries and least-developed countries may not be able to participate in many of the interoperability dialogues. So to that extent, it is important to assess the areas in which we are looking for interoperability and how representative those discussions are. And while I fully agree that it is important to have these open dialogues and more sustained technical and capacity-building initiatives, this is an incremental, slow process, and developing countries should not lose their autonomy to decide how they want to develop their AI frameworks, given that these can have very specific influences across different communities. That’s why it was very important, I think, to highlight at the beginning the cultural aspect, the human layer of interoperability, in the discussions. I also found it very interesting that we discussed such a variety of tools, mechanisms, stakeholders, and organizations, including the UN at the global level, that can contribute to different aspects of interoperability. But at the same time, I agree with Mauricio that it’s important to streamline these efforts and not duplicate them. From a Global South perspective, the question is also very practical: if there are multiple fora, countries are only able to invest so many resources, and they might have to choose between fora, which can also create competition between them. In that sense, I think the UN still has a continued important role as the umbrella or framework organization where a lot of, at least, the high-level values could develop. 
But at the same time, I think it’s inevitable, and that’s why it was very helpful that Sam mentioned so many different examples, both intra-regional and inter-regional, and different kinds of transnational policy networks. And Mauricio mentioned how the private sector could be involved, because between setting these high-level principles and achieving them in practice, there are many, many different stakeholders, including the private sector, civil society, academics, engineers, technical bodies, different cultural groups, and different communities, and really bringing them together is not an easy task. So it was quite helpful to have that overarching perspective. One last point I would like to mention, and this is a question we often think of, even from my disciplinary training as an international lawyer, is how multilateralism is changing in the current world. Even in the context of AI interoperability, especially because the development of the technology is not necessarily always state-driven but also driven by a variety of private organizations and standards development bodies, I think it is important to find better modalities of engagement between the multi-stakeholder bodies, the transnational regulatory bodies, the private sector bodies, and the multilateral bodies. I don’t think it is going to be a perfect process. It is about continuing efforts and figuring out which tension points and geopolitical conflicts are completely unresolvable and which can be resolved. And it was great to see many examples discussed where, despite all the geopolitical and developmental differences, there are common points of consensus and coordination at the UN level or at other international or regional bodies. I’ll end my comments here. Thank you so much.
Olga Cavalli: Thank you, Neha, for such a concise and complete summary of what has been discussed, and I like the concept you mentioned that this is a process. I think the journey is the destination that we are going through. We have been talking about internet governance for almost 20 years so far, so we might wonder how many years we are going to talk about artificial intelligence. I would like to give the floor now to our nice audience, who have been patiently waiting for an opportunity to talk. Please grab a mic and introduce yourself and tell us your name
Dino Cataldo Dell’Accio: and your organization. Thank you very much for the very engaging and enlightening discussion. My name is Dino Cataldo Dell’Accio, I’m the Chief Information Officer at the United Nations Pension Fund, and I’m also involved in the IGF, where I play several roles. One of my roles is to lead the dynamic coalition on blockchain assurance and standardization. The comment and question I would like to pose to the speakers is this: I think it’s time to acknowledge that AI does not work in isolation; there is a convergence of AI with many other technologies, and there is an opportunity, for example, to see how the convergence between AI and blockchain can address many of the issues raised by interoperability needs. There were many references to trust, and I think blockchain can indeed provide that common layer of trust in demonstrating that there is at least a verifiable data source, because on blockchain we can store data sets that can be audited, verified in a transparent manner, and validated as an input to AI. And there is a synergy between the two technologies, not only one way but both ways: the ability of AI to calculate and predict the volume of transactions for blockchain, which is usually relatively slow and not performant, can help with blockchain’s scalability issues. So that’s the point I wanted to bring to attention. Thank you.
Olga Cavalli: Thank you very much for your comments. Any other questions, comments from the audience? Do we have online questions?
Heramb Podar: Yes, we also have a number of comments and a question from the online audience. A number of you touched on how bodies such as the UN are the point of information sharing, but they are ultimately relatively slow. So what would you want to see in terms of improving them to address this rapid pace of technology and to keep up with it?
Mauricio Gibson: Yeah, that’s a very good question, and it touches on the last points I was making at the end there, which I think is a fundamental question. The challenge, if we’re getting to the crux of it, is that UN reform is a long, drawn-out process. I think we have to think about these new stages of AI governance as we move to the next chapter of implementing the Global Digital Compact and WSIS, and consider which core priorities we need to work on to achieve that agility. For example, with the scientific panel, we need to learn lessons and draw on the experiences of previous scientific panels that have been developed. Some have taken a lot longer than others; some have had parallel political negotiating processes. If there’s a way of connecting with more multi-stakeholder, ad hoc engagement, for example the UK’s international AI safety reporting, with a secretariat outside the UN, and if we can draw on the experiences of existing initiatives, that doesn’t necessarily require new UN bodies to be stood up, which takes a while. That would be a much better way of being nimble and agile in response to advances in the technology, and that can apply in other areas as well.
Heramb Podar: So, another question we had from the online audience is how we promote AI sovereignty. A lot of countries are worried about how data flows and who keeps the data, and there are a lot of countries, India and Russia as examples, who want to develop their own kind of national AI capabilities. So how do we balance that against this broader interoperability conversation around Global South inclusion and having a unified approach?
Olga Cavalli: Who would like to take the question? Sam, go ahead, the floor is yours.
Sam Daws: So just on blockchain, absolutely, I think indelible ledgers are a real tool for increasing accountability in the future. Blockchain is one such thing, and it’s great to hear that someone involved in the UN pension fund is thinking about it for that purpose. On the speed of technology, we’ve got an exponential increase, and not just the UN but governments too are finding it very hard to set policies in response to it. The UN is capable of very rapid response. I worked in Kofi Annan’s office in the early 2000s, and we worked on a 24-7 response time for conflicts around the world. So if you look at the IAEA, the UN Security Council, the work of the World Food Programme and UNHCR in emergency situations, the WHO in Ebola outbreaks and so on, we see a remarkable ability to come up to speed. But member states must want that capability. Member states have failed again and again at providing an independent preventive capability, strategic forecasting, and these sorts of capacities to international organisations. And where the UN takes a longer time, it is often valuable to actually grow understanding of cultural issues over time. So there is a role for the UN to be slow and steady, and there’s a role for the UN to be fast. I think it’s only after we have a major AI accident, God forbid, that we’re likely to see agreement that the UN can have the capacity for enforcement in the AI safety and security realm. In the meantime, the UN will rely on each nation state’s intelligence, military, foreign affairs, and other resources to monitor threats and challenges in real time. And lastly, on data sovereignty, there’s an interesting idea that has been floated of data embassies, where your own country’s data is stored somewhere else, perhaps where renewable power can run the data centre quite cheaply, but where that data is inviolable in the same way that diplomats are inviolable. 
So you can have these little kind of data embassies around the world. I think that’s an interesting concept that could be developed further.
Yik Chan Chin: OK, yeah, on blockchain, I think in China even private companies already use blockchain for security and encryption. So yeah, I agree with that. In terms of the UN’s role, just as Mauricio said, there are so many overlapping agencies. So in order to streamline, they need a clear definition of each agency’s duties, to reduce the overlap. And I agree with Sam. Because I’m personally involved in the OEWG process, I know the UN has a big capacity to reach out around the world and collect information from multi-stakeholders and even bilaterally from countries. But the negotiation between states is so slow, in my personal experience, very slow. So how can they speed up the negotiation process? I think that’s the key. And in terms of the sovereignty issue, AI sovereignty, my colleague published a paper on that. The thing is, we need to figure out, as I said, what should be solved internationally and globally, and what should be left to individual countries, to their own jurisdiction. And this has to be discussed. This is a process; we have to reach agreement on it. Just like the internet: we have a core infrastructure, which is a public good, even a global public good, but content moderation is left to national jurisdictions. So we need common jurisdiction in some areas and, at the same time, national jurisdictions. Thank you, that’s my comment.
Xiao Zhang: Yes. I want to respond to the online question about the UN, the United Nations. Well, I think the UN is not perfect and has a lot of limitations, but it’s better than nothing. And you see, there is always a balance between efficiency and fairness. So what we should do, maybe, is call on leadership in the AI era, because leadership’s awareness of what is happening is so important. And it should be AGL, assumption, engagement, something like this. What I would also suggest is to narrow it down: find some priorities on the UN agenda, in the GDC follow-up, and focus on those, step by step. I think the UN can still play a very important role in the AI era. And besides, on the engagement of the IGF and the multi-stakeholder approach, I think we can actually strengthen this approach, because we have the NRIs, the national and regional IGF initiatives, which are very, very important. Every NRI can support the policymaking on AI. So that’s my point. And I still think the UN and the IGF can play an important role in this era.
Olga Cavalli: Thank you all very much. Any other questions from the audience? From online? Heramb? No. So I will give the floor for one last comment from each of our distinguished panelists. Who would like to start? And then we have a question online.
Heramb Podar: Okay, can you read it? Yes, so IamSarmanova asks: in which specific forum do you see global, implementation-focused AI governance coordination taking place, especially as we think about duplication between all of these different forums and the potential geopolitical tensions or baggage that might come with certain forums?
Olga Cavalli: Who would like to take that? I think, Sam, you’re the expert.
Sam Daws: I’m not the expert; everybody here is. I mean, I would say that the UN, being the treaty-based universal body, is the go-to forum for implementing AI governance mechanisms where we can. So that’s the UN General Assembly, its six committees, and the tracks that the Global Digital Compact has set in train. I think we’re going to see wonderful synergies within and across those four tracks going forward, and I hope we can then bring all the valuable regional, mini-lateral, and national approaches into that. But the UN is only as strong as the willingness of its member states to cooperate. So the UN is great at the level of principles, but as I said, I don’t think there’s appetite among member states to get into regulation and enforcement. So we’ll need interoperability, which I think is one of the purposes of this panel, to deal with that reality. Thank you.
Olga Cavalli: Thank you, Sam.
Yik Chan Chin: I totally agree with Sam. One very fundamental reason, I think, is that at the UN, every country has one vote. So it’s equal, okay? Because no matter whether you are a small, medium, or strong nation, each country has one vote. This gives the UN its fundamental legitimacy: we have an equal footing to participate. So, from PNAI’s perspective, you can read our report, we support the UN as the focal point for AI governance and dialogue. But enforcement, just as Sam said, depends on how much power each state gives to the UN multilaterally; on safety and security issues, maybe we can give more power to the UN for enforcement. Thank you.
Xiao Zhang: I totally agree with Yik.
Olga Cavalli: Thank you. Any comments from Mauricio, Poncelet, or Neha? Yeah, thank you very much, colleagues.
Poncelet Ileleji: I totally agree with all my colleagues, and I would encourage colleagues to read the PNAI report on AI governance interoperability and good practices, which was led by my colleague, Professor Yik Chan Chin, who did a fantastic job on it. It covers a lot. And no matter what happens with interoperability, we have to remember that the public interest and inclusivity matter. That will be my closing remark. Thank you very much.
Olga Cavalli: Thank you very much. Neha, any comments, final comments?
Neha Mishra: One thing we haven’t spoken about at all, but I just want to add it to the mix, is that increasingly, certain digital economy agreements are also looking at interoperability-related issues and trying to find synergy between technical and regulatory interoperability. And that’s something I want to add to the mix, because we haven’t discussed it at all. And I think there are prospects. I think there are prospects, especially at the regional level, or between like-minded countries that sign these digital economy agreements. Thank you.
Olga Cavalli: Thank you, Neha. Thank you, Poncelet. We have four minutes. I will give the floor for last comments. Do we have more comments from the audience? Any other questions from the audience or online? No, nothing from the audience. OK. Last comments in the three or four minutes that we have. I think we have had a very interesting session. Thank you all very much. Thank you, Yik Chan, Xiao, Sam, Mauricio, Neha, Heramb, Poncelet, and all the audience. Thank you for being so patient and so active in participating in this very important session, and thank you for allowing me to moderate it. You want to say something? Please go ahead.
Xiao Zhang: I propose a group picture.
Olga Cavalli: Oh, a picture. Yes, that’s very important. Now we take a picture. And yes.
Heramb Podar: Just a very quick note. I’ll share the link to the interoperability report, since it was mentioned by a lot of the speakers, and a lot of the panelists are also co-authors of the report. We also have, for those here in person, the PNAI session happening in the main Peter E. Hall, which starts in, I believe, about 20 minutes. So we look forward to seeing you there. And if you have more questions or you want to know more about the PNAI’s work, please feel free to join. Thank you.
Olga Cavalli: Thank you for that. I participated in and was one of the leaders of the PNAI work on labor issues, so thank you for allowing me to do that too. Aren’t you taking a photo? OK. Let’s do the picture. Thank you all very much. Thank you. Thank you. Thank you. Listen, how can that image be deleted? No, that one. Yes. Can you remove that?
Heramb Podar: I don’t know. I can’t remove that; they will have to. Give me one second. Give me one minute.
Mauricio Gibson: Oh, no. We’ve lost him. Poncelet, can you come back? Poncelet? Can you hear us? Poncelet, are you there? Hello? OK.
Yik Chan Chin
Speech speed
149 words per minute
Speech length
2145 words
Speech time
859 seconds
Broad definition beyond technical systems
Explanation
Yik Chan Chin presents a broader understanding of interoperability that goes beyond technical systems. This definition includes legal, semantic, and technical layers of interoperability, focusing on how different initiatives can work together across the world.
Evidence
The speaker references the PNAI’s 2023 report which includes this broader definition.
Major Discussion Point
Understanding and Scope of AI Interoperability
Risk categorization, liability, and training data risks
Explanation
Yik Chan Chin identifies key global issues that need to be addressed in AI governance. These include AI model risk categorization and evaluation, liability of AI systems, and risks associated with AI training data.
Major Discussion Point
Key Issues and Obstacles for Global AI Governance
Geopolitical tensions and unequal distribution of AI capabilities
Explanation
Yik Chan Chin highlights major obstacles to AI interoperability, including geopolitical tensions and lack of trust among countries. She also points out the unequal distribution of AI technology and maturity of policymaking as significant challenges.
Major Discussion Point
Key Issues and Obstacles for Global AI Governance
Importance of multi-stakeholder and interdisciplinary approaches
Explanation
Yik Chan Chin emphasizes the importance of multi-stakeholder and interdisciplinary approaches in addressing AI interoperability. She suggests that these approaches can help collect diverse information and evidence from different sectors around the world.
Evidence
The speaker references her experience leading a group that received input from different sectors globally.
Major Discussion Point
Role of Different Actors in Addressing AI Interoperability
Agreed with
Sam Daws
Mauricio Gibson
Agreed on
Importance of multi-stakeholder approach
UN as platform for policy dialogue and collaboration
Explanation
Yik Chan Chin supports the UN as a focal point for AI governance and dialogue. She emphasizes the UN’s legitimacy due to equal representation of countries, with each country having one vote regardless of size or power.
Evidence
The speaker references the PNAI’s report which supports the UN as a focal point for AI governance.
Major Discussion Point
United Nations’ Role in Global AI Governance
Agreed with
Sam Daws
Xiao Zhang
Agreed on
UN’s role in AI governance
Sam Daws
Speech speed
130 words per minute
Speech length
2172 words
Speech time
1001 seconds
Cultural interoperability and sustainability aspects
Explanation
Sam Daws emphasizes the need for cultural interoperability in AI governance. He argues that diverse cultures should feed into AI development to ensure it can be used to live good and meaningful lives, including insights from low-resource languages and indigenous wisdom.
Evidence
The speaker mentions the trend towards sovereign AI at national and regional levels, especially in data governance and LLM worldviews.
Major Discussion Point
Understanding and Scope of AI Interoperability
Sustainability and energy demands of AI systems
Explanation
Sam Daws highlights the need for a global approach to AI sustainability. He points out that AI energy demands are set to grow with increasing multimodal inference, IoT data use, and agentic AI, necessitating interoperable ways to measure, track, and incentivize better energy and water use.
Evidence
The speaker mentions ongoing work in ITU, ISO, IEC, IEEE-SA, and collaborations with the International Energy Agency and UN Environmental Programme.
Major Discussion Point
Key Issues and Obstacles for Global AI Governance
Cross-regional forums for lesson sharing
Explanation
Sam Daws suggests using cross-regional forums for lesson learning to reduce regional siloing of AI approaches. He proposes leveraging forums that have at least one member state from more than one region for knowledge exchange.
Evidence
The speaker lists several cross-regional forums such as the Organization of Islamic Cooperation, the Digital Forum of Small States, and BRICS.
Major Discussion Point
Role of Different Actors in Addressing AI Interoperability
Agreed with
Yik Chan Chin
Mauricio Gibson
Agreed on
Importance of multi-stakeholder approach
UN’s capability for rapid response in emergencies
Explanation
Sam Daws argues that the UN is capable of very rapid response when needed. He suggests that the UN’s speed in AI governance depends on member states’ willingness to provide it with the necessary capabilities.
Evidence
The speaker cites examples of UN agencies’ rapid responses in conflicts, emergency situations, and disease outbreaks.
Major Discussion Point
United Nations’ Role in Global AI Governance
Agreed with
Yik Chan Chin
Xiao Zhang
Agreed on
UN’s role in AI governance
Mauricio Gibson
Speech speed
171 words per minute
Speech length
2855 words
Speech time
1000 seconds
Balancing regional variations with global approaches
Explanation
Mauricio Gibson emphasizes the need to balance regional variations with global approaches in AI governance. He suggests focusing on broader areas of cooperation while recognizing and addressing gaps in coordination.
Evidence
The speaker mentions the importance of building on core principles across different governance work streams.
Major Discussion Point
Understanding and Scope of AI Interoperability
Keeping up with rapid technological advancement
Explanation
Mauricio Gibson highlights the challenge of keeping up with the rapid pace of AI technological advancement in terms of governance. He emphasizes the need for understanding the scientific basis of advanced AI to overcome this obstacle.
Evidence
The speaker references the UK’s focus on producing regular safety reporting on advanced AI risks.
Major Discussion Point
Key Issues and Obstacles for Global AI Governance
Government role in convening stakeholders and capacity building
Explanation
Mauricio Gibson emphasizes the role of governments in convening different stakeholders and engaging in capacity building. He suggests that governments can use their resources to support AI talent uptake and governance processes globally.
Evidence
The speaker mentions the UK-led AI for development program investing in skills and compute in Africa and Asia.
Major Discussion Point
Role of Different Actors in Addressing AI Interoperability
Agreed with
Yik Chan Chin
Sam Daws
Agreed on
Importance of multi-stakeholder approach
Need to streamline UN agencies and define clear duties
Explanation
Mauricio Gibson suggests the need to streamline UN agencies and clearly define their duties to reduce overlapping and duplication of efforts. He emphasizes the importance of coordination and understanding what each agency needs to deliver on the ground.
Major Discussion Point
United Nations’ Role in Global AI Governance
Xiao Zhang
Speech speed
143 words per minute
Speech length
1193 words
Speech time
497 seconds
AI interoperability as part of digital transformation
Explanation
Xiao Zhang views AI interoperability as part of the broader digital transformation of economy and society. She argues that AI should not be treated as a single, isolated issue but as an integral part of the overall digital ecosystem.
Evidence
The speaker draws a comparison with internet governance, emphasizing the need for a unified ecosystem approach.
Major Discussion Point
Understanding and Scope of AI Interoperability
Trust and data sovereignty concerns
Explanation
Xiao Zhang highlights trust as a crucial issue in AI governance. She emphasizes that AI is built on trust and that addressing trust issues is essential for creating a unified AI ecosystem.
Major Discussion Point
Key Issues and Obstacles for Global AI Governance
Multilateral orientation with multi-stakeholder engagement
Explanation
Xiao Zhang argues that AI governance should be multilaterally oriented while recognizing the importance of multi-stakeholder engagement. She emphasizes that countries must lead AI governance due to its potential risks and impacts.
Evidence
The speaker contrasts AI governance with internet governance, highlighting AI’s potential for harm and its implications for national security.
Major Discussion Point
Role of Different Actors in Addressing AI Interoperability
UN’s legitimacy from equal representation of countries
Explanation
Xiao Zhang supports the UN’s role in AI governance, emphasizing its legitimacy derived from equal representation of countries. She argues that the UN provides a fair platform where each country has an equal voice regardless of size or power.
Major Discussion Point
United Nations’ Role in Global AI Governance
Agreed with
Yik Chan Chin
Sam Daws
Agreed on
UN’s role in AI governance
Poncelet Ileleji
Speech speed
138 words per minute
Speech length
433 words
Speech time
187 seconds
Inclusivity and public interest in interoperability
Explanation
Poncelet Ileleji emphasizes the importance of inclusivity and public interest in AI interoperability. He argues that these aspects should be central considerations in all interoperability efforts.
Major Discussion Point
Understanding and Scope of AI Interoperability
Potential for UN enforcement role in AI safety/security
Explanation
Poncelet Ileleji suggests the potential for the UN to take on an enforcement role in AI safety and security. He implies that this could be a future development in the UN’s role in global AI governance.
Major Discussion Point
United Nations’ Role in Global AI Governance
Neha Mishra
Speech speed
146 words per minute
Speech length
982 words
Speech time
402 seconds
UN’s convening power and coordination role
Explanation
Neha Mishra highlights the UN’s convening power and its potential role in coordinating AI governance efforts. She suggests that the UN can serve as an umbrella organization for developing high-level values in AI governance.
Major Discussion Point
Role of Different Actors in Addressing AI Interoperability
Agreements
Agreement Points
Importance of multi-stakeholder approach
Yik Chan Chin
Sam Daws
Mauricio Gibson
Importance of multi-stakeholder and interdisciplinary approaches
Cross-regional forums for lesson sharing
Government role in convening stakeholders and capacity building
The speakers agree on the importance of involving multiple stakeholders and diverse perspectives in addressing AI interoperability and governance.
UN’s role in AI governance
Yik Chan Chin
Sam Daws
Xiao Zhang
UN as platform for policy dialogue and collaboration
UN’s capability for rapid response in emergencies
UN’s legitimacy from equal representation of countries
The speakers agree on the UN’s important role in AI governance, highlighting its legitimacy, capability for rapid response, and potential as a platform for dialogue and collaboration.
Similar Viewpoints
Both speakers emphasize the need to address the challenges posed by the rapid advancement of AI technology, including its sustainability and energy demands.
Sam Daws
Mauricio Gibson
Sustainability and energy demands of AI systems
Keeping up with rapid technological advancement
Both speakers highlight the challenges posed by geopolitical tensions, unequal distribution of AI capabilities, and issues of trust and data sovereignty in AI governance.
Yik Chan Chin
Xiao Zhang
Geopolitical tensions and unequal distribution of AI capabilities
Trust and data sovereignty concerns
Unexpected Consensus
Cultural aspects of AI interoperability
Sam Daws
Poncelet Ileleji
Cultural interoperability and sustainability aspects
Inclusivity and public interest in interoperability
Despite their different backgrounds, both speakers emphasize the importance of cultural aspects and inclusivity in AI interoperability, which is an unexpected area of consensus given the often technical focus of AI discussions.
Overall Assessment
Summary
The main areas of agreement include the importance of multi-stakeholder approaches, the UN’s role in AI governance, the need to address rapid technological advancements, and the significance of cultural and inclusivity aspects in AI interoperability.
Consensus level
There is a moderate to high level of consensus among the speakers on key issues, suggesting a shared understanding of the challenges and potential solutions in AI governance and interoperability. This consensus implies that there is a strong foundation for developing collaborative approaches to AI governance, although differences in emphasis and specific concerns remain.
Differences
Different Viewpoints
Role of multilateralism vs multi-stakeholderism in AI governance
Xiao Zhang
Yik Chan Chin
Xiao Zhang argues that AI governance should be multilaterally oriented while recognizing the importance of multi-stakeholder engagement. She emphasizes that countries must lead AI governance due to its potential risks and impacts.
Yik Chan Chin emphasizes the importance of multi-stakeholder and interdisciplinary approaches in addressing AI interoperability. She suggests that these approaches can help collect diverse information and evidence from different sectors around the world.
While both speakers acknowledge the importance of multi-stakeholder engagement, Xiao Zhang emphasizes a stronger role for multilateral, country-led governance, while Yik Chan Chin places more emphasis on multi-stakeholder approaches.
Unexpected Differences
Overall Assessment
summary
The main areas of disagreement centered around the balance between multilateral and multi-stakeholder approaches in AI governance, and the specific ways to improve the UN’s effectiveness in this domain.
difference_level
The level of disagreement among the speakers was relatively low. Most speakers shared similar views on the importance of interoperability, the need for global cooperation, and the significant role of the UN in AI governance. The differences were mainly in emphasis and specific implementation strategies rather than fundamental disagreements. This suggests a generally aligned perspective on the topic, which could facilitate progress in developing global AI governance frameworks.
Partial Agreements
Both speakers agree on the importance of the UN’s role in AI governance, but they differ on how to improve its effectiveness. Sam Daws focuses on the UN’s potential for rapid response given member states’ support, while Mauricio Gibson emphasizes the need for streamlining and clear definition of duties among UN agencies.
Sam Daws
Mauricio Gibson
Sam Daws argues that the UN is capable of very rapid response when needed. He suggests that the UN’s speed in AI governance depends on member states’ willingness to provide it with the necessary capabilities.
Mauricio Gibson suggests the need to streamline UN agencies and clearly define their duties to reduce overlapping and duplication of efforts. He emphasizes the importance of coordination and understanding what each agency needs to deliver on the ground.
Takeaways
Key Takeaways
AI interoperability needs to be understood broadly, encompassing technical, legal, semantic, cultural and sustainability aspects
Multi-stakeholder and interdisciplinary approaches are crucial for addressing AI interoperability challenges
There is a need to balance regional variations with global approaches to AI governance
The UN has an important role to play in AI governance, particularly in facilitating dialogue and coordination
Trust-building and addressing the AI divide between developed and developing countries are key challenges
Rapid technological advancement poses challenges for governance frameworks to keep pace
Resolutions and Action Items
The Policy Network on AI (PNAI) will release its main report on AI interoperability and other issues
Participants encouraged reading the PNAI report on AI governance interoperability and good practices
Unresolved Issues
How to effectively streamline and coordinate AI governance efforts across multiple UN agencies and forums
Specific mechanisms to bridge the AI divide between developed and developing countries
How to balance data sovereignty concerns with the need for global interoperability
Concrete steps to make UN processes more agile in responding to rapid AI advancements
Suggested Compromises
Using existing cross-regional forums to facilitate dialogue and lesson-sharing on AI governance
Leveraging both multilateral and multi-stakeholder approaches in a complementary manner
Focusing UN efforts on coordination and high-level principles rather than detailed regulation
Allowing for regional variations in AI governance approaches while working towards global alignment on key issues
Thought Provoking Comments
Interoperability is often understood as the ability of different systems to communicate and work seamlessly together. But the IGF Policy Network on Artificial Intelligence definition of interoperability in the 2023 report is slightly different. The report, this definition includes the ways through which different initiatives, including laws, regulations, policies, codes, standards that regulate and govern artificial intelligence across the world, could work together in legal, semantic and technical layers that become more effective and impactful.
speaker
Olga Cavalli
reason
This comment introduces a broader definition of interoperability that goes beyond technical aspects to include legal and policy dimensions. It sets the stage for a more comprehensive discussion.
impact
This framing shaped the entire discussion by encouraging participants to consider interoperability from multiple angles, including legal, semantic, and technical layers.
We need an interoperable global approach to the sustainability of AI. So AI and energy demands are set to grow with increasing multimodal inference with the use of IoT data and with agentic AI. So we need interoperable ways to measure, to track and to incentivise better energy and water use of data centres of chips, algorithmic efficiency and data sobriety.
speaker
Sam Daws
reason
This comment introduces the important dimension of sustainability in AI governance, which had not been mentioned before.
impact
It broadened the scope of the discussion to include environmental concerns and sparked further comments on the need for a holistic approach to AI governance.
For humanity to flourish it’s vital that our diverse cultures feed into AI so we can better use it to live good and meaningful lives and that includes insights from low resource languages and also the wisdom of indigenous as people who have a minimal digital footprint, not captured by large language models trained on the internet.
speaker
Sam Daws
reason
This comment highlights the importance of cultural diversity and inclusion in AI development, bringing attention to often overlooked perspectives.
impact
It led to further discussion on the need for inclusive AI governance and the challenges of bridging the digital divide.
Trust is the most important issue. There is, AI is built on trust. And it’s not limited to the geopolitical reasons. We shouldn’t have different ecosystem. So it’s all this ecosystem are built on trust. So how to build trust? I think this is something we need to discuss.
speaker
Xiao Zhang
reason
This comment emphasizes the fundamental importance of trust in AI systems and governance, shifting the focus from technical aspects to human and social factors.
impact
It led to further discussion on how to build trust in AI systems and the role of different stakeholders in this process.
From a Global South perspective, I think the question is also really very practical as to if there are multiple fora, they only are able to invest that many resources and they might have to choose between different fora and that also can create competition between different fora.
speaker
Neha Mishra
reason
This comment brings attention to the practical challenges faced by Global South countries in participating in multiple AI governance forums, highlighting issues of resource constraints and potential forum shopping.
impact
It led to a more nuanced discussion about the need for streamlined and inclusive global governance mechanisms that consider the constraints of developing countries.
Overall Assessment
These key comments shaped the discussion by broadening its scope from purely technical considerations to include legal, policy, sustainability, cultural, and trust dimensions of AI governance. They also highlighted the challenges of creating truly global and inclusive governance mechanisms, particularly considering the resource constraints of developing countries. The discussion evolved from defining interoperability to exploring its practical implications across various domains and stakeholders, emphasizing the need for a holistic, inclusive, and trust-based approach to AI governance.
Follow-up Questions
How can we develop interoperable approaches to measure, track and incentivize better energy and water use of AI systems?
speaker
Sam Daws
explanation
This is important to address the sustainability challenges posed by increasing AI energy demands.
How can we build cultural interoperability into AI governance frameworks?
speaker
Sam Daws
explanation
This is crucial to ensure AI systems reflect diverse cultural perspectives and wisdom, including from low-resource languages and indigenous peoples.
How can we improve the speed and agility of UN processes to keep up with the rapid pace of AI technology development?
speaker
Online audience member
explanation
This is important to ensure global governance mechanisms can effectively address emerging AI challenges in a timely manner.
How can we balance national AI sovereignty concerns with the need for global interoperability?
speaker
Online audience member
explanation
This is crucial to reconcile countries’ desires to develop their own AI capabilities with the benefits of a unified global approach.
How can blockchain technology be integrated with AI to address interoperability and trust issues?
speaker
Dino Cataldo Dell’Accio
explanation
This could provide a common layer of trust and transparency for AI systems.
In which specific forum should global implementation-focused AI governance coordination take place?
speaker
Online audience member (IamSarmanova)
explanation
This is important to avoid duplication of efforts and address potential geopolitical tensions in different forums.
How can digital economy agreements contribute to AI interoperability?
speaker
Neha Mishra
explanation
This could provide another avenue for addressing interoperability issues, especially at regional levels or between like-minded countries.
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Main Session 2: Protecting Internet infrastructure and general access during times of crisis and conflict
Session at a Glance
Summary
This Internet Governance Forum 2024 session focused on protecting and ensuring access to internet infrastructure during conflicts and crises. Panelists discussed the impacts of internet shutdowns and infrastructure destruction on civilians, highlighting how these actions violate human rights and hinder humanitarian efforts. They examined existing normative frameworks, including international laws and UN resolutions, that address internet access and critical infrastructure protection.
The discussion emphasized the disproportionate harm caused by internet shutdowns and the need for governments to refrain from such actions. Panelists explored various responses and alternatives, including technical solutions from private sector companies and initiatives by international organizations like the ITU. The role of humanitarian agencies in providing internet access during crises was also debated.
Participants stressed the importance of multi-stakeholder collaboration to address these challenges. They called for better implementation of existing norms rather than creating new ones. Suggestions for action included stigmatizing internet shutdowns, penalizing governments that implement them, and improving coordination among various stakeholders.
The session concluded with proposals for the IGF community to take more concrete steps. These included potentially creating a best practice forum on the topic, incorporating internet access considerations into conflict monitoring by bodies like the UN Security Council, and leveraging the upcoming WSIS+20 review process to highlight these issues. Overall, the discussion underscored the critical need to protect internet access and infrastructure as essential resources for civilian populations, especially during conflicts and crises.
Keypoints
Major discussion points:
– The impact of internet shutdowns and infrastructure destruction on civilians, especially in conflict zones
– Existing international laws and norms regarding protection of internet access and infrastructure
– The role of different stakeholders (governments, companies, civil society) in responding to and preventing shutdowns
– Technical and policy solutions to maintain connectivity during crises
– The need for more coordinated, consistent responses from the international community
The overall purpose of the discussion was to explore how to protect and ensure internet access for civilians during conflicts, natural disasters, and other crises. The panelists aimed to identify gaps in current frameworks and propose concrete actions the Internet Governance Forum community could take.
The tone of the discussion was largely serious and concerned, given the gravity of the issues being discussed. However, there were also moments of constructive problem-solving and cautious optimism about potential solutions. The tone became more action-oriented towards the end as participants proposed specific next steps.
Speakers
– Anriette Esterhuysen: Director of the IGF, from South Africa
– Mohamed Shareef: Private sector, Digital Telecommunications company, former minister of state for digital and communications in the Maldives
– Cynthia Lesufi: Minister counsellor in the South African mission in Geneva, ITU council working group chair
– Lama Fakih: Director of Middle East and North Africa work at Human Rights Watch
– Kojo Boakye: Meta Vice President for policy for Africa and the Middle East
– Nadim Nashif: Executive director of 7amleh, the Arab Centre for Social Media Advancement, from Palestine
– Peter Micek: From Access Now, teaches at Columbia University
Additional speakers:
– Ernst Noorman: Cyber ambassador of the Netherlands
– Audience members who asked questions (unnamed)
Full session report
Internet Governance Forum 2024 Session: Protecting Internet Access During Conflicts and Crises
This Internet Governance Forum (IGF) 2024 session addressed the critical issue of protecting and ensuring access to internet infrastructure during conflicts and crises. The discussion brought together experts from government, private sector, civil society, and international organisations to explore challenges and potential solutions for maintaining internet connectivity in times of upheaval.
Key Impacts of Internet Disruptions and Infrastructure Destruction
The panellists unanimously agreed that internet disruptions and infrastructure destruction have severe humanitarian consequences and violate human rights. Lama Fakih, Director of Middle East and North Africa work at Human Rights Watch, emphasised that such disruptions not only infringe upon human rights but also hinder humanitarian aid efforts, citing specific examples from Gaza where internet shutdowns have impeded access to vital information and services. Nadim Nashif from 7amleh highlighted the devastating impact of infrastructure destruction in Gaza on the civilian population, including the loss of communication with family members and access to essential online services.
Kojo Boakye from Meta highlighted the significant economic costs of internet shutdowns, noting that they can cost countries up to 1.9% of their daily GDP.
Normative Frameworks and Legal Obligations
The session explored existing normative frameworks and legal obligations regarding the protection of internet access and infrastructure. Ernst Noorman, Cyber Ambassador of the Netherlands, pointed out that the UN General Assembly has endorsed 11 norms of responsible state behaviour in cyberspace, including a norm against damaging critical infrastructure. He also discussed the Freedom Online Coalition's work in promoting internet freedom and human rights online.
Lama Fakih elaborated that international human rights law requires internet restrictions to be necessary and proportionate. Cynthia Lesufi, from the South African mission in Geneva, highlighted ITU resolutions calling for assistance in rebuilding telecommunications infrastructure after conflicts or disasters, specifically mentioning the ITU Council resolution on assistance to Palestine for rebuilding its telecom networks.
Responses and Alternatives to Internet Disruptions
The discussion then turned to potential responses and alternatives to internet disruptions. Kojo Boakye described how the private sector is developing technical solutions, such as WhatsApp proxy, to maintain connectivity during shutdowns. Peter Micek from Access Now highlighted the role of humanitarian agencies in providing connectivity in crisis situations and suggested the creation of a UN cable-laying fleet to assist in infrastructure rebuilding efforts.
Civil society organisations were recognised for their important work in documenting violations and providing technical assistance. Anriette Esterhuysen, Director of the IGF, emphasised the need for rapid response capabilities to repair infrastructure in affected areas.
Role of the Internet Governance Community
The panellists agreed on the crucial role the internet governance community can play in addressing these challenges. Ernst Noorman suggested that the IGF could establish best practices for protecting internet access during conflicts and crises and emphasised the need for capacity building and multistakeholder involvement in implementing norms.
Lama Fakih called for efforts to stigmatise internet shutdowns by governments, framing them as unacceptable actions by states. Kojo Boakye suggested penalising governments that implement internet shutdowns.
Cynthia Lesufi highlighted the importance of incorporating these issues into the upcoming WSIS+20 review process. Peter Micek referenced the Global Digital Compact as a potential framework for addressing internet shutdown issues.
Areas of Disagreement and Partial Agreement
While there was broad consensus on the importance of maintaining internet access, some differences emerged regarding the roles and responsibilities of various stakeholders. For instance, while Lama Fakih emphasised the legal obligations of governments under international human rights law, Kojo Boakye focused on the role of private sector companies in developing technical solutions.
Thought-Provoking Comments and Future Directions
Several thought-provoking comments shaped the discussion, including Lama Fakih’s description of ongoing internet disruptions in Gaza and Mohamed Shareef’s insights on the unique challenges faced by small island nations.
An audience member from Sudan provided a powerful testimony about the impact of internet shutdowns in their country, highlighting the severe consequences for civilian populations and the challenges faced by humanitarian agencies in providing connectivity during crises.
The session concluded with proposals for concrete actions, including exploring the establishment of an IGF best practice forum on protecting internet access in conflicts and crises, highlighting internet infrastructure protection issues in the WSIS+20 review process, and developing a working group to implement Global Digital Compact language on internet shutdowns.
Unresolved issues remained, such as how to effectively enforce international laws and norms against internet shutdowns, and how to balance legitimate security concerns with maintaining internet access. The discussion also raised important questions about content moderation in crisis situations, the role of platforms in ensuring equitable access globally, and the potential for technical solutions like eSIM infrastructure and community networks built on decentralised power grids.
In conclusion, the session underscored the critical need to protect internet access and infrastructure as essential resources for civilian populations, especially during conflicts and crises. It highlighted the importance of multi-stakeholder collaboration and the implementation of existing norms, while also identifying areas for further research and action by the IGF community.
Session Transcript
Anriette Esterhuysen: Good afternoon, everyone, and welcome to this Internet Governance Forum 2024 main session under the theme of the contribution of the Internet to peace and sustainability. My name is Anriette Esterhuysen, I’m from South Africa, and this session is, I think, one of the most significant and maybe one of the most topical sessions that we have at this year’s IGF. It’s trying to address the concern of how do we protect and ensure access to the Internet, and we’re going to explore this session from the perspective of what the impact is on ordinary people, on communities, when Internet infrastructure is destroyed or becomes unavailable. And we’re going to look at what are the norms? Are there norms, are there normative frameworks for responding or preventing this from happening? And then we’re also going to look at what are the alternatives? What measures can be taken? What actions can be taken to get this multi-stakeholder community to play the role that it is usually so fundamentally committed to, which is to ensure an open, free, interoperable Internet for everyone. And then finally, we will look at future-oriented actions. Where are the gaps? Are there gaps at the normative level? Are there gaps at the implementation level? And what can the IGF do? So, I’m balancing too many devices here. To introduce you to my panel, and then I’m also going to introduce you to my fellow moderator, which is Peter Micek from Access Now, who’s joining us from New York. I can hear an echo. Can everyone hear the echo? Is there anything we can do about the echo? Should I hold the microphone? Okay, so, I’m going to move a little bit further away. That helps, that helps. So, Peter Micek will join us from New York, and he’ll make some opening remarks, but I first wanted to introduce you to my panel.
So, starting immediately from my left, we have Mohamed Shareef, who’s currently with the private sector, in a company called Digital Telecommunications, and he’s been a minister of state for digital and communications in the Maldives. Next to him, we have Ambassador Ernst Noorman, the cyber ambassador of the Netherlands. And next to him, I’m very pleased and proud to have my compatriot, Ms. Cynthia Lesufi, who’s minister counsellor in the South African mission in Geneva, and she’s also the International Telecommunication Union council working group chair for the World Summit on the Information Society and the Sustainable Development Goals. Next to Cynthia, we have Lama Fakih, who is the director of Middle East and North Africa work at Human Rights Watch. And next to Lama, we have Kojo Boakye, who is the Meta Vice President for policy for Africa and the Middle East. And then joining us online, we have from Palestine, Nadim Nashif, executive director of 7amleh, the Arab Centre for Social Media Advancement, and also joining us online is Professor Madeleine Carr from University College London. Is Madeleine with us already? Good. So welcome, Peter and Nadim and Madeleine and everyone else who’s with us online. Peter, do you want to start us off with some of the reflections and talking points that we feel we should try and address in this session?
Peter Micek: I’m excited to explore this intersection of connectivity, infrastructure, and instability, and how they relate to peace, development, and sustainability. It’s core to the work of my organization, Access Now, and my teaching at Columbia University. Taking this beyond the context of Internet governance to sustainable development broadly, we have sobering new facts: the global multidimensional poverty index published by the UNDP in October.
Anriette Esterhuysen: Peter, just pause a little bit. We have an audio issue. Can our tech support team please help? Peter, try speaking again. Let’s test if it works. Go ahead, speak.
Peter Micek: I don’t think we can hear you right now. Can you hear me? Nod if you can hear me.
Anriette Esterhuysen: Okay, try speaking. We can’t hear you. My colleagues here who helped organize this session, can you just check and help us fix this? Okay, I think while Peter, let’s try once more. Try speaking again.
Peter Micek: As I was saying, the new multidimensional poverty index by UNDP and Oxford found that 1.1 billion people are living in acute poverty, and a staggering 455 million of them are in countries experiencing war or fragility. That’s nearly half of people in acute poverty experiencing war, and these conflicts have intensified. These don’t always go in the right direction. In 2023, last year, the #KeepItOn coalition that my organization leads documented the highest number of Internet shutdowns in the world. These Internet shutdowns occurred in 39 countries, and nearly half of them were in countries experiencing war or fragility. These Internet shutdowns are intentional disruptions, and unfortunately, it looks like 2024 saw even higher numbers than 2023, despite states in the global digital compact saying we must, quote, refrain from Internet shutdowns and measures that target Internet access. So with all of these problems, what do we do? In response, we attempt to mitigate the impact of these shutdowns. We scramble to create workarounds through technological innovation, public and private donors devote energy and resources to whip up networks in dire conditions, while humanitarian actors increasingly rely on digital systems, say, cloud-based systems or biometric solutions, remote delivery platforms, and community engagement platforms. They, just like we, are looking to these technical fixes, whether it’s satellite Internet or joining emergency telecommunications clusters to provide quick import of hardware and assets, and seeking unlikely partnerships across all industries in real time.
This is tough work and it’s a scramble, so there has to be a better way. All of our societal systems rely in some way on connectivity and electricity. We recognize the sheer and growing importance of connectivity in civilian life, and, perhaps, more so when everything around you is engulfed in violence and war. So, I want to put out there that we need to focus on connectivity and providing it and protecting that infrastructure, because it’s a lot harder to bring back, and these Sisyphean efforts in the moment of crisis encounter a lot of trouble. So, let’s start by looking at first principles in addition to the reactive workarounds that we’ve all been putting together. Thank you.
Anriette Esterhuysen: Thanks a lot, Peter. I want to check the French translation is sorted out now. There were issues with the French interpretation, so I am just assuming that it is okay. I’m just checking on the Zoom to see if it is. And it looks now like they cannot hear me. I’m not audible. So, I’m going to go back to the slides. The Zoom participants say that they cannot hear me. No audio for Zoom. I can’t see the tech team. Who’s doing the Zoom? Can you hear me through the headsets? People in the room can hear me. No one can hear me on remote. You still can’t hear me? No. Sorry, Peace, can you please try and find someone who’s dealing with the Zoom and sort this problem out? Thank you. The Zoom participants have lost audio completely. Oh it’s back, it’s back. Peace, it’s fine, it’s back. I think it’s back now. Good. So let’s look at what the impacts are. So Lama, I want you to start. When we talk about these disruptions and the destruction of infrastructure or the interference with infrastructure or damage, what does this actually mean for people on the ground, for ordinary citizens, for civilians, for communities?
Lama Fakih: Thank you, I hope everyone can hear me well. During times of conflict, civilians, journalists, first responders.
Anriette Esterhuysen: Sorry, I am so sorry to interrupt you, but no, the Zoom audio problem has not been sorted out. So we'll continue, but please can we have our virtual participants able to hear this session? Can you see the transcript? Nadim and Peter, can you see the captioning? Just nod if you are able to. Oh, you can't hear me, so let's go ahead, Lama. I'll type. Thank you.
Lama Fakih: So as I was saying, during times of conflict, civilians, journalists, first responders, they rely on the internet to document and share evidence of abuse and to provide life-saving assistance. During times of political crisis, protesters leverage the internet to organize online and to stand up for their rights. And yet time and again, we have seen states and armed groups take action to deliberately shut down access and to destroy telecommunication systems in ways that violate people’s rights. In conflicts, as Peter was laying out, there may be multiple causes of these disruptions to communications networks, and they are sometimes deployed in tandem. Palestinians in Gaza have endured over a year of ongoing phone and internet disruption as a result of relentless airstrikes by the Israeli government and other actions that the government has taken. These actions have included damage to core communication infrastructure, cuts to electricity, fuel blockades, and apparently deliberate shutdowns through technical means. According to UN OCHA, on October 10, airstrikes conducted by the Israeli military targeted several telecommunications installations, destroyed two of three main lines for mobile communication, and this left residents in Gaza reliant on just one line for mobile and internet connectivity. It resulted in disruptions to services, and on October 27, at the start of Israel’s ground incursion into Gaza, the connectivity came to a grinding halt during an approximately 34-hour communications blackout. Paltel, one of the few remaining service providers that is still operational in Gaza, confirmed to Human Rights Watch in November of 2023 that when service was restored without their intervention, it was clear that the disruptions were intentional. In times of conflict, authorities and armed groups should refrain from deliberately shutting down or destroying telecoms infrastructures because of the disproportionate harm that it has on civilians. 
When governments and armed groups target the infrastructure, they often justify these measures as necessary for public safety, curbing the spread of misinformation or for legitimate military reasons. But such sweeping measures are more like collective punishment. When the internet is off, people’s ability to express themselves is limited. The economy suffers. Journalists are not able to upload evidence of abuses that they are documenting. Students are cut off from their lessons. Taxes can’t be paid, and those needing health care often cannot access life-saving assistance. When India blocked access to the internet in Kashmir for months in late 2019, Indian officials justified the action by saying it was necessary to temporarily limit access to the internet during the period of crisis to avoid permanent loss of life. Four UN special rapporteurs condemned the move, however, warning that the shutdown in Kashmir was inconsistent with the norms of necessity and proportionality. In other words, inconsistent with the law. Practically, at least one study by a researcher at the Stanford Global Digital Policy Incubator has found that shutdowns are actually counterproductive to deterring violent incidents. It tracked a quadrupling of violence when networks were disrupted as compared to cases when the internet stayed on. Shutdowns, they draw headlines, but subtler, equally devastating techniques to manipulate the internet deserve attention too. Authorities possess an arsenal, ranging from blocking specific social media applications or messaging applications, to throttling traffic, to restricting live streaming. And these are all the types of weapons that we need to contend with in ensuring that people have connectivity during times of crisis and conflict. Thank you.
Anriette Esterhuysen: Thanks very much, Lama. And I believe that our Zoom participants can hear now if they select one of the language tracks rather than the original audio. So audio works in English, and those of you who are online can also select captioning by clicking on More at the bottom of your Zoom screen and selecting Captions. And Nadim, can you tell us a little bit more about the very specific context, and I know 7amleh has done research on this, of the impact of the destruction of telecommunications infrastructure in Gaza, and what that is doing to people.
Nadim Nassif: Hi, everybody. I'm not sure if you are hearing me or not, because there are issues with the audio. We can hear you. So thank you, everybody, for having us today and giving us the possibility to speak and to be part of this event. I was asked to speak about the current situation, specifically in Gaza, regarding the destruction of the telecom infrastructure, as was mentioned before. But I think it's important to go a little bit back on this, because the infrastructure in Gaza has been held captive and controlled by the Israeli occupation since 1967. The Israeli occupation has controlled the Palestinian telecom infrastructure since 1967, and since then it has basically been a kind of hostage infrastructure that was not allowed to perform or to progress. Historically, when agreements were signed by the Palestinian Authority and Israel, it was agreed that this arrangement would develop into an independent Palestinian telecommunications sector, as there would finally be an independent Palestinian state. I'm talking now about the early 90s, 93, 94, with the Oslo agreements and the Paris Accords. All of these agreements that happened at that time were never realized. Until today, as we know, there is obviously no independent Palestinian state, and the telecom infrastructure is still controlled by the Israeli side, which means basically that all the components of the Palestinian telecom network, whether for cell towers or for other components, have to get approval from the Israeli side. For many years, for economic reasons, and on the assumption that Palestinian users would prefer Israeli telecommunication companies rather than Palestinian ones, they prevented the Palestinian telecommunications sector from developing. So that sector was held captive. 
But we also need to remember that there is no independence in the sense of being connected with the world. All the infrastructure basically goes through the Israeli side, and the Israeli side gives the Palestinian side access to the worldwide internet. And simply because they control the access, they can also cut the access. This is basically what happened in Gaza. So during the war, during the genocide that is happening in Gaza, it is not only about the destruction of infrastructure by Israel and the deliberate attacks on the infrastructure and the cell towers; Israel also carried out at least 17 deliberate total shutdowns of the whole internet and telecommunications network of the Palestinians in Gaza. Now, in the last research that we published at 7amleh, done this summer, the estimate was that 75% of the infrastructure was damaged and 50% of the telecommunications infrastructure was totally destroyed; it simply no longer exists. We assume that since then the level of destruction has become even much worse. So this is where we are now: a situation of wholesale destruction of telecommunications. And we know the humanitarian impact, how connectivity is a lifeline in a crisis: families cannot call their loved ones to make sure that they are okay, and they cannot call hospitals for help. Even people who were under the rubble and needed to call for help in that situation could not do so. So you can imagine how devastating a situation it is when people cannot call for any kind of help and cannot communicate with their families in times of crisis, in times when this communication is most needed. 
I think it's important to really think about what's happening in the broader context, because if this becomes a precedent, and in wartime governments like the Israeli government can do this, it will probably be repeated in other places. Many limits that we were used to seeing respected under international law and international humanitarian law are being broken and violated by the Israeli government. We also need to think about the global impact on other wars and other conflict zones when such things happen in the future, and why collectively we need to think about mechanisms for how we can stop this and prevent this situation from happening in other places. For Gazans, there is obviously a need for a ceasefire, to stop the attacks, to stop the genocide, to stop the war. But beyond that, there are immediate solutions that need to be given to Gazans in order to get through the current situation, specifically when we speak about emergency personnel, and specifically when we speak about journalists and media people who need the connection. eSIMs were and still are one of the solutions, and there are other technical solutions that the Palestinian telecom company is talking about, like cells on wheels and other kinds of technical solutions. But obviously there is a need for a long-term solution, and I think it's really important to emphasize, when we speak about the future, and hopefully the day after, when there is a ceasefire, that there will be a reconstruction of the Palestinian telecommunications sector, and that this sector will get the newest technologies in order to move on. One thing that I did not mention at the beginning: we are speaking about the telecom infrastructure in Gaza that was destroyed, but this infrastructure was only second generation. 
I think Gaza before the war was one of the last places on the globe with only second-generation networks. So the need now is basically for the international community to put enough pressure so that after the ceasefire there will be a reconstruction of the telecom sector too, and that Israel will allow the newest technology to enter Gaza so that it can be rebuilt, to make sure that the people there are reconnected and that there is a long-term solution. Again, many people have spoken about Palestine as one big laboratory in terms of surveillance and in terms of infrastructure, but what's happening there is really something that has a global impact. It is not only a Palestinian problem. Much of the impact, many of the precedents being set in Gaza, we will unfortunately see in other places if we don't put the right mechanisms in place to stop that and to put enough pressure on Israel to stop the war on its side. Thank you for having me.
Anriette Esterhuysen: Thanks very much, Nadim, for that. Peter, I've been kind of kicked out of the Zoom, or my Zoom is frozen, so I'm relying on you to watch what happens online. We've talked about shutdowns, we've talked about the destruction of infrastructure. I'm not sure, Lama, because I was so busy with the tech issues, whether you mentioned sanctions, because I think that's another form of excluding people, such as in the case of Sudan, which has not just been impacted by war and conflict, but also for a long time by sanctions, which have made people unable to access certain applications and services. But, Sharif, I wanted you to talk about climate change, and specifically the situation of small island developing states, which are incredibly vulnerable to cyclones, tsunamis, and other forms of disruption. When you have one fiber-optic cable link, how does it affect people if that link is disrupted?
Anriette Esterhuysen: Thanks for that, Sharif. Let's talk a little bit now about the normative framework that exists. Unfortunately, Madeline Carr cannot join us; she's having an issue with her registration. But we have Ambassador Noorman and Lama as well. Starting with you, Ambassador: the Dutch government is currently chairing the Freedom Online Coalition, and I know this is an issue that you have looked at. The Netherlands also introduced the idea of the norm to protect the public core of the Internet. That norm, I think, is very clear. It says that no state or non-state actor should interfere with the public core of the Internet, and it explicitly defines that public core as including transmission media and naming and numbering systems as critical Internet resources. But from your perspective, what do we have at the level of normative frameworks, and from the multistakeholder community, that gives us guidelines?
Ernst Noorman: Thank you. Thank you very much for also mentioning the public core, which was indeed a concept introduced by the Netherlands in 2015. But I actually also have to talk about the UN, because a lot is happening right now in the so-called Open-Ended Working Group. As I mentioned, at the UN all states have established several non-binding norms on responsible state behavior in cyberspace. These 11 norms have been endorsed by the General Assembly and are part of a framework of responsible state behavior in cyberspace. Some of these norms have to do with critical infrastructure. First, it states that states should not conduct or knowingly support ICT activity, contrary to their obligations under international law, that intentionally damages critical infrastructure. And according to another norm, states should respond to appropriate requests for assistance by another state whose critical infrastructure is subject to malicious ICT acts. Now, what's interesting is that the norms provide that it's up to states to designate their national critical infrastructure, which is kind of logical: for us in the Netherlands, the Port of Rotterdam is critical, while for Switzerland, of course, a port is less important. So it really depends on the country itself. But the norms do include, as examples of critical infrastructure, the technical infrastructure essential to the general availability or integrity of the Internet. And while these norms are non-binding, they do articulate a clear expectation by the international community with regard to the behavior of states. 
Now, for the normative framework to be valuable, it needs to be implemented effectively. Effective implementation means complementary initiatives that enhance the ability of each organization and stakeholder to support the resilience of critical infrastructure, also in times of conflict. Here the technical community is key, which is, by the way, also part of the FOC, the Freedom Online Coalition. Of course, the organizations in charge of the functioning of the Internet have a role in ensuring a resilient infrastructure and the general availability and integrity of the Internet. Who should better understand what is needed than these organizations? And briefly, I believe that there is an important role for the ITU in coordinating responses, which is probably a lesser-known function of the ITU. They do important work in disaster relief, but also in crises such as in Ukraine and Gaza. We need collective expertise in coalitions, but also in multilateral agreements.
Ernst Noorman: Within the European Union, I would like to share a little bit about how we can contribute, through an advisory body, to further thinking on how submarine cables can be protected, how they can become more resilient, and how we can repair them more quickly in case of malfunction or damage. Most importantly, technical organizations such as the ITU, but also ICANN, the RIPE NCC, and other regional organizations, need to remain neutral in order to be able to function effectively. That also means for all of us that we need to show some restraint in asking these organizations to intervene in the functioning of the public core of the Internet, or the global digital networks that carry our data. So when you talk about sanctions, on this level we say: leave them to do their neutral work to protect the core of the Internet, to make sure that, indeed, human rights workers, activists, journalists, and healthcare workers can keep working on and using the Internet. Thank you.
Anriette Esterhuysen: Thank you, Ambassador Noorman. You actually make it sound so clear and simple, and yet we know in practice it's not. But, Lama, are there human rights norms and international human rights law that apply in these contexts?
Lama Fakih: Thank you. In parallel to the normative frameworks that the Ambassador clearly laid out, we also have the legal frameworks of international human rights law and international humanitarian law. Under international human rights law, governments have an obligation to ensure that internet-based restrictions and attacks on infrastructure are both necessary and proportionate to a very specific security concern. General shutdowns and attacks on infrastructure violate multiple rights, including the rights to freedom of expression and information, and hinder other rights, like the right to freedom of assembly. In their 2015 joint declaration on freedom of expression and responses to conflict situations, United Nations experts and rapporteurs declared that, even in times of conflict, using communications kill switches can never be justified under human rights law. Multiple UN resolutions have condemned the intentional disruption of internet access and call on states to refrain from carrying it out, including during conflict. Now, when it comes to the laws of war, while cyber warfare is not specifically addressed in the Geneva Conventions, the basic principles and rules on the methods and means of warfare remain applicable. That means that attacks must be targeted against military objectives; they can be neither indiscriminate nor arbitrary. And a general shutdown is likely to be unlawfully disproportionate, whether carried out by airstrikes or cyber warfare. 
The principle of military necessity under international humanitarian law permits measures that accomplish a legitimate military objective and that are not otherwise prohibited by international humanitarian law. Shutting down the internet may serve a legitimate military objective, but the principle of proportionality prohibits actions in which the expected civilian harm is excessive in relation to the military advantage. And we know that internet and phone shutdowns and attacks on critical infrastructure can cause considerable harm to the civilian population, including leading to death and injury by preventing civilians from communicating with each other about safety considerations. They also hinder the work of journalists and human rights monitors, who provide information on the situation on the ground, including reporting possible laws-of-war violations. And, importantly, the restrictions hamper the ability of humanitarian agencies to assess and provide assistance to populations at risk. The resulting lack of information about conditions on the ground may also increase the likelihood of injury and death. I think this is very acute in the case of Gaza, which Nadim has also laid out for us. A complete shutdown of internet and phone communications to large areas can also amount to a form of collective punishment by imposing penalties on people without a clear legal basis. And with regard to the ITU, Article 34 of the ITU's constitution, on the stoppage of telecommunications, gives license to ITU member countries to block telecommunications which, quote, "may appear dangerous to the security of the state or contrary to its laws, to public order or to decency," end quote. 
Article 35, on the suspension of services, gives member states, quote, "the right to suspend the international telecommunication service," end quote. These articles have been invoked by some states as granting legal authority to block communications, including to implement internet shutdowns. These provisions must, however, be applied together with, and subject to, the additional obligations that states have under international human rights law to respect the right to freedom of expression and other applicable rights. Both the Office of the High Commissioner for Human Rights and the Special Rapporteur on the rights to freedom of peaceful assembly and of association have called on states to consider revising those provisions in order to align them explicitly with international human rights standards. The Special Rapporteur has also recommended that the ITU issue guidance clarifying that those provisions should never be understood as authorizing internet shutdowns. In a welcome move, the ITU did take the historic step of condemning the communications blackout in Gaza and called for life-saving access to networks to be restored there.
Anriette Esterhuysen: Thanks very much for that Lama and I mean it’s actually I think really notable that there is an elaborate body of international laws and norms that do apply in these contexts. Before I open it to the audience I want to ask Peter, Nadim and our panelists in the room if you have anything to add before we move on to the next segment which will be looking at alternatives and responses. But anything to add at this point or any questions you have for one another? Anyone from the audience with a question? If you have a question you have to move to the front to the stage and Peter if you can solicit comments online please.
Peter Micek: Definitely, we do have comments online talking about the alarming trend of internet shutdowns, including during critical times like protests, elections, and civil unrest. If I could use my prerogative, I would just add that those ITU rules do require some process, some procedural requirements: that states notify the ITU, for example, of the disruptions and the reason for the temporary stoppage or blockage of telecoms. Often these procedural requirements are not followed, and you'll see that even in states that do allow for internet shutdowns under law, shutdowns are often not carried out according to procedure, so there is no notice to the population of the reason, the duration, or the extent of the blocking. As a lawyer, I'd say that has given courts the opportunity to step in and say: you may have this power, but you are not exercising it according to the methods and protocols set out in law. So just one more aspect. Ambassador Noorman and then Kojo.
Ernst Noorman: Thank you. One point I would like to add, because I didn't actually go into the Freedom Online Coalition, which we're chairing this year. Maybe not everyone knows about the Freedom Online Coalition, but it is a coalition of countries that has existed since 2011, including also an advisory network of NGOs, academia, and the private sector, on how to protect human rights online. We are chairing the coalition this year; we added four members, and now we are a 42-member coalition. The objective is to strengthen ourselves in the discussions, mainly at UN level, like now with the WSIS+20 process, from the angle of human rights: how to protect the internet, how to protect its open accessibility and interoperability. Freedom on the net is an extremely important topic, and internet shutdowns have been discussed time and again, including how to bring them up in the international discussions to ensure more responsible behavior of states. As Lama already indicated in her contribution on the rules which already exist on keeping communication lines open, we put a lot of effort into the GDC to ensure strong language on internet shutdowns. We were looking for stronger language and a stronger linkage to international law; in the end that was not part of it, but it did say that internet shutdowns should be avoided. We are very much aware as the Freedom Online Coalition that we have to continue working on that. It is not an issue that has been settled; actually, the Freedom on the Net report of Freedom House has shown that shutdowns have unfortunately been on the increase. Thank you.
Anriette Esterhuysen: Thanks, Ambassador.
Kojo Boakye: Yes, thanks for the interventions from my esteemed colleagues on the right and the overview from Peter. A couple of things I wanted to point out, in part because I feel like I've been part of this WSIS process since 2005, and along those years I've learned quite a lot from listening to people. We broadly use the term internet shutdowns, and there may be some people who are less learned who might think the whole place goes dark. I also want to point out that there are partial internet shutdowns that are arguably just as challenging, if not more so in some cases, especially when you see particular apps or particular parts of the internet which are used for freedom of expression and exercising human rights being blocked. The other piece that's really important as well, and sometimes we forget about it, and I have to remind people, not that many need reminding, is that there are so many people still offline. Peter's great overview spoke about 1.1 billion people in acute poverty, with nearly half of those people in countries where infrastructure is affected by war or fragility. It's important to remind ourselves that despite the huge effort and impact we've had in connecting people, so many people still remain offline. And actually, many of the problems we've spoken about, especially with regard to sustainability and the challenges that come from it, amount to people being in abject poverty, not being connected, not being able to get jobs, and actually destroying parts of the ecosystem themselves as they seek to eke out a life. So I just wanted to point that out, but I felt the opening comments were fantastic.
Anriette Esterhuysen: Thanks, Kojo. That's always important for us to acknowledge. We have two speakers from the audience. Please introduce yourself and be brief.
Audience: Hello, my name is JJ. I come from New York City, and I run an internet resilience research firm. This is one of the most robust conversations I've heard at this forum on the concept of shutdowns, and you mentioned some really good points. One gentleman, I believe the Dutch ambassador, mentioned eSIMs, and then there was also discussion about COP and energy. I've lived through an internet shutdown. I'm from Tigray, and I've experienced what it looks like from a diasporic perspective to be shut off from your community. So my question would be: if we accept that internet infrastructure is tied to power infrastructure, and we accept that eSIMs in Gaza have still allowed for constant live streaming of the atrocities, then do you feel that, as a coalition, the UN and various member states have a duty to their citizens who are from those regions, whom they have encouraged to emigrate to their countries for various economic benefits and support? Do they have a duty to focus on eSIMs and power infrastructure, because their citizens deserve the ability to connect and communicate with their family members in these shutdown regions? And how do you see a coalition that actually focuses on these nation-states' duty to their citizens getting involved in these conversations? Thank you.
Anriette Esterhuysen: Thanks, JJ. Next person.
Audience: Thank you, my name is Khaled Mansour. I serve on the Meta Oversight Board. As Lama made clear, we have a sufficient body of international human rights law to oblige countries not to cut internet services, especially when there are dire consequences, humanitarian consequences and not only human rights ones, and lives are lost. Now, the problem is that there is no enforcement or enforceability of this big body of international human rights law, and states, maybe rightly so, still have the sovereign right to take decisions, using other elements of international law to justify that. So my question to Lama, and maybe to Nadim and others, is: what would be the way out, in light of the lack of enforceable mechanisms, and secondly, in the absence, for now, of a technical solution, which will probably arrive in a few years, whereby internet access will not be subject to sovereign authority? Thank you.
Anriette Esterhuysen: Lama, one was directed to you, Nadim but anyone else want to comment on that?
Lama Fakih: I think issues around compliance with international human rights law are always a challenge. We do need to see enforcement from other countries. In the context of internet shutdowns, these are abuses that often take place alongside other violations that are also being perpetrated: the government of Iraq, for example, was not only restricting the internet but also using excessive force against protesters. And the actions that states can take in response are varied; they can include things like targeted sanctions, in some cases stopping the provision of military assistance, and condemnation. But accountability also needs to be a critical component of this. We have seen, in the context of war crimes that are being perpetrated, and we have seen with the crisis in Gaza, that there has been a real crisis in enforcement and in ensuring accountability for crimes. At the same time, there are judicial processes that are moving forward that I think we need to support and invest in. And collectively we need to make sure that we are using our influence and our leverage with other governments to ensure more rights-respecting practice.
Ernst Noorman: Thank you also for those easy questions. I felt, indeed, as one of the speakers said, that we have a lot of dialogue, and diplomacy means negotiation, but also a lot of dialogue. Enforcement of voluntary norms, or of international law, has always been a challenge, but that does not mean we are excused from the responsibility to have serious, in-depth consultations with other countries. In the position of chair of the Freedom Online Coalition, for instance, I always found it actually rather easy to bring up a subject like human rights online or internet shutdowns, and our concerns if that were to happen in a certain country, and then ask what they think about it. And as Lama said, we are not excused from responsibility, and we have to be able to do that. We have strong arguments for governments, and for me, then, to present those arguments: that such a measure is actually counterproductive. But in the end, in this world, it is complicated; we have different interests and different views on topics, and it is our role as diplomats to try to convince each other. Thank you.
Peter Micek: We have a hand online. Nadim, can you come in?
Nadim Nassif: I think the main issue here is accountability, and clearly there is a lack of accountability for governments and states who are committing violations, specifically when we are talking about Gaza and the Israeli government, which is not being held accountable. For me, as a Palestinian, it is very sad to see the double standard happening in this respect, because the enforcement that happened when the Russians invaded and occupied Ukraine was very clear. It did not involve negotiation; there was very clear opposition, from governments but also from companies, by the way, companies that were connected to that crisis in a very decisive way, in that they blocked certain content and allowed certain content, and companies like Starlink and others offered and gave help, not in a conditional way. This was totally the opposite when it comes to the Palestinians and to the Israeli government and how it is dealt with. And I think we learn from this that there is a lack of accountability in the way that some Western companies and some Western governments are dealing with the situation. I am not saying the decisive approach in Ukraine was wrong; it is great that it happened. The problem is that this approach does not happen in global majority countries when there is a tragedy, a conflict, a genocide. I am speaking about Palestine, but I could speak about other places where those who are supposed to be protecting and enforcing democracy and human rights values did not take any clear stand or any clear action.
Anriette Esterhuysen: Thanks, Nadim, for mentioning that. And I think that’s one of the issues we want to address in this session. How do we have a more coherent, consistent response, both in terms of international and international humanitarian law, but also from the Internet global multi-stakeholder community? Let’s now move on to, did you want to add something there quickly?
Lama Fakih: Quickly. I feel we didn’t respond to JJ’s question. And I just wanted to say I think in some respects the answer to Internet shutdowns has been, the response has been more from private sector engagement in terms of trying to find solutions for people on the ground. And I hear you in what you’re saying in terms of what more can states do to ensure that their residents or their citizens are able to maintain connectivity with others where they’re suffering from Internet shutdowns. But I do think it’s a space where innovative solutions coming from the private sector can really also reinforce what governments are doing to try to keep people connected. And I think, you know, looking at how ESIMs have been used in Gaza, but also thinking about how people have tried to circumvent shutdowns in places like Iran, there needs to be a strong alliance there in terms of thinking through the solutions.
Anriette Esterhuysen: Thanks, Lama. Yes, what responses? Let's move on to this. Kojo, to start with you, what are the alternatives? How can private sector companies respond? What are the concrete technical, operational and policy measures? And I see Peter is also adding a question here: can international law regarding shutdowns be taken to international courts?
Anriette Esterhuysen: I mean, I think, Lama, you implied, yes, we can. But, Kojo, yes, the private sector, how does it respond and how can it respond in such contexts?
Kojo Boakye : I like to think that our approach and effort in responding is comprehensive, despite some of the challenges that people have outlined, not only in their comments but also in their questions. You asked about policy. For us as a company, we have founded our work on human rights principles. We have a global human rights policy, we are part of the Global Network Initiative alongside other companies, and all our work is grounded in the UN Guiding Principles on Business and Human Rights. Having that mainstreamed into the company, not just within policy teams or the teams that focus on human rights, but into everything we do, including product development, is a super important start. Then it is about analyzing some of those issues. The short-term shocks we have, the total internet shutdowns and the partial shutdowns I mentioned, where particular services or parts of the Internet are shut down, are something we have been thinking about for a long time. Lama just spoke about what we do when governments shut down the Internet; I think the example was Iran, but we could cite a number of other governments who have sought to shut down the Internet, either by destroying infrastructure or, in some cases, by phoning mobile operators and telling them to turn the service off. We can think about how we might continue to deliver services to ensure that people do not remain voiceless. One of the steps we have taken is with WhatsApp, for example, which I assume everybody here has used.
We have built a proxy service for WhatsApp, where volunteers and others around the world set up servers to enable people, in places like Iran, to keep using a version of WhatsApp, to carry on connecting with people, delivering services and help, and sharing information about what is going on. I think that is important. But I have also been struck by the way we have approached infrastructure. One of the challenges a number of people on the panel have mentioned is around infrastructure, certainly submarine cables, and cable cuts were a vivid example. I am heartened by the fact that with our most recent builds, like 2Africa, the longest submarine cable in the world and the first to connect the east and west coasts of Africa, we buried the cable more than 50% deeper than submarine cables are normally buried, in part to guard against the kinds of cuts we have seen, and that my own family have suffered from in Ghana, when SAT-3 is cut and suddenly the Internet is cut off for five days while repairs come from the other end of the world. So I think that is a comprehensive approach to dealing with things.
And then in the midst of crisis, I think there are ways to help, and we have spoken about a range of crises: conflict, which we have seen affect infrastructure and services everywhere, but also natural disasters, and most recently I can think of the devastating earthquakes in Morocco and Turkey. In those instances it is also about working with disaster agencies, sharing data for good with disaster relief agencies to enable people to be helped in the way those agencies do, and sharing some of our network insights with our private sector partners in the mobile industry. Where is your network damaged most? How should you be routing particular traffic to ensure that people can stay connected? So it is about building networks that are more resilient to shocks and to disruptive policy, and mainstreaming that into the company; about working in the midst of crisis or internet shutdowns as they happen, as with the WhatsApp proxy; and then, for companies like ours that have a range of data that could be used to support people, thinking about how we share that, in a compliant way, with the organizations best placed to help. I think we have done all of that, but for us and many of our competitors, as well as our partners in government and civil society, there is much, much more work to do.
Anriette Esterhuysen: And thanks, Kojo. Just a follow-up question on that. I am not sure if it is a question of collaboration, but I am wondering: is there a collaborative mechanism of some kind between different private sector corporations?
Kojo Boakye : Yeah, I mean, Meta has always been about partnership. The #KeepItOn initiative that Peter described is not news to us; we have been a key part of that and continue to be. We continue to work with NGOs, as I mentioned, and other kinds of partners, and our infrastructure builds involve many partners in the private sector, in civil society, et cetera. So we will continue to approach it this way, through partnership. We think that is key.
Anriette Esterhuysen: And then in the context of these current conflicts, the conflict in Gaza, the war in Russia and Ukraine, what measures have you taken in the Gaza context, for example?
Kojo Boakye : I think we have a range of partners looking at that. We have sought to keep voice and expression open when possible. I know there are a number of allegations about what we and many other companies have done, but we have a team dedicated to this: not only our global human rights team, but also, within my team, a community engagement and action team that continues to engage with those on the ground. If I gave you more insight into the work we have done with journalists, and we know how many journalists have been killed in this conflict, and with other organisations on the ground, it would take far too much time. We are proud of the work we have done, but clearly, as I said earlier, we have much more work to do in this conflict, in the Ukrainian conflict you mentioned, and in the other conflict I have to stress we should not forget, the conflict in Sudan, in which more than 12 million people have been displaced, and we continue to work hard on that as well.
Anriette Esterhuysen: In fact, we have many Sudanese participants here at the IGF, so it is good to hear from them as well. Cynthia, I want to turn to you, because I think one of the most significant responses, at least from the inter-governmental sector, has been the ITU Council resolution on assistance to Palestine in restoring infrastructure. From where I look at this issue, we have probably had a more public and more deliberate response from the inter-governmental community than from the multi-stakeholder community, which I think is a challenge to the IGF. But tell us a bit about that resolution, how it came about, and how you see it having an impact.
Cynthia Lesufi: Thank you, and I also want to take this opportunity to thank the organisers for giving us the opportunity to speak here. Perhaps the first thing to highlight, in terms of the role of the ITU as far as the issue of Palestine is concerned, is the preamble of the ITU Constitution, which is quite clear on the growing significance of telecommunications for the economic and social development of all nations. The ITU Convention goes further to state that its objective is to facilitate the development of telecommunications services and to extend the benefits of new telecommunications technologies to all people around the globe. Earlier on, we heard about the persisting digital divide that the world continues to experience. To give specific numbers, the ITU has published that about 5.5 billion people are online to date, which gives us an estimate of about 68 per cent of the global population. However, this does not mean that our problems are over; speaking from the point of view of the ITU, the digital divide continues to haunt all of us. Having said that, you mentioned that the ITU has adopted a Council resolution, but perhaps there is a need to also mention that, before the adoption of the Council resolution this year in 2024, the ITU had already adopted a plan for building telecommunication infrastructure in countries in need.
For instance, the Plenipotentiary Conference of the ITU adopted Resolution 125, which, among other things, calls for a framework of activities by the three sectors of the ITU to continue and be enhanced in order to provide assistance and support to Palestine for building and developing its infrastructure. That resolution also calls for enabling Palestine to urgently extend, install, own and manage its own fibre and broadband telecommunication networks, including fibre-optic links between governorates and major cities. In addition, there is another Plenipotentiary resolution that the ITU has adopted, on assisting and supporting countries in special need in building their telecommunication sector. That resolution resolves that the special action undertaken by the Secretary-General of the ITU and the Director of the Telecommunication Development Bureau, with special assistance from the Radiocommunication Bureau and the standardisation sector, should continue to be activated in order to provide appropriate assistance and support to countries in special need; an annex to that resolution lists all the countries that need assistance in rebuilding their telecommunication infrastructure. But, again, as you have said, the ITU Council of 2024 recently adopted a new resolution, in addition to what I have mentioned.
Among other things, that resolution instructs the directors of the three sectors of the ITU to monitor and provide regular reports on the particular needs of Palestine in the field of telecommunications and to prepare proposals for effective technical assistance. In addition, it instructs them to carry out an assessment of the impact of the war in Palestine on ITU programmes and activities in the region and to report to the Council. Thirdly, the resolution calls for ensuring adequate financial and human resource mobilization, including under the budget of the ITU and the Information and Communication Technology Development Fund, for the implementation of the actions proposed by the various ITU resolutions. With this, it is quite clear that the ITU and its members are putting in place measures and procedures, guided by these resolutions, to ensure that countries experiencing the problems we currently see in Gaza, or in any other country affected by devastating wars and conflicts, can rebuild their telecommunication infrastructure. And I think I will stop there. Thank you.
Anriette Esterhuysen: Thanks a lot for that, Cynthia. I think it is also worth sharing, if there are people in the audience or online from the internet technical community who have engaged in similar initiatives to respond and assist. Because I would assume, Cynthia, that implementing that resolution will require collaboration: ITU member states are not going to be able to do it on their own. They will have to work with the private sector, national ministries, civil society and the technical community. But I want to hand over to Peter here; my battery is also running out. Any comments online that we should share? Any questions you have? Before we go into our final segment, we are just looking at what this community can do to help get us to a more concrete place of securing access and infrastructure.
Peter Micek: Thank you so much. I do want to follow up on that great presentation of the work at the International Telecommunication Union. It’s really been remarkable and heartening to see that really swift action across the ITU in response to the conflict and the war in Gaza. And as you say, coming out with coordination programs that are applicable across many situations of conflict and crisis. So building on that, I wanted to ask what coordination should we expect from humanitarian aid agencies to provide populations they serve with access to the internet or to secure and open communications tools? Now, I know, you know, in Sudan, for example, most agencies have trouble fulfilling their missions of accessing people in need. And in those places, there are citizen led mutual aid groups standing up to play a role in ensuring access. While in Gaza, as we’ve heard, there are restrictions on the transfer of telecoms, hardware and assets to those in need. So what can we expect and what should we expect from these aid agencies?
Anriette Esterhuysen: Anyone wanting to respond to that? Cynthia? Lama? Kojo, did you not hear the question? The question is what response we should expect from the international humanitarian agencies. Is that right, Peter? And you can repeat the question.
Peter Micek: Okay. Sure. So I was noting that in places like Sudan, humanitarian aid agencies are having trouble accessing the population in need. And their mutual aid groups have played a role in providing access to the internet. In Gaza, there are restrictions on transferring telecoms hardware to populations in need. In these situations, what should we expect of humanitarian and aid agencies to fulfill their mission of protection of civilians and the provision of aid? Should we expect them to provide access to the internet and secure telecoms?
Anriette Esterhuysen: I would say I think some of them try to do that. The International Red Cross and Red Crescent does try. Can they do it alone? I think that is very, very difficult. They recognize the need, but it is extremely difficult, and not all of them have the capacity. But Cynthia, I see you wanted to add something. Yes.
Cynthia Lesufi: Thank you. Earlier on, you said that it is not easy to implement some of the resolutions the ITU has adopted, and you are quite correct: it is not easy at all. You also pointed to the continuation of the multi-stakeholder approach in implementing some of this. Speaking from the ITU perspective, and trying to respond to the question: we operate in a diplomatic environment, the decisions we take are consensus-based, and continuous deliberation is quite important. So the ITU believes in the continuous facilitation of cross-stakeholder dialogue towards co-creating and aligning around a common agenda for action and advocacy regarding the non-fragmentation of internet infrastructure during conflict. From where we stand as ITU member states, this will ensure combining and leveraging the complementary roles and diverse capabilities of the larger stakeholder community, and promote inclusive participation in protecting internet infrastructure. Thank you.
Anriette Esterhuysen: Thanks, Cynthia. Lama, do you want to add? Just briefly also, I mean, to respond, Peter, to your question.
Lama Fakih: I think governments have an obligation to facilitate deliveries of humanitarian assistance. And when they are encumbering internet access, they are encumbering those operations in unlawful ways. And I think there will be humanitarian agencies that do seek to provide connectivity for their staff and in the communities where they’re operating so that they can deliver on their mission. It’s not their obligation to do so. And I think what can help is the monitoring of the impact that the lack of connectivity also has on the delivery of assistance, because that also helps to underline where the government is also not adhering to its obligations under the law.
Kojo Boakye : Everything that Lama said. And then, I asked for a repeat of the question, but I am very mindful that it does not feel like my place to call on humanitarian agencies to do much more than we at Meta see them doing already. We have supplied ad credits to a number of humanitarian agencies working on all the conflicts you have mentioned and more. WhatsApp has become a key tool for many humanitarian agencies, and we see the effort they go to. I think some of the comments that have been made about the breakdown of the international community's response to many of these crises probably highlight some of the challenges they face, and it would be great to have someone from that community, the disaster response community, on this panel to speak to it. But thanks for repeating the question.
Anriette Esterhuysen: In fact, we did have real interest from Philippe Stoll of the Red Cross in being here, but he was not able to participate. They do very significant work in this field. I see there is a question. I don't know if there are online hands, Peter, but I see two people. Please go ahead briefly, introduce yourself and ask your question.
Audience: Thank you. My name is Michel Lambert. I work with the Canadian organization eQualitie. We are dedicated to building alternative technology, particularly to respond to internet shutdowns and network disruption, and we also manage the SplinterCon process, which I hope most of you know. SplinterCon gathers, at every meeting, hundreds of people with new technologies, particularly to respond to those situations; the last edition was last week in Berlin. My comment is this: we have all kinds of technologies, some working well, some less well, but globally speaking we do not have access to the resources to implement them properly, while the political side and the big private sector are struggling to find ways to respond to the issues. We could play a role there, but we are still kept at the margins; we only have resources for small projects. It looks nice, it is fun, but at the end of the day we are not responding to the crisis at the level we could. So I feel this is a bit of a lost opportunity for the world, and that we could engage more, particularly in crises where we know that tomorrow there will be another situation, and we are still not engaging enough to use the resources we have. So this is a call: we are here, that community is here, it is an alternative response, and we could contribute more if there were some means to engage with us. Thank you.
Anriette Esterhuysen: Thanks Michel and I think we’re going to come back to you how this community can respond soon. The next speaker please.
Audience: Yes, hi everyone. My name is Mike Walton. I'm from the United Nations Refugee Agency. I just want to flag the Connectivity for Refugees project that we work on, which is truly a multi-stakeholder approach, working with the ITU, the GSMA and many governments. Connectivity in crisis is critical, and as soon as people are connected, information is critical: we have 14 million refugees visiting our help websites, and without that access they would not be able to reach that information. But I also want to flag that as soon as those connectivity and communication channels are up, they can also be subject to misuse. So, across the multiple languages and the multiple platforms that exist, my question to the panel is: how can we make sure that we can moderate, and ensure content policies are properly enforced and managed, when there are so many different languages and so many different capacities in place? And what is the balance between the human role in this and the AI role, if there is one at all?
Anriette Esterhuysen: Thanks for that. We actually fairly deliberately did not want to delve into content moderation, because it is in itself such a challenging issue. But I am going to let Peter take over from me right now, and I see Nadim has his hand up.
Peter Micek: Yes, just before that, there are some comments in the chat about an initiative in Ukraine of operators trying to provide hardware and support to keep the internet working. There is also concern about states asserting sovereignty and sovereign rights to push back against efforts to serve populations in their states. So yeah, Nadim, please take it away.
Nadim Nassif: Thank you, Peter. I just wanted to add something. I know that you, Anriette, said you don't want to get into content moderation, but it is important also to speak about access on platforms, access that is a responsibility of the companies, because we are speaking about physical infrastructure and getting access through it. But what happens if you do have internet access, yet the platform is deplatforming you or preventing you from using it? We saw this on social media platforms, including Meta, which has been deplatforming and restricting accounts during the war in Palestine and in other places, not allowing newsworthiness, not allowing journalists to do their job, and not allowing human rights defenders to document the violations happening on the ground. But this is not only a social media issue; it is also a digital payment platform issue. We know that PayPal, for example, a major one, does not allow certain countries to use its platform, and this is very problematic. We know that there are crowdfunding platforms that have deplatformed certain people when they tried to collect donations for Gaza or for other places. So it is not only about infrastructure; we also need to speak about the access that people have or do not have, and from which countries, especially when it comes to the global majority, where people are either not allowed to use these platforms at all or, even when they are on a platform, are being deplatformed. I think this is also an important issue to deal with.
Anriette Esterhuysen: Peter, anyone online who wants to respond? Anyone here? I think this gets at the fragmentation discussions we have seen at previous Internet Governance Forums. The IGF also has the Policy Network on Internet Fragmentation, and one aspect of their work that I think is very significant is their highlighting of the fragmentation of user experience, which is something we cannot underestimate. Do any of the panelists want to add anything before we go into our what-next segment? Any other questions from the room? I saw a comment earlier in the Zoom, while I was still in the Zoom, expressing concern that internet infrastructure can also be used harmfully, by people with bad intentions. I think that is an important point, but I want to come back to it, because I think Lama covered it. I was part of the Global Commission on the Stability of Cyberspace, which worked on the public core norm, and our conclusion was that when it comes to the internet, it is very difficult to isolate which part of the infrastructure is being used by hospitals and aid workers, as opposed to what is being used by bad actors. And I think your point, Lama, about proportionality is very relevant here. The notion that it is legitimate to disrupt or destroy internet infrastructure because bad actors are using it illustrates how disproportionate such a response is. But I think there was a... Peter, am I right?
Peter Micek: There was a comment to that effect in the chat. Yes, definitely, that under the guise of protecting internet infrastructure, states could be protecting combatants, or protecting their own sovereignty in ways that are ultimately harmful, or malicious even.
Anriette Esterhuysen: Well, let's move on, if nobody is going to add anything. I also want to recognize that we have probably not done justice to the range of responses, alternatives and solutions being developed by businesses, civil society organizations, relief organizations, activists, national governments and local governments. There is a lot happening in the internet technical community as well, but it is very fragmented, and it is very hard for us to see those responses connect to normative frameworks and to the application of, and compliance with, international law and voluntary norms. But let's now think about the internet governance multi-stakeholder community that comes together here at the IGF. Panelists, what do you think this community should do? You all made it quite clear that the gap is not really at the level of norms, or if there is a gap at the level of norms, please highlight that. What can the IGF do, and what can participants in the IGF community, from the internet technical community as well as governments, the UN system and civil society, do to prevent what seem to be very ad hoc responses to these types of disruption and destruction? Ad hoc, characterized by double standards, often too late, too little. So I'm going... Kojo, you look like you have something to say, so I'm going to start with you.
Kojo Boakye: I was trying to capture the question in terms of what we can do as a multi-stakeholder group, or as individuals.
Anriette Esterhuysen: I think you’re very welcome to talk about what you can do as individuals or individual companies, but what we want to get at is a more structured, more coherent intervention, one that can galvanize the diversity of role players and actors that we have in this forum, and that can create more compliance, more consistency, more coherence, and ultimately ensure that people have access to the internet in contexts of conflict and crisis.
Kojo Boakye: I think that’s helpful. Having sat on this panel and learned from others, I think this diversity that you speak about is beautiful in many ways, but it also creates some challenges in understanding exactly what different parties, and indeed the multi-stakeholder group, might do. Conversations like this are very, very helpful, but capturing the ways in which companies, civil society organizations, governments, international organizations, and others are working toward this would be the first step. And to be frank, because I’m a simple man and we like to simplify things sometimes in order to get them done, carving away some of the things that drive that diversity, if that makes sense. There are a myriad of things, and that can create some confusion and ambiguity about exactly what we might do. Clarity over what works, and I know that there are infinite things that work, but clarity over what works and what’s optimal would be really, really important. Whether that should happen under the guise of the IGF, which I think continues to be a super important platform, or the ITU, or the UN itself, it’s above my pay grade to answer that particular question.
Anriette Esterhuysen: Thanks, Kojo. Ambassador.
Ernst Noorman: I’m not sure it’s my pay grade either, but I will go into the UN part. The discussions on norms are incredibly difficult and challenging, and of course geopolitics is in play in all these discussions. But we now have a set of norms which are non-legally binding, but which are based on and complement international law. So we have recognized that international law applies in cyberspace. This means we have legally binding obligations under the UN Charter, but also under international humanitarian law and human rights law. What we need now is not new norms, as some countries are suggesting; we have to make sure that the 11 norms we have endorsed are actually implemented. And to ensure implementation is effective, we need the engagement of all stakeholders. This means a role for the UN agencies and the technical community, as well as the private sector, civil society, and academia. Next year the Open-Ended Working Group will end its mandate, and we hope it will be followed by an action-oriented mechanism, the so-called Programme of Action, allowing for constructive and active participation of stakeholders. So our focus right now should be on implementation. That implementation should be not only multi-stakeholder but also multidisciplinary and multisectoral. We also need to develop a common understanding of how international law applies, and we have welcomed the resolution recently adopted at the conference of the ICRC that provides further elements on the application of international humanitarian law in the context of the use of ICTs. And we should work hard on capacity building; that is also something we cannot do alone as states, we have to do it all together. This is the basis for implementation of the normative framework, we can leave no one behind, and growing expertise on cyber is crucial.
We always say that cybersecurity and cyberspace are teamwork; we have to work together to make cyberspace resilient and to keep it open, free, and accessible. A nice example of how we do capacity building at the UN level is the Women in Cyber programme, where, together with a number of countries, we have meanwhile trained about 47 women from countries who were not actively participating in the discussions in New York. Those 47 women are now actively at the table, well trained and making good contributions. Not only are female voices heard (in 2023, more than 50% of the contributions were from women), but countries that were previously not involved and did not really understand the discussions are now fully involved, understand the topics, and can play an important role. So in that sense I think it’s a common and shared responsibility to make sure that everyone has that capacity, and that is also multi-stakeholder involvement, which is one of the main topics at the IGF here today. Thank you.
Anriette Esterhuysen: Thanks, Ambassador. At least part of what you said sounds like something that an IGF best practice forum, one of the intersessional modalities in the IGF, could actually look at. My organization was very involved in a best practice forum on online gender-based violence about six, seven, eight years ago, and it really helped map how our responses can take place without compromising human rights. But Shareef, what do you think can be done?
Anriette Esterhuysen: Thanks a lot, Shareef. Lama?
Lama Fakih: Thank you. I think we need to stigmatize internet shutdowns so that shutting down the internet is the act of a pariah state. The internet is so intertwined with our ability to realize our rights that undermining connectivity should be so abhorrent that states in good standing do not engage in this kind of behavior. We do that by enforcing the normative framework with things like the KeepItOn campaign, which Access Now has spearheaded, and by working collectively to minimize the effectiveness of attempts to shut down the internet. A lot of the initiatives there have been generated by the private sector, and we can think through more ways to undermine governments that are trying to do this. There’s a role here also for internet service providers, because it’s governments that dictate the blackouts, but internet service providers that actually implement them. What more can they do to push back against these requests, to interpret them narrowly? Is there scope under domestic law to file lawsuits in response to these demands? Businesses also have responsibilities under the UN Guiding Principles on Business and Human Rights, which should anchor them in thinking about how to respond to these requests, which can be far-reaching and have far-reaching consequences for human rights.
Anriette Esterhuysen: Thanks for that, Lama. Peter, there’s a comment online.
Peter Micek: Yes, just a very practical proposal in the chat: a project to fund and operate a mini fleet of UN cable-laying ships. It’s very expensive to lay these submarine cables, and very expensive to operate the ships that repair them, but to help LDCs and developing countries, especially after climate crises, get faster service, maybe a little UN cable-keeping force.
Anriette Esterhuysen: Yeah, a rapid response. Kojo, yes, and by the way, I didn’t buy that comment about the simple man, not for a minute. So I’m going to give you the mic again.
Kojo Boakye: Microphone. I’m so sorry, I should always remember the microphone. I just wanted to add to what Lama was saying about stigmatizing, which I think is super important. I had a quick chat with her offline, and we spoke about the need to penalize as well. One of the things we’ve found really helpful in our engagements with governments that have decided on either full internet shutdowns or partial shutdowns of things like Instagram and Facebook is the actual cost to the economy. When any of us speak with policymakers, many of their decisions boil down to costs and benefits. Some of the decisions we’ve spoken about here today have boiled down to a government or a regime asking, will this enable me to stay in power or not; in other cases, it’s an economic cost or whatever else. Pointing out the cost, and letting that information become part of the calculus of their cost-benefit analysis, is so, so important. So I align with Lama: stigmatize, and where possible penalize, but really that engagement and ensuring that governments understand the cost is really, really important. And the fact that in many countries the digital economy now is the economy, it underpins everything, has become helpful in that.
Anriette Esterhuysen: It’s absolutely true, but also, having worked to oppose and counter shutdowns, sometimes a government that shuts down the internet is very much aware of the cost, and that does not stop them.
Kojo Boakye: That’s what I meant. Sometimes you’re not going to get around a government that believes this is the only way to maintain national security or stay in power, but increasingly you want that information to be part of their calculus, and I think that’s really, really important.
Cynthia Lesufi: Yes, thank you. I’m just thinking aloud as I sit here listening to my fellow panellists, trying to answer the question that you have asked. And I’m saying to myself, perhaps we, as the Internet Governance Forum community, have a good opportunity in front of us. Next year we’re talking about the WSIS+20 review process, and perhaps there is a need for us to consider highlighting or reflecting some of the ideas shared here in the WSIS+20 review process, to make them more visible and try to solve some of the challenges that we spoke about. So that’s what I thought we should consider as the IGF community. Thank you.
Anriette Esterhuysen: Thank you very much for that. Very relevant remarks, Cynthia. Nadim?
Nadim Nashif: Yes, I think, obviously, as a civil society organization we are limited in our capacity and what we can do. But it’s important to keep up our work in terms of research and documenting the violations that are happening, from the side of governments but also from the side of companies, to make sure that there is at least a process of accountability and publicity for those violations. Beyond that, it’s also important for us as civil society, especially those in the humanitarian field, to give the assistance needed: eSIMs, short-term assistance, and other small technical solutions that are still within our capacity to help residents, especially segments of the population like journalists, media people, and first responders, whose work is urgent and whom it is important to keep online and connected with the rest of the world.
Anriette Esterhuysen: Thanks, Nadim. Peter?
Peter Micek: Thank you. So the chat, again, is getting very technical and concrete, which is great. There’s another suggestion for eSIM infrastructure and community networks that exist on top of a decentralized power grid. I started by talking about how conflicts impact access to electricity, and there is a proposal for a fleet of Wi-Fi-capable networks: portable batteries charged by mobile solar stations. So, yes, very concrete proposals. But if I could use my prerogative and speak a bit to the normative and governance discussions, this has been really rich. From our perspective, the pathway to preserve civilian telecommunications is to have armed groups and armies respect it as non-military in nature. As the OHCHR recently said, and as we found through our own documentation, the Internet is a resource indispensable to the survival of civilian populations. As Lama said, the principles of distinction and proportionality apply. The corporate and private sector absolutely plays a role: they’re being targeted with partial shutdowns, but they also can at times push back, help circumvent disruptions, and even hold governments accountable. We’ve seen companies file lawsuits against governments for demanding disproportionate disruptions. As we’ve gathered today at the IGF, it’s a good time to reassert that if we’re going to reach the 2030 Agenda and provide Internet access in all countries, we need to protect our common digital home, recognize the protected status of the public core, and remind folks that to preserve connectivity in conflict, the responsibility lies first and foremost with the parties to the conflicts themselves, and, in the environmental and climate realm, with those accountable for climate change.
This, of course, means, as we’ve heard, that we can and must proceed in concert, together. We can’t see destruction of ICTs normalized as part of conflict; connectivity should instead be part of the solution. Protecting civilians in conflict and preserving access to good, legitimate information sources requires robust connectivity. If I could just put out there a few ways forward I’ve heard: the Global Digital Compact did set out good language on Internet shutdowns, and perhaps a working group to implement that language could come forth. There could be a new best practice forum on this topic at the IGF; I know we’ll be convening again soon in Norway in June. The Freedom Online Coalition has spoken and will speak further, I think, about telecommunications access in conflict, and I expect it to continue coordination among its 41 member states. And on accountability, I think it is incumbent on courts like the International Criminal Court and regional bodies, and on the OECD, which recently found against telecom A1 in Austria for contributing to a shutdown, to pursue accountability. And even the UN Security Council, I think, should incorporate attention to access to telecommunications in the conflicts it monitors. So that’s enough from me, I think, using my prerogative. Thank you.
Anriette Esterhuysen: Thanks very much, Peter. I just want to add: Lama said we should stigmatize shutdowns. I would like us to stigmatize violation of international law. I think we’ve become, as a global community, far too tolerant of the disregard of international law by some member states in the UN community. I really do think there’s a role for the IGF here, because while the international legal framework might clarify what some of the accountabilities of states are, we need to explore what the technical community can do. We might see an absence of a technical community voice in these contexts, and that might simply be because they’re not clear on exactly what their roles and accountabilities are. That’s something the IGF can explore. Cynthia mentioned WSIS, and there are two concepts that were part of WSIS that I think are very relevant here: international cooperation and digital solidarity. We’re in an era where people talk about a digital Cold War, and I think that’s the work of the IGF: to use this space to build that solidarity and cooperation, to counteract digital Cold Wars, and to build collaboration across borders and across stakeholder groups to ensure that people have access always, everywhere. So thanks very much to everyone, to the panelists, and thank you to the MAG members, to Lito and Peace and others who assisted with organizing this session, and to the captioners and the tech team (even though the Zoom was a disaster in the beginning, we forgive you), and to everyone who joined this session. Thank you very much. And a big hand to the panel.
Audience: Excuse me, I’m here, I’m just short; I was waiting, sorry. I just want to suggest something. I’m from Sudan, and I work as a researcher with the grassroots movement. From 2018 to 2022, the internet was shut down in Sudan for a cumulative total of up to four months, and since the war started, tomorrow it will be one year without internet in Sudan. When the international community talks about humanitarian aid, especially inside Khartoum, the food programme asks people to apply online, and that’s when we say this is not a workable idea: you need to talk with the grassroots people and the emergency room people to find the solution, because how can we ask people to apply online for food aid when there is no internet? The second thing: when it comes to the internet, the RSF can bring in Starlink, and it costs around two dollars for one hour, just for people to ask for help. If the international community can work with the Starlink people, or work with the grassroots movement, we can find a solution to help those people ask for help online. And yes, the sanctions of thirty years also affect our rights as human beings to have access and to ask for help. Thank you.
Lama Fakih
Speech speed
156 words per minute
Speech length
2059 words
Speech time
787 seconds
Disruptions violate human rights and hinder humanitarian aid
Explanation
Internet disruptions and infrastructure destruction violate human rights and impede humanitarian assistance. These actions prevent civilians from accessing vital information and services during crises.
Evidence
Examples of impacts include journalists unable to document abuses, students cut off from lessons, and people unable to access healthcare.
Major Discussion Point
Impact of Internet Disruptions and Infrastructure Destruction
Agreed with
Nadim Nashif
Peter Micek
Agreed on
Internet disruptions violate human rights and hinder humanitarian aid
Internet shutdowns are counterproductive and increase violence
Explanation
Studies have shown that internet shutdowns are ineffective in deterring violence and can actually lead to an increase in violent incidents. This contradicts the justifications often given by governments for implementing shutdowns.
Evidence
A study by Stanford Global Digital Policy Incubator found a quadrupling of violence when networks were disrupted compared to when the internet stayed on.
Major Discussion Point
Impact of Internet Disruptions and Infrastructure Destruction
International human rights law requires internet restrictions be necessary and proportionate
Explanation
Under international human rights law, governments must ensure that any internet-based restrictions are both necessary and proportionate to a specific security concern. General shutdowns and attacks on infrastructure violate multiple rights.
Evidence
UN resolutions have condemned intentional disruption of internet access and call on states to refrain from carrying them out, including during conflict.
Major Discussion Point
Normative Frameworks and Legal Obligations
Differed with
Kojo Boakye
Differed on
Role of private sector in addressing internet disruptions
Laws of war principles of distinction and proportionality apply to internet infrastructure
Explanation
The principles of distinction and proportionality in international humanitarian law apply to attacks on internet infrastructure. Shutdowns and attacks on critical infrastructure can cause considerable harm to civilian populations and may be disproportionate to military objectives.
Evidence
Internet and phone shutdowns can lead to death and injury by preventing civilians from communicating about safety considerations and hindering humanitarian aid.
Major Discussion Point
Normative Frameworks and Legal Obligations
Nadim Nashif
Speech speed
151 words per minute
Speech length
1851 words
Speech time
730 seconds
Destruction of infrastructure in Gaza has devastating humanitarian impact
Explanation
The destruction of telecommunications infrastructure in Gaza has severe humanitarian consequences. It prevents people from communicating with loved ones, calling for help, or accessing vital services during the crisis.
Evidence
Research by 7amleh estimates that 75% of the telecommunications infrastructure in Gaza was damaged, with 50% totally destroyed.
Major Discussion Point
Impact of Internet Disruptions and Infrastructure Destruction
Agreed with
Lama Fakih
Peter Micek
Agreed on
Internet disruptions violate human rights and hinder humanitarian aid
Civil society documenting violations and providing technical assistance
Explanation
Civil society organizations play a crucial role in documenting violations of internet access and providing technical assistance to affected populations. This work is important for accountability and helping people stay connected in crisis situations.
Evidence
Mention of providing ESIMs and other technical solutions to help journalists, media people, and first responders stay online and connected with the rest of the world.
Major Discussion Point
Responses and Alternatives to Internet Disruptions
Mohamed Shareef
Speech speed
112 words per minute
Speech length
504 words
Speech time
269 seconds
Climate change threatens internet infrastructure in small island states
Explanation
Small island developing states are particularly vulnerable to climate change impacts on their internet infrastructure. Rising sea levels and increased frequency of extreme weather events pose significant threats to submarine cables and other critical infrastructure.
Evidence
Example of the Maldives, where submarine cables may need to be redeployed inland due to sea level rise, but limited land area poses challenges.
Major Discussion Point
Impact of Internet Disruptions and Infrastructure Destruction
Ernst Noorman
Speech speed
162 words per minute
Speech length
1814 words
Speech time
668 seconds
UN norms prohibit damaging critical infrastructure in cyberspace
Explanation
The United Nations has established non-binding norms on responsible state behavior in cyberspace. These norms include prohibitions on intentionally damaging critical infrastructure through ICT activities.
Evidence
Reference to 11 norms endorsed by the UN General Assembly, including those related to critical infrastructure protection.
Major Discussion Point
Normative Frameworks and Legal Obligations
IGF could establish best practices for protecting internet access
Explanation
The Internet Governance Forum (IGF) could play a role in establishing best practices for protecting internet access. This could involve multi-stakeholder engagement and focus on implementing existing norms rather than creating new ones.
Evidence
Mention of the need for multi-stakeholder, multidisciplinary, and multisectoral implementation of existing norms and international law.
Major Discussion Point
Role of the Internet Governance Community
Agreed with
Cynthia Lesufi
Anriette Esterhuysen
Agreed on
Need for multi-stakeholder approach to protect internet access
Cynthia Lesufi
Speech speed
121 words per minute
Speech length
1098 words
Speech time
543 seconds
ITU resolutions call for assistance in rebuilding telecom infrastructure
Explanation
The International Telecommunication Union (ITU) has adopted resolutions calling for assistance in rebuilding telecommunications infrastructure in countries affected by conflict or disasters. These resolutions aim to support countries in special need for building their telecommunication sector.
Evidence
Reference to ITU Resolution 125 and a recent ITU Council resolution on assisting Palestine in restoring infrastructure.
Major Discussion Point
Normative Frameworks and Legal Obligations
Importance of highlighting issues in WSIS+20 review process
Explanation
The upcoming WSIS+20 review process presents an opportunity to highlight issues related to protecting internet infrastructure during conflicts. This could help address challenges discussed and make them more visible in international discussions.
Major Discussion Point
Role of the Internet Governance Community
Agreed with
Ernst Noorman
Anriette Esterhuysen
Agreed on
Need for multi-stakeholder approach to protect internet access
Kojo Boakye
Speech speed
0 words per minute
Speech length
0 words
Speech time
1 seconds
Private sector developing technical solutions like WhatsApp proxy
Explanation
Private sector companies are developing technical solutions to help maintain internet access during shutdowns or disruptions. These solutions aim to provide alternative means of communication when traditional channels are blocked.
Evidence
Example of WhatsApp developing a proxy service to enable continued use of the app in places like Iran where internet access is restricted.
Major Discussion Point
Responses and Alternatives to Internet Disruptions
Differed with
Lama Fakih
Differed on
Role of private sector in addressing internet disruptions
Peter Micek
Speech speed
142 words per minute
Speech length
1626 words
Speech time
683 seconds
Humanitarian agencies working to provide connectivity in crises
Explanation
Humanitarian agencies are increasingly recognizing the importance of providing connectivity in crisis situations. They are exploring ways to ensure access to the internet and secure communication tools for the populations they serve.
Evidence
Reference to agencies scrambling to create workarounds through technological innovation and joining emergency telecommunications clusters.
Major Discussion Point
Responses and Alternatives to Internet Disruptions
Agreed with
Lama Fakih
Nadim Nashif
Agreed on
Internet disruptions violate human rights and hinder humanitarian aid
Anriette Esterhuysen
Speech speed
147 words per minute
Speech length
3419 words
Speech time
1388 seconds
Need for rapid response capabilities to repair infrastructure
Explanation
There is a need for rapid response capabilities to repair internet infrastructure damaged during conflicts or disasters. This could involve international cooperation and dedicated resources for quick deployment.
Evidence
Mention of a proposal for a mini fleet of UN cable laying ships to help developing countries get faster service after climate crises.
Major Discussion Point
Responses and Alternatives to Internet Disruptions
IGF should explore roles and responsibilities of technical community
Explanation
The Internet Governance Forum (IGF) should explore the roles and responsibilities of the technical community in protecting internet access during conflicts and crises. This could help clarify accountabilities and encourage more active involvement from the technical community.
Major Discussion Point
Role of the Internet Governance Community
Agreed with
Ernst Noorman
Cynthia Lesufi
Agreed on
Need for multi-stakeholder approach to protect internet access
Agreements
Agreement Points
Internet disruptions violate human rights and hinder humanitarian aid
Lama Fakih
Nadim Nashif
Peter Micek
Disruptions violate human rights and hinder humanitarian aid
Destruction of infrastructure in Gaza has devastating humanitarian impact
Humanitarian agencies working to provide connectivity in crises
The speakers agree that internet disruptions and infrastructure destruction have severe humanitarian consequences, violating human rights and impeding aid efforts.
Need for multi-stakeholder approach to protect internet access
Ernst Noorman
Cynthia Lesufi
Anriette Esterhuysen
IGF could establish best practices for protecting internet access
Importance of highlighting issues in WSIS+20 review process
IGF should explore roles and responsibilities of technical community
The speakers emphasize the importance of multi-stakeholder engagement in establishing best practices and norms for protecting internet access, particularly through forums like the IGF and WSIS+20 review process.
Similar Viewpoints
Both speakers emphasize the importance of international legal frameworks and norms in regulating state behavior regarding internet access and infrastructure protection.
Lama Fakih
Ernst Noorman
International human rights law requires internet restrictions be necessary and proportionate
UN norms prohibit damaging critical infrastructure in cyberspace
Both speakers highlight the role of non-state actors in developing technical solutions and providing connectivity during crises or shutdowns.
Kojo Boakye
Peter Micek
Private sector developing technical solutions like WhatsApp proxy
Humanitarian agencies working to provide connectivity in crises
Unexpected Consensus
Climate change impact on internet infrastructure
Mohamed Shareef
Anriette Esterhuysen
Climate change threatens internet infrastructure in small island states
Need for rapid response capabilities to repair infrastructure
While the discussion primarily focused on conflict-related disruptions, there was unexpected consensus on the need to address climate change impacts on internet infrastructure, particularly for vulnerable states.
Overall Assessment
Summary
The main areas of agreement include the humanitarian impact of internet disruptions, the need for multi-stakeholder approaches to protect internet access, the importance of international legal frameworks, and the role of non-state actors in providing technical solutions.
Consensus level
There is a moderate to high level of consensus among the speakers on the fundamental issues surrounding internet disruptions and infrastructure protection. This consensus suggests a strong foundation for developing comprehensive strategies to address these challenges, though specific implementation details may require further discussion and negotiation.
Differences
Different Viewpoints
Role of private sector in addressing internet disruptions
Lama Fakih
Kojo Boakye
International human rights law requires internet restrictions be necessary and proportionate
Private sector developing technical solutions like WhatsApp proxy
While Lama Fakih emphasizes the legal obligations of governments under international human rights law, Kojo Boakye focuses on the role of private sector companies in developing technical solutions to maintain internet access during disruptions.
Unexpected Differences
Focus on climate change impacts
Mohamed Shareef
Other speakers
Climate change threatens internet infrastructure in small island states
While most speakers focused on conflicts and intentional disruptions, Mohamed Shareef unexpectedly highlighted the threat of climate change to internet infrastructure in small island states. This broadens the discussion beyond human-caused disruptions to include environmental factors.
Overall Assessment
summary
The main areas of disagreement centered around the roles and responsibilities of different stakeholders (governments, private sector, international organizations) in addressing internet disruptions and protecting infrastructure.
difference_level
The level of disagreement was moderate. While speakers generally agreed on the importance of maintaining internet access, they had different perspectives on how to achieve this goal. These differences reflect the complex, multi-stakeholder nature of internet governance and highlight the need for collaborative approaches that involve all relevant actors.
Partial Agreements
Both speakers agree on the importance of international frameworks for protecting internet infrastructure, but they focus on different aspects. Ernst Noorman emphasizes UN norms prohibiting damage to infrastructure, while Cynthia Lesufi highlights ITU resolutions for rebuilding infrastructure after conflicts or disasters.
Ernst Noorman
Cynthia Lesufi
UN norms prohibit damaging critical infrastructure in cyberspace
ITU resolutions call for assistance in rebuilding telecom infrastructure
Similar Viewpoints
Both speakers emphasize the importance of international legal frameworks and norms in regulating state behavior regarding internet access and infrastructure protection.
Lama Fakih
Ernst Noorman
International human rights law requires internet restrictions be necessary and proportionate
UN norms prohibit damaging critical infrastructure in cyberspace
Both speakers highlight the role of non-state actors in developing technical solutions and providing connectivity during crises or shutdowns.
Kojo Boakye
Peter Micek
Private sector developing technical solutions like WhatsApp proxy
Humanitarian agencies working to provide connectivity in crises
Takeaways
Key Takeaways
Internet disruptions and infrastructure destruction have severe humanitarian impacts and violate human rights
There are existing normative frameworks and legal obligations that prohibit damaging critical internet infrastructure
Multi-stakeholder collaboration is needed to develop technical solutions and provide connectivity in crises
The Internet Governance Forum (IGF) community has a role to play in establishing best practices and exploring responsibilities of different stakeholders
Resolutions and Action Items
Explore establishing an IGF best practice forum on protecting internet access in conflicts and crises
Highlight internet infrastructure protection issues in the WSIS+20 review process
Develop a working group to implement Global Digital Compact language on internet shutdowns
Freedom Online Coalition to continue coordination on telecommunications access in conflict
Unresolved Issues
How to effectively enforce international laws and norms against internet shutdowns
Specific roles and responsibilities of the technical community in protecting internet infrastructure
How to address content moderation and platform access issues during crises
Balancing legitimate security concerns with maintaining internet access
Suggested Compromises
Focus on implementing existing norms rather than developing new ones
Engage with governments to highlight economic costs of shutdowns while acknowledging security concerns
Develop rapid response capabilities for infrastructure repair while respecting state sovereignty
Thought Provoking Comments
Palestinians in Gaza have endured over a year of ongoing phone and internet disruption as a result of relentless airstrikes by the Israeli government and other actions that the government has taken. These actions have included damage to core communication infrastructure, cuts to electricity, fuel blockades, and apparently deliberate shutdowns through technical means.
speaker
Lama Fakih
reason
This comment provided concrete examples of how internet infrastructure can be disrupted or destroyed during conflict, highlighting the multifaceted nature of the problem.
impact
It shifted the discussion from general concepts to specific, real-world impacts, prompting further exploration of the humanitarian consequences of such disruptions.
Small island states, island nations, our connectivity to the rest of the world is really primarily these days with submarine cables. So if you take the case of the Maldives, we have a few submarine cables landing in the Maldives. Two of them connect us to India and Sri Lanka, and we have three more cables that we are working on connecting us directly to the Southeast Asia and all the way up to Europe.
speaker
Mohamed Shareef
reason
This comment introduced the unique challenges faced by small island nations in maintaining internet connectivity, bringing attention to the geographical and infrastructural aspects of the issue.
impact
It broadened the scope of the discussion to include climate change and natural disasters as factors affecting internet access, leading to considerations of how to build more resilient infrastructure.
We have a set of norms which are non-legally binding but are based on and complement international law. So we have recognized that international law applies in cyberspace. This means that we have some legally binding obligations under the Charter but also under international humanitarian law and human rights law.
speaker
Ernst Noorman
reason
This comment provided important context on the existing legal and normative frameworks governing internet access and infrastructure protection during conflicts.
impact
It shifted the conversation towards discussing implementation and enforcement of existing norms, rather than creating new ones, and emphasized the need for multi-stakeholder engagement in this process.
I think we need to stigmatize internet shutdowns so that it is, you know, it is the act of a pariah state to shut down the internet. The internet is so intertwined in our ability to realize our rights that undermining connectivity is, you know, so abhorrent that states in good standing do not exercise this kind of behavior.
speaker
Lama Fakih
reason
This comment proposed a strong stance on internet shutdowns, framing them as unacceptable actions by states.
impact
It sparked discussion on how to create social and political pressure against internet shutdowns, leading to suggestions for penalization and emphasizing the economic costs of such actions.
Perhaps we, as the Internet Governance Forum community, have a good opportunity in front of us. I mean, next year, we’re talking about the WSIS+20 review process. And some of these ideas that I’m sharing here, perhaps there is a need for us to consider highlighting them or reflecting them in the WSIS+20 review process.
speaker
Cynthia Lesufi
reason
This comment connected the discussion to broader internet governance processes, suggesting a concrete way to move the conversation forward.
impact
It provided a practical next step for the IGF community to address the issues discussed, potentially influencing future policy discussions.
Overall Assessment
These key comments shaped the discussion by grounding it in real-world examples, highlighting the complexity of the issue across different contexts (conflict zones, small island states), and emphasizing the need for multi-stakeholder approaches. They also shifted the conversation from describing problems to proposing solutions, including strengthening existing norms, creating social and economic pressures against internet shutdowns, and leveraging upcoming policy processes to address these issues. The discussion evolved from a focus on technical and legal aspects to include broader considerations of human rights, economic impacts, and the role of various stakeholders in ensuring internet access and protecting infrastructure.
Follow-up Questions
How can we ensure content policies are properly policed or managed across multiple languages and platforms in crisis situations?
speaker
Mike Walton (UN Refugee Agency)
explanation
This is important to address the challenges of content moderation in multilingual crisis contexts while balancing human and AI roles.
How can we address the issue of platforms deplatforming or restricting access to users from certain countries, especially those from the Global South?
speaker
Nadim Nashif
explanation
This is crucial to ensure equitable access to digital platforms and services globally, beyond just physical infrastructure.
What role can the technical community play in protecting internet access and infrastructure during conflicts and crises?
speaker
Anriette Esterhuysen
explanation
Understanding the technical community’s role is important for developing comprehensive strategies to maintain internet access in crisis situations.
How can we implement a UN-operated fleet of cable-laying ships to assist developing countries in maintaining and repairing submarine cables, especially after climate crises?
speaker
Online participant (via Peter Micek)
explanation
This proposal addresses the need for faster internet service restoration in developing countries affected by climate events.
How can we develop eSIM infrastructure and community networks that exist on top of a decentralized power grid?
speaker
Online participant (via Peter Micek)
explanation
This technical solution could provide resilient internet access in areas with unreliable power infrastructure.
How can the IGF community incorporate discussions on protecting internet access during conflicts into the WSIS+20 review process?
speaker
Cynthia Lesufi
explanation
This would help highlight and address challenges related to internet access in conflict zones at a high-level policy forum.
How can we better coordinate humanitarian aid agencies to provide populations they serve with access to the internet or secure and open communications tools?
speaker
Peter Micek
explanation
This is crucial for ensuring effective aid delivery and communication in crisis situations.
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Open Forum #38 Harnessing AI innovation while respecting privacy rights
Session at a Glance
Summary
This panel discussion focused on the intersection of AI innovation and privacy protection, exploring challenges and potential solutions in AI governance. Experts from various fields, including government, academia, and regulatory bodies, shared insights on balancing technological advancement with privacy rights.
The discussion highlighted the OECD’s recent work in updating AI principles and establishing a partnership with the Global Partnership on AI. Panelists emphasized the importance of a comprehensive approach to AI governance, considering privacy alongside other values such as fairness, transparency, and human agency. They noted the challenges in balancing these sometimes conflicting priorities, particularly when dealing with human rights that cannot be traded off.
Privacy concerns were examined across the AI lifecycle, from data collection to model deployment and retirement. The experts stressed the need for age-appropriate design in AI systems, especially concerning children’s data protection. The conversation also touched on the convergence of AI with other technologies like blockchain and neurotechnology, highlighting the complexity of privacy protection in a rapidly evolving technological landscape.
Panelists discussed the role of data protection authorities in developing practical approaches to safeguard privacy while fostering innovation. They emphasized the importance of global governance frameworks and the need to translate principles into enforceable actions. The discussion concluded with calls for strengthened legal frameworks, increased transparency, and greater involvement of civil society in AI and privacy-related policymaking.
Overall, the panel underscored the critical nature of privacy protection in AI development and deployment, advocating for a balanced approach that considers both innovation and human rights.
Keypoints
Major discussion points:
– The intersection of AI and privacy, including challenges and risks
– The need for global governance frameworks and cooperation on AI and privacy
– The AI lifecycle and how privacy considerations apply at each stage
– The role of data protection authorities in regulating AI and privacy
– Balancing innovation with privacy protection in AI development
Overall purpose:
The goal of this discussion was to explore the complex relationship between AI and privacy, examining key challenges, policy approaches, and potential solutions for protecting privacy rights while fostering responsible AI innovation. The panel aimed to bring together diverse perspectives from government, academia, technical experts, and regulators to have a comprehensive dialogue on this important issue.
Tone:
The overall tone was informative and collaborative. Speakers shared insights from their respective areas of expertise in a constructive manner. There was a sense of urgency about addressing privacy challenges, but also optimism about finding solutions through cooperation. The tone became slightly more impassioned toward the end as audience members raised additional concerns, but remained respectful and solution-oriented throughout.
Speakers
– Lucia Russo: Moderator
– Juraj Čorba: Senior expert for digital regulation and governance from the Slovak Ministry of Informatization; Chair of the OECD Working Party on AI Governance; Chair of the Global Partnership on AI
– Clara Neppel: Senior director at IEEE; Co-chair of the OECD expert group on AI data and privacy
– Thiago Guimarães Moraes: Specialist on AI governance and data protection at the Brazilian Data Protection Authority
– Jimena Viveros: Member of the UN Secretary General’s High-Level Advisory Body on AI, Managing Director and CEO of IQuilibriumAI
Full session report
AI Innovation and Privacy Protection: Challenges and Solutions in Governance
This panel discussion brought together experts from government, academia, and regulatory bodies to explore the complex intersection of AI innovation and privacy protection. The conversation highlighted key challenges in AI governance and potential solutions for safeguarding privacy rights while fostering responsible technological advancement.
Key Themes and Challenges
1. Privacy Concerns in Advanced AI Systems
The panelists unanimously agreed that advanced AI systems pose significant privacy challenges due to their extensive data requirements. Juraj Čorba, representing the Slovak Ministry of Informatisation and the OECD, emphasized that AI’s dependence on data inherently creates privacy issues. Clara Neppel from IEEE noted that generative AI exacerbates these concerns through vast data collection and potential re-identification of individuals.
Specific examples of privacy challenges included:
– The potential for AI systems to infer sensitive information from seemingly innocuous data
– Risks of re-identification in anonymized datasets
– Challenges in obtaining meaningful consent for data use in complex AI systems
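The re-identification risk listed above can be made concrete with the standard notion of k-anonymity: the smallest number of records sharing any one combination of quasi-identifier values. A minimal sketch follows; the dataset, field names, and values are purely hypothetical illustrations, not material from the session:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest group size over all combinations of quasi-identifier values.
    k == 1 means at least one record is uniquely re-identifiable."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# Hypothetical "anonymized" health records: names are removed, but the
# combination (age, zip, sex) still singles out the third person.
records = [
    {"age": 34, "zip": "10115", "sex": "F", "diagnosis": "flu"},
    {"age": 34, "zip": "10115", "sex": "F", "diagnosis": "asthma"},
    {"age": 51, "zip": "10117", "sex": "M", "diagnosis": "diabetes"},
]
k = k_anonymity(records, ["age", "zip", "sex"])  # 1: a unique, re-identifiable record
```

This is why simply deleting names is not anonymization: linking quasi-identifiers against a public register can recover identities, which motivates the stronger techniques discussed later in the session.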
Thiago Guimarães Moraes, from the Brazilian Data Protection Authority, highlighted the complex trade-offs between privacy, fairness, and utility in AI systems. Jimena Viveros, a member of the UN Secretary General’s High-Level Advisory Body on AI, expanded on this, noting that AI data collection and use can have far-reaching effects on democratic institutions and geopolitics.
2. Global Governance and Regulatory Frameworks
There was strong consensus on the need for global governance frameworks and harmonized regulations to address the transboundary nature of data and AI-related privacy challenges. Čorba mentioned that the OECD has updated its AI principles and definition to reflect technological developments and privacy concerns. He also highlighted the relevance of the UN Digital Compact in relation to AI governance.
Viveros advocated for UN recommendations aimed at creating a global AI data framework to protect human rights. She also proposed recognizing data as a “digital public good,” sparking discussion about new approaches to data governance in the AI era.
Moraes highlighted the role of data protection authorities in developing guidance and regulatory sandboxes to address AI privacy issues. He emphasized their work in:
– Providing technical assistance to organizations implementing AI
– Developing guidelines for privacy-enhancing technologies
– Collaborating with other regulatory bodies to address cross-cutting issues
3. Balancing Innovation and Privacy Protection
A key point of discussion was the challenge of balancing AI innovation with privacy protection. Neppel stressed the importance of weighing the economic benefits of AI against privacy risks. She introduced the concept of the AI lifecycle and its implications for privacy, noting that privacy considerations must be integrated at every stage of AI development and deployment.
Moraes emphasized the need for privacy-enhancing technologies and techniques like differential privacy. However, he argued that from a human rights perspective, privacy and other fundamental rights cannot be compromised or traded off, stating, “Human rights cannot be traded off. And that’s here one of the main challenges. We are talking about trade-off of values in a technical level that they cannot mean undermining of human rights.”
4. Intersections with Other Technologies
The discussion highlighted the importance of considering AI privacy issues within the broader context of emerging technologies. Čorba noted that the convergence of AI with technologies like blockchain and neurotechnology creates new privacy challenges. He stressed the need to consider the full “digital stack” when addressing AI and privacy governance.
An audience member raised the specific issue of blockchain’s immutability and its implications for data privacy. The panelists acknowledged the challenges this poses, particularly concerning data deletion rights and the right to be forgotten.
Key Solutions and Recommendations
1. Age-Appropriate Design and Children’s Data Protection
Neppel emphasized the crucial importance of age-appropriate design in AI systems, particularly concerning children’s data protection. She highlighted the need for special safeguards and considerations when AI systems interact with or process data from minors.
2. Privacy-Enhancing Technologies and Techniques
The panelists discussed various technical approaches to enhancing privacy in AI systems. Differential privacy was highlighted as a potential technique to balance data utility with privacy protection. Moraes stressed the importance of these technologies in practical implementation of privacy principles.
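Differential privacy, mentioned above, has a concrete mechanism at its core: answer aggregate queries with calibrated random noise so that the presence or absence of any single record barely changes the output distribution. A minimal sketch of the Laplace mechanism for a counting query (illustrative only, not an implementation discussed at the session):

```python
import math
import random

def laplace_noise(scale):
    """Sample from a Laplace(0, scale) distribution via inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon):
    """Epsilon-differentially-private count of records matching predicate.
    Adding or removing one record changes a count by at most 1 (sensitivity 1),
    so Laplace noise with scale 1/epsilon gives epsilon-DP."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical query: how many people in the dataset are 40 or older?
ages = [23, 35, 41, 29, 52, 64, 19, 47]
# Smaller epsilon -> stronger privacy -> noisier answer.
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
```

Real deployments would rely on vetted libraries such as OpenDP rather than hand-rolled noise, since correct sensitivity analysis and budget accounting are easy to get wrong.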
3. Global Cooperation and Harmonized Regulations
There was strong agreement on the need for international cooperation in developing AI governance frameworks. The speakers advocated for harmonized regulations and the adoption of international AI governance standards at national levels. Čorba mentioned the OECD’s expert group on AI, data, and privacy as an example of ongoing international efforts.
4. Strengthening Legal Frameworks
The discussion concluded with calls for strengthened legal frameworks to ensure effective privacy protection in the age of AI. This includes updating legislation to keep pace with technological advancements and raising public awareness about AI and privacy issues.
Thought-Provoking Insights
Jimena Viveros provided a particularly impactful perspective, stating, “AI is data, so we cannot have AI without data. And data comes with privacy issues, that’s just a problem.” This succinctly captured the fundamental tension at the heart of the discussion.
Thiago Guimarães Moraes emphasized the non-negotiable nature of human rights in AI development, highlighting the challenge of balancing technical trade-offs without compromising fundamental rights.
Conclusion
The panel discussion underscored the critical importance of addressing privacy concerns in AI development and deployment. While there was broad agreement on the challenges and the need for global cooperation, the conversation revealed the complexity of balancing innovation, economic benefits, and fundamental rights protection.
Key takeaways included:
– The need for privacy considerations throughout the AI lifecycle
– The importance of international collaboration in developing governance frameworks
– The role of data protection authorities in guiding responsible AI implementation
– The potential of privacy-enhancing technologies in addressing AI privacy challenges
As AI continues to advance, ongoing dialogue and collaborative efforts will be crucial in developing effective governance frameworks that safeguard privacy while fostering responsible technological progress. The discussion highlighted that while technical solutions are important, they must be underpinned by strong legal frameworks and a commitment to protecting fundamental human rights in the digital age.
Session Transcript
Lucia Russo: organized by the OECD on how to harness AI innovation while protecting privacy rights and exactly this is the very focus of this panel today and it’s a concern that has been heightened by recent developments in the technology and the OECD recommendation in its revision earlier this year has evolved to reflect the evolving technological landscape and increased challenges raised by advanced AI systems, including privacy rights. So in our discussions today we would like to navigate these three main aspects: the privacy challenges in the advanced AI systems, the policy landscape for AI governance in relation with privacy, and how to develop practical forward-looking solutions. I am joined for this discussion today by an exceptional panel of experts who bring diverse perspectives on AI governance spanning from government policy, technical community, academia and regulators, and so I would like to welcome today Juraj Čorba, senior expert for digital regulation and governance from the Slovak Ministry of Informatization, and Juraj is the chair of the OECD Working Party on AI Governance and chair of the Global Partnership on AI. We have Clara Neppel, senior director at IEEE and co-chair of the OECD expert group on AI data and privacy, and Thiago Guimarães Moraes, specialist on AI governance and data protection at the Brazilian Data Protection Authority. We will also have Jimena Viveros joining us I believe a little later, and she’s a member of the UN secretary general’s high-level advisory body on AI. So the way this panel will unfold will be to have our speakers bring their perspectives around this topic and then we will also have time for a discussion with the audience both here and then online; we are monitoring the chat so we will give voice to those who have questions online. So I will now start with Juraj and there should be some slides on the screen. 
So Juraj, as the chair of the working party on AI governance you played a key role in guiding the discussions that have led to the revision of the OECD recommendation on AI. Could you tell us about the motivations behind updating the OECD recommendation, and also what were the primary concerns raised by advanced AI systems and how these affected the revision?
Juraj Čorba: One, two, three, do you hear me please? If you could change please my machine I’m afraid it’s not properly. Mike, sounds like Mike, right thanks. One, two, three. Oh, this is better now, I hope, or not really. Is it better? One, two, three. But anyway, at least you hear me. So first of all, I would like to thank the organizers for providing again an opportunity for the international organizations to share the latest results of their work, including the OECD. We are happy to be here. This has been an outstanding year for us at the OECD, for us who work in the AI agenda, for multiple reasons. One of the reasons is the fact that we have created a so-called integrated partnership with the Global Partnership on AI. So the family of countries that cooperate and share knowledge together, and not only knowledge, but hopefully also solutions. The family is expanding, so now we are covering 44 different jurisdictions from all around the world. I was trying to calculate actually what proportion of the world population we cover in the Global Partnership on AI now, and it’s 40% of the world population. So it’s really a significant club. Now, notwithstanding the enlargement and possible further enlargement in 2025, we managed, as was already mentioned by Lucia, to update the first ever intergovernmental document on AI, which was adopted in 2019 by the OECD, the so-called OECD AI principles, which were then later incorporated into G20 AI principles, into the first international convention on AI at the Council of Europe, with participation of non-European countries, and to some extent also into the AI Act of the European Union, with which some of you may be familiar with. So there are some successes that we really can look back at, and I must say I’m proud for the whole group that we managed. 
Now, when it comes to the reasons why we had to update the OECD AI principles in 2024, it was primarily for reasons of clarity, for reasons of reflecting on the latest technological development, and of course we had to take account of many different interests that have been raised. As you know, the OECD works, and now also the Global Partnership on AI, after the integration, we all work on a consensus basis. So in order to be able to actually come to any modifications, any updates, we had to listen to basically hundreds of people, not only people acting on behalf of the governments, but also people involved in the expert groups. You will learn more from Clara on the go. So this was a very interesting exercise, but surprisingly enough, we managed to have this revision updated in May by the ministers in Paris. Now, one of the key milestones that I would like to convey to you, on the basis of the work that we did, is actually the definition of the Artificial Intelligence as such. So when we discuss the impact of Artificial Intelligence on privacy or personal data, we really need to make sure that we discuss the same thing. In other words, what is actually the Artificial Intelligence when we talk about it? How we can recognize, or can we actually recognize and make a clear difference between AI and what we would call classical software systems? Now, you can judge our work. If you go to the OECD website, you will find an explanatory memorandum on the updated AI definition there. You will see how we actually arrived at the final solution. I recommend you to read this. And of course, there it is clear from the definition as such that any AI is highly dependent on data, on its quality, and of course, there is a clear bridge to the privacy concerns. The last thing in relation to the AI definition I would like to mention is, of course, the fact that the definition is imperfect by definition. In other words, it’s a work in progress. It will be reviewed again. 
And we also need to understand that making a clear line between software as we know it, or as we knew it, and the new elements that we call Artificial Intelligence is not necessarily as clear-cut as we would wish. We should rather see it as a scale, because also, of course, the systems that we call AI, they are also dependent and interact with classical software as well. So, it’s very delicate. Now, with the privacy, of course, we need to realize that, as I mentioned, AI is hungry for data. It needs data to be actually built and to work properly. The thing is that, of course, any restrictions on the use of data can be detrimental for building of AI models. At the same time, to complete the triangle, it’s not only about building of models and systems, but it’s of course also about the way security environments access information about us and evaluate possible threats and risks. So any limitations there, of course, interact also with this field, which is not always discussed, but we need to be aware of this. So it’s a delicate balance we need to draw between the protection of privacy on one hand, and security needs and the needs of building up of AI models and systems on the other hand. There are three principles in our OECD AI principles, which are foundational also now for the whole global partnership on AI community. And these three principles, they explicitly mention the need to protect privacy. But of course, we recognize that even inside this broad family of countries and jurisdictions, the approaches to privacy vary. And they are, of course, also contingent on certain cultural notions, on political approaches. So many issues are in place there. With that, I would like to commend the work of the expert groups. We have multiple groups comprised of experts feeding into the work of our bodies at the Global Partnership on AI and at the OECD. So this is a treasure, a big asset that we can build upon. You are all welcomed to find out more about the way we work. 
And of course, the more we can engage with you in a meaningful way, the more knowledge and the more understanding we can build. And last but not least, I would like to also commend the work of the UN Advisory Board on AI, of which Ximena is a distinguished member, for Mexico. If you look at the UN Advisory Board report that was published in September, and if you look at the UN Digital Compact that was adopted in New York City also in December, there you will find that basically when it comes to the first pillar of the UN Digital Compact, which is to create knowledge and understanding of the AI systems and the impacts on economy, society, etc., it is actually the OECD and the Global Partnership on AI that is relied on to feed into this first pillar of the UN Digital Compact to provide the necessary knowledge to share it with the global community. So besides the opportunity there at the OECD and Global Partnership on AI to engage with all of you, we can certainly then engage also at the global level together. With that, Lucia, thank you very much again for having me here today.
Lucia Russo: Thank you, Juraj, for providing this overview also of the most recent work of the OECD and what we have been engaged in during this past very busy year. So now I would like to welcome and turn to Jimena. Jimena, you are an international lawyer and scholar and advisor on AI and peace and security. You also lead a consultancy firm, iEquilibrium AI, which is specialized on AI and peace and security. And as we heard, you served as a member of the United Nations Secretary General High-Level Advisory Body on AI. What we would like to hear from you is if you could unpack the social risks that you have identified with the intersection of AI and privacy and perhaps also comment on how proposed UN recommendations aim to create a more robust global framework for responsible AI deployment. Thank you.
Jimena Viveros: Hello. I don’t know if anyone can hear me. Yes? Okay, great. So it is great to be here, sorry for the delay. So thank you for the introduction. And I would also like to start commending the work of the OECD and the new partnership with GPAI, which I think is going to be very fruitful and going to be very good for advancing global governance and recommendations in this space. So I’m happy to be an expert in different of the working groups. I look forward to contributing on that. So as Juraj was saying, AI is data, so we cannot have AI without data. And data comes with privacy issues, that’s just a problem. So when we look at it from the perspective of peace and security at large, it brings a lot of problems. Because if we look at it even from, say, the civilian domain, we live in a society where everything we consume, it consumes back our data. Whether we willingly accept it or, you know, just because there’s no other choice. So all of that data gathering by all of these platforms is then fed into systems, which could be civilian, which could be military, which could be of some security organizations, intelligence organizations, and we don’t know what the purpose of it will be at the end. So we see this problem also in terms of all of the decision support systems. And for, say, autonomous weapons and other types of security implications that come along with the systems that work in this space. So we have a lot of complications regarding that. What we also find is now the big hype with generative AI and all of the breaches that come in that space, which we are all very familiar with. Which is all just exacerbated by the different jurisdictions and approaches that are being used universally. So what we’re witnessing is just a patchwork of initiatives. So that’s why we should really strive towards global governance. 
And the work that we did at the advisory body of the Secretary-General leading to the Summit of the Future, and what became part of the Global Digital Compact and the Pact for the Future, included this, because we mentioned the security problems that come with all of these data breaches, hacking, misuse of information, malicious or unintended uses, both in the civilian and the military domain, which affect the broader international stability frameworks. In the report, we highlighted that even beyond the implications of data and privacy security problems at the individual or community level, there is also a very large-scale impact on society. We say in the report that it could even affect democratic institutions as a whole, in terms of misinformation and the erroneous use of data, which can also affect geopolitics and the economy in different parts of the world, in different regions, as we have seen already. Another problem that we have with data and with privacy in terms of security is the fact that we are now shifting the power dynamics of the world in terms of technological dependency. It's not about who has the best systems, it's about who has the best data or who has more data. And that is something that has been accumulated even years before AI was booming like it is now. So we have a problem. We also have a problem in the lack of data; it's a risk in itself, because misrepresentation, bias, all of these things are a clear problem in terms of data. And this also affects the privacy of children. That's a big risk that we have identified, along with everything regarding future generations. So now the question is what we can do about this. First of all, we should really recognize data as a digital public good. This is something that is also stated in the Global Digital Compact and that has been quite high on the agenda of the Secretary-General, all these common digital goods. Data is one of them.
And what we could do is create a global AI data framework to protect all kinds of human rights that can be affected by the use of data, obviously including privacy issues. The GDC also offers some solutions, for example, awareness raising, capacity building, and controlled cross-border data flows to foster a responsible, equitable, and interoperable framework that maximizes the benefits of data while minimizing the risks to data security and privacy. Because, as I said, the lack of data is also a risk in itself. That's why the work that the OECD and GPAI have been doing in this respect is so important, because it's precisely that: awareness raising, capacity building, and just bringing experts together to come up with solutions. The risks and the problems we have identified many times; the thing is how to do it and how to come up with actionable recommendations, because this is vital. The OECD recommendations that were revised this year, with all of the human-centered AI issues, are vital, and I recommend whoever hasn't read them to do so, because it's really important material that you can find there. And obviously cooperation and synergies across organizations, across jurisdictions, across communities, across everything is vital, because everything is complementary and everything helps. So with that I will close. Thank you.
Lucia Russo: Okay, thank you so much, Jimena, for this really great work that you have been doing, outlining the key risks and also some policy solutions already. So now I will move to Clara. As we heard from Juraj, the OECD has also established an expert group looking particularly at the interrelations between AI, data and privacy, and you are co-chairing that expert group. So what we would like to hear from you is what motivations led to the establishment of this group, but also what methodological approach you are using to comprehensively assess privacy risks across the AI life cycle, and lastly, if you could please share with the audience the key findings that have emerged from the first report that was published with the support of the expert group.
Clara Neppel: Thank you for inviting me here as well, and I'm very pleased to share our experience with this cross-section you just mentioned, the collaboration between different communities. As mentioned by both of my co-panelists, we had privacy issues with AI even before generative AI, but this has been exacerbated by the vast collection of data across geographies, and also the possibility to re-identify individuals and even to identify characteristics which were not disclosed in the first place. Very often you are surprised by what the system knows about you, which can be accurate or not: if it's accurate, you're in a kind of Orwellian space, and if it's not accurate, you're in a kind of Kafka space. Luckily we now know that at least generative AI is not always to be relied on, so maybe that's the positive effect of the vast adoption of AI. Within the OECD, which has been so active in AI governance, as mentioned by Juraj, there are already a lot of expert groups; I'm part of the AI and climate expert group as well as the AI futures expert group, and I'm now co-chairing this expert group on AI, data governance and privacy. You asked me about the motivation for creating it. In the AI community you will find a lot of technologists, and of course also civil society and so on, who are looking at the different aspects of AI and starting to realize and establish governance frameworks for these different aspects. In the data privacy community we already have established frameworks, we have jurisdictions, we know how to enforce, and we have institutions and, of course, methodologies. What we saw in the AI space is that there is a lot of innovation, as you just mentioned, also addressing privacy, but without knowing that there is already a lot of work going on in the other community, and the other way around.
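The re-identification risk Clara describes is often quantified in the privacy community with k-anonymity over quasi-identifiers. A minimal illustrative sketch (the records and field names are made up, not from the panel): a dataset is k-anonymous if every combination of quasi-identifier values is shared by at least k records; k = 1 means someone is uniquely identifiable.

```python
from collections import Counter

def k_anonymity(rows, quasi_identifiers):
    """Return the smallest group size over the quasi-identifier combination."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    return min(groups.values())

# Hypothetical "anonymized" health records: no names, but zip + age
# band together can still single a person out.
records = [
    {"zip": "1010", "age": "30-39", "diagnosis": "flu"},
    {"zip": "1010", "age": "30-39", "diagnosis": "cold"},
    {"zip": "1020", "age": "40-49", "diagnosis": "flu"},
]

# The third record is unique on (zip, age): k = 1, i.e. re-identifiable.
print(k_anonymity(records, ["zip", "age"]))  # 1
```

A k of 1 on supposedly de-identified data is exactly the situation where linking in an outside dataset recovers the individual.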
So this was, I think, the main motivation to bring these two communities together and establish this working group, and indeed its first deliverable is the report that was published in June. One of the deliverables was to map the AI principles to the privacy principles, and as you can see here, it's a lot. I will just go into some which I think are specifically relevant. Principle one is about inclusive growth, sustainable development and well-being, and here it's something very close to my heart, namely weighing the economic and social benefits of AI against risks to privacy rights. For me this translates into having the right balance between the metrics of success: not only concentrating on profit and performance, but also on planet and people, and I think that has a lot to do with what we just heard before, privacy being one of the important aspects here. The second is about respecting the rule of law, human rights and democratic values, and here it's also interesting to learn from each other's terminology. We both have established definitions of what transparency means, but they are not exactly the same, and the same goes for fairness in the AI space. In the AI space, transparency relates more to how the system is set up and to understanding what the outcome is; in the privacy space, it's more about data collection and the intended use. So we needed to map the different definitions so that we have the same language. Here I also see the human rights impact assessment: I just had a session yesterday about HUDERIA, the human rights impact assessment framework set up by the Council of Europe, which also needs to be harmonized with data protection requirements. So I already talked about transparency; I think robustness and security are something that Jimena also alluded to.
Here it’s also coordinating or data security technologies, privacy and enhancing technologies being for instance one of the most important ones. And last but not least it’s about also accountability and here I think that’s what we bring, let’s say I’m a technologist myself, what we bring to the data privacy community is the understanding of the technical aspects. So specifically to the AI lifecycle and where in the AI lifecycle privacy can play an important role. Also beyond data collection because also at the inference space and also other phases privacy is important. So yes the next one. So this is basically the AI lifecycle which is now the basis for further developing privacy related recommendations but also others. This is, as you can see, it starts from planning and design and what is new now, this was also revised, is that we have a new phase of retire and decommissioning. So it goes through collection and processing of data, building of models, testing, make it available for use and deployment operation and monitoring. And you can see here, so basically what we want to do now as a next step of our working group is to go to every phase and see which recommendations, policy recommendations we have for these phases. Especially when it comes to collection and processing of data we have to see what does it mean, you know, the limitation of AI when it comes to data collection, what does it mean if we are looking at a large language model, data scraping for the web, what are the privacy implications to that, which are of course a lot. What is the role of synthetic data? A lot of large language models are now fed by synthetic data which is also generated by models itself. So here I think it’s an important evolution that we also need to take into account. And of course data quality which as was mentioned before is important for accuracy but also for discrimination and bias. 
And going further, as you can see here, it will also be important to see what it means to have a right to be forgotten in AI systems, and what kind of oversight, accountability and transparency measures we can put in place. For the moment we have data cards, but we should work towards having more than that for transparency. And, well, this is basically a work in progress. As I said, we want to go into each of these phases, and we also welcome inputs. Thank you.
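The phase-by-phase review Clara describes can be read as a checklist. A minimal sketch of that idea: the phase names follow the revised OECD AI system lifecycle she lists, while the example question attached to each phase is illustrative (drawn from the points raised in the session), not official OECD guidance.

```python
# Phases per the revised OECD AI system lifecycle described above;
# the questions are illustrative examples, not official recommendations.
LIFECYCLE_PRIVACY_CHECKS = {
    "planning_and_design": "Is a privacy / human-rights impact assessment planned?",
    "data_collection_and_processing": "Is collection limited? Is web scraping lawful?",
    "model_building": "Are privacy-enhancing technologies (e.g. differential privacy) applied?",
    "testing_and_validation": "Can training data be extracted or re-identified from the model?",
    "deployment": "Are inference-time inputs minimized and protected?",
    "operation_and_monitoring": "Can erasure ('right to be forgotten') requests be honored?",
    "retirement_and_decommissioning": "Are models and data securely deleted at end of life?",
}

for phase, question in LIFECYCLE_PRIVACY_CHECKS.items():
    print(f"{phase}: {question}")
```

The point of the structure is that privacy questions attach to every phase, not just data collection, which is the working group's stated direction.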
Lucia Russo: Thank you, Clara, for this overview. This is really instructive, especially for those of us who are not privacy experts as you are, and it's good to see how privacy affects each stage of the life cycle of an AI system. So I will now turn to Tiago, who is a specialist and also brings the perspective of the Brazilian Data Protection Authority. What I would like to ask you is: what are the most critical privacy challenges that you are observing in the context of advanced AI systems? And on the practical side, how are data protection authorities developing practical approaches and solutions to protect privacy rights while fostering innovation? I'll come with the mic.
Thiago Guimarães Moraes: Okay, well, first of all, thanks a lot, Lucia, not only for the invitation to be here, but also for the invitation to be part of this community, the Group of Experts on AI, Data and Privacy, which I've been following since the beginning of the year, basically since its inauguration, right? It has been amazing to be part of this community and to see the great work that has been done, which you just very accurately highlighted today. And I could start from here. Many of the topics just highlighted by Clara are part of the day-to-day critical thinking that regulators such as data protection authorities have been struggling with. What I would like to share, starting from this challenges perspective, is that as the privacy community starts to understand what AI governance and AI regulation mean from a privacy and data protection standpoint, you have to see how all these other values come in. That's why I put that circle there, where we have privacy, fairness, cybersecurity, transparency, human agency, and I know there are others, but these are some of the main values that we see in several frameworks. When you look at it on a more technical level, you see that the technical community is always thinking about trade-offs, which does make sense from a technical perspective, because what you are trying to do as a technician is to create parameters and see how much you can achieve of each of these values. But at the same time, as anyone who works in policymaking knows, especially from a legal approach, human rights cannot be traded off. And that is one of the main challenges here: trade-offs between values at a technical level cannot mean the undermining of human rights.
So this for me is the biggest challenge, and not only for regulators from the privacy field, but for any other. For sure, since data protection authorities have been working on managing the rights to privacy and data protection, this is our day-to-day: looking at how these measures are coming along and how they balance the human rights we should be concerned about. And just to give an idea, the image in the middle shows the fairly common-sense notion that when we are talking about one of the main de-identification features, the idea of anonymization, we have this privacy-utility trade-off, right? This is, of course, just illustrative. I'm showing this arc because we shared work before where we show that what we might be looking for here is the optimal point where you can still assure some level of privacy while guaranteeing utility for the system. But when we go to real use cases, things are not so simple, especially when we are considering other values. Just consider privacy and fairness, for example: fairness itself is already challenging to define on a technical level, and there are several parameters that try to guarantee some aspects of what fairness could mean technically, like ideas of group fairness and parameters that try to translate what we should expect from, for example, statistical parity. But when you add privacy issues to that, for example how to bring in privacy-enhancing techniques, it gets even more challenging.
And I'm just sharing here in the last part some of the work that we found. This is not the ANPD's work, but work of the technical community, the privacy community, which has been trying to find the adequate balance: how, for example, you embed differential privacy, a technique the privacy community knows well, and use it in a way that still ensures a good fairness level according to these fairness parameters. What was particularly interesting in this research, and that's why I'm sharing it here, is that they found that for federated learning models, which are models trained at the local level whose parameters are then aggregated into the main AI model, you can apply differential privacy to the local parameters to ensure, first, better privacy protection, because if you apply it only at the global level, you actually leave the local models unprotected from the privacy perspective. But another interesting finding is that you have to fine-tune the level of noise you are adding with differential privacy, because if you go too far, you don't only lose accuracy, but you bring in several other issues. Okay. Can you hear me? Half of the room can hear me. Yes? Okay, good. So let's turn to the last slide. Oh, it's me here. Good. This is, very generally speaking, what DPAs such as… Okay. I mean, at every Internet Governance Forum we should have tech issues, so we see how challenging this can be in practice, right? Ah, I see. Okay. Thanks for the tip.
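The local differential privacy idea Tiago describes can be sketched in a few lines. This is a minimal illustration, not the research he cites: each simulated client clips its model update and adds Gaussian noise locally, before the server aggregates, so raw per-client updates are never exposed; the `noise_scale` parameter is the tuning knob he mentions (all names here are made up for the example).

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_and_noise(update, clip_norm=1.0, noise_scale=0.5):
    """Clip a local model update to clip_norm, then add Gaussian noise
    proportional to noise_scale (local-DP sketch, not a calibrated mechanism)."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / norm)
    return clipped + rng.normal(0.0, noise_scale * clip_norm, size=update.shape)

# Three simulated clients' gradient updates.
local_updates = [rng.normal(size=4) for _ in range(3)]

# Noise is applied *locally*, before aggregation, so the server never
# sees an unprotected per-client update.
protected = [clip_and_noise(u) for u in local_updates]
global_update = np.mean(protected, axis=0)
```

Raising `noise_scale` strengthens privacy but degrades the aggregated update's accuracy, which is exactly the fine-tuning problem raised above; a production system would calibrate the noise to a formal epsilon budget rather than pick it by hand.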
So, well, the DPAs, the ANPD among them, have been working first on guidance, so we can share best practices on some specific topics. Just recently, the ANPD published work on how generative AI is bringing challenges for privacy, like what Clara just said: sometimes synthetic content is created that can infer personal data, accurately or inaccurately, and in both cases there are consequences. So we try to tackle part of this discussion, and we know some of our peers have been doing the same: in France, for example, the CNIL has been doing very interesting work on this, and the authority in Singapore is running a sandbox on privacy-enhancing technologies for generative AI. So the work is being done both at the theoretical level, with guidance, and more hands-on, with sandboxes. We, as the Brazilian Data Protection Authority, are starting a pilot sandbox next year on algorithmic transparency, so we can discuss this concept and what it means in the context of a data protection framework like ours, the Brazilian LGPD. Besides that, all the DPAs, I would say, have been asking themselves what their roles are now that AI regulations are coming up. Should we be the main central AI authority? And even if that's not the case, because sometimes this can be a very political discussion, what will our role be, and how can we ensure that our role is still guaranteed and protected, even in a more complex environment where we have to work together with other regulators that are also dealing with data-governance-related issues? So I think I'll stop here, but thanks again for the invitation.
Lucia Russo: Thank you, Tiago. It works? Okay. So I have some follow-up questions, but I would like this to be also a conversation with you. So, with the mic, maybe I give you this.
Audience: Thank you very much. You can hear okay? Thank you very much for these very interesting interventions. The issue… That's a really great example. Hold it like this. Okay. That's okay? Okay. Thank you. This is a really nice case study of what's happening across technologies, this issue of convergence. I'm with UNICEF. I led work on AI for the high-level advisory board and on how AI impacts children, and we're looking at how neurotechnology impacts children. And AI and neurotechnology have converged. So in these issues, privacy is the issue, but if you even look at the technologies, whose responsibility is it to set the governance rules? So I was really interested to hear about the working group. And my question, Clara, is: what's the end goal of this interesting and useful exercise? Because it sounds like there are governance recommendations within the AI space and within the privacy space, and you're mapping them, but what's the output? Is it a new merged set, or an update on both sides, or do we update the principles from time to time, which is necessary? UNICEF also has recommendations for AI and children, and we've been reflecting: they came out in 2021, the world has changed, and it's time to refresh them. The principles stay, but how you apply them changes. So yes, where do we go from here? Thank you.
Clara Neppel: Thank you, and thank you for bringing up the issue of children. I actually also wanted to bring it up, because I think it's a big issue, not only for privacy but also for the mental health of our future generations. We have different ways to tackle this. Just to give you an example on age-appropriate design specifically, because that is something I think we need to take into account in AI system design: at IEEE we are working, for instance, with the 5Rights Foundation to set up, I hope, a universal standard for how to collect children's data. Okay? If you can hear me. I think that is one practical example of what we can do for the moment on a voluntary basis, though in certain jurisdictions, like the UK I believe, it is already obligatory. This is also what we want to do in the working group: first of all, understand what the issues are and whether we already have solutions that we can leverage from each other, and then identify the gaps. Some of the outcomes will very certainly be policy recommendations, but we also very clearly want to target developers, for instance when it comes to scraping data, so that they understand what the legal implications are, because a lot of them don't. So it's both sides.
Lucia Russo: Is there any other question? Yes.
Audience: Hello. Thank you so much for the presentation. I'm from Nanting Youth Development Service Centre, and in my field of study there's this technology called blockchain, specifically for data: by storing data across multiple different nodes, any slight change to the data can be tracked and detected. This very much protects the transparency of data, but the technique itself is at the centre of the debate on privacy. So, like you said, it's a trade-off. I want to know how you think about this technology and how we can actually find that balance.
Thiago Guimarães Moraes: Okay. Okay, yes. So, actually… Does it work? Okay. So, yeah, thanks for the question, because it's actually very important, and I can give an idea from the DPA side. I shared the experience of the Brazilian Data Protection Authority, but from what we have heard from other peers, similar approaches are being taken. Most data protection authorities, DPAs as we call them, have units that monitor technological progress. I am part of one of these units in Brazil, and we know other institutions, like in the UK and in France, have something similar. The interesting thing about these technology-monitoring units is that they have to look not only at AI but at several other technologies, like blockchain. So blockchain, for example, is a topic that we also follow: part of our team is looking at specific privacy-related issues with blockchain technologies. One very big challenge when we're talking about privacy and blockchain is that usually, when you register information on the blockchain, it stays on the blockchain. And we do have a right to the elimination of personal data. So how can we honor that if personal data is embedded in the blockchain? What I can say is that this is part of the discussion we're having. It's very challenging to provide a solution, because we have to be very sure about what we are proposing at a policy level. As far as I know, this is a topic the privacy regulators and the privacy community have been discussing, but I am not aware of a very strong argument for how it should work. And I believe that to reach this answer we need to engage better with the technical community working on this. We've seen this work happening at the AI governance level.
I can say this work of the OECD is a big example, and I think we should have more of the same in the blockchain discussion, because eventually we will see these two emerging technologies coming together more and more as time passes. So thanks for bringing blockchain into the discussion.
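One pattern often discussed in the technical community for reconciling blockchain immutability with erasure rights, though not endorsed by any regulator in this session, is to keep personal data off-chain and record only a salted hash on-chain. A minimal sketch under that assumption (all names here are hypothetical): deleting the off-chain record and its salt leaves the chain intact but makes the on-chain commitment practically unlinkable to the person.

```python
import hashlib
import secrets

off_chain = {}  # mutable store holding the actual personal data
chain = []      # append-only ledger holding only salted commitments

def register(record_id, personal_data):
    """Store data off-chain; append only a salted SHA-256 commitment on-chain."""
    salt = secrets.token_hex(16)
    off_chain[record_id] = {"data": personal_data, "salt": salt}
    digest = hashlib.sha256((salt + personal_data).encode()).hexdigest()
    chain.append({"id": record_id, "commitment": digest})

def erase(record_id):
    """Honor an erasure request: the data and salt are deleted, while the
    on-chain commitment remains but can no longer be linked to the data."""
    off_chain.pop(record_id, None)

register("user-42", "alice@example.com")
erase("user-42")
```

Whether a leftover commitment still counts as personal data under a given law is exactly the open policy question Tiago describes; this pattern is an engineering mitigation, not a settled legal answer.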
Juraj Čorba: If I may, I'll just briefly intervene. One, two, three. Do you hear me? Okay. So, on the topic of converging technologies, like blockchain and others, it is very important to realize that when we talk about privacy and AI, we cannot really discuss only this; you need a picture of the whole digital stack. In other words, we can hardly talk about governance of privacy in AI without fully understanding the implications of digital platforms for privacy, and the way platforms are driven by AI, or enable AI via the collection of data about their users. The same applies to the Internet of Things, because data taken from IoT sensors will feed into AI systems. The same applies to digital finance, and possibly now also to new efforts in the field of biology, which can be even more delicate when it comes to privacy and our biological predispositions and design. So blockchain is a very good example too. This point really leads us to the necessity of having a full picture of how these different digital spheres interact and how they are integrated into the most sophisticated services and products on the market, because the most successful ones manage to integrate all these environments together, and then the implications for privacy are even more immediate.
Jimena Viveros: Just to add to the conversation: if we're talking about human rights, and I think we should start from there, the right to privacy stems from the right to identity, which is also closely linked to the right to be forgotten. What we're witnessing now is an attempt to create or foster the protection of our personal digital identity, signature or print, and that is a new concept we hadn't thought about before. When this personal information, especially biometrics, genomics, what's happening with neurotechnologies, and all other types of personal information, is locked into something such as blockchain or any other technology or environment, it's complicated, especially because sometimes the capture of this information isn't necessarily consensual or well informed. This is a problem that has been happening for a long time, but the question now is what is being done with this information, whether it's locked in, whether it's just being captured, or whatever is happening. Because, again, coming back to the implications for peace and security, we can think of predictive policing in law enforcement, of border control with biometrics, which is pretty dangerous, and even of governmental services: access to healthcare, loans, financial services, housing, whatever. All of these things are being predetermined by the data that is stored and how it represents or misrepresents a person. So I think it's very important to remember that at the basis of privacy is identity, and that is the most precious thing we have, and that's why we should all strive to protect it.
Lucia Russo: Thank you so much to all the speakers. We have two questions here and one online. Okay, I'll take the one online first, but we have only two minutes to go, so please, quick reactions from the speakers. How do we deal with privacy by design given the changing state of AI? One quick reaction so that we can hear another question from the floor.
Thiago Guimarães Moraes: Well, this discussion is very welcome, because when we started discussing by-design processes, like privacy by design, we were asking how we go hands-on from here, right? Okay, we are building amazing policy frameworks, but how do these frameworks translate into concrete considerations? What I can say has proven to be a good experience on the part of the DPAs is using sandboxes, because in all the privacy sandboxes that have been organized, by the CNIL, the Norwegian DPA, the ICO, now Brazil, and Singapore, what we are trying to test with a particular technology, AI for example, but it can also be blockchain or just a data-sharing practice, is to come up in the end with good practices: it is, in effect, practical experimentation in privacy by design. So I will stop here, because I know we don't have much time.
Clara Neppel: I would just like to add one sentence here. Some of these issues are so important that they should be enforced: coming back to children, I think the collection of children's data should really be regulated, because it has enormous implications for them and for our society. For other issues, the context will be important; as was mentioned before, privacy depends on context. Some things need to be enforced by regulation; sometimes we need to take privacy into account for a specific use, without trade-offs hopefully, or with the optimal trade-off. Thank you.
Lucia Russo: Thank you. I think we are at time and five minutes. Thank you. Okay, so please.
Audience: Thank you very much. Martina Legal Malakova from Slovakia. I have a question for Jimena. Do you think lawyers today can protect human rights when, for the new emerging technologies, we often don't have laws, but only principles? Thank you.
Jimena Viveros: Yes, that's a problem indeed. These principles and guidelines are very useful as stepping stones, but they're not binding, and then we come to the problem of enforceability. So what we need is the adoption of these standards, protocols, guidelines, principles, however you want to call or frame them, getting them adopted at the national level and pushing for them to be consolidated into global governance. Because all of this is transboundary, we need a framework that is truly global. Coming back to what I said at the beginning, we just have this patchwork of initiatives, and even regional ones are not enough. We need something global, so that everyone is protected in the same way, because our information is everywhere. We need to convert principles into action, action that is enforceable, that can be monitored and verified, and that has proper oversight mechanisms. That's why I mentioned before that a centralized authority that conducts this oversight at the international level would be a good approach. But in the meantime, all we can do, and it is very valuable work, is these principles, these ethical values, always stemming from human rights, which already exist. The problem we're facing now is the reopening of even those basic human rights that have been there for the past 70 years: with the excuse of AI, everyone is opening up the box again and rethinking whether they are applicable. They are always applicable; we just need to find the way to integrate them into the reality we are living in. So the solution is to get governments to regulate in a harmonized way, and then make it a global governance regime. Thank you.
Lucia Russo: Okay, the very, very last question.
Audience: Thank you so much for your presentation. My name is Hasara Tebi. I'm from the Mawadda Association for Family Stability in Riyadh, Saudi Arabia. What I have is actually not a question; it's an input, some food for thought. The rapid advancement of AI technology has led to increased collection and processing of personal data, often without sufficient safeguards to protect privacy. Innovation in AI relies heavily on vast amounts of data, heightening the risk of privacy violations and the misuse of data in ways that can harm individuals. There is a growing concern that current legislation lags behind technological progress, creating gaps that allow the exploitation of personal information without explicit consent or comprehensive understanding by individuals. We call to strengthen legal frameworks: update and enhance legislation to ensure effective privacy protection in the age of AI; ensure transparency and accountability; require companies and organizations to clearly disclose how data is collected and used, while implementing robust accountability mechanisms for violations; and engage civil society, including civil society organizations and users, in the development of AI and privacy-related policies and regulations. We recommend the following: develop and utilize impact assessment tools to assess the impact of AI technologies on privacy before their implementation; raise awareness and provide training, organizing programs for developers and policymakers that emphasize the importance of privacy and strategies to protect it during the design and deployment of AI systems; and finally, encourage exceptional initiatives such as His Royal Highness the Crown Prince's Global Child Protection in Cyberspace initiative, which aims to strengthen collective action, unify international efforts and raise global awareness among decision-makers about the growing threats to children in cyberspace. Thank you.
Lucia Russo: Thank you so much, and I think we couldn’t have a better way to end this passionate debate. I think we could have gone on and on discussing with you. It’s a topic that deserves a lot of policy attention, as we are seeing, and this is really at the core of the discussions that we are undertaking in the international AI governance and privacy sphere. So with that, I would really like to thank the distinguished speakers here, Juraj, Jimena, Clara, and Thiago, for their excellent contributions, as well as the audience for participating so vividly in this discussion with us. Thank you.
Juraj Čorba
Speech speed
132 words per minute
Speech length
1485 words
Speech time
674 seconds
AI systems are highly dependent on data, creating privacy concerns
Explanation
Juraj Čorba emphasizes that AI systems require large amounts of data to function properly. This dependency on data raises significant privacy concerns as it involves collecting and processing vast amounts of information, potentially including personal data.
Evidence
The OECD AI principles explicitly mention the need to protect privacy.
Major Discussion Point
Privacy Challenges in Advanced AI Systems
Agreed with
Clara Neppel
Thiago Guimaraes Moraes
Jimena Viveros
Agreed on
AI systems pose significant privacy challenges
OECD updated AI principles to reflect technological developments and privacy concerns
Explanation
Juraj Čorba discusses the recent update to the OECD AI principles. The revision was made to address the evolving technological landscape and increased challenges raised by advanced AI systems, including privacy rights.
Evidence
The updated OECD AI principles were adopted in May by ministers in Paris.
Major Discussion Point
Policy and Governance Approaches for AI and Privacy
Agreed with
Jimena Viveros
Thiago Guimaraes Moraes
Agreed on
Need for global governance and harmonized regulations
Convergence of AI with other technologies like blockchain and neurotechnology creates new privacy challenges
Explanation
Juraj Čorba points out that AI is converging with other technologies such as blockchain and neurotechnology. This convergence creates new and complex privacy challenges that need to be addressed.
Evidence
Examples of converging technologies mentioned include Internet of Things, digital finance, and biology.
Major Discussion Point
Intersections of AI, Privacy, and Other Technologies
Clara Neppel
Speech speed
137 words per minute
Speech length
1492 words
Speech time
648 seconds
Generative AI exacerbates privacy issues through vast data collection and potential re-identification
Explanation
Clara Neppel highlights that generative AI has intensified privacy concerns due to its extensive data collection practices. This technology also introduces the possibility of re-identifying individuals or revealing characteristics that were not initially disclosed.
Evidence
Neppel mentions the surprise people often experience when AI systems know things about them that weren’t explicitly shared.
Major Discussion Point
Privacy Challenges in Advanced AI Systems
Agreed with
Juraj Čorba
Thiago Guimaraes Moraes
Jimena Viveros
Agreed on
AI systems pose significant privacy challenges
Importance of weighing economic benefits of AI against privacy risks
Explanation
Clara Neppel emphasizes the need to balance the economic and social benefits of AI against potential privacy risks. She suggests that success metrics should not only focus on profit and performance but also consider impacts on people and the planet.
Evidence
Neppel refers to the OECD AI principle of inclusive growth, sustainable development, and well-being.
Major Discussion Point
Balancing Innovation and Privacy Protection in AI
Differed with
Thiago Guimaraes Moraes
Differed on
Approach to privacy protection in AI systems
Importance of age-appropriate design and protecting children’s data
Explanation
Clara Neppel stresses the significance of age-appropriate design in AI systems, particularly concerning the collection of children’s data. She highlights this as a crucial issue not only for privacy but also for the mental health of future generations.
Evidence
Neppel mentions IEEE’s work with the Five Rights Foundation to establish a universal standard for collecting children’s data.
Major Discussion Point
Balancing Innovation and Privacy Protection in AI
Thiago Guimarães Moraes
Speech speed
132 words per minute
Speech length
1855 words
Speech time
839 seconds
Trade-offs between privacy, fairness, and utility in AI systems pose challenges
Explanation
Thiago Guimaraes Moraes discusses the complex trade-offs between privacy, fairness, and utility in AI systems. He points out that while technicians often think in terms of trade-offs, from a human rights perspective, these values cannot be compromised.
Evidence
Moraes provides an example of the challenge in balancing privacy and fairness in federated learning models using differential privacy techniques.
Major Discussion Point
Privacy Challenges in Advanced AI Systems
Agreed with
Juraj Čorba
Clara Neppel
Jimena Viveros
Agreed on
AI systems pose significant privacy challenges
Differed with
Clara Neppel
Differed on
Approach to privacy protection in AI systems
Data protection authorities are developing guidance and sandboxes to address AI privacy issues
Explanation
Thiago Guimaraes Moraes explains that data protection authorities are creating guidance documents and implementing sandbox environments to address privacy challenges in AI. These efforts aim to share best practices and provide practical solutions for privacy protection in AI systems.
Evidence
Moraes mentions the Brazilian Data Protection Authority’s upcoming pilot sandbox on algorithmic transparency.
Major Discussion Point
Policy and Governance Approaches for AI and Privacy
Agreed with
Juraj Čorba
Jimena Viveros
Agreed on
Need for global governance and harmonized regulations
Need for privacy-enhancing technologies and techniques like differential privacy
Explanation
Thiago Guimaraes Moraes emphasizes the importance of privacy-enhancing technologies and techniques, such as differential privacy, in addressing AI privacy challenges. These approaches can help balance privacy protection with maintaining utility and fairness in AI systems.
Evidence
Moraes references research on applying differential privacy in federated learning models to enhance privacy protection.
Major Discussion Point
Balancing Innovation and Privacy Protection in AI
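The differential-privacy technique referenced above can be illustrated with a minimal sketch. The dataset, function names, and parameters below are invented for illustration and are not drawn from the session: the Laplace mechanism releases an aggregate statistic with noise calibrated so that no single individual's record noticeably changes the output.

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, predicate, epsilon):
    """Release a count query under epsilon-differential privacy.

    A counting query changes by at most 1 when one person's record is
    added or removed, so its sensitivity is 1 and the Laplace scale is
    1 / epsilon: smaller epsilon means stronger privacy and more noise.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Toy example: a noisy count of users over 40 in a small dataset.
ages = [23, 45, 31, 52, 38, 61, 29, 44]
noisy = dp_count(ages, lambda a: a > 40, epsilon=0.5)
```

The trade-off Moraes describes is visible in the `epsilon` parameter: tightening privacy (lower epsilon) adds more noise, which degrades the utility and, in federated-learning settings, can interact with fairness across subgroups.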
Challenges of implementing “privacy by design” in rapidly changing AI landscape
Explanation
Thiago Guimaraes Moraes discusses the difficulties of implementing privacy by design principles in the context of rapidly evolving AI technologies. He emphasizes the need to translate policy frameworks into concrete considerations for AI developers.
Evidence
Moraes mentions the use of regulatory sandboxes by various data protection authorities to test and develop good practices for privacy by design in AI systems.
Major Discussion Point
Balancing Innovation and Privacy Protection in AI
Blockchain’s immutability poses challenges for data deletion rights
Explanation
Thiago Guimaraes Moraes highlights the conflict between blockchain technology’s immutability and the right to erasure of personal data. This creates a significant challenge for privacy protection in blockchain-based systems.
Evidence
Moraes mentions that this is an ongoing discussion in the privacy community, but no strong solutions have been proposed yet.
Major Discussion Point
Intersections of AI, Privacy, and Other Technologies
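The tension between immutability and erasure can be shown with a toy hash chain (a hypothetical sketch, not any real blockchain implementation): because each block commits to the hash of its predecessor, editing or deleting a record, as a right-to-erasure request would demand, invalidates every subsequent block.

```python
import hashlib

class ToyChain:
    """A minimal hash chain: each block stores the hash of the previous
    block, so any retroactive edit breaks verification downstream."""

    def __init__(self):
        self.blocks = []  # list of (data, prev_hash, block_hash)

    def _hash(self, data, prev_hash):
        return hashlib.sha256((prev_hash + data).encode()).hexdigest()

    def append(self, data):
        prev_hash = self.blocks[-1][2] if self.blocks else "0" * 64
        self.blocks.append((data, prev_hash, self._hash(data, prev_hash)))

    def valid(self):
        prev_hash = "0" * 64
        for data, stored_prev, stored_hash in self.blocks:
            if stored_prev != prev_hash or self._hash(data, stored_prev) != stored_hash:
                return False
            prev_hash = stored_hash
        return True

chain = ToyChain()
for record in ["alice:consented", "bob:consented", "carol:consented"]:
    chain.append(record)
assert chain.valid()

# "Erasing" Bob's personal data in place breaks the chain's integrity:
data, prev_hash, block_hash = chain.blocks[1]
chain.blocks[1] = ("bob:ERASED", prev_hash, block_hash)
assert not chain.valid()
```

This is why mitigations discussed in the privacy community tend to keep personal data off-chain, storing only hashes or encrypted references on the immutable ledger, rather than trying to delete on-chain records.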
Jimena Viveros
Speech speed
143 words per minute
Speech length
1539 words
Speech time
645 seconds
AI data collection and use can affect democratic institutions and geopolitics
Explanation
Jimena Viveros points out that the extensive data collection and use by AI systems can have far-reaching impacts beyond individual privacy. She argues that these practices can affect democratic institutions and geopolitical dynamics on a large scale.
Evidence
Viveros references the potential for AI-driven misinformation and erroneous use of data to impact democratic processes and regional economies.
Major Discussion Point
Privacy Challenges in Advanced AI Systems
UN recommendations aim to create a global AI data framework to protect human rights
Explanation
Jimena Viveros discusses the UN’s efforts to establish a global framework for AI data governance. This framework aims to protect various human rights that can be affected by AI’s use of data, with a focus on privacy protection.
Evidence
Viveros mentions the Global Digital Compact and its proposals for awareness raising, capacity building, and controlled cross-border data flows.
Major Discussion Point
Policy and Governance Approaches for AI and Privacy
Need for global governance and harmonized regulations to address transboundary nature of data
Explanation
Jimena Viveros emphasizes the necessity for global governance and harmonized regulations in AI and data protection. She argues that the transboundary nature of data requires a unified international approach rather than a patchwork of regional initiatives.
Evidence
Viveros suggests the creation of a centralized international authority for oversight and monitoring of AI and data governance.
Major Discussion Point
Policy and Governance Approaches for AI and Privacy
Agreed with
Juraj Čorba
Thiago Guimaraes Moraes
Agreed on
Need for global governance and harmonized regulations
AI’s use of biometric and genomic data raises concerns about digital identity protection
Explanation
Jimena Viveros highlights the privacy risks associated with AI’s use of sensitive biometric and genomic data. She emphasizes the importance of protecting individuals’ digital identities, which are closely linked to the right to privacy and the right to be forgotten.
Evidence
Viveros mentions examples of how this data could be used in law enforcement, border control, and access to various services like healthcare and finance.
Major Discussion Point
Intersections of AI, Privacy, and Other Technologies
Agreed with
Juraj Čorba
Clara Neppel
Thiago Guimaraes Moraes
Agreed on
AI systems pose significant privacy challenges
Agreements
Agreement Points
AI systems pose significant privacy challenges
Juraj Čorba
Clara Neppel
Thiago Guimaraes Moraes
Jimena Viveros
AI systems are highly dependent on data, creating privacy concerns
Generative AI exacerbates privacy issues through vast data collection and potential re-identification
Trade-offs between privacy, fairness, and utility in AI systems pose challenges
AI’s use of biometric and genomic data raises concerns about digital identity protection
All speakers agreed that advanced AI systems, particularly generative AI, pose significant privacy challenges due to their extensive data requirements and potential for re-identification or misuse of personal information.
Need for global governance and harmonized regulations
Juraj Čorba
Jimena Viveros
Thiago Guimaraes Moraes
OECD updated AI principles to reflect technological developments and privacy concerns
Need for global governance and harmonized regulations to address transboundary nature of data
Data protection authorities are developing guidance and sandboxes to address AI privacy issues
The speakers emphasized the importance of developing global governance frameworks and harmonized regulations to address the transboundary nature of data and AI-related privacy challenges.
Similar Viewpoints
Both speakers highlighted the need to balance the benefits of AI against potential risks to privacy and broader societal impacts, including effects on democratic institutions and geopolitics.
Clara Neppel
Jimena Viveros
Importance of weighing economic benefits of AI against privacy risks
AI data collection and use can affect democratic institutions and geopolitics
Both speakers emphasized the importance of implementing specific technical and design measures to enhance privacy protection in AI systems, particularly for vulnerable groups like children.
Thiago Guimaraes Moraes
Clara Neppel
Need for privacy-enhancing technologies and techniques like differential privacy
Importance of age-appropriate design and protecting children’s data
Unexpected Consensus
Convergence of AI with other technologies creating new privacy challenges
Juraj Čorba
Thiago Guimaraes Moraes
Convergence of AI with other technologies like blockchain and neurotechnology creates new privacy challenges
Blockchain’s immutability poses challenges for data deletion rights
While the focus was primarily on AI, there was unexpected consensus on the need to consider privacy challenges arising from the convergence of AI with other emerging technologies like blockchain and neurotechnology.
Overall Assessment
Summary
The speakers generally agreed on the significant privacy challenges posed by advanced AI systems, the need for global governance frameworks, and the importance of balancing innovation with privacy protection. There was also consensus on the need to consider the convergence of AI with other technologies in addressing privacy issues.
Consensus level
High level of consensus among speakers, suggesting a strong foundation for developing comprehensive approaches to AI governance and privacy protection. This consensus implies that future policy discussions and regulatory efforts may focus on implementing globally harmonized frameworks that address the complex interplay between AI, privacy, and other emerging technologies.
Differences
Different Viewpoints
Approach to privacy protection in AI systems
Clara Neppel
Thiago Guimaraes Moraes
Importance of weighing economic benefits of AI against privacy risks
Trade-offs between privacy, fairness, and utility in AI systems pose challenges
While Clara Neppel emphasizes balancing economic benefits against privacy risks, Thiago Guimaraes Moraes highlights the challenges in balancing privacy, fairness, and utility, stating that from a human rights perspective, these values cannot be compromised.
Unexpected Differences
Role of blockchain in privacy protection
Thiago Guimaraes Moraes
Juraj Čorba
Blockchain’s immutability poses challenges for data deletion rights
Convergence of AI with other technologies like blockchain and neurotechnology creates new privacy challenges
While both speakers mention blockchain, their perspectives differ unexpectedly. Thiago Guimaraes Moraes focuses on the challenges blockchain poses for data deletion rights, while Juraj Čorba sees blockchain as part of a broader convergence of technologies creating new privacy challenges.
Overall Assessment
Summary
The main areas of disagreement revolve around the approach to balancing privacy protection with innovation in AI, the specific methods for implementing privacy safeguards, and the role of emerging technologies in privacy challenges.
Difference level
The level of disagreement among the speakers is moderate. While they generally agree on the importance of privacy protection in AI systems, they differ in their approaches and emphasis on specific aspects. These differences reflect the complexity of the issue and the need for multifaceted solutions in AI governance and privacy protection.
Partial Agreements
Both speakers agree on the need for enhanced privacy protection, particularly for children’s data. However, they propose different approaches: Clara Neppel suggests age-appropriate design and universal standards, while Thiago Guimaraes Moraes focuses on privacy-enhancing technologies like differential privacy.
Clara Neppel
Thiago Guimaraes Moraes
Importance of age-appropriate design and protecting children’s data
Need for privacy-enhancing technologies and techniques like differential privacy
Takeaways
Key Takeaways
Advanced AI systems pose significant privacy challenges due to their reliance on vast amounts of data
There is a need for global governance frameworks and harmonized regulations to address AI privacy issues
Balancing innovation with privacy protection is a key challenge in AI development and deployment
The convergence of AI with other technologies creates new privacy risks that need to be addressed
Protecting children’s data and implementing age-appropriate design in AI systems is crucial
Resolutions and Action Items
OECD expert group to continue mapping AI principles to privacy principles and develop recommendations for each stage of the AI lifecycle
Data protection authorities to provide guidance and conduct regulatory sandboxes on AI privacy issues
Push for adoption of international AI governance standards at national levels
Unresolved Issues
How to implement privacy-by-design principles in rapidly evolving AI systems
Balancing trade-offs between privacy, fairness, and utility in AI systems
Addressing privacy challenges posed by blockchain and other emerging technologies in conjunction with AI
Enforceability of non-binding AI ethics principles and guidelines
Suggested Compromises
Using differential privacy techniques to balance privacy protection and data utility in AI systems
Developing universal standards for collecting children’s data that balance protection and innovation
Considering context-specific approaches to privacy regulation, with stricter enforcement for critical areas like children’s data
Thought Provoking Comments
AI is data, so we cannot have AI without data. And data comes with privacy issues, that’s just a problem.
speaker
Jimena Viveros
reason
This succinctly captures the fundamental tension between AI development and privacy concerns.
impact
It set the tone for much of the subsequent discussion about balancing AI innovation with privacy protections.
We should really recognize data as a digital public good.
speaker
Jimena Viveros
reason
This reframes how we think about data ownership and governance in the AI era.
impact
It sparked discussion about creating global frameworks for data and AI governance to protect rights while fostering innovation.
Human rights cannot be traded off. And that’s here one of the main challenges. We are talking about trade-off of values in a technical level that they cannot mean undermining of human rights.
speaker
Thiago Guimaraes Moraes
reason
It highlights the tension between technical optimization and fundamental rights protection in AI development.
impact
It shifted the conversation to focus more on how to implement human rights protections in practice when developing AI systems.
We can hardly talk about governance of privacy in AI without actually fully understanding the implications of digital platforms for privacy and the way platforms are being driven by AI or enabling AI via collection of data about the users, right?
speaker
Juraj Čorba
reason
This comment emphasizes the interconnected nature of AI, privacy, and other digital technologies.
impact
It broadened the scope of the discussion to consider AI privacy issues in the context of the entire digital ecosystem.
At the basis of privacy, it’s identity, and that is the one most precious thing that we have, and that’s why we should all strive to protect it.
speaker
Jimena Viveros
reason
This comment gets to the core of why privacy matters, connecting it to fundamental human rights and identity.
impact
It refocused the discussion on the human impact of privacy violations and the importance of protecting individual identity in the digital age.
Overall Assessment
These key comments shaped the discussion by highlighting the complex interplay between AI development, data usage, and privacy protection. They moved the conversation from abstract principles to practical challenges in implementing privacy safeguards, while emphasizing the need for global cooperation and human rights-based approaches. The discussion evolved from technical considerations to broader societal implications, underscoring the multifaceted nature of AI governance and privacy protection in the digital age.
Follow-up Questions
How to balance economic and social benefits of AI against risks to privacy rights?
speaker
Clara Neppel
explanation
This is a key challenge in developing AI governance frameworks that protect privacy while enabling innovation.
How to implement age-appropriate design in AI systems to protect children’s privacy and mental health?
speaker
Clara Neppel
explanation
Protecting children’s data and wellbeing is a critical issue as AI systems become more prevalent.
How to address the challenge of the right to erasure of personal data in blockchain systems?
speaker
Thiago Guimaraes Moraes
explanation
This highlights the tension between blockchain’s immutability and data protection rights.
How to create a global AI data framework to protect human rights affected by data use?
speaker
Jimena Viveros
explanation
A global framework is needed to address the transboundary nature of AI and data flows.
How to develop practical, enforceable global governance mechanisms for AI and privacy?
speaker
Jimena Viveros
explanation
Moving from principles to enforceable rules is crucial for effective AI governance.
How to implement privacy by design within the rapidly changing AI landscape?
speaker
Audience member (online)
explanation
This is important for proactively addressing privacy concerns in AI development.
How to strengthen legal frameworks to ensure effective privacy protection in the age of AI?
speaker
Hasara Tebi (audience member)
explanation
Updating legislation is crucial to keep pace with technological advancements in AI.
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
WS #279 AI: Guardian for Critical Infrastructure in Developing World
Session at a Glance
Summary
This panel discussion focused on the challenges and opportunities of using AI to secure critical infrastructure in developing countries. Experts from various sectors discussed key issues including cybersecurity risks, capacity building, and international cooperation.
Panelists highlighted several challenges faced by developing countries, including legacy infrastructure, lack of cybersecurity expertise, and limited resources. They emphasized the need for upskilling technical professionals and leveraging AI to enhance threat detection and response capabilities. The importance of multi-stakeholder collaboration was stressed, with calls for partnerships between governments, private sector, and civil society to develop affordable and accessible AI-powered security solutions.
The discussion explored strategies for reducing dependence on foreign technology, including developing robust domestic legal frameworks, fostering regional cooperation, and investing in local capacity building. Panelists also addressed the need to balance AI-driven security with privacy concerns and ethical considerations, suggesting a risk-based approach and adherence to international standards and human rights principles.
Key recommendations included prioritizing critical infrastructure protection, developing national and regional cybersecurity frameworks, and participating in international forums to share best practices. The importance of tailoring solutions to local contexts while adhering to global standards was emphasized. Panelists also discussed the need for sustainable funding models and tiered pricing to ensure accessibility for developing countries.
Overall, the discussion underscored the potential of AI in enhancing critical infrastructure security while highlighting the need for collaborative, ethical, and context-sensitive approaches to implementation in developing countries.
Keypoints
Major discussion points:
– Challenges in securing critical infrastructure in developing countries, including legacy systems, lack of expertise, and digital transformation issues
– Strategies for training and upskilling cybersecurity professionals in developing countries
– Risks associated with AI systems for critical infrastructure and ways to mitigate them
– Need for international cooperation and partnerships to share best practices on critical infrastructure security
– Balancing AI-driven security with privacy and ethical considerations
The overall purpose of the discussion was to explore how developing countries can leverage AI to enhance the security of critical infrastructure, while addressing challenges around expertise, resources, and international cooperation.
The tone of the discussion was largely informative and collaborative. Speakers shared insights from their various backgrounds and perspectives, building on each other’s points. There was an emphasis on the need for global cooperation and knowledge sharing to address these complex challenges. The tone remained consistent throughout, with speakers maintaining a constructive and solution-oriented approach.
Speakers
– Harisa Shahid: Co-organizer of the session
– Muhammad Umair Ali: Co-organizer of the session, employed in the AI field, represents private sector
– Hafiz Muhammad Farooq: Cybersecurity architect at Saudi Aramco, 20+ years experience in network and cybersecurity
– Jenna Fung: Program director for NetMission.Asia Internet Governance Academy, leads Asia Pacific Youth IGF
– Daniel Lohrmann: Deals with public sector portfolio at Presidio, cybersecurity professional with 30+ years experience
– Gyan Prakash Tripathi: Lawyer, worked with think tanks and research organizations, represents Civil Society Stakeholder Group
– Jacco-Pepijn Baljet: Senior policy officer at the Ministry of Foreign Affairs of the Netherlands
Additional speakers:
– Fernando: Part of Brazilian youth delegation, works in network provider
– Thuy: From .vn directory, part of technical community
Full session report
Expanded Summary: AI for Securing Critical Infrastructure in Developing Countries
This panel discussion brought together experts from various sectors to explore the challenges and opportunities of using AI to secure critical infrastructure in developing countries. The session, co-organised by Harisa Shahid and Muhammad Umair Ali, featured speakers with diverse backgrounds in cybersecurity, policy, law, and youth engagement.
Key Challenges in Securing Critical Infrastructure
The discussion began with Hafiz Muhammad Farooq, a cybersecurity architect at Saudi Aramco, outlining three major challenges faced by developing countries, particularly in the MENA region:
1. Legacy infrastructure: Outdated systems create vulnerabilities and are difficult to secure.
2. Lack of cybersecurity expertise: There is a shortage of professionals skilled in protecting industrial control systems.
3. Digital transformation issues: Rapid adoption of new technologies without adequate security measures.
Jacco-Pepijn Baljet, from the Dutch Ministry of Foreign Affairs, added that limited resources and budget constraints further exacerbate these challenges. Jenna Fung, representing the youth perspective, highlighted knowledge gaps due to less exposure to new technologies in developing countries and emphasized the digital divide that necessitates tailored capacity-building strategies.
Leveraging AI for Enhanced Security
Despite these challenges, speakers agreed that AI presents significant opportunities for improving critical infrastructure security. Hafiz Muhammad Farooq emphasised that AI can augment threat detection and response capabilities, enabling automated analysis of large-scale infrastructure data. However, Daniel Lohrmann, a cybersecurity professional with over 30 years of experience, cautioned that AI systems themselves face risks such as data poisoning attacks, privacy attacks, adversarial attacks, model theft, and dependency or vulnerability supply chain attacks.
Lohrmann also highlighted a unique advantage of AI in overcoming language barriers, suggesting that AI could make cybersecurity solutions available in multiple languages, thereby increasing accessibility for developing countries. He specifically mentioned critical infrastructure sectors such as utilities, finance, government, and transportation as key areas for AI application.
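As a hedged illustration of the kind of automated threat detection the panel describes (the data, thresholds, and function names here are invented, and a real deployment would use trained ML models rather than this crude baseline), even a simple statistical check can flag abnormal activity in infrastructure telemetry:

```python
import statistics

def flag_anomalies(history, readings, z_threshold=3.0):
    """Flag readings more than z_threshold standard deviations away
    from the mean of the historical baseline; a stand-in for the
    AI-driven detectors discussed by the panel."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return [r for r in readings if abs(r - mean) / stdev > z_threshold]

# Hourly request counts at a utility's control-system gateway (toy data).
baseline = [100, 98, 103, 97, 101, 99, 102, 100]
incoming = [101, 99, 540, 100]  # 540 is a suspicious spike
suspicious = flag_anomalies(baseline, incoming)
```

The same skills gap the panel identifies applies here: tuning such detectors for legacy industrial systems, where "normal" traffic patterns are poorly documented, is precisely where upskilling and knowledge transfer are needed.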
Capacity Building Strategies
The panel agreed on the critical need for capacity building in developing countries. Jenna Fung advocated for developing national strategies tailored to local contexts and leveraging online resources and regional educational opportunities. Daniel Lohrmann suggested establishing public-private partnerships for knowledge transfer and implementing tiered pricing models for AI security solutions to ensure affordability.
An audience member, Fernando, raised concerns about retaining cybersecurity professionals in their home countries. In response, Hafiz Muhammad Farooq suggested focusing on producing more talent rather than retention, emphasizing the importance of continuous education and skill development.
International Cooperation and Partnerships
Speakers unanimously agreed on the importance of international collaboration. Jacco-Pepijn Baljet emphasised the need to exchange best practices and lessons learned through international forums. He also highlighted the relevance of the Global Digital Compact in AI governance. Hafiz Muhammad Farooq called for the development of global standards and frameworks specifically for AI in critical infrastructure, mentioning the EU AI Act and the NIS2 directive as examples.
Gyan Prakash Tripathi, representing the Civil Society Stakeholder Group, suggested forming regional knowledge-sharing and R&D blocs to pool available resources. He also proposed a three-pronged strategy for developing countries to reduce dependence on foreign technology:
1. Robust domestic legal framework and strategic contracting
2. Inclusive, transparent, and accountable government mechanisms
3. Regional cooperation and capacity-building for long-term sovereignty
Balancing Security, Privacy, and Ethics
The discussion addressed the need to balance AI-driven security with privacy concerns and ethical considerations. Daniel Lohrmann advocated for implementing robust data governance and secure model development practices. Jacco-Pepijn Baljet suggested adopting a risk-based approach to AI systems regulation and emphasized the need for both high-level international agreements and local legislation to address ethical considerations in AI implementation.
Key Recommendations and Action Items
1. Develop national strategies for tailored capacity building in cybersecurity and AI.
2. Establish public-private partnerships for knowledge transfer and technology access.
3. Work towards creating global standards and frameworks for AI in critical infrastructure security through multi-stakeholder cooperation.
4. Implement tiered pricing models for AI security solutions to ensure affordability for developing countries.
5. Increase collaboration and knowledge sharing through international forums and regional partnerships.
Resilience Strategies and Future Considerations
Daniel Lohrmann suggested specific resilience strategies against AI-driven attacks, including AI-powered threat intelligence, red teaming, simulated attacks, and comprehensive incident response plans. The panel also discussed the importance of balancing global principles with local context when developing AI and cybersecurity policies.
Conclusion
The discussion underscored the potential of AI in enhancing critical infrastructure security while highlighting the need for collaborative, ethical, and context-sensitive approaches to implementation in developing countries. Future initiatives in this area are likely to focus on collaborative efforts, knowledge sharing, and capacity building, while also addressing the ethical and security concerns associated with AI implementation.
Session Transcript
Harisa Shahid: community here, and I’m joined by my co-organizer and partner, Mr. Muhammad Umair Ali from Pakistan, who works in the AI field and represents the private sector group here. And without further ado, I would like to introduce the esteemed panelists for our session today. We have Mr. Jacco-Pepijn Baljet, who is a senior policy officer at the Ministry of Foreign Affairs of the Netherlands. He has vast experience in fostering partnerships to address cyber-related issues across several countries. He has served at the permanent mission of the Netherlands to the UN in Vienna on nuclear issues and at the European Union delegation to the Council of Europe in Strasbourg. Next, we have Mr. Hafiz Farooq with us. Mr. Hafiz Farooq is a cybersecurity architect at Saudi Aramco. He holds over 20 years of experience in network and cybersecurity. He is a three-time fellow at the Internet Corporation for Assigned Names and Numbers, ICANN, and is serving as a member of the Root Server System Advisory Committee and other working groups. He also serves on the advisory board of several Fortune 500 companies. He is an esteemed cybersecurity professional and is joining us here on site. Now, without further delay, I would like to give the stage to Mr. Muhammad Umair Ali, who would like to introduce our online speakers. So you can continue, Umair.
Muhammad Umair Ali: Hi, Harisa. Hi, everyone. Thank you so much for joining. So yes, without further delay, I would like to introduce the virtual speakers. We have Mr. Daniel Lohrmann. Mr. Lohrmann is an esteemed cybersecurity professional. He currently handles the public sector portfolio at Presidio. He’s an accomplished author and award-winning cybersecurity professional with over 30 years of work experience, starting at the National Security Agency of the United States government, and he has since worked with the Department of Homeland Security as well as the White House and other organizations. So he’s joining us today from New York. I guess it’s quite an early time there. So thank you, Mr. Daniel, for joining us. Following that, we have Ms. Jenna Fung. Ms. Fung is the program director for the NetMission.Asia Internet Governance Academy. She also leads the Asia Pacific Youth IGF and is an elected member of the Youth Coalition on Internet Governance Steering Committee. She’s joining us from Toronto, Canada today. Welcome, Jenna. And up next, we have the final panelist, Mr. Gyan Prakash Tripathi. Mr. Gyan is a lawyer and has worked with several think tanks and research-based organizations. He is joining us as a representative of the Civil Society Stakeholder Group today, and he is currently based in Vienna, Austria. Thank you so much, Mr. Gyan and everyone else for joining. Over to you, Harisa.
Harisa Shahid: Okay, thank you so much, Umair, and thank you so much to our esteemed speakers for joining us today. So starting with this session, there is a question: what actually is critical infrastructure? Critical infrastructure refers to the physical and digital systems, assets, and networks that are essential to the functioning of a society and economy. These systems are crucial for ensuring public safety, economic stability, and national security. Any disruption or damage to critical infrastructure can have serious consequences for public health, safety, and obviously the national and global economy. Such infrastructure includes but is not limited to energy infrastructure, the transport network, the healthcare network, financial services, defense services, and critical manufacturing, among others. Today, we aim to discuss navigating the security of such critical infrastructure in the rapidly developing age of AI through multi-stakeholder participation, international cooperation, capacity development, resource allocation, and building resilience into the infrastructure of developing countries. So, this brings me to my first question, for Mr. Hafiz Farooq: what are the unique challenges faced by developing countries, particularly in the Middle East, North Africa, and South Asia, in securing critical infrastructure from cyber threats, and how can AI be used to address them?
Hafiz Muhammad Farooq: First of all, thank you very much for inviting me today for this great panel discussion. I’m Hafiz Farooq from Saudi Aramco. It’s a great question. I would say in the developing countries, especially MENA itself, the major challenge we have is in the area of critical infrastructure. There are three major areas where we see issues. Area number one, I would say, is legacy infrastructure. In developing countries, companies don’t have huge budgets to upgrade their security infrastructure. They keep using outdated systems and technologies because of a lack of resources and a lack of budget, and here comes a problem. These old systems they keep using have lots of vulnerabilities; they don’t have the security features which are required these days. So they actually create a huge attack surface for attackers to attack your infrastructure, and here comes a problem. So legacy systems are one problem. The second problem which I want to highlight in critical infrastructure is the lack of security expertise. You know, the lack of expertise in the critical infrastructure domain is a global problem. It’s not only a problem for developing countries; it is a problem everywhere. But obviously developing countries are also getting the heat of this problem. You will find many security experts in the industry who know about the TCP protocol, but when you talk about any ICS protocol, like Modbus TCP, you will not find many experts who know the in-depth detail of the technology. So I would say lack of expertise is one of the problems, and companies need to dedicate some budget to training their resources, training their individuals, to make sure they are on top of the new technologies coming in this area. And the third important area which I want to highlight is digital transformation.
It’s not an issue in itself. I know all of you guys love digital transformation, and I really appreciate that too, but the problem is people do spend money on digital transformation, yet they don’t give attention to spending some money on securing the digital infrastructure. So when you are deploying this digital infrastructure, make sure that you deploy cybersecurity controls on top of it. And if you don’t do that, these transformations will become a pain in time to come. So you need to keep this in mind. Now coming to the second part of your question, Harisa, which is about how we can use AI for this. Obviously AI is a great technology; it can do many major things to secure our critical infrastructure, but two areas are the key areas where AI can be very useful. One of them is threat detection and response. You can ingest all your data from your critical infrastructure in real time into your algorithms, and they can find anomalies in your daily operations and find out if there is a real-time security threat. So detection and response can be augmented by AI big time; there is no doubt about it. Especially for a company like Aramco, we have a massive infrastructure; I mean, we have millions of assets scattered all across the world. We would need an army of resources, an army of SOC analysts sitting in real time in the SOC, doing analysis on these events, which is impossible. So here comes the role of AI, where AI algorithms can jump in and make life easy for you. This is what my company is doing. We can’t just employ hundreds of security analysts to do everything. We have to rely on AI. So I hope I answered your question. Thank you.
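The real-time anomaly detection Farooq describes can be illustrated with a minimal sketch: compare each new telemetry reading against a rolling baseline and flag large deviations. The telemetry values, window size, and z-score threshold below are illustrative assumptions, not Aramco's actual pipeline.

```python
# Minimal sketch of anomaly detection over infrastructure telemetry:
# flag readings that deviate sharply from a rolling baseline.
from statistics import mean, stdev

def find_anomalies(readings, window=20, threshold=3.0):
    """Flag indices whose reading deviates more than `threshold`
    standard deviations from the mean of the preceding `window`
    readings."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # Skip a flat baseline (sigma == 0) to avoid division issues.
        if sigma and abs(readings[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# Synthetic sensor stream: steady values with one injected spike.
telemetry = [10.0 + 0.1 * (i % 5) for i in range(40)]
telemetry[25] = 50.0  # the "security event"
print(find_anomalies(telemetry))  # the spike at index 25 is flagged
```

In a real SOC pipeline the baseline would be learned per asset and per metric, but the principle is the same: the model, not an analyst, watches every stream.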
Harisa Shahid: Perfect, very well. Thank you so much for the great points made. One of the things you highlighted is the lack of expertise; as we all know, it is a very major problem. The first issue that comes to mind when we talk about cybersecurity and AI is that to deploy these solutions, one must have the required expertise to work in these areas. So this brings me to my next question, which is for Jenna Fung: what are the most effective strategies for training and upskilling technical professionals in developing countries? As I’ve seen, you have been working with some civil society organizations and the community as well. And how can we leverage AI for critical infrastructure security, and what are the limitations to its adoption? The floor is all yours, Jenna. Okay. I think Jenna is unable to unmute herself. Can you please make her a co-host? Okay, can we make Ms. Jenna Fung a co-host?
Muhammad Umair Ali: Also, can we do the same for Gyan and for Daniel Lohrmann? I think I did put in requests to the IGF host but haven’t heard back from them yet.
Harisa Shahid: Okay, they’re working on it. It’s done? Okay, it’s done. Jenna, can you please try again? Okay. Okay, I think, while Jenna tries to reconnect, we can move on to our next question, which is for Mr. Dan. What are the primary cybersecurity risks associated with AI systems, and how can these risks be mitigated to protect critical infrastructure? Dan, are you able to unmute yourself? I think the host is unmuting me again and again, and not Dan and Jenna. Can you please unmute Mr. Dan and Jenna Fung? Hello, can you hear me now? Yeah, yeah, perfect. We can hear you.
Daniel Lohrmann: Yeah, but I cannot, the video has not started, so I don’t know if you can see me, but I can certainly start talking if you’d like. Yeah, sure. Yeah, I’m getting a message saying the host must unmute you, or the video is not enabled. So, yeah, so thank you all. First of all, great to join you today, and as soon as the video comes live, I’ll be happy to be on video, but it’s great to be with everyone. I’m actually in Michigan in the USA, and this question is a really important question. I mean, there are a lot of different challenges. Just to repeat again, you want me to answer the primary cybersecurity risks associated with AI systems, and how these can be mitigated? Is that correct? Yeah, yeah, yeah. Great. So, first of all, I would just say that AI is being used extensively to attack us. AI systems can be exploited to execute large-scale automated attacks, such as spear phishing and malware distribution. And AI-driven attacks are actually spreading, broadening, and deepening the attacks against critical infrastructure worldwide. This is happening all over the United States right now, all over the world right now. And the video is still not working. But on the actual AI systems themselves, I want to mention four or five different areas, and I can dive into some of the ways we can mitigate these. They range from data poisoning attacks to privacy attacks to adversarial attacks. In data poisoning attacks, for example, malicious actors manipulate training data to bias or compromise AI models, leading to faulty decision-making. Poisoned data can cause an AI system managing a power grid to misclassify a threat, resulting in an outage. In privacy attacks, an AI system might reveal patient data used during training. In an adversarial attack, attackers input specifically crafted data to deceive AI systems, causing incorrect outputs. And so we need to make sure that those are protected against.
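Lohrmann's power-grid example can be made concrete with a toy model. The sketch below shows how relabeling part of the training data shifts a decision boundary so that borderline attack traffic is misread as benign; the data, the "benign"/"attack" labels, and the nearest-centroid classifier are all illustrative assumptions chosen for brevity, not a real grid-monitoring system.

```python
# Toy illustration of data poisoning: flipping training labels moves
# a nearest-centroid model's decision boundary. All values are synthetic.
def centroid(points):
    return sum(points) / len(points)

def train(samples):  # samples: list of (value, label) pairs
    benign = [v for v, lbl in samples if lbl == "benign"]
    attack = [v for v, lbl in samples if lbl == "attack"]
    return centroid(benign), centroid(attack)

def classify(value, benign_c, attack_c):
    # Assign to whichever class centroid is nearer.
    return "benign" if abs(value - benign_c) <= abs(value - attack_c) else "attack"

clean = [(1.0 + 0.1 * i, "benign") for i in range(10)] + \
        [(5.0 + 0.1 * i, "attack") for i in range(10)]
# Poisoning: an attacker relabels half of the attack traffic as benign.
poisoned = [(v, "benign") if lbl == "attack" and v < 5.5 else (v, lbl)
            for v, lbl in clean]

b, a = train(clean)
pb, pa = train(poisoned)
print(classify(4.0, b, a))    # "attack" under the clean model
print(classify(4.0, pb, pa))  # "benign" after poisoning
```

The borderline reading that the clean model correctly flags slips past the poisoned one, which is exactly the faulty decision-making described above.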
Another type of risk we have is model theft. AI models are stolen via exposed APIs, application programming interfaces, or insider threats, enabling attackers to duplicate or misuse them. Stolen models can be weaponized to attack critical systems or sold to competitors. And then just a couple more: dependency vulnerabilities and supply chain attacks. Third-party components or open source libraries used in AI systems might contain vulnerabilities. A compromised library in an AI application managing water supply can serve as an entry point for attackers. On the second part of your question, I’ll just mention briefly some things we can do around mitigation. Mitigating AI cybersecurity risks in critical infrastructure requires us to have a robust data governance model, such as validating data sets and using differential privacy, which is a technique to prevent data poisoning and privacy attacks. We also need to make sure we’re doing secure model development, including adversarial training and regular updates, to build resilience, so that when we have attacks, we can sustain them and recover. Access controls, encryption, and network segmentation can protect against unauthorized access and the spread of these attacks. Third-party risks can be reduced through stringent vetting and secure software practices. Continuous monitoring with AI-driven anomaly detection can ensure proactive threat management. And then lastly, I just want to mention that incident response plans need to be updated; there are a variety of really great incident response frameworks, such as NIST’s. Be ready, so that when attacks do happen, you can collaborate on threat intelligence to strengthen defenses. And as was mentioned earlier, there needs to be more training and awareness to create a culture of security and resilience.
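Differential privacy, one of the mitigations mentioned above, works by adding noise calibrated to a query's sensitivity so that no single training record can be inferred from a released statistic. A minimal sketch of the underlying Laplace mechanism follows; the patient records, the epsilon value, and the count query are illustrative assumptions, not a production implementation.

```python
# Sketch of the Laplace mechanism behind differential privacy: noise
# with scale (sensitivity / epsilon) masks any single record's
# contribution to a released statistic. Values here are illustrative.
import random

def laplace_noise(sensitivity, epsilon):
    """Sample from Laplace(0, sensitivity/epsilon), built as the
    difference of two independent Exp(1) draws."""
    scale = sensitivity / epsilon
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def private_count(records, predicate, epsilon=1.0):
    """Release a count with epsilon-differential privacy. A counting
    query has sensitivity 1: adding or removing one record changes
    the true count by at most 1."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(sensitivity=1, epsilon=epsilon)

# Hypothetical training set: 100 patient records, 15 of them flagged.
patients = [{"id": i, "flagged": i % 7 == 0} for i in range(100)]
print(private_count(patients, lambda p: p["flagged"]))  # near 15, but noised
```

A smaller epsilon means more noise and stronger privacy; the point is that the released number stays useful in aggregate while no individual record is exposed.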
So those are some of my… It’s working now, so good to see you. And so those are some of my opening comments.
Harisa Shahid: Thank you so much. Thank you so much, Mr. Dan. And now I would like to move towards Jenna. And Jenna, I will repeat the question. The question was that what are the most effective strategies for training and upskilling technical professionals or youth in the developing countries to leverage AI for critical infrastructure security? And what do you see are the limitations for their adoption?
Jenna Fung: All right, thank you so much. I hope that I am audible in the room, and awesome, I got a thumbs up from Dan as well, so I assume remote participants can also hear me clearly. Thanks for having me on this panel. Given my background, I work mostly with young people in Asia Pacific on capacity building, so I have some knowledge, to a certain extent, about cybersecurity and infrastructure and all that. Although I can’t speak to the technicalities of all the subject matter, I do have some opinions on how I see what we could do, or do better, for capacity building, especially since, per the title of our session, in many senses we are using these ever-evolving technologies for critical infrastructure these days. From my experience working with many young people in Asia Pacific in the past six, seven years, you can see that much of the time there is a knowledge gap there. And as I currently reside in North America as well, I see some differences: even when people are exposed to the same level of development, things are really new, and those who have knowledge, for example governments or companies, are using it on infrastructure or on things that people everywhere use every day. But because people in developing countries might have fewer resources or opportunities to be exposed to information or resources to learn more about it, they essentially become more vulnerable, and there is a big gap in between. And I will spare you all the details about the digital divide and all that. I think essentially, of course, ideally, like you said, there should be tailor-made national
strategies on how to do capacity building for the people who implement or execute these kinds of technologies in their work. Especially, for example, government officials or civil servants who use it in their work, I think they should be the first group of people who need resources. But there are also the people who are impacted by the implementation of these technologies in the infrastructure; they should develop their literacy around these kinds of tools and the use of AI as well. So I think national strategies would be ideal and helpful. But much of the time, like I said, there are financial constraints on resources, and there are many other even more critical things that you need to invest in, put effort into, or prioritize, because there are geopolitical tensions, or you need to allocate resources and prioritize your energy for dealing with, for example, climate change and all that. And there are times when capacity building gets put on the back burner. So I think there are times when individuals, especially young people in developing countries, can leverage the power of the Internet to look for resources elsewhere. Even if not within your own country, perhaps you can look within your region to see if there are any NGOs or organizations providing this kind of educational opportunity for you to enrich your own knowledge. And many people are aware that a lot of big corporations also offer some sort of skills training, like micro-credential opportunities, for you to learn about things as well. So I think that will be helpful for young people to develop knowledge. I will stop here, and hopefully we can chat more and touch upon other questions as the audience asks questions later on. Thanks.
Harisa Shahid: Thank you so much, Jenna. The points are very well made. One of the most important points I would highlight here, as Jenna mentioned, is that we can look within our own region to educate people. Because we are specifically talking about developing countries, it is always difficult for them to invest more resources and to get resources from across borders, right? So this leads to the next question, which I would like to ask Gyan: how can developing countries rely less on foreign technology to ensure the security of critical infrastructure and maintain digital sovereignty?
Gyan Prakash Tripathi: Thanks, Harisa. And hello, everyone. Excellent to see many familiar faces on the panel and in the audience. This question of digital dependency and the export of technology, as well as the governance architecture, is also something that kept popping up during the now-concluded first research cycle at the Observatory of Information and Democracy. Through our meta-analysis of global literature, we observed the emergence of epistemic injustice due to the corporate incentives, strategies, and practices involved in designing, developing, selling, and controlling the socio-technical solutions that are at the heart of information ecosystems. These then make global South nations vulnerable to exploitation by further privileging information and knowledge that are neither representative nor inclusive. To address this, I suggest a three-pronged strategy that emphasizes legal safeguards, multi-stakeholder accountability, and capacity-building measures. The first prong is a robust domestic legal framework and strategic contracting. Here, global South and developing nations must codify clear obligations into their legislation, which must itself be human rights-centric. They must enact legislation and regulations that mandate transparency, human rights due diligence, and data protection protocols for all technology suppliers, regardless of their origin. They must also have stringent contractual terms that demand technology transfer, skills development, and long-term support arrangements. Among these provisions, they could also include mandatory training for local engineers, commitment to open standards, and clear exit strategies that can prevent vendor lock-in. The second prong that I would strongly suggest is inclusive, transparent, and accountable governance mechanisms, which can be achieved through multi-stakeholderism.
There must be clear and direct independent oversight by bodies that include government representatives, CSOs, industry experts, and also human rights advocates. But I don’t think I need to elaborate on that in this forum; the approach is, of course, well-documented. The third and most critical prong of the strategy is regional cooperation and capacity-building for long-term sovereignty. It is pertinent that global South nations form knowledge, R&D, and cooperative blocks to compound the resources that they have available. This can be done through collaboration with geographically proximate countries facing similar challenges to develop common legal and technical standards. Another way it can be achieved is by forming issue- or interest-based blocks, which can increase collective bargaining power and reduce the risk of exploitative deals. Each prong of this strategy seeks to reinforce sovereignty, protect local interests, and uphold human rights standards. By implementing this, developing countries can create a balanced, forward-looking legal and policy ecosystem which will respect human rights, reinforce sovereignty, and foster resilient, fair, and beneficial technology partnerships. Thank you, and back to you, Harisa.
Harisa Shahid: Thank you so much, Gyan, for your valuable input. After listening to every speaker here, I would like to move on to Mr. Jacco. You have worked in the government sector and have expertise in that area, so I would like to ask you: what do you see as the key challenges and opportunities in establishing international partnerships to share best practices and technologies for critical infrastructure security, particularly in the context of AI?
Jacco-Pepijn Baljet: Thank you, Harisa, and thank you to all the speakers before me; I can actually echo many of the points made before, because together they bring all the perspectives together, and I think that is also very symbolic of the IGF as well, where all stakeholders come together and learn from each other. So thank you for that. I would say, of course, we have heard it before too, one of the challenges usually is to bring enough human resources and finances together. On the other hand, AI can also relieve a lot of challenges in terms of human resources, as the other speakers said, because you can use AI instead of actually having people check all the cybersecurity vulnerabilities. So these challenges together mean that one has to prioritize. And especially in terms of critical infrastructure, internationally we have not agreed on one definition of what critical infrastructure is. And maybe we don’t need one, because every country and every region is different. We did agree in the UN that, of course, the core infrastructure of the Internet, the general availability and integrity of the Internet, is part of critical infrastructure. And I’m glad you said you were also involved with ICANN; of course, ICANN is also part of this, so I would like to stress that. But other than that, I think many countries will share many critical infrastructure ideas. It is quite logical to say that the energy grid, the water supply, or your own cybersecurity operations center, your SOC, or your CSIRT are part of your critical infrastructure. So I would say one has to prioritize. Every country will have to find its own national priorities. But you can, of course, exchange within your region, as was also mentioned: what are your highest-priority issues? And one opportunity in international partnerships is also exchanging best practices and exchanging ideas. Also negative experiences.
It’s also very important that you share negative experiences together so that people can learn from each other. There are a number of international mechanisms already for that. There’s the Internet Governance Forum, but there’s also the AI for Good Summit with the ITU in Geneva. There are many forums open to stakeholders. There’s also the Global Forum on Cyber Expertise, which is built to share this knowledge and to bring supply and demand on capacity building together, to actually bring stakeholders from the private sector, the public sector, and governments together, from both the Global South and other countries. And these days we also hear a lot about the Digital Cooperation Organization. It’s also an interesting organization that brings stakeholders together. I don’t know if they do a lot of work on AI yet, but I think that will be a logical step too. And here at the IGF there’s also a lot of talk about the Global Digital Compact and what has come out of it at the UN level. There are a number of mechanisms that will have to be implemented now on AI governance. And you also see there that it’s built to bring all the stakeholders together. And that is the key message I want to give: it’s important that any mechanism or international partnership actually brings civil society, academia, the private sector, the technical community, and governments together to really learn from each other and not only speak in their own bubble or silo, and also between different owners. Because critical infrastructure is sometimes owned by the state and sometimes by a private sector company. And then it’s important also within your country that you have mechanisms to exchange knowledge and experiences between the different stakeholders.
Harisa Shahid: Very true. Thank you so much for your input. Now this leads to the next question. As we have already mentioned, collaboration between all the stakeholders, including the private sector, the government, and civil society, is important to develop and deploy AI-powered security solutions that are affordable, because we are talking about developing countries, and accessible. So please, over to you, Mr. Dan.
Daniel Lohrmann: Yeah, thank you for the question. And I really appreciate the comments that were just given, because I think they really lead into that really well. And I think that this is a huge challenge. I would just echo some of the comments; I have prepared a number of different aspects of how private sector companies can collaborate with governments and civil society. I think, first of all, it just starts with a commitment that you want to do it. There’s a saying we use a lot in the US: when you’ve climbed the ladder, you need to send the ladder back down and help other people up. It’s in all of our interests, the global interest, to work together, to partner. Many, many companies, certainly in the US, but all around the world, have great partnerships in developing countries. For others, it’s a new thing, but they recognize that it is in the long-term best interest of everyone, the whole society, but also of their own companies, for where they want to go and how they want to work together and partner in the future. So how can you do that? Public-private partnerships are a big one, as is partnering with NGOs, non-governmental organizations. I think from a practical perspective, you need to have tiered pricing models, offering subsidized or tiered pricing for AI-powered solutions to ensure affordability for low-income regions. That is done in other areas of society; we’ve talked about, for example, pharmaceutical prices, drug prices for different things in different parts of the world. There are models around that. The same kinds of things may need to be considered with AI and technology. Capacity building and skills development across organizations, and really making sure we have local training programs that meet local needs. Because, you know, I’m sitting here in the United States; obviously, I don’t understand the specific needs of the developing countries.
I’m very interested in that and in working together with those on this panel and others around the world, and we would love to help in different parts of the world, in developing countries in Africa and elsewhere. But honestly, developing these partnerships that transfer AI expertise to local professionals will ensure long-term sustainability, and it needs to be contextualized and localized. Think about long-term sustainable models and infrastructure investment: partnering with governments to build the necessary digital infrastructure, such as cloud storage and broadband access in developing regions, while also ensuring that local needs are being met, from privacy perspectives, and that we have proper funding mechanisms in place as well. And I think that’s a big challenge, leveraging international development funds. I know this is a UN panel, but we should look at ways we can use grants to finance initial deployments of AI-powered solutions, and then really talk about, I saw some questions, maybe we’ll get to those in a few minutes, local pilots or proofs of concept in a local context. I think those are really important. So, affordable, accessible: it really is going to require multi-stakeholder coalitions. So really establishing coalitions with international organizations, whether that be the UN or the World Bank, working together with NGOs, as I mentioned, and advocacy groups, and then just really making sure that we all speak the same language. And I just wanted to close on that. Even some of the terms we use in the U.S. are different from the terms that people use around the world, and part of that is language, you know, different spellings of words in English and that kind of thing. I’m horrible at foreign languages, by the way, so I admit that up front.
But just even the terminology: as we think about AI, I think AI can help us, and on a positive note, I’ve seen applications in the USA with different counties and cities and governments around the United States where they used to support one or two languages and have now come together to support 140, 150 languages. And those same applications can be scaled to work in a wide variety of different communities. For example, in the Washington, D.C. area, Montgomery County is a great example. Its application is called Monty, M-O-N-T-Y. It’s a great application. It’s in the Washington, D.C. area in the United States, and it serves communities, people from all over the world who live in that area, who now have access to over 100 applications in their own language. So I think AI can help us with that, and it can actually be part of the solution to make solutions that are available, maybe in English, available in multiple languages around the world. So the ability to reuse applications, to learn from others, is a big part of this solution. And being able to not reinvent the wheel, if you will, but actually partner and say, OK, this government in the U.S. and this government in Europe are doing this really successful application; how can we apply that in developing countries?
Harisa Shahid: Exactly. Very well. Hi, sir. Are you still speaking? We are unable to hear you. Oh, I’m so sorry. Actually, I switched my channel. Apologies for that. So okay, moving forward, I have a similar question for you, Mr. Hafiz Farooq: how can multiple stakeholders work together on developing a global or a regional framework? Because some regions do have a framework for cybersecurity and things like that, but if I talk about some developing countries, like my country, Pakistan, we don’t have a framework specifically for cybersecurity or information security. So how can multiple stakeholders work towards developing a global or a regional framework for incorporating AI into critical infrastructure security?
Hafiz Muhammad Farooq: Thank you, Harisa, for another great question. I generally agree with what Dan said: there has to be a global standard, first of all. This has been the year of frameworks and legislation, which was very good for the cybersecurity industry, because we have seen many standards and many pieces of legislation coming out. I will give you a few examples. Singapore recently released its Cybersecurity Master Plan 2024, which deals with critical infrastructures. Similarly, Hong Kong passed, for the first time, a bill for the protection of critical infrastructure — that is another example. The USA has likewise revamped its cybersecurity strategy to address threats to critical infrastructure. So this shows how the cybersecurity industry is moving towards legislation, frameworks, and standardization. Most of you might also be aware that the European Union recently enforced the NIS2 Directive and the AI Act, which is very promising as well. So things were really positive in 2024, but I think the missing part is critical infrastructure legislation of a specific kind: all the legislation I am talking about does not cover the use of AI for the cybersecurity of critical infrastructures. That is the missing piece right now. How do we address that? As Dan said, it has to be global first; I don’t think a regional approach alone is going to help us. First of all, developing countries and the technology giants need to sit together and work on a global framework, and then the regional frameworks should follow it. A single country like Pakistan, or even Saudi Arabia, can’t handle the bigger spectrum of cybersecurity threats alone; they need to work in collaboration. The UN, for example, is a good forum; the ITU is a good forum.
They should take the lead and actually standardize the use of AI for the cybersecurity of critical infrastructure. I think more research and development, and more collaboration, are required for the time being to understand how AI is going to be used for the protection of our infrastructures. So there is still more work to do in the years to come, and I hope we move fast: as the attackers start using AI, we, the defenders, should be using AI as well to protect our infrastructure. I hope I answered your question. Thank you.
Harisa Shahid: Definitely. When we talk about AI, there come some ethical considerations and some other security issues as well, because AI has its own concerns. So, moving on to Mr. Jacco, I have a question for you: how can governments in developing countries effectively balance the need for AI-driven security with privacy concerns and ethical considerations?
Jacco-Pepijn Baljet: Thank you, Harisa, and thank you, Hafiz, also for mentioning the need for standards. To start off with your question about security and privacy and ethical considerations — which is a great and very relevant question — I would say that there is not really a dichotomy between the two; you really need both at the same time, and usually more privacy or more ethical consideration does not mean less security. The two go hand in hand, of course. I would also say that when you talk about cybersecurity, the basic, general principles will be the same whether you use AI or not. The big difference is that AI enlarges many things and makes them much more impactful, both positively and negatively: you can defend better against cyberattacks with AI, but the privacy risks, and the risk of false data being used to train models, are also much bigger. So that does require international cooperation — the EU AI Act, with its risk-based approach to AI systems, was already mentioned. I think the best way to approach this, and here I will continue with Hafiz’s thread on international standards and cooperation, is to think about both what we have in common universally — universal human rights, universal standards that are already there at the UN level and, basically, globally — and, next to that, the local context where everything is happening. That context is different in Pakistan than in the Netherlands, and different again in Saudi Arabia, and I think we need to take that into account too. We have seen that in the Global Digital Compact as well: when we talk about ethical considerations, there was a push to include them in the Global Digital Compact.
So I think the best approach is to have a high-level, broader agreement on the general principles — and I agree with Hafiz that that can be at the UN — and then, on a more regional or local level, more specific legislation, including specific critical infrastructure legislation. You can then look at the UN level and say: OK, we agreed on these general principles for the protection of privacy, and for the protection of security as well, and we base our national legislation on them. Next to that, because stakeholder cooperation is important, you can cooperate locally but also internationally. We have many international standards organizations. It is, of course, always a challenge and a huge investment to actually engage in standards organizations — at the IETF, at ISO, or elsewhere — but these are platforms where you can engage with the big technology companies, with the suppliers, and with civil society. So I think it is important to also work there, on a technical level, with a multi-stakeholder approach — one that is in principle open to everyone, though its inclusivity can still be improved — to start on the standards for incorporating AI in the cybersecurity field. Thank you.
Harisa Shahid: Exactly. And with that, we are coming to the end of our session. To conclude the points we have highlighted: skill building is very important, and for skill building — exactly the main point every speaker has highlighted here — collaboration between all the stakeholders is crucial to advancing AI and cybersecurity and to raising awareness about the use of AI. Because right now AI is not being used much for the protection of our critical infrastructure, that awareness is very important. Thank you so much to all of our speakers for joining us today. Now we are moving to the Q&A session, so I would like to invite the audience: if you have any questions, please feel free to ask.
Muhamad Umair Ali: Just to chip in here: besides the on-site audience, we also have a question from the online audience. I think we can proceed with the on-site audience first, and after that I can ask the question from the online audience in the chat box.
Harisa Shahid: yeah sure so we have one question here
Audience: Hello everyone, my name is Fernando. I am part of the Brazilian youth delegation, and I work at a network provider, so I am part of the technical sector. One thing that was presented as a problem was the lack of professionals in cybersecurity and AI, but another problem that I see is that even with long and continual cybersecurity training, most of the professionals eventually go to another country to work. So basically, my question is: how can these talents be retained in their country to continue their work?
Harisa Shahid: Yeah, that’s a very important question. So, would anyone from the panel like to ask — oh, sorry, like to answer?
Hafiz Muhammad Farooq: Yeah, I would just want to add a comment. First of all, Fernando, great question. The world is getting global, and with people moving around it is very difficult to retain talent. Companies are looking for talent: if you are sitting in Brazil and a company needs you in some other part of the world, they will hook you from there. So this challenge is there. But I think, rather than retaining talent locally, the challenge is to produce the talent. Because most of these systems, as I said before, are legacy — I am not saying that we are not professional, we are — but the systems are so old that there is not enough documentation available, and not much material where you can simulate and train yourself to see how the system is going to operate. So instead of focusing on localizing resources, we should concentrate more on the training aspect, and maybe the old legacy vendors should start redoing the documentation. My point is that the knowledge base should be increased, rather than trying to keep the resource at one particular location. Thank you.
Harisa Shahid: Thank you, Mr. Farooq. Does this answer your question? Okay, perfect. So, do we have any more questions?
Muhamad Umair Ali: I do have one from the
Harisa Shahid: We have one from the on-site participant.
Muhamad Umair Ali: I’m sorry, on-site or online?
Audience: My name is Thuy. I’m from the .vn directory; we are from the technical community. We are talking about promoting the use of AI to protect our critical infrastructures, so I have a question: what infrastructure do you think will be in the scope of critical infrastructure? Thank you.
Harisa Shahid: So your question is, basically, what infrastructure do we think is the critical infrastructure, right?
Audience: Yeah. My question for the panelists is that, what can you name some infrastructure that will be in the scope of critical infrastructure that should be protected by AI promotion here? Thank you.
Harisa Shahid: Thank you so much. So anyone from the panel would like to answer the question?
Daniel Lohrmann: I can start. Certainly, from the US perspective, we have dedicated sectors that are listed — depending on whom you talk to, 16 or 17 sectors. So everything from the utilities — water, power — to finance: banks, insurance companies, et cetera; government sectors at the state, local, and federal levels, all different levels of government; and then transportation — clearly airlines, trains. So really, all of the core physical infrastructure in society. There is actually a website you can go to — just search for "critical infrastructure", certainly for the USA — and in North America there is a very defined list of what is and is not covered as critical infrastructure.
Harisa Shahid: Thank you so much, Dan. Does this answer your question? Actually, we are running out of time; if you have any more questions, you can connect with our speakers. Will that be OK? Thank you. Umair, I think we can take only one more question, from the online participants?
Muhamad Umair Ali: Yes, we can take one question. It’s for Dan. The question is from Ankita Rathi, and the question is, can you then please elaborate on the specific resilience strategies that the organizations should develop to recover from AI-driven attacks?
Daniel Lohrmann: Absolutely. There are a number of things people can think about. On threat intelligence: invest in AI-powered threat intelligence to detect and predict emerging attack patterns — basically, fight AI with AI. Cyber attacks are moving faster than ever, so the mentality is almost that you need to fight fire with fire. You can also do red teaming and simulated attacks, hold tabletop exercises, and use AI-augmented defense tools that allow you to respond very quickly. First of all, you need to know about these attacks that are happening and be able to respond very quickly. But I think overall, you really start with a good resilience strategy. Resilience is a very popular word in the US cybersecurity community right now — I think globally, it’s a hot word. You need to have a comprehensive incident response plan. If your critical infrastructure is attacked — whether that be the water, the utilities, or the banks — you need to be aware of it, you need to be able to detect it, and you need to have all parts of your organization able to respond: not just from a technology perspective, but people, process, and technology. That means communication. If your bank was hit, if your utility or water supply was hit, everyone needs to know — from the business side of things to your clients and your customers — what steps you are going to take and how you are going to respond quickly. So once you detect that, being able to respond and recover quickly, in a resilient way, is really, really key, especially with the ransomware attacks we are seeing around the world right now. So hopefully that’s a short answer to a much longer question.
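Dan’s "fight AI with AI" point rests on automated detection feeding rapid response. The session did not prescribe any implementation, but as a purely illustrative, stdlib-only sketch (all numbers and thresholds invented), flagging a sudden spike in security-event volume with a rolling z-score might look like this:

```python
from statistics import mean, stdev

def detect_anomalies(event_counts, window=10, threshold=3.0):
    """Flag time buckets whose event count deviates more than
    `threshold` standard deviations from the trailing window mean."""
    anomalies = []
    for i in range(window, len(event_counts)):
        history = event_counts[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(event_counts[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Synthetic telemetry: roughly 100 events/minute, with a spike at index 15
counts = [100, 98, 103, 99, 101, 97, 102, 100, 99, 101,
          100, 98, 102, 99, 101, 450, 100, 99]
print(detect_anomalies(counts))  # → [15]
```

Real AI-driven detection would use far richer models, but the design point survives even in this toy: the detector surfaces the anomaly in the same time bucket it occurs, which is what makes the rapid response Dan describes possible.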
Muhamad Umair Ali: Yes, that was quite helpful. Thank you so much for that, Dan. I think that brings us towards the closure of the session. Any concluding remarks, Harisa? Or any photographic sessions?
Harisa Shahid: Yeah, I think we should have a photograph with all the online speakers and on-site speakers. So can you all please turn on your cameras? Jenna, can you do that? Should I stop sharing the screen?
Muhamad Umair Ali: I think it’s occupying quite a good part of the screen.
Harisa Shahid: Yeah, yeah, yeah, you can.
Hafiz Muhammad Farooq
Speech speed
154 words per minute
Speech length
1267 words
Speech time
493 seconds
Legacy infrastructure creates vulnerabilities
Explanation
Developing countries often use outdated systems and technologies due to budget constraints. These legacy systems have vulnerabilities and lack modern security features, creating a large attack surface for cybercriminals.
Evidence
Companies in developing countries don’t have huge budgets to upgrade their security infrastructures.
Major Discussion Point
Challenges in Securing Critical Infrastructure in Developing Countries
Lack of cybersecurity expertise, especially for industrial control systems
Explanation
There is a global shortage of cybersecurity experts, particularly in the domain of industrial control systems. This lack of expertise is more pronounced in developing countries, making it difficult to secure critical infrastructure.
Evidence
Many security experts know about TCP protocol, but few have in-depth knowledge of ICS protocols like Modbus TCP.
Major Discussion Point
Challenges in Securing Critical Infrastructure in Developing Countries
Agreed with
Jenna Fung
Daniel Lohrmann
Agreed on
Need for capacity building in developing countries
Digital transformation without adequate security measures
Explanation
Companies in developing countries often invest in digital transformation without allocating sufficient resources for cybersecurity. This creates vulnerabilities in the newly deployed digital infrastructure.
Evidence
People spend money on digital transformation but don’t give attention to spending money on securing the digital infrastructure.
Major Discussion Point
Challenges in Securing Critical Infrastructure in Developing Countries
AI can augment threat detection and response capabilities
Explanation
Artificial Intelligence can significantly enhance threat detection and response in critical infrastructure security. AI algorithms can analyze real-time data from infrastructure to identify anomalies and potential security threats.
Evidence
AI algorithms can tap in and make life easy for you, especially in companies with massive infrastructure like Saudi Aramco.
Major Discussion Point
Leveraging AI for Critical Infrastructure Security
AI enables automated analysis of large-scale infrastructure data
Explanation
AI can process and analyze vast amounts of data from critical infrastructure in real-time. This capability is particularly valuable for large organizations with extensive infrastructure that would be impossible to monitor manually.
Evidence
Saudi Aramco has millions of assets scattered across the world, requiring AI algorithms to analyze events instead of relying solely on human SOC analysts.
Major Discussion Point
Leveraging AI for Critical Infrastructure Security
Develop global standards and frameworks for AI in critical infrastructure
Explanation
There is a need for global standards and frameworks specifically addressing the use of AI in critical infrastructure security. These standards should be developed through collaboration between developing countries and technology leaders.
Evidence
Examples of recent cybersecurity legislation and frameworks in various countries, such as Singapore’s Cybersecurity Master Plan 2024 and the EU’s NIS2 directive and AI Act.
Major Discussion Point
International Cooperation for Critical Infrastructure Security
Agreed with
Jacco-Pepijn Baljet
Daniel Lohrmann
Gyan Prakash Tripathi
Agreed on
Importance of international collaboration
Jacco-Pepijn Baljet
Speech speed
151 words per minute
Speech length
1216 words
Speech time
480 seconds
Limited resources and budget constraints
Explanation
Developing countries often face challenges in allocating sufficient human and financial resources for cybersecurity. This limitation makes it difficult to implement comprehensive security measures for critical infrastructure.
Major Discussion Point
Challenges in Securing Critical Infrastructure in Developing Countries
Exchange best practices and lessons learned through international forums
Explanation
International partnerships provide opportunities to share best practices and experiences in critical infrastructure security. Various forums and organizations facilitate this knowledge exchange between countries and stakeholders.
Evidence
Examples of international mechanisms include the Internet Governance Forum, AI for Good Summit, Global Forum on Cyber Expertise, and Digital Cooperation Organization.
Major Discussion Point
International Cooperation for Critical Infrastructure Security
Agreed with
Daniel Lohrmann
Gyan Prakash Tripathi
Hafiz Muhammad Farooq
Agreed on
Importance of international collaboration
Balance global principles with local context in policy development
Explanation
Effective policies for AI-driven security in critical infrastructure should consider both universal principles and local contexts. This approach ensures that global standards are applied while addressing specific regional needs and challenges.
Evidence
The EU AI Act and its risk-based approach to AI systems was mentioned as an example of balancing global principles with local implementation.
Major Discussion Point
Balancing Security, Privacy and Ethics in AI-driven Security
Jenna Fung
Speech speed
133 words per minute
Speech length
629 words
Speech time
282 seconds
Knowledge gaps due to less exposure to new technologies
Explanation
Developing countries often have limited access to information and resources related to new technologies. This lack of exposure creates knowledge gaps, making populations more vulnerable to cyber threats.
Evidence
People in developing countries might have less resources or opportunity to be exposed to information or resources to learn more about new technologies.
Major Discussion Point
Challenges in Securing Critical Infrastructure in Developing Countries
Develop national strategies for tailored capacity building
Explanation
Countries should create customized national strategies for capacity building in AI and cybersecurity. These strategies should address the specific needs of different groups, including government officials, civil servants, and the general public.
Major Discussion Point
Strategies for Capacity Building in Developing Countries
Agreed with
Daniel Lohrmann
Hafiz Muhammad Farooq
Agreed on
Need for capacity building in developing countries
Leverage online resources and regional educational opportunities
Explanation
Individuals in developing countries can use online resources and regional educational programs to enhance their knowledge of AI and cybersecurity. This approach can help overcome resource limitations at the national level.
Evidence
NGOs or organizations within regions may provide educational opportunities. Large corporations also offer skill training and micro-credentials.
Major Discussion Point
Strategies for Capacity Building in Developing Countries
Agreed with
Daniel Lohrmann
Hafiz Muhammad Farooq
Agreed on
Need for capacity building in developing countries
Daniel Lohrmann
Speech speed
151 words per minute
Speech length
1958 words
Speech time
774 seconds
AI systems themselves face risks like data poisoning and adversarial attacks
Explanation
AI systems used for critical infrastructure security are vulnerable to various types of attacks. These include data poisoning, privacy attacks, adversarial attacks, and model theft, which can compromise the effectiveness and reliability of AI-driven security measures.
Evidence
Examples include data poisoning attacks that can cause an AI system managing a power grid to misclassify threats, and adversarial attacks where specifically crafted data deceives AI systems.
Major Discussion Point
Leveraging AI for Critical Infrastructure Security
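The data-poisoning example cited above can be made concrete with a toy, stdlib-only sketch (all data invented, not from the session): flooding the training set with attack-pattern samples mislabeled "normal" drags a nearest-centroid detector’s notion of normal toward the attack cluster, so a real attack is no longer flagged.

```python
# Toy label-flipping data-poisoning demo against a nearest-centroid
# detector over 2-D "traffic feature" vectors (all values invented).

def centroid(points):
    # Component-wise mean of a list of equal-length tuples.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(sample, centroids):
    # Assign the label whose centroid is closest (squared Euclidean distance).
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

clean = {"normal": [(1, 1), (1, 2), (2, 1)],
         "attack": [(8, 8), (9, 8), (8, 9)]}
centroids = {lbl: centroid(pts) for lbl, pts in clean.items()}
print(classify((8, 9), centroids))   # → "attack": correctly flagged

# Poisoning: attacker injects attack-like points mislabeled "normal",
# pulling the "normal" centroid into the attack cluster.
poisoned = {"normal": clean["normal"] + [(8, 9)] * 50,
            "attack": clean["attack"]}
centroids_p = {lbl: centroid(pts) for lbl, pts in poisoned.items()}
print(classify((8, 9), centroids_p))  # → "normal": attack now missed
```

A production AI system managing a power grid is vastly more complex, but the failure mode is the same one described above: corrupted training data silently shifts the decision boundary, which is why data governance over training pipelines matters.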
AI can help overcome language barriers in security applications
Explanation
AI technologies can facilitate multilingual support in security applications. This capability allows for broader access to security information and resources across diverse populations.
Evidence
Example of the Monty app in Montgomery County, in the Washington, D.C. area, which supports over 100 languages using AI technology.
Major Discussion Point
Leveraging AI for Critical Infrastructure Security
Establish public-private partnerships for knowledge transfer
Explanation
Collaboration between private sector companies, governments, and civil society is crucial for developing and deploying AI-powered security solutions. These partnerships can facilitate knowledge transfer and ensure solutions are affordable and accessible to developing countries.
Major Discussion Point
Strategies for Capacity Building in Developing Countries
Agreed with
Jacco-Pepijn Baljet
Gyan Prakash Tripathi
Hafiz Muhammad Farooq
Agreed on
Importance of international collaboration
Implement tiered pricing models for AI security solutions
Explanation
To make AI-powered security solutions more accessible to developing countries, companies should consider implementing tiered or subsidized pricing models. This approach ensures affordability for low-income regions while maintaining the quality of security solutions.
Evidence
Comparison to tiered pricing models used in other industries, such as pharmaceutical pricing for different parts of the world.
Major Discussion Point
Strategies for Capacity Building in Developing Countries
Agreed with
Jenna Fung
Hafiz Muhammad Farooq
Agreed on
Need for capacity building in developing countries
Implement robust data governance and secure model development practices
Explanation
To mitigate risks associated with AI-driven security systems, organizations should implement strong data governance models and secure development practices. This includes techniques like differential privacy to prevent data poisoning and privacy attacks.
Major Discussion Point
Balancing Security, Privacy and Ethics in AI-driven Security
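The differential-privacy technique mentioned in this point can be sketched minimally (illustrative only; the function name, clipping bounds, and data are invented, and real deployments use audited DP libraries rather than hand-rolled noise): the Laplace mechanism clips each value, then adds noise calibrated to the sensitivity of the sum divided by the privacy budget epsilon before releasing an aggregate.

```python
import random

def dp_mean(values, lower, upper, epsilon):
    """Toy differentially private mean via the Laplace mechanism.

    Values are clipped to [lower, upper], so the sum's sensitivity is
    (upper - lower); Laplace noise with scale sensitivity/epsilon is
    added to the sum before dividing by the count.
    """
    clipped = [min(max(v, lower), upper) for v in values]
    sensitivity = upper - lower
    # Difference of two independent Exp(rate) draws is Laplace(0, 1/rate).
    rate = epsilon / sensitivity
    noise = random.expovariate(rate) - random.expovariate(rate)
    return (sum(clipped) + noise) / len(values)

readings = [99.0, 101.0, 100.0, 98.0, 102.0]   # e.g. sensor telemetry
print(dp_mean(readings, lower=90.0, upper=110.0, epsilon=2.0))
```

The trade-off the point describes is visible in the epsilon parameter: a small epsilon adds heavy noise (strong privacy, weaker utility), while a large epsilon approaches the exact mean.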
Establish multi-stakeholder coalitions to address ethical concerns
Explanation
Addressing ethical considerations in AI-driven security requires collaboration between various stakeholders. Multi-stakeholder coalitions can help ensure that AI solutions are developed and deployed responsibly, considering diverse perspectives and concerns.
Major Discussion Point
Balancing Security, Privacy and Ethics in AI-driven Security
Gyan Prakash Tripathi
Speech speed
132 words per minute
Speech length
432 words
Speech time
196 seconds
Form regional knowledge-sharing and R&D blocks
Explanation
Developing countries should create collaborative blocks for knowledge sharing and research and development. This approach allows countries to pool resources, increase collective bargaining power, and reduce the risk of exploitative deals in technology partnerships.
Evidence
Suggestions include collaboration with geographically proximate countries facing similar challenges or forming issue- or interest-based blocks.
Major Discussion Point
International Cooperation for Critical Infrastructure Security
Agreed with
Jacco-Pepijn Baljet
Daniel Lohrmann
Hafiz Muhammad Farooq
Agreed on
Importance of international collaboration
Develop legal frameworks mandating transparency and human rights due diligence
Explanation
Developing countries should establish robust domestic legal frameworks that require transparency and human rights due diligence from technology suppliers. These frameworks should include provisions for technology transfer, skills development, and long-term support arrangements.
Evidence
Suggestions include enacting legislation with clear obligations, stringent contractual terms, and mandatory training for local engineers.
Major Discussion Point
Balancing Security, Privacy and Ethics in AI-driven Security
Agreements
Agreement Points
Importance of international collaboration
Jacco-Pepijn Baljet
Daniel Lohrmann
Gyan Prakash Tripathi
Hafiz Muhammad Farooq
Exchange best practices and lessons learned through international forums
Establish public-private partnerships for knowledge transfer
Form regional knowledge-sharing and R&D blocks
Develop global standards and frameworks for AI in critical infrastructure
Speakers agreed on the crucial role of international collaboration in addressing cybersecurity challenges for critical infrastructure, emphasizing knowledge sharing, partnerships, and global standards development.
Need for capacity building in developing countries
Jenna Fung
Daniel Lohrmann
Hafiz Muhammad Farooq
Develop national strategies for tailored capacity building
Leverage online resources and regional educational opportunities
Implement tiered pricing models for AI security solutions
Lack of cybersecurity expertise, especially for industrial control systems
Speakers concurred on the importance of capacity building in developing countries, suggesting various strategies to address knowledge gaps and resource limitations in cybersecurity and AI.
Similar Viewpoints
Both speakers highlighted the challenges faced by developing countries in securing critical infrastructure due to resource limitations and outdated systems.
Hafiz Muhammad Farooq
Jacco-Pepijn Baljet
Legacy infrastructure creates vulnerabilities
Limited resources and budget constraints
Both speakers emphasized the need for strong governance frameworks and practices to ensure responsible development and deployment of AI-driven security solutions.
Daniel Lohrmann
Gyan Prakash Tripathi
Implement robust data governance and secure model development practices
Develop legal frameworks mandating transparency and human rights due diligence
Unexpected Consensus
AI as both a solution and a potential risk
Hafiz Muhammad Farooq
Daniel Lohrmann
AI can augment threat detection and response capabilities
AI systems themselves face risks like data poisoning and adversarial attacks
While speakers generally viewed AI positively for enhancing cybersecurity, there was an unexpected consensus on acknowledging the potential risks associated with AI systems themselves, highlighting the need for a balanced approach in AI implementation.
Overall Assessment
Summary
The speakers largely agreed on the importance of international collaboration, capacity building, and the need for robust governance frameworks in addressing cybersecurity challenges for critical infrastructure in developing countries. There was also a shared recognition of both the potential benefits and risks associated with AI in cybersecurity.
Consensus level
High level of consensus among speakers, suggesting a strong foundation for developing comprehensive strategies to enhance critical infrastructure security in developing countries using AI. This consensus implies that future initiatives in this area are likely to focus on collaborative efforts, knowledge sharing, and capacity building, while also addressing the ethical and security concerns associated with AI implementation.
Differences
Different Viewpoints
Approach to retaining cybersecurity talent
Hafiz Muhammad Farooq
Fernando (audience member)
Instead of retaining the talent locally, the challenge is to produce the talent.
How to retain these talents in their country to continue their work
While Fernando raised concerns about retaining cybersecurity professionals in their home countries, Hafiz Muhammad Farooq suggested focusing on producing more talent rather than retention.
Unexpected Differences
Overall Assessment
Summary
The main areas of disagreement were relatively minor and focused on different approaches to addressing similar challenges in cybersecurity and AI implementation in developing countries.
Difference level
The level of disagreement among the speakers was low. Most speakers presented complementary perspectives on the challenges and solutions for implementing AI in critical infrastructure security for developing countries. This low level of disagreement suggests a general consensus on the importance of international cooperation, capacity building, and addressing resource constraints in developing countries.
Partial Agreements
Both speakers agreed on the need for international collaboration, but Jacco-Pepijn Baljet emphasized sharing best practices through existing forums, while Hafiz Muhammad Farooq focused on developing new global standards specifically for AI in critical infrastructure.
Jacco-Pepijn Baljet
Hafiz Muhammad Farooq
Exchange best practices and lessons learned through international forums
Develop global standards and frameworks for AI in critical infrastructure
Both speakers agreed on the importance of knowledge transfer, but Jenna Fung emphasized individual-led learning through online resources, while Daniel Lohrmann focused on establishing formal public-private partnerships.
Jenna Fung
Daniel Lohrmann
Leverage online resources and regional educational opportunities
Establish public-private partnerships for knowledge transfer
Takeaways
Key Takeaways
Developing countries face significant challenges in securing critical infrastructure, including legacy systems, lack of expertise, and resource constraints.
AI can be leveraged to enhance critical infrastructure security, particularly for threat detection and response.
Capacity building and international cooperation are crucial for improving cybersecurity in developing countries.
A multi-stakeholder approach involving governments, private sector, and civil society is necessary to develop effective AI-powered security solutions.
Balancing security needs with privacy and ethical considerations is essential when implementing AI for critical infrastructure protection.
Resolutions and Action Items
– Develop national strategies for tailored capacity building in cybersecurity and AI
– Establish public-private partnerships for knowledge transfer and technology access
– Work towards creating global standards and frameworks for AI in critical infrastructure security
– Implement tiered pricing models for AI security solutions to ensure affordability for developing countries
– Increase collaboration and knowledge sharing through international forums and regional partnerships
Unresolved Issues
– Specific methods to retain cybersecurity talent in developing countries
– Detailed strategies for balancing AI-driven security with privacy concerns in different national contexts
– Concrete steps for developing countries to reduce dependence on foreign technology while maintaining digital sovereignty
– Specific resilience strategies for organizations to recover from AI-driven attacks
Suggested Compromises
– Balance global principles with local context when developing AI and cybersecurity policies
– Adopt a risk-based approach to AI systems regulation to address both security needs and ethical concerns
– Focus on producing more cybersecurity talent locally rather than solely trying to retain existing professionals
Thought Provoking Comments
In the developing countries, especially MENA itself, the major challenge we have is in the area of critical infrastructure. I would say there are three major areas where we see there are issues. The area number one, I would say is the legacy infrastructures.
Speaker: Hafiz Muhammad Farooq
Reason: This comment provided a structured analysis of key challenges facing developing countries in securing critical infrastructure, introducing important context for the discussion.
Impact: It set the stage for exploring specific issues like outdated systems, lack of expertise, and digital transformation challenges in developing regions. This framed much of the subsequent conversation around capacity building and resource allocation.
AI-driven attacks are actually spreading and broadening and deepening the attacks against critical infrastructure worldwide.
Speaker: Daniel Lohrmann
Reason: This highlighted the urgency of the issue by emphasizing how AI is being weaponized against critical infrastructure.
Impact: It shifted the discussion to focus more on the immediate threats and the need for AI-powered defenses, rather than just the theoretical benefits of AI for security.
I think essentially, of course, like ideally, like you said, is that there should be a tailor-made national, like a tailor-made… strategies on how to do capacity building for people who implement or execute this kind of technologies in their works.
Speaker: Jenna Fung
Reason: This comment emphasized the need for localized, context-specific approaches to capacity building rather than one-size-fits-all solutions.
Impact: It prompted more discussion about how to develop effective training strategies tailored to developing countries' specific needs and constraints.
To address this, I suggest a three-pronged strategy that emphasizes legal safeguards, multi-stakeholder accountability, and capacity-building measures.
Speaker: Gyan Prakash Tripathi
Reason: This comment offered a comprehensive framework for addressing digital dependency issues in developing countries.
Impact: It broadened the conversation beyond just technical solutions to include legal, governance, and capacity-building dimensions. This multifaceted approach influenced subsequent comments about international cooperation and policy development.
I think AI can help us in that. And I think it can actually be part of the solution to make solutions that are available, maybe in English, available in multiple languages around the world.
Speaker: Daniel Lohrmann
Reason: This comment highlighted a specific, practical application of AI to address language barriers in cybersecurity education and implementation.
Impact: It shifted the tone to a more optimistic view of AI's potential to solve some of the challenges discussed earlier, particularly around accessibility and localization of resources.
Overall Assessment
These key comments shaped the discussion by progressively broadening its scope from specific technical challenges to encompass policy, governance, and capacity-building dimensions. They highlighted the complexity of securing critical infrastructure in developing countries, emphasizing the need for tailored, multifaceted approaches that leverage AI while addressing its risks. The discussion evolved from identifying problems to proposing concrete strategies for international cooperation and localized implementation, ultimately presenting a more holistic view of the challenges and potential solutions in this domain.
Follow-up Questions
How can developing countries form knowledge, R&D, and cooperative blocks to compound available resources?
Speaker: Gyan Prakash Tripathi
Explanation: This is important for developing long-term sovereignty and increasing collective bargaining power in technology partnerships.
How can AI help in making solutions available in multiple languages around the world?
Speaker: Daniel Lohrmann
Explanation: This is crucial for making AI-powered security solutions accessible and usable in diverse linguistic contexts, especially in developing countries.
How can we develop a global framework for incorporating AI in critical infrastructure security?
Speaker: Hafiz Muhammad Farooq
Explanation: A global framework is necessary to standardize the use of AI for cybersecurity of critical infrastructure across different countries and regions.
How can we improve inclusivity in international standard organizations for AI and cybersecurity?
Speaker: Jacco-Pepijn Baljet
Explanation: Improving inclusivity is important to ensure that developing countries can participate in setting global standards for AI in cybersecurity.
How can we increase the knowledge base and documentation for legacy systems in critical infrastructure?
Speaker: Hafiz Muhammad Farooq
Explanation: This is important for training new professionals and retaining expertise in managing and securing older critical infrastructure systems.
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.