AI and Cybersecurity 

7 May 2024 10:35h - 11:00h


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Full session report

Navigating the complexities of autonomous weaponry: A conversation with Ljupco Gjorgjinski

In a detailed discussion with Vladimir Radunovic, Ljupco Gjorgjinski, a senior fellow of the Diplo Foundation and former chair of the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS), delved into the complexities surrounding the development and regulation of autonomous weaponry. Gjorgjinski highlighted the public's familiarity with the concept through science fiction and the term "killer robots," but stressed that these portrayals oversimplify the issue. He explained that LAWS encompass a broad range of technologies capable of operating in various domains, including land, sea, air, space, and cyberspace, without human intervention.

The conversation touched upon the challenges of defining LAWS due to their complexity and the importance of characterizations over strict definitions. Gjorgjinski referenced the International Committee of the Red Cross's (ICRC) definition, which focuses on autonomy in critical functions such as the selection and engagement of targets. He also noted that while the group of experts operated within the framework of conventional weapons, the issue of LAWS intersects with weapons of mass destruction, and that autonomy in cyber weaponry has not yet been adequately addressed.

Radunovic raised concerns about the hackability and vulnerability of digital technologies, including autonomous systems, and the potential for misuse through social engineering. Gjorgjinski expanded on this, pointing out the multiple points of vulnerability in cyberspace, including the physical infrastructure, the network, the users, and the software. He emphasized the importance of building resilience to withstand attacks on these vulnerabilities.

The discussion also explored the evolving nature of warfare and the legal and normative frameworks that govern it. Gjorgjinski reflected on the historical context of disarmament and the challenges of defining war in the age of cyber operations and autonomous agents. He underscored the dangers of gray zones that blur the lines between peace and war, potentially leading to unintended escalations.

Radunovic questioned the application of international law in cyberspace and whether new treaties are needed to address the unique challenges posed by digital technologies and LAWS. Gjorgjinski argued that while treaties can be useful, they are not the only means to maintain peace and stability. He cautioned against rushing into specific norms that might inadvertently create exploitable gray zones.

Gjorgjinski concluded by emphasizing the ongoing process of maintaining peace and stability in the face of new technologies, advocating for a cautious and comprehensive approach to regulation that avoids the pitfalls of premature norm-setting. The dialogue underscored the need for continued deliberation and the development of norms that adapt to the complexities of modern warfare and emerging technologies.

Session transcript

Vladimir Radunovic:
Today, with us, we have Mr. Ljupco Gjorgjinski. He is a senior fellow of the Diplo Foundation, and he chaired the Group of Governmental Experts on what is known as Lethal Autonomous Weapons Systems, or LAWS, in 2019-2020. This group of governmental experts was formed under the Convention on Certain Conventional Weapons, and we'll discuss today, particularly, the aspects related to how artificial intelligence, and digital technology in general, fit with the issues of international peace and security. Ljupco, welcome, and thank you for your time and for joining us.

Ljupco Gjorgjinski:
Thank you, Vlada, for inviting me to this diplomat's sofa.

Vladimir Radunovic:
I want to start with the topic. The acronym is LAWS, Lethal Autonomous Weapons Systems, but the general population has probably not heard much about that. They have heard a lot about killer robots, as the sort of AI combined with robots, and all the fears that we might have from that. To what extent is this the same concept in terms of the problem, in terms of challenges, but also in terms of the agenda that you've been dealing with? So LAWS versus killer robots, if you can clarify what the topic is?

Ljupco Gjorgjinski:
Well, images such as that one of a killer robot, or the popular video of slaughterbots where young people are killed by drones that have an aspect of lethality in them, that can shoot and kill a human target, these kinds of images are powerful. We've grown up with a lot of science fiction. We've grown up with the Terminator image. We've grown up with all these images that have been the result of our collective fantasy. On the one hand, it's a useful image to start the conversation with, but it's also a simplification. It doesn't get to the real complexity of what we're talking about with autonomous weapons systems within the LAWS discussions. One of the issues was how to define them exactly, because of this complexity. We went around this problem by looking at characterizations, but there are other actors who have had definitions, and they are useful. The ICRC, for instance, defines an autonomous weapon system as any weapon system with autonomy in its critical functions, which means that it can select (that is, search for, detect, identify, track) and attack (that is, use force against, neutralize, damage or destroy) targets without human intervention. So we're talking about a weapon system that can be used on and under water, on land, in the air, in space, in cyberspace. It's a very, very broad issue. Now, the GGE on LAWS functions within the Convention on Certain Conventional Weapons, which is an instrument of international humanitarian law. But that also means that we deal with conventional weapons, weapons that are not weapons of mass destruction. So there is a fairly easy line between weapons of mass destruction and conventional weapons. And this is where we start to get to the higher degrees of complexity. The lower degree of complexity is about what kind of weapon it is: it can be used on land, in the air, et cetera.
We're talking about a system, so an autonomous weapon system, but this can be a system of systems. So you can see the complexity intensifying already. But then we start talking about the fact that autonomy is an attractive aspect to militaries, that it can be used not just in conventional weapons but also in weapons of mass destruction, and that there are a lot of points of tangency within these connections, whether it is nuclear stability and the offensive or defensive capacities of missiles, and using autonomy in some aspects of it, or more aspects of it. And then there's one more aspect of complexity that is not yet properly dealt with, and perhaps we can get to it at some point in this conversation, which is autonomy in cyber. In the GGE we dealt, again, with conventional weapons; then there are the weapons of mass destruction; but this aspect of autonomous software agents has not been properly touched and is something that deserves a bit more attention.

Vladimir Radunovic:
Two risks that you touched upon. One is that it's actually a digital technology, and as such, as you mentioned, it's hackable. And it's vulnerable, because as we see and discuss in the cybersecurity realm, digital products are generally quite insecure, and these vulnerabilities are being used or misused. The other risk is the humans. It's still driven, composed and put together by humans. Both of those are hackable: both the software, the code and the digital technology, and the humans, as we see from social engineering techniques. Now, is that something that is being discussed within this same discussion at the moment? Or do you see that as something that is still coming onto the agenda? And what do you see as the main questions there?

Ljupco Gjorgjinski:
You're right in mentioning both, let's say, the software agents, the software that is hackable, and the humans, but there's actually much more. When one looks at what cyberspace even is: cyberspace is the collection of the machines that are there, the network that connects the different machines, the servers, which sit in a given geography, in a given space. They're not in a cloud; they are somewhere. The people who are using them, and the software that is there. You can see all the points of vulnerability that are at stake here. It's not just the software that you can attack, or just the humans whom you can try to misdirect. It's the servers, and it's the connections. Is it this kind of a connection or that kind of a connection? Is it an internet type of connection or a radio-wave type of connection? What frequency is used? Are satellites used or not? Take out GPS and you see a lot of aspects of vulnerability. So I guess the more advanced militaries are thinking about this and building resilience, but you're right. The more networked you become, the more you add to this complexity, and in a way the more vulnerable you also become, if you don't build in aspects of resilience that would allow you to withstand an attack on one of those vulnerable points.

Vladimir Radunovic:
It'll be interesting to see how these different processes will connect in the future and what might be the point of connection, but we'll get back to that shortly. So thanks for the valuable reflections on that, Ljupco. Let me focus now on a set of open issues that I know from the cyber discussions, the cybersecurity discussions, that are high on the agenda within the Group of Governmental Experts and the Open-Ended Working Group. And I guess that some of those are quite the same in the GGE on LAWS that you chaired. So I'll just nudge you with a couple of topics and ask you for quick reflections on those. One question is the disarmament approach, the disarmament framing of this topic. In cyber discussions, you have certain governments, let's say the US sometimes, opposing the framing of these discussions as disarmament because of the enabling potential of the internet and cyber as a technology, trying instead to connect broader aspects such as human rights, the economy and so on, and avoiding framing it as disarmament. In your case, the setting is probably a little bit more towards disarmament. But I wonder whether there are any opposing positions and views among states on how to frame the discussion.

Ljupco Gjorgjinski:
Well, Vlada, disarmament is really a very noble ideal. It's one of the pillars of the United Nations. And it has come from a certain context, and the context is, obviously, the ending of World War II. During the Cold War, it was possible in some periods to really focus on one aspect and not possible in others. There have been spurts, like the development of the Biological Weapons Convention, where it was possible because the major militaries saw that this was not something they could use without fearing repercussions of their own, as well as it being seen as a really bad way of waging war. Then the Chemical Weapons Convention, and the various regimes and treaties focused on nuclear and fissile material, on nuclear non-proliferation. They have all found their window of opportunity when it was possible to develop them. But if one goes, and excuse me for perhaps stretching the framework a little, but I think it's a good way of looking at it, to the question: what is war? Now, it may seem to have a very obvious answer. We know what war is: armed conflict, people getting killed. But really, when one looks at the development of international law, we have seen the development of what constitutes the start of a war and what constitutes its end. So it has been very clear, start and end, and much of the focus for hundreds of years has been on this, really. What is war? What is a just war? What is a war that you can justify, and how do you justify it? Do you justify it based on theology, or based on the rule of law? Where do you draw the justification from? This is the jus ad bellum tradition, and it's really the platform on which international law has been built. And then we have the jus in bello, which is more or less the last 100 to 150 years, really starting with Henry Dunant and the development of international humanitarian law, the Geneva Conventions.
So once armed conflict starts, then this is allowed and this is not allowed. Civilians cannot be touched; people who are out of combat cannot be touched; the wounded, et cetera. This is what you need to do with the wounded; this is what you need to do. And this has developed: in an interstate war, or even in an intrastate conflict, this is what is allowed and this is what is not allowed. Then you have international human rights law, which says that even at the individual level every individual is a human rights holder; they have rights and responsibilities in this regard, and the state has a responsibility too. So this is really a network, a regime complex of many different norms that apply in war. When we started the nuclear age, there is a famous saying in one of the key documents of nuclear strategy of the time: until that point, the aim of states was to win wars; now the aim of states is to prevent war. And obviously they're talking mostly about major-power conflict, and these major powers have nuclear weapons. So all of a sudden it has to be different. Now, that has allowed smaller conflicts to happen at the edges, as it were, sometimes with the help of these major powers, sometimes not. But it is very clearly defined that we don't want a major-power war, because that means nuclear war, and that can mean the end of civilization. What we're seeing right now are gray zones opening up, and this is very dangerous. And cyber here is really the best descriptor of this. Because is it a war if a state actor acts in cyber towards another state actor? At what point is it war? Is it war when it's hacking data of government agencies, even if they're secret service agencies or intelligence agencies? Is it war when national infrastructure is affected with kinetic effect, when a nuclear power plant, or even just the railway system or the water and sewage system, is hit? At what point is it war, and at what point is it not war?
So all of a sudden we have ambiguity, a blurring of the lines here. And that can lead to unplanned escalations in certain contexts, escalations that can lead to major war because of ambiguity. So let me now bring in one aspect that I said earlier has not been properly touched, which is autonomy in cyber. You have software agents that fall within the definition I gave earlier but exist fully in software, in cyberspace. If we have a connection of software plus something, a drone, a ship, this or that, then we're talking really about a robot. But if we have it purely within the cyber realm, then we're talking about autonomous software agents: self-activating, self-sufficient, persistent computation that is autonomous in its functions. Now, that can be an algorithm that can decide there is an attack coming and automate the response, or even act autonomously in that response. So we have algorithms entering this and further expanding the gray zone that we're talking about. How does the algorithm know where to stop if such a situation happens? Does it stop only after it has defeated a possible adversarial software code, or just after limiting it? Does it stop there? Does it try to go further, see where the attack comes from, and attack there? Does it attack part of its own algorithms and programs, and even systems and systems of systems, if it thinks they have been compromised? "Thinks" is perhaps a very problematic term, but at the same time there is decision-making power within this algorithm, purely within the cyber realm. There is a technique called automatic exploit generation that does exactly this: it looks for bugs and exploits them, really acting on them. We've had something called Mastermind that was already being talked about a few years back. But these autonomous intelligent agents are something that we have not properly talked about yet.
And they are within this gray zone, even of our own making. It's not discussed in the GGE, because we're talking about conventional weapons. It's not really discussed in the Open-Ended Working Group or the GGE on cyber either. I read both the report and the chair's summary of the Open-Ended Working Group. In the chair's summary, there is one mention that this issue of autonomy in cyber agents has been raised as a specific concern, but that's it. And that's because it is hard to open up, but it needs to be opened up. We need to look at those gray zones between regimes as well, between UN bodies, where we leave out some aspects, where we leave out autonomy in weapons of mass destruction, for instance; we are yet to open that. So this prism of looking at it, which is an old prism of asking what is war and what is peace, may be a useful way of starting. It's not going to be easy to open up, but it's necessary.

Vladimir Radunovic:
And as you mentioned, both of those are dual-use technologies, and probably connecting different discussions, like the GGE on LAWS, the Open-Ended Working Group or the future process on cyber, the Internet Governance Forum discussions, and many others, to look at those questions from different perspectives, both disarmament and enabling if we wish, would be the right way. Now, another question for you: international law applies in cyberspace. I guess we would all agree that it applies to digital technologies as well, and generally the governments agree on that. What they don't agree on is how it applies. And you just touched upon some very specific aspects, such as what the use of force or an armed attack means in cyber. How would Article 51 on self-defence in the UN Charter apply? So one of the questions we have in cyber is: yes, international law applies, but until we decide and agree how it actually applies, do we need to work on a separate cyber treaty or digital technology treaty that would go hand in hand with international law, so that they work together? Are there any such discussions or dilemmas when it comes to LAWS and AI generally, whether we need a new treaty or we can rely on current international law, and so on?

Ljupco Gjorgjinski:
It's important to start with an understanding of the international system and what international law really means. International law is not domestic law. As I've said earlier, in any given well-organized country, you have three specific branches of governance. You don't have that at the international level. You don't have a proper legislative body like a parliament in a given country. You don't really have a government like a national system has. You don't really have a judiciary like one has in a national system. You have some aspects of that. You have, perhaps, a General Assembly resolution, which can have a soft-law feeling to it. You can have these kinds of discussions, and they can come up with principles, and perhaps a political declaration, which is binding or not binding. You can have a Security Council resolution, which is binding, right? So that's perhaps the highest level of normative potential we have at the international level. But we don't have such a system. And when one understands that, one perhaps relaxes a little bit about the idea that this needs to be a treaty and that that's the only way of addressing it. No. What is necessary is maintaining peace and stability. That may sometimes be done through a treaty. That may sometimes be done by continued deliberations. Sometimes, within that continued deliberation, you can have a stronger norm. You can have a treaty or a convention, or even just an international norm; the concept of meaningful human control within the GGE on LAWS is an attempt to do exactly that. But sometimes you may also err on the side of caution by developing a norm. In international law you have lex specialis, which takes precedence over a more general law. So if you have a specific law or norm on something, that should take precedence over a more general law.
And there, as attractive as that idea is, there is a danger that you develop such a norm before the full maturity of the aspect that it is trying to regulate or address in some way. By doing that, you may leave parts unaddressed, which again can create a dangerous gray zone, because one can say: well, this is what's illegal, but it doesn't say that this here is illegal, so it must be legal. And this is where we can fall into traps, in thinking that we need to address this fully, just like that, and that once we have a treaty it's going to be settled. No, perhaps you miss out on some things, which creates other gray zones that can be exploited. So a final product, like a treaty or a convention, is an attractive idea, and a necessary one for many things. But sometimes, when you have such high complexity, it's not necessarily the only way of maintaining peace, stability and security, which should be the ultimate goal. And that's not so much a goal as a process that is kept up and nurtured, in order for there to be stability in the world and in order for new technologies not to be disruptive to that stability.

Vladimir Radunovic:
Thank you, Ljupco, for these great reflections.

LG

Ljupco Gjorgjinski
Speech speed: 165 words per minute
Speech length: 2840 words
Speech time: 1035 secs

VR

Vladimir Radunovic
Speech speed: 154 words per minute
Speech length: 851 words
Speech time: 332 secs