Open Forum #73 The Need for Regulating Autonomous Weapon Systems

16 Dec 2024 11:15h - 12:30h

Session at a Glance

Summary

This panel discussion focused on the challenges and risks posed by autonomous weapons systems and the urgent need for international regulation. Experts from various fields, including diplomacy, technology, academia, and civil society, debated the complexities of governing AI in military applications.

The discussion highlighted the rapid development of AI-powered weapons and the potential consequences of their unregulated use. Participants emphasized the need for a binding international treaty by 2026 to prohibit autonomous weapons that cannot comply with international humanitarian law and to regulate all other such systems. However, challenges were noted, including geopolitical tensions, the difficulty of defining meaningful human control, and the gap between technological advancements and policy-making.

Several speakers stressed the importance of a multi-stakeholder approach, involving not just diplomats and military experts, but also scientists, engineers, and civil society. The discussion touched on the complexities of AI systems, including inherent biases, limitations in testing and validation, and the potential for unintended consequences.

The global implications of autonomous weapons were highlighted, with particular concern for the disproportionate impact on the Global South. Participants called for increased capacity building and education to address the AI divide between nations. The need for public awareness and engagement was also emphasized.

While some speakers expressed optimism about reaching an international agreement, others cautioned about the difficulties in achieving consensus given the rapid pace of technological change. The discussion concluded with a call for urgent action, recognizing the “Oppenheimer moment” in AI weapons development and the need for smart, flexible regulation that can keep pace with technological advancements.

Key points

Major discussion points:

– The urgent need for regulation and governance of autonomous weapons systems and AI in military contexts

– Challenges in developing effective regulations due to rapidly evolving technology and geopolitical tensions

– The importance of a multi-stakeholder approach involving governments, industry, academia, and civil society

– Concerns about the risks of autonomous systems making complex decisions without meaningful human control

– The need for capacity building, especially in the Global South, to address the “AI divide”

Overall purpose/goal:

The discussion aimed to raise awareness about the risks posed by autonomous weapons systems and AI in military contexts, and to explore potential governance approaches and regulations to address these risks. The panelists sought to highlight the urgency of the issue while acknowledging the complexities involved.

Tone:

The overall tone was one of concern and urgency, but also pragmatism. Speakers emphasized the gravity of the risks while also acknowledging the challenges in developing effective regulations. There was a mix of cautious optimism about the potential for international cooperation and more pessimistic views about the likelihood of reaching binding agreements in the near term. The tone became somewhat more urgent towards the end as speakers emphasized the need for immediate action given that autonomous systems are already being deployed in conflicts.

Speakers

– Gregor Schusterschitz: Ambassador from Austria

– Wolfgang Kleinwächter: Moderator

– Ernst Noorman: Ambassador from the Netherlands, Chair of the GGE on LAWS

– Vint Cerf: Internet pioneer

– Jimena Viveros: Commissioner on the Global Commission on Responsible AI in the Military Domain, member of various AI commissions

– Olga Cavalli: Dean of the Defense University in Argentina

– Chris Painter: Former US Cyber Ambassador

– Ram Mohan: Chief Strategy Officer of Identity Digital, former ICANN board member

– Kevin Whelan: Head of UN Office for Amnesty International

Additional speakers:

– Milton Mueller: From Georgia Tech (online participant)

– Hiram: From Encode Justice, part of Stop Killer Robots Coalition (audience member)

– Artem Kruzhulin: Panelist on earlier panel about public and private sector cooperation (audience member)

– Kunle Olorundari: President of Internet Society Nigerian chapter, researcher (audience member)

– Raida Lindsay: Local digital policy expert (audience member)

Full session report

Expanded Summary of Panel Discussion on Autonomous Weapons Systems and AI in Military Contexts

Introduction

This panel discussion brought together experts from diplomacy, technology, academia, and civil society to debate the challenges and risks posed by autonomous weapons systems and the urgent need for international regulation. The conversation highlighted the rapid development of AI-powered weapons and the potential consequences of their unregulated use, emphasizing the need for a binding international treaty by 2026.

Key Discussion Points

1. Urgency of Regulation

There was strong agreement among panelists on the pressing need to regulate autonomous weapons systems. Ambassador Gregor Schusterschitz from Austria called for binding rules and limits by 2026, mentioning the Vienna Conference and recent UN General Assembly resolutions on the topic. Kevin Whelan from Amnesty International pointed out that existing autonomous systems are already being deployed in conflicts, underscoring the immediacy of the issue.

Ernst Noorman, Chair of the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE LAWS), provided insights into the GGE process, noting the participation of 127 countries and its inclusive nature. He warned that the fast pace of development is closing the window for preventive regulation.

Jimena Viveros, Commissioner on the Global Commission on Responsible AI in the Military Domain, stressed the importance of moving from discussions to negotiations. This sense of urgency was tempered by acknowledgement of the challenges involved, with Chris Painter, former US Cyber Ambassador, noting that rapid technological evolution is outpacing regulatory efforts.

2. Challenges in Regulation and Technical Limitations

Several speakers highlighted the significant technical and geopolitical challenges in regulating AI and autonomous weapons. Ram Mohan, Chief Strategy Officer of Identity Digital, emphasized the difficulty in creating unbiased and accurate AI systems, pointing out the limitations of current software engineering methods for AI. He argued that the concept of a zero-defect AI system, while appealing, faces inherent limitations, and noted the relative ease with which AI systems can be jailbroken through prompt engineering.

Ernst Noorman noted that geopolitical tensions and mistrust are hindering progress on international agreements. The rapid evolution of technology was seen as a major obstacle, with Chris Painter highlighting how it outpaces regulatory efforts.

3. Multi-stakeholder Approach and Capacity Building

There was broad consensus on the need for a multi-stakeholder approach to governance. Ambassador Schusterschitz emphasized the importance of involving diplomats, military personnel, academia, industry, and civil society in discussions. Olga Cavalli, Dean of the Defense University in Argentina, stressed the value of including technical experts in these conversations and highlighted the challenges of training in developing economies and the high demand for cyber defense education.

Kevin Whelan highlighted the potential of the UN General Assembly process for broader engagement, while Jimena Viveros called for new governance models suited to AI challenges. This multi-faceted approach was seen as crucial for addressing the complex issues surrounding autonomous weapons systems.

4. Human Control, Accountability, and Ethical Concerns

Maintaining meaningful human control over the use of force emerged as a key concern. Kevin Whelan argued that the use of autonomous weapon systems in law enforcement contexts would be inherently unlawful and dehumanizing, as international law and standards governing the use of force rely on nuanced human judgment. Ram Mohan highlighted the challenges of human oversight with complex AI systems.

Vint Cerf, internet pioneer, provided crucial insights on the differences between AI and nuclear deterrence. He emphasized that unlike nuclear weapons, AI systems are not easily contained and can propagate in unexpected ways. Cerf stressed the importance of clear lines of accountability and the need for standardization and binding agreements to address these challenges.

5. Role of Private Companies and Current Deployments

The discussion touched on the role of private companies in deploying AI systems in current conflicts, as mentioned by audience members. This raised concerns about the lack of oversight and potential consequences of commercial entities driving the development and use of autonomous weapons systems.

Unique Perspectives and Thought-Provoking Comments

Jimena Viveros offered a thought-provoking comparison between AI and nuclear weapons, noting that AI presents a fundamentally different challenge. Unlike nuclear weapons, which were immediately classified and rarely used due to mutual assured destruction, AI technology is being widely developed and deployed without a collective understanding of its potential consequences.

Ram Mohan provided crucial technical insights, explaining that current software engineering methods for testing, quality assurance, and validation are insufficient for AI systems. This perspective highlighted the inherent challenges in developing reliable AI systems for weapons and deepened the conversation about the limitations of human control over AI.

Conclusion and Next Steps

The discussion concluded with a call for urgent action, recognizing the current critical juncture in AI weapons development. Key takeaways included:

1. The need for a legally binding instrument to prohibit autonomous weapons systems that cannot comply with international law.

2. The importance of briefing countries on developments in the GGE LAWS process.

3. The need to develop risk management frameworks for machine learning systems.

4. A call for smart and targeted regulation that keeps pace with technological development.

Unresolved issues include how to effectively regulate rapidly evolving AI technology, ensure meaningful human control over complex AI systems, and address the capacity gap between developed and developing countries in AI governance. Suggestions from audience members included focusing on regulating specific use cases of AI in weapons, such as digital triggers, rather than AI itself.

The discussion highlighted the complexity of the challenges posed by autonomous weapons systems and AI in military contexts, emphasizing the need for continued dialogue, multidisciplinary approaches, and urgent international cooperation to address these critical issues. The panel stressed the importance of including more technical experts in diplomatic discussions and of developing flexible regulations that can adapt to future technological developments, balancing the need for regulation against the desire not to hinder beneficial AI innovation.

Session Transcript

Gregor Schusterschitz: But moving to a negotiation mandate to work out the details has not yet been possible. Geopolitical tensions, mistrust among states, and a potentially flawed confidence in technological solutions hinder progress, despite the urgency of the issue, given the fast pace of development in the era of autonomous weapons systems and the preventive window for regulation closing soon. This is why Austria has been engaging actively. We hosted the Vienna Conference Humanity at the Crossroads in April this year and tabled two resolutions in the UN General Assembly on autonomous weapons systems that enjoyed the support of an overwhelming majority of states. The global discourse on autonomous weapons systems should not be limited to a constituency of diplomats and military experts only. The issue has broad implications for human rights, human security and development, and thus concerns all regions and all people. From an Austrian perspective, a multi-stakeholder approach to this critical issue is therefore important. We welcome the contributions of science and academia, the tech sector and industry, and broader civil society. And in this vein, I hope that today's discussion will further stimulate such a multi-stakeholder discourse. For Austria, there is urgency to finally move from discussions to negotiations on binding rules and limits on autonomous weapons systems. I look very much forward to today's discussion. Thank you.

Wolfgang Kleinwächter: I thank Ambassador Schusterschitz for the opening remarks. And now we move to the panel discussion. I think we have an excellent panel here. We have another ambassador, Ernst Noorman, from the Netherlands. We have a former U.S. cyber ambassador, Chris Painter. He will be online. We have Olga Cavalli, who is the dean of the Defense University in Argentina. We have Jimena Viveros, who is a member of the Commission on Responsible Artificial Intelligence in the Military Domain. She is from Mexico. And we have Kevin Whelan, the head of the UN Office for Amnesty International. And we have Ram Mohan, who is the Chief Strategy Officer of Identity Digital and a former ICANN board member. So this is really a multi-stakeholder setting here. We have experts from government, from business, from civil society. And we know that for nearly 10 years there have already been negotiations in the GGE on LAWS, which have produced some minor results and a final document. Ambassador Noorman from the Netherlands is now the chair of the GGE on LAWS, and I would propose that he starts by giving us a good overview of where we are in the process. Mr. Ambassador, you have the floor and five minutes.

Ernst Noorman: Thank you very much. Can you hear me? Okay. Well, thank you, first of all, for inviting me to this important panel, giving me the floor and letting me elaborate a bit on our views on this very important topic. First, to structure my intervention, I use three circles to discuss the risks and opportunities of AI in international peace and security. The largest circle represents AI broadly, including civilian issues: a new and still developing domain that brings opportunities, but that also poses all sorts of new challenges for the international community. Within the large circle, there's a second, smaller circle. This circle is about AI in the military domain. Questions related to this circle are more specific: what are the implications of the use of AI for the way militaries operate? What kind of rules or measures do we need to make sure militaries use AI in a responsible way? Earlier this year, the Netherlands and the Republic of Korea successfully introduced a resolution on AI in the military domain in the UN First Committee. The resolution requests a report from the UN Secretary-General, providing states with a platform to exchange perspectives. The resolution was approved by a massive majority of 161 votes, with only 3 against and 13 abstentions. This resolution will initiate a dialogue independent of the multi-stakeholder REAIM process, which will continue to serve as an incubator for ideas and perspectives from other sectors. The REAIM process was an initiative, also from Korea and the Netherlands, on responsible AI in the military domain. These two processes will complement each other, working towards inclusive discussions on AI in the military domain. The third and final circle, contained within the second circle, is autonomous weapon systems. Although the issue first came up in the Human Rights Council in 2013, it was referred to the Convention on Certain Conventional Weapons (CCW), given its relevance to disarmament. The CCW has played a critical role in addressing emerging threats, including prohibitions and regulations on various weapon systems. The CCW then established a group of governmental experts on lethal autonomous weapon systems, the GGE LAWS for short, in 2016. The GGE nowadays counts 127 high contracting parties, that means 127 countries; in addition, every other country and relevant international NGOs can attend as observers, and they do so. So one can say it's a very inclusive process. My colleague, our Dutch ambassador for disarmament, Robert in den Bosch, chairs the GGE on LAWS through 2026. One of the strengths of the GGE is that it has all the large military states included. This can make discussions more difficult, but I believe that when we get to agreements on regulations and prohibitions, it will be much more effective. As a final point, it remains important to note that the group is increasingly working against time. What started as a concern for the future is today an urgent, pressing issue, as weapon systems capable of operating with limited or no human intervention are rapidly being developed and deployed on modern battlefields. It falls on the international community, on states and other stakeholders, to garner the political will to make progress on this issue. And the interest of the global community is evident, as shown by the multiple regional and international conferences and UN General Assembly resolutions, all of which highlight the growing global engagement. Coming back to the question: is this an Oppenheimer moment?
Can we learn something from the nuclear arms race? I am very wary of drawing historical parallels. The challenges we face are enormous, as these types of weapon systems have the potential to transform modern warfare. But they also differ from the nuclear domain in many ways. So I would be cautious to draw such parallels. A lot of important work is happening and we must continue to collaborate constructively to address the issue and to treat it with the urgency it demands. Thank you very much.

Wolfgang Kleinwächter: Thank you, Mr. Ambassador. And I would like to ask a question about the Oppenheimer moment. My understanding of the Oppenheimer moment is that it is also a challenge to researchers and academics to be aware of their responsibility for what they are doing. Just two days ago, we had the Nobel Prize ceremony in Stockholm, where the winner of the Nobel Prize for physics, Geoffrey Hinton, also raised concerns and said, you know, this can bring a moment where we are really at risk. And insofar, you know, we should not draw parallels that do not work, but we could be aware of risks and cycles; sometimes we come back, on a higher level, to a situation we have been in already. And I was just informed that meanwhile Vint Cerf, who was expected to give some opening remarks as well, is now online. And I'm very happy, Vint, that you are able to make it. I think it's very early in the morning in the United States. You have the floor now. Thank you very much.

Vint Cerf: You’re very kind. Thank you so much. As it happens, my day began at 1 o’clock this morning in Washington, D.C., so I’ve been up for a while. My previous session didn’t end timely, and I thrashed around for a while before I got to this one, so I apologize for my delay. Let me just add a little bit to what has already been discussed. First of all, some of you know about an organization called the Ditchley Foundation. It’s a US-UK organization. And among the various things that it convenes are discussions on important policy, like this one, a concern for autonomous weapons. We spent a day and a half looking at the nuclear deterrent practices and tried to ask whether they would inform any of our practices with regard to cybersecurity. And the conclusion was that the two are quite different, just as the previous speaker pointed out. For one thing, proliferation has already happened. AI is essentially everywhere. And to make matters more complicated, AI is not necessarily very reliable. And my biggest worry about trying to establish policy with regard to autonomous weapons or other potentially hazardous uses of AI is that we don’t yet know how to contain artificial intelligent agents to prevent them from executing functions that might turn out to be a considerable hazard. And so while we can try to establish policy and objectives to achieve that limitation, I think the previous speaker implied that there was a great deal of work to be done in the technical community to establish bounds on the behavior of these autonomous agents. So I think that we can’t really succeed in making policy unless we also have the technology available to enforce it. Therefore, there’s still a lot of work to be done. That’s as much as I think I need to disturb you with this morning, but thank you so much for the opportunity to intervene.

Wolfgang Kleinwächter: Thank you very much, Vint. And I hope you can stay with us and continue the discussion, because our next speaker is also an expert in this field and is a member of various commissions. She is now a Commissioner on the Global Commission on Responsible AI in the Military Domain, which is also an initiative that came out of the Netherlands. She was also involved in the United Nations Secretary-General's High-Level Advisory Body on AI (HLAB), and she's working with the OECD as an AI expert. And I'm very happy that we have Jimena Viveros from Mexico. Jimena, could you comment on what we have heard already, and explain what you are doing in this commission?

Jimena Viveros: Hello. I hope you can all hear me. Perfect. Well, first of all, I would like to thank our Austrian and Dutch friends for championing such important initiatives, and also the South Koreans, who are not here, I think, but are also part of this very big international effort to put these resolutions on the table, which are very welcome. And also your work with the GGE on LAWS. So, to give a broader picture for those who are not familiar with the Global Commission on Responsible Use of AI in the Military Domain: as Wolfgang said, this was an initiative created by the Netherlands and the government of South Korea. We are a commission of, I think, 18 commissioners and around 40 experts, and we have a mandate to come up with recommendations on this by the middle or end of next year. Also, as Wolfgang mentioned, I was part of the United Nations Secretary-General's High-Level Advisory Body on AI, where we had an issue of whether or not to include the military domain in our recommendations. For those who read the report, which I hope is everyone, we did include it in the end, but it was a struggle. As to why it was included: I led the engagements and the consultations on peace and security, as I am also leading the work stream on peace and security at REAIM. And the arguments that I use and the issues that I always raise are similar, although they might seem different in context. I always say that these technologies cannot only be looked at through the military lens. That's why I call it the peace and security spectrum, because there are so many non-state actors that are using this, and even state actors which are civilian, like law enforcement or border control. With non-state actors, the immediate thought is always terrorism, but we also have organized crime and mercenaries, which are increasingly relevant in the political landscape that we're looking at right now. And it's the exact same technology that is being used. So what we need to come up with are guidelines for the development phase to have responsible innovation, because we also don't want to hinder innovation; of course there are also good applications that can come out of AI in the peace and security domain, when used responsibly and developed responsibly. But that's the key, because when we're talking about all of these governance initiatives, we always speak in very abstract terms: responsible AI, ethical AI, safe AI. The problem is that when we bring it down to the operator, the developer, the user, the consumer, no one really knows what obligations that entails. Those are the translations that we need to make, to make it operational. And we have a huge problem, which is going to be implementation, and enforcement is going to be an even bigger one. So that's why we absolutely need a binding treaty, as our Secretary-General and the ICRC called for, by 2026, with this two-tier approach, right, based on whether or not the systems can comply with IHL: those that cannot would be forbidden, and those that can could be regulated accordingly. This is extremely necessary. But then we also need a centralized authority that would have the mandate to do the oversight, as we have, for example, with the International Atomic Energy Agency.
I'm also a little bit cautious about calling this the Oppenheimer moment, because AI is a very different monster than nuclear. Nuclear, even since its origins in the splitting of the atom, was immediately weaponized, and there was this whole veil of secrecy around it, with the Manhattan Project and everything that happened for years. And then, you know, with the Cold War and the arms race and everything, no one really used it. Everyone was producing it, but no one really used it, because of mutually assured destruction. Whereas with AI, we don't really have that collective conscience yet that it will be the same. As of right now, since its origins, it has been used simultaneously in civilian and in military contexts; it had weaponized and non-weaponized uses at the same time. That makes it even harder to control. And then you have open source, which makes it even harder to control. And it's cheaper, and the resources to create it and to do harm with it are so much more accessible and less traceable than, say, a uranium plant. So that also makes it easier for non-state actors or other malicious, rogue and nefarious actors to get hold of this and to create big harm. Also, you can converge it or convert it into weapons of mass destruction, so with nuclear, chemical, bio; but swarm drones could also have the potential of being a weapon of mass destruction in themselves. And that's something that we should also keep in mind. Then you add cyberspace and all of those different types of attacks on critical infrastructure, and the whole destabilization effect that AI has in the military and peace and security domains is enormous. And a big problem that we definitely need to address and keep in mind in every single forum is the disproportionate effect that this will have on the Global South. Because these weapons are not going to be used in the Global North, against the Global North; these are normally weapons that will be affecting the Global South. And the problem is there's no capacity response as of yet to counter these types of threats. This is a big, big, big issue that we should all be mindful of. So all of these initiatives matter. Even when we're talking about the civilian ones, for example at the OECD, where we only look at civilian domains, there's a monitoring of incidents, which I think could be very useful also for the peace and security domains, because the lack of data is also a risk. And we also know that civilian data, or data that has been collected by civilian sources, is then being used by other types of military or security agencies. So that is also a very big problem that we should all be mindful of. And so that's basically the landscape of the risks and the threats that I see as the most urgent, but I will leave it there to be mindful of time.

Wolfgang Kleinwächter: Thank you, Jimena. You made a good point: nuclear bombs were produced but not used, while AI weapons are both produced and used. This needs more awareness, because all the discussions are taking place more or less in small expert circles, and the level of public awareness about this issue is relatively low. So we need much more public awareness, and this is the first discussion at an IGF on this issue. One of the objectives of this discussion is to raise the level of awareness, and awareness leads to education. And we have here Olga Cavalli, who is the dean of the Defense University in Argentina. My question to Olga is: how do you prepare the soldiers and generals of tomorrow for this new situation? Thank you, Olga.

Olga Cavalli: Thank you, Wolf. Can you hear me? Thank you. Thank you very much for inviting me. This is a very interesting question, and I very much like the perspective that our colleague from Mexico brought: what happens with the Global South? So, developing economies, or the Global South, and I can bring some perspectives from Latin America. Latin American countries are engaged in different discussions and negotiations related to autonomous weapons. We have been active for more than 10 years in different spaces, saying that this is a concern for our countries, for our region. The challenge for developing economies is always how we approach this technology. In general, we don't produce this technology; we use it, and it's expensive to buy. And imagine, from a capacity-building perspective, how can you train our soldiers and our civilians? I very much like your perspective that it's not only about military issues; it's also about other uses, legal or illegal, of these weapons. How do you approach a technology that is developed so far away from developing economies, and how reachable is it from an affordability perspective? It's extremely expensive, you don't develop it, and it's extremely hard to buy. How do you approach this training? We have been working from our university in different collaborations with universities from developed economies in the United States, Europe and other countries. We think that collaboration between different teaching spaces is the way our countries can approach and learn about these technologies. For you to have an idea: the minister called me for this position because of my training in technology, and we opened a new degree program in cyber defense. We had more than 1,000 applications in one month. So what the authorities were expressing this morning about the need for training in cybersecurity and cyber defense is a reality in all countries. These new programs are in high demand. So our challenge is: how are we going to train these people in things like autonomous weapons? It's a huge challenge for us. I think the way forward is cooperation with other universities, with other governments. We are working on that, and our president is very keen on going abroad and making these agreements. So I think this is the way. At the same time, in general, we think that a global treaty could be a very useful tool, because what usually happens is that these regulations are developed in different spaces and with different focuses, and usually the Global South is following up, but perhaps not so involved in the development of the regulations. So a global agreement could be ideal. As usual, it's difficult to achieve. I will stop here and continue contributing later. Thank you.

Wolfgang Kleinwächter: Thank you very much, Olga. We have two years to go until 2026, so let’s hope for the best. But… I don’t know why this happened, it goes on and on. Like this? You said like this? Probably it’s my mouse, so I have no idea. I think you need to hold it like this. Okay, yeah. Capacity building was the responsibility of Chris Painter for many, many years. Chris is well known in this community. He was the first US Cyber Ambassador, and I hope he is online. Chris, can you hear us? And then you have the floor.

Chris Painter: I can hear you. Hopefully you can hear me. Can you hear me?

Wolfgang Kleinwächter: Yes, we can hear you.

Chris Painter: Excellent, great. Well, it's good to be here, sadly only virtually. I wish I was there in person. But, you know, this debate is not new. It's been made more urgent by the reality of AI. But I remember, and folks who know me well know that I'm a devotee of various cyber movies, there is, going back even to 1970, the first movie where computers took over the world, called Colossus: The Forbin Project, with exactly this scenario. The US decided, because they thought it would be more rational, to take emotions out of it, and they put a computer in charge of the nuclear arsenal. The Soviets did the same. The two talked to each other, became self-aware, and took away all civil liberties to protect humankind from itself. So this is not a new issue. It's been dramatized over the years, certainly in Terminator and other places like that. But it's been made, obviously, more real by the emergence of AI as a real thing. Although, as other analysts have said, it is still very unsettled exactly what the technology is, what its capabilities are and where it's going. But as you say, Wolfgang, there is some urgency around this because it's fast evolving. You know, I draw some parallels to the area of cyber and cybersecurity. As many folks know, there's been debate for many years now in the cyber community about cyber attacks and cyber capabilities, offensive and defensive, moving to an autonomous level, taking the man out of the middle, in some sense. The argument for that for many years has been that cyber, quote, moves at the speed of light, and if your attackers can hit you, and now with artificial intelligence can hit you more often, with such lightning quickness and adaptability, you need an autonomous system to respond to them. Now, the problem with that, as in this area generally, is that it's not clear. And of course, as others have said, AI spans the entire landscape of everything from cyber tools to drones to physical weapons. But for cyber tools, and I think more generally, the escalation paths are still not really clear: how these potential capabilities can be used, how they will work. And if you have AI working against AI, then you have an even greater chance that an escalation path gets out of control. I think that's some of the things the panel has mentioned. So that's a real concern. But even with that, I don't think we've made a lot of progress in cabining when you would have automated responses in cyber; you know, that's still a live debate within countries and between countries, and I don't think we've seen a huge amount of progress. And of course, that's likely less lethal, though there may be some lethal cases, than using it in more physical domains, as we've been talking about today. So that's one concern. Another is the cybersecurity implications of attacks on these autonomous weapons systems. Like everything else, if they're connected, they are essentially insecure at some level. If they're not connected, there are still ways to get into them. And that's even leaving aside all the uncertainties of how AI works, and it not really being as secure or as unbiased as people think it is.
The cybersecurity implications, and this has been true for weapon systems more generally, have been a huge concern, because you could have an adversary breaking into these weapons, changing the artificial intelligence parameters, changing when they're used, and that again creates huge risks to peace and security. And then finally, as was pointed out by others, AI is not this unbiased system that's out there. It depends on training. It depends on how you educate it and what the parameters are. So the thought that it could be unbiased itself is a problem. So what that leads to, I think, is: what are the solutions here? As was pointed out, the GGE on these topics has been longstanding, but has not made huge amounts of progress in moving toward what many people think we need, which is a treaty. I guess I'm less optimistic that a treaty can be reached, and I base that on what I've seen in other areas as well, where the geopolitical differences we're facing are ones where I think there's unlikely to be agreement. The other issue is that because this is such a quickly evolving field, and as was pointed out by Olga and others, we still don't know the implications of how AI can be used, how it can't be used, and how you can cabin, as Vint said, the technical requirements of this, so reaching a treaty in the short time frame of two years is going to be very difficult without a basic understanding of where the technology is going, and that technology is continuing to move fast. So then the question is, what kinds of things might we do? I think education is critically important, and bringing in other stakeholders, as this discussion is doing, is important. I think addressing the AI divide, as I think Olga put it, with a lot of the Global South, and making sure there's more capacity building, not just in this area but in attendant areas too, like cybersecurity and AI more generally, and awareness. I think calling out use cases, where you actually say where we've seen these technologies being used in autonomous weapons and what the implications are, so it's made more real, is really important. And ultimately, I think before you get to a treaty, calling out what is good and what is bad, what norms of behavior there are, like we have in cyberspace, but applying, you know, likely different ones in this case, building toward a treaty eventually. I wish we could move quicker, but I have a feeling that because of the uncertainty of the technology, plus the geopolitical issues, that's going to be very difficult to do in the short term. And I think that's exacerbated by the oversight issues that one of our speakers raised, which I think are very difficult here, too. So, you know, I expect we're going to have to move more incrementally, but I expect part of that is the education of both the general populace and even the people who work within the UN and in governments about what the implications of this are more generally. And with that, I'll stop.

Wolfgang Kleinwächter: Thank you, Chris. And thank you also for putting a little bit of water into the wine. Sometimes it's good to be more realistic rather than too optimistic. Anyhow, as you said, all stakeholders have to be involved in the development of the framework for the future. And you need technical experts. Ram Mohan, who grew up in India, has for many, many years been a technical expert in the ICANN community, was an ICANN board member, and is now the CSO of Identity Digital. He also represents the private sector here, and there will be no autonomous weapon system without business. Ram, what's your approach?

Ram Mohan: Thank you. This is one of those things where you need a village to use the microphones. I wanted to focus on objective information and data as a basis for policymaking. I hear discussions about how to solve problems, and I hear ideas such as guaranteeing human control, with legal means as the way to achieve it. And I want to introduce some of the risks and threats that come with the evolution of software engineering, because I think we have to understand the software basis and the engineering basis before we get to the legal and policy areas. AI's own evolution means that currently known methods in software engineering of testing, quality assurance and validation are either incomplete or insufficient. Many weapons systems in the conventional area demand a model of zero defects, right? So there's a zero-defect model that is expected. Now, while the concept of a zero-defect AI system is appealing, it's important to recognize some of the inherent limitations that exist there. If you look at some of the key challenges, one is data quality and bias. As Chris Painter was saying, AI systems learn from the data that they are trained on, but we also know that all data is biased and all data is inherently inaccurate. And that will strongly influence the outputs of the AI systems. The second piece is algorithmic limitations. We know that current AI algorithms struggle with complex or ambiguous situations, and when it comes to weapons systems, that's almost the entire definition: all situations there are complex and ambiguous, with a lot of changing parameters. The third component is unforeseen circumstances. AI systems are likely to struggle to understand unexpected inputs or situations that deviate from their training data, right? And what we have been talking about is, in those cases, let's make sure that there is human oversight. But when there is no understanding of how the AI system arrived at the conclusion it did, human oversight defaults to mere intuition. And that may not be sufficient when we're talking about human-scale problems rather than just technology issues. So when you're talking about high-consequence decisions that are driven by AI, we also understand that AI systems learning from prior data sets can create novel behaviors that are neither predictable nor foreseeable. And this is exacerbated in the edge cases. One of the interesting and evolving characteristics that I have been studying is the relative ease with which you can jailbreak AI-based systems. And the jailbreaking is often a matter of expert prompt engineering. For those of you who don't know what prompt engineering is, it's really the science, some call it an art, but I think it's more of a science, of creating effective prompts that guide the AI model to generate desired outputs, right? So you may be able to program guidelines, laws, treaties into an AI model and say, you must conform to all of these guardrails. But I think that smart
And when that is not documented or when that is not understandable, you are, I think, gonna have a system that compounds an original deviation from the norm. So I therefore have some concerns about a discussion that starts with the premise that human control is a good way or is the way to help solve what is evolving here. Because you can establish strong ethical guidelines, you can create international regulations, and you can build robust safety measures. But if you look at the software engineering underneath these systems, the data validation, the fact that today’s systems, it’s very hard to create a zero defect model, combined with the enormous capability of smart prompt engineering to jailbreak these systems, makes me think that we have to spend quite a bit more time in research, understanding how these systems. systems work, have a lot of simulations of those kinds of systems first, and then start to build some global frameworks and global norms of what safety should be before we can start to think about a treaty or an international agreement that makes sense, because when the foundational principles are not fully characterized, you end up, if you start to work on law or treaties, you may find that the unintended consequences may be far greater than the good that was intended.

Wolfgang Kleinwächter: I think this is a very interesting additional aspect, and if I understand you correctly, there is really a problem that even if you have human control, the underlying technology may overstretch the capacity of the human who is in control, so that control exists just on paper while reality moves in a different direction. And I think this is an issue for a lot of civil society organizations. A number of them are involved; there is a broad NGO coalition called Stop Killer Robots, which is also active in the GGE on LAWS. And Kevin, you represent Amnesty International, which has also discussed this issue for some years. So what is the civil society perspective? You have watched all these experts from diplomacy, technology and business; what do you think about this from a civil society perspective? And then we have time enough for two or three questions from the floor. Please prepare your questions.

Kevin Whelan: Thank you, and good afternoon, everyone. It's a pleasure to be here and to speak on behalf of Amnesty International on this important topic. It's a bit of a challenge to be, I think, maybe the ninth or tenth speaker on a panel right after lunch, so I'll try to be as concise as possible. But it's great, because I think it gives me a bit of an opportunity to respond to some of the things the panelists have already said. I speak on behalf of Amnesty International, which is part of various coalitions, not necessarily on behalf of all civil society groups in general. But from our perspective, we view the challenges and risks that come from autonomous weapon systems as imminent and as significant. And it's for that reason we believe the international community should clarify and strengthen existing international humanitarian and human rights law through a legally binding instrument, an instrument that would do at least three things. One, it would prohibit the development, production, use and trade of systems which by their nature cannot be used with meaningful human control over the use of force. And I hear what Ram is saying, and I think from our perspective we're viewing this as, let's say, a legal standard, not necessarily a technical standard, but perhaps we can discuss that in more detail. The prohibition would extend to systems that are designed to be triggered by the presence of humans or that use human characteristics for target profiles, the so-called anti-personnel autonomous weapon systems. Two, in addition to that prohibition, a regulation of the use of all other autonomous weapon systems. And three, on top of that, a positive obligation to maintain meaningful human control over the use of force. Now, as some of the speakers have already mentioned, the use of autonomous weapon systems in armed conflict has been at the center of the debate, much of which has taken place in the CCW. But as Jimena and Olga and others have said, this is a debate with dimensions broader than armed conflict and broader than the CCW. It's not just an issue of IHL, not just an issue of weapons law, but also of human rights. So I wanted to use a bit of time to focus on the dangers in relation to the law enforcement context, where the use of force is governed by a different threshold from that which applies in armed conflict. From our perspective, the use of autonomous weapon systems in this context would be inherently unlawful, as the international law and standards governing the use of force in policing rely on nuanced and iterative human judgment. This goes back to something Ram was saying about the challenges that some of these systems have in dealing with complexity. We are talking about an exceedingly complex decision that should not be delegated. A law enforcement officer must continually assess a given situation in order to, if possible, avoid or minimize the use of force. I'm not saying that the legal determinations in the context of armed conflict are simple. What I am saying is that the legal determinations in a law enforcement context are exceedingly complex. And if such decisions were to be delegated to a system, given the complexity of the issues to be addressed, the system would have to be so complex as to fall outside of meaningful human control.
In other words, a machine sophisticated enough to attempt to adapt to subtle environmental cues would be inherently unpredictable. So then we come back to the notion of how you evaluate that with something other than just intuition. And this becomes a significant issue in terms of accountability, because it would blur the lines of responsibility and accountability and undermine the right to remedy. The last thing I wanted to point out is that the use of autonomous weapon systems in law enforcement would be dehumanizing. It would violate the right to dignity and undermine the principles of human rights compliant policing. One of the panelists has already addressed the issue of bias in algorithms and systems. There are risks of systematic errors and bias in algorithms and in autonomous systems. We have documented how complex systems can produce biased results based on biased data. For example, facial recognition can lead to profiling on ethnicity, race, national origin, gender and other characteristics, which are often the basis for unlawful discrimination. So then imagine adding lethality as a component to that system. And this, stepping back, is one of the reasons why we see value in the process at the General Assembly: it has an aperture broader than that of the CCW context. Thank you.

Wolfgang Kleinwächter: Thank you very much. And I think we have time for one or two questions. So you need a microphone to ask a question.

Audience: Yeah, I hope you can hear me. So thank you for this wonderful panel. I think this is a very important issue. My name is Hiram. I'm from Encode Justice. We're part of the Stop Killer Robots Coalition. We actually had a member go to the GGE LAWS meeting in Geneva, and it was very appalling to see only two data scientists there, like me, two people from the technical community. And it feels like, you know, in a lot of the rolling text and so on, a lot of the technical issues are overlooked. Diplomats expecting these systems to be controllable and reliable and predictable is, you know, kind of a dream. I think the question here is: what are the bottlenecks in terms of understanding for diplomats or, you know, government bodies to work towards an international treaty banning or regulating autonomous weapon systems?

Wolfgang Kleinwächter: Ambassador, can you take the questions here? Stop Killer Robots is an NGO in the GGE on LAWS.

Ernst Noorman: You can hear me? Yes. Thank you very much for the question. You know, the ambition of the chair, my colleague, is to include as many voices at the table as possible. That's why he has been really actively encouraging the involvement of stakeholders and other organizations: not only the signatory countries and observing countries, but also academics, NGOs like Amnesty International, and the ICRC, to get a full picture and to involve everyone. At the same time, we are ambitious in trying to reach some agreement among countries, and I understand from your contribution the limitations, but at the same time we feel the urgent need to be ambitious. We have been ambitious with REAIM and in tabling this resolution to put the issue on the table. I understand from the contributions that it's going to be difficult to reach an agreement, actually any agreement, in this area, but without ambition you won't reach anything.

Wolfgang Kleinwächter: Okay, thank you very much and we have two questions online and then we have another one here in the room. So could we hear the first question online?

Audience: Yes, hello, can you hear me? Yes, we can hear you. Yes, hi, this is Milton Mueller from Georgia Tech. I want to go after this title again about Oppenheimer. I think I haven’t heard much about one of the main problems facing AI governance, which is the belief among certain developers of AI that they have, in fact, put us on the path of an autonomous, not just a lethal weapon system, but an autonomous superintelligence that is capable of and might inevitably result in the destruction of humanity. And you know, about a year and a half ago, two years ago, we had this massive panic, and we had the Future of Life Institute resolution that we should stop all development of AI. And it was those people who believed that they had passed an Oppenheimer moment, that they had discovered a power so awesome, comparable to Oppenheimer’s weaponization of atomic fission. And those of us who have investigated this problem now know that this is a myth. This idea of a superintelligence that is imminent, and that this superintelligence will have the power to destroy all of humanity and all of human civilization is just not a realistic thing. So I hope that, I think your discussion of the issue of lethal autonomous weapons has been much more grounded in reality. But I do want to know if we are not headed towards a sort of revival of the myth of a superintelligence that is autonomous and capable of destroying humanity.

Wolfgang Kleinwächter: Yeah, first, let’s take the second question, and then we try to find the person who can reply to Milton. The second question is…

Audience: Can you hear me? Hello? Yeah, Sivash, we can hear you. Okay. So these concerns about AI are very much shared by business leaders, but there is a recent point of view that another country, another region, is in the race to develop AI, and if we slow down or withdraw from this race, they will win. So we will stay in the race and continue developing without safeguards, and after we win the race, we'll worry about the safeguards. Shouldn't instead the governments and all actors get into the same room and try to achieve a solution, whether at the UN, ICANN, or in a conference center like Potsdam or any other historical place? That's my question. Thank you.

Wolfgang Kleinwächter: Okay, thank you. We have two more questions in the room. My proposal is that we take all three questions in the room, giving the three questioners the possibility to speak, and then we have a final round. So you need a microphone; so one, two, three, four, and then we close the queue. Then we have a final round among the participants, and then Ambassador Schusterschitz will make a final remark. Okay, go ahead.

Audience: Okay, I'll make a small remark. I had a lightning session yesterday, where I was actually showing an actual military warfare drone which, costing like 500 bucks, is able to take out a $10 million tank. And this is technology which is actually in use now. And the trick is, there are already lots of attempts, and lots of successful attempts, to implement AI on the battlefield, from swarm drones to mothership drones connecting to the HQ over a Starlink antenna literally glued to this mothership drone flying high in the, like, skies. So what I have to say is, I've been thinking a lot about how we can protect our future from AI going rogue and hostile in some way. It is not a battle between humans and humans. It's a battle between humans and some mad robots, basically. And I think we are kind of going the wrong way in the design of our attempts to regulate AI, because you cannot regulate the development of AI. It's super rapid, and nobody will actually agree with you and hear you out and so on. But what we could regulate is, finally, weapons. I mean, the problem of AI getting hostile is the problem of AI intentionally, on its own, pulling the trigger, pulling the digital trigger of some weapon, no matter whether a pistol or an intercontinental ballistic missile. So if we do not limit AI, but instead limit, by some UN treaty, the ability to produce a weapon equipped with a digital trigger that can be used by AI, we can protect ourselves. It may sound weird, but a human should only be killed by God or another human. There should not be any robot pulling this trigger. Thank you.

Wolfgang Kleinwächter: Okay, thank you very much. You need a mic. Take this one.

Audience: Hello? Yeah, can you hear me? Good. Artem Kruzhulin. I was actually a panelist on an earlier panel related to public and private sector cooperation. And my question is in a way related to this very subject. So ever since AI was a subject, there’s always been an ongoing theme of the fact that legislation is consistently in a position where it’s falling further and further behind. And it’s very difficult to continue keeping up. How would you comment on the fact that while we are still here trying to discuss conceptual ideas around the way to control these systems, there are private sector companies, such as Helsing or Unreal, that are already deploying these systems in life conflicts. And they are in a way superseding the discussion just by sheer fact that they’re actually using these systems already. And what do you see as a… the solution to these problems. Okay, thank you. All right, thank you very much. My name is Kunle Olorundari and I’m the president of Internet Society, Nigerian chapter, and at the same time a researcher. And interestingly, I wrote a paper recently that I published on ITP platform based on this subject matter that is artificial generative intelligence terrorism. And that’s more of that was what drawn me to this session because I really want to get to know more about what is being discussed. And when I listened to one of our panelists, I mean, the perspective we chew in was so interesting to me because I was actually looking in my own paper, I was looking at deontology and utilitarianism. Deontology says that, okay, fine, let’s look at how we can look at the use of AI in a moral perspective. But then, and I discovered that when I was looking at my paper and of course I set up like a focus group of experts that speaks to those issues. I discovered that, yeah, that’s going to be a bit pretty difficult because now I have to go to the extent of define what is moral, which of course I know that all of us are not going to agree on. Then on the issue of utilitarianism, right? Looking at, okay, the maximum effective use in terms of the good use. Yeah, I can say that, okay, this is a good use. And that person will say, no, that is not a good use. So I discovered that, well, there are so many perspectives. And when I had the perspective of one of our panelists where they said that, okay, yeah, we now need to look at the issue of data because all data are inherently inaccurate. That now connected to the utilitarianism and the ontology. And I was thinking, oh, wow, I think this is just the right time for us to start talking about all these issues because this has come. and there’s nothing anybody can do about it, the best thing you can do is to take it to the next level. 
The issue of a treaty: yes, it will definitely come, but the IGF is just a forum where we discuss these issues and elicit ideas; there is no binding treaty here. So I think we should look at how to take this to the next level, maybe a plenipotentiary, where you have the ITU’s radiocommunication, standardization and development arms, where these issues are discussed, and probably something can come out of there, so that we can take it to a level where it is binding on each and every one of us. And for me, I just want to know: apart from the Plenipot, which I am familiar with, is there any other platform where we can discuss standardization when it comes to AI? Thank you very much.

Gregor Schusterschitz: Okay, thank you. We have a final question here, and then a final round around the table; this time we start with Kevin. But please, not too long. Can you introduce yourself and ask your question?

Audience: Hi, I’m Raida Lindsay, a local digital policy expert. My question was mostly covered, but I want to ask: we are seeing the deployment of autonomous decision-making in war today, especially in Gaza, and a lot of it is being piloted and demonstrated as best practice around the world by these private companies. So I wonder, what is the short-term solution, something we can do and campaign for today, to limit the impact of autonomous decision-making in war?

Gregor Schusterschitz: A lot of good questions. I propose that each of you picks what you want to address from your field of expertise: Kevin first, then Jimena, and then we go around the table.

Kevin Whelan: Great, thank you. Maybe just a couple of points about the complexity of the technology and the challenges in fully understanding it. I am not a technology expert, but I don’t think you, or any of us, need to understand the technology to understand what is at stake. I am not saying that you can necessarily create a system that is subject to meaningful human control. What I am saying is that if you cannot have meaningful human control over a weapon system, then that is a system that should not be deployed. The other point I wanted to make, and it has been picked up by a number of questions, is how to reconcile the argument that these are complex systems and we need to wait to see how they develop with the fact that these systems are already being deployed in multiple conflicts. That is exactly why we believe there is urgency. And what can we do? We fully support the call of the Secretary-General and the ICRC to negotiate a binding treaty by 2026. So what you can do is campaign on that behalf: make your voices heard and talk about the urgency of this situation. Thank you.

Wolfgang Kleinwächter: Okay, thank you.

Jimena Viveros: Hi. Can you hear me? Yes? Okay. A little bit about everything, then. As I said, AI is a new monster, and AI in the peace and security domain is an even newer, bigger monster. So we need to reimagine what governance looks like, because the traditional models of governance we have seen so far have proven not to be the most adequate ones. We obviously need multidisciplinary approaches, and we need engagement with industry, of course, to promote and to some degree guarantee transparency and some type of cooperation on enforcement, because otherwise we are just drafting dead paper, as we would say. We definitely need capacity building, as I said, especially for the Global South. To make that happen, I think everyone, from wherever we are standing in our trenches, can speak to our policymakers and demand this, so that it can become binding; otherwise we will just stay stuck in the same place. I also believe it is very important to talk about standards, which were raised, because that is the only way we can verify, in a measurable way, the type of guardrails in place and how to keep them from being overridden. This is critical for the way forward. That is why we need to reimagine how governance for this technology happens, and we need to do it very fast and very agilely, because we are way behind where we should be. It is terrible that these systems are already being field-tested live, with no intermediate phase before deployment, because we are seeing the consequences all around the world, and again, the Global South is bearing the worst of it. Thank you.

Wolfgang Kleinwächter: Thank you. We are being pushed out of the room now, so Ram and Mr. Ambassador, you each have just one minute for a final comment, and if Chris wants to say something, fine.

Ram Mohan: Thank you, Wolfgang. I’ll be very brief. We should recognize that there are no unbiased and accurate AI decisions, and that there are dependencies. The important thing here is to build risk management frameworks that mitigate both the known and the unknown risks that are accelerated by machine learning systems.

Wolfgang Kleinwächter: Mr. Ambassador.

Ernst Noorman: Thank you very much. I fully understand the frustration that the negotiations are lagging behind reality. That is, of course, a big concern for us all, but it does not excuse us from working hard towards an agreement on the subject. We are fully committed, as chair of the GGE, to work hard. We are happy with the informal forum in New York, and as chair we will be briefing countries and the wider New York community on the development and work of the GGE. We will keep on working to achieve a result by 2026, the task that has been given to us, and we feel responsible for that. So we are working towards a legally binding instrument to prohibit those autonomous weapons systems that cannot be used in accordance with international law and to regulate the use of other autonomous weapons, a concept that is broadly supported by many states. It is my hope that we can ultimately enshrine this in a new protocol to the CCW. Thank you.

Gregor Schusterschitz: Okay, thank you.

Olga Cavalli: And especially what Ram said: it is a big challenge for universities, not only in the Global South but everywhere, to take a multidisciplinary perspective. This is challenging for universities because each faculty is very much focused on its own field. So, hearing you, I think we really have to develop a broad understanding of technology. Thank you for inviting me.

Chris Painter: Just finally, on Milton’s point, what gives me some hope here is that we are actually talking about use cases. We are not just talking about the specter of AI as some giant monster; we are looking at how it applies to autonomous weapons. And I completely agree with the comment about focusing on several levels, including risk management frameworks, because autonomous devices are not new. We have been talking about those for 30 years, but AI adds its own complexity. A lot of people just use AI as a talisman: they say the words and it is supposed to mean something. Actually getting down to brass tacks and talking about how those use cases work is important, so I don’t think we are in the same loop we were in before. Then, on locking people in a room and hoping they come up with an agreement: I agree with Ernst that it is great to have ambition; if you don’t have ambition, you don’t get anything. I think it is unlikely that locking people in a room will produce something in the short term, but it is important to have this process and to keep it going. And finally, on capacity building, as Olga and others have said, I think that is critical, critical to awareness, and not just for the Global South but more generally. The Global Forum on Cyber Expertise, the capacity-building platform, has created a working group on emerging technologies and AI that applies more to the cybersecurity context, but I think it also covers some of the aspects we talked about today. So capacity building is another practical thing we can do while we discuss what the constraints, the treaties, and even the norms are in this area as we apply them to technology. Also, thank you for having me here.

Wolfgang Kleinwächter: Okay, thank you, Chris. And the final word comes from Ambassador Schusterschitz. Are you online, or are we now pushed out of the room?

Gregor Schusterschitz: Thank you very much. Just a few sentences to summarize a bit the discussion we had today. I think it was very good to have these various experts from various fields show the risks and severe consequences that unregulated autonomous weapons would have. This time pressure is what we call the Oppenheimer moment: we need to keep up with the development, and we need to find regulation. I think that was clear for everyone. But we need very smart and targeted regulation that also keeps pace with rapid technological development, and this is not the first area where technology develops rapidly and we need to regulate it to a certain extent. Of course, we require a multi-stakeholder approach here. We cannot have only diplomats and military experts in the room trying to regulate; we need scientists, we need software engineers, and we need civil society to find a way to regulate autonomous weapons that is also flexible enough for future developments.

Wolfgang Kleinwächter: Thank you very much. That is the end of this session and the start of a new beginning. Thank you, and see you in the next session rounds or in the informal consultations in New York. Thank you.

G

Gregor Schusterschitz

Speech speed

166 words per minute

Speech length

473 words

Speech time

170 seconds

Need for binding rules and limits by 2026

Explanation

Schusterschitz argues for the urgent need to establish binding rules and limits on autonomous weapons systems by 2026. He emphasizes the importance of moving from discussions to actual negotiations on this matter.

Evidence

Austria hosted the Vienna Conference ‘Humanity at the Crossroads’ and tabled two UN General Assembly resolutions on autonomous weapons systems.

Major Discussion Point

Urgency of regulating autonomous weapons systems

Agreed with

Ernst Noorman

Jimena Viveros

Kevin Whelan

Agreed on

Urgency of regulating autonomous weapons systems

Need to involve diplomats, military, academia, industry and civil society

Explanation

Schusterschitz emphasizes the importance of a multi-stakeholder approach in addressing autonomous weapons systems. He argues that the issue has broad implications and thus requires input from various sectors of society.

Evidence

Austria’s welcoming of contributions from science, academia, the tech sector, industry, and broader civil society.

Major Discussion Point

Multi-stakeholder approach to governance

Agreed with

Jimena Viveros

Agreed on

Multi-stakeholder approach to governance

E

Ernst Noorman

Speech speed

136 words per minute

Speech length

1004 words

Speech time

439 seconds

Fast pace of development closing window for preventive regulation

Explanation

Noorman highlights the rapid development of autonomous weapons systems, which is narrowing the window for preventive regulation. He stresses the urgency of addressing this issue before it becomes too late to effectively regulate.

Evidence

The GGE on LAWS has been working since 2016 and now includes 127 High Contracting Parties.

Major Discussion Point

Urgency of regulating autonomous weapons systems

Agreed with

Gregor Schusterschitz

Jimena Viveros

Kevin Whelan

Agreed on

Urgency of regulating autonomous weapons systems

Differed with

Chris Painter

Differed on

Approach to regulating autonomous weapons systems

Geopolitical tensions and mistrust hindering progress

Explanation

Noorman points out that geopolitical tensions and mistrust among states are obstacles to progress in regulating autonomous weapons systems. These factors make it difficult to reach agreements on international regulations.

Major Discussion Point

Challenges in regulating AI and autonomous weapons

R

Ram Mohan

Speech speed

118 words per minute

Speech length

873 words

Speech time

440 seconds

Difficulty in creating unbiased and accurate AI systems

Explanation

Mohan argues that it is inherently challenging to create unbiased and accurate AI systems. He points out that all data is biased and inherently inaccurate, which influences the outputs of AI systems.

Evidence

Examples of data quality and bias, algorithmic limitations, and unforeseen circumstances affecting AI systems.

Major Discussion Point

Challenges in regulating AI and autonomous weapons

Differed with

Kevin Whelan

Differed on

Feasibility of creating unbiased AI systems

Limitations of current software engineering methods for AI

Explanation

Mohan highlights that current software engineering methods for testing, quality assurance, and validation are insufficient for AI systems. This creates challenges in ensuring the reliability and safety of AI-powered autonomous weapons.

Evidence

Discussion of zero-defect models and the challenges of applying them to AI systems.

Major Discussion Point

Challenges in regulating AI and autonomous weapons

J

Jimena Viveros

Speech speed

153 words per minute

Speech length

1440 words

Speech time

563 seconds

Importance of moving from discussions to negotiations

Explanation

Viveros stresses the need to transition from discussions to actual negotiations on binding rules for autonomous weapons systems. She argues that the current pace of development makes this shift urgent.

Evidence

Reference to the UN Secretary General’s call for a binding treaty by 2026.

Major Discussion Point

Urgency of regulating autonomous weapons systems

Agreed with

Gregor Schusterschitz

Ernst Noorman

Kevin Whelan

Agreed on

Urgency of regulating autonomous weapons systems

Need for new governance models suited to AI challenges

Explanation

Viveros argues that traditional governance models are inadequate for addressing the challenges posed by AI in the peace and security domain. She calls for reimagining governance approaches to better suit the unique characteristics of AI technology.

Major Discussion Point

Multi-stakeholder approach to governance

Agreed with

Gregor Schusterschitz

Agreed on

Multi-stakeholder approach to governance

O

Olga Cavalli

Speech speed

138 words per minute

Speech length

527 words

Speech time

228 seconds

Lack of technical capacity in Global South countries

Explanation

Cavalli highlights the challenge faced by Global South countries in developing technical capacity related to AI and autonomous weapons. She points out the difficulty in approaching and learning about these technologies due to limited resources and access.

Evidence

Example of high demand for new cyber defense programs in Argentina.

Major Discussion Point

Capacity building and education

Agreed with

Wolfgang Kleinwächter

Chris Painter

Agreed on

Capacity building and education

Need for multidisciplinary education on AI and autonomous weapons

Explanation

Cavalli emphasizes the importance of multidisciplinary education in understanding and addressing the challenges of AI and autonomous weapons. She argues that universities need to broaden their approach to teaching these subjects.

Major Discussion Point

Capacity building and education

Agreed with

Wolfgang Kleinwächter

Chris Painter

Agreed on

Capacity building and education

W

Wolfgang Kleinwächter

Speech speed

129 words per minute

Speech length

1327 words

Speech time

614 seconds

Importance of raising public awareness

Explanation

Kleinwächter stresses the need to increase public awareness about the issues surrounding autonomous weapons systems. He argues that discussions are currently limited to small expert circles and need to be broadened.

Evidence

Mention of this being the first IGF discussion on the topic.

Major Discussion Point

Capacity building and education

Agreed with

Olga Cavalli

Chris Painter

Agreed on

Capacity building and education

C

Chris Painter

Speech speed

190 words per minute

Speech length

1482 words

Speech time

466 seconds

Rapid evolution outpacing regulatory efforts

Explanation

Painter points out that the fast-paced evolution of AI technology is outstripping efforts to regulate it. He suggests that this makes it challenging to develop effective governance frameworks.

Major Discussion Point

Challenges in regulating AI and autonomous weapons

Differed with

Ernst Noorman

Differed on

Approach to regulating autonomous weapons systems

Role of capacity building in supporting governance efforts

Explanation

Painter emphasizes the importance of capacity building in supporting efforts to govern AI and autonomous weapons. He argues that this is critical for raising awareness and understanding of the issues involved.

Evidence

Mention of the Global Forum on Cyber Expertise creating a working group on emerging technologies and AI.

Major Discussion Point

Capacity building and education

Agreed with

Olga Cavalli

Wolfgang Kleinwächter

Agreed on

Capacity building and education

K

Kevin Whelan

Speech speed

161 words per minute

Speech length

1036 words

Speech time

384 seconds

Existing systems already being deployed in conflicts

Explanation

Whelan points out that autonomous weapons systems are already being used in current conflicts. This underscores the urgency of addressing the regulation of these systems.

Major Discussion Point

Urgency of regulating autonomous weapons systems

Agreed with

Gregor Schusterschitz

Ernst Noorman

Jimena Viveros

Agreed on

Urgency of regulating autonomous weapons systems

Need to maintain meaningful human control over use of force

Explanation

Whelan argues for the importance of maintaining meaningful human control over the use of force in autonomous weapons systems. He suggests that systems without such control should not be deployed.

Major Discussion Point

Human control and accountability

Differed with

Ram Mohan

Differed on

Feasibility of creating unbiased AI systems

Risks of autonomous systems in law enforcement contexts

Explanation

Whelan highlights the potential dangers of using autonomous weapons systems in law enforcement. He argues that such use would be inherently unlawful due to the complex decision-making required in policing situations.

Evidence

Discussion of the nuanced and iterative human judgment required in law enforcement contexts.

Major Discussion Point

Human control and accountability

V

Vint Cerf

Speech speed

150 words per minute

Speech length

323 words

Speech time

128 seconds

Importance of clear lines of accountability

Explanation

Cerf emphasizes the need for clear accountability in the development and use of AI and autonomous weapons systems. He suggests that this is crucial for responsible development and deployment of these technologies.

Major Discussion Point

Human control and accountability

Agreements

Agreement Points

Urgency of regulating autonomous weapons systems

Gregor Schusterschitz

Ernst Noorman

Jimena Viveros

Kevin Whelan

Need for binding rules and limits by 2026

Fast pace of development closing window for preventive regulation

Importance of moving from discussions to negotiations

Existing systems already being deployed in conflicts

These speakers agree on the urgent need to establish binding regulations for autonomous weapons systems, emphasizing the rapid pace of development and the narrowing window for effective preventive action.

Multi-stakeholder approach to governance

Gregor Schusterschitz

Jimena Viveros

Need to involve diplomats, military, academia, industry and civil society

Need for new governance models suited to AI challenges

Both speakers emphasize the importance of involving various stakeholders in addressing the challenges posed by autonomous weapons systems and AI, recognizing the need for diverse perspectives and expertise.

Capacity building and education

Olga Cavalli

Wolfgang Kleinwächter

Chris Painter

Lack of technical capacity in Global South countries

Need for multidisciplinary education on AI and autonomous weapons

Importance of raising public awareness

Role of capacity building in supporting governance efforts

These speakers agree on the critical importance of capacity building, education, and raising public awareness about AI and autonomous weapons systems, particularly emphasizing the needs of Global South countries.

Similar Viewpoints

Both speakers highlight the technical challenges in developing and regulating AI systems, emphasizing the limitations of current methods and the rapid pace of technological evolution.

Ram Mohan

Chris Painter

Difficulty in creating unbiased and accurate AI systems

Limitations of current software engineering methods for AI

Rapid evolution outpacing regulatory efforts

These speakers emphasize the importance of maintaining human control and accountability in the development and use of autonomous weapons systems.

Kevin Whelan

Vint Cerf

Need to maintain meaningful human control over use of force

Importance of clear lines of accountability

Unexpected Consensus

Limitations of traditional governance models

Jimena Viveros

Ernst Noorman

Need for new governance models suited to AI challenges

Geopolitical tensions and mistrust hindering progress

Despite coming from different backgrounds, both speakers recognize the limitations of current governance models in addressing AI challenges, suggesting a shared understanding of the need for innovative approaches to regulation.

Overall Assessment

Summary

The main areas of agreement include the urgency of regulating autonomous weapons systems, the need for a multi-stakeholder approach to governance, and the importance of capacity building and education. There is also consensus on the technical challenges in developing and regulating AI systems, and the need for human control and accountability.

Consensus level

There is a moderate to high level of consensus among the speakers on the key issues. This suggests a shared understanding of the challenges and potential approaches to addressing autonomous weapons systems and AI in military contexts. However, there are still some differences in emphasis and proposed solutions, indicating the complexity of the issue and the need for continued dialogue and negotiation.

Differences

Different Viewpoints

Feasibility of creating unbiased AI systems

Ram Mohan

Kevin Whelan

Difficulty in creating unbiased and accurate AI systems

Need to maintain meaningful human control over use of force

Ram Mohan argues that creating unbiased AI systems is inherently challenging due to data biases and limitations in software engineering methods. Kevin Whelan, on the other hand, centers the question on meaningful human control, arguing that any weapon system that cannot be meaningfully controlled should not be deployed at all.

Approach to regulating autonomous weapons systems

Ernst Noorman

Chris Painter

Fast pace of development closing window for preventive regulation

Rapid evolution outpacing regulatory efforts

While both speakers acknowledge the rapid development of AI and autonomous weapons, Ernst Noorman advocates for urgent preventive regulation, whereas Chris Painter suggests that the pace of evolution makes it challenging to develop effective governance frameworks.

Unexpected Differences

Relevance of the ‘Oppenheimer moment’ analogy

Ernst Noorman

Jimena Viveros

Geopolitical tensions and mistrust hindering progress

Need for new governance models suited to AI challenges

While the ‘Oppenheimer moment’ analogy was introduced to highlight the urgency of the situation, Ernst Noorman expresses caution about drawing historical parallels, whereas Jimena Viveros argues that AI presents a fundamentally different challenge requiring new governance approaches. This unexpected disagreement highlights the complexity of framing the issue of AI and autonomous weapons.

Overall Assessment

Summary

The main areas of disagreement revolve around the feasibility of regulating AI and autonomous weapons systems, the appropriate approaches to governance, and the relevance of historical analogies in framing the issue.

Difference level

The level of disagreement among the speakers is moderate. While there is a general consensus on the need for regulation and the urgency of the issue, significant differences exist in how to approach these challenges. These disagreements reflect the complexity of the topic and the diverse perspectives of stakeholders from different sectors and regions. The implications of these disagreements suggest that reaching a unified approach to regulating AI and autonomous weapons systems may be challenging and require extensive negotiation and compromise among various stakeholders.

Partial Agreements


All speakers agree on the need for regulation, but differ in their approaches. Schusterschitz and Viveros emphasize the urgency of establishing binding rules, while Painter focuses on capacity building as a crucial step towards effective governance.

Gregor Schusterschitz

Jimena Viveros

Chris Painter

Need for binding rules and limits by 2026

Importance of moving from discussions to negotiations

Role of capacity building in supporting governance efforts


Takeaways

Key Takeaways

There is an urgent need to regulate autonomous weapons systems, with calls for binding rules by 2026

Existing autonomous weapons are already being deployed in conflicts, outpacing regulatory efforts

Regulating AI and autonomous weapons faces significant technical and geopolitical challenges

A multi-stakeholder approach involving diplomats, military, academia, industry and civil society is crucial

Capacity building and education, especially for the Global South, is essential to support governance efforts

Maintaining meaningful human control over the use of force is a key concern

Resolutions and Action Items

Work towards a legally binding instrument to prohibit autonomous weapons systems that cannot comply with international law and regulate others

Brief countries and the wider New York community on developments in the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE LAWS)

Campaign to support the UN Secretary General’s call for a binding treaty by 2026

Develop risk management frameworks to mitigate known and unknown risks of machine learning systems

Unresolved Issues

How to effectively regulate rapidly evolving AI technology

How to reconcile the need for thorough understanding of AI systems with the urgency of regulation

How to ensure meaningful human control over complex AI systems

How to address the capacity gap between developed and developing countries in AI governance

How to create unbiased and accurate AI systems for use in weapons

Suggested Compromises

Focus on regulating specific use cases and applications of AI in weapons rather than broad, abstract principles

Develop flexible regulations that can adapt to future technological developments

Combine binding treaties with softer governance approaches like norms and standards

Balance the need for regulation with the desire to not hinder beneficial AI innovation

Thought Provoking Comments

AI is a very different monster than nuclear because, even from its origins in the splitting of the atom, nuclear technology was immediately weaponized. And there was this whole veil of secrecy around it, with the Manhattan Project and everything that happened for years. And then, with the Cold War and the arms race, everyone was producing it, but no one really used it, because of mutually assured destruction. Whereas with AI, we do not yet collectively have the conscience that it will be the same.

speaker

Jimena Viveros

reason

This comment provides a thought-provoking comparison between AI and nuclear weapons, highlighting key differences in their development and use that make AI potentially more dangerous.

impact

This shifted the discussion to consider the unique challenges of regulating AI weapons compared to other types of weapons. It led to further exploration of the widespread and rapid proliferation of AI technology.

AI’s own evolution means that currently known methods in software engineering for testing, quality assurance and validation are either incomplete or insufficient. Many weapons systems in the conventional area demand a model of zero defects, right? So there is a zero-defect model that is expected. Now, while the concept of a zero-defect AI system is appealing, it is important to recognize some of the inherent limitations that exist there.

speaker

Ram Mohan

reason

This comment brings a crucial technical perspective to the discussion, highlighting the inherent challenges in developing reliable AI systems for weapons.

impact

It deepened the conversation by introducing technical complexities that policymakers need to consider. This led to further discussion about the limitations of human control over AI systems.

From our perspective, the use of autonomous weapon systems in this context would be inherently unlawful, as the international law and standards governing the use of force and policing rely on nuanced and iterative human judgment.

speaker

Kevin Whelan

reason

This comment introduces an important legal perspective on the use of autonomous weapons in law enforcement contexts.

impact

It broadened the scope of the discussion beyond military applications to consider the implications for domestic law enforcement. This led to further exploration of human rights and accountability issues.

Overall Assessment

These key comments shaped the discussion by introducing diverse perspectives – technical, legal, and comparative historical analysis. They collectively highlighted the complexity of regulating AI weapons, emphasizing the need for multidisciplinary approaches and urgent action. The discussion evolved from broad conceptual issues to more specific challenges in implementation and regulation across different contexts.

Follow-up Questions

How can we address the AI divide between the Global North and Global South?

speaker

Olga Cavalli

explanation

Important to ensure equitable development and use of AI technologies globally

How can we improve the involvement of technical experts in diplomatic discussions on autonomous weapons?

speaker

Hiram (audience member)

explanation

Critical to ensure technical realities are understood in policy-making

How can we regulate the development of weapons equipped with ‘digital triggers’ that could be used by AI?

speaker

Audience member

explanation

Potential approach to limit AI’s ability to autonomously use lethal force

How can governance and regulatory approaches keep pace with rapid AI development and deployment by private companies?

speaker

Artem Kruzhulin (audience member)

explanation

Addresses the gap between policy discussions and real-world implementation

What platforms or forums, beyond the ITU Plenipotentiary, could be used to discuss AI standardization?

speaker

Kunle Olorundari (audience member)

explanation

Seeks to identify effective venues for developing binding international standards

What short-term solutions or campaigns can be implemented today to limit the impact of autonomous decision-making in war?

speaker

Raida Lindsay (audience member)

explanation

Addresses urgent need for immediate action given current deployment of these technologies

How can we develop effective risk management frameworks to mitigate both known and unknown risks accelerated by machine learning systems?

speaker

Ram Mohan

explanation

Critical for addressing the inherent biases and inaccuracies in AI decision-making

How can we create a multidisciplinary approach in universities to better understand and address the challenges of AI in autonomous weapons?

speaker

Olga Cavalli

explanation

Important for developing comprehensive education and research programs

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.