Pre 10: Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative
12 May 2025 11:00h - 12:15h
Session at a glance
Summary
This discussion focused on the regulation of autonomous weapons systems (AWS) and the ethical, legal, and security implications of their development and use. The panel included perspectives from diplomacy, industry, technical experts, and civil society.
Key points of debate centered around the need for international regulation of AWS, the challenges of maintaining meaningful human control over these systems, and the potential risks and benefits of their deployment. The Austrian ambassador emphasized the urgency of developing binding rules and limits on AWS, highlighting concerns about compliance with international humanitarian law.
Industry representative Benjamin Tallis argued that AWS could enhance precision and effectiveness in warfare while potentially reducing human casualties for democracies. However, technical expert Anja Kaspersen cautioned about the complexities of delegating decision-making to AI systems and the need for robust governance frameworks.
Participants discussed the difficulties of reaching international agreement on AWS regulation given current geopolitical tensions. The importance of multi-stakeholder approaches and involving diverse perspectives in the debate was stressed. Concerns were raised about the potential for AWS to be hacked or misused, as well as questions of accountability for their actions.
The discussion highlighted the rapid pace of technological development in this area and the closing window for preventive regulation. While disagreements existed on some points, participants agreed on the need for continued dialogue and careful consideration of the ethical and practical implications of AWS as their development progresses.
Key points
Major discussion points:
– The need for international regulation and governance frameworks for autonomous weapons systems (AWS)
– Technical challenges and ethical concerns around AWS, including human control, accountability, and compliance with international law
– The changing nature of warfare and military technology, including the shift of innovation from military to civilian sectors
– The importance of a multi-stakeholder approach involving governments, industry, civil society, and technical experts
– Geopolitical tensions making international agreement on AWS regulation difficult
Overall purpose/goal:
The discussion aimed to explore the legal, ethical, and security implications of autonomous weapons systems from multiple perspectives, in order to inform public debate and potential governance approaches.
Tone:
The tone was largely collegial and constructive, with panelists respectfully disagreeing on some points while acknowledging areas of agreement. There was an emphasis on the value of hearing diverse viewpoints. The tone became more optimistic towards the end, with some participants expressing hope for eventual progress on international regulation despite challenges.
Speakers
Speakers from the provided list:
– Wolfgang Kleinwächter: Professor at the University of Aarhus, moderator of the session
– Aloisia Wörgetter: Ambassador of Austria to the Council of Europe
– Benjamin Tallis: Representative from Helsing, a defense industry company
– Anja Kaspersen: Representative from the technical community (IEEE)
– Chris Painter: Former US Cyber Ambassador, chair of the Global Forum on Cyber Expertise
– Elena Plexida: Representative from ICANN
– Moderator: Remote moderator for the session
Additional speakers:
– Sai: Representative from Stop Killer Robots NGO
– Audience members: Brahim Alla (intern at Acedel Strasbourg) and Frances (with YouthDIG)
Full session report
Expanded Summary: Regulation of Autonomous Weapons Systems
This discussion, part of the EuroDIG (European Dialogue on Internet Governance) event, focused on the regulation of autonomous weapons systems (AWS) and their ethical, legal, and security implications. Moderated by Professor Wolfgang Kleinwächter from the University of Aarhus, the panel brought together diverse perspectives from diplomacy, industry, technical experts, and civil society to explore the challenges and opportunities presented by these emerging technologies.
Setting the Stage
Professor Kleinwächter opened the discussion by highlighting how perceptions of cyberspace have shifted dramatically over the past two decades. While initially seen as a tool for fostering peace and understanding, cyberspace has increasingly become an arena for conflict and weaponisation. This framing set the tone for the urgency of addressing AWS regulation.
Need for International Regulation
A central theme of the discussion was the pressing need for international regulation of AWS. Ambassador Aloisia Wörgetter from Austria emphasised the importance of developing binding rules and limits on these systems. She highlighted ongoing international efforts, such as the REAIM initiative by the Netherlands and South Korea, and the US Political Declaration on Responsible Military Use of AI and Autonomy, as steps towards promoting responsible military use of artificial intelligence. Wörgetter also mentioned ongoing UN consultations on AWS and referenced an “Oppenheimer moment” in relation to the development of these technologies.
However, Chris Painter, former US Cyber Ambassador, expressed scepticism about the likelihood of reaching international agreements in the short term due to current geopolitical tensions. He argued that these tensions outweigh the ability to achieve consensus, particularly within the UN framework.
Technical and Ethical Considerations
Anja Kaspersen, representing the technical community (IEEE), provided crucial insights into the transformative nature of AI in military contexts. She argued that AI is not merely a weapon system but a methodology that reorganises how warfare is conceptualised and operationalised. Kaspersen highlighted concerns about the reliability of AI systems, the risks of over-reliance, and challenges in procurement and oversight. She emphasised the need for robust governance frameworks and the importance of understanding commander’s intent in the context of AWS.
Benjamin Tallis, from Helsing, a European defence AI company, offered a contrasting perspective. He argued that AWS could enhance precision and effectiveness in military operations, potentially reducing human casualties for democracies. Tallis stressed the importance of maintaining human control and accountability in these systems, as well as the need for explicability in AI decision-making processes. He also introduced the concept of a “drone wall” on the eastern flank as a defensive measure.
Military and Strategic Implications
The discussion touched on the changing nature of warfare and military technology. Kaspersen noted a shift in innovation from military to civilian sectors and back, with civilian technologies increasingly influencing military capabilities. Tallis contextualised the debate within current geopolitical realities, using the example of drone usage in Ukraine to illustrate that autonomous systems are not a “silver bullet” but part of a broader military ecosystem.
Concerns were raised about the potential for AWS to be hacked or misused, as well as questions of accountability for their actions. Chris Painter highlighted the risk of cyber attacks on AI systems and critical infrastructure, while Elena Plexida from ICANN emphasised the need to protect core internet infrastructure. Plexida specifically referenced the work of the Global Commission on the Stability of Cyberspace in this context.
Governance and Accountability
The panel agreed on the importance of a multi-stakeholder approach to AWS governance, involving governments, industry, civil society, and technical experts. Kaspersen highlighted challenges in procurement and oversight of AI systems, while Tallis emphasised the importance of intent and traceability in the use of autonomous weapons.
Unresolved issues included ensuring meaningful human control over AWS, making AI systems sufficiently reliable and explainable for military applications, determining responsibility for unintended harm, and balancing military effectiveness with ethical and humanitarian concerns.
Audience Engagement
Audience members raised important questions, including concerns about the potential for asymmetric warfare and overuse of autonomous weapons. One question addressed the possibility of shutting down entire regions as a warfare strategy, while another focused on the potential for hacking AI battle systems. Panelists responded by emphasizing the importance of protecting critical infrastructure and the need for robust security measures in AWS development.
Conclusion and Future Directions
While disagreements existed on some points, participants agreed on the need for continued dialogue and careful consideration of the ethical and practical implications of AWS as their development progresses. The discussion highlighted the rapid pace of technological development in this area and the closing window for preventive regulation.
Key takeaways included the urgent need for international regulation despite geopolitical challenges, the importance of a multi-stakeholder approach, and the complex reshaping of military decision-making by AI and autonomous systems. The panel suggested continuing discussions at future forums like the Internet Governance Forum in Oslo and working towards a legally binding instrument on AWS by 2026, as called for by the UN Secretary General.
In her closing remarks, Ambassador Wörgetter expressed optimism about eventually reaching an agreement on AWS regulation, emphasizing the importance of global democratic processes. Anja Kaspersen stressed the need to navigate uncertainties and sit with contrasting realities as the field of AWS continues to evolve.
As the debate concluded, there was a sense of cautious optimism about eventual progress on international regulation, tempered by an acknowledgement of the significant challenges ahead. The discussion underscored the complexity of the issue and the need for ongoing, nuanced exploration of the technical, ethical, and geopolitical dimensions of autonomous weapons systems.
Session transcript
Wolfgang Kleinwächter: It’s one o’clock, so we are waiting for Anja Kaspersen. Good afternoon, everyone. Welcome to the session on the Regulation of Autonomous Weapon Systems,
Moderator: navigating the legal and ethical imperative. My name is Istimarta and I will be the remote moderator for the session. So for now, I will be reading the rules for our remote audiences. So first, for the remote audiences, please enter with your full name. And to ask questions, raise your hand using the Zoom function, and you will be unmuted when the floor is given to you. And when speaking, please switch on the video, state your name and affiliation, and please do not share the links to Zoom meetings, not even to your colleagues. So for now, I will be giving the floor to our moderator, Professor Wolfgang Kleinwächter from University of Aarhus. Thank you very much and welcome to our session.
Wolfgang Kleinwächter: As you know, we are living in difficult times, and while everybody agreed 20 years ago that cyberspace and the digital sphere would contribute to a more peaceful world and to better understanding among nations, we have realized in the last 20 years that cyberspace is also an arena for conflict, conflict among nations, and a process has started in which cyberspace has become weaponized. During the recent Munich Security Conference, we saw a lot of discussion about how this space, cyberspace, but also outer space, has now been pulled into the debates of military experts. We have already seen a lot of negotiations within the United Nations, but also under the umbrella of the Convention on Certain Conventional Weapons, the CCW, where we see a discussion about new types of weapons, which we call autonomous weapon systems, AWS. The Secretary-General of the United Nations produced a report last year, which led to a resolution sponsored by Austria, with the outcome that today and tomorrow there will be informal consultations in New York City about this new type of weapon. And that’s why, with the help of the Austrian government, we have decided to bring this very crucial, delicate and complicated debate to a broader public, so that we have a better understanding of the consequences of, quote-unquote, the weaponization of cyberspace. So we started with an outreach workshop during the IGF in Riyadh in December, where we had the first round of discussion; this is now the second in a series through which we want to reach out more to the European public, and there will be a third workshop in Oslo in June, when we have the UN-sponsored IGF. So the session here is mainly an informal session in which we inform the public about what’s going on, and we hope we’ll also have a very good discussion. We are, Anja is here, okay, great. Unfortunately, we are still missing Vint Cerf, who wanted to give a short opening speech because he also helped us to organize the workshop in Riyadh, but he is in Los Angeles and it’s three o’clock in the morning, probably a little bit too early for him. If he arrives, our remote moderator will give us a signal. So we have a good panel which gives you different perspectives. We have the Ambassador of Austria to the Council of Europe, Madame Wörgetter, who will inform us about the ongoing negotiations. We have Mr. Tallis from Helsing; this is the industry perspective. This is one of the new rising industrial stars in Germany, which has specialized in the production of one type of autonomous weapon system, mainly drones. We have Anja Kaspersen; she is from the technical community. She will speak a little bit about the technical perspective and how realistic or unrealistic the debate about human control over all this is, because human control and human oversight is a key issue in the debate. And we will then have some comments from the online commentators. We have Chris Painter, who was the first US Cyber Ambassador in Washington. He was then for many years the chair of the Global Forum on Cyber Expertise and is now engaged in Geneva with UNIDIR, dealing also with these issues. Unfortunately, Marietje Schaake, a former Member of the European Parliament from the Netherlands who is now a member of the Global Commission on Responsible AI in the Military Domain, is conflicted and cannot make it. But we also have Elena Plexida from ICANN online, and we have a representative from the NGO Stop Killer Robots.
She’s from India and will give us a civil society perspective. So this is more or less the program, and now I give the floor to Madam Ambassador. Thank you very much.
Aloisia Wörgetter: Thank you. Yes, that works. Thank you, Professor Kleinwächter. Dear colleagues, ladies and gentlemen, I see many of you here. A special welcome to everybody with a strong connection to Austria. My colleagues in Vienna, disarmament experts, have asked me to speak to you on their behalf. As you know, the Council of Europe deals with human rights, the rule of law and democracy, but has no specific mandate for defence issues. Still, we found it very important that this topic is dealt with at EuroDIG in connection with the Council of Europe here. I want to thank you, Professor Kleinwächter, for moderating this session, and I want to thank all the distinguished speakers, present and online, for joining us and contributing to this timely and important conversation. Like all transformative technologies, the application of artificial intelligence in the military domain is advancing rapidly. These developments promise to make tasks faster, easier and more accessible. Yet, as in the civilian sector, they demand robust guardrails and limitations to ensure that artificial intelligence is used in a human rights-based, human-centred, ethical and responsible manner. While the civilian domain is increasingly governed, and thank goodness we do find consensus on these things, with the Council of Europe’s AI Convention, the first legally binding international treaty on AI, and the European Union’s AI Act, the first comprehensive global regulation, the military and defence sectors still lag behind. And let me state here that Austria supported, during the negotiations for the Convention on Artificial Intelligence, the inclusion of the defence sector, but we were not successful in this regard. National security considerations have largely excluded these domains from such instruments, and no similar binding frameworks exist to date. We therefore support ongoing international efforts to promote responsible military use of artificial intelligence. These include the REAIM initiative by the Netherlands and South Korea, and the US Political Declaration on Responsible Military Use of AI and Autonomy. Today, we focus on one of the most critical and sensitive issues in this broader field: autonomous weapon systems, systems that can select and apply force to targets without further human intervention. AWS raise fundamental legal, ethical and security concerns. These include the necessity of meaningful human control to ensure proportionality and distinction, the need for predictability and accountability, and the protection of the right to life and human dignity. There are also serious risks of proliferation and of a destabilizing arms race in autonomy. These topics will be explored by our panel, and I want to link back also to the panel that opened EuroDIG this morning, where the execution department of the Council of Europe reported on the case law of the European Court of Human Rights. We are concerned about these developments, and therefore Austria has taken a leading role in advancing international regulation of AWS. Last year, Austria hosted the Vienna conference Humanity at the Crossroads to examine the ethical, legal and security implications of AWS and to build momentum for international regulation. We strongly support the joint call by the UN Secretary-General and the ICRC President to conclude negotiations on a legally binding instrument by 2026.
Over the past decade, valuable discussions have taken place, notably within the Group of Governmental Experts in Geneva and the Human Rights Council, where a growing majority of states agree on the need for international regulation, including prohibitions and restrictions. However, moving from discussion to a formal negotiation mandate remains difficult. Geopolitical tensions, mistrust and the reticence to regulate these fast-paced technologies are slowing progress, even as the window for preventive regulation is closing rapidly. Professor Kleinwächter has just mentioned that we supported and championed the first-ever resolution on AWS in the UN General Assembly in 2023. You’re aware that this mandated a UN Secretary-General report, and last year we also sponsored the follow-up resolution, which was supported by 166 UN member states. These consultations complement the Geneva-based efforts, and Professor Kleinwächter has already mentioned that these negotiations are taking place today and tomorrow in New York. I want to speak briefly about the need for a multi-stakeholder perspective. From our point of view, the global discourse must extend beyond diplomats and beyond military experts. The implications of autonomous weapons systems affect human rights, human security and sustainable development, and they concern all regions and all people. We therefore advocate a multi-stakeholder approach. Contributions from science, academia, industry, the tech sector, parliamentarians and civil society are essential to ensure a holistic and inclusive debate. We welcome that the Council of Europe Parliamentary Assembly already in 2023 supported a resolution on the emergence of lethal autonomous weapons systems, which references relevant international and European human rights law. We aim to broaden the discourse through outreach, like we are doing right now, such as the AWS session that we hosted at the Internet Governance Forum in Riyadh last December, and we will continue the conversation at the Internet Governance Forum in Oslo in June. Let me just, in concluding, reiterate the urgency to act. We find humanity is at a crossroads. We must come together to confront the challenges posed by AWS. We think that we are in an Oppenheimer moment. Advocates from across disciplines are warning of the profound risks and irreversible consequences of an unregulated autonomous weapons arms race. There is urgency to finally move from discussions to negotiations on binding rules and limits. And as AWS technologies evolve, the gap between regulation and reality continues to widen. So we need decisive political leadership to shape international rules. We believe that a multi-stakeholder exchange will contribute considerably, and we will remain engaged; my colleagues have been working on disarmament for a long, long time, which is also an element of our active neutrality. We’ll continue the conversation, and I’m looking forward to it. Thank you.
Wolfgang Kleinwächter: Thank you, Madam Ambassador. And I will announce already now that we have reserved some time for interactive discussion, because EuroDIG is a dialogue and we want to get you involved, so prepare your questions or your comments while we hear from all the panelists. But now a great welcome to Mr. Tallis from Helsing. I think in this context it’s the first time that we have a representative from the industry. But as Madam Ambassador has just said, a multi-stakeholder approach is needed and we have to hear all voices; it’s bad if some stakeholder groups are sitting in their silos. So you are most welcome and you have the floor.
Benjamin Tallis: Thank you very much indeed, Professor Kleinwächter. And thank you for revealing now that I’m the first representative of the defense industry to speak in this format; I’m braver than I thought, in that case. Thank you also to the Ambassador for excellent scene-setting remarks. And coming from industry, I’m obviously here with a very fancy PowerPoint presentation to show you why everything is going to be fine. Well, you’ll notice I don’t have a PowerPoint presentation, and I’m not here to tell you everything is going to be fine. Helsing, I should clarify, is not a drone maker, although we do make drones. What we actually do is make battle networks, extending from all sensors to all shooters, using AI to enhance the kind of battle networks that we can field, which allow us to make better decisions based on better understanding and take more effective and precise actions. So this relates very much to some of the things that the Ambassador already mentioned. We don’t just make drones, and I’m not here to be a salesman for drones or any other technology. My role with Helsing is what they call thought leadership, which involves exactly this: engaging with third-party stakeholders, with a multitude of different actors, to have that kind of multi-stakeholder dialogue, to ensure that we’re aware, first of all, of all the necessary discussions that are going on that affect what we’re doing, but also to make sure that others involved in those discussions are aware of what we’re doing, what we provide, and also where the industry is on these issues. Today I speak on behalf of myself, but you’ll get an idea of where we stand. Now, before joining Helsing, I was not a professional defence industry person. I was a think tanker. Prior to that I was an academic, and I’ve been a government advisor working on European security in various capacities for about 20 years, including working in the field for the European Union on security missions in the Balkans in the post-conflict period there, and also in Ukraine, going back about 20 years, which is where a long association with that country began. In those capacities, when I was working also with diplomatic status, I had the chance to engage with people from the Council of Europe, as well as many civil society groups and many others who were deeply concerned with human rights, with the principles of humanitarianism, with upholding the values that actually make our democracies different from the authoritarian regimes by whom we are so clearly challenged at the moment. So that perspective informs the remarks that I’ll make today. It’s no secret that we are in an increasingly competitive and increasingly hostile geopolitical climate. It was mentioned that we’re seeing a destabilizing arms race. Well, I would put it to you that while it’s bad to be in an arms race, it would be worse should we lose that arms race to authoritarian regimes who have far less honorable intentions for their peoples, and indeed for the world, than our democratic societies do. We can see that one aspect of this competition does involve emerging defense technologies, including autonomous weapon systems, and it’s an area to which we give considerably more care than our adversaries in Russia, in China, and elsewhere do. And that’s good; that’s part of what sustains us as democracies.
And it’s very important that while we work to ensure that we have the military capabilities, as well as the demonstrated resolve to ensure deterrence, we do that without undermining the democratic values that again set us apart and which give our citizens the kind of right to a hopeful future, which is the unique selling point of liberal democracies when they are at their best, and again sets us apart from our authoritarian competitors. Now we’ve seen this competition in emerging defense technology as well as in geopolitical power positioning in microcosm in Ukraine. And while a lot of people would say there’s huge amounts of transferable lessons to be learned from the Ukrainian experience, others would say, well, the Ukrainians have made virtues of many necessities, limitations of their weapons systems and so on, such as lack of air power, that don’t affect us. I think there’s an awful lot we can learn from what’s been happening in Ukraine. Not necessarily, and this might surprise you, not necessarily because there’s something truly new happening. What I would suggest is happening in Ukraine is actually the culmination of a 50-year process of military transformation that began in the 1970s. Many of you will be familiar with William Perry, Undersecretary of Defense at that time in the U.S., who famously said, our aim is to be able to see any high value target on the battlefield, to strike any target we can see, and destroy any target we can strike. That ushered in what was known as the precision networked warfare revolution, which only now do we fully have the technology to be able to exploit through massed precision strike weapons, massed persistent sensors that we can afford to field, and the kind of battle networks that can actually link those things up in a sensible way. What is the evolution there, rather than the revolution, is that because of AI, we’ve been able to make these battle networks efficient in a way that we weren’t before. That means humans are no longer brute-forcing massive amounts of data through networks that can’t handle them. Humans are no longer fat fingering, as the US military calls it, data from one machine to another that can’t talk to each other. We’re now developing the ways that we can get our intelligent machines to talk to each other. So, again, this is not necessarily new. It’s the culmination of that process, but it’s also the beginning of another process, the revolution in military affairs to come from autonomous systems, from robotics, from artificial intelligence, quantum computing, additive manufacturing, and so on. But we don’t know yet what shape that revolution will take, but we need to be prepared for that industrially, governmentally, strategically, and indeed ethically. Focusing today on what we’ve already seen, though, it’s not new in another way either. Everything that we’re seeing in terms of the ethical discussion about autonomous weapon systems, including the strike drones, intelligence surveillance and reconnaissance drones, and other systems being used in Ukraine, and which our militaries are starting slowly to procure, relates to older discussions about military affairs. What we’re essentially talking about is command and control. The whole discussion, or the whole organization of military affairs, has been based on the principle of command and control since time immemorial. What is this? It is the delegation of bounded autonomy to conduct particular tasks. 
Now, I’m not the kind of Silicon Valley enthusiast who will tell you artificial general intelligence is just around the corner; I think we’re quite a long way off artificial general intelligence. But until we get to a stage where we are able to talk about that, what we’re again talking about is the delegation of particular tasks, in this case to machines rather than to humans. Now, obviously that has implications for how we understand this, but the principles remain the same. When military commanders delegate to their subordinates, they do so on the basis that those subordinates are trained. They’re trained to do the task required of them. We do it on the basis that they have been tested at doing that. And because they have been trained, and they have been tested, in order to be predictable, to be reliable, foreseeable in the things that they do, and thus also to be effective in what they do, they do what they’re supposed to do, and we can trust them. And on this basis of training, testing and trusting, I don’t actually think there is a significant difference between delegating many of the tasks involved to a lower human authority or to a machine. And guess what? We’ve been doing this for a long time. So again, not actually something necessarily new. Any so-called beyond-visual-range engagement, for example in air-to-air combat, has contained an element of this delegation: delegation from a pilot, to a radar and targeting system, to a fire-and-forget missile. That’s delegation. Further back still, delegation to dumb bombing, dropping a bomb over a target to try and hit it, which we were terrible at for an awfully long time. Even artillery beyond visual range contains an element of exactly the same question. The difference now is that we can actually be more precise, and we are much more likely to be precise than we were before. And if you do go back and look at the history of strategic bombing, for example, which I doubt is a favorite occupation in this building, but nonetheless I will prevail upon you, the history is that we have been terribly inaccurate and terribly ineffective at it, causing massive amounts of collateral damage. So I would put it to you that advances in precision that follow the same rules of delegation are a potential advance for democracies. The other aspect of this, of course, is that democracies do not want to fight wars of attrition. We value our people too much. We actually want to have the kind of precise weapons, and make use of the kind of asymmetric capabilities, that reflect our inherent advantages as societies, our unique selling point of human creativity amplified through the market mechanism and allied to government strategy, which gives us the edge over our authoritarian rivals if we leverage it. So again, with that said, and I’m happy to talk about an example of this that Professor Kleinwächter asked me about, what Helsing and others call the drone wall on the eastern flank, I’d rather do that in the questions, in order to set out this clear position first of all. So I would put it to you that it’s incumbent upon us to think through these ethical questions, but not to lose focus or get misdirected when doing so.
Not to confuse means and ends, not to confuse actions with the actors or actants that we delegate them to, and not to confuse quote-unquote killer robots with the kind of battle networks, the kind of technology that can actually put humans where they most need to be by making more informed decisions, faster, in more effective ways that would drive the better kind of actions that democracies seek. Not only to be more precise in doing the awful things that we don’t like to do but we have to do in war, but in order to be able to win and to be able to use our strengths as democracies to actually prevail against the geopolitical and military challenge that we face today. which, if we fail to rise to, would have dire consequences for any of the kind of discussions we’re having today and for our democratic societies more widely. So with that, I’ll leave you there as the opening statement, and I look forward to discussing more on the specifics, including about the drone wall, in the questions.
Wolfgang Kleinwächter: Thank you. Thank you very much. And Anja, you are a representative from an organization of engineers; I think you have 100,000 members in the IEEE around the world. In Riyadh, we had Wim Mohammed, the CTO from Digital Identity, and he gave us a perspective and said, you know, even if you have perfect software, you have some bugs in it, so don’t trust all this technology. So you are dealing with this issue from the technical perspective. What are your comments on the diplomatic and industry perspectives, if we trust you? Thank you.
Anja Kaspersen: Thank you so much, Professor, and I should first actually correct you a little bit on numbers. We are almost half a million members globally, and that counts just the membership, not the larger ecosystem, which is in the millions, and we are across 190 countries around the world. And we have been around for close to 141 years; this was an initiative that came out of efforts by pioneers like Alexander Graham Bell and Thomas Edison at the time, and that’s why I’m mentioning the history of it, around a core principle of how you advance technology while keeping humanity safe. A core part of this work was also creating standards to make sure that all these good initiatives could interoperate with one another without, for example, electrocuting us in the process, et cetera, et cetera. So the way most of you are connecting with one another in this room, be that integrated devices or the Wi-Fi you’re connecting to in the Council of Europe, that’s actually IEEE standards. Almost everything that connects everyone in this room is one of our underlying standards. But I’m just mentioning the history of this organization because we don’t only do that; it’s also about scientific integrity, about dialogue, about scientific collaboration. So that’s what this group is doing worldwide, and why societal issues such as the one that we’re discussing today are not something that we’ve been focusing on only in the last few years, but something that has been at the core of its existence from the beginning. So if you allow me, Professor, because we all got very strict time limits, unusually for me I actually prepared some remarks, answering the questions that you just asked me. First of all, thank you to Austria for the opportunity to intervene on this critical issue. I was lucky enough to be at the inauguration of these efforts in Vienna last year, in the Hofburg. And I should say, for those of you who may not know me, I have a very varied background, including in diplomacy. I was the former director for disarmament affairs in Geneva, where I oversaw some of these processes, including the CCW, and tried to make a real push, perhaps at that time, to move a little bit away from what I called the 10,000-foot perspective and down to more practical considerations that allowed people such as, you know, my colleague here at my side to engage differently in this process. So I think that’s an important thing: how you frame this discussion can be quite alienating, or it can be inclusive, depending, right? And I’m sure from industry, you have experienced that. So I speak today not only from the perspective of the technical community, but also as someone who has long been engaged in international governance, including overseeing these efforts in Geneva, and contributing for decades to initiatives aimed at developing a coherent multilateral framework on the military use of technologies, as well as the broader strategic, operational, tactical and, not least, and I mention this because it’s very important and often forgotten, cultural and societal impacts, including on civil preparedness. There’s a lot of focus on civil preparedness right now, so what I’m about to say relates to that as much as it relates to the question at hand.
What I want to offer is not a summary of technical challenges, which I think are by now well understood, though I would of course be happy to field questions from any of you after this meeting. What I want to focus on instead is a framing of what is structurally at stake and why, from a technical standpoint, some of the most urgent questions remain inadequately addressed. First, we must stop treating AI as a bounded technological tool. AI is not a weapon system in the traditional sense. It is a social, technical and economic methodology, if you will. It reorganizes how war is imagined, operationalized and bureaucratized. It alters the concept of decision-making itself, shifting authority away from experience and judgment toward inference and correlation. What this means in practice is that the challenge is not simply how to use AI, but how it reshapes the very infrastructure of responsibility and intent. One concept that is routinely overlooked is commander’s intent. This is not a checklist or an input. It is a deep cognitive and ethical practice of anticipation, discernment and alignment across dynamic conditions. In human-to-human operations it is already complex; in human-machine interaction it becomes nearly impossible. Systems that do not and cannot reason are being asked to infer intent, respond to shifting environments and remain predictable, without the contextual understanding this requires. Special forces are trained precisely for this kind of discernment: to override instinct, interpret ambiguity and exercise calibrated judgment. These are human traits, tactical and moral, that no current complex information structure or machine learning system is built to replicate. That brings me to reliability. Reliability is not a static attribute. These systems adapt, drift and behave differently in different contexts. A model may function perfectly and still fail ethically, operationally or politically. It may perform as intended and still degrade trust or escalate instability and trigger proliferation. This is an important point when we discuss compliance with international humanitarian law. Can something be in compliance and still be harmful? Can something be compliant in war but highly non-compliant in peace? We have to think through these scenarios. Over-reliance is not just a technical risk. It is an operational risk. It is a governance risk. And yet we routinely see systems treated as reliable in ways that ignore context, fragility and institutional constraints. Another important point: procurement. Not a conversation that happens very often when we discuss these issues, and, in my view, one of the most overlooked ethical fault lines. Most institutions, military or otherwise, do not build AI systems. They procure them. Increasingly, these systems are pre-trained, modular and abstracted from operational realities. This introduces profound misalignments, especially when end users have little involvement in setting technical specifications. And this relates to any of you who also work in public governance and who may have been involved in your government’s or company’s procurement processes. These are very important issues. Let me flag some work here, not because I’m selling anything, but because it might provide a lot of insight for those in the room. IEEE issued something called IEEE P3119.
Make a note of it: P3119. It’s a cross-sector global procurement standard, or more like a practitioner’s handbook, that helps organizations, companies, governments and militaries to interrogate vendor claims, clarify assumptions and surface hidden risks before integrating or embedding AI features into any form of system. It includes questions not just for engineers, but for policymakers, legal experts and institutional decision-makers. Because this, in my view and also my institution’s view, is where managing things with ethical considerations, and true governance, begins. We must also be cautious about the language used to frame these systems. Terms like responsible AI, trustworthy autonomy or ethical automation suggest a coherence and controllability that do not reflect how these systems actually operate. From a technical perspective, these labels often obscure the fact that many of these systems are built on failed approximations, trained on proxy data, deployed in contexts their designers never anticipated, and governed by assumptions, including about winning, what is winning on today’s battlefield, right, and by dynamics that are not always visible to users. The failures that will matter are unlikely to be those we plan for. They will not look like system crashes. They will look like misalignments between logic and lived reality. Instead of projecting responsibility onto the system, we should talk more seriously about responsible decision-making processes at the human and institutional level. Responsibility lies not in the tool, but in the processes and choices that govern its design, deployment, oversight and use. When that distinction is blurred, vulnerability becomes harder to trace and governance risks become symbolic rather than substantive. Everyone in this room, here at EuroDIG, knows that data is the very backbone of AI-enabled systems. And yet, despite this recognition, data often remains backgrounded in this debate, treated as ambient infrastructure rather than a strategic asset. But data is never just there. It is collected, conditioned, labeled and selected, always by someone, for some purpose, under particular constraints. We must therefore ask: whose data is being used? How was it obtained? Why was it chosen? And for what outcome? These are also important questions in this debate. Questions of data integrity, veracity, provenance and security are not academic, nor do they pertain just to the civilian domain. They are central to both performance and trust. The risks of tampering, poisoning and silent drift are real, particularly in military and intelligence contexts. If we do not account for the full data pipeline, we cannot account for the system. It’s very important that we talk about weapons reviews. This brings me to infrastructure, because AI systems do not operate in isolation. Most current deployments rely heavily on legacy hardware and network-centric architectures that were not designed for systems with autonomous features. These architectures introduce friction, fragmentation and vulnerabilities, especially when retrofitted to accommodate high-intensity compute loads. This also risks undermining interoperability, particularly in joint or cross-force environments, where systems are expected to function across organizational, national and technical boundaries.
This is precisely why robust, internationally applicable technical standards are so important in this domain, especially where systems must communicate, adapt and escalate decisions across contexts and constraints. And this leads directly to the question of energy. Advanced AI systems, particularly those involving real-time inference or large-scale simulation, are computationally intensive. That means they are highly energy-intensive. So any serious conversation about AI, as well as about cyber-reliant or network-centric warfare, is not just a conversation about power in the geopolitical or socio-economic sense; it is about power in the literal sense: electricity, resilience, energy availability and infrastructure security. Governance frameworks that overlook this are not just incomplete, but strategically short-sighted. This is why our anticipation strategies must change. Governance must shift from a logic of prediction to one of adaptation. Systems need to be designed not only to perform, but to fail safely and visibly. That requires institutions to develop memory, reflexivity and the ability to surface weak signals before they become structural liabilities. Here I would also flag another process, which maybe even some in this room have been involved in, because it has been large-scale work for years: the IEEE P7000 series. It was developed to guide ethically aligned design across sectors by supporting practitioners in identifying stakeholder values and translating them into system requirements from the outset. When this approach was launched, now many years ago, and adopted across the world, it caused a critical shift in understanding: ethical considerations must be architected into design, not added later just as an assurance. Because design decisions are never neutral. They determine what is seen, what is measurable, and what forms of harm and risk are rendered invisible. These decisions shape how systems respond to ambiguity, and how power and discretion are distributed. They are political, even when framed as technical. And once baked into architecture, these choices often become inaccessible to oversight or review. Governance must begin by recognizing this. Effective oversight is not simply a matter of control at the point of use. It depends on tracing responsibility back to the layers of abstraction and specification where many of the most consequential decisions are made. This includes questioning who designs, for whom, with what assumptions, and against whose values. And I’ll come to the end here. I just want to say that language plays a key role. A few years ago, while working with the CCW states parties, I led what we call a computational text analysis of national statements and working papers. It revealed a striking difference in how core technical and military concepts were framed, particularly around definitions, system limitations, mission command and human oversight. I see this divergence still persisting today, and it continues to undermine efforts to build a shared foundation for governance. I give this example because I am part of multiple multilateral efforts, and I see this being a common trend. A term like redundancy might refer to fault-tolerant architecture in engineering, but to inefficiency or duplication in policy. Safety might indicate statistical reliability in one field and physical or humanitarian protection in another.
Even the term reliability can refer to technical precision, political stability or normative acceptability. These are not minor misunderstandings; they shape procurement, deployment, review and oversight, and they create governance gaps that are filled by assumption. What matters is not just taxonomy, but comprehension. Understanding how terms are used and understood in practice is essential, particularly if we are serious about building a governance framework that focuses on convergence around baseline standards. This is urgent. I want to conclude by returning to an ethical point, speaking strictly in my personal capacity. In his work, Christopher Coker, my late professor at the London School of Economics, warned of the dangerous illusion that technology could sanitize violence, that increased automation or distance could somehow make war more humane. It cannot, nor can it help us define what winning means, nor should it. Technology may obscure the moral weight of decision-making or create abstraction where there was once contact, but it does not eliminate responsibility. So the challenge before us is not simply one of technical control; it is about governance and about the kinds of institutions and cultures we want to build. It is about listening, not for consensus, but for the conditions that allow disagreement to be meaningful and oversight to be real. And I think that’s something this conversation could really benefit from. Thank you.
Wolfgang Kleinwächter: Thank you very much, Anja. As you see, if you dig deeper, the complexity grows. And I think this is a good opportunity, in this environment here, to get many perspectives so that we get a full picture. We will now hear three shorter comments online, and then I hope we can enter into a discussion with Q&A. So, Chris, you have a couple of minutes just to comment on what you have heard, and with your background, you are best positioned. I introduced you already. Chris, you have the floor.
Chris Painter: Great. Thank you. It’s been a good discussion. Hopefully you can hear me. Can you hear me all right? Yes. Okay. So I come at this from a cybersecurity perspective; that’s certainly been my background. And a couple of things. One was just mentioned: the vulnerability of command and control systems, including AI systems, to cybersecurity attacks. That’s not something that’s new, but it is a challenge. We’ve talked about this in the nuclear area, with nuclear command and control: even when they separate them from the Internet as a whole, there are other dependent systems that could be susceptible to attacks. So aside from all the concerns about how AI is trained and how it’s used, there is also the concern about whether it is made less reliable by cybersecurity attacks from adversaries, who could make it much less reliable and amplify all the problems we talked about. The other thing is that we’ve also talked in the cyber realm for a long time about cyber offensive operations, about the speed of the Internet and how we have to respond faster, and about automating cyber offensive operations to take humans out of the middle. Now, those are likely not as destructive as the attacks we’re talking about here with kinetic weapon systems, but they could be destructive; they go after critical infrastructure and other targets. There’s long been a debate about how autonomous they can be, for all the reasons that we just heard, how they’re trained, how they’re used. And I think that poses a huge problem here, and I don’t think we have a real solution to it without having humans still in the middle, rather than an entirely automated system. And then the final thing I want to talk about is the geopolitical considerations. I know there is an OEWG or a GGE looking at this in this context, and there’s been an OEWG of all the countries in the cyber context, the cybersecurity context. But the problem there, and this is more true than ever before, and I don’t want to be too much of a damper on this, is that the geopolitical considerations outweigh any ability to really reach an agreement. And though I applaud the effort to try to do some binding approach to this in the UN, I think that’s going to be, at least in the short term, very, very difficult. And that’s what we’re seeing across the board in cyber and all technological issues, really in all issues, where there’s such division within the UN and other international venues. And we’ve seen the US, for instance, pull back from any kind of AI guidelines that would establish guardrails, for the reasons that were noted, of not wanting to constrain themselves, which is coupled with the lean to be more offensive in cyberspace, but also in other areas too. And that complicates this issue as well. So, not to paint an overly non-rosy picture, but I think there are a lot of concerns on the horizon. That doesn’t mean we shouldn’t talk about this; it doesn’t mean we shouldn’t have these efforts. I just don’t have a huge amount of confidence we’re going to make progress in the short term.
Wolfgang Kleinwächter: Thank you very much for your realistic outlook. And anyhow, it’s on the table and we have to discuss it. So, Stop Killer Robots as an NGO has been involved in this from the very early days, and Sai is with us from India. Sai, perhaps you could comment on what you have heard this morning.
Sai: Well, these were really interesting conversations that I heard. I’m really glad to be part of this. Thank you so much for having me here. For Stop Killer Robots and from civil society, one of the biggest concerns is that we believe autonomous weapon systems will not be able to address the ethical, legal, humanitarian and moral implications they present. And especially, they will not be able to comply with international humanitarian law and various provisions of it, including distinction and proportionality, being able to differentiate between a combatant and a non-combatant, and so on and so forth. Apart from this, military technology historically has percolated into civilian uses. Such systems then don’t just create problems for international humanitarian law, but also raise questions about the implementation of other international law, like international human rights law, international criminal law and so on and so forth. So I think it is very important, at this present state of geopolitics, to properly assess how international law will be upheld with the advent of weapon systems such as autonomous weapon systems. What we believe is that the way forward is through a legally binding instrument on autonomous weapon systems that completely bans autonomous weapon systems that are not able to comply with international humanitarian law, and regulates other weapon systems which cannot be used with meaningful human control, or which otherwise lack basic understandability and the ability to hold people accountable, as required by international humanitarian law. Because there’s a paucity of time, I will stop there, but these largely seem to be our issues with autonomous weapons systems. Thank you.
Wolfgang Kleinwächter: I will stop there, but these largely seem to be our issues with autonomous weapons systems. Thank you. Thank you very much, Jutta. My understanding from the discussion in the CCW is that they have agreed on a two-tire approach. They said, okay, probably we could prohibit weapons systems where human control is impossible and we can regulate weapons systems where you have certain type of human control. But the question, what type of human control is realistic, this is another question. But I think to have this differentiation, I think it’s important to have at least a realistic way forward. So that means, you know, if you cut it in smaller pieces, it’s probably easier to negotiate. We have now the rolling text, and let’s wait and see what will happen until the end of 2025. And, you know, Guterres has set a deadline for 2026 for legal binding document. Chris has just told us that it’s rather unrealistic against the background of the geopolitical tensions. So I think all these are open questions on the table. But before I ask you to prepare your question, let me move to Elena Plexida from ICANN. I think Anna mentioned also the infrastructure which is needed, and ICANN managed one of the most important infrastructure in the digital world. It’s the domain name system, the root server system, and so that means everything which goes over the Internet needs a functioning ICANN, a functioning IP address and domain name system. So, Elena, you are not directly involved, ICANN is not directly involved in this debate, but you could be affected. So what is your view about this rather, not totally new, but new issue in this Internet community?
Elena Plexida: Thank you, Wolfgang. Thank you very much. Hello everyone. Yes, exactly. As you said, I work for one of the organizations that help maintain what we all know as the global internet. And in fact, maintaining the global internet and the work around it is a collective effort. There’s a togetherness in this one; it’s a peace project, in fact. So being part of this discussion is, for me, a little bit remote. But then again, peace and stability are something that you have to work for and safeguard. Hence, the discussion about rules is really relevant. Others did mention the current geopolitical ecosystem and its deterioration, and the difficulty, in such an ecosystem, of agreeing on norms or rules. But I would say that particularly because of this deterioration, adhering to existing norms, or creating new ones where they are needed, is super, super relevant. As regards technological developments, again, they’re not in our sphere, as you very rightfully said. But to me, it seems that the technological developments are so fast that, if my understanding is correct, it becomes even more difficult to land on an agreement with respect to the use of autonomous weapon systems. Then we have two challenges really: the difficulty of creating unbiased AI systems, and the possibility of jailbreaking AI systems through prompt engineering. Here, I want to highlight the undoubted value of, and the need to involve, technology experts in conversations such as the development of norms or regulation for the use of autonomous weapon systems. As the Ambassador said at the beginning, and of course other experts too, a holistic debate is indeed needed. Maintaining meaningful human control is one of the problems, apparently. Then, in addition, there is the use of such systems by non-state armed groups, if you will. Those are not really issues that ICANN is into, so I go directly to the norms, the suggestion or the idea that there need to be norms. And I think Chris mentioned that, if I’m not mistaken: kinetic weapons seem to be perceived as weapons that can do much more significant damage, including to the infrastructure that maintains the internet, but these kinds of systems could also do that. So I would say, undoubtedly, the most important thing is to look at the human aspect and at norms or regulation that make sure we do not dehumanize, so that we do not harm people. But if I may, I would say that together with that, we should also be looking into norms that are about the infrastructure. And here, I will repeat one of my favorite norms, which comes from the Global Commission on the Stability of Cyberspace, which you know very well, Wolfgang. It’s the norm about the core of the internet: to make sure that such systems, and other weapons of course, do not harm, or if you will, weaponize what we call the core of the internet, the technical parameters that are absolutely essential for the internet to function, such as the protocols, the DNS, the IXPs, and cable systems that support entire regions or populations, as that would constitute a threat to the stability of the global internet and, in turn, a threat to the stability of cyberspace. The internet is a common good, and as I said at the beginning, I think it’s a peace project. So yes, putting some thought into not threatening it, together with the other norms that are being considered, is something to add to the conversation. Thank you very much.
Wolfgang Kleinwächter: Thank you, Elena. And good to remember the recommendation from the Global Commission on the Stability of Cyberspace a couple of years ago: that the public core of the internet is seen as a common good. This was one of the conclusions from the Global Commission, where I had the honour to be a member: an attack against the public core of the internet would be seen as an attack against mankind, because it is like polluting the air or something similar; it should be seen as a crime. So far, the question is, with all the attacks against cable systems and other things we see now, how far this will go and which role AI could play in attacking the public core of the internet. This is a big challenge, a complicated question, and so we have to do something to avoid it. Law can be an instrument, but as we have also seen from the debate, it's difficult to reach an agreement in a geopolitical situation where we have more polarization than harmonization. Anyhow, we have now reached the moment where I would ask for questions from the floor. We also have some online questions. So if somebody wants to ask a question from the floor directly, yes, one and two. Please introduce yourself, and if you direct the question to one of the panelists, make that clear. It's always better to ask a panelist directly than to ask a general question, where we will have a certain confusion about who replies best. Okay, you go first.
Audience: Good morning, I'm Brahim Alla, intern at Acedel, Strasbourg. I wanted to ask very quickly a question related to, for example, the recent events in Spain. Would it be possible to imagine shutting down areas or regions or even countries on a voluntary basis as a future modern warfare strategy? And if so, do you have insights about the influence of such behaviours or events on autonomously guided weapon systems? Thank you. My name is Frances, and I'm here with YouthDIG. I had a question, I think, for Benjamin. So I do agree that just because there are major ethical concerns, that doesn't mean... I mean, obviously it means we need to think about this more, because it could influence warfare and practices in warfare so much that it's something people are going to want to mechanise and utilise. But I'm not asking about war, but rather about limited force. Because if you think about how America, especially under Obama, utilised a lot of drone strikes, we see that democracies, even though they want to protect themselves, also want to assert their ideologies even outside of war, right? So I think that if they have a technology that's more precise, that doesn't have any human cost to people of their own country, this would lead to overuse of these kinds of technologies. Because now you don't have civilian losses on your own side, but you have serious damage to people in those countries: psychological harms, possible strikes happening at any moment by technologies that aren't operated by humans. So it's not only about the precision and the people who are targeted specifically; I think it leads to overuse and also to a mental disconnect, right? Because now you think, well, we're only targeting the bad guys. But what data is telling you who the bad guys are, and what assumptions are being made by these autonomous weapons? So, in limited force, do you think this will lead to even democracies overusing this technology? Because I think the difference here is that there's no human cost, and so you get massive asymmetries in warfare and limited force, because now democracies aren't losing anyone. I think that's the crucial difference, and I would love to hear your opinion on it.
Wolfgang Kleinwächter: Thank you. Good questions. Now we go back to the online questions. Could you read it?
Moderator: So yeah, a question we have online is, would you consider a scenario wherein an enemy does not buy or make drones, but develop a counter-AI battle system to hack into even elaborately secure battle AI system? For instance, a takeover weapon mounted stones on air, on the ground, redirect and counter-target the drones that they don’t own. Would such scenario would be even remotely realistic? Okay, thank you. That’s a good question. I think it’s primarily for Benjamin and Anna. I think the first and the last is actually for Chris. Okay. Okay. Then I ask also, Chris, if you could…
Wolfgang Kleinwächter: Benjamin first. Sure. Benjamin first. Okay, go ahead.
Benjamin Tallis: Thanks. Yeah. Something very brief to say to all three. I do have points to come back to on Anja's excellent presentation as well, but we'll see. Do you want responses to other panelists? Yeah. Okay. Very good. This is the right moment. Okay. So very quickly to Brahim: great question. It's about resilience, grid resilience in this case. It's a classic case of one of Anja's misconstrued or multiply construed terms. Inertia was the key in Spain: the ability of a grid to withstand fluctuating power flows. Is that vulnerable to cyber attack? Yes. Is it vulnerable to multiple kinetic attacks by uncrewed systems? Yes, it is. So what is the answer? Build grids with more inertia, and distribute the power across the grid, distribute the control across the grid, which is precisely what edge computing and other advances like that allow you to do in military and non-military networks. That means putting the compute power in distributed locations rather than coordinating it in a central location, which is an easy hit. So very quickly to that one. Frances, superb question. Very much conditioned by the misadventures and terrible things that the West did in the last 25 years. The problem is not with the technology, I would argue. The problem was with the intent; the problem was with the analysis, and with our hubris there. There are big questions now about how we order a world that is not only safe for democracy, but in which free societies can thrive, learning from those huge errors which had massive human cost. Where the technology comes in relates to the Chris Coker point; I knew Chris as well, and knew many of his students. The whole notion of virtuous asymmetric war is that being detached, waging war through the screen and so on, relieves you of your human responsibility. That was quite widely shown by some studies not to be the case for drone operators, who suffered considerable stress. Now you might say that's nothing compared to what those on the receiving end were getting, but at the same time it shows there is not actually such a disconnect in the same way. And we're not in that situation anymore. We are not in a situation where we are fighting, quote unquote, wars of choice. We are not fighting limited wars with much weaker adversaries for marginal interests. We are in a situation of great power conflict; we're in a situation of peer conflict. There is no one in Ukraine who would tell you that the use of drones is a substitute for all the other systems they have; it's not a single silver bullet. And no one in Ukraine and no one around the world should believe Ukraine is not losing people because it's using drones. We're facing a very, very different combat environment. So while I can see the logic of the question, I don't think it's the logic we should be looking at right now, because I don't think it applies to the combat situations we're actually likely to be in, which also relates to the question about the drone wall. On the comments from Anja, and I'll come to this as quickly as I can, there was so much I agreed with in this, as with the comments from Saeed and others online. And I agree with Chris's point about the geopolitical difficulty of reaching regulation on this. Normally, we only see regulation of new weapons types when there's an interest of the parties that operate them, when they've actually tried them and found out either that they are massively consequential in human terms, or that they don't work, or that they cause blowback.
So, for example, the regulation of gas warfare after the First World War. But the crucial points that come out of all of this are intent and accountability. I would argue that the use of advanced battle networks now gives you the chance to restore mission command; it gives you the chance to restore commanders' intent by allowing commanders to focus on those key decisions. That's something we've actually been talking to militaries a lot about. They are very keen on restoring that in a way that can actually be communicated, but it has to work on the proliferating, very confusing battlefield, which is full of diverse systems and multiple inputs that they have to deal with in a way they haven't had to before. On procurement, end-user requirements and so on, having been through procurement processes, I disagree with the analysis that's presented. The crucial part that we've certainly experienced, and many others in our position (I mean, Helsing is the biggest new defense company in Europe and the biggest defense AI company in Europe, but there are many others doing similar things), is that we have to work very, very closely with the customer, which is the government, and with the end users, which are the military, in order to understand the capabilities, the technical specifications, and the bounding, the way that we can actually put guardrails on what is being done. You mentioned correctly that most defense companies don't actually build AI; they procure it. We are different, we are AI-first. That's one of the reasons we think this is a better approach: adding AI or adding software onto hardware has proven to be a very expensive, very ineffective way to build systems that can actually work in dynamic environments. We do it the other way: we're software-defined, we build it from the AI out, and that's why we then started building drones, because we realized we could build drones better than other people adding our software to their drones. The same applies to future systems. We're stuck mentally, when thinking about military things, in terms of tanks, planes and ships. That's not how we should be thinking; we need to be thinking in terms of capabilities, effects and networks. Why software-defined? Because software can be updated and corrected much more easily than hardware. What is crucial in all of this is not only the intent, which we've discussed now quite a bit, but the accountability that you mentioned. And accountability, I think, comes in two ways. First of all, you have to know whose intent it was, what orders they gave, what command was actually given, to which human-machine combination they delegated it, and then what the effects were that they should be held accountable for, and whether you can trace it back. The second part of this is about explicability, as they call it, and this particularly relates to artificial intelligence at the moment. The beauty of artificial intelligence, which is why people want it, is that it reasons in ways that humans don't. We want it to do that because it makes the decisions that we can't in the time available. However, that creates the problem that we don't know why it did what it did. Well, newer artificial intelligence builds in explicability, as it's called. This is still a progressing science, which is why we have to be very careful about the steps forward that we take, but it actually means that the AI will give an account of why it reached a decision.
Now you could say, well, what if the AI is trying to trick you? Well, can the AI trick another AI that's trained to trace this stuff, and so on? So what we're into is a progressive iteration of explicability, which allows you to get to the reasoning that was used, in order to be able to provide correction over time. Now, that's actually better than we can get with some humans; as we've seen over time, it is very difficult for humans to give an account of why they've done certain things. Humans, we know, for all their ethical qualities, can also lie, they can also obscure, and they may not be sure why they did something. So when thinking about this, we have to again think of those two points of intent and accountability, while recognizing the geopolitical situation that we're in and taking advantage of the technologies we have, in order to make sure that we can actually defend our democracies. The very last point: why do we actually need this stuff, when we're talking in military terms? Our adversaries have it, is one answer. The second is that technology is advancing in ways that we can use to make sure, again, that we don't have to try to fight wars of attrition. Now, while it's not the case that we simply won't lose anybody on the battlefield, as per Frances's question, we don't want mass casualties, and we do not want mass conscription if we can avoid it. We want to use our technological edge. It used to be the case during the Cold War that Western precision and Western quality of weapons were used to counteract Soviet mass. Now the equation is different: now we can have precise mass, and we can actually afford it, and we have to be able to think about that when we're allocating defense budgets in times of scarce resources. We are going to need to put more money in, but how do we get the most effect for it while still maintaining the kind of democratic societies that we believe in, in other ways? I'll leave it there, because that was already a long answer, but there's a lot we could go into further in discussions about how to respect international humanitarian law, the history of that with autonomous and semi-autonomous weapons, including anti-tank mines and so on, and how that is actually enhanced by the kind of sensor and data fusion that is now possible using the new kinds of battle networks that are out there. Thank you.
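Tallis's two requirements, traceable intent and explicable decisions, can be made concrete with a small illustration. The following Python sketch is purely hypothetical (all names and fields are invented, and it describes no real system): it shows one shape an accountability trail could take, recording whose intent a command expressed, to which human-machine combination it was delegated, and what rationale the system logged for each decision.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    """One machine-assisted decision, logged with the system's stated rationale."""
    action: str
    rationale: str  # the system's own account of why (explicability)
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class DelegationRecord:
    """Traces a commander's order to the human-machine team that executed it."""
    commander: str  # whose intent was it
    order: str      # what command was actually given
    delegate: str   # which human-machine combination received it
    decisions: list = field(default_factory=list)

    def log_decision(self, action: str, rationale: str) -> None:
        self.decisions.append(Decision(action, rationale))

    def trace(self) -> str:
        """Walk the chain back: intent -> delegation -> decisions -> rationales."""
        lines = [f"Order '{self.order}' issued by {self.commander}, "
                 f"delegated to {self.delegate}"]
        for d in self.decisions:
            lines.append(f"  {d.timestamp.isoformat()} {d.action}: {d.rationale}")
        return "\n".join(lines)

# Hypothetical usage: every recorded effect can be walked back to an order,
# a delegate, and a stated rationale.
record = DelegationRecord(commander="Cdr. Example",
                          order="reconnoitre sector A",
                          delegate="crewed station plus uncrewed sensor platform")
record.log_decision("flagged object as non-target",
                    "matched civilian-vehicle signature; confidence 0.97")
print(record.trace())

The sketch only captures the structure of the argument: accountability, in this framing, requires that every effect be traceable to an order, a delegate and a stated rationale. How trustworthy a logged rationale actually is, given that models can produce plausible but wrong accounts of themselves, is exactly the open question Tallis flags when he calls explicability a progressing science.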
Chris Painter: Still very, very briefly. I'd say on the Spain thing: absolutely, it's possible; it's already happening. Russia is doing this against Ukraine. The whole reason we have a norm against attacks on critical infrastructure is because that's what happens. So if Spain was a cyber attack, that would hold true. And on the issue of attacking drones, or attacking AI systems: absolutely, that's one of the worries. Especially if an adversary doesn't have the financial wherewithal to build expensive networks, expensive drones, expensive AI systems, then attacking them and making them less secure is exactly what an adversary would do.
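Tallis's earlier answer on the Spanish grid rests on an architectural claim: a centrally controlled network fails completely when its single controller is hit, while distributed (edge) control degrades gracefully. A toy comparison in Python, with invented numbers and no relation to any real grid, illustrates the asymmetry under those assumptions.

def surviving_capacity(nodes: int, lost: set, centralized: bool) -> float:
    # Fraction of capacity left after the nodes in `lost` are knocked out.
    # Centralized design: node 0 is the single controller; losing it takes
    # down the entire grid. Distributed design: each node controls itself,
    # so losses only cost their own share of capacity.
    if centralized and 0 in lost:
        return 0.0
    return (nodes - len(lost)) / nodes

grid_size = 100
hits = {0, 17, 42, 63, 88}  # five nodes knocked out, controller included
print("centralized:", surviving_capacity(grid_size, hits, centralized=True))   # 0.0
print("distributed:", surviving_capacity(grid_size, hits, centralized=False))  # 0.95

The same five hits cost the distributed design five per cent of its capacity and the centralized one all of it, which is the core of the argument for putting compute and control at the edge rather than in one easy-to-hit location.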
Wolfgang Kleinwächter: Okay, thanks. Are there more questions in the room? Or Anja, do you want to react to what Ben just said?
Anja Kaspersen: I'll say this: I think it's an honour to Austria and to yourself, Professor, because you actually brought very different views onto this panel. And I always say, when I talk about this issue, that the most important thing you can leave the audience with, both those in the room and those online, is good questions to ask. When you heard me talk, and you heard Chris, and you heard Benjamin, we represented different viewpoints, although we broadly aligned on some of the technical challenges, and I hope people leave here with really good questions. Is this desirable? Is this what we think? Do we believe that commander's intent, that human intent, can be translated in the way Benjamin just described? I will make a small correction; I can't remember who was saying it. There is a common understanding that these things are being developed in the defence industrial complex. But what is the big shift? There are two big shifts, right? One is that what used to be the defence industrial complex has moved increasingly into the civilian commercial space, and more and more of the technologies that are now game-changing are then being brought back into the military space. So who actually creates and sets the parameters has shifted somewhat. I'm not saying anything different from you, and I understand your company operates differently from other companies; I respect that. There is also a trend, to the point about procurement, that more and more is bought off the shelf, because it takes too long otherwise. There is no time; there is a perception that time is not on our side, geopolitically and otherwise, so you don't invest the same amount of money into doing the specs and following the traditional methods of procurement and acquisition that were traditionally used in this field. So there are some changes, and I'm not saying your company is in that category. Overall, those are just my comments, and I have many, many more, which have more to do with the bigger philosophical questions, including the technical issues and some of what Benjamin implied. But having such different viewpoints on this panel allows people to really go out with some real considerations. I always say that one of the missing things in our current discourse is the inability, or the diminishing ability, to just sit with contrasting realities and be uncomfortable. I think it's worth being uncomfortable in this space. We have to be able to sit with contrasting realities and navigate that space without getting upset. We've been smiling at each other the whole time, even while saying things we fiercely disagree with. I'm nodding because we may have agreement on the technical side, but we may disagree on what the impact would be and how okay we are with that. Those are just different views, and that's what ethics is about: it's about your outlook, about navigating uncertainties, about sitting with the discomfort of the trade-offs that will inevitably result from this discourse, no matter what we do. So thank you.
Wolfgang Kleinwächter: Anja, you are so right. And I hope you will continue the debate in Oslo and beyond Oslo, because this will keep us busy, hopefully before the Digital Winter comes, so that we still have some space we can use to find a consensus and avoid the worst, what some people have called a Digital Hiroshima. But we have one additional question online. And is there a question in the room? Because we more or less have to come to an end then, because the big plenary is waiting.
Moderator: If there is no question in the room, then please, the final question from Monika from the online. So, question to Ben. Delegating such selection of targets of AI programs has resulted in inconsiderable collateral damage in the Israeli war against Gaza. When would you say, is software safe enough to be delegated such tasks? Who should be held responsible for illegal collateral damage inflicted? The state is using the software, or the companies developing and selling the software as precision tools. Who has to take the responsibility for such hallucination of AI tools? Hallucination of AI tools. Good point, Ben.
Benjamin Tallis: Thanks for that one. It's nice that people are engaging. First, I really want to absolutely back up what Anja said. This has been a terrific experience for that reason: we've had the chance to productively disagree. And I hope the point lands that it's not only about ethics; it's about what democracy is at heart as well: different points of view making their case in an arena. So again, thanks to you for convening this. On that particular case, and without commenting on particular instances, this is a history-of-warfare question; this is nothing new. Is it the supplier of the weapons, the supplier of the bullets, and so on, who is actually responsible for the effects that they have? I think we have to be extremely careful here. We don't want to confuse our rightful distaste, our rightful hatred of the awful outcomes that result from war, with the question of responsibility. War is awful; that is the plain, simple truth. War is something we would rather didn't happen, at almost any cost, although, as Ukrainians would tell you, some things are worth fighting for, and that includes their democracy, their freedom. And that's what I would hope we'd like to see in Europe too, which is why we need to be so well armed that it doesn't happen, that Putin doesn't look at us and see an opportunity. So this is part of the point about building up deterrence. Now, in terms of accountability, which is the essence of the question: the same rules apply as to other forms of warfare before. Who is responsible for the My Lai Massacre? Well, you could look up the chain of command, you could look at the individual perpetrating it, you could look at the other individuals who didn't stop William Calley and co. doing what they did. It's a complex question that has many, many parts to its answer. The question of whether autonomous targeting is responsible is a question of setting the boundaries, and this is why my company and many others want to work with democracies who set proper boundaries, who actually set proper limits and guardrails for how you use that AI, guardrails that can be built into the system that's provided. How it's then used is ultimately up to the military and the democratically elected governments concerned. So I think there's a key point there in terms of understanding where the political responsibility lies, as well as the command responsibility and then the frontline responsibility, which all play into the question. One very last point, because Anja made a really interesting observation about technology shifting from the military world to the civilian world. I would actually argue that what we're seeing now is the true shift of the civilian world into the military world. Anyone who's read Christian Brose's book Kill Chain, which I would highly recommend despite the title being off-putting to some, or looked at DIUx, or any of these other works on military innovation, will know that buying off the shelf, exactly as Anja said, is key in many ways. You can now buy off-the-shelf sensors and off-the-shelf interface tools like phones and iPads, whatever it is, that by using AI you can actually upgrade to military-level quality and effect. I would argue that what we've actually seen is the military world catching up with the technology of the civilian world, but of course it has different consequences when you're using those systems to strike human and military targets rather than to order an Uber.
So we have to actually have serious conversations like the one we're having today.
Anja Kaspersen: And thank you all for engaging so richly with that. I've been doing arms control and disarmament work for a long time, back even when we had the Conference on Disarmament fully operational. And there is a very important thing to say about an instrument: as you know yourself, first, it takes time. Some of the most effective arms control instruments took not a few years, right? They took nine years, eighteen years: the Chemical Weapons Convention, the Biological Weapons Convention. I'm not arguing that we should spend that much time on the process that you're leading. But with the Chemical Weapons Convention, a big transformative shift in the conversation at the UN came when the chemical industry started engaging. I mention that because, in the spirit of the creative disagreement we're trying to reflect here, they saw the benefit of having a regulated space: to make sure that the edge cases, the edge uses, whatever was not set up to be transparent and visible and accountable, would be flagged and ruled out. So having all industries, and those proactive industries, involved is of course, as we've seen with other arms control instruments, very important to make sure that what is agreed upon is implementable. I just wanted to share that observation.
Wolfgang Kleinwächter: So this is an additional argument for involving many stakeholders, to get the full picture and then to find something which could be a dynamic consensus in the future. We have reached the end of our time, and I would now ask our Ambassador to give some concluding remarks. Thank you.
Aloisia Wörgette: Thank you for a fascinating panel. I will take home all the praise that Austria has been receiving for hosting this and be assured with your positive motivation we’ll continue to do it. Fascinating discussions. It is absolutely true. Maybe we are not there yet, but I’m really optimistic because we are not in controversy, we are in deliberation and this is why you are disagreeing and still smiling upon each other. It’s absolutely about avoiding unintended consequences for human rights, rule of law and democracy and of course it’s about the question of intent and this could also be an Oppenheimer moment for philosophy as such. I’m much more optimistic about whether we will get an agreement than you are because this is not only about industry and about governments. The discussion on artificial intelligence mobilizes different segments of society globally and therefore in a global democratic process we have a chance to go further because different people are looking at it and are guiding us. So thank you and enjoy EuroDIG for the rest of the days in Strasbourg. Thank you. Thank you. The meeting is closed.
Wolfgang Kleinwächter: Thank you to the panelists and our moderator for the insightful discussion, and thank you to the audience for the active involvement as well. The next session, the opening ceremony, will be at 15:00. We look forward to seeing you then. Thank you.
Aloisia Wörgette
Speech speed
129 words per minute
Speech length
1167 words
Speech time
539 seconds
Need for international regulation and binding rules
Explanation
Aloisia Wörgette argues for the necessity of international regulation and binding rules for autonomous weapons systems. She emphasizes the importance of addressing the legal, ethical, and security concerns raised by these systems.
Evidence
Mentions ongoing international efforts like the REAIM initiative and the US Political Declaration on Responsible Military Use of AI and Autonomy
Major discussion point
Regulation of autonomous weapons systems
Agreed with
– Chris Painter
– Speaker
– Anja Kaspersen
Agreed on
Need for international regulation of autonomous weapons systems
Disagreed with
– Chris Painter
Disagreed on
Feasibility and desirability of international regulation
Importance of multi-stakeholder approach
Explanation
Wörgette advocates for a multi-stakeholder approach in addressing the challenges posed by autonomous weapons systems. She argues that contributions from various sectors are essential for a holistic and inclusive debate.
Evidence
Mentions the need for input from science, academia, industry, tech sector, parliamentarians, and civil society
Major discussion point
Governance and accountability
Agreed with
– Anja Kaspersen
– Benjamin Tallis
Agreed on
Importance of multi-stakeholder approach
Chris Painter
Speech speed
194 words per minute
Speech length
659 words
Speech time
203 seconds
Challenges in reaching agreement due to geopolitical tensions
Explanation
Chris Painter highlights the difficulties in reaching international agreements on regulating autonomous weapons systems due to current geopolitical tensions. He expresses skepticism about the likelihood of progress in the short term.
Evidence
References the current state of division within the UN and other international venues
Major discussion point
Regulation of autonomous weapons systems
Agreed with
– Aloisia Wörgette
– Speaker
– Anja Kaspersen
Agreed on
Need for international regulation of autonomous weapons systems
Disagreed with
– Aloisia Wörgette
Disagreed on
Feasibility and desirability of international regulation
Risk of cyber attacks on AI systems and critical infrastructure
Explanation
Painter points out the vulnerability of AI systems and critical infrastructure to cyber attacks. He emphasizes that adversaries without the means to build expensive networks might resort to attacking and compromising existing systems.
Evidence
Mentions the example of Russia’s actions against Ukraine
Major discussion point
Military and strategic implications
Speaker
Speech speed
150 words per minute
Speech length
280 words
Speech time
111 seconds
Difficulty in complying with international humanitarian law
Explanation
The speaker argues that autonomous weapon systems will struggle to comply with international humanitarian law. They express concern about the systems’ ability to adhere to principles such as distinction and proportionality in warfare.
Major discussion point
Technical and ethical considerations
Agreed with
– Aloisia Wörgette
– Chris Painter
– Anja Kaspersen
Agreed on
Need for international regulation of autonomous weapons systems
Anja Kaspersen
Speech speed
163 words per minute
Speech length
3161 words
Speech time
1160 seconds
AI reshapes decision-making and infrastructure of responsibility
Explanation
Kaspersen argues that AI fundamentally alters how decisions are made and how responsibility is structured in military contexts. She emphasizes that AI is not just a tool, but a methodology that reorganizes warfare concepts.
Evidence
Discusses the concept of commander’s intent and how it becomes more complex in human-machine interactions
Major discussion point
Technical and ethical considerations
Agreed with
– Aloisia Wörgette
– Chris Painter
– Speaker
Agreed on
Need for international regulation of autonomous weapons systems
Concerns about reliability and over-reliance on AI systems
Explanation
Kaspersen expresses concerns about the reliability of AI systems and the risk of over-reliance on them. She points out that these systems can adapt and behave differently in various contexts, potentially leading to unforeseen consequences.
Evidence
Mentions that a model may function perfectly and still fail ethically, operationally, or politically
Major discussion point
Technical and ethical considerations
Disagreed with
– Benjamin Tallis
Disagreed on
Reliability and effectiveness of AI systems in warfare
Challenges in procurement and oversight of AI systems
Explanation
Kaspersen highlights the challenges in procuring and overseeing AI systems for military use. She argues that most institutions do not build these systems themselves, leading to potential misalignments and governance risks.
Evidence
References the IEEE P3119 standard for helping organizations interrogate vendor claims and surface hidden risks
Major discussion point
Governance and accountability
Agreed with
– Aloisia Wörgette
– Benjamin Tallis
Agreed on
Importance of multi-stakeholder approach
Shift of technology from civilian to military applications
Explanation
Kaspersen notes a shift in the flow of technology from civilian to military applications. She points out that many game-changing technologies are now being developed in the civilian commercial space and then adapted for military use.
Major discussion point
Military and strategic implications
Benjamin Tallis
Speech speed
186 words per minute
Speech length
4196 words
Speech time
1346 seconds
Importance of human control and accountability
Explanation
Tallis emphasizes the importance of maintaining human control and accountability in the use of autonomous weapons systems. He argues that advanced battle networks can actually enhance commanders’ ability to focus on key decisions and maintain mission command.
Evidence
Discusses the concept of restoring mission command through the use of advanced battle networks
Major discussion point
Technical and ethical considerations
Agreed with
– Aloisia Wörgette
– Anja Kaspersen
Agreed on
Importance of multi-stakeholder approach
Need for explicability in AI decision-making
Explanation
Tallis highlights the need for explicability in AI decision-making processes. He argues that newer AI systems are being developed with built-in explicability, allowing for better understanding and accountability of AI decisions.
Evidence
Mentions the development of AI systems that can provide accounts of their decision-making processes
Major discussion point
Technical and ethical considerations
Potential for more precise and effective military operations
Explanation
Tallis argues that autonomous weapons systems and AI-enhanced battle networks have the potential to make military operations more precise and effective. He suggests that this technology can help democracies maintain a technological edge and avoid mass casualties.
Evidence
References the historical use of Western precision weapons to counteract Soviet mass during the Cold War
Major discussion point
Military and strategic implications
Disagreed with
– Anja Kaspersen
Disagreed on
Reliability and effectiveness of AI systems in warfare
Audience
Speech speed
141 words per minute
Speech length
393 words
Speech time
166 seconds
Potential for asymmetric warfare and overuse of autonomous weapons
Explanation
An audience member raises concerns about the potential for asymmetric warfare and overuse of autonomous weapons by democracies. They argue that the lack of human cost might lead to more frequent use of these technologies in limited force scenarios.
Evidence
References the use of drone strikes by the US, particularly under the Obama administration
Major discussion point
Military and strategic implications
Elena Plexida
Speech speed
162 words per minute
Speech length
608 words
Speech time
224 seconds
Need to protect core internet infrastructure
Explanation
Plexida emphasizes the importance of protecting the core infrastructure of the internet from attacks, including those potentially carried out by autonomous weapons systems. She argues for the consideration of norms that safeguard essential technical parameters of the internet.
Evidence
References the norm from the Global Commission for the Stability in Cyberspace about protecting the core of the internet
Major discussion point
Governance and accountability
Moderator
Speech speed
149 words per minute
Speech length
327 words
Speech time
131 seconds
Questions of responsibility for collateral damage
Explanation
The moderator raises questions about responsibility for collateral damage caused by autonomous weapons systems. They ask who should be held accountable for illegal collateral damage – the state using the software or the companies developing and selling it.
Evidence
References the collateral damage in the Israeli war against Gaza as an example
Major discussion point
Governance and accountability
Wolfgang Kleinwächter
Speech speed
134 words per minute
Speech length
1854 words
Speech time
828 seconds
Cyberspace has become weaponized
Explanation
Kleinwächter argues that cyberspace, once seen as a tool for peace, has become an area of conflict among nations. He notes that cyberspace is increasingly being pulled into military discussions and weaponized.
Evidence
References discussions at the recent Munich Security Conference about cyberspace and outer space becoming part of military expert debates
Major discussion point
Military and strategic implications of cyberspace
Need for public awareness about autonomous weapons systems
Explanation
Kleinwächter emphasizes the importance of bringing the debate about autonomous weapons systems to a broader public. He argues that there needs to be better understanding of the consequences of weaponizing cyberspace.
Evidence
Mentions the organization of outreach workshops, including one at the IGF in Riyadh and the current session at EuroDIG
Major discussion point
Public awareness and engagement
Two-tier approach to regulating autonomous weapons systems
Explanation
Kleinwächter suggests that there is emerging consensus on a two-tier approach to regulating autonomous weapons systems. This approach would prohibit systems where human control is impossible and regulate systems where some form of human control is possible.
Evidence
References discussions in the Convention on Certain Conventional Weapons (CCW)
Major discussion point
Regulation of autonomous weapons systems
Agreements
Agreement points
Need for international regulation of autonomous weapons systems
Speakers
– Aloisia Wörgette
– Chris Painter
– Speaker
– Anja Kaspersen
Arguments
Need for international regulation and binding rules
Challenges in reaching agreement due to geopolitical tensions
Difficulty in complying with international humanitarian law
AI reshapes decision-making and infrastructure of responsibility
Summary
Multiple speakers agreed on the necessity for international regulation of autonomous weapons systems, while acknowledging the challenges in reaching such agreements.
Importance of multi-stakeholder approach
Speakers
– Aloisia Wörgette
– Anja Kaspersen
– Benjamin Tallis
Arguments
Importance of multi-stakeholder approach
Challenges in procurement and oversight of AI systems
Importance of human control and accountability
Summary
Speakers emphasized the need for involving various stakeholders, including industry, academia, and civil society, in addressing the challenges posed by autonomous weapons systems.
Similar viewpoints
Both speakers highlighted the vulnerability of critical infrastructure to cyber attacks and the need for protection.
Speakers
– Chris Painter
– Elena Plexida
Arguments
Risk of cyber attacks on AI systems and critical infrastructure
Need to protect core internet infrastructure
Both speakers emphasized the importance of understanding and explaining AI decision-making processes to ensure accountability and reliability.
Speakers
– Anja Kaspersen
– Benjamin Tallis
Arguments
Concerns about reliability and over-reliance on AI systems
Need for explicability in AI decision-making
Unexpected consensus
Shift in technology flow between civilian and military sectors
Speakers
– Anja Kaspersen
– Benjamin Tallis
Arguments
Shift of technology from civilian to military applications
Potential for more precise and effective military operations
Explanation
Despite their different perspectives, both speakers acknowledged a significant shift in how technology moves between civilian and military sectors, with civilian innovations increasingly influencing military capabilities.
Overall assessment
Summary
The main areas of agreement included the need for international regulation, the importance of a multi-stakeholder approach, and the recognition of cybersecurity risks associated with autonomous weapons systems.
Consensus level
Moderate consensus on the need for regulation and multi-stakeholder involvement, but divergent views on the specific approaches and implications of autonomous weapons systems. This suggests that while there is agreement on the importance of addressing the issue, significant challenges remain in developing concrete, universally accepted solutions.
Differences
Different viewpoints
Feasibility and desirability of international regulation
Speakers
– Aloisia Wörgette
– Chris Painter
Arguments
Need for international regulation and binding rules
Challenges in reaching agreement due to geopolitical tensions
Summary
Wörgette argues for the necessity of international regulation and binding rules for autonomous weapons systems, while Painter expresses skepticism about the likelihood of progress in the short term due to geopolitical tensions.
Reliability and effectiveness of AI systems in warfare
Speakers
– Anja Kaspersen
– Benjamin Tallis
Arguments
Concerns about reliability and over-reliance on AI systems
Potential for more precise and effective military operations
Summary
Kaspersen expresses concerns about the reliability of AI systems and the risk of over-reliance, while Tallis argues that these systems can make military operations more precise and effective.
Unexpected differences
Shift in technology flow between civilian and military sectors
Speakers
– Anja Kaspersen
– Benjamin Tallis
Arguments
Shift of technology from civilian to military applications
Potential for more precise and effective military operations
Explanation
While not a direct disagreement, Kaspersen’s emphasis on the shift of technology from civilian to military applications contrasts with Tallis’s focus on the military potential of these technologies, highlighting an unexpected difference in perspective on the origin and impact of technological advancements.
Overall assessment
Summary
The main areas of disagreement revolve around the feasibility of international regulation, the reliability and effectiveness of AI systems in warfare, and the implications of AI on human control and decision-making in military contexts.
Disagreement level
The level of disagreement is moderate to high, with significant implications for the development and regulation of autonomous weapons systems. These disagreements highlight the complexity of the issue and the challenges in reaching a consensus on how to address the ethical, legal, and security concerns associated with these technologies.
Partial agreements
Takeaways
Key takeaways
There is an urgent need for international regulation of autonomous weapons systems, but geopolitical tensions make reaching agreement difficult
A multi-stakeholder approach involving governments, industry, civil society, and technical experts is crucial for effective governance
AI and autonomous systems reshape military decision-making and raise complex ethical and accountability issues
Advances in AI and autonomous weapons could make warfare more precise but also risk overreliance and unintended consequences
Protecting critical infrastructure, including internet systems, from attacks is an important consideration
Resolutions and action items
Continue discussions on autonomous weapons systems at future forums like the Internet Governance Forum in Oslo
Work towards a legally binding instrument on autonomous weapons systems by 2026, as called for by the UN Secretary General
Unresolved issues
How to ensure meaningful human control over autonomous weapons systems
How to make AI systems sufficiently reliable and explainable for military applications
Who bears responsibility for unintended harm caused by autonomous weapons
How to balance military effectiveness with ethical and humanitarian concerns
How to regulate autonomous weapons given rapid technological advances
Suggested compromises
A two-tier approach: prohibit weapons systems where human control is impossible, and regulate systems where some human control is possible
Focus on setting proper boundaries and guardrails for AI use in military contexts rather than blanket prohibitions
Thought provoking comments
We are living in difficult times, and while everybody agreed 20 years ago that the cyberspace and the digital sphere would contribute to a more peaceful world and to better understanding among nations, we have realized in the last 20 years that the cyberspace is also an area for conflict, conflict among nations, and also a process has started where cyberspace become weaponized.
Speaker
Wolfgang Kleinwächter
Reason
This comment sets the stage for the discussion by highlighting how perceptions of cyberspace have shifted dramatically from optimism to concern about weaponization.
Impact
It framed the urgency and importance of the discussion on regulating autonomous weapons systems in cyberspace.
We therefore support ongoing international efforts to promote responsible military use of artificial intelligence. These include the REAIM initiative by the Netherlands and South Korea, and the US Political Declaration on Responsible Military Use of AI and Autonomy.
Speaker
Aloisia Wörgette
Reason
This comment introduces specific international initiatives aimed at addressing the challenges posed by military AI, showing concrete steps being taken.
Impact
It moved the discussion from abstract concerns to practical policy efforts, providing a basis for discussing governance approaches.
AI is not a weapon system in a traditional sense. It is a social, technical, economic methodology, if you may. It reorganizes how war is imagined, operationalized and bureaucratized. It alters the concept of decision making itself, shifting authority away from experience and judgment toward inference and correlation.
Speaker
Anja Kaspersen
Reason
This insight reframes AI not just as a technology but as a transformative force in military decision-making and organization.
Impact
It deepened the conversation by highlighting the broader implications of AI beyond just weapon systems, touching on fundamental changes to military operations and decision-making processes.
We are in a situation of great power conflict. We’re in a situation of peer conflict. There is no one in Ukraine who would tell you that the use of drones is first of all a substitute for all the other systems they have. It’s not a single silver bullet.
Speaker
Benjamin Tallis
Reason
This comment grounds the discussion in current geopolitical realities and challenges simplistic views of autonomous weapons.
Impact
It shifted the conversation to consider the practical military context and limitations of autonomous systems, rather than just theoretical capabilities or concerns.
The geopolitical considerations outweigh any ability to really reach an agreement. And though I applaud the effort to try to do some binding approach to this in the UN, I think that’s going to be, at least in the short term, very, very difficult.
Speaker
Chris Painter
Reason
This comment provides a sobering assessment of the challenges in reaching international agreements on AI weapons regulation.
Impact
It tempered optimism about quick regulatory solutions and prompted discussion of alternative approaches or interim measures.
Overall assessment
These key comments shaped the discussion by broadening its scope from narrow technical concerns to encompass geopolitical realities, ethical implications, and practical challenges in regulating AI in military contexts. They highlighted the complexity of the issue, showing how AI is not just a new weapon system but a transformative force in military affairs. The discussion evolved from initial framing of the problem to exploring specific initiatives, considering broader implications, and grappling with the difficulties of reaching international consensus. This progression allowed for a nuanced exploration of the topic that balanced idealistic goals with pragmatic considerations.
Follow-up questions
How can we ensure meaningful human control over autonomous weapon systems?
Speaker
Wolfgang Kleinwächter
Explanation
This was highlighted as a key issue in the debate around autonomous weapons systems and their regulation.
How can we develop robust, internationally applicable technical standards for autonomous weapon systems?
Speaker
Anja Kaspersen
Explanation
This was identified as important for ensuring interoperability and safety across organizational and national boundaries.
How can we address the energy intensity and infrastructure requirements of advanced AI systems in military contexts?
Speaker
Anja Kaspersen
Explanation
This was highlighted as a critical but often overlooked aspect of deploying AI in military applications.
How can we improve procurement processes for AI systems in the military to ensure ethical considerations are built in from the start?
Speaker
Anja Kaspersen
Explanation
This was identified as a key governance challenge in implementing AI in military contexts.
How can we protect critical internet infrastructure from being targeted or weaponized by autonomous weapon systems?
Speaker
Elena Plexida
Explanation
This was highlighted as an important consideration for maintaining the stability and security of the global internet.
How can we develop effective accountability mechanisms for decisions made by AI systems in military contexts?
Speaker
Benjamin Tallis
Explanation
This was identified as a crucial challenge in implementing AI in military applications while maintaining ethical standards.
How can we improve the explicability of AI decision-making in military contexts?
Speaker
Benjamin Tallis
Explanation
This was highlighted as an important area for further development to ensure transparency and accountability in AI-driven military systems.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.