Pre 10: Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative
12 May 2025 11:00h - 12:15h
Session at a glance
Summary
This discussion focused on the regulation of autonomous weapons systems (AWS), examining the legal, ethical, and technical challenges surrounding their development and deployment. The session was moderated by Professor Wolfgang Kleinwächter and featured perspectives from diplomacy, industry, technical communities, and civil society organizations.
Ambassador Aloisia Wörgetter from Austria outlined her country’s leading role in advancing international regulation of AWS, emphasizing the urgent need for legally binding instruments by 2026 as called for by the UN Secretary-General. She highlighted Austria’s multi-stakeholder approach and compared the current moment to an “Oppenheimer moment” requiring decisive action to prevent an unregulated autonomous weapons arms race.
Benjamin Tallis from Helsing provided an industry perspective, arguing that autonomous weapons represent an evolution of existing military command and control principles rather than a revolutionary change. He emphasized that these systems involve delegating bounded tasks to machines, similar to human military delegation, and stressed the importance of maintaining democratic technological advantages over authoritarian adversaries.
Anja Kaspersen from IEEE offered a technical viewpoint, cautioning against treating AI as a simple tool and highlighting complex issues around commander’s intent, system reliability, and procurement processes. She emphasized that AI fundamentally reorganizes decision-making processes and warned against oversimplified terminology that obscures the true complexity of these systems.
Civil society representative Sai from Stop Killer Robots advocated for a complete ban on autonomous weapons that cannot comply with international humanitarian law, while online participants raised concerns about cybersecurity vulnerabilities and potential overuse by democracies. The discussion revealed fundamental disagreements about the feasibility and desirability of autonomous weapons while maintaining respectful dialogue about these critical issues facing humanity.
Keypoints
## Major Discussion Points:
– **Legal and Regulatory Framework Development**: The urgent need to establish binding international regulations for autonomous weapons systems (AWS), with Austria leading efforts through UN resolutions and pushing for a legally binding instrument by 2026, despite geopolitical challenges that make consensus difficult.
– **Technical Challenges and Human Control**: The complexity of maintaining meaningful human control over AI-powered weapons systems, including issues of reliability, predictability, accountability, and the technical limitations of current AI systems in military contexts.
– **Multi-stakeholder Approach**: The necessity of involving diverse perspectives beyond military and diplomatic experts, including industry, civil society, technical communities, and academia to create comprehensive governance frameworks for AWS.
– **Ethical and Humanitarian Concerns**: The fundamental questions about whether autonomous weapons can comply with international humanitarian law, particularly regarding distinction between combatants and civilians, proportionality, and the protection of human dignity and right to life.
– **Geopolitical Reality vs. Regulatory Ideals**: The tension between the urgent need for regulation and the practical challenges of achieving international consensus in an increasingly polarized geopolitical environment, with authoritarian regimes developing these technologies without similar ethical constraints.
## Overall Purpose:
The discussion aimed to bring the complex debate about autonomous weapons systems regulation into broader public discourse through a multi-stakeholder dialogue. This was part of Austria’s outreach initiative to inform European audiences about ongoing UN negotiations and to explore the legal, ethical, and technical challenges of regulating AI-powered military systems before they become widespread.
## Overall Tone:
The discussion maintained a remarkably civil and constructive tone throughout, despite representing fundamentally different viewpoints. Participants explicitly noted their ability to “disagree while smiling at each other” and emphasized the value of “productive disagreement.” The tone was serious and urgent given the subject matter, but remained collaborative and intellectually curious. The atmosphere encouraged “sitting with contrasting realities” and being “uncomfortable” with the ethical trade-offs involved, reflecting a mature approach to a highly contentious and complex issue.
Speakers
**Speakers from the provided list:**
– **Wolfgang Kleinwächter** – Professor at University of Aarhus, Session Moderator
– **Moderator** – Remote moderator for the session (Istimarta)
– **Aloisia Wörgetter** – Austrian Ambassador to the Council of Europe
– **Benjamin Tallis** – Representative from Helsing (defense technology company), specializes in thought leadership and battle networks using AI
– **Anja Kaspersen** – Representative from IEEE (Institute of Electrical and Electronics Engineers), former director for disarmament affairs in Geneva, technical community perspective
– **Chris Painter** – Former US Cyber Ambassador, former chair of the Global Forum on Cyber Expertise, currently with UNIDIR in Geneva
– **Elena Plexida** – Representative from ICANN (Internet Corporation for Assigned Names and Numbers)
– **Speaker** – Representative from Stop Killer Robots NGO, civil society perspective (identified as Sai from India)
– **Audience** – Various audience members asking questions
**Additional speakers:**
– **Brahim Alla** – Intern at Acedel, Strasbourg (audience member)
– **Frances** – Representative from YouthDIG (audience member)
Full session report
# Multi-Stakeholder Discussion on Autonomous Weapons Systems Regulation
## Executive Summary
This session, part of Austria’s international outreach initiative on autonomous weapons systems, brought together diverse perspectives from diplomacy, industry, technical communities, and civil society. Moderated by Professor Wolfgang Kleinwächter from the University of Aarhus as part of EuroDIG (European Dialogue on Internet Governance), the discussion aimed to inform European audiences about ongoing UN negotiations while exploring the technical, ethical, and regulatory challenges surrounding AI-powered military systems.
The session was part of a series including previous workshops in Riyadh and an upcoming session in Oslo, coinciding with informal consultations taking place in New York City. Participants demonstrated respectful dialogue across different viewpoints, with the Ambassador noting their ability to engage in “deliberation rather than controversy” on these complex issues.
## Key Participants and Perspectives
### Diplomatic Leadership: Austria’s Multi-Stakeholder Approach
Ambassador Aloisia Wörgetter outlined Austria’s pioneering role in advancing international regulation of autonomous weapons systems through UN resolutions and advocacy for legally binding instruments. She emphasized Austria’s inclusive approach, involving diplomats, military personnel, industry representatives, tech sector, and civil society organizations.
Drawing parallels to an “Oppenheimer moment,” Wörgetter stressed the critical importance of meaningful human control in ensuring proportionality, distinction, and accountability in warfare. She highlighted concerns about human dignity, the right to life, and preventing destabilizing arms races, while expressing optimism about achieving progress through Austria’s continued leadership in international forums.
### Industry Perspective: AI-Enhanced Battle Networks
Benjamin Tallis from Helsing, a company developing AI-powered battle networks, presented autonomous weapons as an evolution of existing military command and control systems rather than revolutionary technology. He emphasized that these systems involve delegating bounded tasks to machines within established command structures, building on decades of precision networked warfare development.
Tallis argued that advanced AI systems can actually enhance commanders’ decision-making capabilities while providing explicability for their processes. He stressed the strategic importance of democratic nations maintaining technological advantages, warning that falling behind authoritarian adversaries could have serious consequences for international stability.
### Technical Community: Complexity and Governance Challenges
Anja Kaspersen from IEEE (representing almost half a million members globally) provided detailed technical insights, cautioning against treating AI as a simple tool. She argued that AI fundamentally reorganizes decision-making processes and represents a methodology that transforms how warfare is conceived and operationalized.
Kaspersen highlighted critical challenges including current AI systems’ inability to replicate human judgment and contextual understanding required for warfare, difficulties in translating commander’s intent to machines, and the complexity of procurement processes involving pre-trained, modular systems. She referenced IEEE’s work on procurement standards (P3119) and ethically aligned design (P7000 series) as frameworks for addressing these challenges.
She emphasized infrastructure vulnerabilities, including energy dependencies and reliance on legacy systems not designed for autonomous features, advocating for governance frameworks that ensure systems “fail safely and visibly.”
### Civil Society and Additional Perspectives
The Stop Killer Robots representative advocated for ensuring autonomous weapons systems comply with international humanitarian law, reflecting broader civil society concerns about human rights implications.
Chris Painter, former US Cyber Ambassador, highlighted cybersecurity vulnerabilities and expressed concerns about achieving international agreements in the current geopolitical climate.
Elena Plexida from ICANN contributed insights about protecting internet core infrastructure, referencing the Global Commission on the Stability of Cyberspace norm about safeguarding foundational internet resources.
## Areas of Convergence
Despite different approaches, participants found common ground on several key principles:
**Multi-Stakeholder Governance**: All speakers agreed that addressing autonomous weapons requires inclusive participation from diverse sectors, recognizing that no single community possesses sufficient expertise to address all dimensions of the challenge.
**Technical Challenges**: Participants acknowledged significant limitations in current AI systems, including cybersecurity vulnerabilities, infrastructure dependencies, and challenges in replicating human judgment.
**Accountability Frameworks**: Speakers emphasized the importance of maintaining clear responsibility chains and meaningful human oversight, though they differed on implementation approaches.
**Infrastructure Protection**: An unexpected consensus emerged around protecting underlying systems that support autonomous weapons operations, recognizing infrastructure as a strategic vulnerability.
## Key Challenges and Different Perspectives
The discussion revealed various perspectives on critical challenges:
**Technology Readiness**: Participants offered different views on whether current AI technology can adequately support meaningful human control, with industry representatives more optimistic about near-term capabilities than technical community experts.
**Regulatory Approaches**: Perspectives ranged from supporting legally binding international instruments to emphasizing technical standards and procurement guidelines, reflecting different views on the most effective governance mechanisms.
**Procurement and Development**: Discussion revealed tensions between off-the-shelf procurement approaches and custom development, with implications for how well systems align with operational requirements and ethical constraints.
**International Cooperation**: Views differed on the feasibility of achieving binding international agreements in the current geopolitical environment, with diplomatic optimism contrasting with more cautious assessments of multilateral cooperation prospects.
## Interactive Discussion Highlights
The Q&A session addressed practical concerns including:
– Questions about infrastructure vulnerabilities, with references to Spain’s power grid challenges
– Discussion of drone system vulnerabilities and countermeasures
– Exploration of how different stakeholder communities can contribute to governance frameworks
– Consideration of timeline pressures versus the need for comprehensive approaches
## Technical and Operational Considerations
Key technical challenges identified include:
**Cybersecurity**: AI systems face vulnerabilities that could be exploited by adversaries, complicating accountability and reliability assessments.
**Infrastructure Dependencies**: Advanced AI systems require significant energy resources and rely on legacy architectures not designed for autonomous operations.
**Procurement Complexity**: The shift toward commercial AI development creates new challenges in ensuring military systems meet operational and ethical requirements.
**Interoperability**: Ensuring different national and organizational systems can work together while maintaining safety and legal compliance.
## Path Forward
The discussion demonstrated several constructive elements for continued progress:
**Sustained Engagement**: Austria’s leadership in organizing multi-stakeholder discussions provides a model for continued dialogue, with the series continuing to Oslo and other venues.
**Technical Standards Development**: IEEE and other technical organizations are developing relevant standards that could support governance frameworks.
**Practical Cooperation**: Opportunities exist for collaboration on cybersecurity, infrastructure protection, and technical standards even without comprehensive regulatory agreement.
**Recognition of Complexity**: Participants acknowledged the need to “sit with contrasting realities” and work through uncertainty rather than seeking premature consensus on unresolved questions.
## Conclusions
This session highlighted both the complexity of autonomous weapons governance and the value of multi-stakeholder dialogue in addressing these challenges. While participants brought different perspectives on technical feasibility, regulatory approaches, and implementation strategies, they demonstrated shared commitment to responsible development and deployment of these technologies.
The respectful nature of the discussion and identification of common ground suggest that continued engagement across sectors can be productive, even as significant questions remain about technical capabilities, legal frameworks, and international cooperation mechanisms. Austria’s outreach initiative provides a valuable model for sustained dialogue as these technologies continue to evolve and international discussions progress.
The session reinforced that addressing autonomous weapons systems requires ongoing collaboration between diplomatic, technical, industry, and civil society communities, with each bringing essential expertise to this multifaceted challenge.
Session transcript
Wolfgang Kleinwächter: It’s one o’clock, so we are waiting for Anja Kaspersen. Good afternoon, everyone. Welcome to the session on the Regulation of Autonomous Weapon Systems,
Moderator: navigating the legal and ethical imperative. My name is Istimarta and I will be the remote moderator for the session. So for now, I will be reading the rules for our remote audiences. So first, for the remote audiences, please enter with your full name. And to ask questions, raise your hand using the Zoom function, and you will be unmuted when the floor is given to you. And when speaking, please switch on the video, state your name and affiliation, and please do not share the links to Zoom meetings, not even to your colleagues. So for now, I will be giving the floor to our moderator, Professor Wolfgang Kleinwächter from University of Aarhus. Thank you very much and welcome to our session.
Wolfgang Kleinwächter: As you know, we are living in difficult times. While everybody agreed 20 years ago that cyberspace and the digital sphere would contribute to a more peaceful world and to better understanding among nations, we have realized in the last 20 years that cyberspace is also an area for conflict, conflict among nations, and a process has started where cyberspace becomes weaponized. During the recent Munich Security Conference, we saw a lot of discussion about how this space, cyberspace, but also outer space, has now been pulled into a discussion among military experts. So we have seen a lot of negotiations already within the United Nations, but also under the umbrella of the Convention on Certain Conventional Weapons, the CCW, where we see a discussion about new types of weapons, which we call autonomous weapon systems, AWS. The Secretary-General of the United Nations produced a report last year, which has led to a resolution sponsored by Austria, with the outcome that today and tomorrow there will be informal consultations in New York City about this new type of weapons. And that’s why, with the help of the Austrian government, we have decided to bring this very crucial and delicate and complicated debate to a broader public, so that we have a better understanding of the consequences of the, quote-unquote, weaponization of cyberspace. So we started with an outreach workshop during the IGF in Riyadh in December, where we had the first round of discussion; this is now the second in a series through which we want to reach out more to the European public, and there will be a third workshop in Oslo in June, when we have the UN-sponsored IGF. So the session here is mainly an informal session in which we inform the public about what’s going on, and we hope we’ll also have a very good discussion. We are, Anja is here, okay, great. Unfortunately, we are still missing Vint Cerf, who wanted to give a short opening speech because he also helped us to organize the workshop in Riyadh, but he’s in Los Angeles and it’s three o’clock in the morning, probably a little bit too early for him. If he arrives, our remote moderator will give us a signal. So we have a good panel which gives you different perspectives. We have the Ambassador from Austria to the Council of Europe, Madam Wörgetter, who will inform us about the ongoing negotiations. We have Mr. Tallis from Helsing, giving the industry perspective; this is one of the new rising industrial stars in Germany, which has specialized in the production of one type of autonomous weapon system, mainly drones. We have Anja Kaspersen from the technical community. She will speak a little bit about the technical perspective and how realistic or unrealistic the debate about human control over all this is, because human control and human oversight is a key issue in the debate. And then we have some comments from the online commentators. We have Chris Painter, who was the first US Cyber Ambassador in Washington. He was then for many years the chair of the Global Forum on Cyber Expertise and is now with UNIDIR in Geneva, dealing also with these issues. Unfortunately, Marietje Schaake, a former Member of the European Parliament from the Netherlands who is now a member of the Global Commission on AI in the Military Domain, has a conflict and cannot make it. But we also have Elena Plexida from ICANN online, and we have a representative from the NGO Stop Killer Robots. 
She’s from India and will give us a civil society perspective. So this is more or less the program, and now I give the floor to Madam Ambassador. Thank you very much.
Aloisia Wörgetter: Thank you. Yes, that works. Thank you, Professor Kleinwächter. Dear colleagues, ladies and gentlemen, I see many of you here. A special welcome to everybody with a strong connection to Austria. My colleagues in Vienna, disarmament experts, have asked me to speak to you on their behalf. As you know, the Council of Europe deals with human rights, rule of law, and democracy, but specifically has no mandate for defence issues. Still, we found it very important that this topic is dealt with at EuroDIG in connection with the Council of Europe here. I want to thank you, Professor Kleinwächter, for moderating this session, and I want to thank all the distinguished speakers present and online for joining us and contributing to this timely and important conversation. Like all transformative technologies, the application of artificial intelligence in the military domain is advancing rapidly. These developments promise to make tasks faster, easier, and more accessible. Yet, as in the civilian sector, they demand robust guardrails and limitations to ensure that artificial intelligence is used in a human rights-based, human-centered, ethical, and responsible manner. While the civilian domain is increasingly governed, and thank goodness we do find consensus on these things, with the Council of Europe’s AI Convention, the first legally binding international treaty on AI, and the European Union’s AI Act, the first comprehensive global regulation, the military and defence sectors still lag behind. And let me state here that Austria supported, during the negotiations for the Convention on Artificial Intelligence, the inclusion of the defence sector, but we were not successful in this regard. National security considerations have largely excluded these domains from such instruments, and no similar binding frameworks exist to date. We therefore support ongoing international efforts to promote responsible military use of artificial intelligence. These include the REAIM initiative by the Netherlands and South Korea, and the US Political Declaration on Responsible Military Use of AI and Autonomy. Today, we focus on one of the most critical and sensitive issues in this broader field: autonomous weapon systems, systems that can select and apply force to targets without further human intervention. AWS raise fundamental legal, ethical and security concerns. These include the necessity for meaningful human control to ensure proportionality and distinction, the need for predictability and accountability, and the protection of the right to life and human dignity. There are also serious risks of proliferation and a destabilizing autonomy arms race. These topics will be explored by our panel, and I want to link back also to the panel that started EuroDIG this morning, where the execution department of the Council of Europe reported on the case law of the European Court of Human Rights. We are concerned about these developments, and therefore Austria has taken a leading role in advancing international regulation on AWS. Last year, Austria hosted the Vienna Conference Humanity at the Crossroads to examine the ethical, legal and security implications of AWS and to build momentum for international regulation. We strongly support the joint call by the UN Secretary-General and the ICRC President to conclude negotiations on a legally binding instrument by 2026. 
Over the past decade, valuable discussions have taken place, notably within the Group of Governmental Experts in Geneva and the Human Rights Council, where a growing majority of states agree on the need for international regulation, including prohibitions and restrictions. However, moving from discussion to a formal negotiation mandate remains difficult. Geopolitical tensions, mistrust and the reticence to regulate these fast-paced technologies are slowing progress, even as the window for preventive regulation is closing rapidly. Professor Kleinwächter has just mentioned that we supported and championed the first-ever resolution on AWS in the UN General Assembly in 2023. You’re aware that this mandated a UN Secretary-General report, and last year we also sponsored the follow-up resolution, which was supported by 166 UN member states. These consultations complement the Geneva-based efforts, and Professor Kleinwächter has already mentioned that these negotiations are taking place today and tomorrow in New York. I want to speak briefly about the need for a multi-stakeholder perspective. From our point of view, the global discourse must extend beyond diplomats and beyond military experts. The implications of autonomous weapons systems affect human rights, human security, and sustainable development, and they concern all regions and all people. We therefore advocate a multi-stakeholder approach. Contributions from science, academia, industry, the tech sector, parliamentarians, and civil society are essential to ensure a holistic and inclusive debate. We welcome that the Council of Europe Parliamentary Assembly already supported, in 2023, a resolution on the emergence of lethal autonomous weapons systems, which references relevant international and European human rights law. We aim to broaden the discourse through outreach, like we are doing right now, such as the AWS session that we hosted at the Internet Governance Forum in Riyadh last December, and we will continue the conversation at the Internet Governance Forum in Oslo in June. Let me just, in concluding, reiterate the urgency to act. We find humanity is at a crossroads. We must come together to confront the challenges posed by AWS. We think that we are in an Oppenheimer moment. Advocates from across disciplines are warning of the profound risks and irreversible consequences of an unregulated autonomous weapons arms race. There is urgency to finally move from discussions to negotiations on binding rules and limits. And as AWS technologies evolve, the gap between regulation and reality continues to widen. So we need decisive political leadership to shape international rules. We believe that a multi-stakeholder exchange will contribute considerably, and my colleagues, who have been working on disarmament for a long, long time, which is also an element of our active neutrality, will continue the conversation. I’m looking forward to the discussion. Thank you.
Wolfgang Kleinwächter: Thank you, Madam Ambassador. And I will announce already now that we have reserved some time for interactive discussion, because EuroDIG is a dialogue and we want to get you involved, so prepare your questions or comments while we hear from all the panelists. But now a great welcome to Mr. Tallis from Helsing. I think in this context it’s the first time that we have a representative from the industry. But as Madam Ambassador has just said, a multi-stakeholder approach is needed, we have to hear all voices, and it’s bad if some stakeholder groups are sitting in their silos. So you are most welcome and you have the floor.
Benjamin Tallis: Thank you very much indeed, Professor Kleinwächter. And thank you for revealing only now that I’m the first representative of the defense industry to speak in this format. I’m braver than I thought, in that case. Thank you also to the Ambassador for excellent scene-setting remarks. And coming from industry, I’m obviously here with a very fancy PowerPoint presentation to show you why everything is going to be fine. Well, you’ll notice I don’t have a PowerPoint presentation, and I’m not here to tell you everything is going to be fine. My job is at Helsing, which I should clarify is not just a drone maker. We do make drones, but what we actually do is make battle networks, extending from all sensors to all shooters, using AI to actually enhance the kind of battle networks that we can field, which allow us to make better decisions based on better understanding and take more effective and precise actions. So this relates very much to some of the things that the Ambassador already mentioned. We don’t just make drones, and I’m not here to be a salesman for drones or any other technology. My role with Helsing is what they call thought leadership, which involves exactly this: engaging with third-party stakeholders, with a multitude of different actors, to have that kind of multi-stakeholder dialogue, to ensure that we’re aware, first of all, of all the necessary discussions that are going on that affect what we’re doing, but also to make sure that others involved in those discussions are aware of what we’re doing, what we provide, and also where the industry is on these issues. Today I speak on behalf of myself, but you’ll get an idea of where we stand. Now, before joining Helsing, I was not a professional defence industry person. I was a think tanker. Prior to that I was an academic, and I’ve been a government advisor working on European security in various capacities for about 20 years, including working in the field for the European Union on security missions in the Balkans in the post-conflict period there, and also in Ukraine, going back about 20 years, which is where a long association with that country began. In those capacities, when I was working also with diplomatic status, I had the chance to engage with people from the Council of Europe, as well as many civil society groups and many others who were deeply concerned with human rights, with the principles of humanitarianism, with upholding the values that actually make our democracies different from the authoritarian regimes by whom we are so clearly challenged at the moment. That perspective informs the remarks that I’ll make today. It’s no secret that we are in an increasingly competitive and increasingly hostile geopolitical climate. It was mentioned that we’re seeing a destabilizing arms race. Well, I would put it to you that while it’s bad to be in an arms race, it would be worse should we lose that arms race to authoritarian regimes who have far less honorable intentions for their peoples, and indeed for the world, than our democratic societies do. We can see that one aspect of this competition does involve emerging defense technologies, including autonomous weapon systems, and it’s an area to which we give considerably more care than our adversaries in Russia, in China, and elsewhere do. And that’s good; that’s part of what sustains us as democracies. 
And it’s very important that while we work to ensure that we have the military capabilities, as well as the demonstrated resolve to ensure deterrence, we do that without undermining the democratic values that again set us apart and which give our citizens the kind of right to a hopeful future, which is the unique selling point of liberal democracies when they are at their best, and again sets us apart from our authoritarian competitors. Now we’ve seen this competition in emerging defense technology as well as in geopolitical power positioning in microcosm in Ukraine. And while a lot of people would say there’s huge amounts of transferable lessons to be learned from the Ukrainian experience, others would say, well, the Ukrainians have made virtues of many necessities, limitations of their weapons systems and so on, such as lack of air power, that don’t affect us. I think there’s an awful lot we can learn from what’s been happening in Ukraine. Not necessarily, and this might surprise you, not necessarily because there’s something truly new happening. What I would suggest is happening in Ukraine is actually the culmination of a 50-year process of military transformation that began in the 1970s. Many of you will be familiar with William Perry, Undersecretary of Defense at that time in the U.S., who famously said, our aim is to be able to see any high value target on the battlefield, to strike any target we can see, and destroy any target we can strike. That ushered in what was known as the precision networked warfare revolution, which only now do we fully have the technology to be able to exploit through massed precision strike weapons, massed persistent sensors that we can afford to field, and the kind of battle networks that can actually link those things up in a sensible way. What is the evolution there, rather than the revolution, is that because of AI, we’ve been able to make these battle networks efficient in a way that we weren’t before. That means humans are no longer brute-forcing massive amounts of data through networks that can’t handle them. Humans are no longer fat fingering, as the US military calls it, data from one machine to another that can’t talk to each other. We’re now developing the ways that we can get our intelligent machines to talk to each other. So, again, this is not necessarily new. It’s the culmination of that process, but it’s also the beginning of another process, the revolution in military affairs to come from autonomous systems, from robotics, from artificial intelligence, quantum computing, additive manufacturing, and so on. But we don’t know yet what shape that revolution will take, but we need to be prepared for that industrially, governmentally, strategically, and indeed ethically. Focusing today on what we’ve already seen, though, it’s not new in another way either. Everything that we’re seeing in terms of the ethical discussion about autonomous weapon systems, including the strike drones, intelligence surveillance and reconnaissance drones, and other systems being used in Ukraine, and which our militaries are starting slowly to procure, relates to older discussions about military affairs. What we’re essentially talking about is command and control. The whole discussion, or the whole organization of military affairs, has been based on the principle of command and control since time immemorial. What is this? It is the delegation of bounded autonomy to conduct particular tasks. 
And until we get to a stage where we are able to talk about artificial general intelligence, and I’m not the kind of Silicon Valley enthusiast who will tell you it’s just around the corner; I think we’re quite a long way off artificial general intelligence. Until we get to talking about that, what we’re again talking about is the delegation of particular tasks, in this case to machines rather than to humans. Now obviously that has implications for how we understand this, but the principles remain the same. When military commanders delegate to their subordinates, they do so on the basis that those subordinates are trained. They’re trained to do the task required of them. We do it on the basis that they have been tested at doing that. And because they have been trained and tested in order to be predictable, reliable, foreseeable in the things that they do, and thus also effective in what they do, they do what they’re supposed to do, and we can trust them. And on this basis of training, testing, and trusting, I don’t actually think there is a significant difference, for many of the tasks involved, between delegating to a lower human authority or to a machine. And guess what? We’ve been doing this for a long time. So again, not actually something necessarily new. Any so-called beyond-visual-range engagement, for example in air-to-air combat, has contained an element of this delegation. Delegation from a pilot, to a radar and targeting system, to a fire-and-forget missile. That’s delegation. Further back still, delegation to dumb bombing. Dropping a bomb over a target to try and hit it, which we were terrible at for an awful long time. Even artillery beyond visual range contains an element of exactly the same question. The difference now is that we can actually be more precise, and we are much more likely to be precise than we were before. And if you do go back and look at the history of strategic bombing, for example, which I doubt is a favorite occupation in this building, but nonetheless, I will prevail upon you: the history of that is that we have been terribly inaccurate and terribly ineffective at it, causing massive amounts of collateral damage. So I would put it to you that advances in precision that follow the same rules of delegation are actually a potential advance for democracies. The other aspect of this, of course, is that democracies do not want to fight wars of attrition. We value our people too much. We actually want to have the kind of precise weapons, and make use of the kind of asymmetric capabilities, that reflect our inherent advantages as societies, our unique selling point of human creativity amplified through the market mechanism and allied to government strategy, which give us the edge if we leverage that over our authoritarian rivals. So again, with that said, and I’m happy to talk about an example of this that Professor Kleinwächter asked me about, the so-called drone wall on the eastern flank, a term used by Helsing and others, but I’d rather do that in the questions in order to be able to set out this clear position first of all. So I would put it to you that it’s incumbent upon us to think through these ethical questions, but not to lose focus or get misdirected when doing so. 
Not to confuse means and ends, not to confuse actions with the actors or actants that we delegate them to, and not to confuse quote-unquote killer robots with the kind of battle networks, the kind of technology that can actually put humans where they most need to be by making more informed decisions, faster, in more effective ways that would drive the better kind of actions that democracies seek. Not only to be more precise in doing the awful things that we don’t like to do but we have to do in war, but in order to be able to win and to be able to use our strengths as democracies to actually prevail against the geopolitical and military challenge that we face today. which, if we fail to rise to, would have dire consequences for any of the kind of discussions we’re having today and for our democratic societies more widely. So with that, I’ll leave you there as the opening statement, and I look forward to discussing more on the specifics, including about the drone wall, in the questions.
Wolfgang Kleinwächter: Thank you. Thank you very much. And Anja, you are a representative of an organization of engineers; I think you have 100,000 members in the IEEE around the world. In Riyadh, we had Wim Mohammed, the CTO from Digital Identity, and he gave us a perspective and said, you know, even if you have a perfect software, there are some bugs in it, so don’t trust all this technology blindly. So you are dealing with this issue from the technical perspective. What are your comments on the diplomatic and industry perspectives, and can we trust the technology? Thank you.
Anja Kaspersen: Thank you so much, Professor, and I should first, actually, we’ll have to correct you a little bit on numbers. So we actually are almost half a million members globally, and that just counts for the membership, not the larger ecosystems that is in the millions, and we are across 190 countries around the world. And we have been around for close to 141 years, so this was an initiative that came out of efforts with pioneers like Alexander Bell, Thomas Edison at the time, and that’s why I’m mentioning the history of it, around a core principle of how do you advance technology while keeping humanity safe. And a core part of this work was also then creating standards to make sure that all these good initiatives could also interoperate with one another without, for example, electrocuting us in the process, et cetera, et cetera. So most of you, the way that you’re connecting with one another in this room, you know, be that integrated devices, the Wi-Fi you’re connecting to in the Council of Europe, that’s actually IEEE standards. So almost everything that connects everyone in this room is one of our underlying standards. But I’m just mentioning the history of this organization because we don’t only do that, it’s also about scientific integrity, it’s about dialogue, it’s about scientific collaboration. So that’s what this group is doing worldwide and why societal issues such as the one that we’re discussing today is not something that we’ve been focusing on the last few years, but something that has been at the core of its existence, you know, from the beginning. So if you allow me, Professor, I prepared, because we all got like very strict timelines, so unusually for me, I actually prepared some remarks, but answering the questions that you just asked me. So first of all, thank you to Austria for the opportunity to intervene on this critical issue. I was lucky enough to be at the inauguration of these efforts, you know, in Vienna last year, in the Grand Palais. And I’m also, I should say, for those of you who may not know me, I have a very varied background, including from diplomacy. And I was also the former director for disarmament affairs in Geneva, where I oversaw some of these processes, including CCW, and tried to make a real push to, perhaps at that time, moving a little bit away, I called it away from the 10,000 feet perspective, and down to more practical considerations that allowed, such as, you know, my colleague on the side here to engage differently in this process. So I think that’s an important thing, how you frame this discussion can be quite alienating, or it can be inclusive, dependent, right? And I’m sure from industry, you have experienced that. So I speak today, not only from the perspective of the technical community, but also as someone who has long been engaged in international governance, including overseeing these efforts in Geneva, and contributing for decades to initiatives aimed at developing a coherent multilateral framework on the military use of technologies, as well as the broader strategic, operational, tactical, and not least, and I mention this because it’s very important, because it’s often forgotten, cultural and societal impacts, including on civil preparedness. There’s a lot of focus on civil preparedness right now, so what I’m about to say relates to that as much as it relates to the question at hand. 
What I want to offer is not a summary of technical challenges, which I think are by now well understood, though I would of course be happy to field any questions from any of you after this meeting. What I want to focus on instead is a framing of what is structurally at stake and why, from a technical standpoint, some of the most urgent questions remain inadequately addressed. First, we must stop treating AI as a bounded technological tool. AI is not a weapon system in the traditional sense. It is a social, technical, economic methodology, if you will. It reorganizes how war is imagined, operationalized and bureaucratized. It alters the concept of decision making itself, shifting authority away from experience and judgment toward inference and correlation. What this means in practice is that the challenge is not simply how to use AI, but how it reshapes the very infrastructure of responsibility and intent. One concept that is routinely overlooked is commander’s intent. This is not a checklist or an input. It is a deep cognitive and ethical practice about anticipation, discernment and alignment across dynamic conditions. In human-to-human operations, it’s already complex. In human-machine interaction, it becomes nearly impossible. Systems that do not and cannot reason are being asked to infer intent, respond to shifting environments and remain predictable without the contextual understanding this requires. Special forces are trained precisely for this kind of discernment: to override instinct, interpret ambiguity and exercise calibrated judgment. These are human traits, tactical and moral, that no current complex information structure or machine learning system is built to replicate. That brings me to reliability. Reliability is not a static attribute. These systems adapt, drift and behave differently in different contexts. A model may function perfectly and still fail ethically, operationally or politically. It may perform as intended and still degrade trust, escalate instability or trigger proliferation. This is an important point when we discuss compliance with international humanitarian law. Can something be in compliance and still be harmful? Can something be compliant in war but highly non-compliant in peace? We have to think through these scenarios. Over-reliance is not just a technical risk. It is an operational risk. It is a governance risk. And yet we routinely see systems treated as reliable in ways that ignore context, fragility and institutional constraints. Another important point: procurement. Not a conversation that happens very often when we discuss these issues, and it’s one of the most overlooked ethical fault lines, in my view. Most institutions, military or otherwise, do not build AI systems. They procure them. Increasingly, these systems are pre-trained, modular and abstracted from operational realities. And this relates to any of you who also work in public governance and who may have been included in your governments’ or companies’ procurement processes. These are very important issues. This introduces profound misalignments, especially when end users have little involvement in setting technical specifications. I’ll flag a piece of work that I think is just important, not because I’m selling anything, but because it might provide a lot of insights for those in the room. IEEE issued something called IEEE P3119. 
Make a note of it: P3119. It’s a cross-sector global procurement standard, or more like a practitioner’s handbook, that helps organizations, companies, governments and militaries to interrogate vendor claims, clarify assumptions, and surface hidden risks before integrating or embedding AI features into any form of system. It includes questions not just for engineers, but for policy makers, legal experts, and institutional decision makers. Because this, in my view, and also in my institution’s view, is where managing things with ethical considerations and true governance begins. We must also be cautious about the language used to frame these systems. Terms like responsible AI, trustworthy autonomy, or ethical automation suggest a coherence and controllability that do not reflect how these systems actually operate. From a technical perspective, these labels often obscure the fact that many of these systems are built on failed approximations, trained on proxy data, deployed in contexts their designers never anticipated, and governed by assumptions, including about winning, what is winning in today’s battlefield, right? And by dynamics that are not always visible to users. The failures that will matter are unlikely to be those we plan for. They will not look like system crashes. They will look like misalignments between logic and lived reality. Instead of projecting responsibility onto the system, we should talk more seriously about responsible decision-making processes at the human and institutional level. Responsibility lies not in the tool, but in the processes and choices that govern its design, deployment, oversight, and use. When that distinction is blurred, vulnerability becomes harder to trace and governance risks become symbolic rather than substantive. Everyone in this room knows that data is the very backbone of AI-enabled systems; we have heard that throughout EuroDIG. And yet, despite this recognition, data often remains backgrounded in this debate, treated as ambient infrastructure rather than a strategic asset. But data is never just there. It is collected, conditioned, labeled and selected, always by someone, for some purpose, under particular constraints. We must therefore ask: whose data is being used? How was it obtained? Why was it chosen? And for what outcome? These are also important questions in this debate. Questions of data integrity, veracity, provenance and security are not academic, nor do they pertain just to the civilian domain. They are central to both performance and trust. The risks of tampering, poisoning and silent drift are real, particularly in military and intelligence contexts. If we do not account for the full data pipeline, we cannot account for the system. It’s very important that we talk about weapons reviews. This brings me to infrastructure, because AI systems do not operate in isolation. Most current deployments rely heavily on legacy hardware and network-centric architectures that were not designed for systems with autonomous features. These architectures introduce friction, fragmentation and vulnerabilities, especially when retrofitted to accommodate high-intensity compute loads. This also risks undermining interoperability, particularly in joint or cross-force environments, where systems are expected to function across organizational, national and technical boundaries. 
This is precisely why robust, internationally applicable technical standards are so important in this domain, especially where systems must communicate, adapt and escalate decisions across contexts and constraints. And this leads directly to the question of energy. Advanced AI systems, particularly those involving real-time inference or large-scale simulation, are computationally intensive. That means they are highly energy intensive. So any serious conversation about AI, as well as cyber-reliant or network-centric warfare, is not just a conversation about power in the geopolitical or socio-economic sense; it is about power in the literal sense: electricity, resilience, energy availability, and infrastructure security. Governance frameworks that overlook this are not just incomplete, but strategically short-sighted. This is why our anticipation strategies must change. Governance must shift from a logic of prediction to one of adaptation. Systems need to be designed not only to perform, but to fail safely and visibly. That requires institutions to develop memory, reflexivity, and the ability to surface weak signals before they become structural liabilities. Here I would also flag another process, which maybe even some in the room have been involved in, because it has been a large-scale effort for years: the IEEE P7000 series. It was developed to guide ethically aligned design across sectors by supporting practitioners in identifying stakeholder values and translating them into system requirements from the outset. When this approach was launched, now many years ago, and subsequently adopted across the world, it caused a critical shift in the understanding that ethical considerations must be architected into design, not added later just as an assurance. Because design decisions are never neutral. They determine what is seen, what is measurable, and what forms of harm and risk are rendered invisible. These decisions shape how systems respond to ambiguity, and how power and discretion are distributed. They are political, even when framed as technical. And once baked into architecture, these choices often become inaccessible to oversight or review. Governance must begin by recognizing this. Effective oversight is not simply a matter of control at the point of use. It depends on tracing responsibility back to the layers of abstraction and specification where many of the most consequential decisions are made. This includes questioning who designs, for whom, with what assumptions, and against whose values. And I’ll come to the end here. I just want to say that language plays a key role here. As I mentioned before, a few years ago, while working with the CCW state parties, I led what we call a computational text analysis of national statements and working papers. It revealed a striking difference in how core technical and military concepts were framed, particularly around definitions, system limitations, mission command and human oversight. And I see this divergence still persisting today. It continues to undermine efforts to build a shared foundation of governance. I just give this example because I am part of multiple multilateral efforts, and I see this being a common trend. A term like redundancy might refer to fault-tolerant architecture in engineering, but to inefficiency or duplication in policy. Safety might indicate statistical reliability in one field and physical or humanitarian protection in another. 
Even the term reliability can refer to technical precision, political stability or normative acceptability. These are not minor misunderstandings; they shape procurement, deployment, review and oversight, and they create governance gaps that are filled by assumption. What matters is not just taxonomy, but comprehension. So understanding how terms are used and understood in practice is essential, particularly if we are serious about building a governance framework that focuses on convergence around baseline standards. This is urgent. And I would just want to conclude by returning to an ethical point, speaking strictly in my personal capacity. In his work, my late professor Christopher Coker, who was with the London School of Economics, warned of the dangerous illusion that technology could sanitize violence, that increased automation or distance could somehow make war more humane. It cannot, nor can it help us to define what winning means, nor should it. Technology may obscure the moral weight of decision making or create abstraction where there was once contact, but it does not eliminate responsibility. So the challenge before us is not simply one of technical control; it is about governance and about the kinds of institutions and cultures we want to build. It is about listening, not for consensus, but for the conditions that allow disagreement to be meaningful and oversight to be real. And I think that’s something this conversation could really benefit from. Thank you.
Wolfgang Kleinwächter: Thank you very much, Anja. As you see, if you dig deeper, the complexity grows. And I think this is a good opportunity in this environment here to get many perspectives so that we get a full picture. We will now hear three shorter comments online, and then I hope we can enter into a discussion with Q&A. So, Chris, you have a couple of minutes just to comment on what you have heard, and with your background, you are best positioned. I introduced you already. Chris, you have the floor.
Chris Painter: Great. Thank you. And it’s been a good discussion. Hopefully you can hear me. Can you hear me all right? Yes. Okay. So I come at this from a cybersecurity perspective, and that’s been my background, certainly. And a couple of things. One was just mentioned, you know, the vulnerability of command and control systems, including AI systems, to cybersecurity attacks. That’s not something that’s new, but it is a challenge. We’ve talked about this in the nuclear area, with nuclear command and control: even when they separate those systems from the Internet as a whole, there are other dependent systems that could be susceptible to attacks. So aside from all the concerns about how AI is trained and how it’s used, there is also a concern about whether it is made less reliable by cybersecurity attacks from adversaries, which could amp up all the problems we talked about. The other thing, I think, is that we’ve also talked in the cyber realm for a long time, in terms of cyber offensive operations, about the speed of the Internet and how we have to respond faster, and about automating cyber offensive operations to take humans out of the middle. Now, those are likely not as destructive as the attacks we’re talking about here with kinetic weapon systems, but they could be destructive. They go after critical infrastructure and other targets. There’s long been a debate about how autonomous that can be, for all the reasons that we just heard, how it’s trained, how it’s used. And I think that poses a huge problem here. And I don’t think we have a real solution to that without having humans still in the middle, rather than having an entirely automated system. And then the final thing I want to talk about is the geopolitical considerations. And I know there is an OEWG looking at this, or a GGE looking at this in this context. And there’s been an OEWG of all the countries in the cyber context, the cybersecurity context. But the problem there is more true than ever before, and I don’t want to be too much of a damper on this: the geopolitical considerations outweigh any ability to really reach an agreement. And though I applaud the effort to try to do some binding approach to this in the UN, I think that’s going to be, at least in the short term, very, very difficult. And that’s what we’re seeing across the board in cyber and all technological issues, really in all issues, where there’s such division within the UN and other international venues. And we’ve seen the US, for instance, I think, pull back from any kind of AI guidelines that would establish guardrails, for the reasons that were noted, of not wanting to constrain themselves, which is coupled with the lean to be more offensive in cyberspace, but also in other areas too. And that complicates this issue as well. So not to paint an overly non-rosy picture, but I think there are a lot of concerns on the horizon. That doesn’t mean we shouldn’t talk about this. It doesn’t mean we shouldn’t have these efforts. I just don’t have a huge amount of confidence that we’re going to make progress in the short term.
Wolfgang Kleinwächter: of confidence we’re going to make progress in the short term. Thank you very much for your realistic outlook. And anyhow, it’s on the table and we have to discuss it. So Stop Killer Robot as an NGO has been involved in this from the very early days. And Sai is with us from India. Sai, probably you could comment on what you have heard this morning.
Speaker: Well, those were really interesting conversations that I heard, and I'm really glad to be part of this. Thank you so much for having me here. For Stop Killer Robots, and from civil society, one of the biggest concerns is that we believe autonomous weapon systems will not be able to address the ethical, legal, humanitarian and moral implications they present. In particular, they will not be able to comply with international humanitarian law and its various provisions, including distinction and proportionality: being able to differentiate between a combatant and a non-combatant, and so on. Apart from this, military technology has historically percolated into civilian uses, and such systems then don't just create problems for international humanitarian law but also raise questions about the implementation of other international law, like international human rights law and international criminal law. So I think it is very important, in the present state of geopolitics, to properly assess how international law will be upheld with the advent of weapon systems such as autonomous weapon systems. What we believe is that the way forward is a legally binding instrument on autonomous weapon systems: one that completely bans autonomous weapon systems that are not able to comply with international humanitarian law, and that regulates other weapon systems that cannot be used with meaningful human control, or that otherwise lack basic understandability and the ability to hold people accountable, as international humanitarian law requires. Because there is a paucity of time, I will stop there, but these are largely our issues with autonomous weapon systems. Thank you.
Wolfgang Kleinwächter: I will stop there, but these largely seem to be our issues with autonomous weapons systems. Thank you. Thank you very much, Jutta. My understanding from the discussion in the CCW is that they have agreed on a two-tire approach. They said, okay, probably we could prohibit weapons systems where human control is impossible and we can regulate weapons systems where you have certain type of human control. But the question, what type of human control is realistic, this is another question. But I think to have this differentiation, I think it’s important to have at least a realistic way forward. So that means, you know, if you cut it in smaller pieces, it’s probably easier to negotiate. We have now the rolling text, and let’s wait and see what will happen until the end of 2025. And, you know, Guterres has set a deadline for 2026 for legal binding document. Chris has just told us that it’s rather unrealistic against the background of the geopolitical tensions. So I think all these are open questions on the table. But before I ask you to prepare your question, let me move to Elena Plexida from ICANN. I think Anna mentioned also the infrastructure which is needed, and ICANN managed one of the most important infrastructure in the digital world. It’s the domain name system, the root server system, and so that means everything which goes over the Internet needs a functioning ICANN, a functioning IP address and domain name system. So, Elena, you are not directly involved, ICANN is not directly involved in this debate, but you could be affected. So what is your view about this rather, not totally new, but new issue in this Internet community?
Elena Plexida: Thank you, Wolfgang. Thank you very much. Hello everyone. Yes, exactly: as you said, I work for one of the organizations that help maintain what we all know as the global internet. And in fact, maintaining the global internet and the work around it is a collective effort; there's a togetherness in it. It's a peace project, in fact. So being part of this discussion is, for me, a little bit remote. But then again, peace and stability are something you have to work for and safeguard; hence the discussion about rules is really relevant. Others mentioned the current geopolitical ecosystem, its deterioration, and the difficulty, in such an ecosystem, of agreeing on norms or rules. But I would say that precisely because of this deterioration, adhering to existing norms, and creating new ones where they are needed, is extremely relevant. As regards the technological developments, again, they are not in our sphere, as you quite rightly said. But it seems to me that technological developments are so fast that, if my understanding is correct, this makes it even more difficult to land on an agreement with respect to the use of autonomous weapon systems. Then we have two challenges, really: the difficulty of creating unbiased AI systems, and the possibility of jailbreaking AI systems through prompt engineering. Here I want to highlight the undoubted value of, and the need for, involving technology experts in conversations such as the development of norms or regulation for the use of autonomous weapon systems; as the ambassador said at the beginning, and other experts too, a holistic debate is indeed needed. Maintaining meaningful human control is one of the problems, apparently, and then, in addition, there is the use of such systems by non-state armed groups. Those are not really issues that ICANN is into, so I will go directly to the norms, the suggestion or the idea that there need to be norms. I think Chris mentioned, if I'm not mistaken, that kinetic weapons seem to be perceived as weapons that can do much more significant damage, including to the infrastructure that maintains the internet; but these other kinds of systems could also do that. So I would say, undoubtedly, the most important thing is to look at the human aspect, at norms or regulation that make sure we do not dehumanize, that we do not harm people. But if I may, together with that we should also be looking into norms that are about the infrastructure. And here I will repeat one of my favorite norms, which comes from the Global Commission on the Stability of Cyberspace that you know very well, Wolfgang: the norm about the core of the internet. That is, making sure that such systems, and other weapons of course, do not harm or, if you will, weaponize what we call the core of the internet: the technical parameters that are absolutely essential for the internet to function, such as the protocols, the DNS, the IXPs, and the cable systems that support entire regions or populations. Harming those would constitute a threat to the stability of the global internet and, in turn, a threat to the stability of cyberspace. The internet is a common good and, as I said at the beginning, I think it's a peace project. So putting some thought into not threatening it, together with the other norms being considered, is something to add to the conversation. Thank you very much.
Wolfgang Kleinwächter: Thank you, Elena. And it is good to remember the recommendation from the Global Commission on the Stability of Cyberspace a couple of years ago: that the public core of the internet is seen as a common good, and that an attack against the public core of the internet would be seen as an attack against mankind. This was one of the conclusions of the Global Commission, where I had the honor to be a member. Because it is like polluting the air: it should be seen as a crime. The question now, with all the attacks against cable systems and other things, is how far this will go and which role AI could play in attacking the public core of the internet. This is a big challenge and a complicated question, so we have to do something to avoid it. Law can be an instrument, but as we have also seen from this debate, it is difficult to reach an agreement in a geopolitical situation where we have more polarization than harmonization. Anyhow, we have now reached the moment where I would take questions from the floor. We also have some online questions. If somebody wants to ask a question from the floor directly: yes, one and two. Please introduce yourself, and if you direct the question to one of the panelists, make that clear; it is always better to address a panelist directly than to ask a general question, otherwise there will be confusion about who should reply. Okay, you go first.
Audience: Good morning, I'm Brahim Alla, intern at Acedel, Strasbourg. I wanted to ask, very quickly, a question related to, for example, the recent events in Spain. Would it be possible to imagine deliberately shutting down areas, regions or even countries as a future modern warfare strategy? And if so, do you have insights into the influence of such behaviours or events on autonomously guided weapon systems? Thank you. My name is Frances, and I'm here with YouthDIG. I had a question, I think, for Benjamin. I do agree that major ethical concerns don't settle the matter; obviously they mean we need to think about this more, because it could influence warfare and practices in warfare so much that people are going to want to mechanise and utilise it. But I'm not asking about war, rather about limited force. If you think about how America, especially under Obama, utilised a lot of drone strikes, we see that democracies, even outside of war, also want to assert their ideologies, right? So if they have a technology that's more precise, with no human costs for people of their own country, I think this would lead to overuse of these kinds of technologies: because now you don't have losses on your own side, but you have serious damage to people in those countries through psychological harm, through the possibility of strikes happening at any moment from technologies that aren't operated by humans. So it's not only about precision and the people specifically targeted; I think it leads to overuse and also to a mental disconnect, right? Because now you think, well, we're only targeting the bad guys; but what data is telling you who the bad guys are, and what assumptions are being made by these autonomous weapons? So, in limited force, do you think this will lead even democracies to overuse this technology? I think the crucial difference here is that there's no human cost, so it's not like delegation; you get massive asymmetries in warfare and in limited force, because now democracies aren't losing anyone. That's the crucial difference I would love to hear your opinion on.
Wolfgang Kleinwächter: Thank you. Good questions. Now we go to the online questions; could you read them?
Moderator: So, a question we have online is: would you consider a scenario wherein an enemy does not buy or make drones, but develops a counter-AI battle system to hack into even an elaborately secured battle AI system? For instance, a takeover of weapon-mounted drones in the air or on the ground, redirecting and counter-targeting drones that they don't own. Would such a scenario be even remotely realistic? Okay, thank you. That's a good question. I think it's primarily for Benjamin and Anja, and the first and the last questions are actually for Chris. Okay. Then I would also ask Chris, if you could…
Wolfgang Kleinwächter: Benjamin first. Okay, go ahead.
Benjamin Tallis: Thanks. Yeah, something very brief to say on all three. I also have points to come back to on Anja's excellent presentation, but we'll see. You want responses to the other panelists? Yeah? Okay, very good; this is the right moment. So, very quickly, to Brahim: great question. It's about resilience, grid resilience in this case. It's a classic case of one of Anja's misconstrued or multiply construed terms. Inertia was the key in Spain: the ability of a grid to withstand fluctuating power flows. Is that vulnerable to cyber attack? Yes. Is it vulnerable to multiple kinetic attacks by uncrewed systems? Yes, it is. So what is the answer? Build grids with more inertia, and distribute the power across the grid, distribute the control across the grid, which is precisely what edge computing and other advances like it allow you to do in military and non-military networks. That means putting the compute power in distributed locations rather than coordinating it in a central location, which is an easy hit. So, very quickly, that was that one. Frances, superb question, very much conditioned by the misadventures and terrible things that the West did in the last 25 years. The problem, I would argue, was not with the technology; the problem was with the intent, with the analysis, and with our hubris. There are big questions now about how we order a world that is not only safe for democracy but in which free societies can thrive, learning from those huge errors, which had massive human costs. Where the technology comes in relates to the Christopher Coker point; I knew Chris as well, and knew many of his students. The whole notion of virtuous asymmetric war is that being detached, waging war through the screen and so on, removes your human responsibility. That was shown quite widely by some studies not to be the case for drone operators, who suffered considerable stress. Now, you might say that's nothing compared to what those on the receiving end were getting, but at the same time it shows there is not actually such a disconnect. And we're not in that situation anymore. We are not in a situation where we are fighting, quote unquote, wars of choice. We are not fighting limited wars with much weaker adversaries for marginal interests. We are in a situation of great power conflict, of peer conflict. There is no one in Ukraine who would tell you that the use of drones is a substitute for all the other systems they have; it's not a single silver bullet. And no one in Ukraine, and no one around the world, should believe Ukraine is not losing people because it's using drones. We're facing a very, very different combat environment. So while I can see the logic of the question, I don't think it's the logic we should be looking at right now, because I don't think it applies to the combat situations we're actually likely to be in; this also relates to the question about the drone wall. On the comments from Anja, and I'll come to this as quickly as I can, there was so much I agreed with, as with the comments from Sai and others online. And I agree with Chris's point about the geopolitical difficulty of reaching regulation on this. Normally, we only see regulation of new weapons types when there's an interest of the parties that operate them: when they've actually tried them and found out either that they are massively consequential in human terms, or that they don't work, or that they cause blowback.
Take, for example, the regulation of gas warfare after the First World War. But the crucial points that come out of all of this are intention and accountability. I would argue that the use of advanced battle networks now gives you the chance to restore mission command, to restore commander's intent, by allowing commanders to focus on the key decisions. That is something we've actually been talking to militaries about a lot; they are very keen on restoring it in a way that can actually be communicated, amid a proliferating, very confusing battlefield full of diverse systems and multiple inputs, which they have to deal with in a way they haven't before. On procurement, end-user requirements and so on: having been through procurement processes, I disagree with the analysis that was presented. The crucial part, which we and many others in our position have certainly experienced (Helsing is the biggest new defense company in Europe and the biggest defense AI company in Europe, but there are many others doing similar things), is that we have to work very, very closely with the customer, which is the government, and with the end users, which are the military, in order to understand the capabilities, the technical specifications, the bounding, and the way we can actually put guardrails on what is being done. You mentioned correctly that most defense companies don't actually build AI; they procure it. We are different: we are AI-first. That is one of the reasons we think this is a better approach, because adding AI or software onto hardware has proven to be a very expensive, very ineffective way to build systems that can actually work in dynamic environments. We do it the other way around: we're software-defined, we build from the AI out, and that's why we then started building drones, because we realized we could build drones better than other people adding our software to their drones. The same applies to future systems. We're stuck mentally, when thinking about military things, in terms of tanks, planes and ships. That's not how we should be thinking; we need to think in terms of capabilities, effects and networks. Why software-defined? Because software can be updated, and corrected, much more easily than hardware. What is crucial in all of this is not only the intent, which we've now discussed quite a bit, but the accountability that you mentioned. Accountability, I think, comes in two parts. First, you have to know whose intent it was, what orders they gave, what command was actually given, to which human-machine combination they delegated it, and then what the effects were that they should be held accountable for; can you trace it back? The second part is about explicability, as it's called, and this particularly relates to artificial intelligence at the moment. The beauty of artificial intelligence, which is why people want it, is that it reasons in ways that humans don't. We want it to do that because it makes the decisions that we can't in the time available. However, that creates the problem that we don't know why it did what it did. Well, newer artificial intelligence builds in explicability, and this is still a progressing science, which is why we have to be very careful about the steps forward we take; but it means the AI will give an account of why it reached a decision.
Now, you could say: well, what if the AI is trying to trick you? Well, can the AI trick another AI that's trained to trace this stuff, and so on? What we're into is a progressive iteration of explicability, which allows you to get to the reasoning that was used, in order to provide correction over time. That is actually better than we can get with some humans; as we've seen over time, it is very difficult for humans to give an account of why they've done certain things. Humans, for all their ethical qualities, can also lie, they can also obscure, and they may not have been sure why they did something. So when thinking about this, we have to keep those two points of intent and accountability in mind, while recognizing the geopolitical situation we're in and taking advantage of the technologies we have, in order to make sure that we can actually defend our democracies. The very last point: why do we actually need this stuff, in military terms? One answer is that our adversaries have it. The second is that technology is advancing in ways we can use to make sure, again, that we don't have to fight wars of attrition. While it's not the case that we simply won't lose anybody on the battlefield, as per Frances's question, we don't want mass casualties, and we do not want mass conscription if we can avoid it. We want to use our technological edge. During the Cold War, Western precision and the quality of Western weapons were used to counteract Soviet mass. Now the equation is different: now we can have precise mass, and we can actually afford it. We have to think about that when allocating defense budgets in times of scarce resources. We are going to need to put more money in, but how do we get the most effect for it while still maintaining the kind of democratic societies we believe in? I'll leave it there, because that was already a long answer, but there is a lot we could go into further in discussion: how to respect international humanitarian law, the history of that with autonomous and semi-autonomous weapons, including anti-tank mines, and how it is actually enhanced by the kind of sensor and data fusion that is now possible with the new battle networks that are out there. Thank you.
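To make the accountability chain described above more concrete (intent, orders, the human-machine combination tasked, bounding constraints, and traceable effects), one could imagine each delegation being logged as a structured, tamper-evident audit record. The sketch below is purely illustrative and was not presented in the session; every name, field and value in it is hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class DelegationRecord:
    """One entry in a hypothetical append-only accountability log.

    Records whose intent it was, what bounded task was delegated,
    to which human-machine combination, and under which constraints,
    so that effects can later be traced back up the chain.
    """
    commander: str          # who issued the intent
    intent: str             # the commander's stated intent
    delegated_task: str     # the bounded task handed to the system
    operator: str           # the human responsible for this task
    system_id: str          # the machine component that executes it
    constraints: list[str]  # guardrails set at tasking time
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def digest(self) -> str:
        """Tamper-evident SHA-256 hash of the record's contents."""
        payload = json.dumps(self.__dict__, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

# Hypothetical example of one bounded delegation being logged.
record = DelegationRecord(
    commander="example-commander",
    intent="defend sector against incoming uncrewed systems",
    delegated_task="track and classify only; engagement requires human release",
    operator="example-operator",
    system_id="example-system-01",
    constraints=["no engagement without human release", "geofenced to sector"],
)
print(record.digest())  # stored alongside the record to detect later tampering
```

Whether any real system records delegations in this form is beyond the transcript; the point of the sketch is only that "tracing it back" presupposes such records existing at every hand-off.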
Chris Painter: Very briefly. On the Spain question: absolutely, it's possible, and it's already happening; Russia is doing this against Ukraine. The whole reason we have a norm against attacks on critical infrastructure is because that's what happens. So if Spain was a cyber attack, that would hold true. And on the issue of attacking drones, or attacking AI systems: absolutely, that's one of the worries. Especially if an adversary doesn't have the financial wherewithal to build expensive networks, expensive drones, expensive AI systems, then attacking them and making them less secure is exactly what an adversary would do.
Wolfgang Kleinwächter: Okay, thanks. Are there more questions in the room? Or Anja, do you want to react to what Ben just said?
Anja Kaspersen: I'll say this: I think it's an honour to Austria and to yourself, Professor, because you actually brought very different views onto the panel. I always say, when I talk about this issue, that the most important thing you can leave the audience with, both those in the room and those online, is good questions to ask. When you heard me talk, and you heard Chris, and you heard Benjamin, we represented different viewpoints, although we somewhat aligned on the technical challenges, and I hope people leave here with really good questions. Is this what's desirable? Is this what we think? Do we believe that commander's intent, that human intent, can be translated in the way Benjamin just described? I will make a small correction; I can't remember who said it, but there is a common understanding that these things are being developed in the defence industrial complex. What is the big shift? There are two big shifts, right? One is that what used to be the defence industrial complex has moved increasingly into the civilian commercial space, and more and more of the technologies that are now game-changing are then being brought back into the military space. So who actually creates and sets the parameters has shifted somewhat. I'm not saying this applies to you; I understand your company operates differently from other companies, and I respect that. There is also a trend, to the point about procurement, that more and more is bought off the shelf, because the traditional route takes too long; there's no time. There is a perception that time is not on our side, geopolitically and otherwise, so you don't invest the same amount of money into doing the specs and the traditional methods of procurement and acquisition that were traditionally used in this field. So there are some changes, and I'm not saying your company is in that category. Those are just my comments, and I have many, many more, which have more to do with the bigger philosophical questions, including the technical issues implied in some of what Benjamin said. But having such different viewpoints on this panel allows people to go out with some real considerations. I always say that one of the things missing in our current discourse is the ability, a diminishing ability, to just sit with contrasting realities and be uncomfortable. I think it's worth being uncomfortable in this space. We have to be able to sit with contrasting realities and navigate that space without getting upset. We've been smiling at each other the whole time, even while saying things the other fiercely disagrees with. I'm nodding because we may agree on the technical side but disagree on what the impact would be and how okay we are with that. Those are just different views, and that is what ethics is about: your outlook, navigating uncertainties, sitting with the discomfort of the trade-offs that will inevitably result from this discourse, no matter what we do. So thank you.
Wolfgang Kleinwächter: Anja, you are so right. And I hope you will continue the debate in Oslo and beyond, because this will keep us busy; hopefully before a Digital Winter comes, so that we still have some space we can use to avoid what some people have called a Digital Hiroshima. There is still room to find a consensus to avoid the worst. But we have one additional question online; and is there a question in the room? Because then, more or less, we have to come to an end: the big plenary is waiting.
Moderator: If there is no question in the room, then the final question comes from Monika online. It is a question for Ben: delegating the selection of targets to AI programs has resulted in considerable collateral damage in the Israeli war against Gaza. When, would you say, is software safe enough to be delegated such tasks? Who should be held responsible for illegal collateral damage inflicted: the state using the software, or the companies developing and selling the software as precision tools? Who has to take responsibility for such hallucinations of AI tools? A good question for you, Ben.
Benjamin Tallis: Thanks for that one. It's nice that people are engaging. First, I really want to back up what Anja said: this has been a terrific experience for exactly that reason, that we've had the chance to productively disagree. And I hope the point lands that this is not only about ethics but about what democracy is at heart as well: different points of view making their case in an arena. So again, thanks to you for convening this. On that particular case, and without commenting on particular instances: again, this is a history-of-warfare question; it is nothing new. Is it the supplier of the weapons, the supplier of the bullets, and so on, who is actually responsible for the effects they have? I think we have to be extremely careful not to confuse our rightful distaste, our rightful hatred, of the awful outcomes that result from war. War is awful; that is the plain, simple truth. War is something we would rather didn't happen, at almost any cost. Although, as Ukrainians would tell you, some things are worth fighting for, and that includes their democracy and their freedom; that is what I would hope we'd like to see in Europe too. Which is why we need to be so well armed that it doesn't happen, that Putin doesn't look at us and go… So this is part of the point about building up deterrence. Now, in terms of accountability, which is the essence of the question: the same rules apply as to other forms of warfare before. Who was responsible for the My Lai Massacre? You could look up the chain of command, you could look at the individual perpetrating it, you could look at the other individuals who didn't stop William Calley and co. doing what they did. It is a complex question with many, many parts to the answer. Whether autonomous targeting is responsible is a question of setting the boundaries, and this is why my company and many others want to work with democracies who set proper boundaries, who actually set proper limits and guardrails for how you use that AI. And if they don't, then that can be the system that's provided. How it is then used is ultimately up to the military and the democratically elected governments concerned. So there is a key point there in understanding where the political responsibility lies, as well as the command responsibility, and then the frontline responsibility, all of which play into the question. One very last point, because Anja made a really interesting observation about technology shifting from the military world to the civilian world. I would actually argue that what we're seeing now is the true shift of the civilian world into the military world. Anyone who has read Christian Brose's book Kill Chain, which I would highly recommend despite the title being off-putting to some, or the books about DIUx, or any of these other books on military innovation, will know that buying off the shelf, exactly as Anja said, is key in many ways. You can now buy off-the-shelf sensors and off-the-shelf interface tools, phones, iPads, whatever it is, that by using AI you can actually upgrade to military-level quality and effect. I would argue that what we've seen is the military world catching up with the technology of the civilian world; but of course it has different consequences when you're using those systems to strike human and military targets rather than to order an Uber.
So we have to have serious conversations like the one we're having today.
Anja Kaspersen: And thank you all for engaging so richly with that. I've been doing arms control and disarmament work for a long time, back even when we had the Conference on Disarmament fully operational, and there is an important thing to say about an instrument. As you know yourself, first, it takes time. Some of the most effective arms control instruments took not a few years but nine years, eighteen years: the Chemical Weapons Convention, the Biological Weapons Convention. I'm not arguing that we should spend that long on anything happening in the process you are leading. But with the Chemical Weapons Convention, a big transformative shift for the conversation at the UN came when the chemical industry started engaging. I mention that because, you know, we're trying to reflect creative disagreement here: they saw the benefit of having a regulated space, to make sure that the edge cases and edge uses, whatever was not set up to be transparent, visible and accountable, would be flagged and ruled out. So having all industries, and those proactive industries, involved is, as we've seen with other arms control instruments, very important to make sure that what is agreed upon is implementable. I just wanted to share that observation.
Wolfgang Kleinwächter: So this is an additional argument for involving many stakeholders: to get the full picture and then to find something that could be a dynamic consensus in the future. We have reached the end of our time, and I would ask the Ambassador to give some concluding remarks. Thank you.
Aloisia Wörgetter: Thank you for a fascinating panel. I will take home all the praise Austria has received for hosting this, and be assured that, with your positive motivation, we will continue to do it. Fascinating discussions. It is absolutely true: maybe we are not there yet, but I am really optimistic, because we are not in controversy, we are in deliberation, and this is why you are disagreeing and still smiling at each other. It is absolutely about avoiding unintended consequences for human rights, the rule of law and democracy; and of course it is about the question of intent, and this could also be an Oppenheimer moment for philosophy as such. I am much more optimistic than you are about whether we will get an agreement, because this is not only about industry and governments. The discussion on artificial intelligence mobilizes different segments of society globally, and therefore, in a global democratic process, we have a chance to go further, because different people are looking at it and are guiding us. So thank you, and enjoy EuroDIG for the rest of the days in Strasbourg. Thank you. The meeting is closed.
Wolfgang Kleinwächter: Thank you to the panelists and our moderator for the insightful discussion, and thank you to the audience for the active involvement as well. The next session, the opening ceremony, will be at 15:00. We look forward to seeing you then. Thank you.
Aloisia Wörgetter
Speech speed
129 words per minute
Speech length
1167 words
Speech time
539 seconds
Austria leads international efforts for legally binding AWS regulation by 2026
Explanation
Austria has taken a leading role in advancing international regulation on autonomous weapons systems, hosting conferences and supporting UN resolutions. The country strongly supports the joint call by the UN Secretary General and ICRC President to conclude negotiations on a legally binding instrument by 2026.
Evidence
Austria hosted the Vienna Conference ‘Humanity at a Crossroads’, championed the first-ever UN General Assembly resolution on AWS in 2023, and sponsored a follow-up resolution supported by 166 UN member states
Major discussion point
Regulation and International Governance of Autonomous Weapons Systems
Topics
Legal and regulatory | Cyberconflict and warfare
Disagreed with
– Chris Painter
Disagreed on
Timeline and feasibility of international regulation
Multi-stakeholder approach essential including diplomats, military, industry, tech sector, and civil society
Explanation
The global discourse on autonomous weapons systems must extend beyond diplomats and military experts to include contributions from science, academia, industry, tech sector, parliamentarians, and civil society. This ensures a holistic and inclusive debate on issues that affect human rights, security, and sustainable development.
Evidence
The Council of Europe Parliamentary Assembly supported a resolution on lethal autonomous weapons systems in 2023, and Austria has hosted AWS sessions at Internet Governance Forums
Major discussion point
Regulation and International Governance of Autonomous Weapons Systems
Topics
Legal and regulatory | Human rights principles
Agreed with
– Anja Kaspersen
– Moderator
Agreed on
Multi-stakeholder approach is essential for AWS governance
Meaningful human control essential for ensuring proportionality, distinction, and accountability
Explanation
AWS raises fundamental legal, ethical and security concerns including the necessity for meaningful human control to ensure proportionality and distinction, the need for predictability and accountability, and the protection of the right to life and human dignity. There are also serious risks of proliferation and a destabilizing autonomy arms race.
Major discussion point
Human Control and Accountability
Topics
Human rights principles | Legal and regulatory
Agreed with
– Benjamin Tallis
– Anja Kaspersen
– Audience
Agreed on
Importance of accountability and responsibility frameworks
AWS raises fundamental concerns about right to life, human dignity, and risk of destabilizing arms race
Explanation
Autonomous weapons systems pose serious ethical and security challenges including threats to fundamental human rights like the right to life and human dignity. They also create risks of proliferation and could trigger a destabilizing autonomy arms race between nations.
Major discussion point
Ethical and Humanitarian Concerns
Topics
Human rights principles | Cyberconflict and warfare
Chris Painter
Speech speed
194 words per minute
Speech length
659 words
Speech time
203 seconds
Geopolitical tensions make binding international agreements extremely difficult in the short term
Explanation
The current geopolitical climate with deep divisions within the UN and other international venues makes reaching binding agreements on autonomous weapons systems very challenging. Countries are pulling back from AI guidelines that would establish guardrails, not wanting to constrain themselves militarily.
Evidence
Similar challenges seen in cybersecurity context with Open-Ended Working Groups, and the US pulling back from AI guidelines for military reasons
Major discussion point
Regulation and International Governance of Autonomous Weapons Systems
Topics
Legal and regulatory | Cyberconflict and warfare
Disagreed with
– Aloisia Wörgetter
Disagreed on
Timeline and feasibility of international regulation
AI systems are vulnerable to cybersecurity attacks that could make them less reliable
Explanation
Autonomous weapons systems and AI-enabled military systems are susceptible to cybersecurity attacks by adversaries, which could significantly reduce their reliability and amplify existing problems. This vulnerability exists even when systems are separated from the internet due to dependencies on other connected systems.
Evidence
Similar vulnerabilities exist in nuclear command and control systems despite being separated from the internet
Major discussion point
Technical Challenges and AI System Reliability
Topics
Cybersecurity | Network security | Cyberconflict and warfare
Agreed with
– Anja Kaspersen
– Benjamin Tallis
Agreed on
Technical challenges and limitations of current AI systems
Wolfgang Kleinwächter
Speech speed
134 words per minute
Speech length
1854 words
Speech time
828 seconds
Two-tier regulatory approach: prohibit systems without human control, regulate systems with certain human control
Explanation
The Convention on Certain Conventional Weapons discussions have agreed on a differentiated approach that would prohibit weapons systems where human control is impossible while regulating weapons systems that maintain certain types of human control. This approach breaks down the complex issue into more manageable negotiation pieces.
Evidence
Reference to ongoing CCW discussions and rolling text negotiations with deadline set by UN Secretary General Guterres for 2026
Major discussion point
Regulation and International Governance of Autonomous Weapons Systems
Topics
Legal and regulatory | Cyberconflict and warfare
Disagreed with
– Speaker (Stop Killer Robots)
Disagreed on
Approach to regulation – complete ban versus graduated control
Speaker
Speech speed
150 words per minute
Speech length
280 words
Speech time
111 seconds
Complete ban needed on AWS that cannot comply with international humanitarian law
Explanation
From the civil society perspective represented by Stop Killer Robots, autonomous weapons systems will not be able to comply with ethical, legal, humanitarian and moral requirements, particularly international humanitarian law provisions like distinction and proportionality. The solution is a legally binding instrument that completely bans non-compliant systems.
Evidence
Historical examples of military technology percolating into civilian uses and creating problems for international human rights law and criminal law
Major discussion point
Regulation and International Governance of Autonomous Weapons Systems
Topics
Legal and regulatory | Human rights principles
Disagreed with
– Wolfgang Kleinwächter
Disagreed on
Approach to regulation – complete ban versus graduated control
Historical military technologies eventually proliferate to civilian uses with broader implications
Explanation
Military technologies historically have examples of moving into civilian applications, which creates problems not just for international humanitarian law but also raises questions about implementation of other international law including human rights law and criminal law.
Major discussion point
Ethical and Humanitarian Concerns
Topics
Legal and regulatory | Human rights principles
Benjamin Tallis
Speech speed
186 words per minute
Speech length
4196 words
Speech time
1346 seconds
Autonomous weapons represent evolution of 50-year precision networked warfare revolution, not entirely new technology
Explanation
What’s happening with autonomous weapons is the culmination of a precision networked warfare revolution that began in the 1970s with William Perry’s vision of seeing, striking, and destroying any target on the battlefield. Current developments represent evolution rather than revolution, enabled by AI making battle networks more efficient.
Evidence
Reference to William Perry’s 1970s doctrine and the precision networked warfare revolution, examples from Ukraine showing culmination of this 50-year process
Major discussion point
Military and Strategic Perspectives
Topics
Cyberconflict and warfare | Digital standards
Disagreed with
– Anja Kaspersen
Disagreed on
Fundamental nature and readiness of autonomous weapons technology
Democracies must maintain technological edge to avoid wars of attrition and leverage asymmetric advantages
Explanation
Democratic societies value their people too much to fight wars of attrition and should leverage their inherent advantages through precise weapons and asymmetric capabilities. This reflects democracies’ unique selling point of human creativity amplified through market mechanisms and government strategy.
Evidence
Contrast with Cold War era when Western precision countered Soviet mass, now democracies can achieve ‘precise mass’ at affordable costs
Major discussion point
Military and Strategic Perspectives
Topics
Cyberconflict and warfare | Economic
Current systems follow traditional military command and control principles of delegated bounded autonomy
Explanation
The discussion about autonomous weapons systems essentially relates to traditional military command and control based on delegation of bounded autonomy to conduct particular tasks. Until artificial general intelligence is achieved, this represents delegation of specific tasks to machines rather than humans, following the same principles.
Evidence
Examples of beyond visual range air-to-air combat, artillery beyond visual range, and historical bombing campaigns that already contained elements of delegation to systems
Major discussion point
Military and Strategic Perspectives
Topics
Cyberconflict and warfare | Legal and regulatory
Disagreed with
– Anja Kaspersen
Disagreed on
Fundamental nature and readiness of autonomous weapons technology
Advanced battle networks can restore mission command and commanders’ intent through AI enhancement
Explanation
Advanced battle networks using AI can actually help restore mission command and commanders’ intent by allowing commanders to focus on key decisions rather than being overwhelmed by proliferating, confusing battlefield information from diverse systems and multiple inputs.
Evidence
Discussions with militaries showing their keen interest in restoring mission command in more effective ways
Major discussion point
Military and Strategic Perspectives
Topics
Cyberconflict and warfare | Digital standards
Disagreed with
– Anja Kaspersen
Disagreed on
Feasibility of meaningful human control and accountability
Losing the arms race to authoritarian regimes would have worse consequences than participating in it
Explanation
While being in an arms race is undesirable, losing that arms race to authoritarian regimes like Russia and China would be worse for democratic societies. Democracies give considerably more care to ethical considerations than their adversaries, which is part of what sustains them as democracies.
Evidence
Observation that authoritarian competitors have far less honorable intentions and give less consideration to ethical issues than democratic societies
Major discussion point
Military and Strategic Perspectives
Topics
Cyberconflict and warfare | Human rights principles
Accountability requires tracing intent, command delegation, and effects back through the chain of responsibility
Explanation
Accountability in autonomous weapons systems requires knowing whose intent was involved, what orders were given, what command was delegated to which human-machine combination, and being able to trace effects back through the chain. This includes both command responsibility and explicability of AI decision-making.
Evidence
Reference to newer AI systems building in explicability features and comparison to historical accountability challenges with human decision-makers
Major discussion point
Human Control and Accountability
Topics
Legal and regulatory | Cyberconflict and warfare
Agreed with
– Aloisia Wörgetter
– Anja Kaspersen
– Audience
Agreed on
Importance of accountability and responsibility frameworks
Disagreed with
– Anja Kaspersen
Disagreed on
Feasibility of meaningful human control and accountability
Modern AI systems can provide explicability for their decision-making processes
Explanation
Newer artificial intelligence systems build in explicability features that allow the AI to give account for why it reached a decision. This creates progressive iteration of explicability that can be better than human accountability, since humans can lie, obscure, or be unsure about their reasoning.
Evidence
Development of AI systems trained to trace decision-making processes and the progressive science of AI explicability
Major discussion point
Human Control and Accountability
Topics
Legal and regulatory | Digital standards
Agreed with
– Chris Painter
– Anja Kaspersen
Agreed on
Technical challenges and limitations of current AI systems
Disagreed with
– Anja Kaspersen
Disagreed on
Feasibility of meaningful human control and accountability
Anja Kaspersen
Speech speed
163 words per minute
Speech length
3161 words
Speech time
1160 seconds
AI is not a bounded tool but a methodology that reorganizes how war is conceived and operationalized
Explanation
AI should not be treated as a simple technological tool but rather as a social, technical, and economic methodology that fundamentally reorganizes how war is imagined, operationalized, and bureaucratized. It alters the concept of decision-making itself, shifting authority from experience and judgment toward inference and correlation.
Major discussion point
Technical Challenges and AI System Reliability
Topics
Cyberconflict and warfare | Legal and regulatory
Disagreed with
– Benjamin Tallis
Disagreed on
Fundamental nature and readiness of autonomous weapons technology
Current AI systems cannot replicate human judgment, discernment, and contextual understanding required for warfare
Explanation
Systems that do not and cannot reason are being asked to infer intent and respond to shifting environments while remaining predictable, but they lack the contextual understanding this requires. Special forces are trained for discernment to override instinct and exercise calibrated judgment – human traits that no current machine learning system can replicate.
Evidence
Reference to special forces training for discernment, interpretation of ambiguity, and calibrated judgment
Major discussion point
Technical Challenges and AI System Reliability
Topics
Cyberconflict and warfare | Human rights principles
Agreed with
– Chris Painter
– Benjamin Tallis
Agreed on
Technical challenges and limitations of current AI systems
Disagreed with
– Benjamin Tallis
Disagreed on
Fundamental nature and readiness of autonomous weapons technology
Systems may function perfectly but still fail ethically, operationally, or politically
Explanation
A model may function as intended and still degrade trust, escalate instability, trigger proliferation, or fail ethically and politically. This raises important questions about compliance with international humanitarian law – whether something can be compliant in war but non-compliant in peace, or compliant yet still harmful.
Major discussion point
Technical Challenges and AI System Reliability
Topics
Legal and regulatory | Human rights principles
Disagreed with
– Benjamin Tallis
Disagreed on
Feasibility of meaningful human control and accountability
Procurement processes often involve pre-trained, modular systems abstracted from operational realities
Explanation
Most institutions don’t build AI systems but procure them, and increasingly these systems are pre-trained, modular, and abstracted from operational realities. This introduces profound misalignments, especially when end users have little involvement in setting technical specifications.
Evidence
Reference to IEEE P3119 cross-sector global procurement standard for interrogating vendor claims and surfacing hidden risks
Major discussion point
Technical Challenges and AI System Reliability
Topics
Legal and regulatory | Economic
Disagreed with
– Benjamin Tallis
Disagreed on
Procurement and industry involvement in system development
Commander’s intent cannot be effectively translated to machines due to complexity of human cognitive processes
Explanation
Commander’s intent is not a checklist or input but a deep cognitive and ethical practice involving anticipation, discernment, and alignment across dynamic conditions. In human-to-human operations it’s already complex, but in human-machine interaction it becomes nearly impossible to properly translate and maintain.
Major discussion point
Human Control and Accountability
Topics
Cyberconflict and warfare | Human rights principles
Disagreed with
– Benjamin Tallis
Disagreed on
Feasibility of meaningful human control and accountability
AI systems rely on legacy hardware and network architectures not designed for autonomous features
Explanation
Most current AI deployments rely heavily on legacy hardware and network-centric architectures that were not designed for systems with autonomous features. These architectures introduce friction, fragmentation, and vulnerabilities, especially when retrofitted for high-intensity compute loads, risking interoperability.
Evidence
Issues particularly evident in joint or cross-force environments where systems must function across organizational, national, and technical boundaries
Major discussion point
Infrastructure and Systemic Concerns
Topics
Infrastructure | Digital standards
Advanced AI systems are highly energy-intensive, making power infrastructure a strategic vulnerability
Explanation
Advanced AI systems, particularly those involving real-time inference or large-scale simulation, are computationally and energy intensive. Any serious conversation about AI and network-centric warfare must consider power in the literal sense – electricity, resilience, energy availability, and infrastructure security.
Major discussion point
Infrastructure and Systemic Concerns
Topics
Infrastructure | Critical infrastructure
Data integrity, provenance, and security are central to both system performance and trust
Explanation
Data is the backbone of AI-enabled systems, but it’s never just ambient infrastructure – it’s collected, conditioned, labeled, and selected by someone, for some purpose, under particular constraints. Questions of whose data, how it was obtained, why it was chosen, and for what outcome are central to performance and trust.
Evidence
Risks of tampering, poisoning, and silent drift are particularly real in military and intelligence contexts
Major discussion point
Infrastructure and Systemic Concerns
Topics
Privacy and data protection | Cybersecurity
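As an illustrative aside (not something proposed in the session), one narrow, well-established technique behind data provenance is to record a cryptographic fingerprint of a dataset at collection time and re-verify it before training or deployment, so that tampering with stored files or silent drift is at least detectable. A minimal sketch, with hypothetical file names and digests:

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def verify(path: Path, expected_digest: str) -> bool:
    """Re-check a dataset against the digest recorded at collection time."""
    return fingerprint(path) == expected_digest

# Hypothetical usage: the expected digest would come from a provenance log
# written when the data was collected, labeled and conditioned.
# ok = verify(Path("training_data.bin"), "<digest recorded at collection>")
```

This covers integrity only; the questions raised above about whose data it is, how it was obtained, and why it was selected require institutional processes rather than code.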
Technology cannot sanitize violence or eliminate moral responsibility in warfare decisions
Explanation
Referencing Christopher Coker’s work, there’s a dangerous illusion that technology could sanitize violence or that increased automation could make war more humane. Technology may obscure the moral weight of decision-making or create abstraction, but it does not eliminate responsibility.
Evidence
Reference to Christopher Coker’s academic work at London School of Economics on the illusion of sanitized violence through technology
Major discussion point
Ethical and Humanitarian Concerns
Topics
Human rights principles | Cyberconflict and warfare
Design decisions in AI systems are inherently political and shape how power and discretion are distributed
Explanation
Design decisions are never neutral – they determine what is seen, measurable, and what forms of harm and risk are rendered invisible. These decisions shape how systems respond to ambiguity and how power and discretion are distributed, and once baked into architecture, become inaccessible to oversight.
Major discussion point
Ethical and Humanitarian Concerns
Topics
Legal and regulatory | Human rights principles
Effective governance must focus on responsible decision-making processes rather than projecting responsibility onto systems
Explanation
Instead of projecting responsibility onto the system itself, governance should focus on responsible decision-making processes at the human and institutional level. Responsibility lies in the processes and choices that govern system design, deployment, oversight, and use, not in the tool itself.
Major discussion point
Ethical and Humanitarian Concerns
Topics
Legal and regulatory | Human rights principles
Agreed with
– Aloisia Wörgetter
– Benjamin Tallis
– Audience
Agreed on
Importance of accountability and responsibility frameworks
Elena Plexida
Speech speed
162 words per minute
Speech length
608 words
Speech time
224 seconds
Need for norms protecting internet core infrastructure from weaponization
Explanation
There should be norms ensuring that autonomous weapons systems and other weapons do not harm or weaponize the core of the internet – technical parameters essential for internet function like protocols, DNS, internet exchange points, and cable systems. Attacks on these would threaten global internet stability.
Evidence
Reference to the Global Commission on the Stability of Cyberspace norm about protecting internet core infrastructure, noting the internet as a common good and peace project
Major discussion point
Infrastructure and Systemic Concerns
Topics
Critical internet resources | Cyber norms
Audience
Speech speed
141 words per minute
Speech length
393 words
Speech time
166 seconds
Risk of overuse by democracies due to reduced human costs and psychological disconnect from warfare
Explanation
Democracies might overuse autonomous weapons technology in limited force scenarios because there are no human costs to their own forces, creating massive asymmetries in warfare. This could lead to psychological disconnect and overreliance on data-driven targeting decisions without proper consideration of who determines targets.
Evidence
Reference to drone strikes under Obama administration and concerns about psychological harm to populations under constant threat of autonomous strikes
Major discussion point
Infrastructure and Systemic Concerns
Topics
Cyberconflict and warfare | Human rights principles
Questions about who bears responsibility when AI systems cause collateral damage remain unresolved
Explanation
When autonomous weapons systems cause illegal collateral damage, it’s unclear whether responsibility lies with the state using the software or the companies developing and selling the software as precision tools. The question of accountability for AI system ‘hallucinations’ or errors remains unresolved.
Evidence
Reference to collateral damage in Israeli operations in Gaza involving AI-assisted target selection
Major discussion point
Human Control and Accountability
Topics
Legal and regulatory | Human rights principles
Agreed with
– Aloisia Wörgetter
– Benjamin Tallis
– Anja Kaspersen
Agreed on
Importance of accountability and responsibility frameworks
Moderator
Speech speed
149 words per minute
Speech length
327 words
Speech time
131 seconds
Remote participation requires specific protocols and etiquette for effective multi-stakeholder dialogue
Explanation
The moderator established clear rules for remote audiences including entering with full names, using Zoom hand-raising functions, switching on video when speaking, stating name and affiliation, and not sharing meeting links. These protocols are essential for managing hybrid discussions on complex technical topics like autonomous weapons systems.
Evidence
Specific instructions given: ‘enter with your full name’, ‘raise your hand using the Zoom function’, ‘switch on the video, state your name and affiliation’, ‘do not share the links to Zoom meetings’
Major discussion point
Regulation and International Governance of Autonomous Weapons Systems
Topics
Digital standards | Legal and regulatory
Multi-stakeholder technical discussions benefit from structured moderation to ensure all voices are heard
Explanation
The moderator facilitated participation from both in-person and online participants, managing questions from the floor and online submissions. This approach ensures that complex technical and policy discussions about autonomous weapons systems include diverse perspectives from different stakeholder groups and geographical locations.
Evidence
Managed questions from ‘Brahim Alla, intern at Acedel’ and ‘Frances from YouthDIG’, and online questions from ‘Monika’, while coordinating with the remote moderator
Major discussion point
Regulation and International Governance of Autonomous Weapons Systems
Topics
Legal and regulatory | Human rights principles
Agreed with
– Aloisia Wörgetter
– Anja Kaspersen
Agreed on
Multi-stakeholder approach is essential for AWS governance
Agreements
Agreement points
Multi-stakeholder approach is essential for AWS governance
Speakers
– Aloisia Wörgetter
– Anja Kaspersen
– Moderator
Arguments
Multi-stakeholder approach essential including diplomats, military, industry, tech sector, and civil society
Effective governance must focus on responsible decision-making processes rather than projecting responsibility onto systems
Multi-stakeholder technical discussions benefit from structured moderation to ensure all voices are heard
Summary
All speakers agreed that addressing autonomous weapons systems requires inclusive participation from diverse stakeholders including diplomats, military, industry, tech sector, civil society, and academia to ensure comprehensive governance frameworks.
Topics
Legal and regulatory | Human rights principles
Technical challenges and limitations of current AI systems
Speakers
– Chris Painter
– Anja Kaspersen
– Benjamin Tallis
Arguments
AI systems are vulnerable to cybersecurity attacks that could make them less reliable
Current AI systems cannot replicate human judgment, discernment, and contextual understanding required for warfare
Modern AI systems can provide explicability for their decision-making processes
Summary
Speakers acknowledged significant technical challenges with AI systems, including cybersecurity vulnerabilities and limitations in replicating human judgment, while also recognizing advances in AI explicability.
Topics
Cybersecurity | Cyberconflict and warfare | Legal and regulatory
Importance of accountability and responsibility frameworks
Speakers
– Aloisia Wörgetter
– Benjamin Tallis
– Anja Kaspersen
– Audience
Arguments
Meaningful human control essential for ensuring proportionality, distinction, and accountability
Accountability requires tracing intent, command delegation, and effects back through the chain of responsibility
Effective governance must focus on responsible decision-making processes rather than projecting responsibility onto systems
Questions about who bears responsibility when AI systems cause collateral damage remain unresolved
Summary
All speakers emphasized the critical importance of establishing clear accountability frameworks and maintaining meaningful human control in autonomous weapons systems, though they differed on implementation approaches.
Topics
Legal and regulatory | Human rights principles | Cyberconflict and warfare
Similar viewpoints
These speakers shared a strong preference for legally binding international regulation of autonomous weapons systems, with structured approaches that differentiate between types of systems based on their human control capabilities.
Speakers
– Aloisia Wörgetter
– Wolfgang Kleinwächter
– Speaker (Stop Killer Robots)
Arguments
Austria leads international efforts for legally binding AWS regulation by 2026
Two-tier regulatory approach: prohibit systems without human control, regulate systems with certain human control
Complete ban needed on AWS that cannot comply with international humanitarian law
Topics
Legal and regulatory | Cyberconflict and warfare
Both speakers expressed skepticism about the feasibility of reaching comprehensive international agreements in the current geopolitical climate, while emphasizing the transformative nature of AI technologies.
Speakers
– Chris Painter
– Anja Kaspersen
Arguments
Geopolitical tensions make binding international agreements extremely difficult in the short term
AI is not a bounded tool but a methodology that reorganizes how war is conceived and operationalized
Topics
Legal and regulatory | Cyberconflict and warfare
Both speakers acknowledged that autonomous weapons systems operate within traditional military command structures, though they disagreed on whether commander’s intent can be effectively translated to machines.
Speakers
– Benjamin Tallis
– Anja Kaspersen
Arguments
Current systems follow traditional military command and control principles of delegated bounded autonomy
Commander’s intent cannot be effectively translated to machines due to complexity of human cognitive processes
Topics
Cyberconflict and warfare | Legal and regulatory
Unexpected consensus
Need for infrastructure protection in AWS governance
Speakers
– Elena Plexida
– Anja Kaspersen
– Chris Painter
Arguments
Need for norms protecting internet core infrastructure from weaponization
Advanced AI systems are highly energy-intensive, making power infrastructure a strategic vulnerability
AI systems are vulnerable to cybersecurity attacks that could make them less reliable
Explanation
Despite coming from different sectors (internet governance, technical community, cybersecurity), these speakers unexpectedly converged on the critical importance of protecting underlying infrastructure systems that support autonomous weapons operations, recognizing infrastructure as a strategic vulnerability.
Topics
Critical internet resources | Infrastructure | Cybersecurity
Acknowledgment of democratic values in AWS development
Speakers
– Benjamin Tallis
– Aloisia Wörgetter
– Anja Kaspersen
Arguments
Losing the arms race to authoritarian regimes would have worse consequences than participating in it
AWS raises fundamental concerns about right to life, human dignity, and risk of destabilizing arms race
Technology cannot sanitize violence or eliminate moral responsibility in warfare decisions
Explanation
Despite representing different perspectives (industry, diplomacy, technical community), these speakers unexpectedly agreed that democratic values and human rights principles must be central to AWS development, even while disagreeing on implementation approaches.
Topics
Human rights principles | Cyberconflict and warfare
Overall assessment
Summary
The speakers demonstrated significant agreement on fundamental principles including the need for multi-stakeholder governance, importance of accountability frameworks, recognition of technical challenges, and protection of democratic values and infrastructure. However, they diverged on implementation approaches, with industry representatives favoring evolutionary approaches within existing military frameworks, while diplomats and civil society advocates pushed for comprehensive legal prohibitions.
Consensus level
Moderate consensus on principles with significant divergence on implementation. This suggests that while there is a foundation for continued dialogue and potential agreement on basic governance frameworks, achieving binding international regulation will require bridging substantial gaps between stakeholder perspectives on how to operationalize shared principles. The unexpected areas of consensus on infrastructure protection and democratic values provide potential common ground for future negotiations.
Differences
Different viewpoints
Fundamental nature and readiness of autonomous weapons technology
Speakers
– Benjamin Tallis
– Anja Kaspersen
Arguments
Autonomous weapons represent evolution of 50-year precision networked warfare revolution, not entirely new technology
Current systems follow traditional military command and control principles of delegated bounded autonomy
AI is not a bounded tool but a methodology that reorganizes how war is conceived and operationalized
Current AI systems cannot replicate human judgment, discernment, and contextual understanding required for warfare
Summary
Tallis views autonomous weapons as an evolutionary development of existing military systems that can be managed through traditional command structures, while Kaspersen argues that AI fundamentally transforms warfare in ways that current systems cannot adequately handle, particularly regarding human judgment and contextual understanding.
Topics
Cyberconflict and warfare | Legal and regulatory
Feasibility of meaningful human control and accountability
Speakers
– Benjamin Tallis
– Anja Kaspersen
Arguments
Advanced battle networks can restore mission command and commanders’ intent through AI enhancement
Modern AI systems can provide explicability for their decision-making processes
Accountability requires tracing intent, command delegation, and effects back through the chain of responsibility
Commander’s intent cannot be effectively translated to machines due to complexity of human cognitive processes
Systems may function perfectly but still fail ethically, operationally, or politically
Summary
Tallis believes that AI can actually enhance human control and provide better accountability through explicable systems and traditional command structures, while Kaspersen argues that the complexity of human cognitive processes like commander’s intent cannot be effectively translated to machines, making meaningful control nearly impossible.
Topics
Human rights principles | Legal and regulatory | Cyberconflict and warfare
Procurement and industry involvement in system development
Speakers
– Benjamin Tallis
– Anja Kaspersen
Arguments
Most institutions don’t build AI systems but procure them, and increasingly these systems are pre-trained, modular, and abstracted from operational realities
Summary
Tallis disagreed with Kaspersen’s analysis of procurement processes, stating that his company works closely with customers and end users to understand capabilities and specifications, while Kaspersen argued that most procurement involves off-the-shelf systems that create misalignments between vendor capabilities and operational needs.
Topics
Legal and regulatory | Economic
Timeline and feasibility of international regulation
Speakers
– Chris Painter
– Aloisia Wörgetter
Arguments
Geopolitical tensions make binding international agreements extremely difficult in the short term
Austria leads international efforts for legally binding AWS regulation by 2026
Summary
Painter expressed pessimism about achieving binding international agreements due to geopolitical divisions and countries pulling back from AI guidelines, while Wörgetter remained optimistic about Austria’s leadership in achieving legally binding regulation by 2026, citing broad societal mobilization on AI issues.
Topics
Legal and regulatory | Cyberconflict and warfare
Approach to regulation – complete ban versus graduated control
Speakers
– Speaker (Stop Killer Robots)
– Wolfgang Kleinwächter
Arguments
Complete ban needed on AWS that cannot comply with international humanitarian law
Two-tier regulatory approach: prohibit systems without human control, regulate systems with certain human control
Summary
The civil society representative advocated for a complete ban on autonomous weapons systems that cannot comply with international humanitarian law, while Kleinwächter supported a more nuanced two-tier approach that would prohibit some systems while regulating others based on levels of human control.
Topics
Legal and regulatory | Human rights principles
Unexpected differences
Industry representation and transparency in defense AI development
Speakers
– Benjamin Tallis
– Anja Kaspersen
Arguments
Modern AI systems can provide explicability for their decision-making processes
Design decisions in AI systems are inherently political and shape how power and discretion are distributed
Explanation
Despite their shared technical backgrounds, the two speakers disagreed fundamentally about whether AI systems can provide adequate transparency and explicability. This was unexpected and reflects deeper philosophical differences about the nature of AI decision-making and accountability.
Topics
Legal and regulatory | Digital standards
Optimism versus pessimism about multilateral diplomacy in AI governance
Speakers
– Aloisia Wörgetter
– Chris Painter
Arguments
Austria leads international efforts for legally binding AWS regulation by 2026
Geopolitical tensions make binding international agreements extremely difficult in the short term
Explanation
The stark contrast between the Austrian diplomat’s optimism about achieving binding regulation and the former US cyber ambassador’s pessimism about international cooperation was unexpected, revealing different perspectives on the effectiveness of multilateral diplomacy in the current geopolitical climate.
Topics
Legal and regulatory | Cyberconflict and warfare
Overall assessment
Summary
The discussion revealed significant disagreements across multiple dimensions: technical feasibility of human control, timeline for regulation, industry practices, and fundamental approaches to governance. The most pronounced disagreement was between the industry representative’s optimistic view of technological solutions and the technical expert’s concerns about systemic limitations.
Disagreement level
High level of disagreement with significant implications for policy development. The disagreements span technical, ethical, and political dimensions, suggesting that achieving consensus on autonomous weapons regulation will require bridging fundamental differences in how stakeholders understand the technology, its risks, and appropriate governance approaches. However, the respectful nature of disagreements and some areas of partial agreement suggest that continued dialogue could be productive.
Takeaways
Key takeaways
Autonomous weapons systems represent an evolution of existing military technology rather than an entirely revolutionary development, building on 50 years of precision networked warfare advancement
A multi-stakeholder approach involving diplomats, military, industry, tech sector, and civil society is essential for effective governance of AWS
Current AI systems cannot replicate human judgment, discernment, and contextual understanding required for complex warfare decisions, particularly around commander’s intent
Geopolitical tensions and polarization make reaching binding international agreements on AWS extremely difficult in the short term
The debate fundamentally centers on command and control principles – specifically how to delegate bounded autonomy to machines while maintaining meaningful human oversight
AI systems are vulnerable to cybersecurity attacks that could compromise their reliability and be exploited by adversaries
Infrastructure dependencies, including energy requirements and legacy network architectures, create strategic vulnerabilities in AI-enabled weapons systems
Accountability and responsibility frameworks remain complex, involving multiple levels from political leadership to command structure to individual operators
The shift from military-developed technology to commercially developed technology adapted for military use changes traditional procurement and oversight processes
Resolutions and action items
Continue the multi-stakeholder dialogue series with a third workshop planned for the Internet Governance Forum in Oslo in June/July
Austria will continue leading international efforts toward a legally binding instrument on AWS regulation by the UN Secretary-General’s 2026 deadline
Ongoing informal consultations in New York (mentioned as taking place during the session) to advance negotiations
Industry engagement to continue working with democratic governments to establish proper boundaries and guardrails for AI system use
Technical community to continue developing standards for AI procurement and ethical design through IEEE processes (P3119 and P7000 series mentioned)
Unresolved issues
How to effectively translate commander’s intent to autonomous systems while maintaining meaningful human control
Who bears legal responsibility when AI systems cause collateral damage – the state using the software or companies developing it
How to prevent overuse of autonomous weapons by democracies due to reduced human costs and psychological disconnect
How to address the risk of adversaries hacking or taking control of autonomous weapons systems
How to balance the need for technological advancement with ethical constraints in a competitive geopolitical environment
How to regulate systems that may be compliant with international humanitarian law but still cause broader harm
How to address the energy infrastructure vulnerabilities created by AI-intensive weapons systems
How to manage the shift from traditional defense procurement to off-the-shelf commercial technology acquisition
How to establish international norms when authoritarian adversaries show less concern for ethical constraints
Suggested compromises
Two-tier regulatory approach: completely prohibit weapons systems where human control is impossible, while regulating systems that maintain certain types of human control
Focus on responsible decision-making processes at human and institutional levels rather than projecting responsibility onto the systems themselves
Develop technical standards for AI procurement that help organizations interrogate vendor claims and surface hidden risks before deployment
Build AI systems with explicability features that can account for their decision-making processes to enable better oversight
Design systems to ‘fail safely and visibly’ rather than trying to prevent all failures
Establish norms protecting critical internet infrastructure as a common good while allowing for legitimate military applications
Engage proactively with industry (similar to the chemical industry’s engagement in the Chemical Weapons Convention) to ensure regulations are implementable
Focus on building deterrence capabilities while maintaining democratic values and ethical constraints that distinguish democracies from authoritarian regimes
Thought provoking comments
We must stop treating AI as a bounded technological tool. AI is not a weapon system in a traditional sense. It is a social, technical, economic methodology, if you may. It reorganizes how war is imagined, operationalized and bureaucratized. It alters the concept of decision making itself, shifting authority away from experience and judgment toward inference and correlation.
Speaker
Anja Kaspersen
Reason
This comment fundamentally reframes the entire discussion by challenging the basic assumption that AI weapons are simply advanced tools. Instead, Kaspersen presents AI as a transformative methodology that restructures the very nature of warfare and decision-making. This shifts the debate from technical specifications to systemic transformation.
Impact
This comment elevated the discussion from tactical considerations to strategic and philosophical implications. It forced other panelists to address not just how AI weapons work, but how they fundamentally change the nature of military decision-making and command structures.
I would put it to you that actually advances in precision that follow the same rules of delegation are a potential advance for democracies… democracies do not want to fight wars of attrition. We value our people too much. We actually want to have the kind of precise weapons and make use of the kind of asymmetric capabilities that reflect our inherent advantages as societies.
Speaker
Benjamin Tallis
Reason
This comment is provocative because it directly challenges the prevailing ethical concerns by arguing that autonomous weapons could actually be more ethical and democratic. Tallis reframes the debate from ‘should we develop these weapons’ to ‘how can democracies use their technological advantages responsibly while competing with authoritarian regimes.’
Impact
This comment created a clear ideological divide in the discussion and forced other participants to grapple with the geopolitical reality versus ethical ideals. It shifted the conversation from abstract ethics to practical strategic considerations in a multipolar world.
Can something be in compliance and still be harmful? Can something be compliant in war but be highly non-compliant in peace? We have to think through these scenarios.
Speaker
Anja Kaspersen
Reason
This comment introduces a crucial paradox that challenges the adequacy of existing legal frameworks. It suggests that technical compliance with international humanitarian law may not be sufficient to address the broader implications of autonomous weapons systems.
Impact
This observation deepened the legal and ethical analysis by highlighting the limitations of current regulatory approaches. It prompted discussion about the need for new frameworks that go beyond traditional compliance metrics.
The geopolitical considerations outweigh any ability to really reach an agreement… I don’t have a huge amount of confidence we’re going to make progress in the short term.
Speaker
Chris Painter
Reason
This comment provides a sobering reality check on the entire regulatory enterprise. Painter’s pessimism, based on his extensive experience in international cybersecurity negotiations, challenges the optimistic assumptions underlying the diplomatic efforts.
Impact
This comment injected realism into what had been a largely theoretical discussion. It forced participants to confront the practical limitations of multilateral governance in the current geopolitical climate and influenced the ambassador’s closing remarks about the need for continued dialogue despite challenges.
One of the missing things in our current discourse is the inability or the diminishing ability of just sitting with contrasting realities and being uncomfortable. I think it’s worth being uncomfortable with this space… We may have agreement on the technical side, but we may disagree on what the impact would be and how OK we are with that.
Speaker
Anja Kaspersen
Reason
This meta-commentary on the discussion itself is profound because it addresses the epistemological challenge of dealing with complex, uncertain technologies. Kaspersen advocates for embracing uncertainty and disagreement rather than seeking premature consensus.
Impact
This comment transformed the tone of the entire discussion, legitimizing disagreement and uncertainty as valuable rather than problematic. It created space for more nuanced positions and influenced the ambassador’s closing remarks about being ‘in deliberation’ rather than ‘in controversy.’
Most institutions, military or otherwise, do not build AI systems. They procure them. Increasingly, these systems are pre-trained, modular and abstracted from operational realities. This introduces profound misalignments, especially when end users have little involvement in setting technical specifications.
Speaker
Anja Kaspersen
Reason
This comment reveals a critical gap between the theoretical discussions about AI weapons and the practical reality of how they are actually developed and deployed. It highlights the disconnect between policy discussions and procurement practices.
Impact
This observation shifted the discussion toward practical implementation challenges and the role of industry in shaping capabilities. It prompted Tallis to defend his company’s approach and led to a more detailed discussion of the industry-military relationship.
Overall assessment
These key comments fundamentally shaped the discussion by introducing multiple layers of complexity that moved the conversation beyond simple pro/con positions on autonomous weapons. Kaspersen’s contributions consistently elevated the analytical framework, challenging participants to think systemically rather than technically. Tallis’s industry perspective forced ethical considerations to engage with geopolitical realities, while Painter’s pessimism grounded idealistic regulatory aspirations in practical constraints. The most transformative aspect was Kaspersen’s meta-commentary on embracing disagreement and uncertainty, which created intellectual space for nuanced positions and productive dialogue despite fundamental disagreements. This approach influenced the ambassador’s closing remarks and demonstrated how complex policy discussions can benefit from acknowledging rather than resolving tensions between competing values and perspectives.
Follow-up questions
How realistic or unrealistic is the debate about human control over autonomous weapons systems, particularly regarding meaningful human control and human oversight?
Speaker
Wolfgang Kleinwächter
Explanation
This is identified as a key issue in the debate that requires a technical perspective to assess feasibility
What type of human control is realistic in autonomous weapons systems, given the agreed two-tier approach of prohibiting systems where human control is impossible and regulating systems with certain types of human control?
Speaker
Wolfgang Kleinwächter
Explanation
This relates to the ongoing CCW negotiations and the practical implementation of human control requirements
How can commander’s intent be effectively translated and maintained in human-machine interactions, especially given the complexity already present in human-to-human operations?
Speaker
Anja Kaspersen
Explanation
This addresses a fundamental challenge in autonomous systems where intent must be preserved across dynamic conditions
How can we address the vulnerability of AI-enabled autonomous weapons systems to cybersecurity attacks that could make them less reliable and amplify existing problems?
Speaker
Chris Painter
Explanation
This highlights a critical security concern that could undermine the reliability of autonomous weapons systems
Would it be possible to shut down areas, regions, or countries voluntarily as a modern warfare strategy, and how would this influence autonomously guided weapon systems?
Speaker
Brahim Alla
Explanation
This explores potential warfare strategies and their interaction with autonomous systems
Will the precision and reduced human cost of autonomous weapons lead to overuse by democracies in limited force scenarios, creating massive asymmetries in warfare?
Speaker
Frances (YouthDIG)
Explanation
This addresses concerns about lowered thresholds for conflict due to reduced perceived costs
How realistic is a scenario where enemies develop counter-AI battle systems to hack and redirect autonomous weapons they don’t own?
Speaker
Online participant
Explanation
This explores potential countermeasures and vulnerabilities in autonomous weapons systems
When is software safe enough to be delegated target selection tasks, and who should be held responsible for collateral damage – the state using the software or the companies developing it?
Speaker
Monika (online participant)
Explanation
This addresses accountability and liability issues in autonomous weapons deployment
How can we develop better international technical standards for autonomous weapons systems, especially for interoperability across organizational, national, and technical boundaries?
Speaker
Anja Kaspersen
Explanation
This is essential for ensuring systems can function effectively in joint operations while maintaining safety
How can we address the energy infrastructure requirements and vulnerabilities of AI-intensive autonomous weapons systems?
Speaker
Anja Kaspersen
Explanation
This highlights the critical dependency on power infrastructure that could be a strategic vulnerability
How can we ensure that procurement processes for autonomous weapons systems adequately address ethical considerations and end-user requirements?
Speaker
Anja Kaspersen
Explanation
This addresses the gap between system development and operational realities in military procurement
How can we develop governance frameworks that shift from prediction to adaptation, ensuring systems fail safely and visibly?
Speaker
Anja Kaspersen
Explanation
This is crucial for managing the unpredictable nature of AI systems in dynamic environments
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.