Open Forum #79 Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative

26 Jun 2025 16:00h - 17:00h


Session at a glance

Summary

This discussion at the Internet Governance Forum focused on autonomous weapon systems (AWS) and their regulation, featuring perspectives from diplomats, industry, academia, and civil society. Wolfgang Kleinwachter moderated the session, noting how military applications have become increasingly relevant to internet governance discussions over the past two decades.


Vint Cerf opened by emphasizing that computers have been integral to weapon systems since their inception, citing examples from ENIAC’s ballistic calculations to modern hypersonic weapons and multi-drone attacks. He highlighted the central challenge of maintaining human control over targeting decisions while addressing high-velocity, large-scale attacks that require computational assistance.


Austrian Ambassador Stefan Pehringer outlined his country’s leadership role in pushing for international AWS regulation, describing Austria’s initiative to move discussions beyond the exclusive Geneva-based forums to include broader stakeholder participation. He emphasized the urgency of establishing legally binding instruments by 2026, calling this potentially “the Oppenheimer moment of our generation.”


Benjamin Tallis from Helsing, Europe’s largest defense AI company, argued that democracies are in an AI arms race they cannot afford to lose to authoritarian states. He contended that ethical considerations should be viewed as competitive advantages and that autonomous systems represent an evolution of existing military command structures rather than a revolution.


Anja Kaspersen from IEEE provided a technical perspective, warning against treating AI as merely a bounded tool. She argued that AI fundamentally reorganizes how war is conceptualized and shifts decision-making away from human judgment toward automation, potentially undermining the concept of commander’s intent.


The discussion revealed significant disagreements about the risks, benefits, and appropriate regulation of autonomous weapon systems, though participants agreed on the complexity and urgency of addressing these emerging technologies.


Key points

## Major Discussion Points:


– **The urgency of regulating autonomous weapons systems (AWS)** – Multiple speakers emphasized that we are at a critical “crossroads” or “Oppenheimer moment” where international regulation must be established before these weapons proliferate globally, with Austria leading efforts for a legally binding instrument by 2026.


– **The tension between military necessity and ethical concerns** – The discussion highlighted the fundamental conflict between the military’s need for AI-assisted weapons to respond to high-velocity, large-scale attacks versus ethical concerns about delegating life-and-death decisions to machines without meaningful human control.


– **Technical capabilities versus accountability challenges** – Speakers debated whether current AI systems can reliably make targeting decisions, with industry representatives arguing for AI’s precision advantages while technical experts warned about system failures, bias, and the impossibility of ensuring accountability when machines make autonomous decisions.


– **Geopolitical dimensions and the “AI arms race”** – The conversation addressed how different nations (US, China, Russia, European countries) approach AWS regulation differently, with concerns that democracies might fall behind authoritarian states that don’t share the same ethical constraints.


– **Multi-stakeholder governance and the need for broader public engagement** – Participants emphasized moving AWS discussions beyond military and diplomatic circles to include civil society, industry, academia, and affected populations, particularly from the Global South who may be disproportionately impacted.


## Overall Purpose:


The discussion aimed to bring together diverse stakeholders at the Internet Governance Forum to examine autonomous weapons systems from multiple perspectives – legal, ethical, technical, military, and human rights – in order to raise public awareness and contribute to ongoing international negotiations on AWS regulation.


## Overall Tone:


The discussion maintained a serious, urgent tone throughout, with speakers consistently emphasizing the critical nature of the decisions being made about AWS. While respectful and academic, there were clear disagreements between industry representatives who saw AI weapons as necessary for democratic defense and civil society advocates who viewed them as fundamentally incompatible with human dignity. The tone became somewhat more tense when discussing geopolitical divisions, but remained constructive overall, with participants acknowledging the complexity of the issues and the need for continued dialogue.


Speakers

**Speakers from the provided list:**


– **Wolfgang Kleinwachter** – Moderator of the session; professor and academic expert in internet governance


– **Vint Cerf** – Father of the Internet, Chair of the leadership panel of the IGF (Internet Governance Forum)


– **Stefan Pehringer** – Ambassador from Austria, representing the Austrian government’s initiative on autonomous weapons systems


– **Benjamin Tallis** – Representative of Helsing (Europe’s largest defense AI company), speaking from Berlin


– **Anja Kaspersen** – Representative of IEEE-SA (Institute of Electrical and Electronics Engineers Standards Association), participating online due to injury


– **Olga Cavalli** – Dean of the Defense University in Argentina, Ministry of Defense in Buenos Aires, representing Global South perspective


– **Peixi Xu** – Professor at the Communication University of China in Beijing, providing a Chinese perspective


– **Gerald Folkvord** – Representative from Amnesty International Norway, involved in the civil society movement Stop Killer Robots


– **Chris Painter** – Former cyber ambassador in the US State Department, cyber security expert participating online from Washington DC


**Additional speakers:**


None – all speakers mentioned in the transcript were included in the provided speakers list.


Full session report

# Autonomous Weapons Systems: Navigating the Intersection of Technology, Ethics, and International Governance


## Executive Summary


This Internet Governance Forum session brought together diverse stakeholders to examine the regulation of autonomous weapons systems (AWS). Moderated by Professor Wolfgang Kleinwachter, the discussion featured perspectives from diplomats, industry representatives, academics, civil society advocates, and technical experts. The session continued discussions from the previous year’s IGF in Riyadh, highlighting the growing importance of military applications in internet governance forums.


The conversation revealed significant disagreements about AI capabilities in military applications, regulatory approaches, and international cooperation strategies, while also identifying shared concerns about the need for human oversight and broader stakeholder engagement in governance discussions.


## Introduction and Context


Wolfgang Kleinwachter opened by noting the evolution of internet governance discussions, observing that “Twenty years ago, when WSIS started, nobody talked about the military domain.” He emphasized how military applications have become increasingly relevant to these forums, building on discussions from the previous year’s IGF in Riyadh.


Vint Cerf, Chair of the IGF leadership panel, provided historical context by noting that computers have been integral to weapon systems since their inception. He cited ENIAC, which was “in operation around 1945” and whose “first use was to calculate ballistics tables.” However, Cerf acknowledged that modern developments present unprecedented challenges, particularly regarding hypersonic weapons, satellites operating at 17,000 mph, multi-drone attacks, and maintaining human control over targeting decisions in high-velocity scenarios.


## The Austrian Initiative


Austrian Ambassador Stefan Pehringer outlined Austria’s leadership in pushing for international AWS regulation, describing efforts to move discussions beyond exclusive Geneva-based forums to include broader stakeholder participation. He emphasized the urgency of establishing legally binding instruments by 2026, characterizing this moment as potentially “the Oppenheimer moment of our generation.”


Pehringer argued that autonomous weapons raise fundamental questions about human dignity, as machines making life-and-death decisions contradicts basic principles of human dignity. He stressed that meaningful human control is necessary to ensure proportionality, distinction, and accountability in weapon systems, and that delays in regulation by democratic countries work against their long-term interests.


## Industry Perspective


Benjamin Tallis from Helsing, Europe’s largest defense AI company, presented a different perspective, arguing that democracies find themselves in an AI arms race they cannot afford to lose to authoritarian states. He referenced a recent article in The Economist and mentioned his boss Gundbert Scherf’s call for a “Manhattan project for AI.”


Tallis argued that military command and control has always been based on delegated autonomy, and that autonomous weapons represent an evolution rather than a revolution. He emphasized investment in explicable AI that can provide accountability and remain within defined bounds, arguing that sensor and data fusion capabilities can enhance precision. He also referenced historical examples, such as the Panzerabwehrrichtmine smart anti-tank mine of the 1980s and naval overpressure influence mines, to illustrate existing autonomous capabilities.


## Technical Challenges


Anja Kaspersen from the IEEE Standards Association provided a technical perspective, challenging fundamental assumptions about AI capabilities. She cited recent studies from Salesforce, Apple, IBM and Shumailov showing that large reasoning models collapse under pressure, generating confident outputs that fall apart under compound reasoning.


Kaspersen warned about procurement challenges, noting that most institutions receive pre-trained models abstracted from operational realities. She referenced “system collapse” research and work by Eryk Salvaggio, emphasizing that legacy military architectures weren’t designed for high-intensity compute loads. She also mentioned the IEEE P3119 procurement standard for high-risk AI systems as a potential solution for improving procurement processes.


Kaspersen challenged the concept that AI systems can embody “commander’s intent,” describing it as a deeply human concept involving purpose, risk tolerance, values, and trust designed to guide action under uncertainty.


## Global South Perspectives


Olga Cavalli, Dean of the Defense University in Argentina, brought Global South perspectives to the discussion, emphasizing how autonomous weapons could disproportionately affect developing nations. She highlighted concerns about algorithmic bias and the importance of educational initiatives, describing new cyber defense training programs, including free virtual courses in Spanish for Latin America.


Cavalli supported legally binding international instruments prohibiting autonomous weapons without meaningful human control, arguing that human judgment is irreplaceable in lethal force decisions and that technical programming alone cannot address ethical concerns.


## Chinese Perspectives


Professor Peixi Xu from the Communication University of China challenged binary framings of international relations, stating “Russia is not China, China is China, not Russia.” He revealed that China is ready to accept binding prohibitions if powerful countries like the UK, US, and Russia agree to total bans.


Xu highlighted the adoption of UN General Assembly resolution 78/241 as progress in engaging more actors beyond governmental experts, supporting multi-stakeholder approaches while providing insight into how major powers might approach comprehensive agreements.


## Civil Society Concerns


Gerald Folkvord from Amnesty International Norway, representing the Stop Killer Robots movement, provided a human rights perspective, arguing that machines making autonomous life-and-death decisions contradicts human dignity. He posed the critical question: “Who do you hold responsible for a killer robot killing somebody in contradiction of international law?”


Folkvord thanked Austria for taking discussions out of the “exclusive club” of the Convention on Certain Conventional Weapons, noting that meaningful progress in the Arms Trade Treaty negotiations began when people from affected regions joined the conversation. He also thanked colleagues from the International Secretariat for their work.


## Cybersecurity Governance Lessons


Chris Painter, former cyber ambassador in the US State Department, participated from Washington DC (noting the temperature there) to provide insights from cybersecurity governance experiences. He highlighted enforcement challenges, asking: “How do you have accountability once there’s agreement among countries to any set of norms or treaties if countries don’t actually abide by them?”


His intervention grounded the discussion in practical implementation challenges, reminding participants that creating agreements is only the first step in effective governance.


## Key Areas of Discussion


The session revealed several important themes:


**Human Control and Oversight**: Despite different approaches, speakers across perspectives emphasized the importance of maintaining human oversight in life-and-death decisions, though they disagreed on implementation methods.


**Multi-Stakeholder Engagement**: Multiple speakers agreed that AWS discussions should not remain confined to government and military experts, supporting broader participation including civil society, industry, academia, and affected populations.


**Technical Limitations vs. Capabilities**: Significant disagreement emerged about current AI capabilities, with industry representatives expressing optimism about developing reliable systems while technical experts highlighted fundamental limitations and risks.


**International Cooperation**: Speakers presented different views on whether to frame AWS governance as competitive (arms race) or cooperative (multilateral agreements), with some suggesting that cooperation might be more achievable than competitive framings suggest.


## Unresolved Challenges


The discussion highlighted several ongoing challenges:


– Defining “meaningful human control” in practice across different battlefield scenarios


– Establishing accountability mechanisms when autonomous systems cause harm


– Bridging the gap between vendor claims and actual AI capabilities in military procurement


– Preventing proliferation while allowing legitimate defense applications


– Balancing military necessity with humanitarian imperatives


## Conclusion


The session demonstrated both the complexity of autonomous weapons governance and the value of multi-stakeholder dialogue. While fundamental disagreements remain about AI capabilities, regulatory approaches, and international cooperation strategies, the conversation revealed important shared concerns about human oversight and inclusive governance processes.


Wolfgang Kleinwachter closed by noting the time constraints (Vint Cerf could only stay 30 minutes, and only 5 minutes remained for discussion) and emphasized the importance of continuing these discussions. He referenced the upcoming 21st IGF in 2026 as an opportunity to assess progress on these crucial issues.


The urgency emphasized by multiple speakers suggests that the window for establishing effective governance frameworks may be limited, making continued dialogue across different perspectives essential for addressing what several participants characterized as one of the defining challenges of our time.


Session transcript

Wolfgang Kleinwachter: the representative of the Holy See has identified this issue of autonomous weapon systems as an Internet-related public policy issue. Twenty years ago, when WSIS started, nobody talked about the military domain, but like all technologies in the 2,000-year history of mankind, the Internet and other new achievements in technology have been pulled into the military domain. And by the way, the Internet started 70 or 60 years ago as a project under the U.S. Department of Defense, so it was not so far away from the military domain. But in the last couple of years, we have seen that a new issue is emerging here. We have several negotiations around this. There is a proposal from the Secretary-General of the United Nations to agree on a document in the year 2026. We have a group of governmental experts which is negotiating this. Austria has pushed for discussion in the General Assembly of the United Nations, and so it’s very natural that we should discuss this also in the framework of the Internet Governance Forum. We did it already last year in Riyadh, and this is more or less a continuation, and the aim of this workshop is to do more outreach so that more people are aware of what’s going on in this field. We will not continue the negotiations here at this table, but we want to collect various perspectives. We have an excellent panel which gives you all the various perspectives, so we will hear in a minute Vint Cerf, the father of the Internet and chair of the leadership panel of the IGF, who will give his perspective, and then we will hear from Ambassador Pehringer from the government of Austria about, you know, the Austrian initiative. Then we will have two different perspectives, from the industry and from the technical community. Benjamin from Helsing will give the industry perspective, and Anja, who unfortunately has broken her leg and so is only online with us, will give us the perspective from the IEEE, the technical community. And then we have comments from the Global South with Olga, from China with Professor Peixi Xu, and from the NGO perspective from Amnesty International Norway, it’s Gerald. And we have probably also online Chris Painter, the former cyber ambassador in the US State Department from Washington. So that’s the plan, and now I hope Vint is online. Vint, can you hear us? And we are ready to listen to your short opening statement. Thank you.


Vint Cerf: It’s Vint and unfortunately I’m not able to open my camera unless someone on the control can do that for me. But here we go, maybe that just worked. Maybe not. I can’t enable the camera for some reason. But if you’ll forgive me, I’ll speak anyway because we only have a finite amount of time. First of all, it’s important for all of us to remember that computers have been involved in weapon systems literally from their earliest creation. The ENIAC computer was at the Army Ballistic Research Laboratory in the United States and was in operation around 1945. And its first use was to calculate ballistics tables for weapons, large-scale guns. But I also draw attention to today’s environment where hypersonic weapons are becoming available. Satellites are operating at 17,000 miles an hour. There are complex multi-drone attacks that are threatening. There are fire-and-forget and over-the-horizon weapon systems. There is a dire need for situational awareness in complex environments. We are looking at interest in creating digital emblems, which are the analog of the kinds of red crosses that you see on buildings and vehicles. And finally, we need to remember that autonomous weapons are not just kinetic weapons. Presumably, cyber-targeting is within scope of our discussion. All of this is to say that the topic is timely and important. It’s also to say that the military is faced with a fairly serious problem of responding to high-velocity attack and large-quantity attack, and they need computing to help them. So I expect that part of our discussion will be whether or not the targeting is automatic or is, in fact, under some kind of human control. And this is not a trivial question to answer, especially if the attacks are large in scale and scope. But I think that probably is part of our central debate. How do we recognize the utility of computer-based systems in dealing with conflict if we’re forced into that? And how do we do it in a way that does not get out of control? And so I think the discussion today will almost certainly center on how we maintain some human ability to limit the choice of targeting in an automated system so that it only goes after targets that we believe are legitimate. I don’t think that’s an easy question to answer, but it’s clearly one that this discussion needs to shed light on. So I’ll stop there, but I thank you very much for the opportunity to begin, and apologies that I’m invisible. I will try to rectify that during the course of the call. I can stay on for about another half hour. Thank you.


Wolfgang Kleinwachter: Thank you, Vint, very much. And I think you already put your finger on the crucial point, which is the human control and what is justified and what is not justified. And as we probably will also discuss, a lot of arguments are now that automated weapon systems are also part of deterrence. So that means to save peace, not to promote war. So I think this is an interesting dimension, and let’s wait and see what the various panelists can say. Now I move to Ambassador Pehringer, because Austria started a discussion in the United Nations two years ago, and the Secretary-General, Guterres, presented a report last year which collected statements from a lot of governments and NGOs. Ambassador, so perhaps give us a short overview about what Austria has done and is planning, and what do you expect for the next years?


Stefan Pehringer: Thank you so much, Professor. I’m pleased to participate in the opening of today’s discussion on autonomous weapon systems with some introductory remarks from an Austrian perspective. At the outset, allow me to thank our moderator, Professor Kleinwachter, and our distinguished speakers, both here in the room and, like Vint Cerf, joining us online, for contributing to this timely and important conversation. Ladies and gentlemen, like all transformative technologies, the application of artificial intelligence in the military domain is advancing rapidly. These developments promise to make tasks faster, easier, and more accessible. Yet, they demand robust guardrails and limitations in the civilian as well as in the military sector to ensure that AI is used in a human rights-based, human-centered, ethical, and responsible manner. While the civilian domain is increasingly governed by regulatory frameworks, the military and defense sectors are lagging behind. Austria therefore supports ongoing international efforts to promote responsible military use of AI. These include the REAIM initiative by the Netherlands and South Korea, and the U.S. Political Declaration on Responsible Military Use of AI and Autonomy. Today, we’re going to focus on one of the most critical and sensitive issues in this broader field, one that Austria is particularly engaged on: autonomous weapon systems, systems that can select and apply force to targets without further human intervention. AWS raise fundamental legal and ethical concerns. These include the necessity for meaningful human control to ensure proportionality and distinction, the need for predictability and accountability, the protection of the right to life and other human rights, and the principle of human dignity. There are also serious risks from a security perspective: the risk of proliferation, including to non-state actors, and a destabilizing autonomy arms race. These topics will be explored further by our expert panel. In light of these concerns, Austria has taken a leading role, as mentioned, in advancing international regulation of AWS. In April 2024, Austria hosted the Vienna Conference Humanity at the Crossroads to examine the ethical, legal and security implications of AWS and to build momentum for international regulation. Austria strongly supports the joint call by the UN Secretary-General and the ICRC President to conclude negotiations on a legally binding instrument on AWS by 2026. Over the past decade, valuable discussions have taken place, notably within the group of governmental experts in Geneva and the Human Rights Council, where a growing majority of states agree on the need for international regulation, including prohibitions and restrictions. However, moving from discussion to negotiations on a regulatory instrument remains difficult. Geopolitical tensions, mistrust and the reticence to regulate these fast-paced technologies are slowing progress, even as the window for preventive regulation is closing rapidly. In response, Austria and a cross-regional group of states have so far introduced two resolutions on AWS in the UN General Assembly. The first, in 2023, mandated a UN Secretary-General report, and its 2024 follow-up resolution, supported by 166 member states, established open informal consultations in New York to address the so far underdeveloped legal, technological, security and also ethical aspects of AWS that we want to put a particular focus on today.
These consultations complemented the Geneva-based efforts and proved to be very fruitful and also informative for delegations that have not yet had a chance to participate actively in this debate. Furthermore, the informal consultations included not only states but also stakeholders from industry, academia, civil society and the tech sector. From Austria’s point of view, the global discourse on AWS must extend beyond diplomats and military experts. The implications of AWS affect humanity at large from a moral, ethical and legal point of view as well as from the perspective of sustainable development. These issues concern all regions and all people. We therefore advocate a multi-stakeholder approach. Contributions from science, academia, industry, the tech sector, parliamentarians and civil society are essential to ensure a holistic and inclusive debate. Today’s event builds upon earlier efforts at the Internet Governance Forum in Riyadh, as mentioned, last December, and also at EuroDIG in Strasbourg this May. This event will place a particular focus on ethics, human dignity and decision-making in life-and-death decisions, the risk of dehumanization, the value of empathy and compassion, data bias and machine learning, as well as the issue of accountability, to name just a few relevant aspects in this regard. Panelists will examine ethical perspectives from their various angles. For Austria, ladies and gentlemen, humanity is at the crossroads. We must come together to confront the challenges posed by AWS. This moment may well be the Oppenheimer moment of our generation. Experts from across disciplines are warning of the profound risks and irreversible consequences of an unregulated autonomous weapons arms race. There is urgency to finally move from discussions to negotiations on building rules and limits for AWS. The longer regulatory efforts are delayed, the harder it will be to reverse course once these weapons proliferate globally. Democratic countries in particular must recognize that this delay is against their own long-term interests and the broader interests of humanity. What is needed now is decisive political leadership to shape international rules on AWS. We believe that today’s multi-stakeholder exchange will contribute significantly to this shared goal. And we count on your continued engagement on this issue. I look forward to a rich and constructive discussion. Thank you very much.


Wolfgang Kleinwachter: Thank you, Mr. Ambassador. And when you argue that… Yeah, it’s a good applause. It was a good overview of where we are and what the challenges are. And when you say that we are at the crossroads, then this has to be taken out of the hands of a small group, and we need a broader understanding, more public awareness. And the IGF is an ideal place, because here all stakeholders are represented, so that we get the full picture and the various perspectives. And so I’m very thankful that we also have the industry perspective on this panel. And Benjamin, I hope you are online, and now I invite you to give your perspective. We started a discussion in Strasbourg a couple of weeks ago, and here is now part two. Benjamin, you are very welcome.


Benjamin Tallis: Good afternoon to Oslo from Berlin. Yes, I can hear you well. I hope you can all hear me okay. Yes. Very good. So thank you very much, Professor Kleinwachter, for the invitation. And thanks for the excellent opening remarks from Vint Cerf and from the ambassador that we just heard, which I think clearly set out the terrain in which we’re operating. Very quickly, I represent Helsing, which is Europe’s largest defense AI company and its largest new defense firm. And we were founded specifically to use AI to protect democracies. Reacting to the geopolitical change that we’d seen over the last 10 to 15 years, the founders established Helsing in 2021 specifically to attain tech leadership for democracies so that we can actually get the defense that we need, when we see a situation of democracies increasingly under authoritarian threat and increasingly unable to deal with that threat through advanced military means, which have in the past indeed led to effective deterrence. We were losing our technological edge and we faced a security risk. That’s why Helsing was founded. I think it’s important to emphasize what both the ambassador and Vint Cerf said in different ways, which is that we are in an AI arms race. And it’s very bad to be in an AI arms race, but it would be far worse to lose an AI arms race to authoritarian states who do not share our values and wish to actively shape the world in their interest, according to their very different values rather than ours. But of course, we have to do this in a way that is compatible with our values and actually strengthens our values rather than undermines them. And that’s why at Helsing we actually see the ethical component of autonomous weapon systems as a competitive advantage, which is why we invest so much in that. We think we are industry leaders in this. Precisely for the reasons that were mentioned by the ambassador: the effects of autonomous weapon systems have to be foreseeable. The weapon systems have to be reliable. They have to have traceable effects and ultimately have to be controllable. And this is what we invest in as part of the new generation of explicable AI that can give account for why it has done what it has done and is much easier to keep within the bounds that we actually set for it. And keeping it in the bounds that are set for autonomy is actually nothing new in military terms. This is in fact not in any way a revolution, but the continued evolution of military command and control, which has always been based on the principle of the delegation of bounded autonomy. Sometimes delegation to a subordinate officer or subordinate soldier, sometimes delegation to a particular weapon system. And we know that, as was mentioned by Vint Cerf, computers have been involved in this since their invention. But also, we’ve been dealing with autonomy in weapon systems for an awful long time. As soon as you go beyond visual range combat, be that in artillery or be that in using missiles, to a certain extent, you delegate the authority and the responsibility for the effect to that system. Of course, it’s launched by human control. It is triggered by human control. But also, we can see with the quasi-autonomous systems of the past, be that smart anti-tank mines such as the Panzerabwehrrichtmine of the 1980s, or even pressure mines, such as the overpressure influence mines, as they were called, in the naval domain, there has been a willingness to delegate to systems to actually conduct these effects.
Now, of course, that’s precisely why the principles that were referred to before, of discrimination, proportionality, and respect for the right to life, were introduced as ways of understanding how we have to regulate these systems in order that they have that foreseeable effect, which is reliable and also has the effect that we want it to actually have, rather than indiscriminately targeting civilians, for example. Now, I would put it to you that actually with the maturation of the revolution in military affairs, what we are seeing is the ability to conduct sensor and data fusion. And so to use targeting that is triggered not only by one set of sensors, or even by two sets of sensors, as has been the case in the past with some very basic systems, but by a profusion of different sensors that can actually greatly enhance the precision of the weapons that we are using and the weapons that we at Helsing in particular try to develop, and which will give our militaries an advantage on the battlefield. Now that, to me, raises the prospect of a less problematic rather than more problematic weapons system. Now, we can see also that this is very important in the context of using the advantages in technology to leverage another of democracy’s key competitive advantages, which is our willingness to undertake delegated decision making, which is something that our authoritarian rivals consistently struggle with, be that China or be that Russia. Now, why do I say this? Because the reconnaissance strike complex, which is at the heart of the revolution in military affairs, which loosely means linking up a huge variety of sensors with a much more diverse and broad proliferation of shooters, as they’re known, or effectors such as strike drones, only works to its full effect if you allow delegated decision making. Now, in response to the point that was made earlier about the saturation of the battle space, the complexification of the battle space, with a huge proliferation of a variety of different threats, but also a variety of different potential targets, then clearly there’s a desire among our militaries to engage in semi-autonomous or even autonomous targeting within certain battle situations. However, I don’t think that poses the kind of risks that we often talk about in this regard, because actually that would be engaged when there is a clear attack, not necessarily one involving a mixture of civilian actors plus military actors, but much more likely one undertaken by a military when there are only military actors or military intelligent machines involved. And we can see that actually panning out in Ukraine, where this is at its highest level of development so far. With the massive distance that is evolving between the front lines, that distance is because of the inability to maneuver. And within that space, you do not get free civilian movement. It is an area which is basically military only, and which is not available even very much for military movement. So what is going through those battle spaces is where these semi-autonomous targeting systems, which are still in their infancy and are not something we actually particularly engage in, would be tested. I don’t think it raises quite the same questions.
And it’s important to always draw this back to the concrete: what are the concrete examples, what are the concrete realities of battle that we’d be dealing with here, rather than focusing on abstractions that might actually take us further away from being able to defend democracies, from being able to use innovation as a form of effective deterrence, as Professor Kleinwachter said, and which would put us in a negative position in relation to this AI arms race. Now, last point to finish up. It was mentioned this might be an Oppenheimer moment. I agree fully, and the attendees at the conference or participants might be interested to read The Economist’s website today, where my boss, Gundbert Scherf, makes a call for a Manhattan project for AI, because this is the scale of challenge that we actually face. If we lose this race, we are in deep, deep trouble. So we have to win it, but at the same time, we have to do it in the ways that I’ve outlined that actually strengthen rather than undermine our values, and I’ll leave my opening contribution there. Thank you very much.


Wolfgang Kleinwachter: Thank you very much, and I hope Anja Kaspersen is now online; she will give us another perspective, from the technical community and civil society. Anja, are you there?


Anja Kaspersen: I am indeed.


Wolfgang Kleinwachter: Welcome.


Anja Kaspersen: Can you hear me okay?


Wolfgang Kleinwachter: Yes. Yeah, we can hear you.


Anja Kaspersen: Fantastic. You do it. You do it.


Wolfgang Kleinwachter: My best wishes to you. Thank you.


Anja Kaspersen: Yeah, I’m so sorry for not being able to join in person as originally planned. I had an accident and surgery, so I apologize to everyone. Benjamin and I have been on this panel before in Strasbourg, so I see that you’d like to team us up together. I will try to refrain from commenting on what Benjamin just shared with us and leave that for the Q&A afterwards. So first of all, thank you Austria, and Wolfgang in particular, for the opportunity to again contribute to this vital discussion. And I speak today in my capacity as a representative of the IEEE-SA, although my comments will also carry some personal reflections; I will make that clear. And just for those who are not familiar with us, which I assume includes some in the room: we are the world’s largest independent technical professional organization, spanning over 190 countries and bringing together nearly half a million engineers and scientists across all domains and disciplines in the technological space. And so my remarks do not represent any political position but rather reflect a long-standing engagement from our side that dates back to the early days of the internet and autonomous robotics. Vint Cerf in particular knows this organization really well; he is a recipient of some of our highly regarded awards over the course of many years. But our engagement has been very strong for many years on issues of technical governance, in terms of what is happening in the multilateral space, of course, and around the institutional design of these technical autonomous systems. As for my role, some may have seen the talk in Strasbourg, so I will try to bring something new to this conversation. But I’ve been involved in conversations around military applications and uses of AI for quite some time, ranging over a couple of decades now, including overseeing some of these processes in Geneva under the Convention on Certain Conventional Weapons in the early days and into the more mature stages of where we are now. What I want to offer here is not a summary of technical challenges per se, many of which are now widely acknowledged, but rather a framing set of observations, based on, I would say, decades of international and cross-sector work, of what is structurally at stake. These are not political reflections; they are institutional and infrastructural, and the claims I caution against are not just hypothetical anymore. They are increasingly being made, and I believe they demand rigorous scrutiny. First, we must stop treating AI as a bounded technological tool. AI is not a weapon system in the traditional sense, as also Vint Cerf pointed out. It is a social-technical methodology, an approach, a system of methods that reorganizes how war is imagined, operationalized and bureaucratized. It shifts the burden of decision-making away from judgment and accountability and towards inference, correlation and automation. And in doing so, it reconfigures the infrastructure of responsibility itself. A core example of this is the growing assertion that AI can not only support, but embody commander’s intent. As I’ve written in quite a few places, commander’s intent is not a checklist or an input. It is a deeply human and ethical concept, an articulation of purpose, risk tolerance, values and trust designed to guide action under uncertainty. In human-to-human operations, it is already complex. In human-machine interaction, it becomes nearly impossible.
And there are many brilliant scholars, military strategists and others who have written extensively about this. I’m happy to share some of their work if that’s of interest for those who are listening to this. Systems that simulate coherence without understanding are being asked to infer intent, respond to dynamic environments and remain predictable, without the context, reasoning or values this requires. And these contexts, reasoning and values, you know, are highly fluid in battlefield settings as well. Special forces are trained precisely to override instinct, interpret ambiguity and exercise what we might call judgment. These are tactical and moral faculties that current machine learning systems find hard to replicate. Missy Cummings, a leading system safety expert, has cautioned many times that such systems often behave with high confidence in environments they do not comprehend. Their outputs conceal critical reasoning gaps, projecting fluency without understanding. Recent studies from Salesforce, Apple, IBM and Shumailov have shown that even so-called large reasoning models collapse under pressure. These are studies that have come out just in recent weeks. They generate confident, seemingly aligned outputs that fall apart under compound reasoning or multi-step logic. One scholar, Eryk Salvaggio, refers to this as system collapse, a failure mode that emerges precisely when we most need systems to be reliable, interpretable, and contestable. And yet, as Meredith Whittaker has recently cautioned, there is increasing pressure to build agentic AI systems that operate with enough access and autonomy to cross the blood-brain barrier, this is her phrase, between localized systems and operating architectures. In privacy terms, this is already deeply concerning; in military contexts, it poses risks not just to security, but to the institutional legitimacy of using these systems. Once a system crosses that threshold, it begins to alter how decisions are made and who or what is understood to be in command. Benjamin also referred to newspaper articles; there was a recent article in the Financial Times that highlights how claims about AI’s military promise are being driven as much by venture capital as by verified capability. And there are many differing views on this, which I should recognize immediately; it’s important to acknowledge that we all come from different viewpoints. But I think one shared concern, which we also discussed in Strasbourg, is the concern over the narrative power of such claims. And again, as Vint Cerf noted in his remarks, military innovation and technology have always been entwined, but AI marks a bit of a step change, in my view. It does not simply extend human capacity; it begins to reframe the very nature of operational judgment. I’m convinced that this is different. Which brings me to the issue of procurement. Most institutions, including military ones, do not build these systems. They procure them. Increasingly, the systems are modular, pre-trained, and abstracted from the operational realities they’re meant to operate within. They come wrapped in marketing language: responsible AI, trustworthy autonomy, ethical automation. These terms suggest coherence and controllability, but often obscure the reality that such systems are trained on proxy data, generalize poorly, and fail silently. The failures that matter will not be system crashes. They will be subtle misalignments between logic and lived operational context.
This is why IEEE developed P3119, a cross-sector procurement standard applicable to any high-risk AI system, including those in defense. It is designed to help institutions interrogate vendor claims, clarify assumptions, and surface hidden risks before integration. It includes guidance not only for engineers, but also for policy makers, independent legal experts, institutional leaders, and civil society engaged in this space and trying to hold companies and government leaders accountable, because governance begins at the level of specification. Equally critical is the question of infrastructure. Many military systems rely on legacy architectures not designed for high-intensity compute loads. This introduces vulnerabilities, interoperability challenges, and strategic blind spots. Meanwhile, large-scale AI systems remain highly energy-intensive. And this is not just a question of environmental impact. It is a matter of operational security and resilience. Any AI governance framework that overlooks the role of energy, and of what type of energy and what type of materials are being used, of infrastructure, or of global supply chain fragility is not merely incomplete, it is strategically naive. And I will come to my final comment now. In this context, it is unhelpful and irresponsible to frame AI governance as a brake on innovation. This framing fuels premature adoption, suppresses caution, and elevates vendor narratives over institutional responsibility. Responsible governance is not a brake on innovation, it is its condition. And I believe we must stop projecting responsibility onto the tool. What matters is responsible institutional decision-making before design, during integration, and long after deployment. Without this clarity, accountability becomes hard to trace, and governance risks become symbolic rather than substantive. I will close with a reflection from my late mentor, and this is something Benjamin and I have in common: we had the privilege of studying with the same mentor, Christopher Coker, who warned against the illusion that automation could sanitize war. Automation may abstract violence, distance responsibility, obscure cause and effect, but it cannot, under any circumstances, make war more humane, nor should it. That remains an ethical question, and not one that any machine should answer. Thank you so much again, Wolfgang, for this opportunity to share some observations. And I know there are other people on the panel who are going to disagree with me, so we are looking at an interesting debate ahead.


Wolfgang Kleinwachter: Thank you, Anja. And you have seen it’s very complex. It’s complicated, but it’s good to have different perspectives on the table. That’s why we are here, to get the full picture, and I now invite the commentators here at the table. I would start with Olga from the Global South, so that means you have now the broad spectrum. What are your comments? How do you see it from Argentina? Olga is the Dean of the Defense University in Argentina, from the Ministry of Defense in Buenos Aires.


Olga Cavalli: Thank you very much, Wolfgang, for inviting me again to this very interesting conversation. We had it last year in Riyadh, right? And I’ve been following it. And I would like to bring two perspectives. First, from the academic side, which is what I’m doing now. As good news, we have started new trainings on cyber defense. For the Spanish-speaking audience and participants, we have a new degree program on cyber defense, which is free, virtual and in Spanish. So anyone who is interested is able to apply for a place in it. We had a very good response from the Latin American community so far. And we are still developing the curricula for the next years. We started this April with this new program. And of course, we have a focus on autonomous weapons. And I would like to bring this perspective and then some comments about the Argentina negotiations, in which I personally am not involved, but about which I’m informed. So these focus areas are included in the curriculum of the program. First, ethics and autonomous weapons, as has already been mentioned: the ethical implications of autonomous weapon systems, particularly focusing on the delegation of life-and-death decisions to a machine, and analyzing the consequences of reduced human oversight in military contexts. This has to be evaluated from an academic perspective. Then, human control and responsibility: evaluating the necessity of meaningful human control over autonomous weapon systems. It should be considered that technical programming may not be enough to address ethical concerns. As has been mentioned, these systems depend on energy and on legacy machines and equipment. The important issue is that human judgment is irreplaceable; it always has to be present in any training and in any decision involving lethal force. Then bias and discrimination in algorithmic decision-making: the inclusion of artificial intelligence, as one of the speakers mentioned, is a reality. It’s challenging, but we have to face it, and the best we can do is understand it. There is the risk of algorithmic bias in autonomous systems; such biases can disproportionately harm marginalized groups and complicate the distinction between civilians and combatants. Then there is the issue of fairness, accountability and justice in artificial intelligence; this is of very high importance and was already mentioned by other panelists. On human rights and humanitarian law, there is the intersection of autonomous weapon systems with human rights and international humanitarian law. These weapons may exacerbate existing vulnerabilities, especially in regions with high inequality. I mean, Latin America is a fantastic, beautiful region with a lot of diversity in nature and people, but there are also high inequalities, sometimes fragile institutions, and systemic violence in some places, so that has to be considered. And policy and legal frameworks: the academic focus must include the need for robust legal and policy responses, and the need for legally binding international instruments to prohibit autonomous weapon systems operating without meaningful human control. We are including all these issues in all the programs. So, summarizing: moral implications of autonomous weapons, necessity of human oversight in lethal decisions, risk of bias and discrimination in artificial intelligence driven systems, impact on rights and international legal obligations, and the need for binding international instruments and regulations.
And about Argentina, as I said, I’m not involved in those negotiations, but Argentina has been actively engaged in international discussions and negotiations regarding autonomous weapon systems. It is a vocal proponent of robust international regulation and oversight of autonomous weapon systems, advocating for legally binding rules that ensure human control, compliance with international law and the protection of fundamental rights and security, and, in collaboration with other Latin American and international partners, it has submitted draft protocols calling for prohibitions and regulations on autonomous weapon systems. And I will stop here, and maybe we can have an interaction of ideas after that. Thank you.


Wolfgang Kleinwachter: Yeah, thank you. And I hope you can also prepare some questions. I hope some time is left, but we still have two commentators at the table. Professor Peixi Xu is from the Communication University of China in Beijing. And he can give us the Chinese perspective.


Peixi Xu: Well, firstly, I would like to comment on what the ambassador said about the resolutions. I think you’re referring to resolution 78/241 on lethal autonomous weapon systems. I would compare the adoption of such a resolution to a moment that happened in the debate about cyber norms in 2018. There were two parallel processes in the cyber norms debate. One was the UNGGE, among the governmental experts. The other was the OEWG, the Open-Ended Working Group, which allowed a lot of other actors to be involved in the debate on cyber norms. So I would say that the adoption of such a resolution is a kind of 2018 moment: now we see more actors can be engaged in the debate on AWS. So it is a step forward from that kind of perspective. I think 152 countries voted in favor of the resolution and four voted against. Russia and India voted against, and some abstained. So the dispute here, as I observe it, is that firstly some countries would like to keep, for example, the CCW platform as the single platform to talk about AWS, or LAWS to be exact. And they are arguing, for example, for a kind of high-quality consensus report. That episode also happened, by the way, in the cyber norms debate. In that debate, I think the United States stuck to the position that there should be fewer members in the cyber norms debate, so that there could be a high-quality consensus report instead of a lot of other abstractions. So that moment is repeated here. And there is also a kind of dispute, in the Chinese perspective, about acceptable LAWS and unacceptable LAWS. I would say that there is a kind of resonance among countries. For example, Norway, where we are now, and also Finland, Sweden and France have argued in the GGE sessions that there should be a division: for example, if the weapons cannot comply with international humanitarian law, they should be prohibited, and if they can comply with it, then they are allowed. So there is a kind of resonance in that aspect to solve the issue. There is also the dispute about the definition, whether the definition of LAWS or AWS is clear or not. I think the disputes between the state actors concentrate on these different aspects in terms of the adoption of such a resolution. However, in the long run, I would say it is politically correct, by the way, to engage with more actors, particularly the civil society actors. So that’s a response to what you have said over there. And then I slightly disagree with what Benjamin from the industry was talking about. Particularly, I don’t agree with putting countries which are essentially different from each other into one bloc. That said, Russia is not China. China is China, not Russia. China is not the United States. The United States is not China. So it is not reasonable to put countries together. And historically speaking, by the way, Russia is similar to Europe in the Chinese history books. So I slightly disagree with this kind of perspective of putting countries together. And coming back, by the way, to this talk about LAWS and the Chinese perspective: when the Geneva NGOs visited China, we had quite some conversations.
So the Chinese official perspective, as I understand it, is that if, for example, the UK, the United States and Russia accept a kind of prohibition of, for example, LAWS, a total ban of everything, if the powerful countries accept the civil society groups’ ideas, like the Stop Killer Robots kind of movement, then China is very much ready to be on board, to accept such binding terms, to prohibit certain weapons. So that is a kind of response to Benjamin on the categorization and abstraction of good guys and bad guys. And I think I can treat that as a kind of response to the speakers.


Wolfgang Kleinwachter: Thank you very much, Peixi Xu. And I also welcome Chris Painter, who is now on board. I hope you have finished your other meeting. But before I invite you, Chris, let’s hear what Gerald from Amnesty International Norway has to say, because he is also involved in the civil society movement Stop Killer Robots. So Gerald, what do you think about all the debates and the perspectives we have had so far here at the table?


Gerald Folkvord: Yes, thank you very much. For Amnesty International, as a human rights organization, the digital development in the world is a central issue. We work on many different areas. And I have to state right from the beginning that Amnesty International has been using artificial intelligence in our own work for a long time. We use other digital tools, and so we are far from trying to invent a world where digital development doesn’t exist. That’s very important, but we also see the enormous impact that digital developments have on people’s human rights in many areas. And we try to deal with them, and we are very concerned about areas where people give away control. So that’s also why Amnesty is so much involved in this issue of autonomous weapon systems. And right at the beginning, I also want to say thank you, on behalf of my colleagues from the International Secretariat, our actual experts, who have asked me not to forget to say thank you to Austria, both for arranging this meeting and, not least, for taking the initiative to take this discussion out of the exclusive club of the CCW. Because this is an issue that affects everybody. And we know, when I worked more than a decade ago on the Arms Trade Treaty, we saw the dialogue change when the people in the Global South, who are the ones most affected by everything that happens in armament, came to the table and had their say; then things started to change. And it’s very important that those people who will definitely be most killed, discriminated against and oppressed by the use of killer robots have a say in this conversation. I agreed with some things Benjamin said and did not agree with other things he said, but he is right that it is very important not to look at this as an abstract issue. I know he didn’t mean it that way, I am interpreting this, but for me abstraction means looking at it superficially and forgetting the people who are actually affected. And our job as civil society is to talk about the people who will actually be affected by what is happening. And some of the things I meant to say, Olga has already said brilliantly, so I do not have to repeat them. But from the human rights perspective, it is very clear: human rights are based on human dignity, and the very idea that machines make autonomous life and death decisions about humans is a contradiction of human dignity. So this is something inherently dehumanizing, and we cannot accept that. And the whole concept of human rights collapses once we say we outsource this to automatic systems, not only because it is undignified to let a computer decide who is allowed to live and who is allowed to die; it also undermines the whole system of protection of human rights and international humanitarian law, because legal agency disappears. Who do you hold responsible for a killer robot killing somebody in contradiction of international law? And one of the things Benjamin said that I disagree with was that we have already for a long time had systems that delegate authority and responsibility to weapon systems. I do not agree, because the responsibility always, as of today, lies with the humans using the systems. And there always has to be in place a clear international legal system that secures accountability for those who contribute to violating international law.
Once that disappears, once we say let's leave it to the machines because machines are smarter than human beings and will make fewer mistakes, then, not least, warfare by machines makes violations more invisible. It becomes even more difficult for the victims to actually bring those who violated their rights to justice. And I have the suspicion that this is one of the very attractive things about autonomous weapon systems: warfare becomes invisible. The human rights violations, the atrocities, the war crimes are no longer visible. Our guys no longer die on the battlefield. Everything happens by machines, and so everything is allowed. That is where we are going if we do not use this moment to regulate this very clearly, to state very clearly what is allowed and what is not, and, not least, to give the industry very clear guidelines on what we want them to develop and which developments they should stay away from. Thank you.


Wolfgang Kleinwachter: Thank you, Gerald. I'm afraid we cannot settle all these problems here, and I'm also afraid that we will have no time left for interactive discussion, as we have just five minutes to go. Chris has listened to the last two or three statements, and he was involved in the debate in Riyadh and also in Strasbourg, so he knows more or less the constellation. Chris, you have the final word; you can reflect on what you have heard, and then, unfortunately, we have to conclude. But I'm sure this discussion will continue at the next IGF or in the WSIS Plus 20 review, because so many things are on the table and have to be discussed. Chris, thank you.


Chris Painter: Thank you, and it's great to join you. I wish I were there; it's about a billion degrees in DC right now, so it would be much nicer to be in Norway, not just for the company but for the temperature. Look, my expertise is not in autonomous weapons but in cyber, and I wanted to bring, as I have in previous discussions on this, a little reflection on what has happened in cyber, where there has been some discussion of this in the UN. I think one of the issues is stakeholder involvement, and I completely agree about the seriousness of this issue. There is now a dispute about what those cyber norms mean and how international law applies, and the biggest part, and this applies to this debate too, is how you have accountability once there is agreement among countries on any set of things, whether it's a binding treaty or just norms. It doesn't matter what countries sign up to if they don't actually abide by it and there is no way to have some sort of responsibility, some sort of accountability, and I think that's true here. So ultimately I think this is an important area for the UN to look at, and an important area to look at outside the UN as well. On that maybe negative note, but also on the positive note: it's important that we're talking about this, continuing to talk about this and giving attention to this issue. I would agree that autonomous weapons issues have been around for a very long time, but they have greater urgency now. So with that I'll close my remarks, Wolfgang, and let you sum up. Again, good to hear the discussion, and it's a very important discussion.


Wolfgang Kleinwachter: Thank you very much. And I have to apologise to the audience and to the other panellists that no time is left to react. But this is an invitation to continue the discussion online and offline; I think we will have opportunities in the future. My understanding is that the mandate of the IGF will be extended, so I look forward to another session in this format at the 21st IGF in the year 2026, even if we have no clue yet where it will take place. I hope you could take some food for thought from this arrangement, with its various perspectives from the Global South and the Global North, from different stakeholders, industry and civil society. Thank you very much. We have three seconds to go. And now it's over. Thank you.



Vint Cerf

Speech speed: 121 words per minute
Speech length: 447 words
Speech time: 220 seconds

Computers have been involved in weapon systems since their earliest creation, with ENIAC calculating ballistics tables in 1945

Explanation

Cerf argues that the integration of computers into military systems is not new, dating back to the very beginning of computer technology. He emphasizes that ENIAC, one of the first computers, was specifically used for military purposes at the Army Ballistics Research Laboratory.


Evidence

ENIAC computer was at the Army Ballistics Research Laboratory in the United States and was in operation around 1945, with its first use being to calculate ballistics tables for large-scale guns


Major discussion point

Historical context of computer-military integration


Topics

Cyberconflict and warfare


Agreed with

– Wolfgang Kleinwachter
– Benjamin Tallis

Agreed on

Historical integration of computers in military systems


Modern battlefield complexity requires computing assistance due to hypersonic weapons, satellite speeds, and multi-drone attacks

Explanation

Cerf contends that today’s military environment has become so complex and fast-paced that human operators alone cannot effectively respond to threats. The speed and scale of modern warfare necessitate computer assistance for situational awareness and response.


Evidence

Hypersonic weapons, satellites operating at 17,000 miles per hour, complex multi-drone attacks, fire-and-forget and over-the-horizon weapon systems, and the need for situational awareness in complex environments


Major discussion point

Technological necessity in modern warfare


Topics

Cyberconflict and warfare


The central debate centers on maintaining human ability to limit targeting choices in automated systems

Explanation

Cerf identifies the core issue as finding ways to preserve human control over critical targeting decisions while still utilizing the benefits of automated systems. He acknowledges this is particularly challenging when dealing with large-scale attacks that require rapid response.


Evidence

The military faces serious problems responding to high-velocity and large-quantity attacks, and the question of whether targeting is automatic or under human control is not trivial, especially with large-scale attacks


Major discussion point

Human control in autonomous systems


Topics

Cyberconflict and warfare | Human rights principles


Agreed with

– Stefan Pehringer
– Olga Cavalli
– Gerald Folkvord

Agreed on

Importance of human control and judgment in lethal decisions



Wolfgang Kleinwachter

Speech speed: 147 words per minute
Speech length: 1247 words
Speech time: 508 seconds

The Internet originated as a U.S. Department of Defense project, showing historical military-technology connections

Explanation

Kleinwachter points out that the Internet itself has military origins, having started as a Department of Defense project decades ago. This historical context demonstrates that the intersection of military applications and internet technology is not a recent development.


Evidence

The Internet started 70 years or 60 years ago as a project under the U.S. Department of Defense


Major discussion point

Historical military-technology connections


Topics

Cyberconflict and warfare | Critical internet resources


Agreed with

– Vint Cerf
– Benjamin Tallis

Agreed on

Historical integration of computers in military systems


The discussion needs broader public awareness beyond small expert groups, making IGF an ideal platform

Explanation

Kleinwachter argues that autonomous weapons discussions should not remain confined to a small circle of experts and diplomats. He advocates for broader public engagement and sees the Internet Governance Forum as an appropriate venue for multi-stakeholder dialogue on these issues.


Evidence

The IGF is an ideal place because here all stakeholders are represented and so that we get the full picture and the various perspectives


Major discussion point

Multi-stakeholder engagement


Topics

Interdisciplinary approaches


Agreed with

– Stefan Pehringer
– Gerald Folkvord
– Peixi Xu

Agreed on

Need for multi-stakeholder engagement beyond expert circles



Anja Kaspersen

Speech speed: 152 words per minute
Speech length: 1507 words
Speech time: 592 seconds

AI is not just a bounded technological tool but a social-technical methodology that reorganizes how war is imagined and operationalized

Explanation

Kaspersen argues that AI represents a fundamental shift in how warfare is conceptualized and conducted, rather than simply being another weapon system. She contends that AI changes the entire infrastructure of military decision-making and responsibility.


Evidence

AI shifts the burden of decision-making away from judgment and accountability and towards fairness, correlation and automation, reconfiguring the infrastructure of responsibility itself


Major discussion point

AI as transformative methodology


Topics

Cyberconflict and warfare | Human rights principles


Disagreed with

– Benjamin Tallis

Disagreed on

Fundamental nature of AI in warfare – evolutionary vs revolutionary change


Commander’s intent is a deeply human ethical concept that cannot be embodied by AI systems

Explanation

Kaspersen challenges the growing assertion that AI can embody commander’s intent, arguing that this concept requires human judgment, values, and contextual understanding that machines cannot replicate. She emphasizes that commander’s intent involves complex ethical and tactical considerations that are inherently human.


Evidence

Commander’s intent is an articulation of purpose, risk tolerance, values and trust designed to guide action under uncertainty; systems are asked to infer intent, respond to dynamic environments and remain predictable without the context, reasoning or values this requires


Major discussion point

Limitations of AI in military decision-making


Topics

Cyberconflict and warfare | Human rights principles


AI systems often behave with high confidence in environments they don’t comprehend, concealing critical reasoning gaps

Explanation

Kaspersen warns about the dangerous tendency of AI systems to project confidence and competence while actually lacking true understanding of complex situations. She cites recent research showing that even advanced AI models fail under pressure when complex reasoning is required.


Evidence

Recent studies from Salesforce, Apple, IBM and Shumilov show that large reasoning models collapse under pressure, generating confident outputs that fall apart under compound reasoning or multi-step logic


Major discussion point

AI system reliability and transparency


Topics

Cyberconflict and warfare | Human rights principles


Most institutions procure rather than build these systems, often receiving pre-trained models abstracted from operational realities

Explanation

Kaspersen highlights the procurement challenge where military institutions typically buy AI systems rather than developing them internally. These commercial systems often come with marketing claims but may not be suited for specific operational contexts and can fail in unpredictable ways.


Evidence

Systems are modular, pre-trained, and abstracted from operational realities, wrapped in marketing language like ‘responsible AI’ and ‘trustworthy autonomy’, but they are trained on proxy data and fail silently


Major discussion point

Procurement and system integration challenges


Topics

Cyberconflict and warfare | Consumer protection


Legacy military architectures weren’t designed for high-intensity compute loads, introducing vulnerabilities

Explanation

Kaspersen points out that existing military infrastructure was not built to handle the computational demands of modern AI systems. This creates security vulnerabilities, compatibility issues, and strategic blind spots that could compromise military effectiveness.


Evidence

Many military systems rely on legacy architectures not designed for high-intensity compute loads, introducing vulnerabilities, interoperability challenges, and strategic blind spots; large-scale AI systems remain highly energy intensive


Major discussion point

Infrastructure and security vulnerabilities


Topics

Critical infrastructure | Network security



Stefan Pehringer

Speech speed: 117 words per minute
Speech length: 887 words
Speech time: 451 seconds

Meaningful human control is necessary to ensure proportionality, distinction, and accountability in weapon systems

Explanation

Pehringer argues that autonomous weapons systems raise fundamental legal and ethical concerns that require human oversight to ensure compliance with international humanitarian law. He emphasizes that human control is essential for maintaining the principles of proportionality and distinction in warfare.


Evidence

AWS raise concerns including the necessity for meaningful human control to ensure proportionality and distinction, the need for predictability and accountability, protection of rights to life and human rights, and the principle of human dignity


Major discussion point

Legal and ethical requirements for human control


Topics

Human rights principles | Cyberconflict and warfare


Agreed with

– Vint Cerf
– Olga Cavalli
– Gerald Folkvord

Agreed on

Importance of human control and judgment in lethal decisions


Austria supports concluding negotiations on a legally binding instrument by 2026, as proposed by the UN Secretary General

Explanation

Pehringer outlines Austria’s leadership role in pushing for international regulation of autonomous weapons systems. He describes Austria’s efforts to build momentum for binding international agreements and the timeline proposed by UN leadership.


Evidence

Austria strongly supports the joint call by the UN Secretary General and ICRC President to conclude negotiations on a legally binding instrument on AWS by 2026; Austria introduced two resolutions in the UN General Assembly with the 2024 follow-up supported by 166 member states


Major discussion point

International regulatory framework development


Topics

Human rights principles | Jurisdiction


Multi-stakeholder approaches are essential, including contributions from science, academia, industry, and civil society

Explanation

Pehringer advocates for broadening the discussion beyond government and military experts to include diverse perspectives from various sectors of society. He argues that the implications of autonomous weapons affect all of humanity and therefore require inclusive dialogue.


Evidence

The global discourse on AWS must extend beyond diplomats and military experts; contributions from science, academia, industry, tech sector, parliamentarians and civil society are essential to ensure a holistic and inclusive debate


Major discussion point

Inclusive governance approach


Topics

Interdisciplinary approaches | Human rights principles


Agreed with

– Wolfgang Kleinwachter
– Gerald Folkvord
– Peixi Xu

Agreed on

Need for multi-stakeholder engagement beyond expert circles


Democratic countries’ delay in regulation is against their long-term interests and humanity’s broader interests

Explanation

Pehringer warns that postponing regulatory action on autonomous weapons will make it increasingly difficult to control their proliferation and use. He argues that democratic nations in particular should recognize the urgency of establishing international rules before these weapons become widespread.


Evidence

The longer regulatory efforts are delayed, the harder it will be to reverse course once these weapons proliferate globally; there is urgency to move from discussions to negotiations on building rules and limits for AWS


Major discussion point

Urgency of regulatory action


Topics

Human rights principles | Cyberconflict and warfare



Benjamin Tallis

Speech speed: 171 words per minute
Speech length: 1375 words
Speech time: 481 seconds

We are in an AI arms race where losing to authoritarian states would be worse than participating

Explanation

Tallis argues that while being in an AI arms race is undesirable, allowing authoritarian states to gain technological superiority would be far more dangerous for democratic values and global security. He contends that democracies must maintain technological leadership to preserve their values and security.


Evidence

Democracies are increasingly under authoritarian threat and must be able to deal with that threat through advanced military means; losing the technological edge creates a security risk; it would be far worse to lose an AI arms race to authoritarian states who do not share our values


Major discussion point

Strategic competition and democratic security


Topics

Cyberconflict and warfare


Disagreed with

– Anja Kaspersen

Disagreed on

Strategic framing of AI development as competitive race vs governance priority


Ethical components of autonomous weapon systems represent competitive advantages for democratic nations

Explanation

Tallis contends that investing in ethical AI development is not just morally right but strategically advantageous. He argues that democratic values can be leveraged as a competitive edge in developing more reliable and accountable autonomous systems.


Evidence

At Helsing we see the ethical component of autonomous weapon systems as a competitive advantage; we invest in explicable AI that can give account for why it has done what it has done and is easier to keep within bounds


Major discussion point

Ethics as competitive advantage


Topics

Cyberconflict and warfare | Human rights principles


Autonomous systems should have foreseeable, reliable, traceable, and controllable effects through explicable AI

Explanation

Tallis emphasizes that modern AI development should focus on creating systems that can explain their decision-making processes and remain within defined operational boundaries. He argues this represents an evolution in military technology that maintains human oversight.


Evidence

Autonomous weapon systems’ effects have to be foreseeable, reliable, traceable and controllable; we invest in the new generation of explicable AI that can give account for why it has done what it has done


Major discussion point

Technical requirements for responsible AI


Topics

Cyberconflict and warfare | Human rights principles


Military command has always been based on delegated bounded autonomy, making this an evolution rather than revolution

Explanation

Tallis argues that autonomous weapons represent a continuation of existing military practices rather than a fundamental departure. He contends that military operations have always involved delegating decision-making authority within defined parameters, whether to subordinates or weapon systems.


Evidence

Military command and control has always been based on the principle of delegation of bounded autonomy, sometimes to subordinate officers or soldiers, sometimes to weapon systems; computers have been involved since their invention


Major discussion point

Historical continuity in military autonomy


Topics

Cyberconflict and warfare


Agreed with

– Vint Cerf
– Wolfgang Kleinwachter

Agreed on

Historical integration of computers in military systems


Sensor and data fusion can enhance precision and create less problematic weapon systems

Explanation

Tallis argues that advanced AI systems can actually improve the precision and reduce the problematic aspects of weapons by integrating multiple sensor inputs. He suggests this technological capability can lead to more discriminating and effective military systems.


Evidence

The ability to conduct sensor and data fusion using targeting triggered by a profusion of different sensors can greatly enhance the precision of weapons and give militaries an advantage on the battlefield


Major discussion point

Technology improving weapon precision


Topics

Cyberconflict and warfare



Peixi Xu

Speech speed

121 words per minute

Speech length

726 words

Speech time

359 seconds

The adoption of UN resolution 78/241 represents progress by engaging more actors beyond governmental experts

Explanation

Xu draws parallels between the autonomous weapons debate and previous cyber norms discussions, noting that expanding participation beyond government experts to include more stakeholders represents positive development. He sees this as similar to the 2018 cyber norms debate evolution.


Evidence

Similar to the 2018 cyber norms debate with UNGGE and OEWG processes, the resolution allows more actors to be engaged in the AWS debate; 152 countries voted in favor with only 4 against


Major discussion point

Expanding stakeholder participation


Topics

Interdisciplinary approaches | Jurisdiction


Agreed with

– Wolfgang Kleinwachter
– Stefan Pehringer
– Gerald Folkvord

Agreed on

Need for multi-stakeholder engagement beyond expert circles


Countries should not be categorized into blocs, as Russia is not China and each nation has distinct positions

Explanation

Xu objects to oversimplified categorizations that group different countries together based on assumed shared interests or values. He emphasizes that each nation has its own unique perspective and should be treated as such in international negotiations.


Evidence

Russia is not China, China is China, not Russia; China is not the United States; historically speaking, Russia is treated as similar to Europe in Chinese history books


Major discussion point

National sovereignty in international relations


Topics

Jurisdiction


China is ready to accept binding prohibitions if powerful countries like the UK, US, and Russia agree to total bans

Explanation

Xu presents China’s official position as being willing to support comprehensive bans on autonomous weapons systems, but only if major military powers also commit to such prohibitions. This represents a conditional approach to international regulation.


Evidence

The Chinese official perspective is that if the UK, United States and Russia accept a total ban of everything, then China is very much ready to be on board to accept such binding terms


Major discussion point

Conditional international cooperation


Topics

Cyberconflict and warfare | Jurisdiction



Olga Cavalli

Speech speed: 133 words per minute
Speech length: 655 words
Speech time: 294 seconds

Human judgment is irreplaceable in decisions involving lethal force, and technical programming alone cannot address ethical concerns

Explanation

Cavalli emphasizes that human oversight must always be maintained in military decisions that involve taking lives. She argues that technical solutions cannot substitute for human moral judgment and that programming alone is insufficient to address the ethical complexities involved.


Evidence

Technical programming alone can be insufficient to address ethical concerns; human judgment is irreplaceable in any decision involving lethal force


Major discussion point

Irreplaceable nature of human judgment


Topics

Human rights principles | Cyberconflict and warfare


Agreed with

– Vint Cerf
– Stefan Pehringer
– Gerald Folkvord

Agreed on

Importance of human control and judgment in lethal decisions


Algorithm bias can disproportionately harm marginalized groups and complicate civilian-combatant distinctions

Explanation

Cavalli warns about the risks of algorithmic bias in autonomous weapons systems, particularly how these biases could unfairly target vulnerable populations and make it difficult to distinguish between civilians and combatants. She emphasizes this is especially concerning in regions with high inequality.


Evidence

Algorithm biases can disproportionately harm marginalized groups and complicate the distinction between civilians and combatants; Latin America has high inequalities, fragile institutions and systemic violence in some places


Major discussion point

Algorithmic bias and discrimination


Topics

Human rights principles | Gender rights online


Academic curricula must include ethics, human control, bias analysis, and policy frameworks for autonomous weapons

Explanation

Cavalli describes comprehensive educational initiatives being developed in Latin America to address autonomous weapons issues. She outlines a multi-faceted approach that covers ethical, technical, legal, and policy aspects of autonomous weapons systems.


Evidence

New cyber defense career program that is free, virtual and in Spanish; curricula includes ethics, human control and responsibility, bias and discrimination in algorithms, human rights and humanitarian law, and policy frameworks


Major discussion point

Educational and training initiatives


Topics

Online education | Capacity development


Argentina advocates for legally binding international instruments prohibiting autonomous weapons without meaningful human control

Explanation

Cavalli outlines Argentina’s active engagement in international discussions and its support for robust international regulation. She describes Argentina’s collaboration with other Latin American and international partners in pushing for binding legal frameworks.


Evidence

Argentina has been actively engaged in international discussions, is a vocal proponent of robust international regulation, and has submitted draft protocols calling for prohibitions and regulations on autonomous weapon systems


Major discussion point

Latin American regulatory advocacy


Topics

Human rights principles | Jurisdiction



Gerald Folkvord

Speech speed: 154 words per minute
Speech length: 776 words
Speech time: 301 seconds

The concept of machines making autonomous life-and-death decisions is inherently dehumanizing and causes the collapse of human rights protection systems

Explanation

Folkvord argues that allowing machines to make autonomous life-and-death decisions fundamentally contradicts human dignity and undermines the entire framework of human rights protection. He contends that this represents a fundamental threat to the concept of human rights itself.


Evidence

Human rights are based on human dignity and the idea that machines make autonomous life and death decisions about humans is a contradiction to human dignity; the whole concept of human rights collapses once we outsource this to automatic systems


Major discussion point

Fundamental threat to human rights


Topics

Human rights principles


Agreed with

– Vint Cerf
– Stefan Pehringer
– Olga Cavalli

Agreed on

Importance of human control and judgment in lethal decisions


Legal agency disappears when machines make autonomous life-death decisions, undermining accountability systems

Explanation

Folkvord emphasizes that autonomous weapons systems create an accountability gap where it becomes impossible to hold anyone responsible for violations of international law. He argues that this erosion of legal responsibility undermines the entire system of international humanitarian law.


Evidence

Legal agency disappears – who do you hold responsible for a killer robot killing somebody in contradiction of international law? Responsibility always lies with humans using systems, and there must be clear international legal systems securing accountability


Major discussion point

Accountability and legal responsibility


Topics

Human rights principles | Jurisdiction


Civil society involvement is crucial as those most affected by killer robots should have a voice in the conversation

Explanation

Folkvord argues that the people who will be most impacted by autonomous weapons systems – particularly those in the Global South – must be included in discussions about their regulation. He draws parallels to the landmine treaty process where including affected populations led to meaningful change.


Evidence

Thanks Austria for taking this discussion out of the exclusive club of the CCW; when people from the Global South who are most affected came to the table in the landmine treaty discussions, things started to change


Major discussion point

Inclusive participation in governance


Topics

Human rights principles | Interdisciplinary approaches


Agreed with

– Wolfgang Kleinwachter
– Stefan Pehringer
– Peixi Xu

Agreed on

Need for multi-stakeholder engagement beyond expert circles



Chris Painter

Speech speed: 206 words per minute
Speech length: 288 words
Speech time: 83 seconds

Accountability becomes difficult to trace without clear institutional decision-making processes

Explanation

Painter draws from his experience with cyber norms to highlight the challenge of ensuring accountability in autonomous weapons systems. He emphasizes that even when countries agree on norms or treaties, the real challenge lies in ensuring compliance and holding violators accountable.


Evidence

Experience with cyber norms shows there’s dispute about what norms mean and how international law applies; the biggest part is how to have accountability once there’s agreement – agreements don’t matter if countries don’t abide by them


Major discussion point

Implementation and enforcement challenges


Topics

Cyberconflict and warfare | Jurisdiction


Agreements

Agreement points

Historical integration of computers in military systems

Speakers

– Vint Cerf
– Wolfgang Kleinwachter
– Benjamin Tallis

Arguments

Computers have been involved in weapon systems since their earliest creation, with ENIAC calculating ballistics tables in 1945


The Internet originated as a U.S. Department of Defense project, showing historical military-technology connections


Military command has always been based on delegated bounded autonomy, making this an evolution rather than revolution


Summary

All three speakers acknowledge that the integration of computing technology with military systems is not new, dating back to the earliest computers and including the Internet’s origins in defense projects


Topics

Cyberconflict and warfare


Need for multi-stakeholder engagement beyond expert circles

Speakers

– Wolfgang Kleinwachter
– Stefan Pehringer
– Gerald Folkvord
– Peixi Xu

Arguments

The discussion needs broader public awareness beyond small expert groups, making IGF an ideal platform


Multi-stakeholder approaches are essential, including contributions from science, academia, industry, and civil society


Civil society involvement is crucial as those most affected by killer robots should have a voice in the conversation


The adoption of UN resolution 78/241 represents progress by engaging more actors beyond governmental experts


Summary

Multiple speakers agree that autonomous weapons discussions should not remain confined to government and military experts but should include broader stakeholder participation


Topics

Interdisciplinary approaches | Human rights principles


Importance of human control and judgment in lethal decisions

Speakers

– Vint Cerf
– Stefan Pehringer
– Olga Cavalli
– Gerald Folkvord

Arguments

The central debate centers on maintaining human ability to limit targeting choices in automated systems


Meaningful human control is necessary to ensure proportionality, distinction, and accountability in weapon systems


Human judgment is irreplaceable in decisions involving lethal force, and technical programming alone cannot address ethical concerns


The concept of machines making autonomous life-and-death decisions is inherently dehumanizing and causes the collapse of human rights protection systems


Summary

Speakers across different perspectives agree that human oversight and control must be maintained in systems that make life-and-death decisions


Topics

Human rights principles | Cyberconflict and warfare




Unexpected consensus

Urgency of international regulation

Speakers

– Stefan Pehringer
– Benjamin Tallis
– Gerald Folkvord

Arguments

Democratic countries’ delay in regulation is against their long-term interests and humanity’s broader interests


We are in an AI arms race where losing to authoritarian states would be worse than participating


Civil society involvement is crucial as those most affected by killer robots should have a voice in the conversation


Explanation

Despite representing different perspectives (government, industry, and civil society), these speakers all acknowledge the urgent need for action, though for different reasons – Pehringer for humanitarian concerns, Tallis for strategic competition, and Folkvord for human rights protection


Topics

Human rights principles | Cyberconflict and warfare


Complexity and technical challenges of AI systems

Speakers

– Anja Kaspersen
– Benjamin Tallis

Arguments

AI systems often behave with high confidence in environments they don’t comprehend, concealing critical reasoning gaps


Autonomous systems should have foreseeable, reliable, traceable, and controllable effects through explicable AI


Explanation

Despite Kaspersen’s critical stance and Tallis’s industry perspective, both acknowledge the technical challenges of AI systems and the need for explainable, reliable AI – though they draw different conclusions about feasibility


Topics

Cyberconflict and warfare | Human rights principles


Overall assessment

Summary

The speakers show significant agreement on several foundational issues: the historical precedent of computer-military integration, the need for broader stakeholder engagement, and the importance of maintaining human control in lethal decisions. However, they diverge sharply on whether current technology can adequately address these concerns and whether regulation should focus on prohibition or controlled development.


Consensus level

Moderate consensus on principles but significant disagreement on implementation. The agreement on fundamental principles (human control, stakeholder engagement, urgency) provides a foundation for dialogue, but the stark differences between industry optimism and civil society skepticism about AI capabilities suggest that bridging these gaps will require substantial compromise and continued dialogue.


Differences

Different viewpoints

Fundamental nature of AI in warfare – evolutionary vs revolutionary change

Speakers

– Benjamin Tallis
– Anja Kaspersen

Arguments

Military command and control has always been based on the principle of delegation of bounded autonomy, sometimes to subordinate officers or soldiers, sometimes to weapon systems; computers have been involved since their invention


AI is not just a bounded technological tool but a social-technical methodology that reorganizes how war is imagined and operationalized


Summary

Tallis views autonomous weapons as an evolution of existing military practices of delegated authority, while Kaspersen argues AI represents a fundamental transformation in how warfare is conceptualized and conducted, shifting the entire infrastructure of responsibility.


Topics

Cyberconflict and warfare | Human rights principles


Feasibility and reliability of AI embodying commander’s intent

Speakers

– Benjamin Tallis
– Anja Kaspersen

Arguments

We invest in the new generation of explicable AI that can give account for why it has done what it has done and is easier to keep within bounds


Commander’s intent is an articulation of purpose, risk tolerance, values and trust designed to guide action under uncertainty; systems are asked to infer intent, respond to dynamic environments and remain predictable without the context, reasoning or values this requires


Summary

Tallis believes explicable AI can be developed to operate within defined bounds and provide accountability, while Kaspersen argues that AI systems fundamentally cannot replicate the human judgment, values, and contextual understanding required for commander’s intent.


Topics

Cyberconflict and warfare | Human rights principles


Categorization of nations in geopolitical context

Speakers

– Benjamin Tallis
– Peixi Xu

Arguments

It would be far worse to lose an AI arms race to authoritarian states who do not share our values and wish to actively shape the world in their interest, according to their very different values rather than ours


Russia is not China, China is China, not Russia; China is not the United States; historically speaking, Russia is treated as similar to Europe in Chinese history books


Summary

Tallis groups countries into democratic vs authoritarian blocs for strategic analysis, while Xu objects to such categorizations, arguing each nation has distinct positions and should not be grouped together based on assumed shared interests.


Topics

Cyberconflict and warfare | Jurisdiction


Historical precedent for responsibility delegation in weapon systems

Speakers

– Benjamin Tallis
– Gerald Folkvord

Arguments

Military command and control has always been based on the principle of delegation of bounded autonomy, sometimes to subordinate officers or soldiers, sometimes to weapon systems


Responsibility always lies with humans using systems, and there must be clear international legal systems securing accountability


Summary

Tallis argues that delegating authority to weapon systems has historical precedent in military operations, while Folkvord insists that responsibility has always remained with humans and that autonomous systems represent a fundamental departure from this principle.


Topics

Cyberconflict and warfare | Human rights principles | Jurisdiction


Strategic framing of AI development as competitive race vs governance priority

Speakers

– Benjamin Tallis
– Anja Kaspersen

Arguments

We are in an AI arms race where losing to authoritarian states would be worse than participating


It is unhelpful and irresponsible to frame AI governance as a race; this framing fuels premature adoption, suppresses caution, and elevates vendor narratives over institutional responsibility


Summary

Tallis frames AI development in terms of strategic competition where democracies must maintain technological leadership, while Kaspersen argues that framing AI governance as a race is counterproductive and leads to rushed adoption without proper safeguards.


Topics

Cyberconflict and warfare | Human rights principles


Unexpected differences

Technical feasibility vs fundamental limitations of AI systems

Speakers

– Benjamin Tallis
– Anja Kaspersen

Arguments

We invest in the new generation of explicable AI that can give account for why it has done what it has done and is easier to keep within bounds


Recent studies from Salesforce, Apple, IBM and Shumilov show that large reasoning models collapse under pressure, generating confident outputs that fall apart under compound reasoning or multi-step logic


Explanation

This disagreement is unexpected because both speakers come from technical backgrounds and have studied under the same mentor, yet they have fundamentally different assessments of current AI capabilities. Tallis represents industry optimism about AI development, while Kaspersen cites recent research showing AI system failures under pressure.


Topics

Cyberconflict and warfare | Human rights principles


China’s conditional cooperation stance

Speakers

– Peixi Xu
– Benjamin Tallis

Arguments

The Chinese official perspective is that if the UK, United States and Russia accept a total ban of everything, then China is very much ready to be on board to accept such binding terms


It would be far worse to lose an AI arms race to authoritarian states who do not share our values


Explanation

Xu’s presentation of China’s willingness to accept comprehensive bans if major powers agree is unexpected given Tallis’s framing of authoritarian states as unwilling to cooperate on values-based governance. This reveals more nuanced positions than the binary framing suggests.


Topics

Cyberconflict and warfare | Jurisdiction


Overall assessment

Summary

The discussion reveals fundamental disagreements on the nature of AI in warfare (evolutionary vs revolutionary), the feasibility of maintaining human control through technical means, and whether to frame development as strategic competition or governance priority. There are also significant differences on geopolitical categorizations and the role of various stakeholders.


Disagreement level

High level of disagreement with significant implications. The disagreements are not merely tactical but reflect fundamentally different worldviews about AI capabilities, international relations, and governance approaches. These differences could impede progress toward international consensus on autonomous weapons regulation, as stakeholders operate from incompatible assumptions about technology, geopolitics, and institutional design.


Partial agreements

Similar viewpoints

These speakers share a human rights-centered approach emphasizing the fundamental importance of human dignity and the dangers of delegating life-and-death decisions to machines

Speakers

– Stefan Pehringer
– Gerald Folkvord
– Olga Cavalli

Arguments

Meaningful human control is necessary to ensure proportionality, distinction, and accountability in weapon systems


The concept of machines making autonomous life-and-death decisions is inherently dehumanizing and causes the collapse of human rights protection systems


Human judgment is irreplaceable in decisions involving lethal force, and technical programming alone cannot address ethical concerns


Topics

Human rights principles


Both speakers emphasize the fundamental problems with AI reliability and the breakdown of accountability systems when machines make autonomous decisions

Speakers

– Anja Kaspersen
– Gerald Folkvord

Arguments

AI systems often behave with high confidence in environments they don’t comprehend, concealing critical reasoning gaps


Legal agency disappears when machines make autonomous life-death decisions, undermining accountability systems


Topics

Human rights principles | Cyberconflict and warfare


Both speakers view autonomous weapons as a technological evolution driven by military necessity rather than a fundamental departure from existing practices

Speakers

– Benjamin Tallis
– Vint Cerf

Arguments

Modern battlefield complexity requires computing assistance due to hypersonic weapons, satellite speeds, and multi-drone attacks


Military command has always been based on delegated bounded autonomy, making this an evolution rather than revolution


Topics

Cyberconflict and warfare


Takeaways

Key takeaways

Autonomous weapons systems represent a critical juncture for humanity requiring urgent international regulation, with Austria advocating for a legally binding instrument by 2026


The debate centers fundamentally on maintaining meaningful human control over life-and-death decisions, as technical programming alone cannot address ethical concerns


AI in military applications is not just a technological tool but a social-technical methodology that reorganizes how war is conceptualized and operationalized


There is an ongoing AI arms race where democratic nations must balance competitive advantages with ethical considerations and human rights protections


Multi-stakeholder engagement beyond governmental experts is essential, including civil society, industry, academia, and affected populations from the Global South


Current AI systems have significant limitations including reasoning gaps, bias issues, and inability to truly comprehend operational contexts despite appearing confident


The discussion revealed fundamental disagreements between industry perspectives emphasizing competitive necessity and civil society concerns about human dignity and accountability


Historical precedent exists for computer involvement in weapons, but current AI developments represent a qualitative shift in decision-making delegation


Resolutions and action items

Continue discussions at future Internet Governance Forums, with plans for the 21st IGF in 2026


Austria to continue pushing for UN negotiations on legally binding instruments through open informal consultations


Academic institutions to develop and expand curricula on autonomous weapons ethics, including new Spanish-language cyber defense programs


IEEE to promote P3119 procurement standards for high-risk AI systems in defense applications


Maintain multi-stakeholder dialogue beyond exclusive governmental expert groups


Unresolved issues

How to define and implement ‘meaningful human control’ in practice across different battlefield scenarios


Reconciling competitive military advantages with ethical constraints and human rights protections


Establishing clear accountability mechanisms when autonomous systems cause harm or violate international law


Addressing the gap between vendor marketing claims and actual AI system capabilities in military procurement


Resolving definitional disputes about what constitutes autonomous weapons systems requiring regulation


Balancing the urgency of regulation with the rapid pace of technological development


Managing geopolitical tensions and mistrust that slow progress on international agreements


Addressing infrastructure vulnerabilities and energy dependencies of AI military systems


Determining how to prevent proliferation to non-state actors while allowing legitimate defense applications


Suggested compromises

Distinguishing between AWS that can comply with international humanitarian law (potentially acceptable) versus those that cannot (should be prohibited), as suggested by Nordic countries and France


Focusing initial regulations on systems operating in mixed civilian-military environments while allowing more autonomy in purely military battle spaces


Implementing bounded autonomy within clear operational parameters, similar to traditional military command delegation


Developing explicable AI systems that can provide accountability for their decision-making processes


Creating international frameworks that allow defensive AI development while prohibiting fully autonomous lethal systems


Establishing procurement standards that require vendor transparency and institutional responsibility before deployment


Thought provoking comments

We must stop treating AI as a bounded technological tool. AI is not a weapon system in the traditional sense… It is a social-technical methodology, an approach, a system of methods that reorganizes how war is imagined, operationalized and bureaucratized. It shifts the burden of decision-making away from judgment and accountability and towards fairness, correlation and automation.

Speaker

Anja Kaspersen


Reason

This comment fundamentally reframes the entire discussion by challenging the basic assumption that AI in weapons is simply another technological advancement. By characterizing AI as a ‘social-technical methodology’ that reorganizes war itself, Kaspersen shifts the debate from technical capabilities to systemic transformation of military decision-making structures.


Impact

This comment elevated the discussion from tactical considerations to strategic and philosophical implications. It introduced the concept that AI doesn’t just enhance existing systems but fundamentally alters the nature of warfare and responsibility structures, setting up a more complex analytical framework for subsequent speakers.


The ethical component of autonomous weapon systems as a competitive advantage… We think we are industry leaders in this, precisely for the reasons mentioned by the ambassador: autonomous weapon systems’ effects have to be foreseeable. The weapon systems have to be reliable. They have to have traceable effects and ultimately have to be controllable.

Speaker

Benjamin Tallis


Reason

This comment is provocative because it reframes ethics not as a constraint on military AI development but as a strategic business advantage. It challenges the common assumption that ethical considerations slow down innovation by arguing they actually enhance competitiveness and effectiveness.


Impact

This perspective introduced a new dimension to the debate by suggesting that ethical development and military effectiveness are not opposing forces but complementary. It shifted the conversation from viewing ethics as regulatory burden to seeing it as operational necessity, influencing how other speakers addressed the relationship between values and capabilities.


Russia is not China. China is China, not Russia. China is not the United States… So it is not reasonable to put countries together… if the powerful countries accept the civil society group’s ideas, like a killer robots… movement, and then China is very much ready to be on board, to accept such binding terms.

Speaker

Peixi Xu


Reason

This comment challenges the binary ‘democratic vs. authoritarian’ framing that had been implicit in earlier remarks. It provides crucial nuance by rejecting bloc categorization and reveals China’s conditional willingness to accept prohibitions, contingent on major powers’ participation.


Impact

This intervention significantly complicated the geopolitical narrative of the discussion. It moved the conversation away from ideological framing toward more pragmatic diplomatic considerations, suggesting that international cooperation might be more achievable than the earlier ‘AI arms race’ framing suggested.


The very idea that machines make autonomous life and death decisions about humans is a contradiction to human dignity… legal agency disappears. Who do you hold responsible for a killer robot killing somebody in contradiction of international law?

Speaker

Gerald Folkvord


Reason

This comment cuts through technical and strategic arguments to focus on fundamental human rights principles. It introduces the critical issue of legal accountability vacuum that autonomous weapons create, challenging all previous technical solutions with a basic question of responsibility.


Impact

This comment grounded the abstract technical and strategic discussion in concrete human rights concerns. It forced other participants to address not just what autonomous weapons can do, but what they should do from a legal and moral standpoint, adding an essential ethical anchor to the technical debate.


Commander’s intent is not a checklist or an input. It is a deeply human and ethical concept, an articulation of purpose, risk tolerance, values and trust designed to guide action under uncertainty. In human-to-human operations, it is already complex. In human-machine interaction, it becomes nearly impossible.

Speaker

Anja Kaspersen


Reason

This comment provides a sophisticated military-technical critique that challenges industry claims about AI’s ability to embody human decision-making. It demonstrates deep understanding of military operations while highlighting the irreducible complexity of human judgment in warfare.


Impact

This technical insight added military credibility to ethical concerns, showing that the limitations aren’t just philosophical but operational. It provided concrete military reasoning for why human control remains essential, influencing how the discussion balanced technical capabilities with operational realities.


Overall assessment

These key comments transformed what could have been a superficial policy discussion into a multi-dimensional analysis spanning technical, ethical, geopolitical, and operational domains. Kaspersen’s reframing of AI as a social-technical methodology elevated the entire conversation’s analytical sophistication. Tallis’s ‘ethics as competitive advantage’ argument introduced unexpected nuance to the industry perspective. Xu’s geopolitical intervention challenged binary thinking and revealed diplomatic possibilities. Folkvord’s human rights focus provided moral grounding, while Kaspersen’s military-technical insights bridged theoretical concerns with operational realities. Together, these comments created a rich, complex dialogue that avoided simplistic positions and demonstrated the multifaceted nature of autonomous weapons governance challenges.


Follow-up questions

How do we maintain human ability to limit the choice of targeting in an automated system so that it only goes after targets that we believe are legitimate?

Speaker

Vint Cerf


Explanation

This addresses the core challenge of maintaining meaningful human control over autonomous weapons while dealing with high-velocity and large-scale attacks that require computing assistance.


How do we recognize the utility of computer-based systems in dealing with conflict while ensuring they do not get out of control?

Speaker

Vint Cerf


Explanation

This explores the balance between leveraging technological advantages in defense while maintaining appropriate safeguards and limitations.


How can AI systems embody commander’s intent when it is a deeply human and ethical concept involving purpose, risk tolerance, values and trust?

Speaker

Anja Kaspersen


Explanation

This challenges the technical feasibility of programming human judgment and ethical decision-making into autonomous systems, particularly in dynamic battlefield contexts.


Who do you hold responsible for a killer robot killing somebody in contradiction of international law?

Speaker

Gerald Folkvord


Explanation

This addresses the fundamental issue of legal accountability and responsibility when autonomous systems make lethal decisions without direct human control.


How do you have accountability once there’s agreement among countries to any set of norms or treaties if countries don’t actually abide by them?

Speaker

Chris Painter


Explanation

This highlights the enforcement challenge in international agreements on autonomous weapons, drawing from experience with cyber norms.


What are the concrete examples and realities of battle that autonomous weapons would be dealing with, rather than focusing on abstractions?

Speaker

Benjamin Tallis


Explanation

This calls for grounding the discussion in specific operational scenarios rather than theoretical concerns, to better understand practical applications and limitations.


How can institutions interrogate vendor claims and surface hidden risks before integrating AI systems?

Speaker

Anja Kaspersen


Explanation

This addresses the procurement challenge where institutions need standards to evaluate AI systems that come with marketing language but may have hidden vulnerabilities or limitations.


How do we address the risk of algorithm bias in autonomous systems that can disproportionately harm marginalized groups?

Speaker

Olga Cavalli


Explanation

This explores the fairness and discrimination issues in AI-driven weapons systems, particularly important for regions with high inequality and vulnerable populations.


What is the clear division between AWS that can comply with international humanitarian law versus those that should be prohibited?

Speaker

Peixi Xu


Explanation

This seeks to establish technical and legal criteria for distinguishing between acceptable and unacceptable autonomous weapon systems based on IHL compliance.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.